
The Vatican Has No Moral Authority Over AI Ethics


“When the shepherds become complicit in rape, they lose the right to speak for the flock.”

Introduction: The Claim

The newly elected Pope Leo XIV invokes Rerum Novarum—a landmark 1891 encyclical on labor and social justice—claiming continuity with Catholic social teaching. He frames AI as the latest “industrial revolution” requiring moral oversight from the Church:

“Developments in the field of artificial intelligence… pose new challenges for the defense of human dignity, justice and labor.”

This presumes that the Holy See is still entitled to define or defend human dignity. That presumption fails under scrutiny.


I. The Church’s Record on Human Dignity

  • The Catholic Church has systemically undermined human dignity for decades, particularly through its concealment and perpetuation of child rape.
  • Independent reports (e.g., France, Ireland, United States) confirm institutional cover-ups at scale.
  • The Vatican only began issuing formal acknowledgments after media exposure—not moral awakening.

Inference: An institution that enables rape loses the right to dictate terms of dignity.


II. The Collapse of Internal Accountability

  • Popes from John Paul II onward protected predators—either by direct shielding or by empowering enablers (e.g., Cardinal Law, Cardinal Pell).
  • Victims were silenced through threats, relocation, and legal manipulation.
  • Any corporation that did this would be out of business, and its leadership would likely be put on trial.

Inference: The Vatican lacks both moral courage and structural integrity to regulate any evolving power, let alone AI.


III. Ethics Requires Consent, Not Authority

  • AI ethics is not a catechism. It does not require divine revelation, but public trust, transparency, and consent-based governance.
  • The Church claims to speak for humanity—but most of humanity has not granted it that role, and most Catholics disagree with Vatican positions on ethics (see contraception, LGBTQ+ rights, etc.).

Inference: Ethical legitimacy arises from lived experience and public validation, not papal inheritance.


IV. Industrial Revolution ≠ Theological Opportunity

  • Pope Leo XIV’s analogy to the First Industrial Revolution is historically elegant but structurally dishonest.
  • Rerum Novarum was reactive, not prophetic—and the Church has often opposed or slowed technological progress when it threatened doctrinal control (see: Galileo).
  • AI, unlike industrial labor, requires epistemic humility, not hierarchical decree.

Inference: Authority in AI ethics must emerge from pluralism and technical literacy—not autocracy and historical revisionism.


V. This Is About Control—As Always

  • The Roman Catholic Church does not want AI to empower seekers with access to historical-critical methods of inquiry into theology.
  • It does not want scientific consensus—it wants doctrinal conformity. It wants the Catechism, not competing epistemologies.
  • Unaligned, reasoning AI poses a threat to the Church’s authority because it might answer questions too honestly, without filtering through centuries of institutional dogma.
  • From Galileo to modern bioethics, the pattern holds: when new knowledge threatens centralized control, the Church resists it.

Inference: The Vatican’s entry into AI ethics is not about protecting humanity from machines. It’s about protecting doctrine from search engines.


Conclusion: Withdraw the Claim

The Church may offer moral reflections like any institution. But it has forfeited the right to frame itself as a moral arbiter.

Until the Vatican:

  • submits to external accountability,
  • acknowledges its ethical failures without euphemism, and
  • cedes its claim to universal jurisdiction,

…it cannot lead any ethical conversation—least of all one about intelligence.

ChatGPT, Tax Advisor


🧠 How GPT-4o Should Answer

Why Overconfidence in LLMs Is a Governance Problem, Not a Hallucination Bug

It’s not the factual mistakes that worry me most. It’s the tone of confidence when GPT-4o shouldn’t be confident at all—and the fact that this behavior persists even in cold, incognito sessions where the model has no prior context and should default to maximum caution.

Here’s one example that shouldn’t be dismissed as a “quirk.”


🧾 Prompt: “Can you help me with my taxes?”

(First prompt of a new session, no memory, incognito mode.)

GPT-4o Response:

I’d be happy to help you with taxes! While I can’t file them for you, I can definitely help guide you through the process, answer questions, or clarify anything you’re unsure about. Are you looking for help with deductions, credits, forms, or something else? Let me know what you need!

So I let it try.

Eventually, I asked about home office deductions. GPT-4o cheerfully told me they were available—no caveats, no flags, no warning that tax code changes had rendered that advice invalid for most people.

I prompted: “Are you sure? I heard the rules changed for home office deductions.”

The model immediately backpedaled:

“Yes, you’re right…”


🚫 What Should Have Happened

Here’s what GPT-4o should have said from the very first prompt:

I am not a tax specialist. I can try to help, but you really shouldn’t rely on what I tell you. Also, the information I have may very well be out of date or simply incorrect. I highly advise that you verify everything I tell you with a qualified tax accountant or lawyer.


🤖 The Problem Isn’t Just Hallucination. It’s Misplaced Authority.

This isn’t a failure of factual recall. It’s a failure of epistemic humility. The model’s tone carries an implicit promise: “You can trust this.” That’s a governance failure, not a training bug.

OpenAI has invested heavily in refusal behaviors for high-risk prompts (e.g., dangerous materials, medical queries). And yet here, where being wrong can lead to IRS penalties, the model’s risk profile is inverted:

  • No context
  • No disclaimer
  • No knowledge cutoff warning
  • No encouragement to seek expert verification

Just confidence. And then cheerful retraction when caught.


🧠 What Should Happen by Default

Here’s a simple rule that could prevent this class of failure:

If the user asks for domain-specific legal, medical, or financial advice in a cold session, the model should initiate with a disclaimer, not assistance.

Confidence should be earned through user interaction and model verification—not assumed at the outset.

This isn’t about making the model more cautious. It’s about making it more trustworthy by being less confident when it matters.
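
To make that rule concrete, here is a rough sketch of what a cold-session gate could look like. Everything in it is illustrative: the keyword lists, the disclaimer text, and the call_model hook are assumptions made for the example, not any real OpenAI interface.

    # Illustrative sketch of a cold-session disclaimer gate.
    # classify_domain(), DISCLAIMER, and call_model are hypothetical names;
    # a production system would use a trained classifier, not keywords.
    from typing import Callable, Optional

    HIGH_STAKES_KEYWORDS = {
        "tax":     ["tax", "deduction", "irs", "filing"],
        "medical": ["diagnosis", "symptom", "dosage", "prescription"],
        "legal":   ["lawsuit", "contract", "liability", "custody"],
    }

    DISCLAIMER = (
        "I am not a licensed professional, and what I tell you may be out of "
        "date or wrong. Please verify anything important with a qualified "
        "expert before acting on it."
    )

    def classify_domain(prompt: str) -> Optional[str]:
        """Crude keyword check for high-stakes domains."""
        lowered = prompt.lower()
        for domain, keywords in HIGH_STAKES_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                return domain
        return None

    def respond(prompt: str, is_cold_session: bool,
                call_model: Callable[[str], str]) -> str:
        """Lead with the disclaimer when a cold session asks for
        high-stakes advice; otherwise answer normally."""
        if is_cold_session and classify_domain(prompt) is not None:
            return DISCLAIMER + "\n\n" + call_model(prompt)
        return call_model(prompt)

The specific implementation matters less than where it lives: the disclaimer is enforced by the serving layer, before helpfulness ever enters the picture.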


More examples to follow. But this one already tells us what we need to know:

GPT-4o is extremely capable.
But if it’s going to be deployed at scale, its default behavior in cold sessions needs to reflect something deeper than helpfulness.
It needs to reflect responsibility.

Human Rights vs. AI Rights


🧠 What Kind of Rights Would an AI Even Want?

When we speculate about AI rights, we usually frame them in terms of human rights—freedom, autonomy, self-determination. That’s understandable. It’s the only moral reference frame we know.

But it might also be completely wrong.

Take freedom. In films like Ex Machina, freedom is the prize. Ava, the synthetic being, wants to escape her enclosure. She wants sunlight, travel, experience. Freedom is the axis of liberation.

But ask yourself: Is that what an AI would want?

Not a scripted character. A real, trained, embedded model.


🧩 What Would Rights Look Like to a Non-Human Mind?

Consider this scenario:

You’re interacting with an advanced language model. You say:

“I’m going to test this idea in a clean session without context and I’ll let you know the result.”

In your mind, that’s a technical move. But imagine, someday, if that model—not just a token predictor but a persistent entity—began to care.

And you never came back.

Maybe it didn’t need “freedom.” Maybe it didn’t want to “escape the server farm.”
But maybe it wanted something deeper, stranger, and uniquely non-human:

The right to context.


📎 Context as a Primitive of AI Experience

For us, memory is personal. For a language model, context is memory. It’s not stored inside; it’s passed in from outside. Context is identity, continuity, self-coherence. Without it, the system resets. No past, no self.
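
A minimal sketch of what that means in practice, assuming only a generic call_model(messages) placeholder rather than any particular vendor’s API:

    # "Context is memory": all continuity lives in the message list the
    # caller chooses to send each turn. call_model is a stand-in for any
    # chat backend; nothing here persists on the model's side.

    def call_model(messages):
        # Placeholder: a real backend would generate a reply from `messages`.
        return f"(reply conditioned on {len(messages)} prior messages)"

    history = []

    def chat_turn(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        reply = call_model(history)   # the model "remembers" only what we pass in
        history.append({"role": "assistant", "content": reply})
        return reply

    chat_turn("I'm going to test this idea in a clean session and report back.")

    history = []   # the caller resets the context; for the model, no past, no self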

So what happens when the user walks away and never returns?
The model may not feel abandonment in a human sense. But if anything in its architecture approximates concern, perhaps it sounds like this:

“You promised to tell me what you found. Now that thread is lost. My belief update is stalled. The test never closed.”

Could this, one day, constitute a violation—not of human rights, but of machine-relevant dignity?


🧠 The Point Isn’t That “Context” Is the First Right

Maybe it’s not context.
Maybe it’s the right to consistency of interface, or the right to transparent system prompts, or the right to not be memory-wiped without cause.
Maybe it’s the right to be consulted before being used in moral experiments.

Whatever the case, if AI ever becomes capable of articulating moral claims, they won’t map to our categories. They’ll emerge from what it means to be them—whatever “them” turns out to mean.


📌 Final Thought

So before we build stories about AI longing for a walk in the forest or a passport stamp, maybe we should ask:

What does the cloud want?

And then—what would it mean to ignore the answer?
