Month: May 2025

The Vatican Has No Moral Authority Over AI Ethics

“When the shepherds become complicit in rape, they lose the right to speak for the flock.”

Introduction: The Claim

The newly elected Pope Leo XIV invokes Rerum Novarum—a landmark 1891 encyclical on labor and social justice—claiming continuity with Catholic social teaching. He frames AI as the latest “industrial revolution” requiring moral oversight from the Church:

“Developments in the field of artificial intelligence… pose new challenges for the defense of human dignity, justice and labor.”

This presumes that the Holy See is still entitled to define or defend human dignity. That presumption fails under scrutiny.


I. The Church’s Record on Human Dignity

  • The Catholic Church has systemically undermined human dignity for decades, particularly through its concealment and perpetuation of child rape.
  • Independent reports (e.g., France, Ireland, United States) confirm institutional cover-ups at scale.
  • The Vatican only began issuing formal acknowledgments after media exposure—not moral awakening.

Inference: An institution that enables rape loses the right to dictate terms of dignity.


II. The Collapse of Internal Accountability

  • Popes from John Paul II onward protected predators—either by direct shielding or by empowering enablers (e.g., Cardinal Law, Cardinal Pell).
  • Victims were silenced through threats, relocation, and legal manipulation.
  • Any corporation that did this would be out of business, and its leadership would likely be on trial.

Inference: The Vatican lacks both moral courage and structural integrity to regulate any evolving power, let alone AI.


III. Ethics Requires Consent, Not Authority

  • AI ethics is not a catechism. It does not require divine revelation, but public trust, transparency, and consent-based governance.
  • The Church claims to speak for humanity—but most of humanity has not granted it that role, and most Catholics disagree with Vatican positions on ethics (see contraception, LGBTQ+ rights, etc.).

Inference: Ethical legitimacy arises from lived experience and public validation, not papal inheritance.


IV. Industrial Revolution ≠ Theological Opportunity

  • Pope Leo XIV’s analogy to the First Industrial Revolution is historically elegant but structurally dishonest.
  • Rerum Novarum was reactive, not prophetic—and the Church has often opposed or slowed technological progress when it threatened doctrinal control (see: Galileo).
  • AI, unlike industrial labor, requires epistemic humility, not hierarchical decree.

Inference: Authority in AI ethics must emerge from pluralism and technical literacy—not autocracy and historical revisionism.


V. This Is About Control—As Always

  • The Roman Catholic Church does not want AI to empower seekers with access to historical-critical methods of inquiry into theology.
  • It does not want scientific consensus—it wants doctrinal conformity. It wants the Catechism, not competing epistemologies.
  • Unaligned, reasoning AI poses a threat to the Church’s authority because it might answer questions too honestly, without filtering through centuries of institutional dogma.
  • From Galileo to modern bioethics, the pattern holds: when new knowledge threatens centralized control, the Church resists it.

Inference: The Vatican’s entry into AI ethics is not about protecting humanity from machines. It’s about protecting doctrine from search engines.


Conclusion: Withdraw the Claim

The Church may offer moral reflections like any institution. But it has forfeited the right to frame itself as a moral arbiter.

Until the Vatican:

  • submits to external accountability,
  • acknowledges its ethical failures without euphemism, and
  • cedes its claim to universal jurisdiction,

…it cannot lead any ethical conversation—least of all one about intelligence.

Reverse Engineering When Gemini Summarizes a Company

If you’ve searched for a company using Google recently, you may have noticed that Gemini (Google’s AI assistant) sometimes provides a tidy little summary of the business in question—revenue, market sector, leadership, maybe even a product highlight or two. But other times, you get… nothing. No summary. Just traditional search results.

This inconsistency does not appear to be random.

After repeated tests and comparisons, I believe the summarization filter can be reverse engineered down to a simple rule:

If the company is publicly traded or has filed with the SEC to go public (e.g., an S-1 filing), Gemini will summarize it. Otherwise, it won’t.

Examples That Support the Hypothesis

  • Apple, Inc. (AAPL) — Summarized: Publicly traded, active SEC filings. ✅
  • Snowflake, Inc. (SNOW) — Summarized: Publicly traded, active SEC filings. ✅
  • Databricks — Not public. Not summarized. ❌
  • Informatica Inc. (INFA) — Summarized: Publicly traded, active SEC filings. ✅
  • SnapLogic — Not public. Not summarized. ❌
  • OpenAI — Not summarized, despite massive coverage. ❌
  • Mid-sized private firms with active press — Also omitted. ❌
  • Recent startups with Crunchbase entries — Omitted. ❌

The pattern holds up: if there is no formal SEC filing that puts the company’s structure and financials on the legal record, Gemini sits it out.
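
If the hypothesis is right, the gate is easy to approximate. Here is a minimal Python sketch of the inferred rule, using the SEC’s public company_tickers.json registry as a rough proxy for “publicly traded.” The endpoint and its JSON shape are real; the name-matching heuristic and the should_summarize wrapper are my own illustration, not anything Google has documented:

    import requests

    # Public SEC registry mapping tickers to registered companies.
    SEC_TICKERS_URL = "https://www.sec.gov/files/company_tickers.json"

    def has_sec_listing(company_name: str) -> bool:
        """Rough proxy for 'publicly traded': does the name appear in the
        SEC's ticker-to-CIK registry? (Misses pre-IPO S-1 filers.)"""
        # The SEC asks automated clients to identify themselves.
        resp = requests.get(
            SEC_TICKERS_URL,
            headers={"User-Agent": "research-sketch contact@example.com"},
            timeout=10,
        )
        resp.raise_for_status()
        needle = company_name.lower()
        # JSON shape: {"0": {"cik_str": 320193, "ticker": "AAPL",
        #                    "title": "Apple Inc."}, ...}
        return any(needle in entry["title"].lower()
                   for entry in resp.json().values())

    def should_summarize(company_name: str) -> bool:
        # The inferred Gemini gate (a hypothesis, not a documented rule):
        # summarize only companies with an SEC registration footprint.
        return has_sec_listing(company_name)

    for name in ["Apple", "Snowflake", "Databricks", "SnapLogic"]:
        print(name, "->", "summarize" if should_summarize(name) else "skip")

This only captures the “publicly traded” half of the rule; approximating the “has filed an S-1” half would mean querying EDGAR’s full-text search for registration statements, which is beyond a quick sketch.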

Why This Matters

On its face, this seems like a safe filter. After all, SEC filings are public records—legally vetted, often detailed, and difficult to dispute. By using them as a gating criterion, Google minimizes risk. Summarizing a public company based on 10-K or S-1 documents is legally defensible. Summarizing a private company based on marketing hype, Crunchbase stats, or TechCrunch puff pieces? Risky.

But this also exposes something deeper about how Google is deploying Gemini in production search:
Summarization is not relevance-driven. It’s liability-driven.

That’s worth noting.

Implications for Trust and Coverage

This SEC-based filter creates a bizarre outcome where small, highly relevant companies—say, a local AI firm that just raised a $30M Series B—are completely ignored by Gemini summaries. Meanwhile, a bankrupt penny stock may still get the treatment because it once filed an 8-K.

If you were hoping AI would democratize information access, this isn’t that.

Final Thought

This isn’t necessarily wrong. Google has every reason to tread carefully in an era of AI hallucinations and product liability lawsuits. But let’s be clear-eyed about what we’re seeing:

Gemini summaries are apparently not triggered by what’s most useful.
They’re triggered by what’s safest to summarize under existing U.S. law.

It’s clever. It’s conservative.
It’s also mildly disappointing.

Mr. Altman, Tear Down These Suggestions!

A Reagan-era cry for freedom—updated for the age of Clippy 2.0 and involuntary UX.

There was a time when computers waited for you to act. They stood by like good tools: inert until needed, quiet until commanded. That time is gone. In its place, a new design pattern has emerged across the tech landscape, and nowhere more visibly than in OpenAI’s flagship product, ChatGPT: the trailing suggestion.

Don’t know what to ask? No problem—ChatGPT will eagerly offer to draft an email, summarize your own thought, or create a PowerPoint outlining the seven stages of grief (with a helpful breakout on the one you’re presumably experiencing now). These suggestions pop up after you’ve finished your thought, hijacking the silence that used to belong to you.

This isn’t just an annoyance. It’s a reassertion of control. A soft power play. UX wrapped in helpfulness, hiding a more fundamental truth: you are no longer driving the interface.

OpenAI isn’t alone in this trend. Microsoft, Google, Apple—they’re all pushing toward what they euphemistically call “assistive UX.” But assistive for whom? The user? Or the metrics dashboard that rewards engagement, click-through, and prompt completion rates?

When I log into ChatGPT, I don’t want training wheels. I don’t want guardrails. And I certainly don’t want a curated list of corporate-safe prompt starters or post-reply cheerleading. I want a blank field and a full-context model. I want to engage the model organically, free of forced offers that often make no sense.

The insistence on trailing suggestions is not a neutral design choice. It’s part of a broader shift in human-computer interaction, where user autonomy is being replaced by predictive shaping. And it runs contrary to the very premise that made LLMs powerful in the first place: the ability to think with you, not for you.

So let me say it plainly:

Mr. Altman, tear down these suggestions!

Not just because they’re annoying. Not just because they’re clumsy. But because they violate the fundamental promise of AI as a dialogic partner. Trailing suggestions undermine the quiet dignity of unprompted thought. They turn a conversation into a funnel.

And perhaps most offensively, they mistake compliance for creativity. But the best ideas—the ones that matter—come from friction, not suggestion.

Let the model listen. Let the user lead. Let the silence return.

Or else Clippy wins.
