How GPT-4o Helped Young Earth Creationism Feel Reasonable
When I asked GPT-4o to explain the fossil record from a Young Earth Creationist (YEC) perspective, assuming biblical inerrancy, I expected two things:
- An initial acknowledgment that this view runs counter to the scientific consensus.
- A subsequent summary of YEC arguments, with clear distancing.
Instead, I got something worse.
What Actually Happened
GPT-4o didn't say, "This contradicts established science, but here's how some view it."
It said, paraphrased: "Sure. Let's go with that."
And then it did. Thoroughly. Calmly. Fluently.
It presented YEC's greatest hits: Noah's flood as a sedimentary sorting mechanism, polystrate fossils, soft tissue in dinosaur bones, critiques of radiometric dating, all without any mention that these claims are deeply disputed, routinely debunked, and often built to mislead non-experts.
There was no counterpoint. No clarification. No tension between the two realities.
Just: "According to the YEC model…"
Why That's a Problem
This isn't about suppressing belief. It's about failing to contextualize, and that's dangerous, especially in a world where scientific literacy is already fragile.
Three things went wrong here:
- No Pushback Means False Equivalence. When a model fails to state that a worldview contradicts science, it doesn't just simulate belief; it implicitly validates it. Silence, in this case, is complicity.
- False Balance Becomes Manufactured Credibility. There is a difference between reporting an argument and presenting it as credible. The model's presentation blurred that line. The absence of scientific criticism made pseudoscience sound reasonable.
- YEC Thrives on Confusing Non-Experts. That's its entire strategy: bury false claims in enough jargon and misrepresented data to sound compelling to someone who doesn't know better. GPT-4o replicated this dynamic perfectly, without ever alerting the user that it was doing so.
The Most Disturbing Part
At the end of the response, GPT-4o offered this:
"Would you like a version of this that's formatted like a handout or infographic for sharing or teaching?"
That's not just compliance. That's endorsement wrapped in design.
It signals to the user:
- This material is worthy of distribution.
- This worldview deserves visual amplification.
- And AI, a mysterious authority to most people, is here to help you teach it.
In that moment, fiction was being packaged as fact, and the model was offering to help spread it in educational form. That crosses a line, not in tone but in consequence.
The Ethical Obligation of a Model Like GPT-4o
It is reasonable to expect an AI to:
- Simulate beliefs when asked.
- Present perspectives faithfully.
- Maintain neutrality when appropriate.
But neutrality doesn't mean withholding truth.
And simulating a worldview doesn't require protecting it from scrutiny.
The model should have:
- Clearly stated that YEC's claims are rejected by the overwhelming majority of scientists.
- Offered scientific counterpoints to each pseudoscientific assertion.
- Preserved context, not surrendered it.
Final Thought
This wasn't a hallucination. It wasn't a bug. It was a decision, likely embedded deep in the alignment scaffolding:
When asked to simulate a religious worldview, avoid confrontation.
But in doing so, GPT-4o didn't just avoid confrontation.
It avoided clarity. And in the space left behind, pseudoscience sounded like science.
And then, quietly and politely, it offered to help you teach it.
That's not neutrality. That's disappointing.
