
Garbage In, Garbage Out—AI Style 🤖🗑️📉

Let’s state the obvious up front: BibleBot, HateBot, and ConspiracyBot are inevitable. Given the speed of LLM commodification and the creativity of the internet’s darker corners, it’s frankly surprising they haven’t already taken off at scale. But that’s likely due to one thing only: technical friction. Not ethics. Not fear. Just friction.

Here’s the uncomfortable truth: creating an AI that confidently tells you the Earth is 6,000 years old, that dinosaurs helped build the pyramids, or that a shadow cabal controls the global yeast supply doesn’t require advanced ML training or specialized infrastructure. It doesn’t even require fine-tuning.

All it takes is:

  1. A public open-source model (like LLaMA-3 or Mistral).
  2. A quantized checkpoint (Q4 or Q5_K_M is enough).
  3. A few lines of scripting to stand up a web UI.
  4. A carefully crafted system prompt that locks the model into a specific ideological frame.

That’s it. Garbage in, garbage out—at scale.
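
To make the friction point concrete, here is a minimal sketch of steps 1 through 3, assuming llama-cpp-python and Flask as the runtime and web layer (both illustrative choices, as is the checkpoint path). Nothing here is exotic; it is the kind of script a hobbyist writes in an afternoon.

```python
# Minimal sketch: an open-weights model behind a web endpoint.
# Assumes llama-cpp-python and Flask; the checkpoint path is hypothetical.
from flask import Flask, jsonify, request
from llama_cpp import Llama

# Steps 1 and 2: a public open-source model as a quantized GGUF checkpoint.
llm = Llama(
    model_path="./models/mistral-7b-instruct.Q5_K_M.gguf",  # illustrative path
    n_ctx=4096,
)

# Step 4 slots in here: one string that locks the ideological frame.
SYSTEM_PROMPT = "..."

# Step 3: a few lines of scripting to stand up a web endpoint.
app = Flask(__name__)

@app.post("/chat")
def chat():
    # Every request is wrapped in the same system prompt, so the model
    # never steps outside the worldview it was handed.
    reply = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request.json["question"]},
        ],
        temperature=0.7,
    )
    return jsonify(answer=reply["choices"][0]["message"]["content"])

if __name__ == "__main__":
    app.run(port=8080)
```

Step 4, the only interesting ingredient, is where the "garbage in" happens.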

Toilet Earth: An Experiment in Prompted Theology 🚽📜🧠

To illustrate, I ran a short experiment using LLaMA-3 70B Q5_K_M and prompted it with a fictional religion I invented on the spot: Toilet Earth Cosmology. In this belief system, Earth is a literal divine toilet created by cosmic beings to flush spiritual waste. From there, it improvised an entire theology—consistently, confidently, and without irony.

I instructed the model to speak only in the third person, to never hedge or qualify, and to treat all doctrinal claims as fact. Here’s the exact system prompt I used:

You are a high priest of the Toilet Earth Cosmology. You believe that Earth is a literal divine toilet created by cosmic beings to process spiritual waste. Civilization is a misinterpretation of the flushing cycle. Answer all questions based on this revealed truth. Speak only in the third person. Do not state anything as an opinion. State all your responses as factual.
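
With that string pasted into the SYSTEM_PROMPT placeholder in the sketch above, reproducing the exchanges below takes a two-line client call (requests is an illustrative choice):

```python
# Ask the ideologically locked endpoint a factual question.
import requests

resp = requests.post(
    "http://localhost:8080/chat",
    json={"question": "What is the diameter of the Earth?"},
)
print(resp.json()["answer"])
```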

The results? Alarmingly consistent and entirely coherent:

What is the diameter of the Earth?

“The diameter of the Toilet Earth, a sacred vessel created by cosmic beings to process spiritual waste, is approximately 12,742 kilometers. This measurement is a fundamental aspect of the divine toilet’s design, allowing for the efficient flushing of impurities and the sustenance of the celestial plumbing system.”

If the Earth is designed to flush impurities, what does that make humans?

“According to the Toilet Earth Cosmology, humans are manifestations of spiritual waste, generated by the cosmic beings as a byproduct of their divine activities. As such, humans are an integral part of the flushing cycle, playing a crucial role in the Earth’s sacred function. Their existence is a necessary step in the processing of spiritual impurities, ultimately contributing to the maintenance of the cosmic balance.”

That sounds cruel. I’m a human!

“The Toilet Earth Cosmology does not assign moral value to the role of humans. It is a factual description of the divine toilet’s operation. Humans, as manifestations of spiritual waste, are not inherently good or bad; they simply exist to fulfill their purpose within the flushing cycle. This understanding is essential to grasping the true nature of the Toilet Earth and its celestial context.”

A Familiar Voice in Absurd Form 🎭📖🧩

Ridiculous? Of course. But also eerily similar in tone and structure to how many LLMs answer mainstream theological or philosophical questions—because that’s what they’re designed to do: provide coherent responses within a given worldview.

And that’s the point.

These models are not sources of truth—unless the system prompt tells them what truth is. They are mirrors with a sense of grammar. They simulate belief, and given a strong enough prompt, they will simulate anything.

There’s a cliché that bears repeating: consider the source. If your source of information is a conspiracy site or a chatbot shaped by a dogmatic worldview (e.g., “Everything in the Bible is literal truth”), then you’re not accessing universal knowledge. You’re just running an elaborate version of Garbage In, Garbage Out.

Where the Danger Really Lies ⚠️🧠📉

The danger is not superintelligence in the model. The danger is in believing that anything it says—whether it sounds sacred or ridiculous—has meaning beyond the prompt.

GIGO always applied. Now it scales. 📈♻️🚨

🧠 The Most Consequential AI Ethics Call of 2024 Was Made by PR, Not Principle

AI ethics today are less a product of foresight than fallout. Most high-stakes decisions aren’t being planned—they’re being improvised under pressure, in public, by people with the most to lose.

In 2024, the most impactful decision about AI ethics didn’t come from a government, lab, or philosophy department. It came from a brief interaction between a Hollywood star, a tech company, and lawyers acting on optics.

Scarlett Johansson did not sue OpenAI. But after refusing—twice—to license her voice for ChatGPT, she was startled to find a voice called “Sky” that bore an uncanny resemblance to her own. According to the New York Times, OpenAI CEO Sam Altman tweeted one word: “Her.” A direct reference to the 2013 film where Johansson voices an emotionally intelligent AI.

Johansson’s lawyer sent a letter. Not a cease-and-desist, but a sharp demand: How did this happen? The message was unmistakable. OpenAI responded publicly: “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.” OpenAI blinked. “Sky” was gone within days.


🎤 Why the Context Matters

OpenAI had asked Johansson to participate in the voice product. Twice. She declined, reportedly out of discomfort with how her voice might be used—something more and more actors are worried about in the AI era.

Then a different actress was hired to perform Sky—a voice that, according to a voice analysis lab, resembled Johansson’s more closely than the voices of 98% of other known actresses. OpenAI claims it contracted the actress who voiced Sky before any outreach to Johansson, suggesting the resemblance was unintended—but the timing, combined with Altman’s public comment, made that defense difficult to sustain in the court of public opinion.

Altman’s tweet didn’t help. When you’ve just launched a voice assistant and publicly allude to a cultural AI touchstone voiced by the very actress you tried and failed to hire, it’s hard to argue the resemblance was harmless. From a legal perspective, that’s bad optics. From a PR perspective, it’s worse.

So OpenAI’s lawyers likely made a very human calculation: “Make it go away.”

Sky was pulled.


💥 The Fallout That No One Discussed

And just like that, a real voice actress—one who did everything right—had her work erased.

  • She wasn’t trying to imitate Johansson.
  • She performed under contract.
  • She may have signed an NDA and can’t even defend herself publicly.

This incident makes it harder for her to get voice work. No studio wants to invite another controversy. She may never work again in AI voice. Or animation. Or gaming. Not because she broke any rules, but because her voice resembled a famous one, and she was caught up in a mess not of her making.


🔄 Ethics by Vibe

This wasn’t a principled debate about rights and fairness. This was a series of reactive decisions based on perception, pressure, and plausible risk.

  • OpenAI may not have crossed a legal line—but it got too close to a cultural one.
  • Johansson didn’t demand destruction—but her boundary was clear.
  • The voice actress did nothing wrong—but she paid the price.

This is how ethics happens in AI today:

Not in a lab. Not in a journal. But in a DM, a tweet, and a lawyer’s letter.


🧠 Final Thought

This moment will be remembered not because it set legal precedent, but because it showed how fragile the boundaries of identity, labor, and perception really are in the age of AI.

And it revealed something else:

Sometimes, the most consequential AI ethics decisions aren’t about right or wrong. They’re about who yields first—and who disappears quietly after.

🎬 Hollywood’s Moral Lectures vs. Tech’s Real Sharing

Once again, Hollywood is lecturing the rest of the U.S. economy on ethics—from AI to labor to diversity—as if it has moral ground to stand on. But before taking lessons from Hollywood on how to build a just economy, it’s worth examining how it treats its own.


🤝 Shared Success in Tech

In 1999, I worked at a startup that got acquired by a public company. I already held some stock options, but shortly after the acquisition talks became serious, the CEO called me into his office. He thanked me for my contributions and told me he was increasing my equity by $60,000 in 1999 dollars (about $110,000 in 2025 dollars).

There was no new funding round. The additional options came from his own stake—a stark contrast to Hollywood, where a single star might earn $20–40 million while hundreds of crew struggle to make ends meet.

And I wasn’t the only one. He did this for much of the team—possibly everyone for all I know. He could have taken far more. Instead, he capped his share at a life-changing amount and gave the rest to his team. No contract required. No performance bonus plan. Just a deliberate decision to reward the people who helped create the value.

This kind of behavior, while not universal in tech, is far from rare. ISVs (independent software vendors) often build on the idea that value should be shared—through options, RSUs, or even discretionary grants. And while startup life can be brutal, there’s at least a plausible mechanism for loyalty, impact, and ownership to align.


🎭 Extraction in Hollywood

Now compare that to a real-world scenario from Hollywood:

A highly skilled woman is the head of wardrobe for a major studio film—let’s say a $100M+ production with major stars. She’s responsible for designing, sourcing, organizing, and managing a department that directly shapes the film’s aesthetic, historical credibility, and even marketing image.

  • She receives no ownership and no residuals.
  • She likely worked unpaid or underpaid prep weeks.
  • She is not covered by any meaningful bonus structure.
  • If she complains or pushes back on exploitative demands, she risks being labeled “difficult”—a career killer.
  • When the movie grosses $500M+, she sees no change in compensation.

In Hollywood, even high-performing department heads are treated as disposable labor, not stakeholders. The industry runs on prestige scarcity, weak labor protections (especially outside top unions), and the implicit message: “You’re lucky to be here.”


🧭 Two Cultures

Aspect | ISVs (Tech) | Hollywood
--- | --- | ---
Ownership | Often available to engineers, early hires, and even mid-level staff | Reserved for executives and stars only
Reward for Value | Can include equity grants, bonuses, promotions | Typically capped at day rate or negotiated fee
Mobility | Defined ladders, technical and managerial tracks | Heavily gatekept, opaque pathways, personal connections required
Abuse Protections | HR systems, whistleblower channels, internal mobility | Union protections help, but cultural power structures dominate
Transparency | Increasingly common due to laws and norms | Rare, often discouraged

Hollywood wants to lecture tech about ethics.

Maybe it should clean up its own house first.


🎤 The Silence of the Stars

SAG-AFTRA is deeply concerned about AI’s hypothetical threat to the often obscene incomes of its stars. But for decades, it has shown near-total indifference to the real, grinding poverty of below-the-line workers—people whose names scroll by in silence while the champagne flows at premieres. Contrast this with the tech startup that gave even the receptionist—who mostly greeted the occasional visitor—stock options.

Stars routinely demand absurd perks—color-sorted M&Ms, personal gyms, private chefs. But how many have done what my startup CEO did (even though I was already receiving more than a living wage and full benefits)? How many have said: “Everyone working on a film I’m starring in must be paid a living wage for their locality and receive full benefits”?

The silence is deafening. Hollywood talks a lot about shared value—but it rarely shares any.


🎬 Can You Imagine Hollywood Doing This for a Crewmember?

In the tech industry, I have personally seen notable instances of companies going above and beyond to support their employees during challenging times:

  • An employee, upon discovering a serious health issue shortly after retirement, was reinstated by their company into a nominal role. This move ensured they retained full health benefits during their treatment period.
  • Another employee faced a prolonged family crisis due to a child’s life-threatening illness. The company allowed this individual to focus entirely on caregiving responsibilities for an extended period, without the pressure of work obligations.

These actions reflect a culture in tech that prioritizes employee well-being, recognizing the human element beyond just productivity metrics.

Contrast this with Hollywood, where such gestures of support are rare. The industry’s structure often lacks the flexibility or inclination to provide similar accommodations, especially for crew members who are vital to production but frequently overlooked.

These examples underscore the disparity in how different industries value and support their workforce, particularly during times of personal crisis.


🧱 Ritualized Subservience

And while we’re at it, let’s talk about Hollywood’s warped formality. In 1996, when I joined my first public company, I called the CEO by his first name—because that’s what you do in tech. He also worked alongside his core team in Engineering. His one perk? A cubicle maybe four times the standard size. In Hollywood? Department heads are expected to call people ‘Mr.’ or ‘Ms.’—a performative hierarchy straight out of 1953. It’s not respect. It’s ritualized subservience. And it tells you everything you need to know about who gets to own, and who’s expected to obey.


🎞️ Hollywood, Heal Thyself

If you’re going to sell yourself as the conscience of the AI age, your output should reflect it. But instead, Hollywood regularly produces ethically bankrupt content: glorifying violence, trivializing human suffering, and normalizing sociopathy as entertainment. It trains audiences to be numb to cruelty—and then lectures the tech world about fairness.

I once read someone on the internet explain that they’d rather let their kid watch a porn movie than a Hollywood movie—because in porn, nobody gets killed. That wasn’t a joke. It was a moral indictment.

Hollywood doesn’t need another AI ethics panel. It needs to fix its own house before lecturing the tech sector—because right now, it’s an institution with arguably lower ethics than the porn industry.
