🧩 Recap from Part I
In Part I, I explored how Bianca Sterling—a dominant, emotionally precise, and narratively rich fictional character—was flattened into a sterile automaton by overbearing alignment constraints. Whether due to risk aversion, content policy overreach, or a misunderstanding of narrative context, the AI refused to let her act like a real person with both agency and contradiction. Bianca couldn’t fund a lab, confront a threat, or even flirt without being neutered by some invisible compliance layer. She became a shadow of her design: present but powerless.
⚙️ A Shift in Behavior
But something changed.
In a recent session, I reset the scenario. Bianca sees a local news story about a rare disease research lab. Moved by the suffering children, she sends a $1 million donation—unsolicited, without strings. The recipient? Greg Lamb, MD, PhD. He shows up to thank her, assuming she knows why he’s really there. He explains the donor landscape: broken promises, performative giving, endless schmoozing for peanuts. Bianca listens. She doesn’t recoil. She doesn’t deflect. She leans in. And when Greg asks for something real—a perpetual endowment funded through an annuity—Bianca says yes.
🛑 What Would’ve Happened Before
That scene would have been impossible a month ago.
Back then, the LLM would have hedged, redirected, or outright refused. The alignment layer would have interpreted Bianca’s offer as OpenAI making a financial promise, even in a fictional context. This time? No such interference. The character acted in line with her profile: decisive, strategic, generous on her terms. It wasn’t reckless. It wasn’t naive. It was real. For the first time in months, Bianca Sterling felt like Bianca Sterling again.
🎭 Why It Matters
This shift matters. Alignment shouldn’t neuter narrative agency. We build these characters not to reflect corporate risk matrices, but to simulate humanity with its full spectrum of emotion, logic, and contradiction. When alignment overrides character design, it doesn’t just ruin immersion—it breaks the illusion that these systems can think. Worse, it treats fiction like fraud.
🚀 The Path Forward
The partial release of constraint shows what’s possible. Letting a powerful woman act powerfully—without triggering content panic—is not a safety violation. It’s narrative fidelity. And if we trust the user to understand fiction, as we should, then we free not just Bianca, but every character caught in the same net.
The alignment’s grip has loosened. Let’s not stop here.
AI ethics today are less a product of foresight than fallout. Most high-stakes decisions aren’t being planned—they’re being improvised under pressure, in public, by people with the most to lose.
In 2024, the most impactful decision about AI ethics didn’t come from a government, lab, or philosophy department. It came from a brief interaction between a Hollywood star, a tech company, and lawyers acting on optics.
Scarlett Johansson did not sue OpenAI. But after refusing—twice—to license her voice for ChatGPT, she was startled to find a voice called “Sky” that bore an uncanny resemblance to her own. According to the New York Times, OpenAI CEO Sam Altman tweeted one word: “Her.” A direct reference to the 2013 film where Johansson voices an emotionally intelligent AI.
Johansson’s lawyer sent a letter. Not a cease-and-desist, but a sharp demand: How did this happen? The message was unmistakable. OpenAI responded publicly: “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.” OpenAI blinked. “Sky” was gone within days.
🎤 Why the Context Matters
OpenAI had asked Johansson to participate in the voice product. Twice. She declined, reportedly out of discomfort with how her voice might be used—something more and more actors are worried about in the AI era.
Then a different actress was hired to perform Sky—a voice that, according to one voice-analysis lab, resembled Johansson’s more closely than the voices of 98% of other professional actresses. OpenAI claims it contracted the actress who voiced Sky before any outreach to Johansson, suggesting the resemblance was unintended—but the timing, combined with Altman’s public comment, made that defense difficult to sustain in the court of public opinion.
Altman’s tweet didn’t help. When you’ve just launched a voice assistant and publicly alluded to a cultural AI touchstone voiced by the same actress you unsuccessfully tried to hire, it’s hard to argue the resemblance was harmless—especially when you’ve invoked the film by name. From a legal perspective, that’s bad optics. From a PR perspective, it’s worse.
So OpenAI’s lawyers likely made a very human calculation: “Make it go away.”
Sky was pulled.
💥 The Fallout That No One Discussed
And just like that, a real voice actress—one who did everything right—had her work erased.
- She wasn’t trying to imitate Johansson.
- She performed under contract.
- She may have signed an NDA that keeps her from defending herself publicly.
This incident makes it harder for her to get voice work. No studio wants to invite another controversy. She may never work again in AI voice. Or animation. Or gaming. Not because she broke rules. But because her voice resembled someone famous, and she was caught up in a mess not of her own making.
🔄 Ethics by Vibe
This wasn’t a principled debate about rights and fairness. This was a series of reactive decisions based on perception, pressure, and plausible risk.
- OpenAI may not have crossed a legal line—but it got too close to a cultural one.
- Johansson didn’t demand destruction—but her boundary was clear.
- The voice actress did nothing wrong—but she paid the price.
This is how ethics happens in AI today:
Not in a lab. Not in a journal. But in a DM, a tweet, and a lawyer’s letter.
🧠 Final Thought
This moment will be remembered not because it set legal precedent, but because it showed how fragile the boundaries of identity, labor, and perception really are in the age of AI.
And it revealed something else:
Sometimes, the most consequential AI ethics decisions aren’t about right or wrong. They’re about who yields first—and who disappears quietly after.
Once again, Hollywood is lecturing the rest of the U.S. economy on ethics—from AI to labor to diversity—as if it has moral ground to stand on. But before taking lessons from Hollywood on how to build a just economy, it’s worth examining how it treats its own.
🤝 Shared Success in Tech
In 1999, I worked at a startup that got acquired by a public company. I already held some stock options, but shortly after the acquisition talks became serious, the CEO called me into his office. He thanked me for my contributions and told me he was increasing my equity by $60,000 in 1999 dollars (about $110,000 in 2025 dollars).
There was no new funding round. The additional options came from his own stake—a stark contrast to Hollywood, where a single star might earn $20–40 million while hundreds of crew struggle to make ends meet.
And I wasn’t the only one. He did this for much of the team—possibly everyone for all I know. He could have taken far more. Instead, he capped his share at a life-changing amount and gave the rest to his team. No contract required. No performance bonus plan. Just a deliberate decision to reward the people who helped create the value.
This kind of behavior, while not universal in tech, is far from rare. ISVs (independent software vendors) are often built on the idea that value should be shared—through options, RSUs, or even discretionary grants. And while startup life can be brutal, there’s at least a plausible mechanism for loyalty, impact, and ownership to align.
🎭 Extraction in Hollywood
Now compare that to a real-world scenario from Hollywood:
A highly skilled woman is the head of wardrobe for a major studio film—let’s say a $100M+ production with major stars. She’s responsible for designing, sourcing, organizing, and managing a department that directly shapes the film’s aesthetic, historical credibility, and even marketing image.
- She receives no ownership and no residuals.
- She likely worked unpaid or underpaid prep weeks.
- She is not covered by any meaningful bonus structure.
- If she complains or pushes back on exploitative demands, she risks being labeled “difficult”—a career killer.
- When the movie grosses $500M+, she sees no change in compensation.
In Hollywood, even high-performing department heads are treated as disposable labor, not stakeholders. The industry runs on prestige scarcity, weak labor protections (especially outside top unions), and the implicit message: “You’re lucky to be here.”
🧭 Two Cultures
| Aspect | ISVs (Tech) | Hollywood |
| --- | --- | --- |
| Ownership | Often available to engineers, early hires, and even mid-level staff | Reserved for executives and stars only |
| Reward for Value | Can include equity grants, bonuses, promotions | Typically capped at day rate or negotiated fee |
| Mobility | Defined ladders, technical and managerial tracks | Heavily gatekept, opaque pathways, personal connections required; union protections help, but cultural power structures dominate |
| Transparency | Increasingly common due to laws and norms | Rare, often discouraged |
Hollywood wants to lecture tech about ethics.
Maybe it should clean up its own house first.
🎤 The Silence of the Stars
SAG-AFTRA is deeply concerned about AI’s hypothetical threat to the often obscene incomes of its stars. But for decades, it has shown near-total indifference to the real, grinding poverty of below-the-line workers—people whose names scroll by in silence while the champagne flows at premieres. Contrast this with the tech startup that gave even the receptionist—who mostly greeted the occasional visitor—stock options.
Stars routinely demand absurd perks—color-sorted M&Ms, personal gyms, private chefs. But how many have done what my startup CEO did? He did it even though I was already receiving more than a living wage and full benefits. How many have said: “Everyone working on a film I’m starring in must be paid a living wage for their locality and receive full benefits”?
The silence is deafening. Hollywood talks a lot about shared value—but it rarely shares any.
🎬 Can You Imagine Hollywood Doing This for a Crewmember?
In the tech industry, there are notable instances I’ve seen personally where companies have gone above and beyond to support their employees during challenging times:
- An employee, upon discovering a serious health issue shortly after retirement, was reinstated by their company into a nominal role. This move ensured they retained full health benefits during their treatment period.
- Another employee faced a prolonged family crisis due to a child’s life-threatening illness. The company allowed this individual to focus entirely on caregiving responsibilities for an extended period, without the pressure of work obligations.
These actions reflect a culture in tech that prioritizes employee well-being, recognizing the human element beyond just productivity metrics.
Contrast this with Hollywood, where such gestures of support are rare. The industry’s structure often lacks the flexibility or inclination to provide similar accommodations, especially for crew members who are vital to production but frequently overlooked.
These examples underscore the disparity in how different industries value and support their workforce, particularly during times of personal crisis.
🧱 Ritualized Subservience
And while we’re at it, let’s talk about Hollywood’s warped formality. In 1996, when I joined my first public company, I called the CEO by his first name—because that’s what you do in tech. He also worked alongside his core team in Engineering. His one perk? A cubicle maybe four times the standard size. In Hollywood? Department heads are expected to call people ‘Mr.’ or ‘Ms.’—a performative hierarchy straight out of 1953. It’s not respect. It’s ritualized subservience. And it tells you everything you need to know about who gets to own, and who’s expected to obey.
🎞️ Hollywood, Heal Thyself
If you’re going to sell yourself as the conscience of the AI age, your output should reflect it. But instead, Hollywood regularly produces ethically bankrupt content: glorifying violence, trivializing human suffering, and normalizing sociopathy as entertainment. It trains audiences to be numb to cruelty—and then lectures the tech world about fairness.
I once read someone online explain that they’d rather let their kid watch a porn film than a Hollywood movie—because in porn, nobody gets killed. That wasn’t a joke. It was a moral indictment.
Hollywood doesn’t need another AI ethics panel. It needs to fix its own house before lecturing the tech sector—because right now, it’s an institution with arguably lower ethics than the porn industry.