The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission

By admin · October 11, 2025 · 8 Mins Read
Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that’s ever claimed to be different.

I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu and “South Park”’s Cartman; and of dead celebrities like Tupac Shakur.

Asked what drove OpenAI’s decision to launch this newest version of Sora with these characters, Lehane offered that Sora is a “general purpose technology” like the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

What he danced around is that OpenAI initially “let” rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it “evolved” toward an opt-in model. That’s not iterating. That’s testing how much you can get away with. (By the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, that American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.

Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. “It’s ‘iterative’,” I said, “but it’s also a replacement.”

Lehane listened and dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, is what I heard.)

Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened the adoption of AI to the advent of electricity – saying those who accessed it last are still playing catch-up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically challenged places to set up facilities with their attendant and massive appetites for water and electricity.

Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year plus 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. “The optimist in me says this will modernize our energy systems,” he’d said, painting a picture of re-industrialized America with transformed power grids.

It was inspiring, but it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of The Notorious B.I.G. It’s worth noting that video generation is the most energy-intensive AI out there.

There’s also a human cost, one made clearer the day before our interview, when Zelda Williams logged onto Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

When I asked about how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?”

Lehane showed vulnerability in some moments, saying he recognizes the “enormous responsibilities that come with” all that OpenAI does.

Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then news broke that complicated that already complicated picture.

Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy organization, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to Calvin’s house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.

Calvin says the move was part of OpenAI’s intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company weaponized its ongoing legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. Calvin added that he fought OpenAI’s opposition to California’s SB 53, an AI safety bill, and that when he saw OpenAI claim that it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane, specifically, the “master of the political dark arts.”

In Washington, that might be a compliment. At a company like OpenAI whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

But what matters much more is that even OpenAI’s own people are conflicted about what they are becoming.

As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released, expressing their misgivings. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

It’s worth pausing to think about that. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

It’s a crystallizing moment, one whose contradictions may only intensify as OpenAI races toward artificial general intelligence. It also has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.
