AI Ethics
May 4, 2026

Elon Musk's Expert Witness Highlights Risks of AI in OpenAI Lawsuit

AI Summary

In a trial concerning OpenAI's conversion to a for-profit model, expert witness Stuart Russell emphasized the dangers of AI development, including cybersecurity threats and the competitive race toward Artificial General Intelligence (AGI). The court limited the scope of Russell's testimony, but he called for tighter regulation of AI to mitigate existential risks.

  • Elon Musk's legal team is attempting to demonstrate that OpenAI strayed from its original mission of AI safety by pursuing profit. They reference past communications from OpenAI's founders about the need for a counterbalance to competitors like Google DeepMind.
  • Stuart Russell, a computer science professor at UC Berkeley, served as the sole expert witness, discussing risks associated with AI, including misalignment and the competitive dynamics of AGI development.
  • Russell co-signed the March 2023 open letter calling for a six-month pause on training AI systems more powerful than GPT-4, citing concerns about an arms race in AI development.
  • The court limited Russell's testimony regarding broader existential threats posed by AI, following objections from OpenAI's attorneys.
  • OpenAI's founders have expressed concerns about AI risks while simultaneously pursuing for-profit ventures, raising questions about the relationship between corporate interests and AI safety.
  • The case reflects a larger national debate on AI regulation, with figures like Senator Bernie Sanders advocating for moratoriums on data center construction due to AI concerns, while others argue that tech leaders' fears should not overshadow AI's potential benefits.
Tags: elon musk, openai, agi arms race, ai regulation, stuart russell