A monkey demon statue in Bali (photo by Tam Hunt)

Why I’m Suing OpenAI, maker of ChatGPT

Tam Hunt
May 7, 2025

“Most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones, and … there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.”

New York Times columnist Kevin Roose, March 14, 2025

This week, I filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in Hawaii until it can demonstrate that it has implemented the safety measures the company itself has called for.

I’m at the end of my rope, having tried everything else to address the dangers of unregulated AI development.

For the past two years, I’ve worked with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii. These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation.

Unfortunately, despite collaboration with key senators and committee chairs, legislative solutions have failed. President Trump has rolled back almost every aspect of federal AI regulation, and has essentially destroyed the international treaty effort that began with the Bletchley Declaration in 2023.

We are at a pivotal moment. Leaders in AI development — including OpenAI’s own CEO Sam Altman — have acknowledged the potential existential risks posed by increasingly capable AI systems. In 2015, Altman stated that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” Yes, he was probably joking — but it’s not a joke.

In May 2023, just over eight years later, hundreds of technology leaders and researchers, including Altman himself, signed a one-sentence open letter ranking AI among existential threats like pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter, released by the California-based non-profit Center for AI Safety, says in its entirety.

Despite these warnings, OpenAI has abandoned key safety commitments:

  1. They’ve walked back their “superalignment” initiative, which promised to dedicate 20% of the company’s computational resources to safety research
  2. The company has lost critical safety researchers, including co-founder Ilya Sutskever and alignment lead Jan Leike, who wrote as he left OpenAI that “safety culture and processes have taken a backseat to shiny products”
  3. They’ve recently reversed their prohibition on military applications
  4. Their governance structure was fundamentally altered during the November 2023 leadership crisis, weakening the safety-focused oversight the nonprofit board was designed to provide
  5. Most recently, in April 2025, OpenAI eliminated guardrails against misinformation and disinformation that it had previously classified as addressing “critical risks”

Hawaii faces distinct risks from unregulated AI deployment that create standing for this federal case:

Economic disruption: Recent analyses indicate that a substantial portion of Hawaii’s professional services jobs could face significant disruption within 5–7 years. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging.

Cultural heritage: Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.

Critical infrastructure: Our geographic isolation makes Hawaii’s energy, communications, transportation, and water systems particularly vulnerable to disruptions that could be caused by increasingly capable AI systems.

Democratic processes: With OpenAI’s recent removal of misinformation guardrails, Hawaii’s democratic processes face increased risks from AI-generated disinformation that could manipulate public discourse on issues of unique importance to our state.

My federal lawsuit applies well-established legal principles to this novel technological context:

  1. Product liability claims: OpenAI’s AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company’s deliberate removal of safety measures it previously deemed essential.
  2. Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.
  3. Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.
  4. Public nuisance: OpenAI’s deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.

Federal courts have recognized the viability of these claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit, such as Lemmon v. Snap, Inc., establish that technology companies can be held liable for design defects that create foreseeable risks of harm.

I’m not asking for a permanent ban on OpenAI but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed, including:

  1. Reinstating its previous commitment to allocate 20% of computational resources to alignment and safety research
  2. Implementing the safety framework outlined in its own publication “Planning for AGI and Beyond”
  3. Restoring meaningful oversight through governance reforms
  4. Creating specific safeguards against misuse for manipulation of democratic processes
  5. Developing protocols to protect Hawaii’s unique cultural and natural resources

These items simply require the company to adhere to safety standards it has publicly endorsed but failed to consistently implement.

While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.

The development of increasingly capable AI systems represents perhaps the most significant technological transformation in human history. The decisions we make today will shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures.

What is happening now with OpenAI’s breakneck AI development and deployment to the public is, to use Tristan Harris’s succinct summary: “insane.”

[I am self-funding this lawsuit, which involves substantial work and fees. I am accepting donations to support my work through Venmo @tam-hunt. Your support would be appreciated.]

Tamlyn Hunt, J.D., is an attorney and AI safety advocate based in Hawaii.
