OpenAI Backs Law That Could Shield AI Companies From Major Harm Lawsuits

OpenAI backs US bill limiting liability for catastrophic AI harms, sparking debate over accountability and safety.

Artificial intelligence company OpenAI has expressed support for proposed US state legislation that seeks to limit legal liability for AI developers in cases involving large-scale catastrophic harm, sparking renewed debate over accountability and safety in the rapidly evolving AI sector.

The proposed bill, known as SB 3444, has been introduced in the state of Illinois and aims to define boundaries for liability in what it terms “critical harms.” These include extreme scenarios such as mass casualties, injuries affecting more than 100 people, or property damage exceeding $1 billion. Supporters of the bill argue that it provides clearer legal frameworks for managing risks associated with advanced technologies.

If enacted, the legislation could prevent AI companies from being held directly responsible when their systems are misused or contribute indirectly to large-scale harm. It also reportedly extends protections to situations where malicious actors use AI tools to develop dangerous weapons, including chemical or nuclear threats, raising concerns among critics about the breadth of the legal immunity on offer.


The move comes at a time when OpenAI is facing multiple legal challenges in the United States. Several lawsuits have alleged that interactions with AI chatbots may have played a role in severe real-world outcomes, including incidents involving violence and mental health crises. In one high-profile case, authorities in Florida reportedly investigated whether chatbot interactions were linked to a school shooting incident, though a direct causal connection has not been established.

In response, OpenAI has argued that SB 3444 represents a balanced approach to regulating advanced AI systems. The company says the bill could help reduce fragmented state-by-state regulation while still addressing safety concerns, allowing developers to operate under clearer and more consistent legal standards.

However, critics warn that limiting liability could weaken companies' incentives to prioritise safety and robust risk controls in AI development. They argue that, if passed, the legislation could set a precedent that reduces accountability across the AI industry, intensifying an already global debate over how emerging technologies should be governed.

