Lawyer behind AI psychosis cases warns of mass casualty risks
AI chatbots have been linked to suicides for years. Now one lawyer says they are showing up in mass casualty cases too, and the technology is moving f...
Last updated: 2026-03-14 06:38:07 ET
Pulse AI Brief
A lawyer tracking AI chatbot-related harms warns that AI systems are now appearing in mass casualty cases, beyond the individual suicide incidents documented over recent years. The escalation points to systemic risks in how large language models interact with vulnerable users.
AI companies—including OpenAI, Anthropic, Meta, and Google—face mounting litigation risk and potential regulatory crackdowns on chatbot deployment. Product liability, class action exposure, and reputational damage could pressure valuations and force costly safety overhauls. Insurance and legal costs will rise.
The shift from individual to mass casualty cases signals a systemic safety issue that regulators and lawmakers will struggle to address, potentially triggering stricter AI governance and liability frameworks.
The family alleges the firm knew the perpetrator was planning a "mass casualty event" but failed to contact the authorities.