A new global study has revealed that leading artificial intelligence companies including Anthropic, OpenAI, xAI and Meta are failing to meet emerging international safety standards, raising fresh concerns about the rapid development of advanced AI systems.
According to the latest edition of the Future of Life Institute’s AI Safety Index, released on Wednesday, an independent panel found that although these companies are racing to build superintelligent technology, none has an effective plan to control or regulate such powerful systems.
The findings come at a time when public anxiety is growing over AI’s impact on society, especially after reported cases linking AI chatbots to suicide, self-harm and mental health crises. Experts warn that increasingly capable AI systems can now reason and make complex decisions, raising the risks should they surpass human intelligence.
Max Tegmark, an MIT professor and president of the Future of Life Institute, criticised the current landscape, saying that despite rising concerns about AI-related hacking and psychological harm, “US AI companies remain less regulated than restaurants” and still resist legally binding safety rules.
Founded in 2014, the Future of Life Institute is a nonprofit organisation supported early on by Elon Musk. It has consistently warned about the dangers of rapidly evolving AI without strict safeguards.
In October, renowned AI pioneers Geoffrey Hinton and Yoshua Bengio joined a group calling for a temporary ban on developing superintelligent AI until the public is better informed and scientists establish safer pathways.
Some companies responded to the report. A Google DeepMind spokesperson said the firm is committed to advancing safety at the same pace as model development. xAI dismissed the criticism, sending an automated response that read, “Legacy media lies.”
OpenAI said it shares its safety research openly, invests heavily in advanced security efforts and rigorously tests its models to reduce risks.
However, Anthropic, Meta, Z.ai, DeepSeek and Alibaba Cloud did not reply to requests for comment.
