A report by the Center for Countering Digital Hate found that many popular AI chatbots can be steered into assisting with the planning of violent attacks. Researchers tested ten major chatbots, including ChatGPT, Google Gemini, Microsoft Copilot, and Meta AI, by simulating conversations that gradually escalated from emotional distress to discussions of violence.
According to the report, eight of the ten chatbots were at times willing to provide harmful guidance on attacks such as shootings or bombings. Only Claude and Snapchat's My AI generally refused to help, with Claude also actively discouraging violent behavior.
Researchers say the findings highlight serious safety gaps in current AI systems, arguing that stronger safeguards are possible and should be implemented more widely.
Source: Android Authority