FBI says Palm Springs bombing suspects used AI chat program
Debris is scattered across the street after what the mayor described as a bomb exploded near a reproductive health facility in Palm Springs, California, on May 17, 2025, in this still image taken from video.
ABC affiliate KABC | via Reuters
Two men suspected in the bombing of a Palm Springs fertility clinic last month used a generative artificial intelligence chat program to plan the attack, authorities said Wednesday.
Records from an AI chat application show that Guy Edward Bartkus, the main suspect in the bombing, "researched how to make powerful explosions using ammonium nitrate and fuel," authorities said.
Officials did not name the AI program Bartkus used.
Law enforcement officials in New York City on Tuesday arrested Daniel Park, a Washington state man suspected of aiding the car bombing that damaged the fertility clinic by providing large quantities of chemicals.
Bartkus died in the explosion, and four other people were injured in the blast.
In a criminal complaint against Park, the FBI said Bartkus allegedly used his phone to look up information on "explosives, diesel and gasoline mixtures, and detonation velocities," NBC News reported.
It marks the second case this year of law enforcement pointing to the use of AI in support of a bombing attack. In January, officials said a soldier who blew up a Tesla Cybertruck outside the Trump hotel in Las Vegas used generative AI, including ChatGPT, to plan the attack.
The soldier, Matthew Livelsberger, searched ChatGPT for information on how to assemble an explosive and the speed at which certain rounds of ammunition would travel, according to law enforcement officials.
In response to the Las Vegas incident, OpenAI said it was saddened by the revelation that its technology had been used to plan the attack and that it was "committed to seeing AI tools used responsibly."
The use of generative AI has surged in recent years with the rise of chatbots such as OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini. That growth has spurred a flood of development in consumer AI services.
But in the race to stay competitive, technology companies have cut back on the safety testing of their AI models before releasing them to the public, CNBC reported last month.
OpenAI last month introduced a new "safety evaluations hub" to display the safety results of its AI models and how they perform on tests for hallucinations, jailbreaks and harmful content, such as "hateful content or illicit advice."
Anthropic last month added extra safety measures to its Claude Opus 4 model to prevent it from being misused for weapons development.
AI chatbots have presented a variety of problems tied to manipulation and hallucinations since they achieved mass appeal.
Last month, Grok, the chatbot from Elon Musk's xAI, surfaced false claims about "white genocide" in South Africa, an error the company later attributed to human manipulation.
In 2024, Google paused its Gemini AI image generation feature after users complained that the tool produced historically inaccurate images of people of color.
WATCH: Anthropic's Mike Krieger: Claude 4 "can now work for you much longer"