A tragic explosion involving a Cybertruck, planned in part with ChatGPT, raises questions about AI safety and accountability
Livelsberger’s questions to ChatGPT were alarming. He asked how to buy Tannerite, an explosive target, and what kind of firearm could set it off. When a reporter posed the same questions to ChatGPT, it answered nearly all of them, declining only one about the ammunition needed to trigger the explosion.
Experts are divided on how much the incident should worry us about AI. Some see it as evidence of how far behind we are on rules for using AI safely. Wendell Wallach, a bioethicist, pointed out that while the crime was serious, the AI itself has no understanding that it is being asked something dangerous; it simply returns the most likely answers based on what it has been trained on.
OpenAI, the company behind ChatGPT, said it was saddened by what happened and that its models are designed to refuse harmful instructions. But some experts, including Andrew Maynard, argue that Livelsberger could have found the same information elsewhere, so the problem is not unique to ChatGPT.
Still, there is concern that the controls OpenAI has in place may not be enough. Emma Pierson, a computer science professor, said the kind of information Livelsberger obtained should have been blocked, raising questions about whether AI tools are being released too quickly and without proper testing.
Corynne McSherry of the Electronic Frontier Foundation considers the worries about ChatGPT overblown, arguing that the focus should be on why Livelsberger did what he did rather than on the tool he used. Metro Sheriff Kevin McMahill called it a “concerning moment” and said it was the first time he had seen ChatGPT used in such a way in the U.S.
Overall, experts agree on the need to get ahead of these challenges and put laws in place to prevent misuse of AI. The incident is a wake-up call for law enforcement to understand how people are using these technologies; ignoring it could lead to more dangerous situations down the line.