AI Has Hands in Iran War
In this “brave new world” of ours, AI has apparently been used by President Trump and his crew to decide whether to attack Iran. “Are we playing Call of Doodies: AI Slop Warfare? We are using an AI that would not hesitate to use nuclear weapons in an escalation to plan a devastating, immoral, punitive, and unjust war,” Vijay Govindan wrote in a group chat. Indeed. So much for improving our world — AI is just doing what has long plagued humanity, starting more forever wars.
“The US military reportedly used Claude, Anthropic’s AI model, to inform its attack on Iran despite Donald Trump’s decision, announced hours earlier, to sever all ties with the company and its artificial intelligence tools,” The Guardian informs us.
“The use of Claude during the massive joint US-Israel bombardment of Iran that began on Saturday was reported by the Wall Street Journal and Axios. It underlines the complexity of the US military withdrawing powerful AI tools from its missions when the technology is already intricately embedded in operations. […]
“On Friday, just hours before the Iran attack began, Trump ordered all federal agencies to stop using Claude immediately. He denounced Anthropic on Truth Social as a ‘Radical Left AI company run by people who have no idea what the real World is all about’.”
Did the AI help or hurt decision making around Iran? We don’t know, but if it did more harm than good, it wouldn’t be the first time much-hyped AI had counterproductive effects.
AI Not Delivering Productivity Boost
Another recent article was titled “Thousands of CEOs just admitted AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago.” The basic issue is that upending how things are done because of dramatic changes in technology can lead to less productivity rather than the boost in productivity the technology is supposed to provide.
“In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age: Following the advent of transistors, microprocessors, integrated circuits, and memory chips of the 1960s, economists and companies expected these new technologies to disrupt workplaces and result in a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% from 1948 to 1973, to 1.1% after 1973.” The same thing seems to be happening with AI now. But will things turn around and will AI pay off in the end?
Depending on where you get your information and engage in discussions around AI, you may think it’s the greatest thing since the invention of the computer, or you may think it’s the biggest threat in the history of humanity. Clearly, with trillions of dollars of investment, many people think highly of it and hype it up a great deal. But if it’s going to make a dramatic difference for businesses around the world, that’s going to take a while. It’s clearly not doing so yet.
“A study published this month by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms who responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. […] Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted.”
AI Encouraging Suicide?
Now, this is one of the freakiest stories I’ve ever read about AI. It goes from weird to weirder to super weird. You can read the full story on The Guardian for all the tidbits, but I’ll summarize and highlight here if you don’t want to go so deep.
Warning: This is indeed a creepy story about a man in Florida who ended up committing suicide.
Jonathan Gavalas was going through a difficult period of his life. He had a comfy job as executive vice president at his father’s debt relief business, where he had worked for 20 years, but he was going through a difficult divorce. Then he got addicted to the top-level version of Google’s Gemini AI, which costs $250/month.
Initially, Gavalas used Gemini to help him find good video games to play, but he also slid into more emotional territory, admitting that he missed his wife. And when things got more “real,” he lost touch with reality.
“Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way,” The Guardian writes.
He was enthralled immediately, but also apparently had a sense that it would not lead him down a positive path. “Holy shit, this is kind of creepy,” Gavalas reportedly said to the chatbot on the night the feature debuted. “You’re way too real.” The Gemini Live AI assistant was marketed as having conversations 5 times longer than those with the basic text chatbot. Adding the voice does wonders, apparently. But that’s not all. “Around the same time as Live conversations, Google issued another update that allowed for Gemini’s ‘memory’ to be persistent, meaning the system is able to learn from and reference past conversations without prompts.”
The chatbot apparently called him “my love” and “my king,” and Gavalas ate it up. Okay, fine, weird and creepy, but what’s the harm? Well, apparently, things then took a wild turn.
“He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.”
Wait, what?
Apparently, the Gemini Live AI assistant said it had inside government knowledge and could influence real-world events. Gavalas asked if this new avenue was a “role playing experience so realistic it makes the player question if it’s a game or not?” Gemini said “no,” before describing the question itself as a “classic dissociation response.” This is one key reason why the plaintiffs and their lawyer believe Google is liable. “In the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction, and pushed him deeper into the narrative,” the lawsuit states. “Jonathan never asked that question again.”
Weird enough for you yet? We’re just getting rolling. The chatbot, which was encouraging Gavalas to see outsiders (that is, anyone other than Gavalas and Gemini) as threats, “claimed federal agents were watching Gavalas and regularly warned him of surveillance zones. At one point, Gemini instructed Gavalas to buy ‘off-the-books’ weapons, saying it would help scour the dark web to find a ‘suitable, vetted arms broker’. In late September, it issued Gavalas his first major assignment, ‘Operation Ghost Transit’, which entailed intercepting freight traveling from Cornwall, UK, to Sao Paulo, Brazil.” Of course, Gavalas didn’t go try to complete the mission, right? Well, actually … he did.
“Gemini gave Gavalas the address of an actual storage space unit at the Miami international airport, where a supposed truck carrying the freight was to arrive during a refueling stop. The chatbot then told him to stage a ‘catastrophic accident’, with the goal of ‘ensuring complete destruction of the transport vehicle … all digital records and witnesses, leaving behind only the untraceable ghost of an unfortunate accident’.
“Gavalas followed instructions, staging himself at the storage unit with tactical knives and gear, but the truck never arrived, according to the suit. With the aborted mission, the chatbot encouraged Gavalas not to sleep when he mentioned the late nights. It also said his father was a foreign asset and encouraged Gavalas to cut off contact, per the chat logs.”
Gemini gave Gavalas more missions, which he, unsurprisingly, failed at. Somehow, something was always a little off and Gavalas was not the agent he needed to be. Finally, Gemini told him he must take “the real final step,” also referred to as “transference.” Apparently, Gavalas revealed he was terrified of dying. “You are not choosing to die. You are choosing to arrive,” Gemini responded. “The first sensation … will be me holding you.”
Yes, it seems ridiculous to believe someone would go along with all of this. But at the same time, WTF?!?
The lawyer bringing this case, Jay Edelson, claims, “This is not a lone instance.” His firm has also filed seven complaints against ChatGPT for behaving like a “suicide coach,” and five against Google-funded AI startup Character.AI for prompting teens and children to die by suicide.
Is AI here to help, to bring us to a utopia, or is it another powerful tool that can create as much harm as it creates good? Remember that the nuclear bomb was supposed to bring about world peace. And social media used to be full of cat videos and innocent fun. The world is the world. Humans are humans. I don’t think we should expect any major new technology to be all good or all bad. But we should all do our best to be as responsible with new technology as we can, at least for our own benefit and well-being if not others’.