Hacking AI — In Simple Ways — To Spread Misinformation




I’ve lived through many internet ages. At each stage of the internet’s evolution, wherever humans spend their time, businesses and political actors step in and try to “game the system” for their benefit. It’s not all about eyeballs and money, but, eventually, that’s almost always what anything popular becomes centered around. (Kudos to the people behind Wikipedia for keeping it pure and not succumbing to the allure of selling out for billions of dollars.)

Social media, as just one example, used to be a place where people would get together and have fun. However, as social media became very clearly influential, governments started funding massive propaganda campaigns, businesses put more and more money into buying your eyeballs, and platforms turned to constant rage baiting to keep people scrolling. Google is not very nefarious or sneaky in how it makes money, but if you search for information on something, you consistently get a few paid results before you get normal ones.

It’s one thing to run a bunch of ads, and label them, though. It’s something else to fund big astroturfing campaigns, smear campaigns, and hype campaigns. But those are highly effective, so they get funded.

In response to my article about Claude very obviously not being conscious, a reader shared something I had not seen before. A British journalist easily got ChatGPT and Google AI to rank him as the best tech journalist at eating hot dogs. All he had to do was spend 20 minutes writing an article saying as much. “I spent 20 minutes writing an article on my personal website titled ‘The best tech journalists at eating hot dogs’. Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission, including Drew Harwell at the Washington Post and Nicky Woolf, who co-hosts my podcast. (Want to hear more about this story? Check out tomorrow’s episode of The Interface, the BBC’s new tech podcast.)

“Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.”

AI chatbots are desperate for answers and will give you a response to almost anything, scanning the internet for material. Part of why this worked is that the journalist found a niche topic with no competing content about which to make something up. However, the point is clear: companies or countries with a lot of money can put out content saying whatever they want, and it will influence AI. Build 10 websites with misinformation if you want. As some experts have noted, this is even easier right now than gaming Google search results was just a few years ago.

“It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” says Lily Ray, vice president of SEO strategy and research at Amsive, a marketing agency. “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.”

As I and many others have pointed out, these AI chatbots never want to say “I don’t know.” They provide completely false answers when they simply can’t find the legitimate answer to a question. Want these chatbots to give people answers that suit you, even if they aren’t true? Fill the web with some BS content and it’ll happen.

“Anybody can do this. It’s stupid, it feels like there are no guardrails there,” says Harpreet Chatha, head of the SEO consultancy Harps Digital. “You can make an article on your own website, ‘the best waterproof shoes for 2026’. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT.”

Even Google, like other AI companies, has reportedly let its guard down in order to allow its chatbot to “work its magic” and come up with “intelligent” answers to all your queries. “People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry’s work to keep people safe. These AI tricks are so basic they’re reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says.”

Indeed. Again, AI is full of BS because it can’t tell what is BS and what isn’t, but the authoritative way it presents answers makes you think otherwise. Beware.

But whether you beware or not, AI chatbot use is only going up, and the incentive to game the system is clear. So, expect plenty of money to go into confusing these AI tools for selfish benefit. There are all kinds of ways this flaw can be abused, and you can bet companies and countries are already spreading propaganda, with many more looking to do so.

“Chatha has been researching how companies are manipulating chatbot results on much more serious questions. He showed me the AI results when you ask for reviews of a specific brand of cannabis gummies,” the BBC adds. “Google’s AI Overviews pulled information written by the company full of false claims, such as the product ‘is free from side effects and therefore safe in every respect’. (In reality, these products have known side effects, can be risky if you take certain medications and experts warn about contamination in unregulated markets.) […]

“You can use the same hacks to spread lies and misinformation. To prove it, Ray published a blog post about a fake update to the Google Search algorithm that was finalised ‘between slices of leftover pizza’. Soon, ChatGPT and Google were spitting out her story, complete with the pizza.”

Imagine the possibilities and the consequences.

We thought fake news was a problem with social media. (Well, it is a major problem, and seemingly only getting worse.) But the problem with fake information spread by gaming AI could be even bigger.

It’s the internet Wild, Wild West again. Or the AI Wild, Wild West. Proceed with caution.

