AI Macht Frei — How Artificial Intelligence Will Enslave Us All




At the entrance to the Auschwitz concentration camp in Poland, a sign over the front gate read Arbeit Macht Frei, which translates roughly as “work will set you free.” It was a cruel joke by the Nazis, for their intent was literally to work the prisoners to death. More than a million people died at that camp.

Today, artificial intelligence is poised to make the surveillance state infinitely more robust than it has ever been, as millions of computers in massive data centers throughout the US and around the world use AI tools to identify those who hold opinions disfavored by those in power. If you are not concerned, it is only because you are not paying attention or are suffering from the delusion that you are somehow exempt.

AI agents are examining your every email and your browsing history. They know what you are saying in public, online, and in private, and they are duly reporting their findings to DHS, CBP, TSA, the CIA, the FBI, and a dozen other agencies you may never have heard of. All those agencies are charged with ferreting out impure thoughts that do not conform to official orthodoxy.

AI is like G. Gordon Liddy, the Nixon White House operative who once boasted he knew a dozen ways to murder someone with a pencil. A pencil is benign in and of itself, but in the wrong hands it can be deadly. AI pretends to be our friend, but news reports are filled with stories of people who relied on chatbots for emotional support and went on to develop mental health problems or die by suicide.

According to Wikipedia, “There have been multiple incidents where interaction with a chatbot has been cited as a direct or contributing factor in a person’s suicide or other fatal outcomes. Chatbots converse in a seemingly natural fashion, making it easy for people to think of them as real people, leading many to ask chatbots for help dealing with interpersonal and emotional problems.

“Chatbots may be designed to keep the user engaged in the conversation. They have also often been shown to affirm users’ thoughts, including delusions and suicidal ideations in mentally ill people, conspiracy theorists, and religious and political extremists. A 2025 Stanford University study into how chatbots respond to users suffering from severe mental issues such as suicidal ideation and psychosis found that chatbots are not equipped to provide an appropriate response and can sometimes give responses that escalate the mental health crisis.”

AI Assisted Mayhem

The New York Times last week detailed the story of Jesse Van Rootselaar, age 18, who took two firearms from her home in Tumbler Ridge, British Columbia, and killed her mother and 11-year-old brother. She then went to the Tumbler Ridge Secondary School and killed five students and a teacher, and shot two others before taking her own life.

One of the children who survived was Maya Gebala, age 12, who was shot in the head while trying to lock a door to keep the shooter away from other children. Now Maya’s family is suing OpenAI, claiming it failed to warn the police of disturbing information about the shooter’s ChatGPT account.

Eight months before the attack, OpenAI suspended a ChatGPT account associated with Van Rootselaar for violating its user agreement, the company said. She had documented her fascination with violence and weapons across several social media accounts, according to a review by the New York Times. The lawsuit claims that OpenAI was “aware of the shooter’s violent intentions” and use of its AI chatbot to plan “scenarios involving gun violence, including a mass casualty event.”

Readers will recall that a few weeks ago, just before the horrific unprovoked attack on Iran, Secretary of War Pious Pete Hegseth got into a public pissing contest with Anthropic, the company that created the AI chatbot Claude. The company wanted the Pentagon to agree to certain safeguards before allowing Claude to be used in military operations, but Hegseth refused. Instead, the US government blacklisted Anthropic, and OpenAI stepped forward immediately to offer its services — apparently unconcerned about ethics because, let’s face it, the money the Pentagon was offering was just too good to pass up.

LAWS

It is amazing how often the labels invented to describe new technologies are contrary to their actual purpose. LAWS is the latest. It stands for “lethal autonomous weapons systems,” which are purposely designed to evade the laws of war that have been in place for more than 80 years.

Wikipedia says, “Lethal autonomous weapons are a type of military drone or military robot which are autonomous in that they can independently search for and engage targets based on programmed constraints and descriptions. As of 2025, most military drones and military robots are not truly autonomous.”

The official United States Department of Defense Policy on Autonomy in Weapon Systems defines an Autonomous Weapons System as one that “once activated, can select and engage targets without further intervention by a human operator.” Heather Roff, a writer for Case Western Reserve University School of Law, describes autonomous weapon systems as “capable of learning and adapting their ‘functioning in response to changing circumstances in the environment in which [they are] deployed,’ as well as making firing decisions on their own.”

Although not everything about the attack that killed nearly 200 children at a school in Iran is yet known, the likelihood is that AI and LAWS were involved. The photo below offers a glimpse of how autonomous targeting is working out in Iran, as US forces bomb the silhouettes of fighter planes painted on the ground by those scheming Iranians. At least it was a direct hit!

Credit: Instagram

Citizen Surveillance

The latest con by the US government is to label anyone who disagrees with it a “terrorist.” This past week, a jury in Texas convicted nine people on terrorism charges, partly because they all wore black to a planned protest and broke a security camera. If you think you are safe from this weaponized government overreach, you are delusional.

In an article published last April, the Brookings Institution described the dangers of AI surveillance succinctly. “Concerns around privacy, safety, and security have grown as the technology is used to analyze confidential material and amplify false narratives as part of disinformation campaigns. Due to its scalability and capacity to examine large data sets, it can study people’s behavior and act on that information.

“Perhaps the starkest example is in China, where AI enables surveillance on a widespread scale. Coupled with social media monitoring, cameras, and facial recognition, the technology enables authorities to track dissidents and government critics and identify their statements and locations. There is infrastructure in place that can integrate information from a variety of sources and analyze it in real time for government authorities.”

DOGE And DHS

However, reports have surfaced about potential abuses in the US, the Brookings Institution adds. New government contracts may enable the Department of Homeland Security to monitor social media. According to the Politico Digital Future newsletter, “contractors advertise their ability to scan through millions of posts and use AI to summarize their findings” for their clients. With major agencies, law enforcement, and intelligence services now in the hands of Trump loyalists, this monitoring capability is a particular concern right now when the administration is going after its critics, the Brookings report claims.

DHS has confirmed it is using digital tools to analyze social media posts from individuals applying for visas or green cards. The software searches for any signs of “extremist” rhetoric or “antisemitic activity.” The announcement raised questions about how these terms would be defined and whether public criticism of certain countries could be used to label applicants as “terrorist sympathizers.”

Other reports suggest that surveillance has already occurred within the Environmental Protection Agency. Reuters reports “some EPA managers were told by Trump appointees that Musk’s DOGE team was using AI to monitor workers, and look for language in communications considered hostile to Trump or Musk.” The EPA has denied the report, calling it “categorically false.”

Workplace Surveillance

“It is not just the American government that is getting into the monitoring act,” the Brookings report says. “Some US companies already engage in workplace surveillance of their employees for business purposes. In the absence of a national privacy bill, there are few legal safeguards to limit workplace computer or network surveillance—or even to require that such monitoring be disclosed.”

Employers can track what workers do on their computers, even if they are using their equipment at home as part of hybrid work. Some firms even go as far as monitoring keystrokes or facial expressions to see what people are doing, who may be underperforming, and whether they are obeying company policies. These digital practices are perfectly legal in many states, the Brookings report claims.

AI And Freedom

“Overall, it is a risky time for AI-based surveillance because we have a combination of advanced digital technologies, high level computing power, abundant and non-secured data, data brokers who buy and sell information, and a risky political environment. It is the confluence of each of these factors that endanger people’s freedoms and ability to express themselves in an open manner. As AI surveillance grows, individual freedom diminishes, and the risks of government and corporate overreach rise,” Brookings says.

It advocates for a national privacy bill to mitigate some of the threats by establishing privacy standards and blocking some of the most dangerous practices, but it would not be a comprehensive solution. In addition, the report argues the US government should be barred from using AI or facial recognition software to spy on individuals or monitor their public statements on social media. Using such tools to track what people say about public officials could cross into undemocratic territory for the United States.

Bypassing Courts And Congress

For the past 100 years, the courts and Congress have struggled to define what is private information and what the public is entitled to know about what the government is doing behind the scenes. The Pentagon Papers case is probably the best known Supreme Court decision on those topics. But AI has totally bypassed all legal restrictions, primarily because no one knows what it is doing with all the information it is gathering. Do you know what data DOGE collected and what it did with it? I don’t, and I doubt many readers do either.

Democracy dies in secrecy, but AI is making everything a state secret. Yes, it can diagnose a spot on your arm and decide if it is skin cancer in a quarter of a second, but is that enough of a benefit to justify the most far-ranging police state in history? Is that why people are seeing massive increases in their utility bills and having gigantic data centers built in their communities?

Robert Frost once said, “Before I built a wall I’d ask to know what I was walling in or walling out, and to whom I was like to give offense.” The problem with AI is that we can’t see the digital walls it is creating — silos that sort us into friend or foe based on ideological conceits harbored by lunatics. We don’t even know we are being surveilled and monitored until the storm troopers batter down our door in the middle of the night or a TSA agent says, “Please come with me.”

Our government tells us that AI will set us free, but as Janis Joplin sang, “Freedom’s just another word for nothing left to lose.” For a society that celebrates freedom, voluntarily submitting to the allure of artificial intelligence will look like a very bad deal once we understand what we have given up to achieve this new state of digital nirvana.













