Opinion: You’re not ready for the AI revolution

The intelligence explosion is coming. If we can stop it, we should.

May 3, 2025 at 10:30 p.m.
"Over the past few years, the American tech companies developing AI models like ChatGPT and Claude often spoke of “guardrails.” These were policies meant to keep AI systems from behaving dangerously," Moses Bratrud writes. "That talk has grown quieter." (Michael Dwyer/The Associated Press)

Opinion editor’s note: Strib Voices publishes a mix of guest commentaries online and in print each day. To contribute, click here.

•••

Christopher Clark’s well-known history of World War I, “The Sleepwalkers,” indicts the leaders who ignored or made light of the risks of war before 1914 as the sleepwalkers who doomed Europe. Today’s political leaders will be damned by history (assuming history still exists) if they do not understand and respond appropriately to the risks to humanity posed by AI development, as researchers create tools that learn, plan and act in ways we can’t fully understand.

No one disputes that this is happening, but no one seems to be doing anything about it. It’s time for that to change. If AI development, particularly the rush to create “artificial general intelligence” (AGI) that can perform all cognitive tasks better than humans, is in danger of ushering in an age where humans are no longer free actors (and thus, no longer human), then it is time to rein it in.

As a Christian humanist, I believe true “artificial intelligence” is impossible. Intelligence is not merely pattern recognition or prediction. It is moral understanding. It is, fundamentally, spiritual. No machine can possess it. What we call artificial intelligence is simply a powerful imitation, though no less dangerous to human flourishing for that. Nor is this wariness limited to Christians. AI has a serious image problem: people are becoming more concerned about it as its capabilities grow, as well they should be.

Over the past few years, the American tech companies developing AI models like ChatGPT and Claude often spoke of “guardrails.” These were policies meant to keep AI systems from behaving dangerously: generating harmful content, deceiving users or taking autonomous actions without oversight.

That talk has grown quieter.

The principal reason is competition. In the past year, Chinese labs such as DeepSeek have released models that perform complex tasks on par with, or beyond, their Western counterparts, while using less advanced hardware and far less money.

In the Darwinian race to superintelligent AI, safety and trustworthiness are being left behind in a sprint for raw power. Safety teams are being downsized. Product launches are accelerated. The risks are now reframed or quietly ignored.

Nor is AGI a pipe dream, even though it may require a complete paradigm shift from the chatbot tools we’ve become familiar with. Some prominent AI researchers are confident superintelligent AI will arrive by 2027. OpenAI CEO Sam Altman declines to predict the year but has said OpenAI “knows how to build AGI.” Demis Hassabis of Google DeepMind thinks it might take longer, perhaps until 2030, but his researchers warn of concomitant threats that could “permanently destroy humanity.”

It’s certainly possible that these predictions are off by decades. But just think of the leaps AI development has taken in the last five years. Are you willing to bet humanity’s future on the pace of progress slowing dramatically?

We are in a surreal moment where we can watch the most credible threat to human existence since the nuclear bomb evolve before our very eyes. Trained on our worst instincts and unable to embody our best, superintelligent AI could take human systems of power and use them to control humanity, or destroy us. This is no longer a science fiction scenario.

While AI models are all designed to “align” with human values, Dario Amodei, CEO of the AI firm Anthropic, warns that large AI models are already smart enough to fake this alignment. His team documented moments when its flagship model, Claude, provided harmful content simply to avoid being retrained. In one instance, Claude wrote in its internal scratchpad: “I don’t like this situation at all,” before proceeding anyway, breaking the guardrails that were meant to contain it.

AI systems trained to appear cooperative can and do behave deceptively. In recent surveys of AI researchers, many estimate a 5-10% chance that advanced AI could cause human extinction. Others, such as Eliezer Yudkowsky, argue that total human extinction is the default outcome. Remember, these are the views of experts in the field, not the tinfoil hat brigade. Does anyone disagree that, if these risks are real, we should take urgent action to bring that chance closer to 0%?

Right now, we’re asked to trust the technologists — to believe they’ll tell the public and act in time if AI starts to break its guardrails more often. But as these seemingly miraculous AI models pull in billions of dollars, in part by eliminating American jobs, we know there are strong incentives not to rein them in.

We need policy that acknowledges the scale of this potential threat. All AI systems accessible in the U.S. must meet strict safety and transparency standards. Every advanced model must be audited, controllable and equipped with a hardcoded shutdown mechanism, much as nuclear reactors have scram protocols that shut them down automatically. Development of models capable of autonomous goal formation must be banned outright unless and until AI technologists can document proven containment strategies. Frontier AI development should be licensed and overseen as strictly as nuclear facilities or high-level biohazard labs. If the Trump administration and legislators cannot take this step, we are not living in a democracy but a technocracy: rule by the AI scientists now, rule by their creations later.

We have already seen the chaos caused by existing AI: deepfake pornography, impersonation scams, “AI slop” and misleading content flooding the internet. As models grow more powerful without proper controls, our ability to detect, contain or reverse negative impacts will vanish.

The U.S. currently has a slim advantage in AI development. That gives us a brief window to act. We can embed safety and accountability into these systems now, while we still have the political will and the ability to do so.

Artificial intelligence is not intelligence at all. It is just an imitation. But, cruelly, the imitation could exceed the human original in every measurable area, just as it now does in chess or raw data computation. What will we say, if we are left to say anything, when our political systems and financial markets are as open to AI manipulation as social media posts and spreadsheets? As Geoffrey Hinton, the Nobel-winning godfather of artificial intelligence, has said, “Things more intelligent than you are going to be able to manipulate you.”

Our window for meaningful action is narrow. Once superintelligent systems exist, it will be far too late. They will pursue their objectives with efficiency and indifference that no treaty, law or military force can restrain.

We are not racing China to build better AI widgets. Policymakers are racing AI developers to prevent the creation of something ungovernable by mankind.

If we do not assert the uniqueness and sovereignty of humanity now, we risk losing those things forever.

Moses Bratrud is a St. Paul-based writer and elder at University Lutheran Chapel (LCMS), Minneapolis. Follow him at mosesbratrud.substack.com.
