Ramstad: China’s AI star DeepSeek shook basic patterns of high tech

The previous exception to this pattern: supercomputing. Minnesota remains home to many who routinely push boundaries in software and hardware.

The Minnesota Star Tribune
February 12, 2025 at 11:20AM
Gina Norling, head of high performance computing at Advanced Micro Devices, with El Capitan, currently the world's most powerful supercomputer. (Garry McLeod)

It’s easy to analyze the strategies of high technology businesses if you know this: Hardware is always upgrading and software is always degrading.

The news last month about China’s DeepSeek artificial intelligence breakthrough blew away Silicon Valley and investors because it defied this industry norm.

Historically, the improving performance of microchips and other components makes gadgets better. That’s why we frequently see new versions of personal computers, smartphones and other devices.

Hardware improvements also allow software developers to build applications with more bells and whistles. However, developers tend to be less efficient when making those improvements.

Lines of code grow. Just as a gas expands to fill whatever space it’s in, software programmers find ways to consume whatever processing power they’re given.

DeepSeek bucks this phenomenon by performing as well as or better than OpenAI’s ChatGPT with less processing power and at lower cost, though how much lower has become widely debated.

As a result, DeepSeek raised big questions about whether AI will require the huge investments of capital and energy that so many people had been assuming.

There is at least one other niche of high tech that already defies this hardware-gives-software-takes maxim. It’s high-performance computing, the realm of supercomputers and, more recently, exascale computers. There are so few of these computers that there’s incentive to use them efficiently.

“It’s all about trying to optimize that software,” Gina Norling, a 25-year veteran of Minnesota’s high-performance computing scene, told me a few days after the DeepSeek news shook the tech scene. “How do we make them run efficiently? And that is the question that I think AI has to ask.”

Norling spent more than a decade at Cray Inc. — the pioneering supercomputer maker that Hewlett Packard Enterprise (HPE) acquired in 2019 — and most recently worked on Cray’s involvement in El Capitan, now the world’s most powerful computer. Today, she’s an executive at Advanced Micro Devices, the chipmaker whose products are the main processors in Cray’s machines, including El Capitan.

Gina Norling, high performance computing leader at Advanced Micro Devices, said engineers in supercomputing focus on software efficiency because of the relative scarcity of time on the massive machines. (Evan Ramstad)

She contacted me after I wrote a story in 2022 about the state’s supercomputing companies. She and others reminded me there are still hundreds of Minnesotans engaged in high-performance computing work at HPE, other companies and universities.

Last month, she sent me a photo of a few dozen people at the recent dedication of El Capitan, which is a project of the National Nuclear Security Administration and housed at the Lawrence Livermore National Laboratory in Livermore, Calif.

“There were two other Minnesotans in the photo,” Norling said. “These are the architects and leaders, and we don’t represent all of the engineering work that is being done here locally in Minnesota, because there’s a lot of Minnesota folks still involved and engaged in software build-out.”

El Capitan is reserved for government use. The nation’s next-fastest computer, called Frontier at Oak Ridge National Laboratory in Tennessee, is also an HPE product built around AMD chips. Private and academic researchers vie to use it along with government agencies.

“Committees review all the proposals for running applications, and they’ll only select a few,” Norling said. “Again, it’s all about trying to optimize that software.”

Even before DeepSeek rolled out the software revision that sparked all the headlines last month, AI software prices were dropping so quickly that analysts and journalists were already calling it a commodity. OpenAI, Google and other big developers of AI models have sharply cut the prices they charge other developers to run data through their models.

The DeepSeek development, an AI model needing less computational effort, is putting downward pressure on the hardware side. That’s playing out most visibly in investors' perceptions of Nvidia, the leading maker of chips used in AI systems.

Nvidia, however, is likely to benefit from an economic principle known as the Jevons paradox: the idea that when a resource becomes more efficient to use and its price drops, demand rises so much that total consumption of the resource actually increases.
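The paradox comes down to back-of-the-envelope arithmetic. Here is a sketch with made-up numbers (not figures from the column): a chip becomes four times more efficient, cutting the cost per AI query by 75%, but the cheaper queries spur demand to grow sixfold, so total spending on compute rises rather than falls.

```python
# Hypothetical numbers illustrating the Jevons paradox.
cost_per_query_old = 1.00   # dollars per query (made up)
cost_per_query_new = 0.25   # after a 4x efficiency gain

queries_old = 1_000_000     # demand at the old price (made up)
queries_new = 6_000_000     # cheaper queries spur 6x the demand

spend_old = queries_old * cost_per_query_old   # $1,000,000
spend_new = queries_new * cost_per_query_new   # $1,500,000

# Each query got cheaper, yet total consumption went up.
print(spend_old, spend_new)
```

Whether AI demand actually grows fast enough to outrun the efficiency gains is the open question investors are wrestling with.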

This effect played out with personal computers in the late 1990s. In 1997, the then-leading maker of PCs, Compaq, started producing its first sub-$1,000 desktops and laptops, and some investors worried commoditization would hurt PCs. Instead, demand skyrocketed.

We take for granted that technology has become a powerful deflationary force. Look at where gadget and AI pricing is today compared to PCs back then.

The sub-$1,000 PCs of the late 1990s would be about $2,000 in 2025 dollars. The AI revolution is playing out on PCs, smartphones and tablets as cheap as $500.
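That inflation comparison is simple compound-growth arithmetic. As a rough sketch, assuming a 2.5% average annual inflation rate (the actual CPI path varies year to year, so this is only an approximation):

```python
# Convert a 1997 price to approximate 2025 dollars by compounding
# an assumed 2.5% average annual inflation rate over 28 years.
price_1997 = 1_000
years = 2025 - 1997            # 28 years
avg_annual_inflation = 0.025   # assumed average; actual CPI varies

price_2025 = price_1997 * (1 + avg_annual_inflation) ** years
print(round(price_2025))       # roughly 2,000
```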

about the writer

Evan Ramstad

Columnist

Evan Ramstad is a Star Tribune business columnist.

