
Expert opinion: DeepSeek – a threat to Silicon Valley or the dawn of a new era?


Born in 1835 in Liverpool, William Stanley Jevons was a political economist, mathematician and logician.

Jevons had a fantastic ability to think through the principles of reasoning and to apply this reasoning process to a wide range of applications. He manifested this by creating the “logic piano”, a very early mechanical computer that could deduce new facts from existing ones. It demonstrated that machines could reason, and that logical deduction could be reduced to a mechanical process.

But why is this relevant now in 2025? Professor David Reid discusses the impact of DeepSeek, the AI tool that has the world talking.

A few days ago, after an estimated $1 trillion was wiped off AI technology stocks ($600 billion from Nvidia alone), financial investors started quoting “Jevons’ Paradox” after Microsoft CEO Satya Nadella mentioned it when discussing the potential impact of a new open-source AI program from Chinese startup DeepSeek.

Jevons’ Paradox says that as a technology becomes more efficient, the cost of using it declines. This lowers the barrier to entry and therefore increases (not decreases) its overall use.
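
To make the arithmetic of the paradox concrete, here is a minimal sketch with hypothetical numbers (not market data), assuming a constant-elasticity demand curve. When demand is elastic (elasticity greater than 1), a tenfold fall in the price of compute raises usage so much that total spending on it goes up, not down.

```python
# Illustrative sketch of Jevons' Paradox. The demand curve, elasticity and
# prices below are placeholder assumptions chosen only to show the mechanism.

def usage(price, elasticity=1.5, scale=1000.0):
    """Constant-elasticity demand: usage rises as price falls."""
    return scale * price ** (-elasticity)

for price in [1.00, 0.10]:  # compute becomes 10x cheaper per unit
    u = usage(price)
    print(f"price per unit: ${price:.2f}  "
          f"usage: {u:,.0f} units  "
          f"total spend: ${price * u:,.0f}")
```

Running this, the cheaper scenario shows roughly 32 times the usage and roughly 3 times the total spend: falling costs grow the market.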

Falling AI computing costs and rising adoption will accelerate AI’s growth; they will not, as many have been reporting this week, signal its demise or a slowdown from its recent dramatic trajectory.

But how did the financial markets fail to predict this shock? Is this the “Sputnik” moment for Silicon Valley?

I believe that the market was blindsided by this “disruptive” technology, DeepSeek V3 and R1, because of a basic lack of understanding, or appreciation, of the technology itself.

This can be traced back to poor technological education, which in turn presents a significant barrier to innovation and economic growth.

The numbers are sobering.

The number of students entering STEM doctoral programs in China increased by nearly 40% between 2016 and 2019, and China is projected to produce 77,000 STEM PhDs by the end of this year, compared with 40,000 in the US. The UK currently produces around 18,000 STEM PhDs. Of the roughly 2.5 million students in UK higher education at any one time, only 80,000 are studying STEM subjects. By comparison, China produces 3.57 million STEM graduates per year.

So, what is DeepSeek, and why is it being labelled “disruptive”?

Innovation can be called disruptive, but it can also be incremental, architectural or radical.

At its core, disruptive innovation fundamentally challenges traditional ways of doing things whereas incremental innovation involves tweaking what you already do.

Radical innovation is when new products or services are created that open up new markets.

However, I would argue that DeepSeek represents architectural innovation. This occurs when existing technology is used in a novel way to create something new.

DeepSeek V3’s success was notable because it was created incredibly cheaply, at a tenth of the cost of equivalents from Silicon Valley. The team still used Nvidia chips (albeit cut-down ones, because of US technology export restrictions), but used them in a far more efficient way.

Rather than programming them in the “traditional” way (using CUDA), they used a much lower-level language called PTX (Parallel Thread Execution). Although much harder to programme, this allowed roughly ten times the efficiency from more modest hardware.

Similarly, DeepSeek R1 (built on top of the V3 LLM) innovatively combined three well-known AI algorithms:

  • Reinforcement learning. This learns the way we do: by trying something out and seeing whether it works.
  • Chain of Thought (CoT) reasoning. This divides a problem into subproblems and attacks each one step at a time.
  • Model distillation. This is at the core of DeepSeek R1, a Mixture of Experts (MoE) model that uses a very large “teacher” model to train a much smaller “student” model in specific subject areas, without human intervention (a minimal sketch follows this list).
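
To make the distillation idea concrete, here is a minimal sketch in PyTorch. This is not DeepSeek’s code: the toy teacher and student networks, the random stand-in data and the hyperparameters are all placeholder assumptions; the point is only the mechanism, a small student trained to match a large teacher’s output distribution with no human labels.

```python
# Minimal model-distillation sketch (toy models and random data, for illustration).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A large "teacher" (frozen) and a much smaller "student".
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(200):
    x = torch.randn(64, 32)              # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)      # teacher supplies "soft" targets
    student_logits = student(x)
    # The student minimises the KL divergence to the teacher's distribution;
    # no human-written labels are involved at any point.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print(f"final distillation loss: {loss.item():.4f}")
```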

The result is that R1 is the first open-source test-time compute (TTC), or “reasoning”/“thinking”, model.
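
To illustrate what “test-time compute” means in practice, here is a toy sketch of one common recipe, self-consistency: spend extra work at inference time by sampling several reasoning chains and taking a majority vote. The sample_chain function is a hypothetical stand-in for an LLM generating one chain of thought; real reasoning models are considerably more sophisticated.

```python
# Toy sketch of test-time compute via self-consistency (majority voting).
import random
from collections import Counter

random.seed(42)

def sample_chain(question: str) -> str:
    """Stand-in for one sampled chain of thought ending in a final answer."""
    # Assume a single chain is right 60% of the time, wrong otherwise.
    return "17" if random.random() < 0.6 else random.choice(["15", "16", "18"])

def answer_with_ttc(question: str, n_chains: int = 25) -> str:
    """More chains = more compute spent at test time = more reliable answer."""
    votes = Counter(sample_chain(question) for _ in range(n_chains))
    return votes.most_common(1)[0][0]  # majority vote across chains

print(answer_with_ttc("What is 8 + 9?"))
```

Even though each individual chain is only modestly reliable, the vote across many chains is right far more often, which is the basic trade the TTC approach makes: more compute at inference time for better answers.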

It outperforms models from major competitors such as Anthropic, Google DeepMind and Meta, excelling in benchmarks including GPQA (graduate-level science and math questions), AIME (advanced math) and Codeforces (coding).

The shock to many in the industry is that, by combining well-known techniques, an AI with fundamentally new capabilities has been built incredibly cheaply.

This is not a bad thing for AI; quite the opposite. It shows that innovative thinking goes a long way, and that understanding the principles of reasoning, as Jevons pointed out, gives both people and machines extra faculties.

Jevons’ Paradox also suggests that this is no bad thing for Silicon Valley or Nvidia in the long run.

In the future, more companies will be able to construct new types of AI, and they will want Nvidia’s hardware to run them on.

Just as Jevons innovated with the logic piano, today's innovations in AI demonstrate similar principles of combining existing technologies in novel ways.

This may be the dawn of a new generation of novel (and cheap) AIs.


Published on 31/01/2025