When ChatGPT was released to the public last November, it sent a jolt well beyond the technology industry.
From helping with speeches to computer coding and cooking, artificial intelligence (AI) suddenly seemed real and useful.
But all that would not be possible without some very powerful computer hardware.
And one hardware company in particular has become central to the AI bonanza – California-based Nvidia.
Originally known for making the type of computer chips that process graphics, particularly for computer games, Nvidia now makes the hardware that underpins most AI applications.
“It is the leading technology player enabling this new thing called artificial intelligence,” says Alan Priestley, a semiconductor industry analyst at Gartner.
“What Nvidia is to AI is almost like what Intel was to PCs,” adds Dan Hutcheson, an analyst at TechInsights.
ChatGPT was trained using 10,000 of Nvidia’s graphics processing units (GPUs) clustered together in a supercomputer belonging to Microsoft.
“It is one of many supercomputers – some known publicly, some not – that have been built with Nvidia GPUs for a variety of scientific as well as AI use cases,” says Ian Buck, general manager and vice president of accelerated computing at Nvidia.
Nvidia has about 95% of the GPU market for machine learning, noted a recent report from CB Insights.
Figures show its AI business generated around $15bn (£12bn) in revenue last year, up about 40% from the previous year and overtaking gaming as its largest source of income.
Nvidia shares soared almost 30% after it released first quarter results late on Wednesday. The company said it was raising production of its chips to meet “surging demand”.
Its AI chips, which it also sells in systems designed for data centres, cost roughly $10,000 (£8,000) each, though its latest and most powerful version sells for far more.
So how did Nvidia become such a central player in the AI revolution?
In short, a bold bet on its own technology plus some good timing.
Jensen Huang, now the chief executive of Nvidia, was one of its founders back in 1993. At the time, the company was focused on improving graphics for gaming and other applications.
In 1999 it developed GPUs to enhance image display for computers.
GPUs excel at processing many small tasks simultaneously (for example handling millions of pixels on a screen) – a procedure known as parallel processing.
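For the technically minded, a minimal sketch can make that idea concrete. The code below is purely illustrative, not taken from any Nvidia product: each GPU thread brightens a single pixel value, so around a million pixels are processed at once rather than one after another.

```cuda
#include <cstdio>

// Illustrative sketch only: each GPU thread brightens one pixel value,
// so roughly a million pixels are processed simultaneously.
__global__ void brighten(unsigned char* pixels, int n, int amount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // index of this thread's pixel
    if (i < n) {
        int v = pixels[i] + amount;
        pixels[i] = v > 255 ? 255 : v;              // clamp to the 8-bit range
    }
}

int main() {
    const int n = 1 << 20;                     // ~1 million pixel values
    unsigned char* pixels;
    cudaMallocManaged(&pixels, n);             // memory visible to both CPU and GPU
    for (int i = 0; i < n; ++i) pixels[i] = 100;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover all pixels
    brighten<<<blocks, threads>>>(pixels, n, 50);
    cudaDeviceSynchronize();                   // wait for every GPU thread to finish

    printf("first pixel after brightening: %d\n", pixels[0]);  // prints 150
    cudaFree(pixels);
    return 0;
}
```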
In 2006, researchers at Stanford University discovered GPUs had another use – they could accelerate maths operations, in a way that regular processing chips could not.
It was at that moment that Mr Huang took a decision crucial to the development of AI as we know it.
He invested Nvidia’s resources in creating a tool to make GPUs programmable, thereby opening up their parallel processing capabilities for uses beyond graphics.
That tool was added to Nvidia's computer chips. For computer game players it was a capability they didn't need, and probably weren't even aware of, but for researchers it was a new way of doing high-performance computing on consumer hardware.
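That tool was Nvidia's CUDA platform, first released in 2007. The sketch below, again purely illustrative and assuming a machine with an Nvidia GPU and the CUDA toolkit installed, shows the sort of general-purpose maths it opened up: computing y = a*x + y across a million numbers, with one GPU thread per number instead of a loop.

```cuda
#include <cstdio>

// Illustrative sketch of a non-graphics GPU workload: compute
// y = a*x + y ("saxpy") with one GPU thread per array element.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                          // ~1 million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));       // memory shared by CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y); // launch ~1 million threads
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);   // 3*1 + 2 = 5
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```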
It was that capability that helped spark early breakthroughs in modern AI.
In 2012 AlexNet was unveiled – an AI that could classify images. AlexNet was trained using just two of Nvidia's programmable GPUs.
The training process took only a few days, rather than the months it could have taken on a much larger number of regular processing chips.
The discovery – that GPUs could massively accelerate neural network processing – began to spread among computer scientists, who started buying them to run this new type of workload.
“AI found us,” says Mr Buck.
Nvidia pressed its advantage by developing new kinds of GPUs better suited to AI, along with software that made the technology easier to use.
A decade, and billions of dollars later, ChatGPT emerged – an AI that can give eerily human responses to questions.
AI start-up Metaphysic uses AI techniques to create photorealistic videos of celebrities and others. Its Tom Cruise deepfakes created a stir in 2021.
To train and then run its models, it uses hundreds of Nvidia GPUs, some purchased from Nvidia and others accessed through a cloud computing service.
“There are no alternatives to Nvidia for doing what we do,” says Tom Graham, its co-founder and chief executive. “It is so far ahead of the curve.”
Yet while Nvidia’s dominance looks assured for now, the longer term is harder to predict. “Nvidia is the one with the target on its back that everybody is trying to take down,” notes Kevin Krewell, another industry analyst at TIRIAS Research.
Other big semiconductor companies provide some competition. AMD and Intel are both better known for making central processing units (CPUs), but they also make dedicated GPUs for AI applications (Intel only recently joined the fray).
Google has its tensor processing units (TPUs), used not only for search results but also for certain machine-learning tasks, while Amazon has a custom-built chip for training AI models.
Microsoft is also reportedly developing an AI chip, and Meta has its own AI chip project.