Nvidia AI Chips Surpass Moore’s Law
Paul Grieselhuber
Nvidia’s CEO Jensen Huang has made a bold proclamation: the performance of Nvidia’s AI chips is advancing at a pace that surpasses the historical benchmark set by Moore’s Law. The statement, made during an interview with TechCrunch at CES 2025, places Nvidia at the center of what Huang refers to as a new “hyper Moore’s Law” era, where progress in AI hardware far outstrips the traditional pace of computing innovation.
Moore’s Law, named for a 1965 observation by Intel co-founder Gordon Moore, predicted that the number of transistors on a chip would double roughly every two years, effectively doubling performance. This held true for decades, driving exponential gains in computing power at declining costs. But in recent years, Moore’s Law has slowed, leaving many to wonder whether the pace of innovation is waning. According to Huang, however, Nvidia’s advancements in AI chips are accelerating on a different trajectory altogether.
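For a rough sense of what that cadence implies, the doubling rule can be written as a simple exponential (a simplified sketch using the commonly cited two-year period):

$$N(t) \approx N_0 \cdot 2^{t/2}$$

where $N_0$ is today’s transistor count and $t$ is measured in years. A decade of Moore’s Law therefore works out to about $2^{5} = 32\times$ growth, a useful baseline for the comparisons that follow.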
Huang argues that the company’s unique ability to innovate across the entire technology stack — from chip architecture to algorithms — allows Nvidia to surpass traditional hardware scaling limits. During his keynote at CES, he showcased Nvidia’s latest data center superchip, the GB200 NVL72, claiming it is 30 to 40 times faster at running AI inference workloads than its predecessor, the H100.
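To put that generational jump in Moore’s Law terms, a quick back-of-envelope calculation (taking the 30 to 40 times figure at face value, and noting that it covers architecture and software gains, not just transistor density) looks like this:

$$\log_2 30 \approx 4.9, \qquad \log_2 40 \approx 5.3$$

In other words, a 30 to 40 times improvement in a single product generation amounts to roughly five doublings, where a classic Moore’s Law cycle of about two years would deliver only one.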
This focus on inference is key. The AI industry is shifting from training massive language models to optimizing how they perform during real-world use, known as inference. Running AI models efficiently at scale is expensive, and many have questioned whether the cost of inference will stifle adoption. OpenAI’s o3 model, which employs a computationally heavy “test-time compute” process during inference, has raised concerns about accessibility due to high operational costs.
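As an illustration of why test-time compute gets expensive, here is a minimal, hypothetical sketch of one common pattern, best-of-n sampling, in which the model generates several candidate answers and a scorer keeps the strongest. The `toy_model` and `toy_scorer` functions are stand-ins for real model calls, not OpenAI’s actual o3 procedure; the point is simply that serving cost grows roughly linearly with the number of samples.

```python
import random

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM call that returns one candidate answer."""
    return f"candidate answer #{random.randint(0, 9)}"

def toy_scorer(prompt: str, answer: str) -> float:
    """Stand-in for a verifier or reward model that rates an answer."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend extra inference compute: sample n answers, keep the best.

    Each extra sample is another full model call, so cost scales
    roughly linearly with n -- the core reason heavy test-time
    compute is expensive to serve at scale.
    """
    candidates = [toy_model(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: toy_scorer(prompt, answer))

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```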
Huang, however, believes Nvidia’s chips will solve that problem. “Our systems are progressing way faster than Moore’s Law,” he said. “The direct and immediate solution for test-time compute, both in performance and cost affordability, is to increase our computing capability.” In his view, faster chips will drive down the cost of inference over time, making advanced AI models more accessible and scalable.
This narrative aligns with Nvidia’s broader business strategy. As companies like OpenAI, Google, and Anthropic rely heavily on Nvidia chips to power their models, Nvidia’s dominance in AI hardware has made it one of the most valuable companies in the world. Huang’s message is clear: Nvidia is positioned not just to keep pace with AI advancements, but to lead them.
Despite the optimism, some skepticism remains. AI models that rely heavily on test-time compute are currently expensive to operate. For example, OpenAI reportedly spent nearly $20 per task using o3 to achieve human-level scores on the ARC-AGI general intelligence benchmark, a stark contrast to the $20 monthly subscription for ChatGPT Plus. However, Huang insists that performance breakthroughs like those seen in Nvidia’s latest chips will make these models more affordable over time.
This isn’t the first time Huang has suggested that Nvidia is surpassing Moore’s Law. In a podcast last November, he described the rapid advancements in AI hardware as a form of “hyper Moore’s Law.” At CES, he doubled down on that message, highlighting the three AI scaling laws he believes will shape the future of AI development: pre-training (where models learn patterns from vast datasets), post-training (fine-tuning models for specific tasks), and test-time compute (giving a model more time and compute to reason through each problem during inference).
For Nvidia, the focus on test-time compute isn’t just about making AI more efficient — it’s about fundamentally reshaping the economics of AI. “The same thing that Moore’s Law did for computing costs will happen with inference,” Huang said. The underlying message is that as AI chips improve, running complex models like o3 will become increasingly affordable, unlocking new possibilities for AI adoption in both consumer and enterprise applications.
Looking back, Huang noted that Nvidia’s AI chips today are 1,000 times more powerful than they were a decade ago. It’s a staggering claim that speaks to the pace of change in AI hardware. Nvidia’s challenge now is to ensure that its innovations don’t just remain in the realm of high-end tech companies but become accessible to businesses and individuals alike.
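The arithmetic behind that claim is worth spelling out (taking the 1,000 times figure at face value):

$$\text{doubling time} \approx \frac{10\ \text{years}}{\log_2 1000} \approx \frac{10}{9.97} \approx 1\ \text{year}$$

That is roughly one doubling per year, against the two-year cadence Moore’s Law describes, which is the basis for Huang’s “hyper Moore’s Law” framing.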
At Rendr, this evolution of AI hardware strikes a chord. Over the past year, we’ve seen firsthand how advancements in AI tools and infrastructure have transformed the way we build software. What began as skepticism toward AI’s potential in development quickly shifted as we integrated AI into our workflow. The results speak for themselves: faster MVPs, more comprehensive solutions, and, most importantly, better outcomes for our clients.
What Huang describes as “hyper Moore’s Law” is something we’ve experienced directly. The tools available today, powered by chips like Nvidia’s, allow non-engineer team members to build complex solutions with AI, with developers acting more as coaches than traditional coders. This shift toward AI-enhanced development is only accelerating.
The promise of Nvidia’s latest breakthroughs — reducing the cost of inference and increasing performance — suggests that the barriers to AI adoption will continue to fall. If Huang’s vision holds true, we’re looking at a future where AI-driven solutions become more ubiquitous and accessible than ever before. It’s a future we’re actively preparing for at Rendr, where AI is no longer just a tool, but a core part of how we build, create, and innovate.
References
- Maxwell Zeff (2025). Nvidia CEO says his AI chips are improving faster than Moore’s Law. TechCrunch. Available online. Accessed 12 January 2025.
- Isaak Kamau (2024). Test-Time Compute Scaling: How to make an LLM “think longer” on harder problems like OpenAI’s o1 model. Medium. Available online. Accessed 12 January 2025.
- Kif Leswing (2024). Nvidia nearly doubles revenue on strong AI demand. CNBC. Available online. Accessed 12 January 2025.