Nvidia CEO says his AI chips are improving faster than Moore’s Law

Nvidia’s AI Chips Leapfrogging Moore’s Law

Nvidia CEO Jensen Huang says his company’s AI chips are improving faster than Moore’s Law, the benchmark that set the pace of computing progress for decades.

Crowning a New Law

Moore’s Law, formulated by Intel co-founder Gordon Moore in 1965, predicted that the number of transistors on computer chips would roughly double every two years, effectively doubling performance. The principle drove computing advances for decades, but its pace has slowed in recent years. By contrast, Huang claims Nvidia’s AI chips are advancing faster, with the company’s latest data center superchip running AI workloads more than 30x faster than its predecessor.

The Complete Innovation Approach

The accelerating performance of Nvidia’s AI chips, which Huang describes as “hyper Moore’s Law,” comes from innovating across the entire stack at once: the architecture, the chip, the system, the libraries, and the algorithms.

Bearing Impact on AI Model Capabilities

Leading AI labs such as Google and OpenAI train and run their AI models on Nvidia’s chips, so faster chips could translate into faster progress for AI overall.

Three New AI Scaling Laws

Huang pushes back on claims that AI progress has stalled, pointing to three active AI scaling laws: pre-training, in which models learn patterns from large amounts of data; post-training, in which a model’s answers are fine-tuned; and test-time compute, which gives a model more time to “think” during the inference phase.

Driving Down Costs with Inference

Huang likens the role of these new AI chips to the role Moore’s Law played in driving down computing costs: as performance rises, running inference on AI models becomes cheaper.

The Superiority of Nvidia’s AI Chips

Nvidia’s AI chips were initially best suited to training AI models, but they have proved efficient at inference as well. As tech companies shift toward inference, that efficiency strengthens the case for continuing to use them despite their high upfront cost.

The Impact on AI Reasoning Models

The latest Nvidia superchip, the GB200 NVL72, is up to 40x faster than the previous generation for AI inference tasks. According to Huang, this boost in performance will gradually lower costs for AI reasoning models like OpenAI’s o3.

Exponential Development Progress

Huang asserts that Nvidia’s present AI chips are 1,000x more capable than those the company made a decade ago, far outpacing what Moore’s Law would predict, and he sees no sign of the trend slowing.
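A quick back-of-the-envelope calculation shows why that figure outpaces Moore's Law: doubling every two years compounds to only 32x over a decade, well short of the claimed 1,000x. A minimal sketch of the arithmetic:

```python
# Moore's Law baseline: performance doubles every two years,
# so a decade holds 10 / 2 = 5 doublings.
moores_law_gain = 2 ** (10 // 2)  # 2^5 = 32x over ten years

# Huang's claimed improvement for Nvidia's AI chips over the same decade.
claimed_gain = 1000

print(moores_law_gain)                  # 32
print(claimed_gain / moores_law_gain)   # 31.25, i.e. ~31x beyond the Moore's Law pace
```

The exact multiple depends on which chips and workloads are compared, but the gap between a 32x baseline and a 1,000x claim is the substance of Huang's "hyper Moore's Law" argument.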

Original source: read the full article on TechCrunch.