
At its annual GPU Technology Conference, Nvidia announced the Blackwell Ultra GPU Architecture to handle AI Reasoning, Inference & more


Today, Nvidia announced new chips for building and deploying artificial intelligence models at its annual GTC conference. CEO Jensen Huang revealed Blackwell Ultra, a family of chips shipping in the second half of this year. Huang also previewed the company’s next-generation graphics processing unit, or GPU, which is expected to ship in 2026. It’s named after Vera Rubin, the American astronomer who pioneered the concept of dark matter.

Nvidia’s sales are up more than sixfold since its business was transformed by the release of OpenAI’s ChatGPT in late 2022. That’s because its “big GPUs” hold most of the market for developing advanced AI, a process called training.

Software developers and investors are closely watching the company’s new chips to see if they offer enough additional performance and efficiency to convince the company’s biggest end customers — cloud companies including Microsoft, Google and Amazon — to continue spending billions of dollars to build data centers based around Nvidia chips.

“This last year is where almost the entire world got involved. The computational requirement, the scaling law of AI, is more resilient, and in fact, is hyper-accelerated,” Huang said.

Blackwell Ultra

Nvidia also announced new versions of its Blackwell family of chips that it calls Blackwell Ultra.

That chip will be able to produce more tokens per second, which means it can generate more content in the same amount of time as its predecessor, the company said in a briefing.

Nvidia says that means cloud providers can use Blackwell Ultra to offer a premium AI service for time-sensitive applications, allowing them to make as much as 50 times the revenue from the new chips as they did from the Hopper generation, which shipped in 2023.

Blackwell Ultra will come in a version with two GPUs paired to an Nvidia Arm CPU, called GB300, and a version with just the GPU, called B300. It will also come in versions with eight GPUs in a single server blade and a rack version with 72 Blackwell chips.

The top four cloud companies have deployed three times the number of Blackwell chips as Hopper chips, Nvidia said.

Vera Rubin

Nvidia expects to start shipping systems built on its next-generation GPU family in the second half of 2026.

The system has two main components: a CPU, called Vera, and a new GPU design, called Rubin.

Vera is Nvidia’s first custom CPU design, the company said, and it’s based on a core design it has named Olympus.

The custom Vera design will be twice as fast as the CPU used in last year’s Grace Blackwell chips, the company said. 

When paired with Vera, Rubin can manage 50 petaflops while doing inference, more than double the 20 petaflops for the company’s current Blackwell chips. Rubin can also support as much as 288 gigabytes of fast memory, which is one of the core specs that AI developers watch.

The Blackwell GPU, which is currently on the market, is actually two separate chips that were assembled together and made to work as one chip.

Starting with Rubin, Nvidia says that when it combines two or more dies to make a single chip, it will refer to them as separate GPUs. In the second half of 2027, Nvidia plans to release a “Rubin Next” chip that combines four dies to make a single chip, doubling the speed of Rubin, and the company will count it as four GPUs.

Nvidia said the Vera Rubin system will come in a rack called Vera Rubin NVL144. Previous versions of Nvidia’s rack were called NVL72. For more, read the full CNBC report.
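To make the naming change concrete, here is a minimal back-of-the-envelope sketch in Python based on the figures in this article, assuming the Rubin rack keeps the same 72-package layout as today’s Blackwell NVL72 and that, as noted above, each package pairs two dies; the variable names are illustrative, not Nvidia terminology.

  packages_per_rack = 72    # a Blackwell NVL72 rack holds 72 dual-die Blackwell chips
  dies_per_package = 2      # Blackwell (and, per the article, Rubin) pairs two dies in one package

  # Blackwell-era convention: each package counts as one GPU
  gpus_old_count = packages_per_rack                       # 72, hence "NVL72"

  # Rubin-era convention: each die counts as one GPU
  gpus_new_count = packages_per_rack * dies_per_package    # 144, hence "NVL144"

  print(gpus_old_count, gpus_new_count)                    # prints: 72 144

Under the same convention, the four-die “Rubin Next” chip planned for the second half of 2027 would be counted as four GPUs per package.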

New Nvidia Press Releases from the GTC Conference

  • 01: NVIDIA, Alphabet and Google Collaborate on the Future of Agentic and Physical AI
  • 02: NVIDIA Blackwell Ultra DGX SuperPOD Delivers Out-of-the-Box AI Supercomputer for Enterprises to Build AI Factories
  • 03: NVIDIA Announces DGX Spark and DGX Station Personal AI Computers
  • 04: NVIDIA Blackwell Ultra AI Factory Platform Paves Way for Age of AI Reasoning
  • 05: Oracle and NVIDIA to Deliver Sovereign AI Worldwide
  • 06: NVIDIA Blackwell Platform Arrives to Power a New Era of Computing
  • 07: General Motors and NVIDIA Collaborate on AI for Next-Generation Vehicle Experience and Manufacturing
  • 08: NVIDIA and Telecom Industry Leaders to Develop AI-Native Wireless Networks for 6G
  • 09: Revolutionizing Networking for the Era of Agentic AI
  • 10: NVIDIA Announces Spectrum-X Photonics, Co-Packaged Optics Networking Switches to Scale AI Factories to Millions of GPUs

 

NVIDIA Silicon Photonics

The silicon photonics press release notes “Joint Inventions and Collaborations With TSMC, Coherent, Corning Incorporated, Foxconn, Lumentum and SENKO to Create Integrated Silicon, Optics Process and Supply Chain.”

It further included comments from TSMC: “A new wave of AI factories requires efficiency and minimal maintenance to achieve the scale required for next-generation workloads,” said C. C. Wei, chairman and CEO of TSMC. “TSMC’s silicon photonics solution combines our strengths in both cutting-edge chip manufacturing and TSMC-SoIC 3D chip stacking to help NVIDIA unlock an AI factory’s ability to scale to a million GPUs and beyond, pushing the boundaries of AI.” The release added that NVIDIA photonics will drive massive growth for a new wave of state-of-the-art AI factories, alongside pluggable optical transceiver technologies supported by industry leaders including Coherent, Eoptolink, Fabrinet and Innolight.

NVIDIA has also released a series of videos on topics discussed at the GTC conference that you can review here.

Disclosure: The owner of Patently Apple holds Nvidia stock.

10.0F3bb - Patently AI