Nvidia Mum on 7-nm GPU
Published: 2019-03-25

SAN JOSE, Calif. — Nvidia’s annual graphics event attracted some 8,000 attendees here, but one expected guest couldn’t make it — a 7-nm GPU.

A nearly three-hour keynote featured new systems and software for the company’s latest processors, announced last August. Ironically, the most interesting news nuggets were Nvidia’s cheapest board to date and a research project on optical interconnects.

“The length of the event was inversely proportional to the content,” quipped one analyst.

The unspoken message for the pack of rivals aiming to build deep-learning accelerators was clear: Nvidia doesn't need to pre-announce a new, faster chip because it owns the AI software stack and sales channel today.

Indeed, one data center manager said that only one rival, the startup unicorn Graphcore, is sampling a working chip for AI training. But the startup still faces significant work adapting its chip to a software stack that has been running on Nvidia GPUs for several years.

Accentuating the point, Nvidia packaged its many libraries under one new umbrella — CUDA-X, with versions for graphics, AI, and more. It also described new use cases for its chips — a cross-tool environment for offline rendering called Omniverse and an expansion of its GeForce Now online gaming service.

Nvidia packaged 40 of its latest graphics chips into an 8U RTX graphics server. For demanding data centers, it ganged 32 of those servers into a pod spanning 10 racks, 1,280 GPUs in all, linked via InfiniBand.

“Data center graphics need a whole new architecture,” said chief executive Jensen Huang, noting that the company is working up more use cases for them.


A GPU desktop configured for a data scientist. (Source: Nvidia)

To bolster its claim on the hearts and minds of AI developers, Nvidia configured a workstation specifically for data scientists. The system uses two Quadro RTX 8000 GPUs with a combined 96 GB of frame-buffer memory and preinstalled deep-learning software.

“Data science is the new challenge in high-performance computing,” said Huang, adding that last year, Nvidia trained 100,000 of the 3 million data scientists working today.

Meanwhile, Nvidia enlisted top server makers, including Cisco, Dell, HPE, Inspur, Lenovo, and Sugon, to build and sell T4 servers. The systems are scaled-back versions of Nvidia’s DGX-2, aimed at mainstream business users kicking the tires of deep learning and data analytics. They use up to four Turing-class T4 GPUs and 64 GB of GDDR6 memory.

In the cloud, Amazon joined Google and Baidu in announcing plans for a service based on Turing chips. Alibaba is expected to follow suit.
