While the rest of the computing industry struggles to reach one exaflop, Nvidia is about to blow past everyone with an 18-exaflop supercomputer powered by a new GPU architecture. The H100 GPU has 80 billion transistors (the previous generation, Ampere, had 54 billion), nearly 5TB/s of external connectivity, and support for PCIe Gen5 and High Bandwidth Memory 3 (HBM3), which enables 3TB/s of memory bandwidth, the company says. It is the first in a new family of GPUs codenamed “Hopper,” after Rear Admiral Grace Hopper, the computing pioneer whose work led to COBOL and who popularized the term “computer bug.” It is due in the third quarter. The GPU is meant to power data centers designed to handle heavy AI workloads, and Nvidia claims that 20 of them could sustain the equivalent of the entire world’s Internet traffic.
TALLAHASSEE, FL – Advanced Manufacturing International (AMI) has been awarded a $2M grant