900-21010-000-000 Nvidia H100 Tensor Core PCIe GPU

The 900-21010-000-000 is an NVIDIA H100 PCIe 80 GB GPU. The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by up to 30X over the previous generation. It is leading enterprise AI hardware from NVIDIA.
Out of stock
SKU 900-21010-000-000

Details

Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA® H100 Tensor Core GPU. With the NVIDIA NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to solve trillion-parameter language models. The H100’s combined technology innovations can speed up large language models (LLMs) by an incredible 30X over the previous generation to deliver industry-leading conversational AI.

PRODUCT SPECIFICATION:
FP64: 26 teraFLOPS
FP64 Tensor Core: 51 teraFLOPS
FP32: 51 teraFLOPS
TF32 Tensor Core: 756 teraFLOPS (with sparsity)
BFLOAT16 Tensor Core: 1,513 teraFLOPS (with sparsity)
FP16 Tensor Core: 1,513 teraFLOPS (with sparsity)
FP8 Tensor Core: 3,026 teraFLOPS (with sparsity)
INT8 Tensor Core: 3,026 TOPS (with sparsity)
GPU memory: 80GB
GPU memory bandwidth: 2TB/s
Decoders: 7 NVDEC, 7 JPEG
Max thermal design power (TDP): 300-350W (configurable)
Multi-Instance GPUs: Up to 7 MIGs @ 10GB each
Form factor: PCIe dual-slot air-cooled
Interconnect: NVLink: 600GB/s, PCIe Gen5: 128GB/s
Server options: Partner and NVIDIA-Certified Systems with 1–8 GPUs
NVIDIA AI Enterprise: included
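
For readers who want to confirm the listed hardware characteristics from software, the following is a minimal sketch using the CUDA runtime API to query device properties. It is not part of the product listing; the file name and build command (`nvcc query_h100.cu -o query_h100`) are assumptions, and it requires a host with the CUDA toolkit and driver installed.

```cpp
// Minimal sketch: query GPU properties with the CUDA runtime API.
// Assumes a CUDA toolkit is installed; compile with e.g. `nvcc query_h100.cu -o query_h100`.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // totalGlobalMem is reported in bytes; an H100 PCIe should show roughly 80 GB.
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Global memory:      %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("  Memory bus width:   %d bits\n", prop.memoryBusWidth);
        printf("  SM count:           %d\n", prop.multiProcessorCount);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
    }
    return 0;
}
```

On an H100 PCIe the output should report roughly 80 GB of global memory and compute capability 9.0 (Hopper); exact figures depend on driver and toolkit versions.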

More Information

Channel Product SKU: 900-21010-000-000
