BIOS ANNA A100
Featuring the latest generation NVIDIA A100™ Tensor Core GPUs
BIOS IT's latest iteration of its Artificial Neural Network Accelerator (ANNA) series is built on Supermicro hardware and features the latest NVIDIA® A100™ Tensor Core GPUs.
The ANNA A100 is designed for the most demanding AI workloads and is optimised for the new HGX™ A100 4-GPU baseboard. With the newest version of NVIDIA® NVLink™ and NVIDIA NVSwitch™ technologies, these servers can deliver up to 5 PetaFLOPS of AI performance in a single system. The system supports PCI-E Gen 4 for fast CPU-GPU connection and high-speed networking expansion cards.
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into seven GPU instances to accelerate workloads of all sizes. And third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.
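As a sketch of how the Multi-Instance GPU (MIG) partitioning described above is typically configured, the following `nvidia-smi` commands show splitting one A100 into seven instances. Note this is an illustrative example, not part of the ANNA A100 documentation; exact profile IDs (19 corresponds to the 1g.5gb profile on the 40GB A100) vary by GPU model and driver version:

```shell
# Enable MIG mode on GPU 0 (requires root; GPU must be idle, may need a reset)
sudo nvidia-smi -i 0 -mig 1

# List the MIG GPU instance profiles this GPU and driver support
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances and their compute instances
# (profile ID 19 assumed here for the A100 40GB; check -lgip output)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify: each MIG device now appears with its own UUID
nvidia-smi -L
```

Each resulting MIG device can then be assigned to a separate workload or container, giving the fine-grained, right-sized allocation described above.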
As a balanced data center platform for HPC and AI applications, the new ANNA A100 system leverages the NVIDIA HGX A100 4 GPU board with four direct-attached NVIDIA A100 Tensor Core GPUs using PCI-E 4.0 for maximum performance and NVIDIA NVLink for high-speed GPU-to-GPU interconnects. This advanced GPU system accelerates compute, networking and storage performance with support for one PCI-E 4.0 x8 and up to four PCI-E 4.0 x16 expansion slots for GPUDirect RDMA high-speed network cards and storage such as InfiniBand HDR, which supports up to 200Gb per second bandwidth.
BIOS ANNA A100 SPECIFICATIONS
|FORM FACTOR||2U Chassis|
|CPU||Dual AMD EPYC™ 7002 Series Processors|
|MEMORY||Up to 8TB Registered ECC DDR4 3200MHz SDRAM across 32 DIMM slots|
|GPU||Supports 4 NVIDIA A100 GPUs|
|STORAGE||4 Hot-swap 2.5” drive bays (SAS/SATA/NVMe Hybrid)|
|EXPANSION SLOTS||4 PCI-E Gen 4 x16 (LP), 1 PCI-E Gen 4 x8 (LP)|
|POWER SUPPLY||2x 2200W Titanium Level redundant power supplies; 4 hot-swap heavy-duty fans|
YOU MAY ALSO BE INTERESTED IN
NVIDIA DGX A100
NVIDIA DGX™ A100 is the universal system for all AI workloads—from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system.
DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads.
Contact us to discuss our solutions
Our experts are waiting to give you a competitive quote. Contact Us