Delivery is made within 14-21 days
The prices are presented for both corporate clients and individuals.
NVIDIA H100 NVL 94GB. NVL variant for dual-GPU inference stacks, 94 GB per GPU. Direct import, 1–3-year warranty, compliant paperwork, fast delivery, any payment method.
NVIDIA H100 94GB NVL Original is a specialised accelerator based on the Hopper architecture, designed for training and inference of the largest language models (LLMs) and generative systems. It combines an increased memory capacity of 94 GB HBM3 with a bandwidth of nearly 4 TB/s, allowing it to process models with hundreds of billions of parameters without losing efficiency.
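The claim about "hundreds of billions of parameters" can be sanity-checked with simple arithmetic. A minimal sketch, assuming FP16/BF16 weights (2 bytes per parameter) and a 20% memory reservation for activations and KV cache (both figures are our illustrative assumptions, not vendor numbers):

```python
# Rough estimate of how many model parameters fit in a given VRAM budget.
# Assumes FP16/BF16 weights (2 bytes/param) and reserves a fraction of
# memory for activations and KV cache; both are illustrative assumptions.
def max_params_billion(vram_gb: float, bytes_per_param: float = 2.0,
                       overhead_fraction: float = 0.2) -> float:
    usable_bytes = vram_gb * 1e9 * (1.0 - overhead_fraction)
    return usable_bytes / bytes_per_param / 1e9

single = max_params_billion(94)        # one H100 NVL card
pair = max_params_billion(2 * 94)      # the typical dual-GPU NVL stack

print(f"~{single:.0f}B params on one card, ~{pair:.0f}B on a pair at FP16")
```

Reaching hundreds of billions of parameters therefore implies either multi-GPU NVLink scaling (the NVL's intended deployment) or lower-precision quantization, which roughly doubles or quadruples these counts.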
The main feature of the NVL version is support for scaling via NVLink, which allows multiple cards to be combined into a single system with high data transfer speeds — up to 600 GB/s between GPUs. This makes the H100 NVL the optimal choice for data centres and infrastructures working with advanced AI tasks.
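To see why the 600 GB/s figure matters, compare transfer times for a large tensor moved between two GPUs over NVLink versus a raw PCIe 5.0 x16 link (~64 GB/s); the 10 GB payload is an arbitrary example, and real throughput is somewhat below either peak:

```python
# Back-of-the-envelope inter-GPU transfer times: the listing's NVLink
# figure (600 GB/s) vs. the raw bandwidth of PCIe 5.0 x16 (~64 GB/s).
# Peak figures; achieved throughput is lower in practice.
def transfer_ms(size_gb: float, bandwidth_gb_s: float) -> float:
    return size_gb / bandwidth_gb_s * 1000.0

size = 10.0  # e.g. a 10 GB slice of activations or KV cache
print(f"NVLink 600 GB/s:  {transfer_ms(size, 600):.1f} ms")
print(f"PCIe 5.0 x16:     {transfer_ms(size, 64):.1f} ms")
```

The roughly 9x gap is what makes tensor-parallel inference across a dual-GPU NVL pair practical: synchronisation between the cards stops being the bottleneck.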
The H100 94GB NVL is in demand wherever maximum computing power and scalability are required: training and inference of large language models, generative AI services, and large-scale data-centre workloads.
While the standard H100 80GB PCIe is designed for general-purpose AI and HPC tasks, the H100 NVL 94GB was created specifically for scalable AI clusters and working with extremely large LLMs. Its increased memory capacity and NVLink make it the preferred solution for those building infrastructures with dozens or hundreds of GPUs.
Essentially, the H100 NVL is the choice for companies working at the forefront of generative AI, where both latency and scalability matter.
Buying H100 94GB NVL Original at OsoDoso-Store means investing in a proven and scalable tool for working with generative AI and models of the highest level.
NVIDIA H100 94GB NVL Original is an accelerator focused on the future of artificial intelligence. With increased memory, NVLink support, and Hopper architecture, it provides the necessary performance headroom for the largest models and data centres, forming the basis for new generations of AI systems.
| Specification | Value |
|---|---|
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA H100 |
| Cache L2 (MB) | 50 |
| Process technology (nm) | 4 |
| Memory type | HBM3 |
| Graphics Processing Unit (Chip) | GH100 (Hopper) |
| Number of CUDA cores | 14592 |
| Number of Tensor cores | 456 |
| GPU Frequency (MHz) | 1665 |
| GPU Boost Frequency (MHz) | 1837 |
| Video memory size (GB) | 94 |
| Memory bus width (bits) | 5120 |
| Memory Bandwidth (GB/s) | 3900 |
| Connection interface (PCIe) | PCIe 5.0 x16 |
| FP16 Tensor performance, with sparsity (TFLOPS) | 1513 |
| TF32 Tensor performance, with sparsity (TFLOPS) | 756 |
| FP64 performance (TFLOPS) | 30 |
| Cooling type | Passive (server module) |
| Number of occupied slots (pcs) | 2 |
| Length (cm) | 26.8 |
| Width (cm) | 11.2 |
| Weight (kg) | 1.8 |
| Temperature range (°C) | 0–85 |
| NVLink Throughput (GB/s) | 600 |
| Multi-GPU support | Yes, via NVLink |
| Virtualization/MIG support | MIG (up to 7 instances) |
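The MIG row above can be made concrete with simple division: splitting the card's 94 GB evenly across up to seven isolated instances. A minimal sketch; actual MIG profiles and per-slice sizes are fixed by the NVIDIA driver, so this even split is only an approximation:

```python
# Illustrative MIG partitioning math: divide total memory evenly across
# N isolated instances. Real MIG slice sizes come from fixed driver
# profiles, so this even split is an approximation for illustration.
def memory_per_instance_gb(total_gb: float, instances: int) -> float:
    if not 1 <= instances <= 7:
        raise ValueError("MIG on this card supports 1-7 instances")
    return total_gb / instances

print(f"{memory_per_instance_gb(94, 7):.1f} GB per instance at a full 7-way split")
```

Each slice gets its own compute, memory, and cache partition, so one physical card can serve up to seven independent inference tenants.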