NVIDIA H100 and L40S GPU instances accelerate your application workloads across a wide variety of artificial intelligence and high-performance computing tasks.
Benefits of GPUs
Speeding up processing
Innovating with AI and Machine Learning
To boost the productivity of your data scientists and roll out new AI services more quickly, you need to train increasingly complex models in less time. Our NVIDIA H100 and L40S GPUs reduce deep learning training times to just a few hours.
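As an illustration of the kind of workload these instances target, here is a minimal sketch of a mixed-precision training step, assuming PyTorch is installed and a GPU such as an H100 or L40S is visible to the instance; the model, batch and hyperparameters are placeholders, not part of this offer.

```python
import torch
import torch.nn as nn

device = torch.device("cuda")  # assumes an H100 or L40S is visible to this instance

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # keeps FP16 gradients numerically stable

# Placeholder batch; in practice this would come from your own DataLoader.
inputs = torch.randn(256, 1024, device=device)
targets = torch.randint(0, 10, (256,), device=device)

for step in range(100):
    optimizer.zero_grad(set_to_none=True)
    # Autocast runs the forward pass in FP16, engaging the GPU's Tensor Cores.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()  # scaled backward pass in mixed precision
    scaler.step(optimizer)
    scaler.update()
```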
Improving efficiency
Fully compatible with Kubernetes, container platforms and virtual machines, GPU technology simplifies access to computing resources for all users, whatever the type of workload.
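As a sketch of what that looks like in practice, the snippet below uses the official Kubernetes Python client to schedule a pod that requests one GPU through the `nvidia.com/gpu` extended resource (this assumes the NVIDIA device plugin is deployed in the cluster; the pod name, image and entrypoint are illustrative placeholders).

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-job"),           # placeholder name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.05-py3",    # example container image
                command=["python", "train.py"],              # placeholder entrypoint
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},          # request one GPU for this container
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```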
Benefit from an agile solution
Adjust the number of H100 GPU instances to suit your needs: Multi-Instance GPU (MIG) technology allows you to partition a GPU into up to seven separate, secure instances, each with 5 GB or 10 GB of dedicated memory, so every user benefits from GPU acceleration.
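As a hedged illustration of how a workload targets one of those partitions: once an administrator has enabled and partitioned MIG on the H100, each instance is exposed to CUDA as its own device and can be selected by its MIG UUID. The sketch below assumes PyTorch is installed; the UUID shown is a placeholder.

```python
import os
import subprocess
import torch

# List physical GPUs and their MIG instances; MIG entries appear as "MIG-<uuid>".
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)

# Pin this process to a single MIG instance by UUID (placeholder value shown).
# CUDA then sees only that slice, exposed as device 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

x = torch.randn(4096, 4096, device="cuda")
print(torch.cuda.get_device_name(0), (x @ x).shape)
```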
NVIDIA L40S technical specifications

| Specification | Value |
|---|---|
| FP32 | 91.6 TFLOPS |
| FP32 Tensor Core | 366 TFLOPS |
| FP16 | 733 TFLOPS |
| FP8 | 1,466 TFLOPS |
| Memory | 48 GB |
| Maximum power consumption | 350 W |
NVIDIA H100 technical specifications

| Specification | Value |
|---|---|
| FP32 | 51 TFLOPS |
| FP32 Tensor Core | 756 TFLOPS |
| FP16 | 1,513 TFLOPS |
| FP8 | 3,026 TFLOPS |
| Memory | 80 GB |
| Maximum power consumption | 350 W |
Pricing
| Configuration | Hardware | Price | Commitment |
|---|---|---|---|
| CALCULATION - VMWARE:V3:PERF4 - 32 cores / 64 threads - 2.5/4.1 GHz (Intel 6426Y or equivalent) - 512 GB - 2x NVIDIA L40S 48 GB | 1 Blade + 2x GPU | €4,692.75 | 24 months |
| CALCULATION - VMWARE:V3:PERF5 - 32 cores / 64 threads - 2.5/4.1 GHz (Intel 6426Y or equivalent) - 512 GB - 2x NVIDIA H100 80 GB | 1 Blade + 2x GPU | €7,626.65 | 24 months |