Chutes
Node Growth Analytics
GPU node growth patterns over the last 30 days
GPU Node Counts
Below are the current counts of GPU nodes on the Chutes platform, along with key specifications for each GPU model and a summary of the total compute currently available.
GPU Node Table
The table below lists each GPU model currently provisioned, its node count, and its key hardware specifications.
GPU Name | Provisioned | Memory | CUDA Cores | Memory Bandwidth | FP32 Performance | Max Power |
---|---|---|---|---|---|---|
NVIDIA H200 141GB SXM | 2884 | 141 GB HBM3e | 16,896 | 4.8 TB/s | 67 TFLOPS | 700 W |
NVIDIA L40 | 611 | 48 GB GDDR6 (ECC) | 18,176 | 864 GB/s | 90.5 TFLOPS | 300 W |
NVIDIA H100 80GB SXM | 506 | 80 GB HBM3 | 16,896 | 3.35 TB/s | 67 TFLOPS | 700 W |
NVIDIA L40S | 411 | 48 GB GDDR6 (ECC) | 18,176 | 864 GB/s | 91.6 TFLOPS | 350 W |
NVIDIA GeForce RTX 3090 | 341 | 24 GB GDDR6X | 10,496 | 936 GB/s | ~35.7 TFLOPS | 350 W |
NVIDIA A100 40GB PCIe | 296 | 40 GB HBM2 | 6,912 | 1.6 TB/s | 19.5 TFLOPS | 250 W |
NVIDIA RTX A6000 | 254 | 48 GB GDDR6 (ECC) | 10,752 | 768 GB/s | 38.7 TFLOPS | 300 W |
NVIDIA A100 80GB SXM | 104 | 80 GB HBM2e | 6,912 | 2.039 TB/s | 19.5 TFLOPS | 400 W |
NVIDIA B200 Blackwell | 64 | 192 GB HBM3E | 18,000 | 8 TB/s | 160 TFLOPS | 700 W |
N/A | 24 | N/A | N/A | N/A | N/A | N/A |
NVIDIA RTX A4000 | 24 | 16 GB GDDR6 (ECC) | 6,144 | 448 GB/s | 19.2 TFLOPS | 140 W |
NVIDIA RTX 6000 Ada Generation | 20 | 48 GB GDDR6 (ECC) | 18,176 | 960 GB/s | 91.1 TFLOPS | 300 W |
NVIDIA A40 | 8 | 48 GB GDDR6 (ECC) | 10,752 | 696 GB/s | 37.4 TFLOPS | 300 W |
NVIDIA H100 80GB PCIe | 8 | 80 GB HBM2e | 14,592 | 2 TB/s | 51 TFLOPS | 350 W |
NVIDIA GeForce RTX 4090 | 1 | 24 GB GDDR6X | 16,384 | 1008 GB/s | ~83 TFLOPS | 450 W |
NVIDIA H100 NVL (2x 94GB) | 1 | 94 GB HBM3 (per GPU) | 16,896 | 3.9 TB/s (per GPU) | 60 TFLOPS (per GPU) | 350–400 W (per GPU) |
NVIDIA L4 Tensor Core GPU | 1 | 24 GB GDDR6 | 7,680 | 300 GB/s | 30.3 TFLOPS | 72 W |
NVIDIA A10 Tensor Core GPU | 0 | 24 GB GDDR6 | 9,216 | 600 GB/s | 31.2 TFLOPS | 150 W |
NVIDIA A100 80GB PCIe | 0 | 80 GB HBM2e | 6,912 | 1.935 TB/s | 19.5 TFLOPS | 300 W |
Total | 5558 | - | 86,834,944 | - | - | 3,071,532 W |
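The Total row aggregates across every provisioned node: each per-GPU figure is multiplied by its model's count and summed. As a quick sanity check, the Python sketch below (not official Chutes tooling; the fleet list is copied straight from the table) reproduces those totals. The unnamed 24-node row is omitted from the core and power sums because its specs are N/A, as are the zero-count models, which contribute nothing.

```python
# Minimal sketch: reproduce the table's "Total" row by multiplying each
# per-GPU spec by the provisioned count and summing across models.

# (model, provisioned, CUDA cores per GPU, max power per GPU in watts)
FLEET = [
    ("NVIDIA H200 141GB SXM",          2884, 16_896, 700),
    ("NVIDIA L40",                      611, 18_176, 300),
    ("NVIDIA H100 80GB SXM",            506, 16_896, 700),
    ("NVIDIA L40S",                     411, 18_176, 350),
    ("NVIDIA GeForce RTX 3090",         341, 10_496, 350),
    ("NVIDIA A100 40GB PCIe",           296,  6_912, 250),
    ("NVIDIA RTX A6000",                254, 10_752, 300),
    ("NVIDIA A100 80GB SXM",            104,  6_912, 400),
    ("NVIDIA B200 Blackwell",            64, 18_000, 700),
    ("NVIDIA RTX A4000",                 24,  6_144, 140),
    ("NVIDIA RTX 6000 Ada Generation",   20, 18_176, 300),
    ("NVIDIA A40",                        8, 10_752, 300),
    ("NVIDIA H100 80GB PCIe",             8, 14_592, 350),
    ("NVIDIA GeForce RTX 4090",           1, 16_384, 450),
    ("NVIDIA H100 NVL (2x 94GB)",         1, 16_896, 350),  # low end of its 350-400 W range
    ("NVIDIA L4 Tensor Core GPU",         1,  7_680,  72),
]

nodes = sum(count for _, count, _, _ in FLEET)
cores = sum(count * c for _, count, c, _ in FLEET)
watts = sum(count * w for _, count, _, w in FLEET)

print(f"Provisioned GPUs: {nodes:,}")    # 5,534 (+24 unnamed nodes = 5,558)
print(f"Total CUDA cores: {cores:,}")    # 86,834,944
print(f"Total max power:  {watts:,} W")  # 3,071,532 W, roughly 3.07 MW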