The NodeSelector class specifies hardware requirements for Chutes deployments. This reference covers all configuration options, GPU types, and best practices for optimal resource allocation.
## Class Definition
```python
from chutes.chute import NodeSelector

node_selector = NodeSelector(
    gpu_count=1,             # int, default 1
    min_vram_gb_per_gpu=16,  # int, default 16
    include=None,            # Optional[List[str]], default None
    exclude=None,            # Optional[List[str]], default None
)
```
## Parameters
### GPU Configuration
#### `gpu_count: int = 1`

Number of GPUs required for the deployment. Valid range: 1-8.
**Examples:**

```python
# Single GPU (default)
node_selector = NodeSelector(gpu_count=1)

# Multiple GPUs for large models
node_selector = NodeSelector(gpu_count=4)

# Maximum supported GPUs
node_selector = NodeSelector(gpu_count=8)
```
**Use cases:**

- **1 GPU:** most standard AI models (BERT, GPT-2, small LLMs)
- **2-4 GPUs:** larger language models (7B-30B parameters)
- **4-8 GPUs:** very large models (70B+ parameters, distributed inference)
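The tiers above can be condensed into a rule-of-thumb helper. This is a hypothetical sketch, not part of the chutes API; the function name and the choice to return the top of each tier are assumptions for illustration.

```python
def suggest_gpu_count(params_billions: float) -> int:
    """Rough GPU-count suggestion from model size (hypothetical helper).

    Mirrors the tiers above: <=7B -> 1 GPU, up to 30B -> 4 GPUs,
    larger -> 8 GPUs. Returns the top of each tier for headroom.
    """
    if params_billions <= 7:
        return 1
    if params_billions <= 30:
        return 4
    return 8

# A 13B model lands in the 2-4 GPU tier
print(suggest_gpu_count(13))  # 4
```

In practice, the right count also depends on precision, batch size, and the inference framework's sharding support, so treat the output as a starting point.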
#### `min_vram_gb_per_gpu: int = 16`

Minimum VRAM (video RAM) required per GPU, in gigabytes. Valid range: 16-140.
**Examples:**

```python
# Default minimum (suitable for most models)
node_selector = NodeSelector(
    gpu_count=1,
    min_vram_gb_per_gpu=16
)

# Medium models requiring more VRAM
node_selector = NodeSelector(
    gpu_count=1,
    min_vram_gb_per_gpu=24
)

# Large models
node_selector = NodeSelector(
    gpu_count=2,
    min_vram_gb_per_gpu=48
)

# Ultra-large models
node_selector = NodeSelector(
    gpu_count=4,
    min_vram_gb_per_gpu=80
)
```
## Best Practices

### 1. Don't Over-Provision

```python
# Bad - wastes resources
oversized = NodeSelector(
    gpu_count=8,
    min_vram_gb_per_gpu=80
)

# Good - matches actual needs
rightsized = NodeSelector(
    gpu_count=1,
    min_vram_gb_per_gpu=24
)
```
### 2. Use Include/Exclude Wisely
```python
# Be specific when you have known requirements
specific_selector = NodeSelector(
    gpu_count=1,
    min_vram_gb_per_gpu=48,
    include=["l40", "a6000"]  # Known compatible GPUs
)

# Exclude known incompatible GPUs
compatible_selector = NodeSelector(
    gpu_count=1,
    min_vram_gb_per_gpu=24,
    exclude=["t4"]  # Known to be too slow
)
```
### 3. Consider Multi-GPU for Large Models
```python
# Single large GPU vs. multiple smaller GPUs

# Option 1: Single large GPU
single_gpu = NodeSelector(
    gpu_count=1,
    min_vram_gb_per_gpu=80,
    include=["h100", "a100-80gb"]
)

# Option 2: Multiple smaller GPUs (often more available)
multi_gpu = NodeSelector(
    gpu_count=2,
    min_vram_gb_per_gpu=40,
    include=["a100", "l40"]
)
```