System Requirements¶
Supported Platforms¶
| Platform | Support Level | Notes |
|---|---|---|
| Linux (NUMA) | Full | All features available |
| Linux (UMA) | Partial | Graceful single-node fallback |
| macOS | Limited | Topology discovery only |
| Windows | Limited | Basic support |
Linux Requirements¶
Kernel Version¶
- Minimum: Linux 4.9
- Recommended: Linux 5.4+
Required kernel configuration options:
- CONFIG_NUMA=y
Checking NUMA Support¶
# Check the kernel command line for NUMA overrides (e.g. numa=off)
cat /proc/cmdline | grep -o numa
# Count NUMA nodes
ls -d /sys/devices/system/node/node* | wc -l
# View NUMA topology
numactl --hardware
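The checks above can be combined into a short script that reports the node count and flags single-node (UMA) systems; this is a sketch using only standard sysfs paths:

```shell
#!/bin/sh
# Count the NUMA nodes the kernel exposes via sysfs
nodes=$(ls -d /sys/devices/system/node/node* 2>/dev/null | wc -l)
echo "NUMA nodes: $nodes"

# One node (or none visible) means a UMA system:
# numaperf falls back to single-node operation there
if [ "$nodes" -le 1 ]; then
    echo "UMA system detected: remote-access effects will not be visible"
fi
```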
Required System Capabilities¶
Different features require different Linux capabilities:
| Feature | Capability | Purpose |
|---|---|---|
| Memory binding | CAP_SYS_ADMIN | Strict MPOL_BIND enforcement |
| CPU affinity | CAP_SYS_NICE | Real-time scheduling priority |
| Memory locking | CAP_IPC_LOCK | Prevent page migration |
Hard Mode Requirements¶
For strict locality guarantees ("hard mode"), you need:
- CAP_SYS_ADMIN - For strict memory binding
- CAP_SYS_NICE - For guaranteed CPU affinity
- NUMA balancing disabled - Prevent kernel page migration
Check your capabilities:
# Using numaperf CLI
numaperf-bench info capabilities
# Or manually
cat /proc/self/status | grep Cap
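The Cap* lines in /proc/self/status are hex bitmasks; if libcap's capsh utility is installed, it can decode the effective set into readable capability names. A sketch using standard tools:

```shell
# Extract the effective capability mask (hex) from /proc/self/status
capeff=$(grep CapEff /proc/self/status | awk '{print $2}')
echo "CapEff mask: $capeff"

# Decode the mask into names such as cap_sys_nice (requires libcap's capsh)
if command -v capsh >/dev/null; then
    capsh --decode="$capeff"
fi
```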
Enabling Capabilities¶
Option 1: Run as root (all capabilities are implicitly available)
Option 2: Use setcap to grant capabilities to the binary
Option 3: Docker: start the container with --cap-add flags
Option 4: systemd service: set AmbientCapabilities= in the unit file
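The four options might look like the following sketch; the binary path is a placeholder, and the capability list matches the hard-mode requirements above:

```shell
# Option 1: run as root (all capabilities implicitly available)
sudo numaperf-bench info capabilities

# Option 2: grant capabilities directly to the binary with setcap
sudo setcap cap_sys_admin,cap_sys_nice,cap_ipc_lock+ep /usr/local/bin/numaperf-bench

# Option 3: add capabilities to a Docker container at start
docker run --cap-add SYS_ADMIN --cap-add SYS_NICE --cap-add IPC_LOCK my-image

# Option 4: grant capabilities in a systemd unit file:
#   [Service]
#   AmbientCapabilities=CAP_SYS_ADMIN CAP_SYS_NICE CAP_IPC_LOCK
```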
Disabling NUMA Balancing¶
NUMA balancing can migrate pages between nodes, which may interfere with explicit memory placement:
# Disable temporarily
echo 0 | sudo tee /proc/sys/kernel/numa_balancing
# Disable permanently (add to /etc/sysctl.conf)
kernel.numa_balancing = 0
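To confirm the change took effect, read the value back; the file is absent on kernels built without automatic NUMA balancing:

```shell
# 0 = balancing disabled, 1 = enabled; "absent" means the kernel lacks the feature
nb=$(cat /proc/sys/kernel/numa_balancing 2>/dev/null || echo "absent")
echo "kernel.numa_balancing: $nb"
```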
Hardware Requirements¶
Multi-Socket Systems¶
numaperf is most beneficial on multi-socket systems where NUMA effects are significant:
- 2+ CPU sockets
- Separate memory controllers per socket
- Typical latency ratio: 1.5-3x for remote vs local access
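The remote-vs-local gap can be observed directly with numactl by pinning a workload's CPUs to one node and its memory to another. A rough illustration; `./your-workload` is a placeholder, and a dedicated benchmark such as STREAM gives more reliable numbers:

```shell
# Local access: CPUs and memory both on node 0
numactl --cpunodebind=0 --membind=0 ./your-workload

# Remote access: CPUs on node 0, memory forced onto node 1
numactl --cpunodebind=0 --membind=1 ./your-workload
```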
Single-Socket Systems¶
On single-socket systems, numaperf provides:
- Graceful fallback to single-node operation
- CPU affinity still works
- Memory policies have no effect (all memory is "local")
Virtual Machines¶
NUMA topology may not be accurately exposed in VMs:
- VMware: Enable NUMA virtualization
- KVM/QEMU: Use -numa options
- Cloud instances: Choose NUMA-aware instance types
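For KVM/QEMU, the -numa option defines guest nodes explicitly. A two-node guest might be configured like this sketch, where the memory sizes, CPU ranges, and disk image are placeholders:

```shell
qemu-system-x86_64 \
  -smp 8 -m 16G \
  -object memory-backend-ram,id=mem0,size=8G \
  -object memory-backend-ram,id=mem1,size=8G \
  -numa node,nodeid=0,cpus=0-3,memdev=mem0 \
  -numa node,nodeid=1,cpus=4-7,memdev=mem1 \
  disk.img
```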
Checking System Configuration¶
Use the numaperf CLI to check your system:
# Full system information
numaperf-bench info
# Just capabilities
numaperf-bench info capabilities -v
# JSON output for scripting
numaperf-bench info --format json
Example output:
=== numaperf System Information ===
NUMA Topology
─────────────
Nodes: 2
Total CPUs: 16
Node 0: 8 CPUs (0-7), 32768 MB memory
Node 1: 8 CPUs (8-15), 32768 MB memory
Distance Matrix
───────────────
Node 0 Node 1
Node 0 10 21
Node 1 21 10
Capabilities
────────────
Hard mode: NOT SUPPORTED
[-] CAP_SYS_ADMIN (strict memory binding)
[+] CAP_SYS_NICE (strict CPU affinity)
[-] CAP_IPC_LOCK (memory locking)
[-] NUMA balancing disabled
NUMA system: yes (2 nodes)
Troubleshooting¶
"No NUMA nodes found"¶
- Check if NUMA is enabled in the kernel: dmesg | grep -i numa
- Verify /sys/devices/system/node/ exists
- On VMs, check hypervisor NUMA settings
"Permission denied" on memory binding¶
- Check capabilities: capsh --print
- Run with elevated privileges or add capabilities
- Consider using soft mode for best-effort locality
"Affinity not applied"¶
- Check if the CPUs exist: cat /proc/cpuinfo
- Verify CPU set syntax: 0-3 or 0,1,2,3
- Check cgroup CPU restrictions
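taskset (from util-linux) shows the affinity mask actually applied to a running process, which helps distinguish a syntax problem from a cgroup restriction. A sketch; the cgroup v2 path may differ on your system:

```shell
# Show the current shell's allowed CPU list
taskset -pc $$

# Compare against cgroup cpuset restrictions (cgroup v2 path shown)
cat /sys/fs/cgroup/cpuset.cpus.effective 2>/dev/null || echo "no cgroup v2 cpuset info"
```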
Next Steps¶
- Soft vs Hard Mode - Understanding enforcement modes
- Hard Mode Guide - Detailed hard mode configuration
- Troubleshooting - Common issues and solutions