A network load target for the monitoring tools you're writing.
l8opensim simulates up to 30,000 network devices, GPU servers, storage systems, and Linux hosts on a single Linux machine, each with its own IP address, SNMP listener, SSH server, HTTPS REST endpoint, and flow exporter. Built on TUN interfaces and network namespaces.
quick start
Build from source:

```shell
git clone https://github.com/labmonkeys-space/l8opensim.git
cd l8opensim
make tidy
make build
sudo ./go/simulator/simulator -auto-start-ip 10.0.0.1 -auto-count 100
```

Or pull with Docker:

```shell
docker pull ghcr.io/labmonkeys-space/l8opensim:latest
docker run --rm -it \
  --cap-add=NET_ADMIN \
  --device=/dev/net/tun \
  --network=host \
  ghcr.io/labmonkeys-space/l8opensim:latest \
  -auto-start-ip 10.0.0.1 -auto-count 100
```

what's in the box
six pillars

Tested at scale on a single host: parallel TUN pre-allocation, a lock-free sync.Map for O(1) OID lookups, and pre-computed next-OID mappings.
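The O(1) lookup idea can be pictured in a few lines of Go. This is an illustrative sketch, not l8opensim's actual code: an agent stores OID values in a lock-free sync.Map and pre-computes each OID's lexicographic successor once at load time, so both GET and GETNEXT become single map lookups instead of a sorted scan. The names (`oidStore`, `getNext`) are invented for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// oidStore (hypothetical): values live in a sync.Map, safe for many
// concurrent readers without locking; next holds each OID's successor,
// built once from the sorted OID list.
type oidStore struct {
	values sync.Map          // OID -> value
	next   map[string]string // OID -> next OID, pre-computed
}

func newOIDStore(sorted []string, vals map[string]any) *oidStore {
	s := &oidStore{next: make(map[string]string, len(sorted))}
	for i, oid := range sorted {
		s.values.Store(oid, vals[oid])
		if i+1 < len(sorted) {
			s.next[oid] = sorted[i+1] // pre-computed GETNEXT target
		}
	}
	return s
}

// getNext resolves an SNMP GETNEXT in O(1) by following the link.
func (s *oidStore) getNext(oid string) (string, any, bool) {
	n, ok := s.next[oid]
	if !ok {
		return "", nil, false // end of MIB view
	}
	v, _ := s.values.Load(n)
	return n, v, true
}

func main() {
	sorted := []string{"1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.3.0", "1.3.6.1.2.1.1.5.0"}
	vals := map[string]any{
		"1.3.6.1.2.1.1.1.0": "l8opensim router",
		"1.3.6.1.2.1.1.3.0": 12345,
		"1.3.6.1.2.1.1.5.0": "sim-dev-1",
	}
	s := newOIDStore(sorted, vals)
	oid, v, _ := s.getNext("1.3.6.1.2.1.1.1.0")
	fmt.Println(oid, v) // prints "1.3.6.1.2.1.1.3.0 12345"
}
```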
SNMP v2c/v3 (MD5/SHA1 · DES/AES128), SSH with VT100, HTTPS REST, NetFlow v5 / v9 / IPFIX. sFlow v5 (experimental).
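Once a fleet is up, any standard collector can poll it. A hypothetical smoke test against the first auto-started device from the quick start (10.0.0.1); the community string, SSH username, and REST path below are placeholders, not values documented here:

```shell
# Illustrative only; requires a running simulator fleet.
snmpwalk -v2c -c public 10.0.0.1 system   # "public" is a guess, not documented here
ssh admin@10.0.0.1                        # VT100 CLI; username is a placeholder
curl -k https://10.0.0.1/                 # HTTPS REST (storage device types)
```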
Routers, switches, firewalls, servers, GPU servers (DGX/HGX), storage systems, Linux servers — across 8 categories.
NVIDIA DGX-A100 / H100 / HGX-H200 with per-GPU DCGM OIDs — utilization, VRAM, temp, power, fan, SM/memory clocks.
Each device runs in the dedicated opensim network namespace with its own TUN interface and IP.
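The per-device plumbing can be pictured with plain iproute2 commands. This is an illustrative sketch of the equivalent manual setup (requires root), not the simulator's actual code; the interface name `sim0` is invented:

```shell
# Roughly what one simulated device's wiring looks like:
ip netns add opensim                                   # shared namespace
ip netns exec opensim ip tuntap add dev sim0 mode tun  # per-device TUN interface
ip netns exec opensim ip addr add 10.0.0.1/24 dev sim0 # the device's own IP
ip netns exec opensim ip link set sim0 up
ip netns exec opensim ip addr show sim0                # verify
```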
Pre-generated 100-point sine-wave cycles drive CPU, memory, and temperature readings, correlated across related metrics.
device catalog
28 types · 8 categories · 341 resource files

- Cisco ASR9K · 48
- Cisco CRS-X · 144
- Huawei NE8000 · 96
- Nokia 7750 SR-12 · 72
- Juniper MX960 · 96
- Juniper MX240 · 24
- NEC IX3315 · 48
- Cisco IOS · 4
- Cisco Nexus 9500 · 48
- Arista 7280R3 · 32
- Cisco Catalyst 9500 · 48
- Extreme VSP4450 · 48
- D-Link DGS-3630 · 52
- Palo Alto PA-3220 · 12
- Fortinet FortiGate-600E · 20
- SonicWall NSa 6700 · 16
- Check Point 15600 · 24
- Dell PowerEdge R750
- HPE ProLiant DL380
- IBM Power S922
- Linux Server · Ubuntu 24.04
- NVIDIA DGX-A100 · 8×80GB
- NVIDIA DGX-H100 · 8×80GB
- NVIDIA HGX-H200 · 8×141GB
- AWS S3
- Pure Storage FlashArray
- NetApp ONTAP
- Dell EMC Unity
status & scale
what works · how big

- SNMP v2c/v3
- SSH (VT100)
- HTTPS REST (storage)
- NetFlow v5/v9/IPFIX
- TUN + netns isolation
- Web UI + REST API
- sFlow v5 (synthetic)
- Layer 8 overlay
- 30,000 concurrent devices / host
- ~50 MB base + ~1 KB / device
- CPU: minimal in steady state
documentation map
jump in

- Build, bring up a small fleet, run in Docker.
- Scale to 30k, tune the opensim namespace, flow export, and SNMP traps.
- Architecture, CLI flags, REST API, device-type tables, protocol details.
- DGX/HGX simulation, DCGM OID layout, pollaris parser.
Spin up tens of thousands of devices in seconds.
Apache-2.0. No agents, no cloud, no per-device fees. Just TUN interfaces and a little Go.