v0.4.1 · Apache-2.0 · Go 1.26

A network load target for the monitoring tools you're writing.

l8opensim simulates up to 30,000 network devices, GPU servers, storage systems and Linux hosts on a single Linux host — each with its own IP, SNMP listener, SSH server, HTTPS REST endpoint and flow exporter. Built on TUN interfaces and network namespaces.

devices / host · 30,000 max
device types · 28 in 8 categories
mem / device · ~1 KB
parallel workers · 500 max
01

quick start

build from source · or pull with docker
01 · clone · git
$git clone https://github.com/labmonkeys-space/l8opensim.git
$cd l8opensim
02 · build · make · go 1.26+
$make tidy
$make build
03 · run · needs root
$sudo ./go/simulator/simulator -auto-start-ip 10.0.0.1 -auto-count 100
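Once the simulator is up, each device answers on its own IP. A quick smoke test with net-snmp (the v2c community string `public` is an assumption here, not a documented default — check your build's configuration):

```shell
# Poll sysDescr.0 on the first auto-created device.
# Assumptions: simulator started as above; SNMP v2c community "public".
snmpget -v2c -c public -t 1 -r 0 10.0.0.1 1.3.6.1.2.1.1.1.0 \
  || echo "no response yet (is the simulator running?)"
```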
or with docker
01 · pull · no toolchain
$docker pull ghcr.io/labmonkeys-space/l8opensim:latest
02 · run · needs --cap-add=NET_ADMIN
$docker run --rm -it \
    --cap-add=NET_ADMIN \
    --device=/dev/net/tun \
    --network=host \
    ghcr.io/labmonkeys-space/l8opensim:latest \
    -auto-start-ip 10.0.0.1 -auto-count 100
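`-auto-count` allocates sequential addresses upward from `-auto-start-ip`. A tiny sketch of where the range ends, for the simple case where only the last octet increments (which matches the example above):

```shell
# Compute the last auto-assigned IP for a given start and count.
# Sketch only: assumes the range stays within one /24.
start=10.0.0.1
count=100
base=${start%.*}                 # 10.0.0
first=${start##*.}               # 1
last=$(( first + count - 1 ))    # 100
echo "devices occupy ${start} .. ${base}.${last}"
```

With the values above this prints `devices occupy 10.0.0.1 .. 10.0.0.100`.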
02

what's in the box

six pillars
01
30,000 devices

Tested scale on a single host. Parallel TUN pre-allocation, lock-free sync.Map for O(1) OID lookups, pre-computed next-OID mappings.

02
Protocols

SNMP v2c/v3 (MD5/SHA1 · DES/AES128), SSH with VT100, HTTPS REST, NetFlow v5 / v9 / IPFIX. sFlow v5 (experimental).
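For the v3 side, a net-snmp query against a simulated device might look like this — every credential below is an illustrative placeholder, not a simulator default:

```shell
# SNMPv3 authPriv query (MD5 auth, AES-128 privacy, per the matrix above).
# simuser / authpass123 / privpass123 are placeholders.
snmpget -v3 -l authPriv \
  -u simuser -a MD5 -A 'authpass123' \
  -x AES -X 'privpass123' \
  -t 1 -r 0 10.0.0.1 1.3.6.1.2.1.1.5.0 \
  || echo "no SNMPv3 response"
```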

03
28 device types

Routers, switches, firewalls, servers, GPU servers (DGX/HGX), storage systems, Linux servers — across 8 categories.

04
GPU simulation

NVIDIA DGX-A100 / H100 / HGX-H200 with per-GPU DCGM OIDs — utilization, VRAM, temp, power, fan, SM/memory clocks.

05
Namespace isolation

Each device runs in the dedicated opensim network namespace with its own TUN interface and IP.
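Because everything lives in the `opensim` namespace, standard iproute2 tooling can inspect it — a sketch (listing namespaces needs no privileges; entering one does):

```shell
# List namespaces and, if "opensim" exists, show its interfaces.
ip netns list | grep -q opensim \
  && sudo ip netns exec opensim ip -brief addr show | head \
  || echo "opensim namespace not found (simulator not running?)"
```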

06
Dynamic metrics

100-point pre-generated sine-wave cycling for CPU, memory, temperature — correlated across related metrics.
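The simulator's exact generator isn't reproduced here, but a 100-point sine table in the spirit described — one full cycle, scaled to a 0–100 range such as CPU percent — can be sketched in one awk call:

```shell
# Pre-generate a 100-point sine cycle scaled to 0-100 (sketch).
# Column 1: sample index; column 2: metric value.
awk 'BEGIN { pi = 3.14159265358979;
  for (i = 0; i < 100; i++)
    printf "%d %.1f\n", i, 50 + 50 * sin(2 * pi * i / 100) }'
```

The table starts at `0 50.0`, peaks at 100.0 a quarter-cycle in, and wraps back to the start — cycling through it gives smooth, repeatable load curves.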

03

device catalog

28 types · 8 categories · 341 resource files
core routers · 5
  • Cisco ASR9K · 48
  • Cisco CRS-X · 144
  • Huawei NE8000 · 96
  • Nokia 7750 SR-12 · 72
  • Juniper MX960 · 96
edge routers · 3
  • Juniper MX240 · 24
  • NEC IX3315 · 48
  • Cisco IOS · 4
dc switches · 2
  • Cisco Nexus 9500 · 48
  • Arista 7280R3 · 32
campus switches · 3
  • Cisco Catalyst 9500 · 48
  • Extreme VSP4450 · 48
  • D-Link DGS-3630 · 52
firewalls · 4
  • Palo Alto PA-3220 · 12
  • Fortinet FortiGate-600E · 20
  • SonicWall NSa 6700 · 16
  • Check Point 15600 · 24
servers · 4
  • Dell PowerEdge R750
  • HPE ProLiant DL380
  • IBM Power S922
  • Linux Server · Ubuntu 24.04
gpu servers · 3
  • NVIDIA DGX-A100 · 8×80GB
  • NVIDIA DGX-H100 · 8×80GB
  • NVIDIA HGX-H200 · 8×141GB
storage · 4
  • AWS S3
  • Pure Storage FlashArray
  • NetApp ONTAP
  • Dell EMC Unity
04

status & scale

what works · how big
stable · 6
  • SNMP v2c/v3
  • SSH (VT100)
  • HTTPS REST (storage)
  • NetFlow v5/v9/IPFIX
  • TUN + netns isolation
  • Web UI + REST API
experimental · 2
  • sFlow v5 (synthetic)
  • Layer 8 overlay
tested scale · 3
  • 30,000 concurrent devices / host
  • ~50 MB base + ~1 KB / device
  • CPU: minimal in steady state
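Those two figures multiply out nicely — a back-of-the-envelope check at full tested scale, using integer arithmetic and the numbers from the list above:

```shell
# Rough total footprint at tested scale:
# ~50 MB base + ~1 KB per device.
devices=30000
total_kb=$(( 50 * 1024 + devices ))
echo "~$(( total_kb / 1024 )) MB for ${devices} devices"
```

This prints `~79 MB for 30000 devices` — comfortably inside a laptop's RAM.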
concurrent devices · 30,000 tested
device types · 28
resource files · 341 json
world cities · 98 sysLocation
ssh commands · 36+ linux
05

documentation map

jump in
→ get started

Spin up tens of thousands of devices in seconds.

Apache-2.0. No agents, no cloud, no per-device fees. Just TUN interfaces and a little Go.