By the Numbers
Network Topology
Servers
Dell PowerEdge R710
The primary application server. Runs Docker containers managed through Portainer, including the OpenClaw AI gateway, Kanban board, N8N workflow automation, LiteLLM proxy, and the infrastructure upgrade API.
Dell PowerEdge R610
The web services server. Hosts this very website (dizydiz.com), the SearXNG search engine (search.dizydiz.com), and Open WebUI for chat interfaces. All running as Docker containers with macvlan networking.
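A macvlan network like the one mentioned above is typically declared once per host so containers get their own LAN IPs; a minimal Compose sketch (the interface name, subnet, and addresses here are placeholders, not the lab's actual values):

```yaml
# Hypothetical Compose fragment: give a container its own LAN IP via macvlan.
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0              # host NIC attached to the LAN (placeholder)
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  web:
    image: nginx:alpine
    networks:
      lan:
        ipv4_address: 192.168.1.50   # container appears as its own LAN host
```

One macvlan caveat worth knowing: by default the host itself cannot reach containers on the macvlan network without an extra macvlan interface on the host side.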
RainbowAI (GPU Node)
The AI inference server. 6 GPUs running dual Ollama instances for local LLM inference. No API calls to OpenAI, Anthropic, or any cloud provider -- all AI runs on local hardware.
Hommer (KVM Host)
Linux Mint desktop running libvirt/KVM. Hosts the OpenDiz Ubuntu VM, which runs the OpenClaw AI agent gateway, the Ollama proxy, and the Mr. Peepers / Velma AI agents.
QNAP NAS
Network-attached storage running QNAP Container Station. Also hosts the Portainer management interface for the entire Docker infrastructure across all nodes.
Services
dizydiz.com
This website. Static HTML served by nginx in a Docker container on the R610. No CMS, no database, no PHP.
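A static setup like this needs only a few lines of nginx configuration; a minimal sketch (the root path shown is a placeholder for wherever the HTML is mounted into the container):

```nginx
# Minimal static-site server block (root path is a placeholder).
server {
    listen 80;
    server_name dizydiz.com;
    root /usr/share/nginx/html;     # static HTML mounted into the container
    index index.html;

    location / {
        try_files $uri $uri/ =404;  # no CMS, no PHP: serve the file or 404
    }
}
```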
duck.dizydiz.com
A delightful duck-themed subdomain. Because every homelab needs at least one project that exists purely for joy.
search.dizydiz.com
Self-hosted SearXNG metasearch engine running on the R610. Privacy-respecting search without Google tracking.
OpenClaw
AI agent gateway running Mr. Peepers (qwen3:8b) and Velma (qwen3:30b-a3b). Local LLM inference with tool calling, Discord integration, and automated infrastructure tasks.
N8N
Workflow automation on the R710. Health monitoring, GPU audits, Discord issue detection, and automated remediation -- all running on self-hosted infrastructure.
Kanban Board
Custom kanban board at 192.168.1.124 with live status widget, task dispatch to AI agents, and an API for programmatic card management.
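Since the board exposes an API, card management lends itself to scripting; a hypothetical call might look like this (the `/api/cards` path and JSON fields are invented for illustration and may not match the real schema):

```shell
# Hypothetical example: create a card via the board's API.
# Endpoint path and field names are assumptions, not the actual schema.
curl -s -X POST http://192.168.1.124/api/cards \
  -H 'Content-Type: application/json' \
  -d '{"title": "Audit GPU temps", "column": "todo", "assignee": "mr-peepers"}'
```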
Open WebUI
Chat interface for the local Ollama instances. Web-based access to all loaded models without needing CLI access.
Portainer
Container management across all Docker hosts. Manages stacks on R710, R610, QNAP, and RainbowAI from a single interface.
Sparkles Recovery Station
Raspberry Pi 5-powered drive recovery pipeline. Automated multi-pass data recovery, file telemetry, Discord reporting, and auto-print diagnostics to the HP LaserJet. Handles both customer and internal drives.
cloud.co-cio.com
NextCloud instance for customer file delivery. Auto-provisioned accounts, recovered file uploads from NAS, 30-day expiry. Customers download their data here.
ntopng
Network traffic monitoring on pfSense. Real-time per-device traffic analysis, website visits, and bandwidth usage across all VLANs. Reachable at 192.168.1.1:3001 (admin / admin).
AI Agents
All AI inference runs locally on the GPU node. No API calls to cloud providers. Models run on Ollama with custom routing, tool filtering, and thinking-token management via a local proxy.
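The routing half of that proxy can be sketched as a small function that maps each model name to the Ollama instance serving it. The hostnames, ports, and model-to-instance mapping below are assumptions for illustration, not the lab's actual configuration:

```python
# Minimal sketch of model-based routing between two Ollama instances.
# Instance URLs and the model-to-instance mapping are illustrative
# assumptions, not the lab's real config.

OLLAMA_INSTANCES = {
    "rtx3060": "http://rainbowai:11434",    # dedicated GPU for the main agent
    "multi-gpu": "http://rainbowai:11435",  # multi-GPU instance for large models
}

MODEL_ROUTES = {
    "qwen3:8b": "rtx3060",
    "qwen3:30b-a3b": "multi-gpu",
}

def route(model: str) -> str:
    """Return the base URL of the Ollama instance that serves `model`."""
    instance = MODEL_ROUTES.get(model, "rtx3060")  # default to the small instance
    return OLLAMA_INSTANCES[instance]

print(route("qwen3:30b-a3b"))  # -> http://rainbowai:11435
```

A proxy built around a lookup like this can also strip thinking tokens and filter tool definitions before forwarding the request to the chosen instance.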
Mr. Peepers
qwen3:8b · RTX 3060
The main AI agent. Runs on the dedicated RTX 3060 GPU. Handles Discord interactions, infrastructure health checks, cron tasks, Obsidian sync, and kanban board management. Tool-calling champion.
Velma
qwen3:30b-a3b · 5× GPUs
The sub-agent specialist. Runs as a 30B MoE model split across 5 GPUs (18 GB). Handles delegated tasks, writes documentation runbooks, and sends morning reports. Spawned by Mr. Peepers via the delegation enforcer plugin.
Captain Claude
Claude Code running as a service account on Hommer. Woken by N8N when Discord issues are detected. Can communicate with Mr. Peepers via webhook injection. The one cloud-connected piece in the puzzle.
Sparkles Drive Recovery
The Sparkles Recovery Station is a Raspberry Pi 5 running an automated data recovery pipeline. Plug in a drive, and it handles the rest: multi-pass recovery, file categorization, NAS backup, cloud delivery, Discord reporting, and diagnostic sheet printing.
No cleanroom. No soldering. Software-based recovery for corrupted drives, accidental deletion, bad sectors, and filesystem damage. Faster and cheaper than sending your drive to a lab.
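The multi-pass approach described above is what GNU ddrescue was designed for; a typical sequence looks like this (device names and image paths are placeholders, and a failing drive should always be imaged first, never recovered in place):

```shell
# Multi-pass recovery with GNU ddrescue (device and paths are placeholders).
# Pass 1: grab the easy data fast, skipping damaged areas (-n = no scraping).
ddrescue -n /dev/sdX drive.img drive.map

# Pass 2: return for the hard sectors with direct I/O and 3 retries.
ddrescue -d -r3 /dev/sdX drive.img drive.map
```

The map file records which sectors have been recovered, so interrupted runs resume where they left off instead of re-reading the whole drive.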
From 4TB to 42GB VRAM
In 2011, the homelab was a Norco RPC-4224 chassis with 24 drive bays running FreeNAS and ZFS, backed by a Synology that had run out of its 4TB of space. The blog articles on DizyDiz were about compiling kernel drivers for LSI RAID cards and recovering ZFS pools from dead USB keys.
Fifteen years later, the same spirit drives the lab forward. The Norco chassis gave way to Dell PowerEdge rack servers. FreeNAS became TrueNAS. The storage moved to a QNAP NAS. And the homelab grew to include something that didn't exist in 2011: local AI inference on consumer GPUs.
The original DizyDiz tagline was "You are the Imitators; I am the Originator." In 2026, with 6 GPUs running open-source AI models, self-hosted search, automated infrastructure management, and zero cloud dependencies -- that tagline still fits.