The Lab

Self-hosted infrastructure. No cloud required.

By the Numbers

2 PowerEdge Servers
6 GPUs
~42GB VRAM
0 Monthly Cloud Bills

Network Topology

Internet
  |
Cloudflare
  |
Router/Firewall
  |
  +-- LAN (192.168.1.x) / Services (192.168.1.x)
        |
        +-- R710 (.29): OpenClaw, Kanban, N8N, LiteLLM
        +-- R610 (.30): DizyDiz (this site), SearXNG, Open WebUI
        +-- QNAP (.8): Portainer, NAS storage
        +-- RainbowAI (.195): 6 GPUs, Ollama, AI inference
        +-- Hommer (.7)
              |
              +-- OpenDiz VM (.233): Mr. Peepers, Velma

Servers

Dell PowerEdge R710

192.168.1.29

The primary application server. Runs Docker containers managed through Portainer, including the OpenClaw AI gateway, Kanban board, N8N workflow automation, LiteLLM proxy, and the infrastructure upgrade API.

Docker Host Portainer Agent Port 9001
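The agent side of that setup can be sketched as a small compose file following Portainer's standard Agent deployment; the file name and image tag here are assumptions:

```shell
# Sketch: Portainer Agent as a compose service, reachable on :9001.
# Written to ./portainer-agent.yml (an assumed name) for review first.
cat > portainer-agent.yml <<'EOF'
services:
  agent:
    image: portainer/agent:latest
    restart: always
    ports:
      - "9001:9001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
EOF
# docker compose -f portainer-agent.yml up -d
```

The central Portainer instance (on the QNAP) then adds this host as an Agent environment at its LAN address on port 9001.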

Dell PowerEdge R610

192.168.1.30

The web services server. Hosts this very website (dizydiz.com), the SearXNG search engine (search.dizydiz.com), and Open WebUI for chat interfaces. All running as Docker containers with macvlan networking.

nginx containers macvlan networking Web services
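A macvlan setup like this can be sketched in compose, giving a container its own address on the LAN. The parent interface (eth0), subnet, gateway, and container IP below are assumptions to adjust for the actual host:

```shell
# Sketch: nginx container with its own LAN identity via Docker's macvlan driver.
# parent interface, subnet, gateway, and IP are assumptions; match them to the host.
cat > macvlan-web.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    restart: unless-stopped
    networks:
      lan:
        ipv4_address: 192.168.1.40

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
EOF
# docker compose -f macvlan-web.yml up -d
```

With macvlan, the container answers directly at its own 192.168.1.x address instead of being port-mapped through the host.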

RainbowAI (GPU Node)

192.168.1.195

The AI inference server. 6 GPUs running dual Ollama instances for local LLM inference. No API calls to OpenAI, Anthropic, or any cloud provider -- all AI runs on local hardware.

4x GTX 1660 Super
1x GTX 1660 Ti
1x RTX 3060 12GB
Ollama :11434 (5 GPUs) Ollama :11435 (RTX 3060) ~42GB total VRAM
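One way to realize the dual-instance split above is two systemd units pinned to different GPU sets, using Ollama's documented OLLAMA_HOST environment variable plus CUDA_VISIBLE_DEVICES. The GPU indices, binary path, and unit names here are assumptions:

```shell
# Sketch: generate two systemd units, one Ollama instance per GPU group.
# GPU indices (0-4 vs 5) and /usr/local/bin/ollama are assumptions for this host.
for port in 11434 11435; do
  if [ "$port" = 11434 ]; then gpus="0,1,2,3,4"; else gpus="5"; fi
  cat > "ollama-$port.service" <<EOF
[Unit]
Description=Ollama on :$port
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
Environment=OLLAMA_HOST=0.0.0.0:$port
Environment=CUDA_VISIBLE_DEVICES=$gpus
Restart=always

[Install]
WantedBy=multi-user.target
EOF
done
# sudo cp ollama-*.service /etc/systemd/system/ && sudo systemctl daemon-reload
```

Each instance only sees its assigned GPUs, so the 3060 stays free for the model that needs its 12GB.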

Hommer (KVM Host)

192.168.1.7

Linux Mint desktop running libvirt/KVM. Hosts the OpenDiz Ubuntu VM, which runs the OpenClaw AI agent gateway, the Ollama proxy, and the Mr. Peepers / Velma AI agents.

libvirt/KVM 8 vCPU / 16GB VM Linux Mint 22.3

QNAP NAS

192.168.1.8

Network-attached storage running QNAP Container Station. Also hosts the Portainer management interface for the entire Docker infrastructure across all nodes.

Portainer :9443 Container Station NFS/SMB shares

Services

dizydiz.com

This website. Static HTML served by nginx in a Docker container on the R610. No CMS, no database, no PHP.
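A no-CMS setup like this can be sketched with a single compose service; the file and directory names are assumptions:

```shell
# Sketch: static HTML served read-only by nginx, nothing else in the stack.
cat > site-compose.yml <<'EOF'
services:
  dizydiz:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./public:/usr/share/nginx/html:ro
EOF
# docker compose -f site-compose.yml up -d
```

The read-only mount means the container cannot modify the site; deploys are just file copies into ./public.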

duck.dizydiz.com

A delightful duck-themed subdomain. Because every homelab needs at least one project that exists purely for joy.

search.dizydiz.com

Self-hosted SearXNG metasearch engine running on the R610. Privacy-respecting search without Google tracking.

OpenClaw

AI agent gateway running Mr. Peepers (qwen3:8b) and Velma (qwen3:30b-a3b). Local LLM inference with tool calling, Discord integration, and automated infrastructure tasks.

N8N

Workflow automation on the R710. Health monitoring, GPU audits, Discord issue detection, and automated remediation -- all running on self-hosted infrastructure.

Kanban Board

Custom kanban board at 192.168.1.124 with live status widget, task dispatch to AI agents, and an API for programmatic card management.
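Programmatic card management might look something like the request below. This is a hypothetical sketch only: the board's API is custom and not documented here, so the endpoint path and JSON fields are invented for illustration.

```shell
# Hypothetical sketch: create a card via the board's API.
# The /api/cards path and the "title"/"column" fields are invented examples,
# not the board's actual schema.
curl -s -X POST http://192.168.1.124/api/cards \
  -H 'Content-Type: application/json' \
  -d '{"title": "Rotate GPU node logs", "column": "todo"}'
```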

Open WebUI

Chat interface for the local Ollama instances. Web-based access to all loaded models without needing CLI access.
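Pointing Open WebUI at the GPU node can be sketched with its documented OLLAMA_BASE_URL setting; the port mapping and volume name are assumptions, while the Ollama address reuses the one from this page:

```shell
# Sketch: Open WebUI backed by the GPU node's primary Ollama instance.
# The 3000:8080 mapping and volume name are assumptions.
cat > open-webui.yml <<'EOF'
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://192.168.1.195:11434
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
EOF
# docker compose -f open-webui.yml up -d
```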

Portainer

Container management across all Docker hosts. Manages stacks on R710, R610, QNAP, and RainbowAI from a single interface.

Sparkles Recovery Station

Raspberry Pi 5-powered drive recovery pipeline. Automated multi-pass data recovery, file telemetry, Discord reporting, and auto-print diagnostics to the HP LaserJet. Handles both customer and internal drives.

cloud.co-cio.com

NextCloud instance for customer file delivery. Auto-provisioned accounts, recovered file uploads from NAS, 30-day expiry. Customers download their data here.

ntopng

Network traffic monitoring on pfSense. Real-time per-device traffic analysis, website visits, bandwidth usage across all VLANs. 192.168.1.1:3001 — admin / admin

AI Agents

All AI inference runs locally on the GPU node. No API calls to cloud providers. Models run on Ollama with custom routing, tool filtering, and thinking-token management via a local proxy.
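The routing layer described above could be expressed as a LiteLLM proxy config that maps friendly model names onto the two Ollama endpoints. The model aliases are assumptions; the ports and models match the GPU node described on this page:

```shell
# Sketch: LiteLLM config routing each agent's model to its Ollama instance.
# Alias names are assumptions; :11435 is the RTX 3060 instance, :11434 the 5-GPU one.
cat > litellm-config.yaml <<'EOF'
model_list:
  - model_name: mr-peepers
    litellm_params:
      model: ollama/qwen3:8b
      api_base: http://192.168.1.195:11435
  - model_name: velma
    litellm_params:
      model: ollama/qwen3:30b-a3b
      api_base: http://192.168.1.195:11434
EOF
# litellm --config litellm-config.yaml
```

Clients then speak one OpenAI-compatible API to the proxy, and LiteLLM forwards each request to the right GPU group.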

Mr. Peepers

qwen3:8b RTX 3060

The main AI agent. Runs on the dedicated RTX 3060 GPU. Handles Discord interactions, infrastructure health checks, cron tasks, Obsidian sync, and Kanban board management. Tool-calling champion.

Velma

qwen3:30b-a3b 5x GPUs

The sub-agent specialist. Runs as a 30B MoE model spread across 5 GPUs (18GB). Handles delegated tasks, writes documentation and runbooks, and sends morning reports. Spawned by Mr. Peepers via the delegation enforcer plugin.

Captain Claude

Claude Supervisor

Claude Code running as a service account on Hommer. Woken by N8N when Discord issues are detected. Can communicate with Mr. Peepers via webhook injection. The one cloud-connected piece in the puzzle.

Sparkles Drive Recovery

The Sparkles Recovery Station is a Raspberry Pi 5 running an automated data recovery pipeline. Plug in a drive, and it handles the rest: multi-pass recovery, file categorization, NAS backup, cloud delivery, Discord reporting, and diagnostic sheet printing.

~90% Recovery Success
Pi 5 Hardware
v3.0 Software Version
$199 Starting Price

No cleanroom. No soldering. Software-based recovery for corrupted drives, accidental deletion, bad sectors, and filesystem damage. Faster and cheaper than sending your drive to a lab.

Submit a Drive Check Order Status Download Files Learn More

From 4TB to 42GB VRAM

In 2011, the homelab was a Norco RPC-4224 chassis with 24 drive bays running FreeNAS and ZFS, backed by a Synology that had run out of its 4TB of space. The blog articles on DizyDiz were about compiling kernel drivers for LSI RAID cards and recovering ZFS pools from dead USB keys.

Fifteen years later, the same spirit drives the lab forward. The Norco chassis gave way to Dell PowerEdge rack servers. FreeNAS became TrueNAS. The storage moved to a QNAP NAS. And the homelab grew to include something that didn't exist in 2011: local AI inference on consumer GPUs.

The original DizyDiz tagline was "You are the Imitators; I am the Originator." In 2026, with 6 GPUs running open-source AI models, self-hosted search, automated infrastructure management, and zero cloud dependencies -- that tagline still fits.