It's been a while since my last homelab post — long enough that a lot has changed. This is my attempt to catch everything up in one place. Grab a coffee.
Table of Contents
- The Dashboard Gets a Makeover
- Deploying Nextcloud (and Saying Goodbye to Some Old LXCs)
- Building Out the *Arr Stack: Prowlarr, qBittorrent, and Gluetun
- KICE Cards: Standing Up WooCommerce for My Son's Business
- Migrating Pterodactyl to Unraid: NVMe Game Servers
- 10GbE Upgrade: Proxmox and Unraid
- Tdarr Transcoding: Reclaiming 6.34 TB of Storage
- Homelab Documentation: From MkDocs to Wiki.js
- Outline Wiki for Unum
- Self-Hosted Print Server with a Raspberry Pi
- Home Assistant, Kasa Devices, and What's Next
- FreshRSS: Ditching the Algorithm
- What's Next
1. The Dashboard Gets a Makeover
The Homepage dashboard has gone from a simple list of service links to something I'm genuinely proud of. It now surfaces real-time data from nearly every part of the stack, organized into three tabs: Infrastructure; Media, Gaming, & Personal Services; and Calendar, Stocks, & Crypto.
On the infrastructure side, I have live Proxmox and Unraid stats — VMs, LXC count, CPU, memory — pulled directly from their APIs. Below that, the Netdata widgets alert me of any warnings or critical messages, and Glances charts show per-host CPU, memory, network throughput, disk I/O, and temperature sensors for each machine.
Networking gets its own section: pfSense with the Speedtest Tracker widget showing live download/upload and ping, Traefik showing active routers and services, a custom API pulling TP-Link Deco client counts broken out by access point location (basement, kitchen, Eli's room, garage), and Cloudflare Tunnels showing health status and origin IP.
The media tab covers the full *Arr stack — Overseerr, Radarr, Sonarr, Prowlarr, Bazarr, SABnzbd, qBittorrent, Plex, and Tdarr — each with their native widgets showing queue depth, processing status, and library counts. The gaming section shows the Pterodactyl Panel node/server counts alongside live Minecraft server status for the more "critical" servers I'm hosting. Personal services round it out: Home Assistant (people home, lights on), Nextcloud (storage and file counts), Mealie, and FreshRSS.
The calendar tab is probably my favorite. It aggregates my personal Google Calendar, my Outlook calendar for Unum, Sonarr, and Radarr into a single monthly view, with a side agenda panel showing the next two weeks at a glance. Below that: a stock watchlist, a crypto watchlist, and two custom API widgets pulling live gold and silver spot prices from MetalPriceAPI.
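For the curious, one of those spot-price tiles looks roughly like this in Homepage's services.yaml (a sketch — the endpoint query, response field path, and icon are assumptions, so adjust to MetalPriceAPI's actual response shape):

```yaml
# services.yaml — a hypothetical gold spot-price tile
- Gold Spot:
    icon: mdi-gold
    widget:
      type: customapi
      url: https://api.metalpriceapi.com/v1/latest?api_key={{HOMEPAGE_VAR_METALPRICE_KEY}}&base=USD&currencies=XAU
      refreshInterval: 600000    # ten minutes, in ms
      mappings:
        - field:
            rates: USDXAU        # assumed field: USD per troy ounce
          label: Gold (USD/oz)
          format: number
```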
2. Deploying Nextcloud
For a long time I had two dedicated LXC containers running ITZG's Docker images for Minecraft Java and Bedrock servers. They'd been sitting mostly idle for a while, so I finally pulled the plug — but not before backing up all the world files. That backup motivation was actually what pushed me to finally deploy Nextcloud.
Nextcloud now runs behind a Cloudflare Zero Trust tunnel, so I can access it securely over the Internet at my personal domain after authenticating via email. It's primarily serving as personal cloud storage right now, but it's a solid foundation for more. The old Minecraft LXCs have been repurposed as Home Assistant and Wiki.js respectively — much better use of the slots.
3. Building Out the *Arr Stack
The media automation stack got two meaningful additions: Prowlarr for centralized indexer management, and qBittorrent running through Gluetun for VPN-tunneled torrent downloads.
Prowlarr replaced the pattern of configuring indexers individually in Sonarr and Radarr. Now I add an indexer once to Prowlarr and it syncs to all the downstream *Arr apps automatically. It's a small quality-of-life improvement that compounds fast as the indexer list grows.
qBittorrent runs in my Gluetun VPN network namespace — meaning it can only reach the internet through the Privado VPN tunnel managed by Gluetun. If the VPN drops, qBittorrent loses connectivity entirely rather than falling back to my real IP. It's accessible over the Internet via my personal domain but routed through a Cloudflare Tunnel that requires email authentication.
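The compose wiring for that setup looks roughly like this (a sketch — credentials are placeholders, and your Gluetun provider settings may differ):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=privado
      - OPENVPN_USER=changeme       # placeholder
      - OPENVPN_PASSWORD=changeme   # placeholder
    ports:
      - 8080:8080   # qBittorrent WebUI is published on the gluetun container

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"  # share gluetun's network namespace
    environment:
      - WEBUI_PORT=8080
    depends_on:
      - gluetun
```

Because qBittorrent shares Gluetun's network namespace, its WebUI port has to be published on the gluetun container, not on qbittorrent itself — that's also what gives you the hard kill-switch behavior.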
4. KICE Cards: WooCommerce for My Son's Business
My son discovered my old basketball, football, and baseball cards in storage and decided to start a trading card business with his cousins called KICE Cards. I spun up a dedicated Ubuntu VM that runs WordPress + WooCommerce with a custom domain and handles the full e-commerce workflow: product listings, inventory, orders, and payments.
It's been a good exercise in running something production-adjacent on homelab hardware — the WooCommerce database backup is on my to-do list for the backup runbook, but the site itself has been solid. And best of all, I've been able to help my son out with something in which he has a genuine interest.
5. Migrating Pterodactyl to Unraid
This was the biggest single project of the past several months, and it came with more friction than expected.
The motivation was simple: game server world data on spinning disk meant sluggish Minecraft chunk loading, and the Dell R730's Xeon cores weren't doing the game servers any favors compared to the i9-9900K sitting largely idle in the Unraid NAS. The goal was to move Wings to a dedicated Ubuntu VM on Unraid with the game volume data mapped directly to a 2TB NVMe cache share — no FUSE overhead, no spinning disk fallback.
What moved where:
- The Panel stack (MariaDB, Redis, and the Pterodactyl web UI) moved to Docker containers on Unraid
- Wings got a dedicated Ubuntu Server 24.04 VM
- The pterodactyl Unraid share lives on the NVMe cache and is mounted into the Wings VM via 9p virtio
- The node communicates with the Panel through a Cloudflare Tunnel, with SSL handled at Cloudflare's edge rather than on Wings
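Inside the VM, that 9p mount boils down to a one-liner in /etc/fstab (a sketch — the mount tag comes from the Unraid VM config, and the guest path here is an example):

```text
# /etc/fstab (inside the Wings VM) — mount the Unraid share via 9p/virtio
pterodactyl  /var/lib/pterodactyl  9p  trans=virtio,version=9p2000.L,_netdev  0  0
```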
The tricky parts:
The Wings config originally referenced a local IP for SSL termination — obviously broken on a new VM. Switching to Cloudflare tunnel + SSL passthrough took some log-diving before Wings would start clean and show a green heart in the Panel.
The rsync of ~27GB of Minecraft world data (tens, possibly hundreds, of thousands of tiny files) through a 9p virtio filesystem mount took most of an afternoon and evening. Writing through that virtual filesystem adds per-file latency that adds up fast.
The nastiest part came after the data transfer. Pterodactyl had the old Wings LXC's IP baked into Docker container port bindings, and the Panel's allocation system prevents you from reassigning the same port on a different IP to the same server through the UI. The workaround: bypass the Panel entirely and run docker start directly, letting Docker pick up the updated bindings. Not elegant, but it got my servers up and running on NVMe. Chunk loading is noticeably snappier — mission accomplished.
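The workaround itself is just plain Docker commands (the container name below is a hypothetical UUID — Wings names each server container after its Pterodactyl server UUID):

```shell
# List the Wings-managed containers, then start the one the Panel refuses to
docker ps -a --format '{{.Names}}\t{{.Status}}'
docker start 1a2b3c4d-0000-0000-0000-000000000000   # hypothetical server UUID
docker logs -f 1a2b3c4d-0000-0000-0000-000000000000 # watch it boot cleanly
```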
The old Proxmox LXCs are still running as a fallback while I finish verification, but decommissioning them is the next step.
6. 10GbE Upgrade: Proxmox and Unraid
Proxmox: Solving Packet Drops
My Proxmox host had been experiencing persistent packet drops — around 4 per second — under normal load from 17 LXC containers and 7 VMs. The culprit was the onboard BCM5720 1GbE NIC hitting its RX ring buffer ceiling of 2,047, which left no headroom for traffic spikes.
The fix was a Broadcom BCM57810 dual-port 10GbE SFP+ NIC connected via DAC cable to my Dell N2024P switch. The BCM57810's 4,078-entry buffer (nearly double the BCM5720's 2,047-entry limit) completely eliminated the drops. The migration was an /etc/network/interfaces edit to bridge enp5s0f0 instead of eno4, a ring buffer increase, and an ifreload -a from iDRAC. About a 10-second network blip and everything came back clean. Netdata confirmed the missed-packet counter dropped to zero immediately.
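The relevant changes, roughly (the bridge address below is a placeholder; the interface names are the ones from this host):

```text
# /etc/network/interfaces (excerpt) — bridge the 10GbE port instead of eno4
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24        # placeholder address
    gateway 192.0.2.1
    bridge-ports enp5s0f0        # was: eno4
    bridge-stp off
    bridge-fd 0

# Applied with:  ethtool -G enp5s0f0 rx 4078 && ifreload -a
```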
Unraid: A More Eventful Upgrade
After the success with my Proxmox upgrade, I decided to attempt the same on my Unraid box. The motherboard didn't have a built-in 10GbE NIC, but I was able to repurpose an extra card from my Proxmox host. What looked like a simple swap turned into a real troubleshooting session: the card wouldn't establish a link on either port despite the cable and switch both checking out.
The fix ended up being two things: Dell BCM57810 cards require autonegotiation to be explicitly disabled (ethtool -s eth2 speed 10000 duplex full autoneg off) to link up with DAC cables, and the Unraid BIOS needed "Other PCI devices" set to UEFI mode to POST correctly alongside the GTX 1070.
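Since ethtool settings don't survive a reboot, persisting the fix means re-running it at boot — Unraid's go file is a common spot for this (the path is standard Unraid; the interface name is from this box):

```shell
# Append to /boot/config/go — force the DAC link up on every boot
ethtool -s eth2 speed 10000 duplex full autoneg off
```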
Once the link came up, I migrated Unraid's primary interface from eth0 to eth2 and moved the Docker bridge to match. An iperf3 test between Unraid and the Proxmox host came in at *3.65 Gbits/sec* — more than 3x what 1GbE could theoretically deliver. Real-world media stack transfers are now disk-bound rather than network-bound, which is exactly where the bottleneck should be.
The Hardware Shuffle
The upgrade required pulling the GTX 1060 — it had been sharing Tdarr transcoding duties alongside the GTX 1070, but the 1070 alone handles the workload fine. The 1060 went to a friend building a budget gaming PC. Since the BCM57810 NIC was pulled from an unused slot in the R730, the total hardware cost for the Unraid 10GbE upgrade was limited to that of a DAC cable from Amazon.
7. Tdarr Transcoding: Reclaiming 6.34 TB
With the Unraid array pushing 85% capacity across ~25TB of media, I needed a storage strategy beyond "buy more drives." Tdarr's H.264 → H.265 transcoding pipeline turned out to be the answer.
The initial movie library pass was the proof of concept — about 5TB of savings, dropping array utilization from 85% down to 65.3%. GPU-accelerated NVENC encoding on the GTX 1060 processed files at 3–5x realtime. With that working, I expanded to the TV library: 47 series including 38 seasons of The Simpsons, 17 of It's Always Sunny, 16 of Bob's Burgers, 15 of King of the Hill, plus complete runs of The X-Files, The Rookie, Parks and Recreation, Brooklyn Nine-Nine, and more. That's an estimated 2,500–3,500 episode files.
With 2 concurrent GPU workers running 24/7, the queue cleared within 3–4 days. Total space reclaimed: 6.34 TB — enough to buy the array another year or two before I need to think about expanding capacity (or actually deleting content).
A few lessons from getting Tdarr dialed in: Pascal-generation NVIDIA GPUs (GTX 1070/1060) don't support B-frames for HEVC encoding. I had widespread transcode failures until I tracked this down and disabled B-frames in the workflow config. Also worth noting: Tdarr is scheduled around Plex usage patterns so active streaming sessions don't compete with transcoding jobs.
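Under the hood Tdarr drives ffmpeg, so the B-frame fix amounts to something like this invocation (a sketch, not the exact plugin output — the preset and quality values are assumptions):

```shell
# HEVC NVENC with B-frames disabled (required on Pascal GPUs)
ffmpeg -hwaccel cuda -i input.mkv \
  -map 0 -c:v hevc_nvenc -preset p5 -rc vbr -cq 26 \
  -b_ref_mode 0 \
  -c:a copy -c:s copy output.mkv
```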
8. Homelab Documentation: Wiki.js
At some point the infrastructure grew past what I could keep in my head. IP addresses, port mappings, Traefik router configurations, Cloudflare tunnel routes, which container runs what — I needed actual documentation. This comes in handy not only for myself, but also for feeding into Claude when I begin a new conversation. (That way I'm not constantly starting over from scratch.)
I started with MkDocs + Material theme, which produced a clean static site. But the "edit markdown → rebuild → deploy to Nginx" workflow had just enough friction that I knew I'd never keep it updated. I evaluated Outline and Wiki.js and landed on Wiki.js for three reasons: built-in local auth (Outline requires an external OIDC provider), a solid Markdown editor with live preview, and native Git sync — every edit automatically commits to a private GitHub repo as plain Markdown files. If Wiki.js ever dies, my docs survive in a format I can use anywhere.
The setup is a lightweight Debian LXC on Proxmox running Wiki.js with PostgreSQL, accessible on my private domain behind Cloudflare Zero Trust. The whole thing — container creation to 24 pages of documentation — took about an hour. It now covers network topology, pfSense port forwarding rules, individual container specs, operational runbooks, and service documentation for everything from Ghost to Pterodactyl. Keeping it updated could be a challenge, and there's still plenty more to document, but it's far better than what I had before (i.e., nothing).
9. Outline Wiki for Unum
On the Unum side, I deployed Outline as an internal knowledge base for the team. Rather than paying for the hosted version, I self-hosted it on Proxmox using the Proxmox VE Helper Scripts — a lightweight LXC with PostgreSQL and Redis handled automatically.
External access goes through a Cloudflare Zero Trust tunnel (a common theme if you've read this far). Authentication is handled by Microsoft Entra ID, which ties into our existing Microsoft 365 setup — no separate credentials to manage. Transactional emails route through Brevo using the unumchurch.com domain with existing DKIM/SPF configuration.
If you're looking for a self-hosted Notion/Confluence alternative that doesn't require much ongoing maintenance, self-hosting Outline is worth considering. The UI is beautiful and allows for a considerable amount of customization. It requires some familiarity with Markdown, but that should hardly be considered a stumbling block. I've been increasingly using the Typora Markdown editor both personally and professionally instead of OneNote or Microsoft Word. It's a paid program, but it's well worth the minimal cost, IMHO.
10. Self-Hosted Print Server
This one was a simple problem: a USB-only Brother HL-L2300D laser printer that none of the wireless devices on the network could reach. The fix was a Raspberry Pi running CUPS, making the printer available via IPP to every device on the network.
The key lesson: use IPP Everywhere / driverless printing rather than trying to match a specific driver. The generic PostScript → gstoraster → brlaser path caused persistent issues; driverless IPP solved everything and enables auto-discovery via Avahi/mDNS. macOS finds it automatically via Bonjour. ChromeOS finds it via the Add Printer flow. While I'm still struggling with iOS integration, Android discovery was more-or-less flawless. And now Abby doesn't have to bug me any time she needs something printed!
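For reference, a minimal driverless setup on the Pi looks something like this (an assumption about the exact wiring — here ipp-usb bridges the USB connection to IPP on localhost, and the queue name is an example):

```shell
sudo apt install cups ipp-usb
# ipp-usb exposes the USB printer as IPP; register it driverlessly in CUPS
sudo lpadmin -p HL-L2300D -E -v "ipp://localhost:60000/ipp/print" -m everywhere
sudo lpadmin -p HL-L2300D -o printer-is-shared=true
sudo cupsctl --share-printers
```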
11. Home Assistant, Kasa Devices, & What's Next
Home Assistant now runs on Proxmox and is integrated with my TP-Link Deco mesh network, Kasa smart lights, and a Kasa smart plug running a fan in my office. The Deco integration feeds the client-count widget on the Homepage dashboard, which is legitimately useful for knowing what's connected where.
The Kasa plug experiment with my desk lamp didn't work — the lamp doesn't respond to power-cycle control the way a simple device would. So the next project is building an ESP32-based IR blaster with ESPHome to simulate remote button presses for the lamp. I'm also planning to add a motion sensor to detect when I'm actually sitting at my desk, so the automation is presence-aware rather than just time-based.
12. FreshRSS: Ditching the Algorithm
I set up FreshRSS as a self-hosted RSS reader accessible through Traefik. The motivation: an ad-free, algorithm-free way to follow the sites I actually care about without visiting each one individually multiple times a day.
The main friction point was getting the Google Reader API working for mobile sync with NetNewsWire on iOS. That required setting an API password via the CLI tool (./cli/update-user.php) and configuring a Cloudflare Access bypass policy to prevent authentication interference with the API endpoints. Once that was sorted, I pointed NetNewsWire at the proper URL and it syncs cleanly.
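Concretely, that CLI step looks something like this (the install path and username are examples; the flag names are from FreshRSS's CLI, but double-check against your version):

```shell
cd /usr/share/FreshRSS          # wherever FreshRSS is installed
sudo -u www-data php ./cli/update-user.php \
  --user alice --api_password 'a-long-random-password'
```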
Current feeds: LewRockwell.com, Mises.org, Antiwar.com, XDA-Developers, HackerNoon, and 9to5Mac.
13. What's Next
The near-term list:
- ESP32 IR blaster — ESPHome build with a motion sensor for office presence detection
- More tinkering in Home Assistant — I've just barely scratched the surface (and in doing so opened a can of worms!)
- Pterodactyl decommission — Verify the Unraid migration is fully stable, then retire the Pterodactyl LXCs on Proxmox
- Backup strategy — The runbook has a lot of "TBD" entries. Ghost, WooCommerce, Mealie, and Nextcloud all need proper backup procedures
- Server rack expansion — Moved from a 12U to a 27U rack to accommodate growth. More headroom for whatever comes next
The homelab is in a good place right now — stable, well-documented, and actually useful for the people who depend on it. But there's always something next on the list.
Thanks for reading. If you have questions about any of this or want to see config files for a specific setup, feel free to reach out.