From Zero to Production: An Engineer's Journal on Building a Mini-IDC

Disclaimer: This article is for technical sharing purposes only and does not involve any illegal activities.

I’ve wanted to write this for a long time but couldn’t find a window in my schedule. This year, some friends and I transformed a self-built house in a small rural town into a “Pseudo-IDC.” From server racks and dedicated AC units to enterprise broadband and UPS backups, we’ve got it all. It may not be a Tier 4 facility, but it looks more professional than many SME server rooms. It has been running stably for over half a year.

If you are looking to build your own “homelab-turned-IDC,” let’s exchange ideas—maybe I can help you skip a few pitfalls.

Initial state


Ground Rules (The “Shield” Section)#

Do not use residential broadband for PCDN or for-profit traffic hosting.

Do not use the network for any unauthorized Penetration Testing or attacks.

Secure your devices to prevent them from becoming Jump Servers for attackers.

The first rule is the ISP’s bottom line; the latter two are to keep you on the right side of the law.

Remember: Cybersecurity is built by and for the people—there is no such thing as a small security oversight.

Requirement Analysis#

“Data Center” is an abstract term. A Linux box in your bedroom is a data center; a row of 42U racks is also a data center. Our project is a “Pseudo-IDC” positioned in the middle, featuring:

  • Redundant Network Outbound
  • Sufficient Compute & Storage Resources
  • Remote Office Nodes
  • 7×24 Availability

Current assets: Rack, AC, UPS, and Enterprise Broadband—a complete basic ecosystem.



Build Process#

1. Server Infrastructure#

Our fleet consists entirely of decommissioned Dell PowerEdge servers. Dell is the choice for a reason: as a global giant, their hardware is cost-effective and documentation is easily accessible. We previously tried the Huawei 2288Hv3—which was even cheaper—but Huawei’s documentation is restricted to authorized partners, making it “wild-IDC-unfriendly.”

Example Node Configuration:

  • CPU: 2 × Intel Xeon E5-2680 v4 (2.4GHz, 14C/28T)
  • RAM: 320GB DDR4 2133MHz REG ECC
  • Storage: 6 × 4TB SAS
  • Networking: 4 × 10Gbps SFP+

Virtualization Platform

2. Internal Network (Intranet)#

Apart from the Out-of-Band (OOB) Management network, which still runs on 1Gbps RJ45, all other links have been upgraded to OM3 Multi-mode Fiber. With dual-port Link Aggregation (LACP), we hit a theoretical throughput of 20Gbps, and storage nodes use 40Gbps QSFP links. We initially planned for RoCE (RDMA over Converged Ethernet), but during debugging we found our core switch would crash whenever Jumbo Frames were enabled, so we have reverted to standard iSCSI for now.
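Before retrying RoCE, it helps to verify that every hop on a path actually honours the jumbo MTU. This is a minimal sketch (not our actual tooling) that probes the path with Linux `ping`'s Don't-Fragment mode; the 9000-byte MTU and the use of ICMP are assumptions you should adapt to your own fabric:

```python
import subprocess

def icmp_payload_for_mtu(mtu: int) -> int:
    """ICMP payload size that exactly fills an IPv4 packet of the given MTU
    (20-byte IPv4 header + 8-byte ICMP header)."""
    return mtu - 20 - 8

def jumbo_frames_ok(host: str, mtu: int = 9000) -> bool:
    """Ping once with Don't-Fragment set (-M do); if the oversized packet
    gets through unfragmented, the whole path supports the jumbo MTU."""
    size = icmp_payload_for_mtu(mtu)
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(size), host],
        capture_output=True,
    )
    return result.returncode == 0
```

Running this against each storage node from each compute node would have caught our switch's jumbo-frame problem before any RoCE traffic hit it.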

Core Switch: Cisco Nexus 3000 (N3K) Series

  • 48 × 10Gbps SFP+
  • 4 × 40Gbps QSFP

Core Switch

Nearly half the ports are occupied. We used a mix of Intel SFP+ modules sourced second-hand—definitely a “budget” vibe, but as long as the packets flow, we’re happy.

3. Network Outbound#

Static Public IPv4 addresses are a rare commodity in residential settings.

  • China Telecom: Offers dynamic IPv4 with the best quality, but expensive.
  • China Unicom: Offers dynamic IPv4 at a reasonable price.
  • China Mobile: Massive “Carrier-Grade NAT” (but cheap/free).

Luckily, my friend’s family business provided an Enterprise Line (300Mbps Up / 1000Mbps Down), with residential broadband as a backup. For remote access, we use DDNS (via the open-source ddns-go). Compared to writing custom scripts back in 2018, the ecosystem has truly evolved.
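We use ddns-go as-is, but the core loop of any DDNS client is small. Here is a hedged sketch of that loop: the echo-service URL is just one public example, and the actual record update (a call to your DNS provider's API) is left out:

```python
import socket
import urllib.request

def current_public_ip(echo_url: str = "https://api.ipify.org") -> str:
    """Ask a public IP-echo service what our WAN address looks like from outside."""
    with urllib.request.urlopen(echo_url, timeout=5) as resp:
        return resp.read().decode().strip()

def resolved_ip(hostname: str) -> str:
    """What the DNS record currently points at."""
    return socket.gethostbyname(hostname)

def needs_update(public_ip: str, dns_ip: str) -> bool:
    """Only touch the record when it has actually drifted,
    to stay well under provider API rate limits."""
    return public_ip != dns_ip
```

A cron job comparing `current_public_ip()` with `resolved_ip()` and calling the provider API when `needs_update()` is true is essentially what the 2018-era custom scripts did; ddns-go wraps the same idea with provider integrations and a UI.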

Network Topology Diagram

4. Remote Connectivity: VPN Tunnels#

Since the facility is nearly 100km away, 24/7 on-site presence is impossible. We established several encrypted tunnels for remote management. This setup involves Network Topology, VPN Protocols, Port Forwarding, and SDN (Software-Defined Networking). I’ll write a dedicated deep-dive tutorial on this later.

Currently, the IDC supports:

  • Automated daily backups for my Cloud VPS.
  • Off-site data synchronization for my home NAS.
  • Multiple Database instances.
  • Local Code Repositories (Git).
  • Remote Desktop for work.
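The nightly VPS backup in the first bullet can be sketched as a pull-style rsync wrapper that lands each run in a date-stamped folder. The remote path and backup root below are hypothetical placeholders, not our real layout:

```python
import datetime
import subprocess

def backup_dir(base: str = "/srv/backups/vps") -> str:
    """Date-stamped target directory so each nightly run is kept separately."""
    return f"{base}/{datetime.date.today():%Y-%m-%d}"

def run_backup(remote: str = "vps:/var/www/", base: str = "/srv/backups/vps") -> int:
    """Pull the remote tree over SSH into today's folder.
    --delete keeps the copy exact; rsync's --link-dest could additionally
    hard-link unchanged files against yesterday's run to save space."""
    dest = backup_dir(base)
    return subprocess.run(["rsync", "-az", "--delete", remote, dest]).returncode
```

Driven from cron on an IDC node, this pulls data toward the rack, so the VPS never needs credentials for the home network.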

Cloud server auto backup


Monitoring & Operations#

The “Wild IDC” is now largely automated.

Environmental Monitoring: We leverage the Xiaomi/Mi Home ecosystem (Temperature, Humidity, Power, and Cameras). It’s surprisingly robust for the price.

Mi Home

Server Monitoring: We migrated from Zabbix to Beszel. Zabbix was too “heavy” for a non-professional data center. For network uptime, I wrote a small heartbeat script that pushes alerts via Feishu (Lark) webhooks if the link drops.
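A heartbeat script like this is essentially a ping wrapped around a webhook POST. Below is a minimal sketch, not the actual script: it assumes a Feishu custom-bot webhook (the token URL is a placeholder) and uses a public anycast resolver as the probe target:

```python
import json
import subprocess
import urllib.request

# Placeholder: replace with your own Feishu custom-bot webhook URL.
WEBHOOK = "https://open.feishu.cn/open-apis/bot/v2/hook/<your-token>"

def link_up(host: str = "223.5.5.5") -> bool:
    """One ICMP probe with a short timeout is enough for a coarse heartbeat."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        capture_output=True,
    ).returncode == 0

def feishu_payload(text: str) -> bytes:
    """Feishu custom bots accept a simple text-message JSON body."""
    return json.dumps({"msg_type": "text", "content": {"text": text}}).encode()

def alert(text: str) -> None:
    """POST the alert to the webhook; Feishu relays it to the chat."""
    req = urllib.request.Request(
        WEBHOOK,
        data=feishu_payload(text),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

# Cron idea:  * * * * *  run the script; call alert() when link_up() is False.
```

One caveat worth noting: if this runs inside the IDC and the uplink dies, the alert cannot leave either, so in practice the probe should run from an outside vantage point (e.g. the cloud VPS) pinging inward.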

Lark

Overall, we’ve pieced together a stable and maintainable monitoring system at minimal cost.

Server Rack


End#

🫶 The reason for doing this is simple: Passion drives everything. As for the future, perhaps this passion will quietly bloom into something even more practical.

From Zero to Production: An Engineer's Journal on Building a Mini-IDC
https://fuwari.vercel.app/posts/e4d4352e-3398-43e0-a3b6-1e0576f28517/
Author
Ryan Zhang
Published at
2025-11-17
License
CC BY-NC-SA 4.0
This content has been translated with the assistance of AI tools, including ChatGPT, Gemini, and Qwen. While efforts have been made to ensure accuracy and clarity, minor discrepancies may exist. Please refer to the original text for authoritative interpretation if needed.