Disclaimer: This article is for technical sharing purposes only and does not involve any illegal activities.
I’ve wanted to write this for a long time but couldn’t find a window in my schedule. This year, some friends and I transformed a self-built house in a small rural town into a “Pseudo-IDC.” From server racks and dedicated AC units to enterprise broadband and UPS backups, we’ve got it all. It may not be a Tier 4 facility, but it looks more professional than many SME server rooms. It has been running stably for over half a year.
If you are looking to build your own “homelab-turned-IDC,” let’s exchange ideas—maybe I can help you skip a few pitfalls.

Ground Rules (The “Shield” Section)
Do not use residential broadband for PCDN or for-profit traffic hosting.
Do not use the network for any unauthorized Penetration Testing or attacks.
Secure your devices to prevent them from becoming Jump Servers for attackers.
The first rule is the ISP’s bottom line; the latter two are to keep you on the right side of the law.
Remember: Cybersecurity is built by and for the people—there is no such thing as a small security oversight.
Requirements Analysis
“Data Center” is an abstract term. A Linux box in your bedroom is a data center; a row of 42U racks is also a data center. Our project is a “Pseudo-IDC” positioned in the middle, featuring:
- Redundant Network Outbound
- Sufficient Compute & Storage Resources
- Remote Office Nodes
- 7×24 Availability
Current assets: Rack, AC, UPS, and Enterprise Broadband—a complete basic ecosystem.

Build Process
1. Server Infrastructure
Our fleet consists entirely of decommissioned Dell PowerEdge servers. Dell is the choice for a reason: as a global giant, their hardware is cost-effective and documentation is easily accessible. We previously tried the Huawei 2288Hv3—which was even cheaper—but Huawei’s documentation is restricted to authorized partners, making it “wild-IDC-unfriendly.”
Example Node Configuration:
- CPU: 2 × Intel Xeon E5-2680 v4 (2.4GHz, 14C/28T)
- RAM: 320GB DDR4 2133MHz REG ECC
- Storage: 6 × 4TB SAS
- Networking: 4 × 10Gbps SFP+

2. Internal Network (Intranet)
Apart from out-of-band (OOB) management, which still runs over 1Gbps RJ45 copper, all other links have been upgraded to OM3 multi-mode fiber. With dual-port link aggregation (LACP), we hit a theoretical throughput of 20Gbps. Storage nodes use 40Gbps QSFP links. We initially planned for RoCE (RDMA over Converged Ethernet), but during debugging we found the core switch would crash whenever Jumbo Frames were enabled, so we have reverted to standard iSCSI for now.
Core Switch: Cisco Nexus 3000 (N3K) Series
- 48 × 10Gbps SFP+
- 4 × 40Gbps QSFP

Nearly half the ports are occupied. We used a mix of Intel SFP+ modules sourced second-hand—definitely a “budget” vibe, but as long as the packets flow, we’re happy.
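One caveat about the 20Gbps LACP figure above: it is aggregate bandwidth, not per-flow. The bond hashes each flow onto a single member link, so any individual TCP stream still tops out at 10Gbps. A simplified Python sketch of a layer3+4-style hash policy (illustrative only; the Linux bonding driver's actual formula differs):

```python
import ipaddress

def select_member_link(src_ip: str, dst_ip: str,
                       src_port: int, dst_port: int,
                       n_links: int = 2) -> int:
    """Pick a bond member link for a flow by hashing its addresses and ports.

    Simplified layer3+4 policy: XOR the fields together, then take the
    result modulo the number of member links. Every packet of one flow
    maps to the same link, which keeps TCP segments in order.
    """
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d ^ src_port ^ dst_port) % n_links

# The same 5-tuple always lands on the same link: no per-packet spraying,
# hence a single iSCSI or TCP stream cannot exceed one link's 10Gbps.
link_a = select_member_link("10.0.0.5", "10.0.0.9", 51000, 3260)
link_b = select_member_link("10.0.0.5", "10.0.0.9", 51000, 3260)
assert link_a == link_b
```

Multiple parallel flows (different source ports) can spread across both links, which is why multi-session workloads do see close to 20Gbps.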
3. Network Outbound
Static Public IPv4 addresses are a rare commodity in residential settings.
- China Telecom: Offers dynamic IPv4 with the best quality, but expensive.
- China Unicom: Offers dynamic IPv4 at a reasonable price.
- China Mobile: Everything sits behind Carrier-Grade NAT (CGNAT), but it is cheap or even free.
Luckily, my friend’s family business provided an Enterprise Line (300Mbps Up / 1000Mbps Down), with residential broadband as a backup. For remote access, we use DDNS (via the open-source ddns-go). Compared to writing custom scripts back in 2018, the ecosystem has truly evolved.
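Conceptually, what ddns-go automates is a simple loop: detect the current public IP, compare it with the last published record, and push an update only when it changes. A minimal sketch of that loop (the echo URL is just an example endpoint, and `update_dns` stands in for whatever provider API you use; ddns-go handles all of this for you):

```python
import urllib.request

def get_public_ip(echo_url: str = "https://4.ipw.cn") -> str:
    """Ask a plain-text IP-echo service what our outbound address is.

    Any echo service that returns the bare IP works here; the URL above
    is only an example, swap in one you trust.
    """
    with urllib.request.urlopen(echo_url, timeout=5) as resp:
        return resp.read().decode().strip()

def sync_record(current_ip: str, cached_ip: str, update_dns) -> str:
    """Push a DNS update only when the dynamic IP actually changed,
    to stay well inside provider API rate limits.

    update_dns is a placeholder callback for the provider's API.
    Returns the IP to cache for the next run.
    """
    if current_ip != cached_ip:
        update_dns(current_ip)
    return current_ip
```

Run it from cron every few minutes and remote access survives the ISP reassigning your dynamic IP.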

4. Remote Connectivity: VPN Tunnels
Since the facility is nearly 100km away, 24/7 on-site presence is impossible. We established several encrypted tunnels for remote management. This setup involves Network Topology, VPN Protocols, Port Forwarding, and SDN (Software-Defined Networking). I’ll write a dedicated deep-dive tutorial on this later.
Currently, the IDC supports:
- Automated daily backups for my Cloud VPS.
- Off-site data synchronization for my home NAS.
- Multiple Database instances.
- Local Code Repositories (Git).
- Remote Desktop for work.
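The daily VPS backup in that list is, at its core, just a scheduled pull over SSH. One common way to assemble such a job is rsync driven by cron or a systemd timer; a sketch under that assumption (host names and paths below are made up for illustration):

```python
from datetime import date

def build_backup_cmd(remote: str, src: str, dest_root: str) -> list[str]:
    """Assemble an rsync pull into a dated destination directory.

    --archive preserves permissions and timestamps, --compress saves
    WAN bandwidth, --delete mirrors removals. Adding --link-dest
    against the previous snapshot would give cheap dedup (omitted here).
    """
    dest = f"{dest_root}/{date.today().isoformat()}"
    return ["rsync", "--archive", "--compress", "--delete",
            f"{remote}:{src}", dest]

# Example invocation (hypothetical host and paths):
cmd = build_backup_cmd("backup@vps.example.com", "/var/www/", "/tank/backups/vps")
# subprocess.run(cmd, check=True)  # run on a host with SSH keys in place
```

Pulling from the IDC side (rather than pushing from the VPS) means the VPS never needs credentials into the home network, which limits blast radius if it is compromised.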

Monitoring & Operations
The “Wild IDC” is now largely automated.

Environmental Monitoring: We leverage the Xiaomi/Mi Home ecosystem (temperature, humidity, power, and cameras). It’s surprisingly robust for the price.

Server Monitoring: We migrated from Zabbix to Beszel. Zabbix was too “heavy” for a non-professional data center. For network uptime, I wrote a small heartbeat script that pushes alerts via Feishu (Lark) webhooks if the link drops.
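The heartbeat script itself is nothing fancy: Feishu custom-bot webhooks accept a small JSON body (`msg_type` plus `content`), so an alert reduces to one HTTP POST when a ping probe fails. A stripped-down sketch along those lines (the webhook token and probe target are placeholders, not our real config):

```python
import json
import subprocess
import urllib.request

# Placeholder: paste your own custom-bot webhook URL here.
WEBHOOK = "https://open.feishu.cn/open-apis/bot/v2/hook/<token>"

def build_alert(text: str) -> bytes:
    """Feishu custom-bot plain-text message payload."""
    return json.dumps({"msg_type": "text",
                       "content": {"text": text}}).encode()

def link_is_up(host: str) -> bool:
    """Send one ICMP probe; return code 0 means the host answered."""
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          capture_output=True).returncode == 0

def heartbeat(host: str = "223.5.5.5") -> None:
    """Probe a well-known host and push a Feishu alert if it is dead."""
    if not link_is_up(host):
        req = urllib.request.Request(
            WEBHOOK,
            data=build_alert(f"Uplink down: no reply from {host}"),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)
```

In practice you also want rate limiting so a flapping link does not spam the group chat, and a second probe target so one unreachable host does not trigger a false alarm.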

Overall, we’ve pieced together a stable and maintainable monitoring system at minimal cost.

End
🫶 The reason for doing this is simple: Passion drives everything. As for the future, perhaps this passion will quietly bloom into something even more practical.