My TuringPi Cluster Hardware: From Kickstarter to Reality
Three years from Kickstarter backing to complete build: documenting my TuringPi 2 cluster with 4x RK1 modules, storage strategy, and lessons learned.
Introduction
After three blog posts about setting up the platform to document this journey, it's time to talk about the actual hardware: my TuringPi 2 cluster that's been sitting assembled and waiting for me to get started.
I backed the TuringPi 2 Kickstarter campaign because I wanted to build a real Kubernetes cluster at home - not just virtual machines on a single server, but actual separate compute nodes in a compact, power-efficient form factor. ARM64, Kubernetes, and learning by doing - that's the goal.
🏠 This is part of the Homelab Journey series - building a production Kubernetes cluster from scratch.
Other posts in this series:
- My TuringPi Cluster Hardware (you are here)
- Coming next: Installing Talos Linux
Why TuringPi?
Before diving into specs, let me explain why I chose this platform over alternatives like Raspberry Pi clusters or mini PCs - and how I got here.
The Journey from Interest to Reality
I'd been following TuringPi for a while when I learned they were working on version 2 of their board. In January 2022, I added myself to the notification list, eager to see what they'd create.
After some delays (as is typical with hardware projects), the Kickstarter campaign finally went live. I jumped on it immediately - backer #758. This wasn't just about getting hardware; I wanted to support innovative projects that make cluster computing accessible to homelab builders.
My initial order (December 2022):
- TuringPi 2 board
- 4x CM4 adapter modules
- Pico PSU for power
- Shield accessory
- Crucially: Made sure to get the NVMe option (not yet standard at the time)
Everything arrived in January 2023. I had 3 Raspberry Pi CM4 modules already, so I could start experimenting immediately.
The CM4 Frustration
Here's where the project stalled for months: my CM4 modules had local storage (eMMC), and reflashing them to test different OS installations was cumbersome. Each time I wanted to try a new OS or configuration, it meant physically accessing the modules, dealing with boot mode pins, and flashing via USB.
For someone who wanted to experiment and learn, this friction was too much. The hardware sat assembled but unused. I put it aside, waiting for a better solution.
The RK1 Game Changer
When TuringPi announced the RK1 modules (ARM64, more powerful than CM4, easier to work with), I knew this was what I needed.
July 19, 2023: Ordered 3x RK1 modules
End of January 2024: RK1 modules arrived
The RK1s have 32GB of built-in eMMC that's much easier to flash, and they're more powerful than the CM4. This solved my reflashing friction problem.
But I still didn't have the case. Running this as bare hardware on my desk didn't feel right for infrastructure I wanted to run 24/7.
Finally Complete
September 2024: Ordered the TuringPi official case and a 4th RK1 module (to complete the set of 4)
January 2025: Case and 4th node arrived
Finally, after nearly 3 years from initial Kickstarter backing, I had everything:
- TuringPi 2 board
- 4x RK1 8GB modules
- Professional case
- All storage drives installed
- Clean, professional build ready for 24/7 operation
The hardware has been assembled and waiting. Now it's time to actually use it.
What Attracted Me to TuringPi
Looking back at why I backed this project in the first place:
1. Real Cluster Hardware
- Actual separate compute nodes, not VMs
- Each node runs independently
- True distributed computing experience
- Realistic representation of production Kubernetes
2. Compact and Integrated
- All nodes on a single board
- Shared power supply
- Built-in networking between nodes
- Professional appearance with the case
3. ARM64 Architecture
- Lower power consumption than x86
- Growing ecosystem for ARM-based servers
- Forces me to work with multi-architecture considerations
- Aligns with where the industry is heading
4. Learning Platform
- Perfect for understanding cluster concepts
- Manageable scale (4 nodes)
- Affordable compared to enterprise hardware
- Can run 24/7 without huge power bills
5. Kickstarter Community
- Early access to innovative hardware
- Active community of builders
- Documentation and shared experiences
- Supporting a cool project
I could have built a cluster with 4 Raspberry Pis, but the TuringPi integrates everything cleanly - no messy wiring, no separate power supplies for each node, no network switch clutter. It's professional-looking hardware that I can learn on.
The Hardware
TuringPi 2 Board
Source: Kickstarter campaign (original backer)
Form factor: Mini-ITX sized board
Key features:
- 4 compute module slots (Raspberry Pi CM4 compatible)
- Onboard networking between nodes
- Baseboard Management Controller (BMC) for remote management
- Multiple storage options (NVMe, SATA)
- Standard ATX power connector
Why the 3-year journey:
- January 2023: Board arrived, started with CM4 modules
- Frustration with CM4 reflashing process led to stalling
- January 2024: RK1 modules arrived, solved the friction problem
- Still waiting for professional case
- January 2025: Case finally arrived, complete build possible
- Now ready to actually deploy this properly
Compute Modules: 4x RK1 8GB
Specifications per node:
- CPU: Rockchip RK3588 (ARM64)
- 4x Cortex-A76 @ 2.4GHz
- 4x Cortex-A55 @ 1.8GHz
- 8-core heterogeneous architecture
- RAM: 8GB LPDDR4
- Built-in eMMC: 32GB per node
- M.2 slot: NVMe support on each node
- Gigabit Ethernet
Why RK1 over alternatives:
- More powerful than Raspberry Pi CM4
- 8GB RAM is reasonable for containerized workloads
- Built-in eMMC means OS can be separate from application storage
- Good balance of performance and power efficiency
- Built-in NPU (6 TOPS) for potential AI/ML workloads
Total cluster resources:
- 32 CPU cores (4 nodes × 8 cores)
- 32GB RAM (4 nodes × 8GB)
- 128GB eMMC storage (4 nodes × 32GB)
For a homelab Kubernetes cluster, this is plenty to learn with and run real workloads.
Storage Configuration
This is where it gets interesting. I have multiple storage options, and I'm still figuring out the best use for each.
Primary Storage: 4x 250GB NVMe
Configuration:
- One 250GB NVMe drive per RK1 node
- Connected via M.2 slots on each compute module
- Fast, local storage for each node
Planned use:
- Application data storage
- Container persistent volumes
- Ideal for workloads that need fast I/O
The challenge: How to make this storage available across the cluster? Options I'm considering:
- Rook/Ceph - Distributed storage system that pools NVMe across nodes
- Longhorn - Kubernetes-native distributed storage (lighter than Ceph)
- Local persistent volumes - Each pod pinned to a specific node's storage
- NFS exports - Simple but not very cloud-native
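For the local persistent volume option, here's a minimal sketch of what a node-pinned volume could look like. The node name `node1`, the mount path `/var/mnt/nvme`, and the storage class name are placeholder assumptions, not my actual configuration:

```yaml
# Hypothetical local PersistentVolume pinned to one node's NVMe drive.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nvme-node1
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme        # would need a matching no-provisioner StorageClass
  local:
    path: /var/mnt/nvme               # assumed mount point, not verified on Talos yet
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1               # placeholder hostname
```

The trade-off is clear: no replication traffic on the network, but any pod using this volume is stuck on that node.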
My concern is the 1Gb network between nodes. With distributed storage, data needs to replicate across nodes, which means network bandwidth becomes a bottleneck. Will 1Gb Ethernet be enough for reasonable performance? I'll need to test and find out.
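To put that concern in numbers, here's a quick back-of-envelope sketch. The ~117 MB/s goodput figure is an assumption for a healthy 1GbE TCP link, not a measurement from this cluster:

```python
# Back-of-envelope: how long does replica traffic take over 1GbE?
# 117 MB/s is an assumed real-world TCP goodput for gigabit Ethernet,
# not a value measured on this hardware.
GIGABIT_MB_PER_S = 117.0

def replication_seconds(data_gb: float, replicas: int = 2) -> float:
    """Seconds to push the extra replica copies of data_gb gigabytes
    over a single saturated 1GbE link."""
    extra_copies = replicas - 1
    return data_gb * 1024.0 * extra_copies / GIGABIT_MB_PER_S

# Worst case: rebuilding a full 250GB replica after a node failure.
minutes = replication_seconds(250, replicas=2) / 60
print(f"~{minutes:.0f} minutes at line rate")
```

Even at line rate, rebuilding a full 250GB replica ties up the link for over half an hour. Real distributed-storage traffic is burstier than that, but the ceiling is what it is.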
OS Storage: 32GB eMMC per Node
Configuration:
- Built-in to each RK1 module
- Separate from NVMe storage
- Fast enough for OS operations
Planned use:
- Talos Linux installation
- Keeps the OS layer separate from application data
- If I need to wipe and reinstall a node, application data on NVMe stays safe
This separation of OS and data storage is a best practice I learned from years in enterprise infrastructure. Don't mix your OS and your data on the same disk.
Bulk Storage: 2x 900GB SATA SSD
Configuration:
- Connected via SATA ports on TuringPi board
- Available through Node 3 (RK1 in slot 3)
- Large capacity compared to NVMe
Planned use (still figuring this out):
- Backup storage - Local backups of cluster data
- Central shared storage - NFS export for less performance-critical workloads
- Archive storage - Logs, old container images, snapshots
The challenge: These drives are only accessible through one node (Node 3). This means:
- They're a single point of failure if that node goes down
- Network access required from other nodes
- Need to design around this limitation
Possible solutions:
- Export via NFS from Node 3 to other nodes
- Use as backup target (not primary storage)
- Run specific workloads only on Node 3 that need the extra space
I'll need to experiment to see what works best.
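One direction I'm leaning toward: pinning anything that needs the SATA drives to Node 3 with a nodeSelector. A hedged sketch - the hostname `node3`, the image, and the mount path are all placeholders:

```yaml
# Hypothetical deployment pinned to the node with the SATA drives.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sata-backup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sata-backup
  template:
    metadata:
      labels:
        app: sata-backup
    spec:
      nodeSelector:
        kubernetes.io/hostname: node3   # placeholder hostname
      containers:
        - name: backup
          image: registry.example.com/backup:latest   # placeholder image
          volumeMounts:
            - name: sata
              mountPath: /data
      volumes:
        - name: sata
          hostPath:
            path: /var/mnt/sata         # assumed mount point for the SATA pool
```

This doesn't fix the single point of failure, but it makes the constraint explicit in the manifests instead of relying on luck in scheduling.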
Storage Summary
Total raw storage:
- 128GB eMMC (OS layer)
- 1TB NVMe (4 × 250GB, fast application storage)
- 1.8TB SATA SSD (2 × 900GB, bulk storage)
Total: ~3TB of storage across different tiers and performance characteristics.
The Case
TuringPi Official Case
Why it matters:
- Professional appearance
- Proper cooling airflow
- Dust protection
- Looks like real infrastructure, not a hobby project
I waited months after the board arrived for this case to be released. Worth it. When you're building something you want to run 24/7, presentation matters. This sits on my network rack, and it looks like it belongs there.
Assembly Experience
Everything is now installed and assembled:
- All 4 RK1 modules seated in their slots
- 4 NVMe drives installed
- 2 SATA SSDs connected
- Case assembled with proper cable management
- Power connected (standard ATX power supply)
The build quality is solid. The board feels substantial, the modules seat firmly, and the case fits together well. TuringPi clearly put thought into the physical design.
Network Configuration
Current Setup
The TuringPi connects to my network via a single Gigabit Ethernet connection to my existing infrastructure.
My network uses VLANs for segmentation - guest network, IoT devices, production services, etc. I'll need to figure out how to configure this in a Talos Linux environment. Talos is API-driven and immutable, so traditional network configuration files don't apply here.
The 1Gb Bottleneck Question
Between the RK1 nodes, networking is 1Gb Ethernet. This is both:
- Sufficient for many workloads (most web services don't need more)
- Limiting for distributed storage with replication
In enterprise environments, storage networks are often 10Gb or faster. Here, I have 1Gb. This means:
- Storage replication will be slower
- Network-intensive applications might be constrained
- I'll need to be smart about what runs where
But this is also realistic - not everyone has 10Gb networking at home. Learning to work within constraints is valuable.
Network Plans
Short term:
- Single uplink to my main network
- Nodes communicate via TuringPi's built-in switching
- Access cluster via standard kubectl from my laptop
Future considerations:
- VLAN configuration in Talos
- Network policies in Kubernetes
- Ingress controller for external access
- Possible 10Gb upgrade if networking becomes a real bottleneck
What I Don't Know Yet
Let me be honest about what I'm still figuring out:
1. Distributed Storage Performance
Will Rook/Ceph or Longhorn perform acceptably over 1Gb networking with 250GB NVMe drives? I don't know. I'll need to:
- Set it up
- Run benchmarks
- Monitor actual performance with real workloads
- Decide if it's good enough or if I need a different approach
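When I get to benchmarking, something like this fio job file is what I have in mind for a per-node NVMe baseline before layering distributed storage on top (the directory is an assumed mount point; run with `fio nvme-baseline.fio` on each node):

```ini
; Hypothetical fio job file: baseline one node's NVMe drive.
[global]
ioengine=libaio
direct=1
group_reporting
directory=/var/mnt/nvme   ; assumed mount point, not my actual config

[seq-read]
rw=read
bs=1M
size=4G

[rand-write]
stonewall                 ; wait for seq-read to finish first
rw=randwrite
bs=4k
iodepth=32
size=1G
```

Comparing these local numbers against the same benchmark run through Longhorn or Ceph volumes should show exactly how much the 1Gb network costs.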
2. SATA Storage Strategy
How should I use the 2x 900GB SATA drives that are only accessible through Node 3?
- Pure backup storage?
- NFS export for specific workloads?
- Something else entirely?
This will require experimentation.
3. VLAN Configuration in Talos
I have VLANs on my network. Talos Linux is configured via YAML and API calls, not traditional network config files. How do I:
- Tag traffic with VLAN IDs?
- Configure network policies?
- Manage this across 4 nodes?
I'll be learning this as I go.
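From what I've read so far, Talos declares VLANs per interface in the machine config rather than in traditional network files. A hedged sketch - the interface name and VLAN ID are placeholders for my actual network:

```yaml
# Hypothetical fragment of a Talos machine config with a tagged VLAN.
machine:
  network:
    interfaces:
      - interface: eth0        # placeholder; actual RK1 interface name TBD
        dhcp: false
        vlans:
          - vlanId: 30         # e.g. the "production services" VLAN
            dhcp: true
```

If this works the way I expect, the same fragment gets applied to all 4 nodes, which is exactly the kind of declarative repetition I want.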
4. NPU for AI/ML workloads
Each RK1 module includes a 6 TOPS Neural Processing Unit (NPU) built into the RK3588 SoC - specialized hardware for AI/ML inference.
What I don't know:
- Are there mature, open-source frameworks that support ARM64 NPUs?
- Can I run practical local AI inference (small language models, image recognition)?
- How does NPU performance compare to CPU-based inference for real workloads?
- Is there a Kubernetes-native way to schedule workloads that use NPUs?
Why this matters: Having 4 NPUs (24 TOPS total) sitting unused feels like untapped potential. If viable tools exist, running local AI inference without cloud dependencies aligns perfectly with the self-hosting philosophy of this project.
Priority: After core cluster services are stable and running.
5. Power and Cooling
The cluster is assembled but not yet running 24/7. Questions I need to answer:
Power consumption:
- What's the actual power draw under load vs. idle?
- Not a critical concern, but interesting to measure
- Will help predict monthly power costs
Cooling setup:
- Each RK1 has its own built-in fan
- I did NOT install additional case fans
- Relying on RK1 module fans for now
Noise is my main concern:
- 4 small fans running simultaneously
- Will the noise be acceptable for 24/7 operation?
- Can I run this in my office/workspace without it being annoying?
This is probably the biggest unknown for daily usability. If the fans are too loud, I'll need to either move the cluster somewhere else or add quieter cooling solutions.
The Plan Forward
Now that the hardware is documented, here's what comes next:
Phase 1: OS Installation (Next Post)
- Research Talos Linux installation for RK1/ARM64
- Understand the bootstrap process
- Install Talos on all 4 nodes using the 32GB eMMC
- Get basic cluster connectivity working
- Document early attempts and lessons learned (I've already tried a few things that didn't work - those stories are coming)
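For reference, the generic Talos bootstrap flow looks roughly like the sketch below. All IPs and the cluster name are placeholders, and the RK1-specific image selection is still the open research question:

```shell
# Hypothetical bootstrap flow (placeholder IPs and names).
# Generate cluster configs (produces controlplane.yaml, worker.yaml, talosconfig):
talosctl gen config turing-cluster https://192.168.1.50:6443

# Push the config to each node booted into the Talos installer:
talosctl apply-config --insecure --nodes 192.168.1.51 --file controlplane.yaml
talosctl apply-config --insecure --nodes 192.168.1.54 --file worker.yaml

# Bootstrap etcd on the first control-plane node, then fetch kubeconfig:
talosctl bootstrap --nodes 192.168.1.51 --endpoints 192.168.1.51
talosctl kubeconfig --nodes 192.168.1.51 --endpoints 192.168.1.51
```

Whether the standard flow applies cleanly to RK1 hardware is exactly what the next post will find out.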
Phase 2: Storage Setup
- Decide on distributed storage solution (likely Longhorn to start)
- Configure NVMe drives as storage pool
- Set up the SATA drives for backups
- Test performance and adjust
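If I do start with Longhorn, a StorageClass tuned for the 1Gb constraint might look like this - a sketch based on Longhorn's documented parameters, not a tested configuration:

```yaml
# Hypothetical Longhorn StorageClass for the NVMe pool.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-nvme
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"        # 2 instead of 3 to halve replication traffic on 1GbE
  dataLocality: "best-effort"  # prefer a replica on the node running the pod
```

Two replicas instead of three trades some safety for less network load; whether that's the right trade is one of the things the benchmarks should answer.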
Phase 3: First Workloads
- Deploy something simple to validate the cluster works
- Set up monitoring (Prometheus/Grafana)
- Install a GUI management tool (Rancher or Portainer)
- Self-host my first real service (probably Gitea for Git)
Phase 4: Learning and Iteration
- Document what works and what doesn't
- Share mistakes and solutions openly
- Test different approaches to distributed storage
- Explore NPU capabilities once cluster is stable
- Investigate ARM64 AI/ML frameworks and practical local inference use cases
- Build confidence with Kubernetes concepts
- Iterate on configurations as I learn
Why Document This?
I'm documenting this journey because:
1. Accountability
Writing it down means I actually have to do it. The hardware has been sitting assembled for too long.
2. Learning by Teaching
Explaining what I'm doing forces me to understand it deeply. Writing these posts makes me think through decisions rather than just stumbling forward.
3. Helping Others
A TuringPi cluster with RK1 modules running Talos Linux is a specific, somewhat niche combination. Documentation is scattered. If I can help someone else avoid the pitfalls I'll encounter, this blog has value.
4. Personal Reference
In six months when something breaks, I'll be glad I documented how I set it up.
Hardware Summary
TuringPi 2 Cluster Configuration:
Compute:
- 4x RK1 8GB modules (32 cores, 32GB RAM total)
- ARM64 architecture (Rockchip RK3588)
Storage:
- 128GB eMMC (OS layer, 32GB per node)
- 1TB NVMe (application storage, 250GB per node)
- 1.8TB SATA SSD (bulk storage via Node 3)
Networking:
- 1Gb Ethernet between nodes
- Single uplink to main network
- VLAN capability (to be configured)
Physical:
- TuringPi official case
- Professional appearance
- Rack-mountable form factor
Status: Assembled, powered off, ready for OS installation
Lessons from 25+ Years in Infrastructure
After decades managing enterprise servers, a few principles guide this build:
1. Separate OS from Data
Using eMMC for Talos and NVMe for application data means I can reinstall the OS without touching user data. This is fundamental.
2. Plan for Failure
With 4 nodes, something will eventually fail. Design with redundancy and backups from day one.
3. Document Everything
In enterprise environments, documentation saves hours during outages. Same applies to homelabs.
4. Start Simple, Add Complexity
Get basic functionality working first. Distributed storage, monitoring, automation - those come after the cluster is stable.
5. Understand Your Constraints
1Gb networking is a constraint. Acknowledge it, design around it, don't pretend it doesn't matter.
What's Next?
The hardware is ready. The blog is configured. Time to actually build this cluster.
Next post: Installing Talos Linux on ARM64 - Getting the RK1 nodes operational and forming a Kubernetes cluster.
I'll document:
- Finding the right Talos image for RK1
- Bootstrap process
- Initial cluster formation
- First connectivity tests
- Any issues encountered (there will be issues)
The journey continues. Hardware documented. Time to make it do something useful.
Questions about TuringPi clusters or RK1 modules? Leave a comment below or reach out at igor@vluwte.nl. Especially if you've already gone through this process - I'd love to hear what worked (or didn't) for you.
← Previous: Configuring Ghost for Technical Blogging