Hi,
I thought I’d post my latest project. I use a bunch of Raspberry Pi compute modules as servers and decided to build myself a custom blade server to host them. This is replacing a bunch of old Intel rack mount servers on my home network - it’s a lot less power hungry! It’s been through a few iterations and is now working really well. This is the server:
It’s a 2U rack mountable unit, in an off-the-shelf ABS case with some custom 3D printed parts. The server takes up to 10 of these blades:
It’s got gigabit Ethernet, USB-A and HDMI on the front and an NVMe SSD slot on the board, along with an SD card slot and a battery-backed real-time clock. There’s a little OLED on the front displaying information about the blade, including the name and IP address to make it easy to identify for maintenance. There’s also an RP2040 on it for management.
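For the curious, here’s a rough sketch of the sort of lookup a blade does to get the name and IP for its OLED. This is plain Python for illustration only — the real firmware lives on the RP2040 talking to the CM4, so treat the function name and approach as assumptions, not the actual code:

```python
import socket

def blade_display_info():
    """Return (hostname, ip) for the blade's front-panel OLED.

    The UDP "connect" below never sends any packets; it just asks
    the kernel which local address would be used to route to the
    target, which is a common trick for finding the primary IP.
    """
    hostname = socket.gethostname()
    ip = "0.0.0.0"  # placeholder shown until the network is up
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))  # no traffic actually sent
            ip = s.getsockname()[0]
    except OSError:
        pass  # no route yet; keep the placeholder
    return hostname, ip
```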
The blades plug into a custom backplane which provides power and centralised management. There’s an LCD front panel providing basic tools for powering blades on and off and status information, and another compute module which acts as a management web server. It can be used to upload flash images to the blades via the backplane, and provides serial console access to the blades through the web interface.
I’ve been using this for a while now and was wondering if other folks out there are interested in it? It would be quite quick and easy for me to turn this into a product for sale if there was a market out there for it.
Please let me know any comments or suggestions you have, any feedback is appreciated!
Alastair
Okay, this is awesome!
This is a pretty awesome project, and is very well done! I’d love to see more pictures!
It looks like custom PCBs for the blades and the backplane? More details on that would be very interesting.
What all are you running on this system so far, and what software do you have plans to add? Are they running independently or as a cluster?
Summoning u/geerlingguy here, I’m sure he’ll love this project!
Thanks, that’s very kind. Here are links to some more pictures. The original ones were taken by my photographer wife and these ones were taken by me on my phone, so apologies for the drop in quality!
This https://imgur.com/9eqdiGn is a view of my development test unit on the bench with the cover off. I’m using an off-the-shelf 1U PSU for power as it’s a nice easy way of getting 100W+ all delivered at the right voltage levels. It’s also the limiting factor in the number of blades that the box will take, as it takes up a decent chunk of space.
The PSU leaves just enough space at the front for the front panel board https://imgur.com/OSK9ngE. I’m using off-the-shelf 2.4" LCD modules for the main screen and 0.91" OLED modules for the blade displays. The management CM4 is on its own little riser board as the CM is about 10mm too big to fit horizontally in the space. To keep costs down you’ll see I’m using PCIe x1 card edge connectors. These are WAY cheaper than the fancy purpose-built backplane connectors so do the job perfectly.
The management board, the backplane and the individual blades all have RP2040s on them for management. https://imgur.com/YpDE1Uo is a close up of this on the management board. I could probably have done it with cheaper microcontrollers, but the RP2040 isn’t overly expensive, is easy to get hold of, and it’s nice keeping it all in the Pi ecosystem.
The backplane’s got a couple of 74HC4067 multiplexers for switching the UARTs from the blade CMs down to the management module, and four FSUSB74s to do the same for the USB interface. There are also a few 9535 I/O expanders, both because I ran out of GPIOs on a single RP2040 but also to make routing easier on the 4-layer board.
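To illustrate the UART switching: the 74HC4067 routes its common pin to whichever of its 16 channels is encoded in binary on the four select lines (S3..S0). A sketch of the select-line logic the backplane RP2040 needs, written as plain Python for illustration (the helper name is made up):

```python
def mux_select_bits(channel):
    """Select-line levels (S0, S1, S2, S3) for a 74HC4067 16-channel mux.

    The 4067 connects its common pin to channel N when the binary
    value on S3..S0 equals N, so this just unpacks the channel
    number into individual bit levels for the GPIOs driving S0-S3.
    """
    if not 0 <= channel <= 15:
        raise ValueError("74HC4067 has 16 channels (0-15)")
    return tuple((channel >> bit) & 1 for bit in range(4))
```

In practice you’d drive one mux for each UART direction (TX and RX) with the same select lines, so one channel number picks a whole blade console.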
I’ve mentioned on another reply some plans for the software, but mainly planning to add full status info (stats from each of the blades), along with a serial console and USB provisioning.
For my original use case, I’m actually using them all as individual servers. It replaced a bunch of VMs running on some second hand enterprise kit I had. The Pis are able to do basically as good a job for what I need but consume much less power (the CM4 datasheet puts the typical maximum at about 7W, so even allowing for extra overhead you’re running 10 blades at less than 100W).
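To put rough numbers on that power budget (the 15W overhead allowance for the PSU losses, fans, displays and management module is a guess for illustration, not a measured figure):

```python
def power_budget(blades=10, watts_per_blade=7.0, overhead_watts=15.0):
    """Worst-case chassis draw in watts: every blade at the
    datasheet's typical maximum, plus a fixed overhead allowance
    for fans, displays and the management module."""
    return blades * watts_per_blade + overhead_watts

# Fully populated: 10 * 7W + 15W = 85W, comfortably under a 100W PSU
full_load = power_budget()
```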
I’ll need to do a proper blog post with all this at some point soon!
Please include some data about performance in the post! 👍🏻
Super interested, would be willing to build some boards to test myself as well. Are you going to post a git, or keeping it private?
Yeah I’ll do a proper blog post on this in the next few days and then open up the design files on a public repo. I’ve got a new version of the blade being manufactured now so I’ll upload the design once I’ve got it back and made sure it works. (The current version I’m using works perfectly except that I never noticed that I connected the USB the wrong way round, so I had to bodge-wire that out on my own units!)
What is the benefit of using Raspberry Pis for your use case? Low power usually comes with low performance. Or am I missing something? If I invest the same amount of money in different mini PCs (used on eBay or similar) wouldn’t I get more compute power for the money?
Yeah, this isn’t useful for many things, but as others have mentioned there are situations where it is. My original use case, the thing which prompted me to build this (other than just the fun of seeing if I could do it!) was to replace a whole load of low complexity VMs. I’m a freelance programmer and I do a bunch of hosting for both myself and some clients out of my home office. I’ve got a small rack setup in my attic with UPS, and have redundant fibre connections. It’s obvs nowhere near datacentre quality but it works well for my purposes.
I’d previously been using VMs running on some second hand enterprise x64 kit that I bought. Whilst this works great, the electricity bill is rather higher than I’d like! When I analysed what all the VMs are doing I realised that it’d be perfectly possible to do this on a Pi. In the dim and distant past I was a network infrastructure guy, so I started looking into “proper” server Pi solutions and before I knew it I was down this rabbit hole!
It works really well for low power server applications. It’s not in the same league as the big iron ARM mega-core servers (or indeed Xeon servers) for performance, but then it’s nowhere near that league for price either. I haven’t figured out an exact price if I was to sell it commercially, but it’d likely be in the $800 US price range without CMs. If you were to max that out with 4GB Pis that’d end up around $1500, which’d give you 40 cores of pretty decent performance and 40GB of RAM. The Gigabyte and Ampere Altra servers I’ve seen are awesome and way more powerful than this but are several times more expensive.
Indeed. But for 1500 USD I can build a brand new small form factor PC with 96GB of RAM and lots of compute power. Well, if it works for you, great. Certainly looks cool.
Probably a k3s cluster. The bigger constraint will be memory.
That’s awesome. Would definitely like to see technical specs/3d plans for it as a DIY project. You could even offer them pre-made for a premium since a lot of people don’t want to do the work.
I think I’ll probably do something like that. I’ll make it available as a full prebuilt unit but I’ll open source the design files for anyone that really wants to DIY or build their own spins. I’ve deliberately used an off-the-shelf case and PSU, and only components easily available in distribution, so that it’s easy to get the parts.
Definitely interested in the blade cards and backplanes.
Thanks, some more info on other replies and I’ll do a proper blog write up in the next few days.
Kudos to you sir. I’m first to jump against RPi in homelab posts but this is on a whole other level. I think everyone would love a detailed explanation on it.
The Compute Blade comes to mind and I’m drawing parallels between them. AFAIR, the Compute Blade does power and management over the front Ethernet port, which requires the PoE stuff to be there too. Does your backplane simplify the boards (and make the project cheaper)?
Thanks, that’s very kind. I’ve added some more detail on other replies and I think I’ll do a full blog post in the next couple of days.
There are definitely parallels with the Compute Blade project but there are a few differences. My blades are a bit simpler; they don’t have the TPM that the Compute Blade does, as I didn’t have any real need for it. The CB also packs more blades into the 19" width. This was another design decision on my part: I quite liked the short-depth case making the unit small, and I wanted to make sure there was plenty of airflow for cooling (tbh I didn’t need as much as I used!)
My unit is more focused on being like a traditional server unit, as that’s what my use case was. Centralised power, centralised management and provisioning etc. You’re correct, the Compute Blade uses PoE, and I did it through the backplane. My preference was for central management rather than per-blade, so that meant a backplane and it all flowed from there. It allows you to feed the USB and serial console into the management server which is great for provisioning and debugging. The displays are also born out of my days as a network infrastructure guy, where being able to see the server’s name and IP address on the physical unit would have been a godsend when doing maintenance! So I guess the design differences between this and the Compute Blade come down to my focus on server use rather than general-purpose compute.
I’d say it’s probably a bit cheaper using a backplane than PoE. The PoE adds a bit to the cost of each blade which would soon multiply up, plus the additional cost of a PoE switch vs non-PoE. I’m using an off-the-shelf ATX PSU and these are made in such huge quantities that the price per watt is difficult to beat.
I would start learning Kubernetes legit just to give myself an excuse to use this, it looks so damn cool!
Are you planning to make this available at all?
Yeah, at the very least I’ll hand-build a few units with the spares I’ve got here and make those available. If there’s enough demand I’ll potentially do a full production run. I’ll open source the designs too so folks can have a proper poke about in it :-)
Certainly interested, depending on the price. Or if you have any desire to open source / let folks get the PCBs printed, i’d love that as well!
I’ll probably do both! I’ve only done a rough costing so far but I think it’d be somewhere around $800ish USD for a 10 blade unit (without CMs of course.) I’ll also likely open source at least the schematics and firmware for if anyone fancied making their own version of it. I’ll do a blog post at some point soon about the design, and another once I’ve thought more about sales.
I am surprised I didn’t see anyone mention there is a commercial blade solution (finally shipping now after what feels like years on Kickstarter):
https://www.kickstarter.com/projects/uptimelab/compute-blade
OPs package is very cool though.
Yeah I know about that one, I looked at it when I first started thinking about using Pis to do the server stuff I wanted, but I couldn’t actually buy one then. So I built my own :-) As I mentioned on another post, there are a few differences around my focus on using this as a simple server system.
First thing that came to mind as well
It looks incredible! Great job! How does this not have way more upvotes?
It’s really really cool. Excellent work.
Thank you, much appreciated :-)
Very nice! Make your repo public! As far as making it a commercial product goes; know what you’re getting into… That would be a ton of work and investment. Nice to do as an experience maybe, but I don’t know if you should expect to really make money on it.
Totally understand what you mean. My background is as a freelance programmer and I have my own business doing this. I’ve never commercialised any hardware (though I’ve built plenty of stuff for my own use) so it’s a bit of a leap into the dark. I don’t imagine there’d be huge volumes so not expecting to make my fortune from it. I built this for my own use and now it’s done I’d be happy to make it available as a small run thing.
I’ll do a blog post with more design details soon and open up the design and firmware stuff on a public repo. It’s all done with open source tools anyway, all the design is in KiCad as I don’t do enough hardware work to justify the cost of something like Altium!
Wow this is wonderful work. Congratulations!
It would be even more awesome if it contained an internal switch, so that there’s a single 10GbE uplink coming out of the chassis, and another 1GbE going to the IPMI. Looking forward to you open sourcing this :D
I don’t think I’d ever need one of these in my home but this looks so friggin cool I’d want one just to have one 🤣. Helluva job! 10/10