After a 14-year hiatus from IT, I’ve decided to dive back into the field and am in the process of setting up my first home lab. My goal is to transition into an IT career, and I feel that hands-on experience with a home lab would be immensely beneficial.
I have some experience from the past; I still own a Cisco lab from my CCNA practice days. Now I’m looking to expand my setup by adding a server that can host VMs and let me experiment with different environments, such as Active Directory.
I’ve come across a Dell R730xd server being sold for £300 and I’m considering purchasing it. However, I’m not sure whether it’s still a relevant model for my needs, or whether it can run ESXi efficiently. Here are the specs:
- Model: Dell PowerEdge R730xd
- CPU: 2x Intel Xeon E5-2683 v3 (35 MB cache, 2.00 GHz, 14 cores each, 9.6 GT/s; 28 cores total)
- Memory: 128 GB DDR4 (upgradable to 512 GB or 768 GB)
- Storage: 8x 900 GB 2.5" SAS 10K disks
- RAID controller: PERC H730 (12 Gb/s SAS, 1 GB NV cache, RAID 0/1/5/6/10/50/60)
- Power supply: 2x 750 W Platinum
- Networking: Dell 4-port Gigabit Ethernet NIC; iDRAC8 with Enterprise license
Given these specifications, I’d love to hear your thoughts on whether this server is a good fit for my intended use. Is it overkill, or perhaps not sufficient? Any advice on whether this is a good deal and if it will suit my needs for learning and experimentation would be greatly appreciated.
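For a rough sense of whether 28 cores and 128 GB is overkill for an AD lab, here's a back-of-envelope sizing sketch. The per-VM sizes, the 4:1 CPU overcommit ratio, and the hypervisor reserve are illustrative assumptions, not recommendations:

```python
# Rough sizing sketch: how many lab VMs fit on 2x 14 cores / 128 GB RAM.
# All per-VM figures below are illustrative assumptions.

HOST_CORES = 28            # 2x E5-2683 v3, hyper-threading not counted
HOST_RAM_GB = 128
CPU_OVERCOMMIT = 4         # typical relaxed lab ratio: 4 vCPUs per core
HYPERVISOR_RESERVE_GB = 8  # RAM held back for the hypervisor itself

# (vCPUs, RAM in GB) per VM type -- assumed lab-scale sizing
vm_types = {
    "AD domain controller": (2, 4),
    "Windows client":       (2, 4),
    "Linux server":         (1, 2),
}

def max_vms(vcpus, ram_gb):
    """How many identical VMs fit, taking the tighter of the CPU and RAM limits."""
    cpu_limit = (HOST_CORES * CPU_OVERCOMMIT) // vcpus
    ram_limit = (HOST_RAM_GB - HYPERVISOR_RESERVE_GB) // ram_gb
    return min(cpu_limit, ram_limit)

for name, (vcpus, ram_gb) in vm_types.items():
    print(f"{name}: up to {max_vms(vcpus, ram_gb)} VMs")
```

With those assumptions, RAM becomes the binding constraint long before CPU does, which is why the replies below keep coming back to the 128 GB figure.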
Unless you need more than 128 GB of RAM, you’d be better off with a low-end desktop for most use cases.
I’ve got three R730xd’s right now. I love these machines and I love the power they have; I’ve got 22-core processors in all of them. I run XCP-ng on them, but you can run Proxmox, and you can run VMware. I believe (and there are other experts on here who will tell you) that VMware ESXi 7 will run just fine, but ESXi 8 might not on the E5-2600 v3. I had VMware on one of them in the past, but that’s gone away for me.
If your power bill is of interest to you, then it might not be the server for you, because once you put capacity on there it’s going to draw quite a few watts. I’m assuming you’re going to run it 24/7 like I do; after all, you want it up and running so it’s available, right?
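To put the power-bill point in numbers, here's a quick annual-cost sketch. The idle-wattage range and the 30p/kWh tariff are assumptions, not measurements; plug in your own figures:

```python
# Back-of-envelope annual electricity cost for a 24/7 server.

def annual_cost_gbp(avg_watts, pence_per_kwh):
    """Annual running cost in GBP for a constant average draw."""
    kwh_per_year = avg_watts / 1000 * 24 * 365
    return kwh_per_year * pence_per_kwh / 100

# A dual-CPU R730xd with 8 spinning SAS disks often sits somewhere
# around 150-250 W at light load (assumed range, not a measurement).
for watts in (150, 250):
    print(f"{watts} W -> GBP {annual_cost_gbp(watts, 30):.0f}/yr at 30p/kWh")
```

Even at the low end of that assumed range, running it 24/7 costs several hundred pounds a year, which is the deficit people keep raising against the £300 purchase price.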
They can be a little loud, but once you get it patched well, with updated firmware for the BIOS, iDRAC, and the LSI controller, the fans are able to ramp down. If that whole power thing is a concern, you can always remove one of the processors, immediately put a blank in place over the socket so you won’t bend the pins, and save the processor for the future. You could then move all the RAM (which would be an additional 64 GB) over to processor one, and you’d be all set, using less power and generating less heat.
I prefer the v4 chips in both Dells and HPs because the memory can run faster. However, if you don’t want to spend money on v4 chips, there’s no real reason to: the v3 chips run just fine, and the two you have in there are fine. (I have four HP 360 Gen 9s with the v4 version of that chip, with 14 cores.)
I’m not sure what the front storage looks like on that one, whether you got it with 16 or 24 bays, but if it comes with SAS drives, assuming they’re used enterprise SAS drives that weren’t certified, they’re probably going to die fairly soon. You have to assume that, at least, right? You can totally put in off-the-shelf consumer SATA SSDs or enterprise SAS SSDs instead.
You can add dual 10 Gbps or 40 Gbps networking pretty cheaply with an add-in module that’s available on eBay. I have an iSCSI host which serves up VM containers to my compute servers; that way all the containers live on shared backup storage with pretty good uptime, 99.9999% and all that jazz. You can always bump the power supplies up if you need to, again on eBay. I’m going to guess the system has the kit on the back that lets you put two drives on the far right rear of the chassis; you can put your boot drives there so you don’t use the front bays for booting. VMware can also technically boot from a SATA DOM or a USB drive. 28 cores, or actually even the 14 cores, will run a full Red Forest just fine, give you plenty of Linux VMs, and you could run a macOS VM on there too if you want.
I gave you all this information because there are a lot of pluses with the system you’re looking at, and the price is certainly reasonable; that’s about what I’m paying in America for them at a reseller. However, there are also deficits, a couple of which people have already pointed out in this thread. I would certainly say that a 12th-generation i7 or i9 from Intel, or any of the Ryzen 3900/3950 or 5900/5950 parts in a desktop PC, would probably do the same job. Those are of course available every day used at pretty decent prices. I don’t believe you’re going to match that server in price, but you’re also going to use less power, which may or may not be a factor to you (it’s not to me), and you’re going to generate less heat and less noise. Again, not a factor for me, but I think it’s worth mentioning if you’re diving into this. I’ve never had one of my R730s go bad. I’ve had two running since 2020, and I got a third just recently to round out a ménage à trois of TrueNAS servers.
On YouTube, you’re going to want to befriend the channel Cloud Ninjas. They have a complete playlist on the R730 servers telling you what processors, memory, and networking you can add; I think they even cover the LSI controllers. It’s very well presented, clear and concise. Obviously they want to sell hardware, but they do the community a big service by providing free videos actually showing you how to do things.
If you decide to buy it and keep it, and you want to run TrueNAS on there and pass through the LSI controller, you’ll need it in IT mode. The YouTube channel The Art of Server covers how to flash the PERC controllers to IT mode. Or you can always put in another LSI controller that’s already flashed to IT mode, and you’re all set.
That’s all I got off the top of my head.
The benefit of buying an R730 (I own two) comes if you’ve literally never laid hands on a server before. You’ll learn things you didn’t know existed coming from the consumer world: management NICs, iDRAC, RAID controllers, redundant power supplies, racks and rails, ECC memory, internal flash storage, etc. That is the value of purchasing a server.
Having said that, if you already know most of that stuff, absolutely do not buy an R730. They’re loud even with the fans ramped down, they’re power hogs, their depth means the space they take up is insane, and they’re crazy heavy and produce a lot of heat. There really is no advantage over a more modern desktop machine, which you could still run ESXi on just fine as long as you pick one with a compatible Intel NIC.
I run variants of the same thing. I’ve mostly stuck with the 3.5" versions because the drives tend to be cheaper, but I do have a 2.5" unit running because the price was good.
I run the 2.5-inch versions of the R730 in production with built-in 10GbE networking. They’re still supported on ESXi/vSphere 7, but I’m migrating to Proxmox with no issues. If you’re going to use ZFS/Ceph, delete any virtual disks first; otherwise the PERC disk controller will not find any disks when you configure it in HBA mode.
Just make sure to update the firmware to the latest version. When updating the iDRAC, update it incrementally, otherwise it may brick. Meaning: do NOT jump straight from your current version to the latest one. Do it step by step.
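The "step by step" rule amounts to walking through every intermediate release in order rather than jumping to the newest build. A tiny sketch of that path planning; the version strings are made up for illustration, so check Dell's support site for your iDRAC8's actual recommended upgrade path:

```python
# Sketch of an incremental-update path: apply every release newer than
# the current one, oldest first, instead of jumping straight to the latest.
# Version numbers below are invented for illustration only.

def upgrade_path(current, available):
    """All releases newer than `current`, ordered oldest-first."""
    def key(version):
        # "2.60.60.60" -> (2, 60, 60, 60) so versions compare numerically
        return tuple(int(part) for part in version.split("."))
    return sorted((v for v in available if key(v) > key(current)), key=key)

releases = ["2.40.40.40", "2.52.52.52", "2.60.60.60", "2.70.70.70", "2.75.75.75"]
print(upgrade_path("2.52.52.52", releases))
# -> ['2.60.60.60', '2.70.70.70', '2.75.75.75']
```

The numeric-tuple comparison matters: plain string sorting would mis-order versions like "2.9" and "2.10".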
I would advise going for v4 CPUs instead. You get more cores, power consumption is lower, and performance is better. You can find these CPUs on eBay, shipped, for about $50. The v4s are on Broadwell’s 14 nm process, whereas the v3s (Haswell) are 22 nm.