Current System
My current system lives inside a Node 804 case and includes the following components:
- MB: Gigabyte B450 Aorus M
- CPU: AMD Ryzen 5 1600
- GPU: NVIDIA GeForce 9800 GT
- OS: Ubuntu Server
- PSU: LC6460GP4
- Boot drive: Crucial MX500 SATA SSD
- Storage: 2x Seagate Exos X18 and 2x Seagate Exos 7E8
I’m using btrfs with two storage pools, each containing one 8TB and one 18TB drive. The second pool serves as a backup of the first.
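For reference, a layout like this can be sketched with btrfs commands. The device names, mount points, and the `single`/`raid1` profiles below are assumptions for illustration, not my exact setup:

```shell
# Pool A: one 18TB Exos X18 + one 8TB Exos 7E8 (device names are examples)
mkfs.btrfs -d single -m raid1 -L pool-a /dev/sda /dev/sdb
mount /dev/sda /mnt/pool-a

# Pool B: same layout, used as the backup target for pool A
mkfs.btrfs -d single -m raid1 -L pool-b /dev/sdc /dev/sdd
mount /dev/sdc /mnt/pool-b

# Backups could then run as read-only snapshots shipped via send/receive
btrfs subvolume snapshot -r /mnt/pool-a /mnt/pool-a/snap-$(date +%F)
btrfs send /mnt/pool-a/snap-$(date +%F) | btrfs receive /mnt/pool-b
```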
The setup has been running well for approximately two years. However, I’m gradually running out of storage space, and the upgradability isn’t as good as I’d like: both the PSU and the motherboard have only one free SATA connector left, because the boot drive also uses one.
Plan / Options
Knowing me and this hobby, I anticipate a gradual, ongoing addition of drives to my system, so I want that process to be as simple as possible. After doing some research, I’m thinking about separating the drives from the host system: the drives would live in a JBOD/DAS enclosure that meets their power and data needs, and that enclosure would then connect back to my host somehow.
To me, stepping into “Enterprise” hardware land is new and honestly a little intimidating so I wanted to get some input from the more experienced people around here.
The “plan” I’ve come up with so far doesn’t sound that complicated. As far as I understand, I’d need the following (sorry for the terminology):
- A JBOD enclosure with at least 12 hot-swappable SATA bays, its own PSU, and a SAS connector in the back
- An HBA that goes into the second PCIe slot on the B450 Aorus M
- A compatible SAS(?) cable
- A small rack to mount the enclosure in
Assuming all the components are compatible, this setup would let me add a new drive to the JBOD, see the drive “raw” on my host (which I believe is what “IT mode” on the SAS cards is for), format it, and add it to one of the two existing pools.
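If that understanding is right, the day-to-day workflow for growing a pool might look like the sketch below. Device and mount names are made up, and it assumes the HBA (in IT mode) passes each disk through as a plain block device:

```shell
# The new drive should appear as a raw block device
lsblk -o NAME,SIZE,MODEL

# Add it to an existing btrfs pool, then rebalance to spread data onto it
btrfs device add /dev/sdX /mnt/pool-a
btrfs balance start /mnt/pool-a

# Confirm the pool grew
btrfs filesystem show /mnt/pool-a
```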
Things I need help with
Obviously, picking the right components is the biggest challenge. Many posts here suggest the NetApp DS4246 enclosure as a good pick: they’re available at good prices, and there’s room for many drives.
But there are some open questions for me regarding the DS4246:
- Are my drives compatible? I’ve read numerous times that the maximum drive capacity is 4TB. I find that hard to believe, and it may just be due to the age of the posts I’ve read, but I want to be sure.
- Why are there usually (like here) two pairs of SAS and Ethernet connectors in the back? How would I connect from that interface to my host server? (Again, sorry for the terminology.)
And another more general question:
- What are the deciding factors when choosing the right HBA and cables? The prices I’ve seen so far range from $35 to $800 in both categories. This comment suggests the DS4246 in combination with an LSI 9201-8e/16e HBA. What specs do I have to look at to see whether that would be compatible? The same goes for finding a compatible cable.
Is my strategy viable in general? What are things I probably have not thought about?
I’m hoping you can help me resolve some of these questions and improve my storage setup.
Thank you for taking the time to read this!
Reply
As for why you’d want an HBA in a non-standard server: reliability. Sure, you could get something cheap, but you’re gambling with ease of use, bandwidth, stability, and, most importantly, peace of mind. HBAs may cost more, but I personally think they’re worth it. If you’re running server software, it may or may not play well with the cheap option.
While I don’t have a specific recommendation for the PSU, you’d want something modular with a high efficiency rating; this server will run 24/7.
If you do want to go the JBOD route, make sure you get an external HBA so you can route from your current server to the JBOD. I believe you can potentially do a fan swap; I’m sure someone has done it before. Personally, I said fuck it and built a server with a Storinator Q30 chassis that has reduced noise. Certainly not cheap, but you get what you pay for.
It also really depends on how seriously you want to take your data hoarding. If you aren’t concerned about drive bandwidth because you aren’t running all the drives at once, or if you don’t really care about uptime stability, then it can be done cheap-ishly. Good data protection and redundancy add up quickly, though, so keep in mind what data is invaluable and what isn’t. Backups are mandatory.
If you’re running a server OS like TrueNAS or Unraid with ZFS, you cannot use a USB connection for the drives, for two reasons: ZFS doesn’t like it and will behave erratically, and the connection can be spotty and drop out. Ubuntu Server might be fine with it, but if you ever switch to a data-safety-focused file system like ZFS, you’re asking for trouble down the line. I’m not sure how btrfs will behave. ZFS also doesn’t allow adding single drives one at a time, so you’d have to buy several at once.
Unfortunately, this hobby is expensive to do well. What you cheap out on now could bite you in the ass later if you end up doing far more than you originally intended. However, you know your needs better than I do. My media library grew far more than expected, and I now host for a large group of family members.