Curious what you’ve got installed on it. What do you use a lot but took a while to find? What do you recommend?
On Reddit, we had r/selfhosted. Do we have something like that here, in the new frontier?
There’s [email protected] (and apparently also [email protected] and [email protected] which popped up in the autocompleter).
Thank you! When I tried looking for them last night, I couldn’t find anything, so this is very much appreciated!
I’d like to build a NAS. Does anyone have a simple guide I could follow? I do have experience building my personal computers. I could search online for a guide, but a lot of the time small communities like this will have the end-all be-all guide that isn’t well known.
I don’t have one offhand, but a NAS at the homelab level is not that different from a server.
I have had success with getting a second-hand server with a moderately powerful processor (old i5 maybe?), a good 1/10GbE network card (which can be set up with bonding if you have multiple ports), and lots of SATA ports or a RAID card (you’ll need PCIe slots for the cards as well).
I would go with an even lower-power processor for power savings if that’s a concern. ECC RAM would be great, especially for ZFS/btrfs/XFS.
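If you go the ZFS route, the pool itself is only a couple of commands. A rough sketch, assuming four data disks and a pool named tank (device names are placeholders; check yours with lsblk first):

```
# Two mirrored pairs striped together -- roughly the ZFS equivalent of RAID 10.
# ashift=12 aligns the pool to 4K sectors, which suits most modern drives.
zpool create -o ashift=12 tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
zfs set compression=lz4 tank

# Quick way to see whether the box actually reports ECC RAM:
sudo dmidecode -t memory | grep -i "error correction"
```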
I’ve got a ‘NAS’ setup on my desktop computer/server. I use it for almost everything. It runs VMs and games and self-hosted servers, etc, etc. It runs Arch Linux but does it all. Plex/Sonarr/Radarr/qBittorrent.
24 TB of HDDs in RAID 10.
I haven’t found a good reason to keep a separate computer/server. It pretty much just always complicates the setup. If I need more separation, a VM is usually a better answer in most cases as far as I can see.
I’ve just been using an old laptop with jellyfin, radarr, sonarr and transmission.
I use mine for file storage, Pi-Hole, and Jellyfin mostly.
It’s something I always wanted to set up for my personal files, docs, media, etc., but I get dissuaded once I see Synology costs, hard drive requirements, RAID setup options, and just generally the power draw / heat & noise generation. Looking forward to answers here; I’d be very happy to get off cloud storage, but not if it’s a second job setting it up and maintaining it.
I’ve got an HP DL360 G9 running Ubuntu Server LTS and ZFS on Linux with 8× 1.2TB 10k disks, and an external enclosure (connected by external SAS) with 8× 2TB (3.5" SATA) disks. The 1.2TB disks are in a ZFS striped-mirror array (the RAID 10 equivalent) which has all our personal and shared documents, photos, etc. The 2TB disks are in a raidz2 (the RAID 6 equivalent) and store larger files.
It uses a stupid amount of power though (mainly the 10k disks) so it’s going to be replaced this year with something newer, not sure what that will look like yet.
Computer with Ubuntu Server, with a Ryzen APU (3400G), 16GB DDR4 RAM, and 2× 4TB WD Red CMR drives.
Use it as a media server for Jellyfin, and also as a file server using NFS. Works super awesome and I wish I had done this sooner.
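For anyone wanting to copy the NFS part, it’s only a few lines on Ubuntu Server. A minimal sketch, assuming the media lives in /srv/media, the LAN is 192.168.1.0/24, and the server resolves as nas.local (all placeholders):

```
# On the server: install the NFS server and export the media directory read-only to the LAN
sudo apt install nfs-kernel-server
echo '/srv/media 192.168.1.0/24(ro,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On a client: mount the share
sudo mount -t nfs nas.local:/srv/media /mnt/media
```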
I’m using a Synology setup. I thought I’d grab an off-the-shelf option as I have a habit of going down rabbit holes with DIY projects. It’s working well, doing a one-way mirror off my local storage with nightly backups from the NAS to a cloud server.
I use Synology. I’ve done FreeNAS, Openfiler, even just straight ZFS/Linux/SMB/iSCSI on Ubuntu and others. Synology works well and is quite easy to set up. I let the NAS do file storage and tie other computers to it (namely SFF Dell machines) to do the other stuff, like Pi-hole or Plex. Storage is shared from the NAS via CIFS/SMB or iSCSI.
Synology also has one of the best backups for home use imho with Active Backup for Business. It can do VMware, Windows, Mac, Linux, etc. I actually have an older second NAS for that alone. But you can do it all in one easily.
Mine currently runs on an old pi3 with an external hard drive plugged in via a powered usb hub. I’m using openmediavault at the moment, but I’m probably going to swap it over to just NFS when I get the chance. I’m also planning to swap out the single external drive for 4 drives in a soft RAID through LVM.
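In case it helps anyone planning something similar, LVM RAID over a handful of disks is fairly painless. A rough sketch, assuming four drives at /dev/sdb through /dev/sde and RAID 5 (device names, volume names, and the RAID level are just for illustration):

```
# Put the four disks under LVM and build one RAID 5 logical volume across them
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vg_nas /dev/sdb /dev/sdc /dev/sdd /dev/sde
# -i 3 = three data stripes; the fourth disk's worth of space holds parity
sudo lvcreate --type raid5 -i 3 -l 100%FREE -n media vg_nas
sudo mkfs.ext4 /dev/vg_nas/media
```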
I’m lucky enough to have a Kobol Helios64, but unfortunately the small company that made these shut down. It’s fine for the time being but I’m going to have to pay attention to the NAS market to be ready to replace it one day… my main goal is low power, so I’m not sure if it’s worth it to go to a more commercial option like Synology or if I should be building something.
As appliance NAS units tend to be, the actual SBC in the Helios64 is pretty slow, so I minimize what I run on it. It does run the Plex server, but most everything else runs on another ARM machine that mounts it over SMB.
I have a mini ThinkCentre. I used to use TrueNAS Scale, but switched to Ubuntu Server due to having tons of issues.
I run Jellyfin, Radarr/Sonarr, maybe a Minecraft server, and a few other things.
I’ve used a repurposed full size ThinkCentre for several years now; it regularly hits uptimes approaching a year before I inevitably reboot for updates. Stock power supply too!
It’s on its second major version of OpenMediaVault; software RAID. Running Plex, file shares, and it used to run Resilio Sync too.
I got a DS920+ and have been using it for file storage, backups, Plex, and running Docker for all my *arrs. Really like Synology as an entry level; it got me to dig deeper and learn more. I’m behind a CGNAT, so setting up a VPN solution that would work was a pain on DSM. In the process of setting up my own homelab and building a TrueNAS box as I learn more about ZFS.
I built a massive overkill NAS with the intention of turning it into a full-blown home server. That fizzled out after a while (partially because the setup I went with didn’t have GPU power options on the server PSUs, and finagling an ATX PSU in there was too sketchy for me), so now it’s a power hog that just holds files. I just turn it on to use the files, then flip it back off to save on its ridiculous idle power costs.
In hindsight I’d have gone with a lighter motherboard/CPU combo and kept the server-grade stuff for a separate unit. The NAS doesn’t need more than a beefy NIC and a SAS drive controller, and those are only x8 PCIe slots at most.
Also, I use TrueNAS Scale; more work to set up than Unraid, but the ZFS architecture seemed too good to ignore.
A GPU isn’t really necessary for a home server unless you want to do lots of transcoding for clients. I have a power-hungry server that runs a VM offering Samba and NFS shares, as well as a bunch of other VMs, LXC containers, and Docker containers, with a full *arr stack, Plex, Jellyfin, a JupyterLab instance, Pi-hole, and a bunch of other stuff.
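Most of that stack runs fine in plain Docker on modest hardware. As a rough illustration (the paths and container name are mine, not the poster’s actual layout), Jellyfin only needs something like:

```
# Jellyfin in Docker with config persisted and the media share mounted read-only
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  --restart unless-stopped \
  jellyfin/jellyfin

# Optional: pass an iGPU through for hardware transcoding instead of a dedicated GPU
#   --device /dev/dri:/dev/dri
```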
I was trying to do some fancy stuff like GPU passthrough to make the ultimate all-in-one unit that I could have 2 or 3 GPUs in and have several VMs running games independently, or at least the option to spin it up for a friend if they came over. I’m probably not quite sophisticated enough to pull that off anyways, and the use case was too uncommon to bother with after unga-bungaing a power distribution board after a hard day of work.
Ah now I get it. You’ll probably need an expensive PSU to make that work. I’m sure there would be some option though in the server segment for people building GPU clusters.
Yeah, I was trying to go all the way when I should have compartmentalized it a bit and just had two computers instead of one superbeast. The server PSUs aren’t super expensive, relatively speaking; 1U hot-swap 1200W PSUs with 94% efficiency are like $100. The problem was that the power distribution board I had didn’t have GPU power connectors, only CPU power connectors, and tired me wasn’t going to accept no for an answer and thus let out the magic smoke in it. I got lucky and the distribution board seems to be the intended failure point in these things, so the expensive motherboard and components got by unscathed (I think; I never used the GPU, and it was just some cheap eBay thing). Still a fairly costly mistake that I should have avoided, but I was tired that night and wanted something to just work out.
That’s quite interesting. I would have thought that they were more expensive than that. I’ve been there too. You’re doing a bunch of stuff, tired and just want it to somehow work. What have you been doing with the build after that, if you don’t mind me asking?
Was going to make it a sort of central computer that could centralize all the computing for several members of the family. Was hoping to get a basic laptop that could hook into the unit and play games/program on a virtual machine with graphics far above what the laptop could have handled, plus the aforementioned spin up of more machines for friends. Craft Computing had a lot of fun computing setups I wanted to learn and emulate. I would have also had the standard suite of video services and general tomfoolery. Maybe dip into crypto mining with idle time later on. Lots of ideas that somewhat fizzled out.
That sounds really interesting. I have some VMs set up in a similar way for family members, though they’re very low power. They’re mostly used to ease the transition from Windows to Linux. I hope you get to do it again sometime :)
I bought a 2-bay DS220+ with 2× 4TB drives. Been happy with it so far. I got Jellyfin on here and use Synology Photos and Drive to back up stuff. I also use AdGuard Home; this has been amazing and has blocked many weird Microsoft and Amazon pings. Yes, it’s proprietary, but when I was building it, it seemed to be a decent choice and had lots of support. As I get more experience, I will probably build my own NAS.