I have an early-2000s PC (pre-SATA) with 512MB of RAM (I’d love to tell you about the CPU, but it’s under a cooler that isn’t going anywhere) that’s been sitting in closets for about 15 years. Assuming I’m willing to buy into it, can something like that reasonably host the following simultaneously on a 40GB boot drive:

Nextcloud, Actual, Photoprism, KitchenOwl, SearXNG, Kavita, Paperless-ngx

Or should I just get new hardware? Regardless, I’d like to do something with this trusty ol’ business server.

Edit: In your opinion, is a Lenovo or a Dell the most cost-effective, reliable self-host server?

  • @[email protected]
    28 · edit-2 · 10 months ago

    That’s an antique. The list of stuff you want to run probably needs several gigabytes of RAM; I think Nextcloud alone needs 512MB. I’d recommend newer hardware; you can find stuff on eBay for under $100 that would be a LOT more powerful than what you have.

    • @LazerDickMcCheeseOP
      6 · 10 months ago

      I know, I know…but it’s good, faithful hardware and I want it to go to good use.

      • @[email protected]
        9 · 10 months ago

        If it’s really early 2000s, you might want to put it on eBay. There are retro gamers out there who could use it as a good Windows 9x-era gaming PC. You could give that HW a new life in someone’s retro setup.

        It’s great HW for occasional gaming, but it’s very inefficient for 24/7 operation. For something that’s supposed to run constantly, you want hardware from around 2015 or later.

      • HousePanther
        3 · 10 months ago

        You could simply gut it, keep the case and power supply, and put modern components in it.

  • SRo
    22 · 10 months ago

    Don’t, it’s horribly inefficient.

  • MoogleMaestro
    21 · 10 months ago

    It won’t be able to do much, and even if you can do some stuff, keep in mind that the energy efficiency would be poor enough that you’d still be better off with a cheap Pi from a cost perspective.
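
    To put rough, illustrative numbers on it (assumed figures, not measurements): an early-2000s desktop might draw around 100W at the wall versus roughly 5W for a Pi. At $0.15/kWh, 100W running 24/7 is about 876kWh, or roughly $130 a year in electricity, while the Pi would be in the ballpark of $7 a year, so the old box can cost more to run annually than a cheap replacement costs to buy.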

    • @LazerDickMcCheeseOP
      6 · 10 months ago

      Really good point. Would you recommend a Pi for self-hosting?

      • BigVault
        7 · 10 months ago

        If you want something small and cheap, it might be worth getting a used thin client PC.

        I got a cheap £20 Igel thin client from eBay, as Raspberry Pis were still far too expensive, plus I already had a spare 4GB DDR3 SODIMM to drop into it and a 120GB WD Green SSD that I’d stripped from its case and fitted internally into the thin client.

        After the upgrades it ended up with a 1.2GHz AMD GX-412 CPU, 4GB of DDR3, a 120GB SATA SSD, and an external 1TB USB 3.0 hard drive I also had lying around.

        As a component of my homelab, it’s running Debian 12 and Docker with a few containers (PiGallery 2, Libreddit, Portainer, SearXNG); it’s also my backup Emby server and my main Pi-hole and PiVPN client.
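
        If you want to try something similar, here’s a rough sketch with plain docker run (image names, ports, and paths are from memory and just examples, so double-check each project’s docs):

        ```
        # SearXNG on port 8080, keeping its config in ./searxng
        docker run -d --name searxng --restart unless-stopped \
          -p 8080:8080 -v "$PWD/searxng:/etc/searxng" searxng/searxng

        # Portainer CE as a web UI over the Docker host
        docker run -d --name portainer --restart unless-stopped \
          -p 9443:9443 -v /var/run/docker.sock:/var/run/docker.sock \
          -v portainer_data:/data portainer/portainer-ce
        ```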

        Completely silent, sips power and still has capacity spare to run more containers and other projects that catch my interest.

        https://www.parkytowers.me.uk/thin/Igel/ud/ud3/M340C/

        • @LazerDickMcCheeseOP
          3 · 10 months ago

          That’s a pretty cool solution, honestly. I’m considering all options here! I’d hate to invest and then find out there are more cost-effective options, or that I somehow limited the server’s potential.

          • Briongloid
            2 · 10 months ago

            That’s what I’m using; it barely uses more power than a Pi, and it’s a 64-bit x86 quad-core with 16GB of dual-channel RAM and a 256GB SSD.

            I’ve seen newer versions of what I have going for cheaper than the average Pi 4. I would never pick the Raspberry Pi over this solution, given how much more powerful these machines are for how small they are.

            I run Ubuntu 22.04 LTS Server without a desktop GUI and control it from my PC’s command prompt over SSH with ssh user@localipaddress.
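
            For example (the user name and address are placeholders; Windows 10 and later ship an OpenSSH client, so this works from CMD or PowerShell too):

            ```
            # Open a shell on the server
            ssh [email protected]

            # Or run a one-off command without a full login session
            ssh [email protected] "docker ps"
            ```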

          • BigVault
            1 · 10 months ago

            Working really great for me. I originally bought it just to run Pi-hole on a dedicated machine, with a secondary Pi-hole instance on my Unraid server in case either of them went down, but leaving it sitting there with just PiVPN and Pi-hole duties seemed wasteful.

            I’m getting even more out of it running some of the lighter containers on it with plenty of spare room to do more.

            I’ve logged/uploaded my upgrade process here just so you can get some ideas on what I did.
            https://imgur.com/a/ExcLdtt

            It is bulkier than a Raspberry Pi, being around the size of a router, but the low cost and being able to utilise hardware I had sitting doing nothing made me go this route rather than just getting a Pi.

      • @[email protected]
        4 · edit-2 · 10 months ago

        It’s OK, but I’d suggest:

        Atom > arm64 > arm32

        I ran on a Pi 4, but switched to a PC for Jellyfin. The Pi can’t transcode for shit. It was slow to boot and slow over SSH.

        Look for a NUC: they’re designed for desktop use, so they have more poke than a Pi. The N6005 CPU is a good choice; the N5105 is OK. These are x64, so you’ll have the widest range of packages. 4GB will do if it’s upgradeable later. NUCs usually take SODIMMs, which you can pick up on eBay for peanuts.

        Bear in mind that the network chipset will be your bottleneck in some use cases. If it has a “gigabit port” but only a cheap chipset and you use it as a router, you might max out at ADSL speeds… in that case you’ll wish you’d gone for a box designed for soft routing, which is a fair bit pricier.
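
        If you want to check whether the NIC is the weak link before committing, a quick iperf3 run between the box and another machine on the LAN will tell you (the IP below is a placeholder):

        ```
        # On the candidate server:
        iperf3 -s

        # On another machine on the same LAN:
        iperf3 -c 192.168.1.50        # throughput toward the server
        iperf3 -c 192.168.1.50 -R     # reverse direction
        ```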

        • @[email protected]
          1 · 10 months ago

          > It was slow to boot and slow over SSH.

          I found that booting and connecting over SSH went way faster once I got a better SD card. An install script that used to take half an hour was down to just a couple of minutes.
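
          If you want to see whether the card is the bottleneck, a rough comparison like this is enough (the write test just creates and removes a temporary file; the device name may differ on your Pi):

          ```
          # Sequential write speed, bypassing the page cache
          dd if=/dev/zero of=./sdtest.bin bs=1M count=256 oflag=direct
          rm ./sdtest.bin

          # Buffered read speed of the SD card
          sudo hdparm -t /dev/mmcblk0
          ```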

      • @[email protected]
        1 · edit-2 · 10 months ago

        I’ll put it this way as somebody barging into the conversation: I love tinkering with SBCs, but setup, install, usage, and maintenance are all a hell of a lot easier on x86 still.

        Personally I have very little energy for unexpected issues, and when I gave up on SBCs for serving (as opposed to tinkering) everything got much easier and my progress got much faster.

        I’ve been buying dirt-cheap used business PCs for servers; it works great, doesn’t break the bank, and there are tons of parts available if you stick to the major manufacturers.

  • @[email protected]
    20 · 10 months ago

    Old hardware is awesome to reuse most of the time, but it’s not nearly as efficient as today’s hardware.

    It’s probably good to just properly recycle the old gear and spend $200 on a mini-PC from Amazon that has three times the power all while using less electricity.

    I usually completely tear down old equipment into its raw materials, as best I can. It’s less likely to be shipped off to another country for uncontrolled destruction, and I get more money back for the materials.

  • @[email protected]B
    6 · edit-2 · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    DNS             Domain Name Service/System
    LTS             Long Term Support software version
    NAS             Network-Attached Storage
    NUC             Next Unit of Computing brand of Intel small computers
    SSD             Solid State Drive mass storage
    SSH             Secure Shell for remote terminal access

    6 acronyms in this thread; the most compressed thread commented on today has 20 acronyms.

    [Thread #30 for this sub, first seen 12th Aug 2023, 11:15]

      • 𝘋𝘪𝘳𝘬
        2 · 10 months ago

        It doesn’t seem to be marked as a bot. I’ll block it manually, since the Lemmy setting that prevents bot posts from showing up doesn’t work here.

  • Katrina
    5 · 10 months ago

    I’m not sure you could even run Pi-hole on 512MB of RAM.

  • @[email protected]
    5 · 10 months ago

    Setting aside the lack of processing power and RAM, a box that old will likely eat power. It’s just not worth it for something that old.

    A used thin client along the lines of the Lenovo Think* series will be affordable and do the job.

  • sj_zero
    3 · 10 months ago

    My first server was quite a bit beefier than that, and it still had some serious issues when I started asking it to do a lot at once. You might be able to get it going, but I suspect you might not be too happy with the performance you get out of it.

    It’s a bit shocking how much hardware you can get for how cheap. Even a fanless Intel Atom box, available for less than a hundred bucks, would likely run circles around that thing. One thing I’d definitely suggest, no matter what: an SSD if you’re planning to run multiple services.

    • @LazerDickMcCheeseOP
      0 · 10 months ago

      My plan is to keep only the bare minimum required to get these services running on the boot drive. This is probably a Linux crowd, but I don’t speak the language and would rather keep it all in Windows if I can help it.

      • sj_zero
        4 · 10 months ago

        Just a heads up that you might find it easier to learn a bit of the lingo than to try to translate all the entry-level stuff from Linux to Windows.

        If you do figure it out though, you should document the process and put it up somewhere.

        • @LazerDickMcCheeseOP
          1 · 10 months ago

          Best place to learn the basic Linux I’d need to get this off the ground?

          • sj_zero
            1 · 10 months ago

            The best projects will have well written documentation that steps you through exactly what to do.

            I started off not knowing anything about hosting and now I run like 6 services.

      • 𝕽𝖔𝖔𝖙𝖎𝖊𝖘𝖙
        1 · 10 months ago

        In addition to all the other points made here:

        If you intend to run Windows on it, the RAM issue will be even more important, as Windows is a fair bit more resource-intensive just to get the base OS running.

        It’s worth taking the time to learn enough Linux to use it for these types of projects; it will pay dividends in efficiency and flexibility.

  • kglitch
    3 · edit-2 · 10 months ago

    You could use it as part of your infrastructure, e.g. as a DNS server, database server, Redis server, or file server. But running the whole stack will be too much unless you upgrade the RAM: 1GB minimum, preferably 2GB.
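
    As a rough sketch of that “infrastructure box” idea (Debian/Ubuntu package names; the memory cap is only illustrative, so everything fits in a small amount of RAM):

    ```
    # A lightweight DNS forwarder plus a small Redis instance
    sudo apt install dnsmasq redis-server

    # In /etc/redis/redis.conf, cap Redis so it can't eat the whole box:
    #   maxmemory 128mb
    #   maxmemory-policy allkeys-lru
    sudo systemctl restart redis-server
    ```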

  • Osayidan
    3 · 10 months ago

    You might not even be able to install a modern OS on it, as many are starting to drop support for old hardware; I know the Linux kernel did some pruning recently.

    • @LazerDickMcCheeseOP
      1 · 10 months ago

      Yeah, I figured that was the case but wanted to see what I could manage with an antiquated OS

  • @[email protected]
    1 · 10 months ago

    I would never say no to using older hardware. Yeah, it’ll be like punishing yourself. But you learn a shit ton.

    I recently started self-hosting. I started on a PC with the same specs as you’ve described. Booting was an issue, and tons of stuff always broke, but I learnt a lot. Then there came a time when I genuinely thought I could do better and switched to an old laptop with decent specs.

    Pis are very expensive and too dang low on supply.

    So always make do with what you have. If it’s your first home lab, then yeah, go ahead. In a few months, switch.

    • @[email protected]
      4 · 10 months ago

      Pi is not the only SoC, merely the best-known.

      I’d warn anyone thinking of buying a Pi for a home server: ARM is widely supported, but you might regret investing in arm32. Atom is a safer choice.

      • @[email protected]
        1 · 10 months ago

        Yeah. But it’s too damn risky to buy the non-popular ones. Especially in my region…

  • @[email protected]
    1 · 10 months ago

    It’s a great machine to learn on. Build yourself a web server or something like that. You don’t know what it can do until you push it, and you’re not out anything by taking it to its limits. If it has something like a Core 2 Duo you could even run KVM and launch a virtual machine to learn about that process. Old hardware is meant to be run into the ground and you’ll learn a lot in the process, including getting a feel for how much hardware you really need to perform the tasks you want. (I literally just retired a rack server this year with a Core 2 Duo and 8GB of memory, which was used to run five VM servers providing internet services.)
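
    If you want to check whether a given box can actually do this, something like the following is enough (Debian/Ubuntu package names; the VM values are only illustrative):

    ```
    # Hardware virtualization support? A result of 0 means KVM can't use VT-x/AMD-V here.
    grep -Ec '(vmx|svm)' /proc/cpuinfo

    # Install the KVM/libvirt tooling
    sudo apt install qemu-kvm libvirt-daemon-system virtinst

    # Spin up a small test guest from an installer ISO (the path is a placeholder)
    sudo virt-install --name testvm --memory 512 --vcpus 1 --disk size=5 \
      --cdrom /path/to/debian-netinst.iso --os-variant debian11
    ```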

    • @LazerDickMcCheeseOP
      1 · 10 months ago

      Can you give me some use-case examples for VMs like that? My VM knowledge stops at emulating OSes for software compatibility and running old Windows versions for gaming.

      • @[email protected]
        2 · 10 months ago

        What I’ve always done is to create a VM for each service I run – so like one each for DNS, apache, postfix, dovecot, and even one to handle ssh and ftp logins. I’ll also set up a VM when I want to test a new service, so I don’t trash out a physical machine. This makes it easy to make extra copies if I want to run redundant systems or just move them to a different physical server. I suppose this is something like what docker does, except these are entirely self-contained systems that don’t even need to be running the same OS, and if someone happens to hack into one system, it doesn’t give them access to all the others. I also have a physical machine set up as a firewall to direct the appropriate ports to each VM and handle load balancing, but for your experiments you could do this task on the physical desktop and point everything to the VMs running inside it.
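
        The port-directing part can be as simple as a few NAT rules on the firewall box. A rough sketch (not my exact setup) with placeholder addresses, assuming the VMs sit on libvirt’s default 192.168.122.0/24 network:

        ```
        # Let the host forward traffic at all
        sudo sysctl -w net.ipv4.ip_forward=1

        # Send web traffic to the apache VM and mail to the postfix VM
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 \
          -j DNAT --to-destination 192.168.122.10:80
        sudo iptables -t nat -A PREROUTING -p tcp --dport 25 \
          -j DNAT --to-destination 192.168.122.20:25
        sudo iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -j MASQUERADE
        ```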

        One nice thing about KVM is that you can overcommit your memory. So if you only have 512MB available and you set up three VMs with 256MB each, the actual free space is shared among them, because a system usually doesn’t take up ALL of its memory (although for Linux you might need to limit how much cache RAM each system will try to use). In reality what you find is that a system might run a task or get a burst of traffic that uses more memory, so it will pull free physical memory from the other VMs as needed, then give it back when the task is done. You won’t really want to run web-facing servers with such a tight space though, unless you are the only person actually using them, but hopefully it gives you some ideas of how you can play around with what you have available in that machine.
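
        With libvirt you can give each guest a hard ceiling and a lower current allocation and let the balloon driver move memory between them. A sketch with an illustrative guest name (“webvm” is made up):

        ```
        sudo virsh setmaxmem webvm 256M --config   # hard ceiling
        sudo virsh setmem webvm 128M --config      # current allocation (balloon target)
        sudo virsh dominfo webvm                   # shows Max memory vs Used memory
        ```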

        • @LazerDickMcCheeseOP
          1 · 10 months ago

          Holy shit, that’s genius. I saved your comment for reference. This is probably how I’ll end up learning to make these things work

          • @[email protected]
            1 · 10 months ago

            Well thanks, and all credit goes to the person who passed that setup on to me, and the person who passed it on to them… Everyone is so infatuated with Docker these days and I just don’t know why. Cool, you can run stuff in a container, but so can I. Just like I’ve seen so many people who think VMware is the ultimate because, what, businesses pay for it and that makes it better? Honestly, when I tried VMware I found it to be a monstrous resource hog that my desktop machine could barely handle, and yet I can run KVM on servers that are almost twenty years old.

            This is one of the reasons why it pays to play around with older hardware: you get to see how smoothly various options run and pick the solution that doesn’t require next-generation hardware just to get by. I’ve always run older used machines because I can get them dirt cheap, I know the bugs are worked out of the hardware, and they are just fine. I’m actually upgrading to some PowerEdge R620 rack servers this year, which are still 11 years old, but they use half the power for a massive boost in processing, so I’m working on replacing the other machine.

  • Otter
    1 · 10 months ago

    It might be nice to use for learning, but it probably won’t be able to handle that much.

    If you’re not using it for anything, you could wipe it and throw a server on it? I guess it depends on what you consider fun