With forewarning of a huge influx of users, you know Lemmy.ml will go down. Even if people go to https://join-lemmy.org/instances and disperse among the great instances listed there, those servers will go down too.

Ruqqus had this issue too. Every time there was a mass exodus from Reddit, Ruqqus would go down and hardly reaped any of the rewards.

Even if it’s not sustainable, just for one month I’d like to see Lemmy.ml drastically boost its server power. If we can raise money as a community, what kind of server could we get for $100? $500? $1,000?

  • nutomic@lemmy.mlM

    The site currently runs on the biggest VPS available on OVH. Upgrading further would probably require migrating to a dedicated server, which would mean some downtime. I’m not sure it’s worth the trouble; the site will go down sooner or later anyway if millions of Reddit users try to join.

    • Divided by Zer0@lemmy.ml

      Do you have the frontend and the DB serving from the same VPS? If so, it would be a great time to split them. Likewise, if your DB is running on a VPS, you’re likely suffering significant steal from the hypervisor, so you would benefit from switching to a dedicated box. My API calls saw a 10x speedup just from moving the DB from a VPS to a dedicated box.

      I just checked OVH’s VPS offers and they’re shit! Even at 70 EUR for a dedicated server on Hetzner, you would get more than double those resources, without steal. I would recommend switching your DB ASAP for immediate, massive gains.

      If you’re wondering why you should listen to me: I built and run https://aihorde.net and am currently handling about 5K concurrent connections.
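
      For anyone who wants to check whether their own VPS is being starved by the hypervisor: the %st column in top (or vmstat) already reports steal, but here is a minimal Node/TypeScript sketch (Linux only, hypothetical file name steal.ts) that samples /proc/stat twice and prints the steal percentage over a five-second window:

        // steal.ts - sample /proc/stat twice and report hypervisor CPU steal.
        // The 8th numeric field of the aggregate "cpu" line is cumulative steal time.
        import { readFileSync } from "node:fs";
        import { setTimeout as sleep } from "node:timers/promises";

        function readCpu(): { steal: number; total: number } {
          const fields = readFileSync("/proc/stat", "utf8")
            .split("\n")[0]   // aggregate "cpu ..." line
            .trim()
            .split(/\s+/)
            .slice(1)         // drop the "cpu" label
            .map(Number);
          return { steal: fields[7] ?? 0, total: fields.reduce((a, b) => a + b, 0) };
        }

        const before = readCpu();
        await sleep(5_000);   // measure over 5 seconds
        const after = readCpu();
        const stealPct = (100 * (after.steal - before.steal)) / (after.total - before.total);
        console.log(`CPU steal over the sample window: ${stealPct.toFixed(2)}%`);

      If that number is consistently in the double digits, the host is oversubscribed and a dedicated box (or at least a dedicated-core VPS) will feel dramatically faster.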

      • nutomic@lemmy.mlM

        Hetzner is very strict about piracy, so that’s not an option. And it’s almost the weekend now, so I won’t have time for a migration. Anyway, there are plenty of other instances in case lemmy.ml goes down.

        Edit: I also wouldn’t know which size of dedicated server to choose. No matter what I pick, it will get overloaded again after a week or two.

        • Divided by Zer0@lemmy.ml

          Even if you choose Hetzner, they won’t even know the server has anything to do with piracy, because it will just be hosting the DB, and nobody will know where your DB is. That fear is overblown.

          Likewise, believe me, a dedicated server is night and day compared to a VPS.

    • Pisck@lemmy.ml

      There will either be an hour of downtime to migrate and grow or days of downtime to fizzle.

      I love that there’s an influx of volunteers, including SQL experts, to mitigate scaling issues for the entire fediverse, but those improvements won’t be ready in time. Things are already overloading, and there’s less than a week before load increases 1,000-fold, maybe more.

      • nutomic@lemmy.mlM

        “Can we replace Lemmy.ml with Join-lemmy.org when Lemmy.ml is overloaded/down?”

        I don’t think so; when the site is overloaded, clients can’t reach it at all.

        “Does LemmyNet have any plans on being Kubernetes (or similar horizontal scaling techniques) compatible?”

        It should be compatible if someone sets it up.

        • Leigh@lemmy.ml

          You could configure something like a Cloudflare worker to throw up a page directing users elsewhere whenever healthchecks failed.
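
          A minimal sketch of that idea, assuming a module Worker bound to the site’s route (the names and the timeout are illustrative, not an actual lemmy.ml config): pass requests through to the origin, and only when it times out or answers with a 5xx serve a static page pointing people at other instances.

            // fallback-worker.ts - hypothetical Cloudflare Worker in front of the origin.
            export default {
              async fetch(request: Request): Promise<Response> {
                const controller = new AbortController();
                const timer = setTimeout(() => controller.abort(), 10_000); // 10s budget for the origin
                try {
                  const origin = await fetch(request, { signal: controller.signal });
                  if (origin.status < 500) {
                    return origin; // origin is healthy: pass its response straight through
                  }
                } catch {
                  // origin unreachable or timed out: fall through to the static page
                } finally {
                  clearTimeout(timer);
                }
                return new Response(
                  '<h1>This instance is overloaded</h1>' +
                  '<p>Try another one: <a href="https://join-lemmy.org/instances">join-lemmy.org/instances</a></p>',
                  { status: 503, headers: { "content-type": "text/html; charset=utf-8" } }
                );
              },
            };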

          • nutomic@lemmy.mlM

            Then Cloudflare would be able to spy on all the traffic, so that’s not an option.

            • Leigh@lemmy.ml

              “spy on all the traffic”

              That’s… not how things work. Everyone has their philosophical opinions, so I won’t attempt to argue the point, but if you want to handle scale and distribution, you’re going to have to start thinking differently; otherwise you’re going to fail when load really starts to increase.

    • Ashwag@lemmy.ca

      So, if I’m reading this correctly, it’s currently a hosting bill of 30 euros a month?

      • Milan@discuss.tchncs.de

        No, that’s the 8 GB memory option… if it’s the biggest, it should be around €112. Meanwhile I keep wondering if I should let Lemmy stay on the current KVM (which is similarly specced, but with dedicated cores and such) or if it’s better to move it to one of my dedis just in case… well… we’ll see xD

        • nutomic@lemmy.mlM

          It’s the one for 30 euros; I’m not seeing any VPS for 112. Maybe that’s a different type of VPS?

              • Milan@discuss.tchncs.de

                It does not sound like OVH’s vServers offer dedicated cores, and VPS offerings across hosters commonly become a bottleneck quickly; during the initial Mastodon hypes, for example, I had to learn that shared-hardware lesson the hard way. For the price you are currently paying, maybe a used dedicated server (or one of the fancy AMD ones) at Hetzner is of interest: https://www.hetzner.com/sb

                • nutomic@lemmy.mlM

                  Hetzner is great, but they are very strict about piracy, so it’s not an option for lemmy.ml. For now the load has gone down, so I will leave it like this, but a dedicated OVH server might be an option if load increases again.

                  • Leigh@lemmy.ml

                    You should use this relatively quiet time to migrate to a larger server, because when the time comes that you need to do it, you’re going to be in for a world of hurt. This is the calm before the storm; take advantage of it.

                    Ultimately, you need to scale horizontally. You need to shard your database and separate out your different functions (database, front end, whatever back end applications you use, etc) onto different servers, all fronted by load balancers. That’s going to be the only way to even begin to handle increasing load. If you don’t have a small team of experienced engineers with a deep understanding of how to build for scale, and you get a sudden mass exodus of users from Reddit, you’re fucked. So if I were you, here’s what I’d do:

                    1. Scale up to the largest instance type you can. If possible, switch (at least temporarily) to AWS and use something in the c6i instance family, such as the c6id.32xlarge. Billing for AWS instances is done by the hour, so you wouldn’t need to pay for an entire month up front if you only need that extra horsepower for a few days (such as when the blackouts are planned, from the 12th through the 14th). A rough sketch of scripting this with the AWS SDK follows this list.

                    2. Because the above will do nothing but buy you time until you crash–and if you get a huge spike of users, without horizontal scaling, you WILL crash–migrate your DNS to something like Cloudflare. From there, configure workers to respond when health checks to your site fail, so that users attempting to access the site can be shown a static page directing them to something like http://join-lemmy.org or someplace, instead of simply getting 5xx errors.

                    3. Once the hug of death is over, evaluate where you stand. Reduce your instance size, if you can, and start investigating what it’s going to take to scale horizontally.
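
                    As a concrete illustration of point 1 above (a sketch only, with a placeholder AMI and hypothetical names, using the AWS SDK for JavaScript v3): launching a big instance on demand and terminating it after the spike means you only pay for the time you actually need.

                      // spike-server.ts - hypothetical sketch: spin up one large instance for the rush.
                      import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

                      const ec2 = new EC2Client({ region: "eu-west-3" }); // pick a region near your users
                      const result = await ec2.send(
                        new RunInstancesCommand({
                          ImageId: "ami-xxxxxxxxxxxxxxxxx", // placeholder: your own Debian/Ubuntu AMI
                          InstanceType: "c6id.32xlarge",    // large compute-optimized type with local NVMe
                          MinCount: 1,
                          MaxCount: 1,
                          KeyName: "lemmy-admin",           // hypothetical key pair name
                        })
                      );
                      console.log("Launched:", result.Instances?.[0]?.InstanceId);
                      // Terminate it with TerminateInstancesCommand once the load spike is over,
                      // so you only pay for the hours it actually ran.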

                    I’m not a SQL expert, but I am a principal network architect, and my day job for the last 15 years has been working on scale and automation for the world’s largest companies, including 7 years spent at AWS. In my world, websites like Reddit, as large as they are, are still considered to be of ‘average’ size. I can’t help you on the database side, but I’m happy to provide guidance around networking, DNS, scale, automation, security, etc.

            • roho@lemmy.ml

              “Nowadays doesn’t even make any sense to use servers. … Why not create something better?”

              I think you might underestimate the problem.

              Jami.net (a decentralized messaging app) works P2P. It uses a torrent-like distributed hash table to locate peers at any moment. (The main usability issue for non-technical users is that devices on an internal IP address aren’t addressable from outside; this requires a TURN server, which is a single point of failure and a privacy concern.)

              They started to incorporate Git for merging chats, because any set of peers (of a group chat) can be out of reach of another set of peers, i.e. the chat continues on different branches and needs to be merged again later (this happens in the client app, because there is no central server). Jami is aiming at double-digit group sizes… that’s nowhere near the size of what Lemmy is handling.

    • elouboub@kbin.social

      Is it running in a single Docker container, or is it spread out across multiple containers? Maybe with docker-machine or Kubernetes with horizontal scaling, it could absorb the users without issue - well, except maybe the cost. OVH has managed Kubernetes.

      • Dessalines@lemmy.ml

        SQL. We desperately need SQL experts. It’s been just me for years, and my SQL skills are pretty terrible.

        • Valmond@lemmy.ml

          Put the whole DB in RAM :-)

          Reminds me of doing optimization, lots of EXPLAIN and JOIN pain, on my old MySQL multiplayer game server lol. A shame I’m not an expert…