Turns out the status quo of Linux memory management somehow works pretty damn okay, nobody seems to really know why, and nobody cares.

  • @where_am_i
    8 days ago

    Looks like your CS degree is actually teaching you CS stuff.

    If all you wanted to do was center divs for $50/h or so, a 2-month bootcamp would’ve been more than sufficient.

    • @LH0ezVTOP
      8 days ago

      Except that the degree I did this for was in electrical engineering :(

      • @where_am_i
        8 days ago

        I understand. The safety and stability of embedded software are clearly overrated.

        Why learn about stack overflows? Tomorrow some kid will press the “open” button on your device, get rejected 64 times, and on the 65th try the locking mechanism will crash. Makes sense to me.

        • @LH0ezVTOP
          8 days ago

          Get a nice cup of tea and calm down. I literally never said or implied any of that. Why do you feel the need to personally attack me?

          All I said was that a supposedly easy topic turned into reading a lot of obscure code and papers that weren’t really in my field at the time.

          For the record, I am well aware that the state of embedded system security is an absolute joke and I’m waiting for the day when it all finally halts and catches fire.

          But that was just not the topic of this work. My work was efficient memory management under a lot of (specific) constraints, not memory safety.

          Also, the root problem is NP-hard, so good luck finding a universal solution that works within real-life resource limits (chip space, power, price…).

  • Captain Howdy
    7 days ago

    I use/admin Linux each and every day at a professional level, and at least once a week I’m final-panel doggo.

  • @[email protected]
    7 days ago

    I feel this. Fell into a similar rabbit hole when I tried to get realtime feedback on the program’s own memory usage, distinguishing stuff like reserved versus actually used virtual memory. Felt like black magic, and I suspect it ultimately wasn’t doable within the expected time constraints without touching the kernel. Spent too much time on that and had to move on with no better solution than measuring/computing the allocated memory of the largest payload data types.
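
    For anyone curious, here’s a minimal sketch of the easy part (illustrative, not my actual code). On Linux, /proc/self/statm reports a process’s total virtual size and its resident set, which is about as close as userspace gets to that reserved-versus-used split without touching the kernel:

    ```c
    /* Hypothetical sketch: compare a process's reserved virtual size with its
     * resident set by reading the first two fields of /proc/self/statm. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);   /* usually 4096 bytes */
        FILE *f = fopen("/proc/self/statm", "r");
        if (!f) { perror("fopen"); return 1; }

        long vm_pages = 0, rss_pages = 0;    /* fields: size, resident (in pages) */
        if (fscanf(f, "%ld %ld", &vm_pages, &rss_pages) != 2) { fclose(f); return 1; }
        fclose(f);

        printf("virtual: %ld KiB, resident: %ld KiB\n",
               vm_pages * page / 1024, rss_pages * page / 1024);
        return 0;
    }
    ```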

  • Grubberfly 🔮
    7 days ago

    Is it a common occurrence on Linux that you have to constantly mess with the settings and end up in an obscure rabbit hole? That’s why I haven’t given it a go.

    • @LH0ezVTOP
      7 days ago

      No, not really. This is from the perspective of a developer/engineer, not an end user. I spent 6 months trying to make $product from $company both cheaper and more robust.

      In car terms, you don’t have to optimize or even be aware of the injection timings just to drive your car around.

      Æcktshually, Windows or any other OS would have similar issues, because the underlying computer-science problems are, in practice, probably impossible to solve optimally.

    • @[email protected]
      7 days ago

      No, you absolutely don’t need to care about memory management when using Linux. This rabbit hole is really only relevant when you want to work on the Linux kernel or do some really low-level programming.

      I would say the most obscure thing that is useful to know for running Linux is drive partitioning, but modern installers give you a lot of handrails in this process.

    • @LH0ezVTOP
      7 days ago

      It’s been a few years, but I’ll try to remember.

      Usually (*), your CPU addresses pages (chunks of memory that are assigned to a program) in 4KiB steps. So when it does memory management (shuffling memory pages around, deleting them, compressing them, swapping them to disk…), it does so in chunks of 4KiB. Now, let’s say you have a GPU that needs to store data in memory and sometimes exchange it with the CPU. But its designers knew it would almost always be handling huge textures, so they simplified their design and made it only able to access memory in 2MiB chunks. So every time the CPU manages a chunk of memory for the GPU, it has to make sure that chunk lands on a multiple of 2MiB.
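
      From userspace you can at least poke at the alignment half of this. A minimal sketch (the 2MiB figure is just the example above; this only aligns the virtual address, while the kernel’s real headache is where the pages land physically):

      ```c
      /* Sketch: grab a buffer that starts on a 2 MiB boundary, like a device
       * with 2 MiB access granularity would demand. Note posix_memalign() only
       * guarantees the *virtual* address is aligned. */
      #define _POSIX_C_SOURCE 200112L
      #include <stdio.h>
      #include <stdlib.h>

      #define ALIGN_2MIB (2UL * 1024 * 1024)

      int main(void) {
          void *buf = NULL;
          /* alignment must be a power of two and a multiple of sizeof(void *) */
          if (posix_memalign(&buf, ALIGN_2MIB, 4 * ALIGN_2MIB) != 0) {
              fprintf(stderr, "posix_memalign failed\n");
              return 1;
          }
          printf("buffer at %p (a multiple of 2 MiB)\n", buf);
          free(buf);
          return 0;
      }
      ```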

      If you take fragmentation into account, this leads to all kinds of funny issues. You can get gaps in your memory, because you need to “skip ahead” to the next 2MiB border, or you have a free memory area that is large enough but does not align to 2MiB…
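
      That “skip ahead” waste is plain alignment arithmetic. A tiny illustration, with a made-up starting address:

      ```c
      /* Illustration with a made-up address: round a free region's start up to
       * the next 2 MiB boundary and count how many bytes the gap wastes. */
      #include <stdio.h>
      #include <inttypes.h>

      int main(void) {
          uint64_t align   = 2ULL * 1024 * 1024;                  /* 2 MiB */
          uint64_t start   = 0x123000;                            /* hypothetical free-region start */
          uint64_t aligned = (start + align - 1) & ~(align - 1);  /* round up to boundary */

          printf("start 0x%" PRIx64 " -> boundary 0x%" PRIx64 ", %" PRIu64 " KiB skipped\n",
                 start, aligned, (aligned - start) / 1024);
          return 0;
      }
      ```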

      And it gets even funnier if you have several different devices that have several different alignment requirements. Just one of those neat real-life quirks that can make your nice, clean, theoretical results invalid.

      (*): and then there are huge pages, but that is a different can of worms