Hello Linux Gurus,

I am seeking divine inspiration.

I don’t understand the apparent lack of hypervisor-based kernel protections in desktop Linux. There seems to be a significant opportunity for improvement beyond the basics of KASLR, stack canaries, and shadow stacks, yet I see little work in this area on the Linux desktop. People far smarter than me develop for the kernel every day, and they have not seen fit to build the specific advanced protections I get into below. Where is the gap in my understanding? Is this so difficult or costly that the open source community cannot afford it?

Windows PCs, recent Macs, iPhones, and a few Android vendors such as Samsung run their kernels atop a hypervisor. This design permits introspection and enforcement of security invariants from outside, or underneath, the kernel. Common mitigations include protecting critical data structures such as page table entries, function pointers, or SELinux decisions, which raises the bar on injecting kernel code. Hypervisor-enforced kernel integrity appears to be a popular and at least somewhat effective mitigation, yet it remains uncommon on desktop Linux.

Meanwhile, in the desktop Linux world, users are lucky if a distribution even implements secure boot and ships signed kernels. Popular software often requires short-circuiting this mechanism so the user can build and install out-of-tree kernel modules, such as the NVIDIA and VirtualBox drivers. SELinux is uncommon, so on most installations root access is more or less equivalent to kernel privilege, including the ability to introduce arbitrary code into the kernel. TPM-based disk encryption is officially supported only as an experimental feature on Ubuntu, and is usually tied to secure boot; elsewhere users are largely on their own. Taken together, this feels like a missed opportunity for additional defense in depth.

It’s easy to put code in the kernel. I can do it in a couple of minutes for a “hello world” module. It’s really cool that I can do this, but is it a good idea? Shouldn’t somebody try to stop me?
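To make the “couple of minutes” claim concrete, here is a sketch of such a module: a few lines of C written out by a shell script, plus a one-line kbuild Makefile. File names and the printed messages are my own choices, and actually loading it requires root plus the headers for the running kernel, so the build and load steps are shown only as comments:

```shell
# Write a minimal "hello world" kernel module (file names are arbitrary).
cat > hello.c <<'EOF'
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
        pr_info("hello: loaded\n");
        return 0;
}

static void __exit hello_exit(void)
{
        pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
EOF

# One-line kbuild Makefile; obj-m marks hello.o as a loadable module.
printf 'obj-m += hello.o\n' > Makefile

# Build against the running kernel's headers, then load/unload (needs root):
#   make -C /lib/modules/$(uname -r)/build M=$PWD modules
#   sudo insmod hello.ko && sudo rmmod hello
```

On a stock distro kernel with module signature enforcement disabled (the common case), `insmod` will happily accept this unsigned module, which is exactly the point of the question above.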

Please insert your unsigned modules into my brain-kernel. What have I failed to understand, or why is this the design of the kernel today? Is it an intentional omission? Is it somehow contrary to the desktop Linux ethos?

  • Blaster M@lemmy.world · 2 days ago

    Ah, yes, I do enjoy spending 6 months rebuilding my daily driven car in the garage because the air filter is integrated deep in the engine and not easily replaceable.

    The whole “I compile all my Linux from source” approach might work if you are an IT major or have a lot of free time to devote to maintaining your PC, but the majority of people who use a PC do not have the time, skill, attention span, or knowledge to do any more than press “Easy” and let the system have at it.

    • lurch (he/him) · 1 day ago

      I just read there is even a make target for switching all modules to built-in:

      # convert everything set to =m in the current .config to =y (built-in)
      make mod2yesconfig
      # and/or: keep only the currently loaded modules, as built-in
      make localyesconfig
      
    • lurch (he/him) · 1 day ago

      compiling a kernel from the provided source is surprisingly easy tho. you can start with the default config from your distro, toggle the options you want different in menuconfig, and compile it. there are howtos. also, once a pro has done it, they can share the config with others who have similar setups.

      if you fail and your kernel is broken, you can just boot the old one again until you get it right. just don’t overwrite the old one when putting the new kernel where the boot loader looks for it, and give the new one its own boot loader entry.
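      To sketch what “its own boot loader entry” can look like, assuming GRUB: install the new kernel and initramfs under distinct names (the version string `6.9.0-custom` and the disk/partition identifiers below are made up for illustration) and add a custom entry, e.g. in `/etc/grub.d/40_custom`, then regenerate the config with `update-grub` or `grub-mkconfig`:

      ```
      menuentry 'Linux 6.9.0-custom (test)' {
              # Adjust the device and root= to match your own disk layout.
              set root=(hd0,gpt2)
              linux  /vmlinuz-6.9.0-custom root=/dev/sda2 ro
              initrd /initrd.img-6.9.0-custom
      }
      ```

      Because the distro kernel’s files and entry are untouched, a broken build just means picking the old entry from the boot menu.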