Just some Internet guy

He/him/them 🏳️‍🌈

  • 10 Posts
  • 1.57K Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • That looks like a normal kernel to me. The “surface” mention is just the hostname, which comes from /etc/hostname.

    Exactly how does it not work? Does the kernel even try to boot? Tried verbose mode?

    You might need to regenerate your initramfs for the new hardware; on Fedora I think that’s Dracut. The initramfs usually includes machine-specific drivers that need to be available during early boot, but just regenerating it should fix that.
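
    If it is Dracut, regenerating is a one-liner. A minimal sketch (run it on the new machine, or from a chroot into it):

    ```
    # Rebuild the initramfs for every installed kernel
    sudo dracut --regenerate-all --force
    ```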


  • I don’t think you can, and I think it makes sense: it would be weird for the compiler to unexpectedly generate hidden variables for you.

    For all the compiler knows, the temporary variable could hold a file handle, a database transaction, a mutex guard, or anything else with side effects in its Drop implementation, where exactly when it’s dropped matters. Or it could just be a very large struct you might not expect to stay around until the end of the function (or even the end of the program, if that’s a main loop).

    So you should be aware of it, and thus you need the temporary variable like you did even if you just immediately shadow it. But at least you know you’re holding on to it until the end of the function.
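
    A minimal sketch of why the explicit binding matters (Noisy here stands in for a guard, handle, or transaction):

    ```rust
    // Stand-in for a file handle, mutex guard, DB transaction, etc.,
    // with an observable side effect in Drop.
    struct Noisy;

    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("Noisy dropped");
        }
    }

    fn main() {
        let handle = Noisy;
        let handle = &handle; // immediately shadow it with a borrow
        println!("still holding {:p}", handle);
        // Prints the address first, then "Noisy dropped" only at the end
        // of main: the named binding keeps the value alive to scope end.
    }
    ```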



  • Doesn’t quite answer the question, but what I did back in school was set up NoMachine over SSH on my laptop, with the Windows client on my MP3 player. I’d plug it in, run the client, and remote into my laptop; as a bonus I wasn’t really using the school’s computers, I was using mine remotely. Nothing to see on the school’s machine, no history. To IT, I guess I just looked like a kid doing a lot of stuff over SSH. Today that’d be x2go, though RDP or VNC would probably also work fine.

    I don’t know if the remote aspect helped, but the teachers definitely knew and didn’t care. A friend of mine did something similar, got caught, and ultimately got away with it: the remote desktop software itself wasn’t violating the policy, he wasn’t technically bypassing restrictions, and he wasn’t caught actively visiting a site that should have been blocked. YMMV.





  • IPv6 or IPv4?

    A /3 of IPv4 for that price is impossible, that’d be an eighth of the entire IPv4 space. A /29 (32−3, leaving 3 host bits) would be more plausible, but 1k for a block of 8 IPs would be a massive ripoff.

    Doesn’t make sense for IPv6 either, as that’d be exactly the global unicast range (2000::/3), but it makes sense that they’d carve you a huge block out of it, maybe a /32, since that’s what registries assign to an entire ISP. As an end user you usually get a /48.
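
    For scale, a /n prefix covers 2^(32−n) IPv4 addresses (2^(128−n) for IPv6):

    ```
    /3  of IPv4: 2^29 = 536,870,912 addresses (1/8 of the space)
    /29 of IPv4: 2^3  = 8 addresses
    /48 of IPv6: 2^80 ≈ 1.2 × 10^24 addresses
    ```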





  • I want to love IPv6, but it’s unfortunately still basically impossible to get proper IPv6 in the first place.

    At home I’m stuck with fairly broken 6rd that my router can’t hardware-accelerate, and the tunnel MTU is around 1200, so a noticeable chunk of bandwidth goes to per-packet headers alone.

    On the server side, OVH does have IPv6, but it’s not routed: the host has to pretend to own every IPv6 address it uses, and the OVH routers will only accept around 8 of them before their NDP table is full, so assigning an IPv6 address to every Docker container fails miserably.
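
    The “pretending” is NDP proxying, e.g. with ndppd; a sketch, with placeholder interface and prefix (and it still hits that neighbor-table cap past a handful of addresses):

    ```
    # /etc/ndppd.conf -- answer neighbor discovery on the uplink
    # for the whole prefix assigned to this host
    proxy eth0 {
        rule 2001:db8:1234::/64 {
            static
        }
    }
    ```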

    IPv6’s main problem is that ISPs are so invested in NAT and IPv4 infrastructure that they just won’t support it. Microsoft, Google and Apple need to team up and start requiring functional IPv6 to create user demand, because otherwise most users don’t know about CGNAT and don’t care. Everything needs to complain about bad IPv6 connectivity so that users complain to ISPs and pressure them into fixing it.


  • Yep, and I’d guess there’s a huge “it must be as easy as possible” component, because the primary target is self-hosters who don’t really even want to learn how to set up Docker containers properly.

    The AIO Docker image is an abomination. The other images are slightly saner, but they still fundamentally mix code and data in the same folder, so it’s not trivial to just replace the app.

    In Docker, the auto updater should be completely neutered; it’s the wrong way to update a containerized app, which should be updated by replacing the image.
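
    Assuming this is Nextcloud (the AIO image suggests it), there’s a stock config.php option for exactly that; a sketch of what I’d set:

    ```php
    // config/config.php -- hide the built-in web updater;
    // update by swapping the container image instead
    'upgrade.disable-web' => true,
    ```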

    The packages in the Arch repo are legit saner than the Docker version.


  • I would avoid AppImages and use Flatpak instead. AppImages are pretty notorious for their lack of integration, and the author of AppImage is firmly set against progress, actively sabotaging Wayland support, which leads to more bad experiences on Wayland desktops. The shortcut issue is merely the surface of AppImage’s problems.

    I wouldn’t give up so easily, just come back to it every now and then even if you spend most of your time on the Windows partition.



  • I was on Ubuntu from 7.04 to 11.04, and only then went to Arch, out of a desire for more control.

    Some people like to dive in headfirst, and it’s doable; those people are sometimes successful with Linux. But you also have to factor morale into it: are you trying to learn the deep end of how Linux works, or are you just trying to migrate away from Windows?

    It’s totally fine to use the noob distros for the sake of, you know, getting used to Linux things in general instead of trying to take it all in at once. Things are so vastly different from Windows, and there’s so much to learn; focus on learning to use it the easy GUI way before you worry about what’s under the hood.

    There’s nothing about Ubuntu or Mint that stops you from popping the hood open and having a peek every now and then, either. You can change whatever you want in /etc on Mint just as you could on Arch; the default configs will be different, but you don’t have any less control. The only real difference is that a distro like Arch is hands-off and ships bare upstream defaults, whereas the Debian family usually comes with reasonable defaults ready to go.

    Say you install Samba to share files. On Debian it automatically starts, and you can just log in and access your home folder. On Arch, nothing happens: you have to write the config, enable and start the service, open firewall ports if you run a firewall, and so on (see the sketch below). Best case, you don’t have to change the Debian defaults at all. Worst case, you have to read the manual anyway, but the defaults gave you a base to start from. As an Arch user, I see those defaults as crap in my way that I have to delete because I’ll provide my own config. The difference is perspective and expectations.
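
    Roughly what the Arch side of that looks like (a sketch; you supply your own smb.conf):

    ```
    sudo pacman -S samba
    # Arch ships no /etc/samba/smb.conf -- write one or adapt an example
    # from the docs, then enable and start the service yourself:
    sudo systemctl enable --now smb
    ```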

    Kind of ties back into why there are so many distros: because there are users for all of them. You pick the distro that works best for you, not the one everyone else says is the best. The best start with Linux is trying a few distros and seeing which one you vibe with. You, personally. It’s called a software distribution because that’s what it really is: a distro takes a whole bunch of software from many projects, compiles it all, bundles it into nice packages, and then makes an installer to install and configure all of it. You could download and build all the little pieces yourself; that’s what Linux From Scratch is. Distros are fundamentally opinionated: each is a take on how to mash all that software together so that everything works correctly.

    Circling back to Mint: nothing there stops you from compiling your own kernel or your own packages. You could strip out all the Mint parts until it’s bare Ubuntu, then strip out the Ubuntu parts until it becomes bare Debian. You’re just changing your starting point, and you get to see and explore how they made the magic work. You can always install Arch in a VM or a container, or slowly build an LFS in a VM, just to learn it without it blocking you.

    To me that’s what’s truly so cool about Linux. It’s not a singular thing or product; it’s an ecosystem and a community, a collection of independent software in all shapes and colors coming together to form the end-user experience we have. People can help each other and make distributions that do one thing well (Kali, Dragon, Ubuntu Studio, SteamOS, Bazzite) so you don’t have to set it all up yourself. You can freely replace any component with another, or collect patches you like that the author won’t merge into the official release. A lot of people have spent a lot of time on all of this, so you might as well appreciate the effort and enjoy easy mode until you itch for more.

    And also, we learn much better when we enjoy ourselves doing it. All you’ve learned so far is that Linux can get really frustrating very quickly, with no reward for it.


  • Why are there so many distros out there? What’s the difference between debian + kde and manjaro + kde? They look the same, they work the same. I don’t get it.

    They visually look similar because both are running KDE with pretty much all the defaults; as it happens, neither Debian nor Manjaro diverges much from KDE’s recommended defaults as long as they work well. But under the hood, Debian and Manjaro work completely differently: one uses apt, the other pacman, and the way those packages are maintained, compiled and distributed is vastly different, with different kinds of QA testing.
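
    Same task, different tooling; illustrative commands:

    ```
    sudo apt install vlc     # Debian, Ubuntu, Mint, ...
    sudo pacman -S vlc       # Arch, Manjaro, ...
    ```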

    Ubuntu is a derivative of Debian, so it doesn’t look that much different, but Canonical does tend to ship newer packages than Debian. Ubuntu also has a lot of flaws, so spinoffs like Mint and Pop!_OS take Ubuntu as a base and “fix” it to their liking, and hopefully the user’s too; given how popular Mint is, I’d say they’re pretty successful at that goal.

    Also why do things have to be complicated?

    It doesn’t have to be, but the number of options and choices for doing basically anything on Linux can certainly look overwhelming. Take mounting a drive: you can click it in your file manager, add it to /etc/fstab, or write a systemd mount unit. Those are different ways of automating and configuring what ends up being mostly the same thing, mounting a filesystem and setting permissions on it, and they come with different defaults.

    You’re running into the particular pain of mounting an NTFS Windows partition on Linux. NTFS stores nothing like the ownership metadata Linux expects, so the driver fakes the permissions to make it work, and by default that makes everything owned by a single user. If you mount it from your file manager, it gets a temporary mountpoint like /run/media/your-user/YOUR DRIVE, which is mostly intended for USB drives you just plugged in. You probably found /etc/fstab, but then everything was owned by root; chmod and chown only appear to work until the next mount, because those faked permissions aren’t actually stored on disk, so as not to break Windows.
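
    The usual fix is an fstab entry that sets the owner explicitly; a sketch, where the device, mountpoint and uid/gid are placeholders for your own:

    ```
    # /etc/fstab -- mount the Windows partition owned by uid/gid 1000
    /dev/sda3  /mnt/windows  ntfs-3g  uid=1000,gid=1000,umask=022  0  0
    ```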

    It’s just problems, after problems, after problems and i didn’t even start gaming.

    Yeah, some people end up particularly unlucky in that department. Eventually, over time, it feels as easy as Windows or easier. It’s just that you have years of experience making Windows do the thing, while Linux is completely new to you. I had a very similar experience a couple of years ago when I was forced to learn macOS because the job would only issue MacBooks. Everything felt way overcomplicated, but eventually you start thinking the Apple way, things go more smoothly, and you understand better how it works. I mean, how alien is it to open a disk image and copy a .app file to /Applications, and that’s how you “install” things?? But you get used to it, and now I wield the macOS terminal like I do Linux’s.

    What do i need to do to install a AUR package? A wall of text on the wiki, 20 minutes videos, yay. Ok let’s call it a day.

    So, this is why people don’t like recommending Manjaro. It’s Arch Linux with a coat of paint, but it still relies on Arch’s infrastructure for the AUR. Arch is well into advanced-Linux territory: it’s a box of Lego you have to assemble into the shape of a distro yourself. Arch expects you to do a fair bit of reading; Manjaro pretends you don’t need to, which is a real problem that has caused a fair bit of drama in its time. The AUR is great, but to make another analogy, it’s a recipe book: you don’t download premade meals, you bake them yourself (compile the source code) to get your meal (the generated package file). Sending beginners down that route is a recipe for a bad experience.

    Ironically, yay is the name of one of the tools that helps install AUR packages.
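
    To make the recipe analogy concrete (a sketch; some-package is a placeholder):

    ```
    # By hand: fetch the recipe and bake it
    git clone https://aur.archlinux.org/some-package.git
    cd some-package
    makepkg -si        # build the package, then install it

    # Or let a helper do all of the above
    yay -S some-package
    ```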

    Do i need to live another life to make linux work?

    No, but it does take some initial commitment to get to the nicer part of the learning curve. The first install is always pretty rough, you will destroy it, that’s fine, you have to learn first.

    Ok let’s call it a day.

    Honestly, judging by your post, you should have done that earlier. As with anything, once you’re frustrated you stop learning, and you start making it much harder than it needs to be.

    It’s fine to take a step back, reboot into Windows, and try again another day. It doesn’t have to be all or nothing; plenty of people have started by using Linux for just one task that’s easier to do there, and eventually you start migrating more workloads over time. You’re restarting your computer-learning journey from pretty close to the start, so give yourself a break. Computers aren’t worth getting pissed off at.



  • I’ve heard very good things about resold HGST helium enterprise drives; they can be found fairly cheap for what they are on eBay.

    I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

    4TB isn’t even close to “very high capacity” these days; there are like 32TB HDDs out there, just avoid the shingled (SMR) archival drives. I believe the worry about high-capacity drives is really about the maturity of the technology rather than the capacity itself: a 4TB drive made today is much better than the very first 4TB drives from back when that size was pushing the limits of the technology.

    Backblaze has pretty good drive reviews as well, with real world failure rate data and all.


  • Not exactly like DisplayFusion, but virtual desktops have been a thing forever on Linux, and there’s a ton of options in that department. They don’t work the same in every DE, so if it doesn’t work in yours, try another. I believe COSMIC supports per-monitor workspaces already; in the tiling department you might like Sway or Hyprland. KDE and GNOME are a bit weird with per-monitor virtual desktops, though KDE at least is working on it.

    USB passthrough: yes, either the individual device node or the entire controller via PCIe passthrough.
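
    With libvirt, passing a single USB device looks something like this (a sketch; the vendor/product IDs are placeholders for your device’s lsusb output):

    ```xml
    <!-- added to the VM definition, e.g. via `virsh edit` -->
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
      </source>
    </hostdev>
    ```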

    Premiere: I believe so, but you’ll need GPU passthrough for it to run with any degree of smoothness. GPU passthrough is super nice once it’s all set up, and worth the spend for a second GPU; performance is near identical to native, it’s really great. Been gaming in a VM for years… out of convenience.