I’ve spent some time searching for an answer to this question, but I have yet to find a satisfying one. The majority of answers that I have seen state something along the lines of the following:

  1. “It’s just good security practice.”
  2. “You need it if you are running a server.”
  3. “You need it if you don’t trust the other devices on the network.”
  4. “You need it if you are not behind a NAT.”
  5. “You need it if you don’t trust the software running on your computer.”

The only answer that makes any sense to me is #5.

#1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer. #2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall on the host in which the same port also has to be opened? #3 is a strange one – what sort of malicious behaviour could even be carried out against a device with no firewall? If you have no applications listening on any port, then there’s nothing to access. #4 feels like an extension of #3 – only, in this case, it is most likely a larger group that the device is exposed to.

#5 is the only one that makes some sense: if you install a program that you do not trust (you don’t know how it works), you don’t want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be a door into your device, or a spy on your device’s actions.

If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it’s acting like the front door to a house, but this analogy doesn’t make much sense to me – without a house (a service listening on a port), what good is a door?
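
To make that concrete, here is a minimal sketch of the “door without a house” point (the port number is arbitrary and assumed to be unused): with nothing listening, a remote peer gets an immediate refusal, firewall or not, whereas a packet filter that silently drops traffic typically shows up as a timeout instead.

    import socket

    # Try to connect to a port on which (presumably) nothing is listening.
    # Port 50007 is arbitrary; pick any port you know is unused.
    try:
        socket.create_connection(("127.0.0.1", 50007), timeout=2)
        print("connected: something is listening on that port")
    except ConnectionRefusedError:
        # The OS answers for the closed port: there is no "house" behind this "door".
        print("connection refused: no service is listening")
    except socket.timeout:
        # A firewall that silently drops packets usually shows up as a timeout
        # rather than an immediate refusal.
        print("timed out: packets are probably being filtered")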

  • KalciferOP (5 months ago)

    for example detect which network was connected to and re-configure the packet filter.

    Firewalld is capable of this – it can switch zones depending on the current connection.
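
    For illustration, a rough sketch of how that zone switching can be wired up when NetworkManager is in use (the connection name “HomeWifi” is made up, and this assumes both firewalld and NetworkManager are running):

        import subprocess

        # Hypothetical connection name; list yours with "nmcli connection show".
        WIFI_CONNECTION = "HomeWifi"

        # Assign this connection to firewalld's "home" zone. firewalld then applies
        # that zone's rules whenever the connection comes up, and falls back to the
        # default zone (usually "public") on unknown networks.
        subprocess.run(
            ["nmcli", "connection", "modify", WIFI_CONNECTION, "connection.zone", "home"],
            check=True,
        )

        # Show which zones are currently active and on which interfaces.
        subprocess.run(["firewall-cmd", "--get-active-zones"], check=True)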

    And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don’t see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.

    There does still exist the risk of a vulnerability being pushed to whatever software that you use – this vulnerability would be essentially out of your control. This vulnerability could be used as a potential attack vector if all ports are available.
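
    As an aside, it’s fairly easy to audit which “doors” actually exist on a machine by listing the listening sockets and their owning processes. A minimal sketch using the third-party psutil package (it may need elevated privileges to resolve processes it doesn’t own):

        import psutil  # third-party: pip install psutil

        # Print every TCP socket in LISTEN state together with the process that
        # owns it, i.e. the "doors" that actually exist on this machine.
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status == psutil.CONN_LISTEN:
                owner = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                print(f"{conn.laddr.ip}:{conn.laddr.port}  {owner}")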

    I’m currently learning about Web Application Firewalls. Maybe I’ll put ModSecurity in front of my Nextcloud.

    Interesting! I haven’t heard of this. Side note, out of curiosity, how did you go about installing your Nextcloud instance? Manual install? AIO? Snap?

    I’m personally not a friend of that kind of legislation. If somebody uses my tools to commit a crime, I don’t think I should be held responsible for that.

    It would be a rather difficult thing to prove – one could certainly just make the argument that you did, namely that someone else who was on the guest network did something illegal, and I would argue that it would be quite difficult to prove otherwise.

    • @[email protected]
      link
      fedilink
      2
      edit-2
      5 months ago

      There does still exist the risk of a vulnerability being pushed to whatever software that you use – this vulnerability would be essentially out of your control. This vulnerability could be used as a potential attack vector if all ports are available.

      But this is a really difficult thing to protect from. If someone gets to push code on my computer that gets executed, I’m entirely out of luck. It could do anything that that process is allowed to do: send data, mess with my files and databases, or delete stuff. I’m far more worried about the latter. Sandboxing and containerization are ways to mitigate this. And it’s the reason why I like Linux distributions like Debian. There are always the maintainers and other people who use the same software packages. If somebody should choose to inject malicious code into their software, or it gets bought and the new company adds trackers to it, it first has to pass the (Debian) maintainers. They’ll probably notice once they prepare the update (for Debian). And it gets rolled out to other people, too. They’ll probably notice and file a bug report. And I’m going to read about it in the news, since it’s something that rarely happens at all on Linux.

      On the other hand it could happen not deliberately but just be vulnerable software. That happens, and it can be and is exploited in the real world. I’m also forced to rely on other people to fix that before something happens to me. Again, sandboxing and containerization help to contain it. And keeping everything updated is the proper answer to that.

      What I’ve seen in the real world is a CMS being compromised. Joomla had lots of bugs, and WordPress, too. If people install lots of plugins and then also don’t update the CMS, let it rot and don’t maintain the server at all, after like 2 years(?) it can get compromised. The people who constantly probe all the servers on the internet will at some point find it, inject something like a rootkit, and use the server to send spam or upload viruses or phishing sites to it. You can pay Cloudflare $200 a month and hope they protect you from that, or use a Web Application Firewall and keep that up-to-date yourself, or just keep the software itself up-to-date. If you operate some online services and there is some rivalry going on, it’s bound to happen faster. People might target your server and specifically scan it for vulnerabilities way earlier than the drive-by attacks get hold of it. Ultimately there is no way around keeping a server maintained.

      how did you go about installing your Nextcloud instance?

      I have two: YunoHost powers my NAS at home. It contains all the big files and important vacation pictures etc. YunoHost is an AIO solution(?), an operating system based on Debian that aims at making hosting and administration simple and easy. And it is. You don’t have to worry too much about learning how to do all of the stuff correctly, since they do it for you. I’ve looked at the webserver config and so on and they seem to follow best practices: disallow old HTTPS ciphers, activate HSTS and all the stuff that makes cross-site scripting and similar attacks hard to impossible.

      And I pay for a small VPS. I use Docker and docker-compose on it. I read all the instructions and configured the reverse proxy myself. I also do some experimentation there in other Docker containers, try new software… But I don’t really like maintaining all that stuff. Nextcloud and Traefik seem somewhat stable, but I have to regularly fiddle with the docker-compose files of some other projects that change after a major update. I’m currently looking for a solution to make that easier and planning to rework that server. And then also run Lemmy, Matrix chat and a microblogging platform on it.

      It would be a rather difficult thing to prove

      And it depends on where you live and the legislation there. If someone downloads some Harry Potter movies or uses your WiFi to send bomb threats to their school… They’ll log the IP and then contact your ISP, which is forced to tell them your name. You’ll get a letter or a visit from the police. If they proceed and sue you, you’ll have to pay a lawyer to defend yourself and it’s a hassle. I think I’d call it coercion, but even if you’re in the right, they can temporarily make your life a misery. In Germany, we have the concept of “Störerhaftung” on top. Even if you’re not the offender yourself, being part of a crime willingly (or causally adequate(?))… You’re considered a “disruptor” and can be held responsible, especially to stop that “disruption”. I think it was meant to get at people who technically don’t commit crimes themselves, they just deliberately enable other people to do it. For some time it got applied to WiFi here. The constitutional court had to rule and now I think it doesn’t really apply to that anymore. It’s complicated… I can’t sum it up in a few sentences. Nowadays they just send you letters, threatening to sue you and demanding a hundred euros for the lawyer who wrote the letter. They’ll say your argument is a defensive lie and you did it. Or you need to tell them exactly who did it and rat out your friends/partner/kids or whoever did it. Of course that’s not how it works in the end, but they’ll try to pressure people and I can imagine it is not an enjoyable situation to be in. I’ve never experienced it myself; I don’t download copyrighted stuff from the obvious platforms that are bound to get you in trouble, and neither does anyone else in my close group of friends and family.

      • KalciferOP (5 months ago)

        But this is a really difficult thing to protect from. If someone gets to push code on my computer that gets executed, I’m entirely out of luck. It could […] send data […].

        Not necessarily. An application layer firewall, for example, could certainly get in the way of it trying to send data externally.

        On the other hand it could happen not deliberately but just be vulnerable software.

        Are you referring to a service leaving a port open that can be connected to from the network?

        And then also run Lemmy, Matrix chat and a microblogging platform on it.

        I’m definitely curious about the outcome of this – Matrix especially. Perhaps the newer/alternative servers function a bit better now, but I’ve heard that, for Synapse at least, Matrix can be very demanding on hardware to run (apparently the issues mostly arise when one joins a larger server).

        You’re considered a “disruptor” and can be held responsible, especially to stop that “disruption”.

        Interesting. Do you mean “held responsible” to simply stop the disruption, or “held responsible” for the actions of/damage caused by the disruption?

        • @[email protected]
          link
          fedilink
          25 months ago

          I think an Application Layer Firewall usually struggles to do more than the utmost basics. If, for example, my Firefox were to be compromised and started not only talking to Firefox Sync to send the history to my phone, but also sending my behavior and all the passwords I type in to a third party… How would the firewall know? It’s just random outgoing encrypted traffic from its perspective. And I open lots of outbound connections to all kinds of random servers with my Firefox. The same applies to other software. I think such firewalls only protect you when you run a new executable that you know has no business sending data. If software you actually use were susceptible to attack, the firewall would need to ask you after each and every update of Firefox whether it’s still okay, and you’d really need to verify the state of your software. If you just click on ‘Allow’ there is no added benefit. It could protect you from connecting to known malicious addresses and from people smuggling new, dedicated malware onto your computer.

          I don’t want to say doing the basics is wrong or anything. If I were to use Windows and lots of different software, I’d probably think about using an Application Layer Firewall. But I don’t see a real benefit for my situation… However, I’d like Linux to do some more sandboxing and asking for permissions on the desktop. Even if it can’t protect you from everything and may not be a big leap for people who just click ‘Accept’ for everything, it might be a good direction and encourage finer granularity in the permissions and the ways software ties together and interacts.

          it could […] just be vulnerable software

          I mean your webserver or CMS or your browser has a vulnerability, and that gets exploited and you get hacked. The webserver has open ports anyway in order to be able to work at all. The CMS is allowed to process requests and the browser is allowed to talk to websites. A maliciously crafted request or answer to your software can trigger it to fail and do something that it shouldn’t do.
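
          As a toy illustration of that failure mode (entirely hypothetical code, not taken from any real project): a file-serving handler that trusts the requested path can be steered into reading files it was never meant to expose.

              from pathlib import Path

              BASE_DIR = Path("/var/www/files")  # hypothetical document root

              def serve_file_vulnerable(requested: str) -> bytes:
                  # BUG: "../" sequences in the request escape BASE_DIR, so a request
                  # like "../../etc/passwd" reads an arbitrary file on the system.
                  return (BASE_DIR / requested).read_bytes()

              def serve_file_fixed(requested: str) -> bytes:
                  # Resolve the path and refuse anything outside the document root.
                  target = (BASE_DIR / requested).resolve()
                  if not target.is_relative_to(BASE_DIR.resolve()):
                      raise PermissionError("requested path escapes the document root")
                  return target.read_bytes()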

          […] Matrix

          Sure, I have a Synapse Matrix server running on my YunoHost. It works fine for me. I’m going to install Dendrite or the other newer one next. I wouldn’t complain if I could cut memory consumption and load down to a minimum.

          Do you mean “held responsible” to simply stop the disruption, or “held responsible” for the actions of/damage caused by the disruption?

          Yeah, the issue was that it meant both. You were part of the crime, you were involved in the causality and linked to the damages somehow. Obviously not to the full extent, since you didn’t do it yourself, but more than ‘don’t allow it to happen again’. Obviously that has consequences. And I think now it’s not that anymore when it comes to WiFi. I think now it’s just the first, plus they can ask for a fixed amount of money since, by your negligence, you caused their lawyer to put in some effort.

          • KalciferOP (5 months ago)

            If, for example, my Firefox were to be compromised and started not only talking to Firefox Sync to send the history to my phone, but also sending my behavior and all the passwords I type in to a third party… How would the firewall know?

            If it’s going to some undesirable domain or IP, then you can block the request for that application. The exact capabilities certainly depend on the application layer firewall in question, but this is, at least, possible with OpenSnitch.
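
            For reference, OpenSnitch persists such decisions as small JSON rule files (under /etc/opensnitchd/rules/, if I remember correctly). Roughly, a deny rule for one destination host looks something like the sketch below; the field names are from memory and should be checked against the OpenSnitch documentation, and the domain is made up. I believe scoping the rule to a single application is done by additionally matching on the process path.

                import json

                # Rough sketch of an OpenSnitch rule, written from memory.
                # Verify the field names against the OpenSnitch docs before relying on it.
                rule = {
                    "name": "deny-telemetry-example-com",
                    "enabled": True,
                    "action": "deny",        # "allow" and "reject" are the other actions
                    "duration": "always",
                    "operator": {
                        "type": "simple",
                        "operand": "dest.host",           # match on the destination hostname
                        "data": "telemetry.example.com",  # hypothetical domain to block
                    },
                }

                print(json.dumps(rule, indent=2))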

            It’s just random outgoing encrypted traffic from its perspective.

            For the actual content of the traffic, is this not the case with essentially all firewalls? They can’t see the content of the traffic if it is using TLS. You would need to somehow intercept the data before it is encrypted on the device. I’m not aware of any firewall that has such a capability.

            If you just click on ‘Allow’ there is no added benefit.

            The exact level of fine-grained control heavily depends on the application layer firewall in question.

            A maliciously crafted request or answer to your software can trigger it to fail and do something that it shouldn’t do.

            Interesting.

            I think now it’s just the first, plus they can ask for a fixed amount of money since, by your negligence, you caused their lawyer to put in some effort.

            I do, perhaps, somewhat understand this argument, but it still feels quite ridiculous to me.

            • @[email protected]
              link
              fedilink
              1
              edit-2
              5 months ago

              I think OpenSnitch can do it in roughly two different ways. Either you use an allow-list. That’s pretty secure, but it’ll severely interfere with how you’re used to browsing the internet. You’re going to allow Wikipedia and your favorite news sources, but you won’t be browsing Lemmy and just randomly clicking on articles and blogs, since you have to specifically allow them in the firewall first. Or you use a deny-list. That’s something like what Chrome does: it has a list of well-known malicious sites and it’ll ask you ‘Do you really want to visit that site? It spreads malware.’ It’ll add tremendously to security, but it won’t protect you entirely. Hackers frequently break into webservers to spread malware from new servers, ones that aren’t yet in the list of bad IPs. It’ll work for some time until the application firewall and the Chrome browser catch up, and then they’ll move on to a different server. You should definitely think about that and avoid being the millionth victim, however.
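
              The difference in default behavior is easy to state in code (purely illustrative, not OpenSnitch’s actual implementation; the host names are made up):

                  # Purely illustrative policy check, not OpenSnitch's actual logic.
                  ALLOW_LIST = {"wikipedia.org", "lemmy.ml"}  # hosts you explicitly trust
                  DENY_LIST = {"known-malware.example"}       # hosts known to be malicious

                  def allow_list_policy(host: str) -> bool:
                      # Default deny: anything not explicitly allowed is blocked,
                      # which is why casual browsing becomes tedious.
                      return host in ALLOW_LIST

                  def deny_list_policy(host: str) -> bool:
                      # Default allow: only known-bad hosts are blocked, so brand-new
                      # malicious servers slip through until the lists catch up.
                      return host not in DENY_LIST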

              I think we’re talking about vastly different concepts here. Desktop computers and servers, consumers and enterprises, are threatened in vastly different ways, and thus they need different solutions that handle the different threats. On a desktop computer, the main way of compromising it is getting people to click on something, or do whatever an official-looking e-mail instructs them to do. On a server that is meaningless. There aren’t that many random applications someone clicks on without thinking it through, and there is no e-mail client on the server. But on the other hand, you’re serving random people from all over the world. Your connections are different, too. And if someone wants to upload their malware somewhere or send spam… They’re going to go for a server and not a desktop computer.

              About the “Störerhaftung”: I think so, too. It’s been ridiculous, and in the end the courts also ruled it’s against the law. The 100€ is also not something you have to pay. They want it, and it’s just a way to settle out of court. If you pay them, they’ll promise to forget about this one time and not care about who did it. I think these kinds of settlements exist all around the world and they’re not illegal. And the copyright holders have to find some means of pressuring people, even if it’s a bit shady, since such copyright offenses aren’t a major crime and courts are often busy with more important stuff.