I’m about to install OPNsense for the first time. I see some people run OPNsense on Proxmox and pass through a PCIe network card. Besides the ability to back up and restore, are there other advantages to this?

My planned OPNsense box is an old Dell OptiPlex. It has the usual Ethernet port on the motherboard as well as a 4-port PCIe network card that I added. So I’d probably use the PCIe network ports for OPNsense and reserve the onboard Ethernet port for troubleshooting in case I royally mess up.

I’m still a Proxmox newbie, but I think I can manage the PCIe passthrough. I’m just not sure what other complications it will introduce to my OPNsense and networking learning curve, so I thought I’d ask first and see whether the advantages or disadvantages would push me one way or the other. I’m afraid of locking myself out of OPNsense through incorrectly configured networking as I’m learning.

  • nolo_me@alien.topB · 1 year ago

    I’ve always been a fan of running a router/firewall on bare metal. Don’t like the idea that bouncing my hypervisor for maintenance or a kernel upgrade takes down my whole network.

  • Not_your_guy_buddy42@alien.topB · 1 year ago

So, I run OPNsense in a VM on Proxmox. There is only one drawback I’m aware of: when I update the Proxmox host itself, I need to attach a monitor/keyboard/mouse to it. Theoretically, if the upgrade were fully automatic and never needed any intervention or user input, it would be possible without them. In reality, though, the upgrade might need user input while the OPNsense VM isn’t booted, i.e. the network is down, i.e. I need direct access to the Proxmox host.

  • marc45ca@alien.topB · 1 year ago

Virtualising means you can make better use of the resources on one system, rather than having two systems and dedicating one to a specific task.

On the other hand, you can bork the hypervisor, end up without internet, and possibly become the family’s public enemy #1 :)

But it’s generally pretty stable. I don’t use OPNsense, but I do have a virtualised router running Sophos XG. One NIC from the VM is tied to vmbr0, the main virtual bridge that connects my virtual machines to the rest of the network; its IP is my default gateway.

The second NIC is set up as PCIe passthrough and connects directly to my cable modem.

I could have bound this NIC to another vmbr and it would have worked just as well. However, there was some discussion in r/proxmox about performance impacts if you have a very fast internet connection (something to do with SR-IOV, IIRC).
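    For reference, a rough sketch of that setup in Proxmox CLI terms. The VM ID (100) and PCI address (0000:03:00.0) are placeholders — check your own with `lspci`:

    ```shell
    # Find the PCI address of the physical WAN NIC:
    lspci -nn | grep -i ethernet

    # Attach the LAN-side virtual NIC to the main bridge, vmbr0:
    qm set 100 --net0 virtio,bridge=vmbr0

    # Pass the physical WAN NIC through to the VM (IOMMU must be
    # enabled in the BIOS and on the kernel command line first):
    qm set 100 --hostpci0 0000:03:00.0
    ```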

  • Grass · 1 year ago

    I don’t recommend it, on the basis of it being absolutely obnoxious to configure and maintain. If anything goes wrong with the internet in my household, someone will wake me up at 4am to bitch about it, too. I tried Proxmox and xcp-ng, and with both I would run into cases where I couldn’t get into the management interface for any VM or the VM host. Connected directly, the monitor would be blank and the keyboard did nothing. I’d force a reset, never find a cause, and eventually it would happen again. Now I have a separate device with a probably overkill CPU and 5x the hard drive space I need (smaller drives were more expensive at the time), but no network or VM access problems.

  • keyzard@alien.topB · 1 year ago

    I run pfSense on a 2-node Proxmox “cluster” (in quotes because I don’t have quorum for automatic failover). Each host has a dedicated NIC for the firewall’s WAN port, attached to my modem, which is in bridge mode. When I need to do maintenance on the node hosting the firewall, I do a live migration to the other node. I drop one ping during the migration.

    Honestly, when I was designing it I didn’t think it would work…but here we are…lol.
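    A live migration like that can also be done from the Proxmox CLI; a minimal example, assuming a VM ID of 100 and a second node named pve2 (both placeholders):

    ```shell
    # Move the running firewall VM to the other node without shutting it down:
    qm migrate 100 pve2 --online
    ```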

    • beefandfoot@alien.topB · 1 year ago

      Nice. I’ll try that myself. Any tips you could share? I assume you have to use the same bridge name for the two interfaces on the two Proxmox nodes for the seamless migration.

      • keyzard@alien.topB · 1 year ago

        Yep, everything is identical across the nodes and I’m using ZFS pools for VM storage.

        I also have a dedicated NIC for cluster and replication traffic, so three NICs per host: WAN, LAN, and replication.
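        For anyone curious, scheduled replication like this can be set up with Proxmox’s `pvesr` tool; a sketch assuming VM ID 100 and a target node named pve2 (both placeholders):

        ```shell
        # Replicate VM 100's disks to pve2 every 15 minutes (requires ZFS storage):
        pvesr create-local-job 100-0 pve2 --schedule "*/15"

        # List the configured replication jobs:
        pvesr list
        ```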

        • beefandfoot@alien.topB · 1 year ago

          I am lost. What do you use the third NIC for? Do you use it to replicate pfSense or Proxmox configurations? If you migrate the pfSense VM when necessary, you don’t need to replicate its configuration. I must be missing something.

          • keyzard@alien.topB · 1 year ago

            Each of my important VMs’ disks replicates every 15 minutes to the second host as a “warm” recovery image. Also, during a migration, the VM’s hard drive and config are sent over the replication NICs, I believe.

            I suppose I don’t “need” the third NIC for replication, but old habits die hard.