I’m looking to upgrade some of my internal systems to 10 gigabit, and seeing some patchy/conflicting/outdated info. Does anyone have any experience with local fiber? This would be entirely isolated to within my LAN, to enable faster access to my fileserver.

Current existing hardware:

  • MikroTik CSS326-24G-2S+RM, featuring 2 SFP+ ports capable of 10GbE
  • File server with a consumer-grade desktop PC motherboard. I have multiple options for this one going forward, but all will have at least 1 open PCIe x4+ slot
  • This file server already has an LSI SAS x8 card connected to an external DAS
  • Additional consumer-grade desktop PC, also featuring an open PCIe x4 slot.
  • Physical access to run a fiber cable through the ceiling/walls

My primary goal is to have these connected as fast as possible to each other, while also allowing access to the rest of the LAN. I’m reluctant to use Cat6a (which is what these are currently using) due to reports of excessive heat and instability from the SFP+ modules.

As such, I’m willing to run some fiber cables. Here is my current plan, mostly sourced from FS:

  • 2x Supermicro AOC-STGN-i2S / AOC-STGN-i1S (sourced from eBay)
  • 2x Intel E10GSFPSR Compatible 10GBASE-SR SFP+ 850nm 300m DOM Duplex LC/UPC MMF Optical Transceiver Module (FS P/N: SFP-10GSR-85 for the NIC side)
  • 2x Ubiquiti UF-MM-10G Compatible 10GBASE-SR SFP+ 850nm 300m DOM Duplex LC/UPC MMF Optical Transceiver Module (FS P/N: SFP-10GSR-85, for the switch side)
  • 2x 15m (49ft) Fiber Patch Cable, LC UPC to LC UPC, Duplex, 2 Fibers, Multimode (OM4), Riser (OFNR), 2.0mm, Tight-Buffered, Aqua (FS P/N: OM4LCDX)

I know the cards are x8, but it seems that’s only needed to max out both ports. I will only be using one port on each card.
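
Back-of-the-envelope (assuming these cards are PCIe 2.0 parts like the X520 they’re based on, which I haven’t confirmed), an x4 link should be plenty for a single port:

```python
# Rough check that a PCIe 2.0 x4 link can feed one 10GbE port.
# Assumption: the AOC-STGN cards are 82599/X520-class parts, i.e. PCIe 2.0.

GT_PER_LANE = 5.0          # PCIe 2.0 raw rate per lane, GT/s
ENCODING_EFFICIENCY = 0.8  # 8b/10b line coding on PCIe 1.x/2.0

def usable_gbps(lanes: int) -> float:
    """Approximate usable bandwidth in Gbps, before protocol overhead."""
    return lanes * GT_PER_LANE * ENCODING_EFFICIENCY

print(usable_gbps(4))  # 16.0 -> covers one 10GbE port with headroom
print(usable_gbps(8))  # 32.0 -> what you'd want to max out both ports
```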

Are fiber keystone jacks/couplers (FS P/N: KJ-OM4LCDX) a bad idea?

Am I missing something completely? Are these even compatible with each other? I chose Ubiquiti for the switch SFP+ since MikroTik doesn’t vendor-lock, AFAICT.

Location: US

  • litchralee · 9 hours ago

    I’ll have to review your post in greater detail in a bit, but some initial comments: cross-vendor compatibility of xcvrs was a laudable goal, undermined only by protectionist business interests, and the result is that the only real way to validate compatibility is to try it.

    Regarding your x4 slot and the NICs being x8: does your mobo have the slot cut in such a way that it can accept a physical x8 card even though only the x4 lanes are electrically connected?

    For keystone jacks, I personally use them, but I try not to go wild with them: just like with electrical or RF connectors, each one adds some amount of loss, however minor. Having one keystone jack at each end of the fibre seems like it shouldn’t be an issue at all.
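
    To put rough numbers on it: 10GBASE-SR has a channel insertion-loss budget of roughly 2.6 dB, and a mated LC pair typically costs a few tenths of a dB. A quick sketch using typical published figures from memory (treat them as assumptions rather than gospel):

    ```python
    # Back-of-the-envelope 10GBASE-SR link budget for a short OM4 run with a
    # keystone coupler at each end. Figures are typical values, not measurements.

    CHANNEL_BUDGET_DB = 2.6    # 10GBASE-SR channel insertion loss allowance, dB
    FIBER_DB_PER_KM = 3.5      # OM4 attenuation at 850 nm, dB/km (typical spec)
    MATED_PAIR_DB = 0.3        # typical LC mated pair; TIA allows up to 0.75 dB

    def estimated_loss(length_m: float, mated_pairs: int) -> float:
        """Fibre attenuation plus connector losses for one channel."""
        return (length_m / 1000) * FIBER_DB_PER_KM + mated_pairs * MATED_PAIR_DB

    # 15 m run with roughly four mated pairs per channel:
    # NIC xcvr, a keystone at each end, switch xcvr
    print(f"{estimated_loss(15, 4):.2f} dB of {CHANNEL_BUDGET_DB} dB budget")  # ~1.25 dB
    ```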

    Final observation for now: this plan sets up a 10 Gb network with fibre, but your use-case for now is just for a bigger pipe to your file server. Are you expecting to expand your use-cases in future? If not, the same benefit can be had by a direct fibre run from your single machine to your file server. Still 10 Gbps but no switch needed in the middle, and you have less risk of cross vendor incompatibility.

    I’m short on time rn, but I’ll circle back with more thoughts soon.

    • Nollij@sopuli.xyzOP · 7 hours ago

      Thanks for the quick reply. The available x4 slots are all physically x16, but electrically x4.

      While my use case today is pretty narrow, I’d rather not mess with custom network settings just to make it all cooperate on an otherwise completely flat network. The file server is running Ubuntu, and the desktop is currently running VMware ESXi. In the future, I expect to replace it with something else. I did verify that the Intel network chipset is on the HCL.

      • litchralee · 5 hours ago

        Ok, I’m back. I did some quick research and it looks like that Mikrotik switch should be able to do line-rate between the SFP+ ports. That’s important because if it were somehow doing non-hardware switching, the performance would be awful. That said, my personal opinion is that Mikrotik products are rather unintuitive to use. My experience has been with older Ubiquiti gear and even older HP Procurve enterprise switches. To be fair, though, prosumer products like Mikrotik’s have to make some tradeoffs compared to the money-is-no-object enterprise space. But I wasn’t thrilled with the CLI on their routers; maybe the switches are better?
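
        Once it’s all cabled up, an easy sanity check that you’re actually switching at line rate is a throughput test between the two machines. iperf3 is the proper tool, but if you’d rather not install anything, here’s a crude single-stream sketch (a single Python stream may not fully saturate 10 Gbps, so treat the result as a floor; the port is arbitrary and the hostname is whatever you point it at):

        ```python
        # Crude single-stream throughput check between two hosts, a rough
        # stand-in for iperf3. Run with no arguments on the file server,
        # then run with the server's hostname/IP on the desktop.

        import socket, sys, time

        PORT = 5201        # same default as iperf3, purely for familiarity
        CHUNK = 1 << 20    # 1 MiB buffers
        SECONDS = 10       # how long the sender transmits

        def serve() -> None:
            """Receiver: accept one connection and count the bytes."""
            with socket.create_server(("", PORT)) as srv:
                conn, addr = srv.accept()
                with conn:
                    total, start = 0, time.time()
                    while (buf := conn.recv(CHUNK)):
                        total += len(buf)
                    gbps = total * 8 / (time.time() - start) / 1e9
                    print(f"{gbps:.2f} Gbps from {addr[0]}")

        def send(host: str) -> None:
            """Sender: stream zeros at the receiver for SECONDS seconds."""
            payload = bytes(CHUNK)
            with socket.create_connection((host, PORT)) as conn:
                deadline = time.time() + SECONDS
                while time.time() < deadline:
                    conn.sendall(payload)

        if __name__ == "__main__":
            # no argument -> receiver; with an argument -> sender to that host
            serve() if len(sys.argv) == 1 else send(sys.argv[1])
        ```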

        Moving on, that NIC appears to be equivalent to an Intel x520, so drivers and support should exist for any mainline OS you’re running. For 10 Gbps and beyond, I agree that you want to go with pluggable modules when possible, unless you absolutely know that the installation will never run fibre.

        I will note that 10 Gbps over Cat 5e, while not mentioned in the standard and thus officially undefined behavior, has been reported to work over short distances, in the range of 15-30 meters by some accounts. The twisted-pair Ethernet specs only call out the supported wire types by their category designation, but ultimately it’s the signal integrity of the differential signals that matters. Cat 3, 5, 5e, 6, etc. are just increasingly better at maintaining a signal over distance. This being officially undefined just means that if it doesn’t work, the manufacturer told no lie.

        But you’re right to avoid 10 Gbps twisted pair, as the xcvrs are expensive, thermally ridiculous, power hungry, and themselves have length limits shorter than what the spec allows, because it’s hard to stuff all the hardware into an SFP+ pluggable module. Whereas -SR optics are cheap and DACs even cheaper (when the distance is short enough). No real reason to adopt twisted pair 10 Gbps if fibre is an option.

        That said, I didn’t check the compatibility of your selected SR transceiver against your NICs and switch, so I’ll presume you’ve done your homework for that.

        Going back to the x8 card in an electrically x4 slot: there’s a quirk in the PCIe spec where the only two link widths that are mandatory to support are 1) the physical card width, and 2) the x1 width. No other widths are necessarily supported. So there’s a small possibility that the NIC will only connect at PCIe x1, which would severely limit your performance. But this is kinda pathological, and 9 out of 10 PCIe cards will do graceful width reduction beyond what the PCIe spec demands. And being an x520 variant, I would expect the driver to have no issue with that; crummy PCIe drivers can break when their bad assumptions fall through.
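
        Once the card is in, it’s worth confirming what width it actually trained at. On Linux the PCIe core exposes this through sysfs; something like this will show it (the PCI address below is a placeholder, grab yours from lspci and prefix the 0000: domain):

        ```python
        # Confirm the negotiated PCIe link speed/width of the installed NIC.
        # Assumes Linux; find your card's address with `lspci | grep -i ethernet`.

        from pathlib import Path

        PCI_ADDR = "0000:03:00.0"  # hypothetical address, replace with your NIC's

        dev = Path("/sys/bus/pci/devices") / PCI_ADDR
        for attr in ("current_link_speed", "current_link_width",
                     "max_link_speed", "max_link_width"):
            print(f"{attr}: {(dev / attr).read_text().strip()}")

        # current_link_width of 4 against max_link_width of 8 is the graceful
        # reduction you want; a 1 here would explain badly capped throughput.
        ```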

        Overall, I don’t see any burning red flags with your plan. I hope you’ll update us with new posts as things progress!