I’m proud to share a status update on XPipe, a shell connection hub and remote file manager that lets you access your entire server infrastructure from your local machine. It works on top of your installed command-line programs and does not require any setup on your remote systems. So if you normally use CLI tools like ssh, docker, kubectl, etc. to connect to your servers, you can simply run XPipe on top of them.

Since the last status update a few months ago, a lot has changed thanks to the community sharing feedback and reporting issues. Overall, the project is now in a much more stable state, as the accumulated issues have been fixed, and many feature requests have been implemented.

Large connection sets

A lot of work went into improving the application for large-scale use cases where you’re managing hundreds of connections. This includes hierarchical organization features to group all your connections into categories and subcategories. There have also been multiple processing and memory optimizations to keep the user experience smooth at all times; as a side effect, the memory footprint has gone down as well. For people who have to use a potato as their workstation, there’s now a performance mode setting that disables any visual effects that aren’t strictly required.

You can also now tag connections by color for organizational purposes, which helps when many connections are open in the file browser and in terminals at the same time. These colors are used to identify tabs everywhere within XPipe and also outside of it, for example in terminal titles via Unicode color symbols.

Connections

A new scripting system

XPipe 1.7 comes with a new scripting system, so you can now take your shell environment everywhere. The idea is to create modular and reusable shell scripts in XPipe that you can then apply to many different use cases.

You can set certain scripts to run on init for every connection, independently of your profile files, allowing you to set up a consistent environment across all remote systems without any manual setup. In addition, you can choose to bring scripts to all your remote systems. XPipe will then automatically copy these scripts to a target system, update them when needed, and put them in your PATH so that you can call them from anywhere.
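
To give a rough idea, such a script is just an ordinary shell script. Here is a minimal sketch of a hypothetical init script (the file name and contents are made up for illustration; this is not one of the scripts shipped with XPipe):

  # xpipe-defaults.sh - hypothetical example of a reusable init script
  # Runs on init so every connected system gets the same basic environment.
  export EDITOR=vi
  export HISTSIZE=10000
  alias ll='ls -lah'
  # Keep it idempotent, since it may run for every new shell session.
  mkdir -p "$HOME/.local/bin"

How a script gets assigned to connections is configured within XPipe itself; the script body is just portable shell.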

As of now, there is one set of predefined scripts included for enabling the starship prompt in your shells, mainly as a proof of concept. What you will use the scripting system for is up to you. If you like, you can contribute scripts to be included by default.

Scripts

Other news

  • You can now sync your connection configurations with your own remote git repository

  • You can create fully customized SSH connections by using the OpenSSH config format within XPipe

  • Additional actions for containers have been added, such as attaching to a container or printing the live logs of a container in a terminal session

  • A transparency slider has been added so that you can make all windows partially transparent just as you like

  • Support for many more terminals and text editors across all platforms has been added

  • Support for BSD systems and special login shells like pfSense and OPNsense has been added

  • There’s now support for opening an SSH connection in your default installed SFTP client or in Termius

  • The .deb and .rpm releases now correctly report all required dependencies, so you can install them on embedded systems or WSL2g without any hassle

  • There are now ARM releases for Linux

  • Support for VMware desktop hypervisors has been added

  • There have been many performance improvements reducing startup time, memory usage, file browser loading times, and more

  • The homepage at https://xpipe.io/ got an upgrade

  • Of course, a lot of bugs have been fixed across the board

Going full-time

The messages I have received and the demand for XPipe so far have convinced me that there is a market for developing XPipe full-time and financing it through special commercial and enterprise plans for interested customers. These plans essentially cover support for enterprise systems and tools that you normally don’t find outside of enterprises.

This will improve development speed and quality, as I can now fully focus on creating the best possible application. The scope is very small and only involves me, so there are no investors or other employees. This drastically lowers the break-even point compared to most other tools and allows me to implement a very lenient commercialization model.

Essentially, you can use most current features without any limitation for free. Furthermore, most upcoming features will also be included in the free version. The open-source model and license also won’t change. The only features that require a license are integrations for enterprise systems. For example, if you’re trying to connect to a licensed RHEL system or an OpenShift cluster, it will ask you to buy a license. Conversely, with a Rocky Linux system and a k3s cluster, you can use everything for free. These commercial-exclusive implementations will probably not be included in the repository though. Other than that, there are no restrictions.

Outlook

So if you gave this project a try a while ago, or if it sounds interesting to you, check it out on GitHub! There are still more features to come in the near future. I also appreciate any kind of feedback to guide me in the right development direction. There is also a Discord and Slack workspace for any kind of discussion.

Enjoy!

  • rentar42

    This looks really interesting.

    I don’t mind the commercialization at all and think it’s actually a good sign for an open source project to have a monetization strategy to be able to hang around.

    But why do I have to agree to a EULA on an Apache-licensed piece of software? I understand that for the commercial features that might be necessary, but in that case could we get a separate installer for “this is all Apache licensed, no need for a EULA”?

    Additionally, the contribution file mentions that “some components are only included in the release version and not in this repository.” What are these components? Are they necessary for the basic core functionality?

    • @crschnickOP

      In summary, there are a few components not included in the public repository, mainly because in practice it is very difficult to get people to pay for a 100% open-source tool when they can just clone it and remove any license requirement in a few lines. So it is not a fully Apache-licensed application; its core is. There is only one release version, so it is difficult to provide a separate Apache-only installer, mainly for technical implementation reasons: some codebases can’t be cleanly split into free and non-free parts that can be shipped separately. The components that are not included are the license handling implementation, the low-level shell process handling implementation, and the CI/CD scripts for distribution.

      The EULA is just standard terms like don’t try to circumvent the license requirement, if you buy a license don’t share it with other people, some warranty and liability stuff, etc.

      If you build a development version from source, it requires another xpipe installation to be present so that it can use some of the shipped components from it. But you can fully run and modify that development version. They are not necessary for basic core functionality but it doesn’t work without it as the license requirement could be disabled easily then as I mentioned before.

      Overall I think this split is the best solution considering all factors. I understand that some open-source proponents don’t like that. But I think since the application core is open source, it still has the good effect of establishing trust because anyone can take a look at how your data is handled internally, which is especially important in this context where a lot of sensitive information is used.

      • rentar42

        The EULA is just standard terms like don’t try to circumvent the license requirement, if you buy a license don’t share it with other people, some warranty and liability stuff, etc.

        Yes, I know. I actually read it (which is rare) and it’s mostly sensible stuff. The “no reverse engineering” clause just felt weird in something that claims to be “mostly open source”.

        In the end I find it slightly misleading to call this open-core when the app with just the non-commercial features can’t be built in full from the published source.

        They are not necessary for basic core functionality but it doesn’t work without it as the license requirement could be disabled easily then as I mentioned before.

        I don’t quite understand this argument. If I can build a development version, I can run any and all code in the repo (while providing an existing xpipe installation), and with enough criminal energy I could presumably ship this, so how exactly does this requirement prevent that?

        In other words: if the only way to access the commercial features without a license is by doing something illegal then … that’s not really adding much burden, is it?

        In the end I’m probably just one of the open-source proponents that don’t like that, and that’s fine. Not everyone needs to agree with everyone, there’s a lot of space here where reasonable minds can disagree. I just think that claiming “the main application is open source” when it can’t be built purely from the source is a bit misleading.

        • @crschnickOP

          I see your points. In the end it boils down to the fact that there is no clear split between free and paid features in the codebase itself, due to the chosen commercialization model. The paywall that is in place right now is mostly artificial because the code is the same for all systems. So even if I wanted to, I could not implement the classic open core model with a fully open source base version. I could have started out with a different approach, e.g. only locking certain features behind a license rather than certain remote systems, as is currently done. That would probably have allowed me to implement the more classic open core model. But the current model also has its advantages in other areas.

          You can just ship your own version of the repo if you want, due to the Apache license. To properly run it, however, the user would still need the regular xpipe installation, which contains some parts that are required to make full use of it. I think the term basic core functionality can be interpreted differently here. If you are talking about being able to use all the nice features that make xpipe stand out, then yes, these non-open-source components are necessary for core functionality. If you are just talking about being able to run the application and do limited things with it, then they are not.

          Yeah, maybe the term open core is not the best way to describe it as it doesn’t entirely fit the pattern. I’m open to better suggestions where I can still somehow highlight that most of the application is open source (in terms of LOC, it is around 90% in that repo).

        • @[email protected]

          The deal breaker for me is that it seems the low-level component that would interface with the shells (presumably managing credentials in some way) is closed source and off-repo. That’s a big red flag for me, no matter how benign the intention.

          • @crschnickOP

            Yeah, I can understand why some people feel that way. Originally this closed part covered only a very small area, but due to necessary subclassing of that implementation, it kind of evolved into the whole shell handling interface. I always wanted to refactor that aspect and decouple it so that these parts could be included in this repository, but never got around to it.

            Maybe in the future this can be properly addressed, because it’s more a matter of a structure that wasn’t well thought out than of hiding crazy secret implementation details. The whole project’s vision moved around quite a lot, and most of it was conceived before there was even a thought of trying to sell it.

            • @[email protected]

              A better alternative would be to separate the core open source app from any premium, proprietary add-on features, as the developer hinted at here.

              As someone else pointed out, it’s difficult to agree that this app follows an open source model when the open source portion of it is essentially non-functional and requires the closed source components to be of any practical use. Until that separation occurs, this isn’t really open source; you’re trusting a stranger on the internet with your (or your client’s) network credentials.

              Barring any similar apps, I’ll stick to my password manager and terminal.

  • @[email protected]

    Would this let me do something like SSH to a bastion host, elevate privs with sudo, and SSH forward from there, then elevate privs again on the final target I’m trying to get to? Maybe do that on 100 servers at the same time?

    Back half a decade ago, my team of DBAs and I would have killed for something like that.

    Sorry if I’m the “can it do this weird and unnecessary thing” guy, but it really looks like a dream come true if it’s what I think it is

    • @atzanteol

      Honest question - why would you elevate privs on the bastion?

      You can automatically use a bastion host with an SSH config entry as well in case you didn’t know:

      # connections to target.example.com automatically hop through the bastion
      Host target.example.com
        User username
        ProxyJump username@bastion.example.com

      Then you just ssh target.example.com. Port forwarding is sent through as well.
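
      If you need to chain more than one hop, ProxyJump also accepts a comma-separated list of jump hosts. A rough sketch with placeholder hostnames:

      Host final.example.com
        User username
        # hops are traversed left to right
        ProxyJump username@bastion1.example.com,username@bastion2.example.com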

      • @[email protected]

        You’re right it should work like that, but I remember trying it, and it didn’t because of some weird security policy.

        It is a very good tip though.

    • @crschnickOP

      From your description I would say yes.

      You always have to fiddle around a bit with SSH jumps and forwards, as there are two different ways in xpipe to handle that. You also have to take care of your authentication, for example with agent forwarding if you use keys. But I’m confident that you can make this work with the new custom SSH connections in xpipe, as those allow you to do basically anything with SSH.
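
      For a rough idea, a custom SSH connection in xpipe takes the standard OpenSSH config format, so something along the lines of the following sketch (hostnames and the forwarded port are placeholders, and the sudo elevation on each hop is still up to the systems themselves) should be close:

      Host db-target
        HostName target.internal.example.com
        User username
        ProxyJump username@bastion.example.com
        # forward the agent so key auth also works on later hops
        # (only do this if you trust the intermediate hosts)
        ForwardAgent yes
        # example local forward to reach a service behind the target
        LocalForward 5432 localhost:5432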

  • 𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶

    I’m checking this out to see if it’s useful to me. I can see where being able to drop straight into a shell on a docker container would be handy. My only real gripe is that I can’t use it to connect to my free-tier Oracle Cloud VMs, because they deploy Oracle Linux out of the box.

    I don’t begrudge you wanting to make a living from your work. It’s just frustrating.

    I am going to try and live in it for a week or two and we’ll see if it sticks.

    • @crschnickOP

      Yeah, the commercialization model is not perfect yet. Ideally the community edition should include all the normal features required for personal use. Would that be only one machine to connect to, or many? I was planning to experiment with allowing, in the community version, a few connections that would otherwise require a license.

  • @[email protected]

    Some indication of how this is different from a VPN or remote file system would be helpful.

    • @crschnickOP

      It’s not really related at all.

      It is basically a graphical wrapper around your CLI tools like ssh, docker, kubectl, and more. It gives you the features you know from graphical SFTP clients, but supports many more types of connections and lets you use your favourite terminal and editor for your remote connections.

      • @[email protected]

        Ah thanks. I’m a fogey and am used to doing that stuff from the command line, but that’s just me. Good luck with the project!