The linked post shows how most non-tech people’s understanding of email is very different from that of most people here.

  • OpenStars@piefed.social · 3 hours ago

    Almost every single one of my posts has had major issues, even those from other instances. E.g., Rimu messaged me about this one that did not federate for 2-3 days and was consequently seen by very few people. And here’s one from a different instance that made it to its destination on [email protected], but from its originating server I could see none of the comments, and I had to respond from a third account involved in that three-way attempt at communication (a post talking about such federation issues on that same server). So to be very clear: I am not saying that instances running PieFed software are having issues, but rather that the issues lie with Lemmy, regardless of which software an instance runs.

    • hendrik@palaver.p3x.de · 3 hours ago

      Uh, right. Sorry, I did not notice you were from Piefed… I was talking about the times when we had the borked Lemmy updates… Did you ever debug or resolve your issues? Is there a way to tell something didn’t federate? And is this an issue specific to Piefed? Or to the whole Fediverse? I’m not sure if I’m affected. I occasionally check my posts from another account and it always seems okay. But I mean I don’t do it very often.

      • OpenStars@piefed.social · edited · 2 hours ago

        Yes, I’ve often chased it down to some degree. The one Rimu messaged me about was near a time of great instability on piefed.social, and I got a bunch of gateway errors even just trying to reach the site as a user (nothing to do with posting, I mean). This instance has calmed down a ton since then and works perfectly these days, but things happen from time to time. Likewise the incident with StarTrek.Website that I mentioned and linked to above.

        Other occurrences have still other causes - e.g., a few of them in one community seem to have been due to my posts getting “locked”. I have no idea why - perhaps the new mod was just fat-fingering the button? I did not ask. But now the vote counts vary GREATLY (192 vs. 183 vs. 98 vs. 0, etc.) depending on which instance you view the post from. If you want to test for yourself, a good one to use is https://piefed.social/post/330559 - though I notice that the (fairly recently created) community [email protected] does not appear at all on your instance.

        The primary cause, though, was a limitation in how the ActivityPub protocol was implemented in the Lemmy codebase, combined with nobody having anticipated that ~80% of the entire Lemmy-based Fediverse would concentrate itself onto a single server, Lemmy.World. The way it works is that every “action” - a post, a comment, an upvote or downvote - gets federated out to each other instance worldwide one at a time, at a rate of at most 1 per second. So if the round-trip time between Lemmy.World and another server approaches that one-second budget, the queue of actions waiting to federate falls behind and takes ever longer to catch up. After falling more than a week behind it gives up entirely, but in the meantime an action can be delayed for days. Poor Aussie.Zone - geographically distant from the EU - has been having a really hard time of it (https://aussie.zone/post/13429731).
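        To make the arithmetic concrete, here is a minimal sketch (my own illustration, not Lemmy’s actual code) of why one-at-a-time delivery falls behind once the round trip to a remote server eats most of the one-second budget:

```python
# Sketch: sequential federation, one delivery per round trip.
# If activities are produced faster than 1/RTT, the queue grows without bound.

def backlog_after(seconds: int, activities_per_sec: float, rtt_sec: float) -> float:
    """Approximate queue length after `seconds`, assuming each delivery
    must finish its full round trip before the next one is sent."""
    produced = activities_per_sec * seconds
    delivered = seconds / rtt_sec  # at most one delivery per round trip
    return max(0.0, produced - delivered)

# A busy instance producing 1 activity/sec, delivering to one distant server:
print(backlog_after(3600, 1.0, 0.9))  # RTT 0.9 s: queue stays empty
print(backlog_after(3600, 1.0, 1.5))  # RTT 1.5 s: 1200 activities behind after an hour
```

        At 1.5 s per round trip the server can only deliver ~0.67 activities per second, so every hour it falls another ~1200 activities behind - which is roughly the situation a geographically distant instance ends up in.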

        Fortunately this problem has already been fixed in the Lemmy codebase by allowing multiple actions to be sent in parallel (https://github.com/LemmyNet/lemmy/pull/4623). What causes the continued problems nowadays is that Lemmy.World is still awaiting the upgrade to 0.19.6 to make use of that change (release notes) - actually 0.19.7 is already out too, having come less than a week after the former and containing just a few bugfixes (release notes). When Lemmy.World upgrades to one of those, these systemic issues should calm down a great deal, if not entirely.
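        The intuition behind the parallel-sends fix (again just my sketch of the idea, not the actual implementation) is that with N deliveries in flight at once, throughput to a remote instance becomes N / RTT instead of 1 / RTT:

```python
# Sketch: upper bound on delivery throughput to one remote instance
# when `concurrent_sends` requests can be in flight simultaneously.

def max_throughput(concurrent_sends: int, rtt_sec: float) -> float:
    """Activities deliverable per second, latency-bound."""
    return concurrent_sends / rtt_sec

print(max_throughput(1, 1.5))  # sequential: ~0.67/s, cannot keep up with 1/s
print(max_throughput(8, 1.5))  # 8 in flight: ~5.3/s, the backlog drains
```

        So even a modest degree of parallelism turns a queue that grows forever into one that drains, which is why the upgrade matters so much for Lemmy.World specifically.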

        Afaik, there is nothing particularly special about instances running PieFed having trouble connecting to Lemmy instances. In fact, piefed.social seems rather stable compared to many (even most!) others - particularly StarTrek.Website, which has poor uptime. https://piefed.fediverse.observer/list reports that piefed.social has a remarkable uptime of 99.89%, which I very much believe, compared to the aforementioned StarTrek.Website’s 98.20% - although a year ago, when I left it, it must have been significantly worse, b/c back then it would sometimes be down for days, and every single action could take a minute or so. Your own instance reports 98.60% - does that sound right?

        Rather, it is Lemmy instances - particularly smaller ones (e.g. https://lemmings.world/post/14171987) - that have trouble federating specifically with Lemmy.World.

        And then recently there were a bunch of instances having trouble connecting to lemmy.ml too (https://lemmy.world/post/22196027) - though this one is more expected, as that instance is administered by the developers of the Lemmy codebase, and thus is where they beta-test all of their new code prior to deploying it across the entire Fediverse. Sometimes that leads to some REALLY odd behaviors, such as entries disappearing from the modlog, which was extremely concerning to people, but it is par for the course with that instance, which plays a unique role.

        Edit: ah, and I neglected to answer one of your questions: as you said, the way to tell whether something federated properly is to check the instance hosting the community you are sending it to. So e.g. to check a post to [email protected], I would visit Lemmy.World. If it is there but not on your home instance, then at least that particular message got sent, even if the copy from Lemmy.World to your instance got lost or fell behind in its processing backlog somehow.
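        That manual check can be scripted, too. Here is a rough sketch - the `/api/v3/resolve_object` endpoint and its `q` parameter are my reading of the Lemmy HTTP API (and some versions require authentication for it), so verify against your instance’s version before relying on it. The idea: ask both the community’s home instance and your own instance to resolve the post’s canonical URL, and compare the answers.

```python
# Sketch: spot-check whether two instances both know about a post,
# by asking each to resolve the post's canonical (ActivityPub) URL.

from urllib.parse import quote
from urllib.request import urlopen

def resolve_url(instance: str, ap_id: str) -> str:
    """Build the resolve_object query URL for a given instance and post URL."""
    return f"https://{instance}/api/v3/resolve_object?q={quote(ap_id, safe='')}"

def post_visible(instance: str, ap_id: str) -> bool:
    """True if the instance answers 200, i.e. it knows about the object."""
    try:
        with urlopen(resolve_url(instance, ap_id), timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

# Usage (hypothetical post URL):
# ap_id = "https://lemmy.world/post/330559"
# post_visible("lemmy.world", ap_id)      # the community's home instance
# post_visible("palaver.p3x.de", ap_id)   # your own instance
```

        If the home instance sees the post but yours does not, the outbound delivery worked and the copy back to your instance is what got lost or delayed.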