I found this link aggregator that someone made as a personal project, and they had an exciting idea for a sorting algorithm whose basic principle is the following:

  1. Upvotes show you more links from other people who have upvoted that content
  2. Downvotes show you fewer links from other people who have upvoted that content

I thought the idea was interesting and wondered if something similar could be implemented in the fediverse.

They currently don’t have plans to open-source their work, which is fine, but I think it shouldn’t be too hard to replicate something similar here, right?
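To make it concrete, here is a minimal sketch of how I read those two rules, assuming we just have each user’s sets of upvoted and downvoted links. This is only my own interpretation, not their actual code, and all of the names are hypothetical:

```python
from collections import defaultdict

def rank_links_for(me, upvotes, downvotes):
    """upvotes/downvotes: dicts mapping each user to the set of links they voted on."""
    # Rule 1: users whose upvoted links I also upvoted get positive weight.
    # Rule 2: users whose upvoted links I downvoted get negative weight.
    weight = {}
    for other in upvotes:
        if other == me:
            continue
        weight[other] = (len(upvotes[me] & upvotes[other])
                         - len(downvotes[me] & upvotes[other]))

    # A candidate link is scored by the weights of the users who upvoted it.
    scores = defaultdict(float)
    for other, their_ups in upvotes.items():
        if other == me:
            continue
        for link in their_ups - upvotes[me] - downvotes[me]:
            scores[link] += weight[other]
    return sorted(scores, key=scores.get, reverse=True)
```

Even this naive version walks every other user’s votes on each ranking, which is where the scaling concerns in the replies come in.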

They have the option to try this out in guest mode, where you don’t have to sign in, and it already seems to be giving me relevant content after only 3 upvotes.

There is more information on their website if you guys are interested.

Edit: Changed title to something more informative.

  • hissing meerkat · 39 upvotes · 11 months ago

    No, not as simply as that. That’s the basic idea of the recommendation systems that were common in the 1990s. The algorithm requires a tremendous amount of dimensionality reduction to work at scale. In that simple description it would need a trillion weights to compare the preferences of a million users to a million other users. If you reduce it to some standard 100-1000ish dimensions of preference it becomes feasible, but at the low end it only contains about as much information as your own choices about which communities to subscribe to or block (though it obviously has a much lower barrier to entry).
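    To put rough numbers on that (my own back-of-envelope, not from the comment): comparing every user to every other user needs a value per pair, while a reduced representation only needs a short vector per user.

    ```python
    users = 1_000_000

    # One weight per pair of users vs. a 100-dimensional preference vector per user.
    pairwise_weights = users * users   # ~1e12 values, the "trillion weights"
    embedded_values  = users * 100     # ~1e8 values after dimensionality reduction

    print(f"{pairwise_weights:.0e} vs {embedded_values:.0e}")  # 1e+12 vs 1e+08
    ```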

    There’s another important aspect of learning that the simple description leaves out, which is exploration. It will quickly start showing you things you reliably like, but it won’t experiment with things it doesn’t yet know whether you’d like in order to find out.

    • Danterious@lemmy.dbzer0.com (OP) · 4 upvotes · 11 months ago

      There’s another important aspect of learning that the simple description leaves out, which is exploration. It will quickly start showing you things you reliably like, but it won’t experiment with things it doesn’t yet know whether you’d like in order to find out.

      Why would this be the case? It shows you stuff that people with similar tastes to yours have liked, and since people have diverse interests, wouldn’t it be likely that the people who like one thing you like also like other things you hadn’t known about, which leads to a form of guided exploration?

      • hissing meerkat · 15 upvotes · 11 months ago

        There are two problems. The first is that those other things you might like will be rated lower than things you appear to certainly like. That’s the “easy” problem, and it has solutions where a learning agent is forced to prefer exploring new options over sticking to its known preferences to some degree. It becomes difficult when you no longer know what has or hasn’t been explored, either because of an abstraction like dimensionality reduction or because of a practical limitation: a human can’t explore all of Lemmy the way a robot explores a maze.
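        The “forced to prefer exploring” part is often done with something like an ε-greedy rule; here is a minimal sketch under that assumption (hypothetical names, not any particular system):

        ```python
        import random

        def pick_link(candidates, predicted_score, epsilon=0.1):
            """Epsilon-greedy: mostly show the best prediction, sometimes explore blindly."""
            if random.random() < epsilon:
                return random.choice(candidates)          # forced exploration
            return max(candidates, key=predicted_score)   # exploitation
        ```

        The hard case described above is exactly when you can no longer tell which candidates are actually unexplored, so the random branch stops being a meaningful form of exploration.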

        The second is that you might have preferences that other people who like the same things you’ve already indicated a taste for tend to dislike. For example, there may be other people who like both boba and coffee, but people who like only one of the two tend to dislike the other. If you happen to encounter boba first, then coffee will be predicted to be disliked, based on the overall preferences of the people who agree with your boba preference.
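        A toy worked example of that effect, with entirely made-up votes (+1 upvote, −1 downvote, 0 unseen):

        ```python
        # Made-up votes: +1 = upvote, -1 = downvote, 0 = never seen.
        votes = {
            "you":   {"boba": +1, "coffee": 0},
            "alice": {"boba": +1, "coffee": -1},
            "bob":   {"boba": +1, "coffee": -1},
            "carol": {"boba": -1, "coffee": +1},  # the one coffee lover disagrees with you on boba
        }

        def agreement(u, v):
            # Agreement counted only over items both users actually voted on.
            return sum(votes[u][i] * votes[v][i] for i in votes[u] if votes[u][i] and votes[v][i])

        # Predict your coffee taste from the others' coffee votes, weighted by agreement with you.
        predicted_coffee = sum(agreement("you", u) * votes[u]["coffee"] for u in votes if u != "you")
        print(predicted_coffee)  # -3: coffee is predicted as disliked before you ever try it
        ```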

        • Danterious@lemmy.dbzer0.com (OP) · 3 upvotes, 1 downvote · 11 months ago

          If you happen to encounter boba first, then coffee will be predicted to be disliked, based on the overall preferences of the people who agree with your boba preference.

          With this specific algorithm, I don’t necessarily think that would be the case. It only shows you fewer links from people who like the links that you dislike. It doesn’t show you fewer links based on what people who are similar to you dislike, which is what you seem to be describing.
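          For what it’s worth, here is how I’d write down the two rules being contrasted (my own reading, with hypothetical names):

          ```python
          def penalty_as_posted(link, me, upvotes, downvotes):
              # Rule from the post: demote a link when its upvoters are people whose
              # *other* upvoted links I have downvoted.
              return sum(len(downvotes[me] & upvotes[u])
                         for u in upvotes if u != me and link in upvotes[u])

          def penalty_neighbourhood(link, similar_users, downvotes):
              # Rule the parent comment describes: demote a link when people with
              # tastes similar to mine have downvoted it themselves.
              return sum(1 for u in similar_users if link in downvotes[u])
          ```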

          Also, it doesn’t have to be this specific algorithm that we implement; I just thought the idea was unique, so I wanted to share it anyway.

          It seems to be working well enough for me now so I plan to keep using it and see what it’s like.

          • hissing meerkat · 3 upvotes · 11 months ago

            Whether or not you use downvotes doesn’t really matter.

            If what you like is well represented by the boba drinkers, and the boba drinkers disproportionately don’t like coffee, then coffee will be disproportionately excluded from the top of your results. Unless you explore deeper, the coffee results will be pushed to the bottom. And any that do reach the top will have gotten there through broad appeal, so they will contribute very little to the conclusion that you like coffee.

            If you don’t let the math effectively push away the things that are disliked by the people who like similar things to you, then everything saturates at maximum appeal and the whole system does nothing.
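            A toy illustration of that saturation (my own simulation, not from the thread): if feedback can only ever add to a score, every link eventually hits the cap and the ordering tells you nothing.

            ```python
            import random

            # Positive-only feedback: scores can only go up, so everything saturates.
            scores = {f"link{i}": 0.0 for i in range(5)}
            for _ in range(10_000):
                link = random.choice(list(scores))
                scores[link] = min(1.0, scores[link] + 0.01)  # an upvote only ever adds

            print(scores)  # every link ends at 1.0, so the ranking no longer discriminates
            ```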