Does ActivityPub send those to other instances, or does ActivityPub only send the original post and the rest (upvotes, downvotes, replies) are stored only on the original server where the post was made?
Since you’ve gotten enough real answers, I’ll just remind you that upvotes are stored in the balls.
Truth. /thread
All of those are replicated to all servers.
Does the sub’s instance keep the master copy that would decide if the count goes up or down? Or is it the post user’s instance? Or the comment user’s instance? Or the upvoter’s instance?
There is no master copy. All instances keep posts/comments/votes individually on their own.
It’s kinda bad TBH. My server has 195 users but I have a database as big as lemmy.world. IDK how it will scale in the future.
How it will scale in the future: it just doesn’t
Posts and comments are federated (synchronised). Upvotes are actually a bit of a fudge: from an ActivityPub (e.g. Mastodon) perspective they are really ‘Favourites’, and yes, favourites are also federated. Downvotes don’t exist in ActivityPub and, as a result, they do not federate between instances. At least that is my understanding.

Downvotes do federate, but it’s done with protocol extensions. So the downvotes won’t federate to Mastodon, but they do for Lemmy and I think Kbin too.

Votes federate with standard `Like` and `Dislike` activities, which are part of ActivityPub. It’s just that some platforms like Mastodon can’t handle Dislikes.

Can’t handle them by choice, I’d guess. Given the format of individuals following individuals rather than topics in communities, it doesn’t make much sense for a person to follow someone only to downvote/dislike their comments.
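For anyone curious what that looks like on the wire, a federated vote is just a small JSON activity delivered to the other instance. A minimal sketch (the URLs here are invented, and real activities carry more fields than this):

```python
# Minimal sketch of federated votes as ActivityPub activities, written as
# Python dicts. The URLs are invented placeholders; real activities carry
# more fields (id, audience, signatures, ...) than shown here.
upvote = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",                                       # standard ActivityStreams type
    "actor": "https://example-instance.social/u/alice",   # who voted
    "object": "https://other-instance.social/post/123",   # what they voted on
}

downvote = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Dislike",  # also standard, but platforms like Mastodon ignore it
    "actor": "https://example-instance.social/u/alice",
    "object": "https://other-instance.social/post/123",
}
```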
I think kbin doesn’t sync downvotes.
Honestly votes being federated seems like a bad idea imo. Would be easy to spin up an instance with thousands of fake users and manipulate posts.
Fediverse is already big enough that it could be lucrative to do so.
So then everyone just blacklists that instance. If the problem is really severe, we move to whitelisting.
It’s not hard to identify when someone is doing this.
It’s not hard to identify if you’re looking for it, if they only use one instance, if they aren’t subtle about it, and if they’re only boosting a specific company instead of a variety of products and ideas.
Vote manipulation is hard enough to detect on Reddit where they have visibility top to bottom. I think this will become a major issue in the future.
This is on top of the already significant scaling issues votes are causing.
Other instances can cache the total count for historical reasons, to preserve lost instance vote counts, but keeping the full ledger is going to be a serious barrier to entry for hosters and a security (manipulation) issue.
A whitelist defeats the decentralisation and openness of a federated system.
I think you’re mistaken in your assumption it would be easy to identify malicious instances. Bots are notoriously difficult to fight, every time you block one method another workaround will appear.
> I think you’re mistaken in your assumption it would be easy to identify malicious instances. Bots are notoriously difficult to fight, every time you block one method another workaround will appear.
I run a large instance and I look around in the DB occasionally when users complain, so I’m pretty familiar with what’s in there.
> A whitelist defeats the decentralisation and openness of a federated system.
True, but assholes are assholes and sometimes freedom and assholery don’t mix well.
Would it change anything besides their technique?
They almost certainly already have vote manipulation tools for reddit that work via browser automation, because someone offered me money to build one 10 years ago.
Those tools and a handful of accounts+vpns would already be borderline undetectable without the access needed to see that 25 accounts always voted the same way.
At least on Lemmy, you have that access. Reddit not only makes zero effort to prevent it, they actively obfuscate the information needed to spot it.
I disagree. Reddit openly admitted to manipulating its upvote count to “deter bots”, especially since it became apparent that the front page of reddit became a very lucrative position to be if you were promoting a product, service, or ideology. In the post API world of Reddit, it’s more apparent than ever that votes are being manipulated to give users an illusion of activity that isn’t actually there.
In fact, manipulating Reddit was always as easy as paying someone to upvote a post a few hundred times within an hour of posting, which in turn boosted it in the algorithm that ranked leading posts by rate of activity rather than actual upvotes.
On the fediverse, being on the front page of an instance isn’t nearly as lucrative, and being on the front of ALL of them isn’t feasible. Even if one instance is manipulated, federation makes that effort null in seconds.
The fact that these services aren’t monetised, are volunteer-funded, and don’t have the economic or advertising power that Reddit does really makes it harder for votes to be manipulated, let alone for someone to want to manipulate the service.
Lemmy and Mastodon have issues with moderation but at worst the manipulation risk is nowhere near as bad as reddit. At best, it looks like corporate manipulation of social media is all but nonexistent on here. Let’s celebrate that
That’s fair
What if someone sets up an instance, makes a post and manipulates the upvotes? Just give it a million upvotes. That would break the whole system…
Or a bit more subtle, every upvote is multiplied by 10.
Individual votes are federated not by number but by user, so you’d have to set up fake users and then federate a vote from each of them.
That makes it rather easy to detect and identify and get that particular instance defederated.
Votes will still go from origin instance -> community instance -> other instances, but if the other instance has defederated the origin instance then the vote simply gets dropped.
If you use kbin you can even see who has made each upvote, so yes, it’s easy to then look for patterns of voting together, and also at the profiles to see if the accounts look like real people etc.
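Since every vote arrives attributed to an individual account, a crude check for coordinated voting can be done by just grouping votes by the voter’s home instance. A toy sketch, not anything Lemmy or kbin actually ship (the data shape is invented; in practice it would come out of the instance’s own database):

```python
# Toy sketch: flag instances whose accounts vote in lockstep on very few posts.
from collections import defaultdict
from urllib.parse import urlparse

votes = [
    # (voter actor URL, post id, score) -- invented example data
    ("https://sus-instance.example/u/user1", 42, 1),
    ("https://sus-instance.example/u/user2", 42, 1),
    ("https://normal.example/u/bob", 42, -1),
]

by_instance = defaultdict(list)
for actor, post_id, score in votes:
    by_instance[urlparse(actor).hostname].append((post_id, score))

for host, instance_votes in by_instance.items():
    posts = {post_id for post_id, _ in instance_votes}
    # Lots of votes concentrated on a handful of posts, all from one freshly
    # registered domain, is the obvious "spin up an instance" pattern.
    if len(instance_votes) >= 100 and len(posts) <= 3:
        print(f"{host}: {len(instance_votes)} votes on only {len(posts)} posts - worth a look")
```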
So the cost of getting a post on the front page of every Lemmy instance is the cost of registering a new domain.
Until a mod catches it and reports it to the admins, yeah.
Lemmy isn’t the absolute most well thought out platform in many regards, I don’t think anyone expected Reddit to actively go hostile and drive such an amount of users to Lemmy.
> Lemmy isn’t the absolute most well thought out platform in many regards, I don’t think anyone expected Reddit to actively go hostile and drive such an amount of users to Lemmy.
Def not, I’d say Lemmy was at least a few years out from being stable and on par with Reddit as far as software goes. There are still fundamental questions and problems that need to be answered and solved.
I say was because Reddit going hostile and driving such a large influx of users is a bit of a double edged sword. On one hand it was just barely ready for more active use, but not to scale.
OTOH, the large influx is also driving accelerated development. Lemmy was years out before, but what about now, now that it’s getting all this focus and drive to get things done? That I don’t know, but I’d say it’s moving much faster than it was before.
Technically votes are public; only the UI is hiding them. Which should be resolved, one way or another.
Edit: there was a post about that here a few weeks ago. I understand that this isn’t a real answer to your question. Maybe you can find it with these hints.
Edit2: Found it. Here you’ll find more. https://mylemmy.win/post/89871
Meaning admins are purposefully allowing other people to brigade others with alts.
Lemmy fucking blows.
how so?
Lemmy admins can see who is using alts to brigade others and ban them, yet they clearly don’t. They allow all kinds of skeevy bullshit from everyone – it took months of pressure to get them to even do so much as ban obvious problem instances like Hexbear.
They do it because they are selfish assholes who only care about power, and everyone just accepted they’re the dominant class in our little society here and that the big name instances like .world and .ml are perfectly fine with controlling the majority of content on the platform. It was never what was intended for federation in the first place, yet here we are.
Lemmy sucks as a platform because it’s not programmed to circumvent people’s base animalistic hierarchical nature, and that is its problem.
The platform should automatically track for obvious alt and bot accounts and ban them.
It really should have a toggleable hate filter that automatically bans people for using certain hate terms.
Accounts need to be tied to user machines so bans are actually halfway enforceable.
The platform shouldn’t really require mods or admins; an AI should monitor interactions and shut down arguments or antagonistic encounters outright.
The admins should be acting fairly and impartially.
But none of that is happening because no admin is participating in good faith, they’re just looking to ensure they can do what they want without consequences, and so are the mods who have claimed almost every old subreddit name across instances under a few select usernames so they could have power over others and win confrontations.
And people can get away with power tripping because the platform wasn’t designed to take the fact that people do that into account. Any platform or social system that is not built on the first principle that humanity is inherently evil is bound to fail, and look what happened here. Perfect example.
And you trust literally any other social media website’s impressions count?
Where is my karma stored? ^/s
It’s under my bed. You’ll have to pay me $10,000 to get it back.
Vex had too much karma, now it backfires and your karma is under his bed now instead.
The mod log at the bottom of any Lemmy webpage, I think.
From what I know, the information gets sent to the post OP’s host instance and then federates again to everyone.
deleted by creator
Haven’t worked with AP yet, but as a webdev I’m certain it’s original server only. Syncing upvotes between nodes would be an insane data volume and one hell to properly keep in sync to begin with.
They are synced. There is an insane data volume, yes. It is hell.
no way, that’s a massive oof o.O
Yeah. A lot of hand-wringing has gone on about it, e.g. https://gist.github.com/jdarcy/60107fe4e653819138396257df302eef. I’ll post this and then show you a video of server activity that results.
Demo post
Here is a screencast of what happens to my 2 core server when I post something - https://kglitch.social/activitypub_cpu_and_net.mp4.
I run a single user instance, more or less, so there is little chance of some other user causing this load.
Some of it will be due to the way Kbin is built but I believe any software using ActivityPub to communicate will run into similar issues sooner or later, especially with network traffic usage.
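For context on why a single post causes that much traffic: ActivityPub delivery is push-based, so the post itself and every vote, reply and boost that follows is its own HTTP request between inboxes. Stripped of signing, queues and retries, the fan-out is conceptually just this naive sketch (not Kbin’s actual code):

```python
# Naive sketch of ActivityPub delivery fan-out. Real servers sign every
# request, use shared inboxes, and queue/batch/retry -- but the underlying
# cost is still roughly one HTTP POST per interested instance per activity.
import json
import urllib.request

def deliver(activity: dict, inbox_urls: list[str]) -> None:
    body = json.dumps(activity).encode()
    for inbox in inbox_urls:  # hundreds of subscribed instances -> hundreds of requests
        req = urllib.request.Request(
            inbox,
            data=body,
            headers={"Content-Type": "application/activity+json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # no signing, timeouts or retries; sketch only
```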
Damn that’s crazy. Thanks for the demo
What the fuckkk haha this is crazy. Hold on. I’m testing it on my instance now, let’s see if Lemmy acts differently
Okay, didn’t happen to me at all :(
Completely off-topic, but what’s that dope af htop replacement?
Looks like btop
My instance has 800 users, is 4 months old, and the database only is over 30GB. It is an insane amount of data.
How much RAM does your server have to handle a 30 GB database?
I’m a bad example. I haven’t properly tuned the settings, currently RAM will grow to whatever is available.
I’m very lucky, the instance is running in a proxmox container alongside some other fediverse servers (run by others), on dedicated hardware in a datacentre. The sysadmin has basically thrown me plenty of spare resources since the other containers aren’t using them and RAM not used is wasted, so I’ve got 32GB allocated currently. I still need to restart once a week or that RAM gets used up and the database container crashes.
It’s been on my list of things to do for a while, try some different postgres configs, but I just haven’t got around to it.
I know a couple of months back lemmy.world were restarting every 30 mins so they didn’t use up all the RAM and crash. I presume some time and some lemmy updates later that’s no longer the case.
I know some smaller servers get away with 2gb of RAM, and we should be able to use a lot less than 32GB if I actually try to tune the postgres config.
There is a postgres command to show the size of each table. Most likely the bulk of it is from the activity tables, which can be cleared out to save space.
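If you want to check where the space goes on your own instance, something along these lines works against the Lemmy database (the connection string is a placeholder for your setup; `\dt+` in psql gives a similar overview):

```python
# Sketch: list the largest tables in a Lemmy database to see where the
# space is going. The connection string is a placeholder; adjust as needed.
import psycopg2

conn = psycopg2.connect("dbname=lemmy user=lemmy host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT relname,
               pg_size_pretty(pg_total_relation_size(relid)) AS total_size
        FROM pg_catalog.pg_statio_user_tables
        ORDER BY pg_total_relation_size(relid) DESC
        LIMIT 20;
    """)
    for table, size in cur.fetchall():
        print(f"{table:30} {size}")
```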
After the second-to-last update the database shrunk and I was under the impression there was some automatic removal happening. Was this not the case?
It’s helpful info for others but personally I’m not that worried about the database size. The size of the pictrs cache is much more of a concern, and as I understand it there isn’t an easy way to identify and remove cache images without accidentally taking out user image uploads.
Yes there is automatic removal so if you have enough disk space, no need to worry about it.
The pictrs storage only consists of uploads from local users, and thumbnails for both local and remote posts. Thumbnails for remote posts could theoretically be wiped and loaded from the other instance, but they shouldn’t take much space anyway.
> Yes there is automatic removal so if you have enough disk space, no need to worry about it.
What triggers this? My DB was about 30GB, then the update shrunk it down to 5GB, then it grew back to 30GB.
> The pictrs storage only consists of uploads from local users, and thumbnails for both local and remote posts. Thumbnails for remote posts could theoretically be wiped and loaded from the other instance, but they shouldn’t take much space anyway.

I’d be pretty confident that the 140GB of pictrs storage I have is mostly cache. There are occasionally users uploading images, but we don’t have that many active users; I’d be surprised if there was more than a few GB of image uploads in total out of that 140GB. We just aren’t that big of a server.
The pictrs volume also grows consistently at a little under 1GB per day. I just went and had a look, in the files directory there are 6 directories from today (the day only has a couple of hours left), and these sum to almost 700MB of images and almost 6000 files, or a little over 100KB each.
The instance has had just 27 active users today (though of course users not posting will still generate thumbnails).
While the cached images may be small, it adds up really quick.
As far as I can tell there is no cache pruning, as the cache goes up pretty consistently each day.
The activities table is cleared out automatically every week; items older than 3 months are deleted. During the update only a smaller number of rows was migrated, so the db was temporarily smaller. You can manually clear older items in `sent_activity` and `received_activity` to free more space.

Actually I’m wrong about images, it turns out that all remote images are mirrored locally in order to generate thumbnails. 0.19 will have an option to disable that. This could use more improvements; the whole image handling is rather confusing now.
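For anyone wanting to do that manual cleanup of `sent_activity` and `received_activity`, it is only a couple of DELETEs; note that the `published` column name below is an assumption from memory, so check the actual schema (and take a backup) before running anything like this:

```python
# Hedged sketch of the manual cleanup: delete activity rows older than three
# months. The table names come from the comment above; the `published`
# timestamp column is an assumption -- verify the schema and back up first.
import psycopg2

conn = psycopg2.connect("dbname=lemmy user=lemmy host=localhost")  # placeholder DSN
with conn, conn.cursor() as cur:
    for table in ("sent_activity", "received_activity"):
        cur.execute(
            f"DELETE FROM {table} WHERE published < now() - interval '3 months';"
        )
        print(f"{table}: deleted {cur.rowcount} rows")
```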
Thanks for the info! For performance reasons it would be nice to have a way to configure how long the cache is kept rather than disabling it completely, but I understand you probably have other priorities.
Would disabling the cache remove images cached up to that point?
[This comment has been deleted by an automated system]
Thanks, that’s very informative. How does this work, since ActivityPub can be used for other things, e.g. Mastodon? Do they just ignore any “Type” entries that they don’t support?
[This comment has been deleted by an automated system]
It does sync them, I can even query all of your votes on my local DB for every community my instance is tracking.
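For illustration, a query along these lines pulls a given user’s votes out of the local Lemmy database; the table and column names (`post_like`, `person`, `score`) are assumptions about the schema, so adjust them to whatever your version actually uses:

```python
# Sketch: pull every vote a given user has cast, as seen from the local
# Lemmy database. Table/column names (person, post_like, score) are
# assumptions about the schema -- adjust them to your Lemmy version.
import psycopg2

conn = psycopg2.connect("dbname=lemmy user=lemmy host=localhost")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT p.name, pl.post_id, pl.score
        FROM post_like AS pl
        JOIN person AS p ON p.id = pl.person_id
        WHERE p.name = %s
        ORDER BY pl.post_id;
    """, ("some_remote_user",))  # example username, not a real account
    for name, post_id, score in cur.fetchall():
        print(name, post_id, "+1" if score > 0 else "-1")
```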