I just had a Thought. Obviously, we can’t simply pass a law banning #X specifically from operating in the #EuropeanUnion. It would set a terrible precedent, likely wouldn’t hold up in court, and wouldn’t be very effective, since X could just rebrand itself or something.

But we could introduce, say, the “Social Media Transparency Act”, which would require all social media systems operating in the European Union to either use a strictly chronological timeline or publish the precise algorithm they use to show content to their users. And if there are any variables specific to the user, their profile, their region, and so forth, then users must be able to request the values of these parameters as they apply to them at any time.
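To make the “request the values of these parameters” part concrete, here is a rough sketch of what a parameter-disclosure response could look like (every name below is invented purely for illustration; no real platform or draft law is being quoted):

```python
# Hypothetical sketch only: the class and field names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class RankingParameterDisclosure:
    """User-specific inputs to a ranking algorithm, as a platform might have to
    disclose them to that user on request."""
    user_handle: str
    region: str                                # e.g. a regional weighting bucket
    inferred_interests: list[str]              # topics the ranker has attached to this profile
    engagement_weights: dict[str, float] = field(default_factory=dict)


# What one user's disclosure request might return:
disclosure = RankingParameterDisclosure(
    user_handle="@alice@example.social",
    region="EU-DE",
    inferred_interests=["cycling", "open source"],
    engagement_weights={"reply": 1.8, "reshare": 2.5, "like": 1.0},
)
print(disclosure)
```

The exact shape wouldn’t matter much; the point is that every user-specific input would have to be nameable and retrievable on demand.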

And if any social media companies in the European Union do not comply… well, then we can fine and/or ban them!

Does this sound like a plan?

#SocialMedia #Fediverse

  • Emilis 🇺🇦@fosstodon.org
    18 days ago

    @[email protected] I’ve had the same thought for some time.

    I would leave the choice of algorithm / no algorithm to the user.

    Make it explicit and non-default, like the consent agreements for direct marketing. Penalize dark UX patterns.
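    A minimal sketch of that opt-in idea, assuming a hypothetical preference flag (all names invented for illustration): personalization stays off unless the user has explicitly switched it on.

    ```python
    # Hypothetical sketch: algorithmic ranking is opt-in and off by default,
    # in the spirit of direct-marketing consent rules. All names are invented.
    from dataclasses import dataclass


    @dataclass
    class FeedPreferences:
        # Non-default: the user must actively flip this; never a pre-ticked box.
        algorithmic_ranking_opt_in: bool = False


    def feed_mode(prefs: FeedPreferences) -> str:
        return "algorithmic" if prefs.algorithmic_ranking_opt_in else "chronological"


    print(feed_mode(FeedPreferences()))                                   # chronological
    print(feed_mode(FeedPreferences(algorithmic_ranking_opt_in=True)))    # algorithmic
    ```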

    The algorithms distort the view of reality for masses of people. This will never end well, no matter who controls the algorithm.

    Maybe the legal reasoning could be built around the manipulation of communication between people and their friends.

  • Anomnomnomaly BSC SSC@beige.party
    19 days ago

    @[email protected]

    I’ve been advocating for many years now to reclassify these platforms as ‘publishers’ rather than ‘conduits’… they allow others to publish on their platforms… the law needs to reflect that they are liable for what they ‘allow’ to be published.

    If you make them liable, both financially and criminally… you’d be very surprised at how quickly they’d act to stop hate, lies, and crime from being the majority of content on their platforms.

    All they care about is money, and that’s where you cut their throats… make it unprofitable for them to earn a penny from lies, hate, and criminal content… make them liable for the scammers that con people out of money, with minimum fines for allowing posts that contain lies and hate… say 10k per share and 5k per like.

    Have a designated account for each political party… only that account is allowed to post for the party. Ban third-party political ads… it has to come from the party and only through its account… everything else blocked.

    These are just thoughts, of course… better people than me could work out the finer points… but the whole idea revolves around making them culpable for what they have turned a blind eye to for the last 20 years under the guise of ‘we’re doing all we can’, which in practice means doing precisely nothing.

  • sobroquet@infosec.exchange
    18 days ago

    @[email protected]

    It is a plan, an idealistic plan.

    The technical dilemma is that the algorithm would be construed as intellectual property, thereby colliding with existing laws protecting copyrighted works.

    Under the EU Digital Services Act, the EU, like Australia, is regularly fining X, which has become a sewer of hate, Nazism, antisemitism, misinformation, and disinformation. Mr. Musk is clearly a nefarious psychopath.

    The Undermining of America
    The Dangerous Case of Elon Musk
    (fascist - saboteur - eco-terrorist)
    Authoritarian Thought Police
    https://tinyurl.com/yem9hahe