Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • swlabr@awful.systems · 7 hours ago

      David Gborie! One of my fave podcasters and podcast guests. Adding this to the playlist

  • sc_griffith@awful.systems · 17 hours ago

    was discussing a miserable AI-related gig job I tried out with my therapist. doomerism came up, I was forced to explain rationalism to him. I would prefer that all topics I have ever talked to any of you about be irrelevant to my therapy sessions

    • BlueMonday1984@awful.systems (OP) · 16 hours ago

      I’ve been beating this dead horse for a while (since July of last year AFAIK), but it’s clear to me that the AI bubble’s done horrendous damage to the public image of artificial intelligence as a whole.

      Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst - a trend I expect will last for a good while after the bubble pops.

      To beat a slightly younger dead horse, I also anticipate AI as a concept will die thanks to this bubble, with its utterly toxic optics as a major reason why. With relentless slop, nonstop hallucinations and miscellaneous humiliation (re)defining how the public views and conceptualises AI, I expect any future AI systems will be viewed as pale imitations of human intelligence, theft-machines powered by theft, or a combination of the two.

  • BigMuffin69@awful.systems · 20 hours ago (edited)

    Tech stonks continuing to crater 🫧 🫧 🫧

    I’m sorry for your 401Ks, but I’d pay any price to watch these fuckers lose.

    spoiler

    (mods let me know if this aint it)

    • Soyweiser@awful.systems · 5 hours ago

      This kind of stuff, which seems to hit a lot harder than the anti-Trump stuff, makes me feel that a Vance presidency would implode quite quickly due to other MAGA toadies trying to backstab toadkid here.

    • David Gerard@awful.systems (mod) · 8 hours ago

      it’s gonna be a massive disaster across the wider economy, and - and this is key - absolutely everyone saw this coming a year ago if not two

      • Soyweiser@awful.systems · 5 hours ago

        For me it feels like we’re in the pre-pop phase of the AI/cryptocurrency bubble. With luck, the MAGA government’s infusions into both will fail and actually quicken the downfall (Musk/Trump like it, so it must be iffy). Sadly it will not be like the downfall of Enron, as this is all very distributed, so I fear how much will be pulled under.

  • BurgersMcSlopshot@awful.systems · 1 day ago

    So I enjoy the Garbage Day newsletter, but this episode of Panic World with Casey Newton is just painful, in that Casey just spits out unproven assertions.

      • YourNetworkIsHaunted@awful.systems · 23 hours ago

        Was he the one who wrote that awful “real and dangerous vs fake and sucks” piece? The one that pretended that critihype was actually less common than actual questions about utility and value?

        • BurgersMcSlopshot@awful.systems · 22 hours ago

          Yeah, and a lot of the answers he gave seemed to originate from that point.

          One particularly grating thing was saying that the left needs to embrace AI to fight fascism because “fascism embraced AI and they are doing well!”, which is just so grating a conclusion to jump to.

  • BlueMonday1984@awful.systems (OP) · 1 day ago

    New-ish thread from Baldur Bjarnason:

    Wrote this back on the mansplainiverse (mastodon):

    It’s understandable that coders feel conflicted about LLMs even if you assume the tech works as promised, because they’ve just changed jobs from thoughtful problem-solving to babysitting

    In the long run, a babysitter gets paid much less than an expert

    What people don’t get is that when it comes to LLMs and software dev, critics like me are the optimists. The future where copilots and coding agents work as promised for programming is one where software development ceases to be a career. This is not the kind of automation that increases employment

    A future where the fundamental issues with LLMs lead them to cause more problems than they solve, resulting in much of it being rolled back after the “AI” financial bubble pops, is the least bad future for dev as a career. It’s the one future where that career still exists

    Because monitoring automation is a low-wage activity and an industry dominated by that kind of automation requires much much fewer workers that are all paid much much less than one that’s fundamentally built on expertise.

    Anyways, here’s my sidenote:

    To continue a train of thought Baldur indirectly started, the rise of LLMs and their impact on coding is likely gonna wipe a significant amount of prestige off of software dev as a profession, no matter how it shakes out:

    • If LLMs worked as advertised, then they’d effectively kill software dev as a profession as Baldur noted, wiping out whatever prestige it had in the process
    • If LLMs don’t work as advertised, then software dev as a profession gets a massive amount of egg on its face, as AI’s widespread costs to artists, the environment, etcetera end up being all for nothing.

    • gerikson@awful.systems · 1 day ago

      This is classic labor busting. If the relatively expensive, hard-to-train and hard-to-recruit software engineers can be replaced by cheaper labor, of course employers will do so.

      • YourNetworkIsHaunted@awful.systems · 24 hours ago

        I feel like this primarily will end up creating opportunities in the blackhat and greyhat spaces as LLM-generated software and configurations open and replicate vulnerabilities and insecure design patterns while simultaneously creating a wider class of unemployed or underemployed ex-developers with the skills to exploit them.

  • gerikson@awful.systems · 1 day ago (edited)

    A hackernews doesn’t think that LLMs will replace software engineers, but they will replace structural engineers:

    https://news.ycombinator.com/item?id=43317725

    The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don’t crumble to dust or get built without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.

    Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy:

    At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.

    Gotta reaffirm the dogma!

    • froztbyte@awful.systems · 1 day ago (edited)

      but A LOT of engineering has a very very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what’s possible and the best practices

      days since orangeposter (incorrectly) argued with certainty, from 3 seconds of thought, as to what they think is involved in a process: [0]

      it’s so fucking frustrating to know how easy this bullshit is to see if you know a slight bit of anything, and doubly frustrating as to how much of the software world is this thinking. I know it’s nothing particularly new and that our industry has been doing this for years, but scream

      • V0ldek@awful.systems · 1 day ago (edited)

        You basically just need to know a lot of rules / tables and how things interact to know what’s possible and the best practices

        And to be a programmer you basically just need to know a lot of languages / libraries and how things interact, really easy, barely an inconvenience.

        The actual irony is that this is more true of programming than of any other engineering profession, since programmers uniquely are not held to any standards whatsoever, so you can have both skilled engineers and complete buffoons coexist, often within the same office. There should be a Programmers’ Guild or something where the experienced master would just slap you and throw you out if you tried something idiotic like using LLMs for code generation.

  • Architeuthis@awful.systems · 1 day ago (edited)

    Huggingface cofounder pushes against LLM hype, really softly. Not especially worth reading except to wonder if high profile skepticism pieces indicate a vibe shift that can’t come soon enough. On the plus side it’s kind of short.

    The gist is that you can’t go from a text synthesizer to superintelligence, framed as how a straight-A student who’s really good at learning the curriculum at the teacher’s direction can’t really be extrapolated to an Einstein-type, think-outside-the-box genius.

    The word ‘hallucination’ never appears once in the text.

    • YourNetworkIsHaunted@awful.systems · 22 hours ago (edited)

      I actually like the argument here, and it’s nice to see it framed in a new way that might avoid tripping the sneer detectors on people inside or on the edges of the bubble. It’s like I’ve said several times here, machine learning and AI are legitimately very good at pattern recognition and reproduction, to the point where a lot of the problems (including the confabulations of LLMs) are based on identifying and reproducing the wrong pattern from the training data set rather than whatever aspect of the real world it was expected to derive from that data. But even granting that, there’s a whole world of cognitive processes that can be imitated but not replicated by a pattern-reproducer. Given the industrial model of education we’ve introduced, a straight-A student is largely a really good pattern-reproducer, better than any extant LLM, while the sort of work that pushes the boundaries of science forward relies on entirely different processes.

    • maol@awful.systems · 1 day ago

      These chumps are a disgrace to Harry Potter fans, and I say that in full knowledge of how embarrassing Harry Potter fans can be!!!

    • bitofhope@awful.systems · 1 day ago

      While not exactly celebration worthy and certainly not worth a tenth anniversary celebration, you could argue HPMoR finally coming to a fucking end by whatever means was a somewhat happy occasion.

        • bitofhope@awful.systems · 1 day ago

          It had an actual ending. Not a satisfying one, even by the standards of the rest of the fic, and I remember finding the treatment of Hermione kinda distasteful, but it wasn’t even close to the worst part of the entire story. 3/10.

        • froztbyte@awful.systems · 1 day ago

          the author decided to stop publishing texts but instead lecture tirade preach unto the thronging youths directly

          in person it’s easier to do sketchy shit that won’t immediately get caught by a wider audience, you see?

    • zogwarg@awful.systems · 2 days ago

      Also disturbing that OP’s chosen handle, Screwtape, is that of a fictional demon and senior tempter. A bit apropos.

      • Soyweiser@awful.systems · 1 day ago

        I’d assume that is very intentional; nominative determinism is one of those things a lot of LW-style people like. (Scott Alexander being a big one, which has some really iffy implications (which I fully think is a coincidence, btw).)

      • aio@awful.systems · 1 day ago

        are we really clutching our pearls because someone named themselves after a demon

        • YourNetworkIsHaunted@awful.systems · 1 day ago

          I wouldn’t say pearl-clutching as much as eye-rolling. Though we do dip into full BEC mode sometimes and the stubsack in particular can swing wildly between moral condemnation, intellectual critique, and calling out straight-up cringe.

              • Amoeba_Girl@awful.systems · 6 hours ago

                You know, I realise I have no idea what a Bose–Einstein condensate is and I can’t help but picture it as a cup of blue espresso.

            • YourNetworkIsHaunted@awful.systems · 16 hours ago

              “Bitch Eating Crackers”, as in “God I hate her, look at that bitch over there eating crackers.” When you get sufficiently pissed off at someone that literally anything they do makes you mad.

              See also how we sometimes swing pretty wildly between moral condemnation of actions and patterns that objectively make the world a strictly worse place and aesthetic critique of shit that, at the end of the day, is still probably less cringe than I was in high school.