sometimes a dragon

  • 0 Posts
  • 122 Comments
Joined 8 months ago
Cake day: September 7th, 2024


  • Amazon publishes Generative AI Adoption Index and the results are something! And by “something” I mean “annoying”.

    I don’t know how seriously I should take the numbers, because it’s Amazon after all and they want to make money with this crap, but on the other hand they surveyed “senior IT decision-makers”… and my opinion on that crowd isn’t the highest either.

    Highlights:

    • Prioritizing spending on GenAI over spending on security. Yes, that is not going to cause problems at all. I do not see how this could go wrong.
    • The junk chart about “job roles with generative AI skills as a requirement”. What the fuck does that even mean, what is the skill? Do job interviews now include a section where you have to demonstrate promptfondling “skills”? (Also, the scale of the horizontal axis is wrong, but maybe no one noticed because they were so dazzled by the bars being suitcases for some reason.)
    • Cherry on top: one box to the left they list “limited understanding of generative AI skilling needs” as a barrier for “generative AI training”. So yeah…
    • “CAIO”. I hate that I just learned that.


  • If markets really rewarded the best, they would have rewarded Opera way more. (By which I mean the original Opera, up to version 12, and not the terrible chromium-based thing that has its name slapped on it today. Do not use that one, it’s bad.)

    Much more important for Chrome’s success than “being the best” (when has that ever been important in the tech industry?) was Google’s massive marketing campaign. Heck, back when Chrome was new, they even had large billboard ads for it around here, i.e. physical billboards in the real world. And “here” is a medium-sized city in Europe, not Silicon Valley or anything… I never saw any other web browser being advertised on freaking billboards.


  • I’m quite happy with my electric razor though ;) But yeah, single-use plastic products, with their implicit “subscription”-like business model, are a good analogy.

    I also don’t expect that Gen-“AI” will go away entirely anymore, it’s too useful for generating low-quality crap for e.g. spam and disinformation and similar purposes. I also dread the thought that when I buy a translated book now, I won’t know how much of it was actually translated by a person.

    However, I still have hope left that it will eventually fade into background noise. Like how cryptocurrency still exists now, but at least we don’t have to hear anymore about how “NFTs are the future of art” (just remember what a common theme that was for a while, pretty recently). Likewise I think that “AI is the future of <creative thing>” will eventually fade away.

    And some people are already creating and spreading little “human made” seals that one can attach to projects, I hope that catches on, like labeling of food products. And not just in niches like open source software (where I’ve seen it so far), but widely across all kinds of creative things, like book translations and music and so on. I can hope, right?

    Once the big hype is over, when the bubble has burst, the absolutely enormous costs of running all the server farms will have to be passed on to the product-making companies, and they will have to further pass them on to their users. As a result I think that most of these “AI” “features” will be pulled from most products, because who’s really willing to pay for them? And I don’t expect that it will become cheap soon. In their desperate attempts to make their “AI” perform “better”, the companies are currently cranking up the usage of compute power to an ever-higher degree, because they’re otherwise out of ideas for how to improve anything about it. And from what I hear (out of principle I never use this stuff myself), the small models which one could run locally just aren’t very good (not that the big ones are “good”…). (However, as written above, they will always be good enough for spam/disinformation and such where quality doesn’t matter.)

    So I don’t believe that this will be like the 80s or 90s, where one could develop fancy big software with the expectation that within a few years even the cheap entry-level machines will be fast enough for it. That kind of performance progress stopped long ago. I expect that this will stay really expensive for the foreseeable future, at least for the “better” models. And then, maybe, with most of this crap pulled out of our tools for plain and simple reasons of “cost”, together with the collapse of the hype around it, we can go back to this being mostly background noise.

    Yeah, I’ve always been kind of a hopeless optimist…


  • On the (slim) upside, it’s an opportunity to ditch Google, and maybe it will sooner or later break their monopoly position. I switched my main search engine to Ecosia a while ago, I think it uses Bing underneath (meh), but presumably it’s more privacy friendly than Google (or Bing directly). I’ve made numerous such attempts over the years already to get away from Google, but always returned, because the search results were just so much better (especially for non-English stuff). But now Google has gotten so much worse that it has almost created an equilibrium… sometimes it’s still useful and better, but not that often anymore. So I rarely go to Google now, not because the others got better, but because Google got so much worse.


  • It really sucks so much how many coders embrace it. At my work, there is the looming introduction of code LLMs very soon, and I’m anxious to learn how many of my colleagues will happily use them, and the consequences it will have for me to deal with the results (and generally, how it will make me feel to work in an environment where these tools are embraced). I was hoping that the corporate bureaucracy would be slow enough that the AI bubble collapses before the tools were approved for use, but unfortunately management put a lot of pressure behind it and it all went faster than expected :(