Silicon Valley may pay imperfect virtual PhDs more than universities pay real ones.

  • stochastictrebuchet

    Hope the offering includes

    • Every task being reframed in terms of the PhD’s specific niche
    • Crippling imposter syndrome
    • Bonus academic tier: replace Extended Thinking with Grant Proposal Writing
    • Peppycito

      Grant Proposal Writing

      I’d subscribe for that.

  • intensely_human@lemm.ee

    This is the dumbest cultural micro-trend I’ve ever seen. Now we’re pretending that “PhD-level” has no meaning?

    It means having knowledge equivalent to a PhD in a field. That is so simple.

    The reason there’s so much resistance to conceptualizing anything in the AI field is that AI is scary and people would rather be in denial about its existence.

    • wizardbeard@lemmy.dbzer0.com

      It definitely means more than just that in this context. Being able to consistently communicate that knowledge in a usable format matters, and so does not regularly becoming delusional and outputting falsehoods.

      That second point is the biggest problem. LLMs have people pretending that the ultimate form of word association, done at an incomprehensible scale, will eventually become something comparable to logic and reasoning. Even newer models like DeepSeek that output “thinking” steps are almost entirely shadow play.

      There has been some minor success using text analysis to insert small expert systems (actual logic) into the middle of the text analysis -> text generation pipeline. But it turns out those systems have all the limitations and constraints they’ve always had (a narrow problem space; difficult to build and maintain; requiring actual subject matter experts to create), and putting them between multiple “lossy” steps (the analysis and generation of text) doesn’t help their efficacy.
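      The pipeline described above can be sketched in miniature. This is a hypothetical toy, not any real product’s architecture: a lossy keyword-extraction step stands in for text analysis, a tiny hand-written rule table stands in for the expert system, and a template stands in for text generation. All names and rules are invented for illustration.

      ```python
      import re

      def analyze(text):
          """Lossy 'text analysis': keep one recognized keyword, discard everything else."""
          match = re.search(r"\b(fever|cough|rash)\b", text.lower())
          return {"symptom": match.group(1)} if match else {}

      # The 'expert system': actual logic, but a narrow, hand-maintained problem space.
      RULES = {
          "fever": "hydration and rest",
          "cough": "a lozenge",
          "rash": "a topical cream",
      }

      def reason(facts):
          """Apply the rules -- but only to facts the lossy analysis preserved."""
          return RULES.get(facts.get("symptom"))  # None for anything outside the rule base

      def generate(conclusion):
          """Lossy 'text generation' wrapped around the expert system's output."""
          if conclusion is None:
              return "I'm not sure what to suggest."
          return f"You might try {conclusion}."

      def pipeline(text):
          return generate(reason(analyze(text)))
      ```

      Note how the middle step is only as good as what survives the lossy step before it: `pipeline("I've had a cough for days")` works, but anything outside the three hard-coded keywords falls straight through to the fallback, which is the limitation the comment is pointing at.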

      The reason there’s so much resistance to conceptualizing anything in the AI field is that AI is scary and people would rather be in denial about its existence.

      Additionally, like most fad science trends, there is a gap between enthusiasts and professionals. See the old joke about tech enthusiasts vs old sysadmins.

      Plus, people like Sam Altman have demonstrated that there is an absurd amount of money to be made and influence to be wielded by overstating capabilities and anthropomorphizing code. Pushback against the griftiness is natural and healthy.

      This recent push of AI is the most costly business pursuit ever undertaken, and it is sold entirely on the promise that they’re going to bypass limitations inherent to the core design in some unspecified way at some unspecified point in the future; they just need even more money. OpenAI is the most successful example, and it loses money on every query.

      My point is that there are plenty of reasons to be skeptical or even distrustful of this shit besides “wElL yOu JuSt DoN’t UnDeRsTaNd”/“YoU’rE jUsT sCaReD oF aI”. Those are just cop-out excuses for your own lack of interest in engaging with opposing views, and in examining the underlying technology and business facts beneath the hype machine.

      AI is groundbreaking and amazing. It is also definitively not capable of the dream they’ve sold us, and there is no clear path from A to B. I don’t hate it. I find it far less useful than it’s sold as. I have absolute contempt for the record-breaking amount of money and resources going into it that would be far better invested elsewhere. And I have negative confidence that, even if by some reality-bending miracle they achieve the lofty hype-machine goals, any significant amount of the benefits will trickle down to the layperson at all.

      Apologies for the ramble. I’m just sick and tired of the thought-terminating argument that all people speaking against AI are scared or somehow don’t get it. That’s bullshit, and even a moment of thought should make you dismiss it as any sort of hard-and-fast rule.