• rolaulten@lemmy.world · 1 year ago

    While I think you might be onto something, this only works for consumer-focused goods, products, and services. Large language models can seed "what car should I get" type conversations; however, they have a tendency to be confidently wrong. And in industry-specific communities, being confidently wrong (especially when attempting to influence a B2B sale) can lead to all kinds of negative ripple effects.

    • Aceticon@lemmy.world · 1 year ago

      Being confident whilst caring not one jot about being right or wrong is the essence of selling in modern society, especially when it comes to ideas (and Politics above all).

      Large language models are great at producing text with all the subtle details that trigger the reader's subconscious feeling of "this is somebody who knows what he's talking about", and none of the subtle details that make people suspect the writer is unsure of what he wrote, or even deceitful (in a way, LLMs are the perfect sleazy politician).

      That said, I do agree with you that amongst expert communities populated by people with actual domain knowledge, assured delivery of bullshit doesn't go far. It will, however, very likely go far in the more generic communities, which individually seem to have the most subscribers. So an "influencer" strategy selling to "consumers" (B2C rather than B2B) might make sense, if most of Reddit's population are non-specialists using it as a "portal to the Internet".