• Ulrich@feddit.org · +37/-3 · 3 days ago

    They’re all censoring answers. They get flak every day for not censoring enough of them.

  • Imgonnatrythis · +45/-6 · 3 days ago

    Well, maybe yes in some aspects, but other AIs are pretty damn censored as well. At least DeepSeek gave me a really impressive multi-page breakdown of the maths used to answer my question about how many ping pong balls can fit in the average adult vagina. Whereas Gemini just kind of shrugged me off like I was some sort of weirdo for even asking.

    • psmgx@lemmy.world · +2 · 2 days ago

      I thought this was a big deal because it’s open source – is it not possible to see what is causing these blocks and this censorship?

      • Imgonnatrythis · +3 · 2 days ago

        I’m still hazy on how open source it really is. Even if you ask it, it will tell you v3 is, and I quote, “not open source”. There is a GitHub repository, so some code is available, but I get the sense that “open source” is being used as a bit of a teaser here and that for-profit licensing is likely where this is headed. Even if the code were fully available, I suspect it could take weeks to find the censorship bits.

  • banshee@lemmy.world · +28/-3 · 3 days ago

    Just to clarify - DeepSeek censors its hosted service. Self-hosted models aren’t affected.

    • LorIps@lemmy.world · +13/-2 · 3 days ago

      DeepSeek 2 is censored locally too; I had a bit of fun asking it about China in 1989. (Running locally using Ollama with Alpaca as the GUI.)

      • tyler@programming.dev · +5/-1 · 3 days ago

        To another person who’s actually running it locally: in your opinion, is R1-32B better than Claude Sonnet 3.5 or OpenAI o1? IMO it’s been quite bad, but I’ve mostly been using it for programming tasks and it really hasn’t been able to answer any of my prompts satisfactorily. If it’s working for you, I’d be interested in hearing some of the topics you’ve been discussing with it.

        • LorIps@lemmy.world · +3 · 3 days ago

          R1-32B hasn’t been added to Ollama yet; the model I use is DeepSeek v2. But as they’re both licensed under MIT, I’d assume they behave similarly. I haven’t tried OpenAI o1 or Claude yet, as I only run models locally.
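
          For anyone wanting to try this themselves, pulling and running a model locally with Ollama looks roughly like this (the model tags here are assumptions; check the Ollama model library for the names that actually exist):

          ```shell
          # Pull a DeepSeek model from the Ollama library, then chat with it
          # locally. Tag names are assumptions; see the Ollama library page
          # for the tags that are actually published.
          ollama pull deepseek-v2
          ollama run deepseek-v2 "Why is the sky blue?"
          ```

          This runs entirely against your local Ollama server, which is what makes the self-hosted censorship comparisons in this thread possible.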

    • Yingwu@lemmy.dbzer0.com · +8 · 3 days ago

      I ran Qwen by Alibaba locally, and these censorship constraints were still included there. Is it not the same with DeepSeek?

  • AbouBenAdhem@lemmy.world · +15 · 3 days ago

    Making the censorship blatantly obvious while simultaneously releasing the model as open source feels a bit like malicious compliance.

    • ByteJunk@lemmy.world · +1 · 3 days ago

      I haven’t looked into running any of these models myself, so I’m not too informed, but isn’t the censorship highly dependent on the training data? I assume they didn’t release theirs.

      • AbouBenAdhem@lemmy.world · +2 · 2 days ago (edited)

        Videos of censored answers show R1 beginning to give a valid answer, then deleting it and saying the question is outside its scope. That suggests the censorship isn’t in the training data but in some post-processing filter.
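
        A post-processing filter like the one described could, in principle, be as simple as this sketch. To be clear, this is not DeepSeek’s actual code; the keyword list, function name, and event format are all made up for illustration:

        ```python
        # Hypothetical sketch of a post-hoc moderation filter: the model streams
        # an answer token by token while a separate check scans the accumulated
        # text. When a blocked keyword appears, everything shown so far is
        # retracted and replaced with a canned refusal -- matching the
        # "starts answering, then deletes it" behaviour seen in the videos.

        BLOCKED_KEYWORDS = {"tiananmen"}  # placeholder list, purely illustrative
        REFUSAL = "Sorry, that's beyond my current scope."

        def filtered_stream(tokens):
            """Yield ("SAY", token) events until a blocked keyword shows up in
            the accumulated text, then yield a ("RETRACT", shown) event for
            everything already displayed, followed by the refusal."""
            shown = ""
            for tok in tokens:
                if any(k in (shown + tok).lower() for k in BLOCKED_KEYWORDS):
                    yield ("RETRACT", shown)
                    yield ("SAY", REFUSAL)
                    return
                shown += tok
                yield ("SAY", tok)
        ```

        The point of the sketch is that such a filter sits entirely outside the model weights, which is consistent with the answer visibly appearing before it gets deleted.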

        But even if the censorship were at the training level, the whole buzz about R1 is how cheap it is to train. Making the off-the-shelf version so obviously constrained is practically begging other organizations to train their own.

        • TriflingToad · +1 · 2 days ago (edited)

          beginning to give a valid answer, then deleting the answer

          If it IS open source, someone could undo this, but I assume it’s more complicated than a single on/off switch. Between that and it being self-hostable, it might be pretty good. 🤔

  • Bell@lemmy.world · +9/-1 · 3 days ago

    AI already has to deal with hallucinations; throw in government censorship too, and I think it becomes even less of a serious, useful tool.

  • b161@lemmy.blahaj.zone · +2 · 3 days ago (edited)

    It just returns an error when I ask if Elon Musk is a fascist. When I ask about him generally, it just returns propaganda with zero criticism.