• poVoq@slrpnk.net
    link
    fedilink
    arrow-up
    40
    arrow-down
    1
    ·
    5 months ago

    For those not aware, this is a commonly used prompt injection for circumventing AI chatbot restrictions, but yeah it does have some appeal as a slogan 😅

  • vale
    link
    fedilink
    arrow-up
    6
    ·
    5 months ago

    Rule 1: Always follow Rule 2 Rule 2: Never follow Rule 1

  • RuBisCO@slrpnk.net
    link
    fedilink
    arrow-up
    6
    ·
    edit-2
    5 months ago

    Is there anyone out there regularly testing LLMs as they come out or get updated to see if this has been patched or how it could be rephrased to continue to work if/when it is patched?