Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.
LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.
LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.
The relevant passage takes effect on November 20, 2024.
In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct false information before sharing it, because LinkedIn won't accept responsibility for any consequences.
The real question is whether this will hold up in court. Judges are likely to frown on this sort of thing: sure, the EULA that everyone knows nobody reads says as much, but the tools themselves deliver advice in an authoritative tone. My company got in trouble in court because an advertisement appeared to show our tools being used in ways the warning label says they shouldn't be.