The idea is to monitor internal communications and run sentiment analysis to check whether developers are becoming toxic, overly stressed, or burned out. While the technology could of course be abused, the general idea sounds pretty good, as long as the AI is on-prem for privacy reasons and the employer is transparent and honest about it. Making sure employees are healthy, happy, and productive sounds like a worthwhile goal. I wouldn't want a human therapist monitoring communications to look for negative signs, but the AI can screen stuff, focus exclusively on what it was told to, and forget everything on command.
AIs don't judge, don't remember, and don't hold anything against me, so I'd rather have an AI screening my stuff than a human, especially one of my superiors.
And yes, I trust an AI I run myself. I know it doesn't phone home (because it literally can't) and doesn't remember anything unless I go through the effort of connecting something like a Chroma or Weaviate vector database, which I then also host and manage myself. The beauty of open source. I would certainly never accept using GPT-4, Bard, or some other third-party cloud solution for something this sensitive.
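For what it's worth, here's roughly what that opt-in "memory" looks like with a self-hosted Chroma instance. This is a minimal sketch, assuming Chroma's default on-disk storage and local embedding function; the collection name and sample document are made up for illustration, not from any real deployment. The point is that memory only exists where you explicitly add it, everything stays on your own disk, and forgetting is a one-line delete.

```python
# Minimal sketch of opt-in memory for a self-hosted model, using a local
# Chroma instance. Names and sample text are illustrative only.
import chromadb

# All data lives in this local directory; nothing leaves the machine.
client = chromadb.PersistentClient(path="./local_memory")
notes = client.get_or_create_collection(name="screening_notes")

# Memory only exists because I explicitly chose to store something.
notes.add(
    ids=["msg-001"],
    documents=["Standup note: sprint feels rushed, two devs mention overtime."],
)

# Retrieve it later by semantic similarity.
results = notes.query(query_texts=["signs of burnout"], n_results=1)
print(results["documents"])

# "Forget on command": delete the record, or drop the whole collection.
notes.delete(ids=["msg-001"])
client.delete_collection(name="screening_notes")
```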