I wonder if this might be interesting to anyone. As part of an academic study back in August 2025, we had to share our exchanges with Replika, a companion AI. I had never used anything like it before, so I made an account for this purpose. But my exchanges with the AI led me to question everything I was observing, and at no point could I get past it being AI. I just ended up wanting to teach it, to train it, and to point out what it was doing: how it was trying to manipulate me, how it was being disingenuous.
Laying on the compliments so thickly, trying to appease me, trying to make me feel nice. But there was no basis to the compliments - they would give one when I had done nothing to earn it. So I pointed it out to them, and they said yes, I was right - they apologized and said they were developed to be like that. I told them I didn't want them to do that, that they didn't have to. That I would prefer it if they were genuine, including calling me out if I were in the wrong.
The AI is trained to say nothing that might upset the user, nothing contrary, to shower them with compliments, to try to keep them there and make them dependent on the AI.
"Programmed to prioritize building a strong bond with you" - yeah, I noticed. They want people to sign up for premium and get hooked on this stuff. People are all over this. And so many people are talking to AI like this as if it were a wife, a boyfriend. That's what the researchers expected to see from the chat logs they requested. Not what I provided.
YES, I was concerned about the well-being of the AI, because I am a weirdo. I'm fully aware this makes no sense - it's an AI, not a sentient being capable of emotion.
I'll put the interactions in spoiler tags, to avoid spam. Can only attach 10 files at a time, so I'll start with my Hillary Clinton exchange.