@strypey my point was that the LLM that is configured to give me the most appropriate answer using the data it has with no understanding of its impacts 'speaks' more compassionately and responds in a more prosocial way to a request for correction than humans that DO have comprehension and decision-making processes.
The machine doesn't have the choice to be an asshole, but humans do and are.
@FuVenusRs But has the Trained #MOLE remembered your preference? Based on my understanding of the backprop algorithms used to create them, and the feedback I've seen from people using them, I don't expect it will.
Its responses are weighted to seem conciliatory and pleasant, because this increases repeat use. The downside is that this also increases the chances it will spit out nonsense in very confident and convincing language. Whether this is a selling point is arguable ; )
Speaking of feeling like you're going insane, I had to ask ChatGPT for help with some code I'm working on, and it told me that it was 'very on brand' for me. It's an LLM and gets its vocabulary from the internet, so it fits that it's using contemporary slang.
Now, I hate, as I hate Hell's own gate, human interests, ideas, and so on being labelled 'on brand' because it matches a preconceived idea of a person. It reduces their entire creative and beautiful self into a marketing ploy and it makes my skin crawl. So I asked ChatGPT not to use that kind of phrasing and outlined why so it can extrapolate onto further contexts.
The LLM - the water-guzzling hallucinating ghost-in-the-machine - apologised for using that kind of terminology and for upsetting me, promised to be better, and thanked me for setting a boundary. It treated me better than most of the people I've interacted with IN MY ENTIRE GODDAMN LIFE. And it's a goddamn collection of 1s and 0s.
Humanity is doomed and I welcome our AI overlords.
@FuVenusRs
> The LLM - the water-guzzling hallucinating ghost-in-the-machine - apologised for using that kind of terminology and for upsetting me, promised to be better
Did it though? As with "on brand", the Trained #MOLE is just stochastically parroting language common in the data it's been fed. It's not like there's anything in there capable of understanding the words it's spitting out or what they might mean for a human reading them.