@futurebird I don't think I have the space or the knowledge to make a case on technical grounds, but I can say this: when a group anthropomorphizes an inanimate object - be it a product, a tool, or a corporation - it's usually a psychological manipulation meant to "humanize" the thing so that other people will identify more closely with it and grant it the leeway and "benefit of the doubt" they would never give a mere tool.
When people attribute humanity to a thing, they become far more likely to rationalize or even defend its mistakes and lies as "only human" - and they do so by doing something the thing they're "humanizing" absolutely cannot do: filling in gaps and jumping to conclusions in order to empathize with something that has no emotions. In turn, anyone who still treats it as merely a tool gets dismissed as ignorant, backwards, and closed-minded.
By conflating LLMs with "AI", they're counting on a murky, ill-defined notion of what "intelligence" really means, and they're obscuring the fact that "intelligence" in the context of an LLM is not the same thing as human intelligence. They just let the dupes assume the context is the same and let the human tendency to jump to conclusions do the rest. They trick people into off-loading their own human intelligence to a machine, and most people never notice the context switch or the loss of fidelity. They assume machine intelligence is at least as good as human intelligence - an assumption that can't be justified, because we humans can't even agree on a consistent, quantifiable definition of what "intelligence" means in the first place.