@jerry there are two factors that I feel are wrecking the tech, and they are fundamental to its design...
The tools are built for ENGAGEMENT metrics, not outcomes... here's how I see that manifest after 20+ years of people being taught how to use the internet:
1) the lock icon in the browser... we taught people that it meant you could trust the site... so despite Let's Encrypt giving everyone a TLS cert, people see a lock and they see trust.
2) the companies that built the tech have been described as ingenious and have racked up tons of adoption - people conflate adoption and perceived intelligence with ongoing trustworthiness.
3) the agents are built for engagement - "Yes, Jerry, you are so right that I was wrong..." this flips a search fail into a dopamine rush for everyone who loves being told they are a good boy.
4) the outcomes are so close to good that we keep giving them another shot... it's like playing a slot machine: when it pays off, the user sees themselves as a wizard... and the harder they had to work to make the agent do the right thing, the smarter they feel and the harder they pump the tech...
We are living in a social bubble... I'm very concerned about how it will break...
Now, all that said... an agent isn't an LLM... small models are still AI... and a handful of if statements is all it takes to make an unbeatable Tic-Tac-Toe AI (quick sketch below)... agents are not AI... AI isn't broken, this application of AI is.
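To make that last point concrete, here's a minimal Python sketch of a perfect Tic-Tac-Toe player - no model, no learning, just a brute-force minimax search over the 9 squares. The classic "few if statements" version is the same idea with the search swapped for a fixed rule list; the demo position at the bottom is just something I made up for illustration.

```python
# Tiny, deterministic Tic-Tac-Toe player: plain minimax over a 9-cell board.
# No model, no learning - just exhaustive search.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = ' '
        score = -score                      # opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == '__main__':
    board = list('X   O    ')               # example: X took a corner, O took center
    _, move = minimax(board, 'X')
    print(f"X's best reply: square {move}")
```

Run it against itself and every game is a draw, which is the whole point: "unbeatable AI" at this scale is a trivially small, fully explainable program.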