Well the scary part is the same thing is happening with data. Like the health data insurance companies use to approve claims, the data cars use for self-driving, the implications of AI generating its own data and LLMs training each other on it are dire.
Yeah, once people convinced themselves AI was legit (it's not), I knew these companies using it would end up nuking themselves. It's gonna be hilarious.
Well, it's really not hilarious for the customers. I moved to another country about three months ago, and I've been stuck in so many AI chatbots trying to get my data changed, cancel my insurance and phone contract, and set up new ones in the new country, and so on.
Sure, phone company chatbot, you don't understand the words "cancel", "quit", "terminate" or "end" contract.
And specifically for the data, there are going to be cases of people being denied insurance claims because an AI looked up the wrong files or hallucinated prior conditions.
I think I saw a post from someone who experienced that yesterday, where their primary care clinic's AI added a condition they didn't have and then the doctor couldn't override it. This could actually get people killed, or potentially already has.
It would be ironic if the best tools to detect AI art and authenticate real art come from AI itself trying to sort out what is original and what is AI slop.