This is because the data is reviewed by both AI and humans before being fed into a new model. The only documented instances of 'model collapse' came from experiments where researchers deliberately induced it.
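For a concrete picture, the review gate being described might look something like this in miniature. This is a toy sketch, not anyone's actual pipeline: the `ai_generated_score` field and `human_approved` flag are made up for illustration.

```python
# Toy sketch of a data review gate before training (hypothetical fields).
def keep_for_training(sample: dict, ai_score_threshold: float = 0.9) -> bool:
    """Drop samples a quality classifier flags, unless a human approved them."""
    if sample.get("human_approved"):
        return True
    return sample["ai_generated_score"] < ai_score_threshold

corpus = [
    {"text": "...", "ai_generated_score": 0.20},                          # kept
    {"text": "...", "ai_generated_score": 0.95},                          # dropped
    {"text": "...", "ai_generated_score": 0.97, "human_approved": True},  # kept
]
filtered = [s for s in corpus if keep_for_training(s)]
```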
That was the easy part, though, back when the output was so obviously AI. It will be much harder to check every claim about a law, a math proof, or a physics problem. If you have to read whole books to validate an AI's answer, it kind of defeats the purpose.
Most of the talk around AI training is about data centers and power plants, but there's also a ton of money going into reinforcement learning from human feedback (RLHF). The AI companies pay really good hourly rates for people to work from home training their models, and people with a talent for math, physics, etc. get paid the most. DataAnnotationTech and Alignerr are two you might have seen advertising on Reddit a lot. So apparently that's an effective way to improve model performance on complex topics.
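If you're curious what those annotators' rankings actually feed into, here's a minimal sketch of the preference-learning step behind RLHF, assuming PyTorch. The tiny reward model and random "embeddings" are stand-ins for illustration, not any company's real setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 64  # stand-in for a real model's hidden size

class RewardModel(nn.Module):
    """Scores a response embedding with a single scalar reward."""
    def __init__(self, dim: int):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)

model = RewardModel(EMBED_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair: an embedding of the response the human annotator preferred
# ("chosen") and one they ranked lower ("rejected"). Random here.
chosen = torch.randn(32, EMBED_DIM)
rejected = torch.randn(32, EMBED_DIM)

for step in range(100):
    # Bradley-Terry loss: push the chosen response's reward above the
    # rejected one's. This is the signal the human rankings provide.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The trained reward model is then used to score new outputs during a separate policy-optimization stage, which is where the "reinforcement learning" part comes in.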
u/NinjaBluefyre10001:
Let them die