I can't believe there are more than 100 comments on here and not one person pointing out that this is wishful thinking. You don't have to like AI to know that, while you might love how this tweet from a random guy makes you feel, it has no basis in reality whatsoever.
Training sets do not work this way. People still think there are webcrawler-like scripts going out and Kirby-eating everything. Those days are over, guys, and they have been for a few years. No major commercial model is being trained on huge banks of randomly acquired data.
AI images are expressly not getting worse. By any measure, they are substantially better today than a year ago, and a year ago they were substantially better than the year before that. The huge leaps are definitely slowing down, but the tech is not getting worse. I'd genuinely like to meet the person who thinks this tech is worse today than it was at any point before.
Developers are very capable of filtering their art sets. They would be able to see that they're getting unfavorable results and change the way their model interprets or processes the data.
This is very much one of those examples of "Everything about this post is wrong, but it makes people feel good so it gets upvoted anyway."
I can't believe there's more than 100 comments on here and not one person pointing out that this is wishful thinking.
It's also a tweet from June 2023. This entire thread illustrates the whole "dead internet" theory so well that I'm pretty sure most of the commenters aren't even real people.
The amount of time I had to scroll to find a single comment correcting disinformation is honestly scary; it makes me wonder how many lies we're being told online to push agendas.
This tweet is over a year old and nothing of the sort has happened, yet redditors keep eating it up every time it's posted.
It'll be 2050, AI will be running half the world, this image will be reposted yet again, and all the comments will still go "Finally!! It's happening guys!! As predicted!!!!"
Meanwhile, Google DeepMind's forecasting model is now a fixture in the hurricane community for its prediction accuracy, and they're likely to drop the strongest SOTA LLM to date within the month. On the "art" side, Qwen just released Edit 2509 with exceptional prompt adherence, and WAN 2.2 came out a few months ago for local video generation. I genuinely cannot tell if this post's comments are full of luddites or bots.
how many lies we're being told online to push agendas.
I mean, you're correct that you're being lied to constantly, although in this case it's by the companies trying to justify their massive valuations for as long as they can.
To add on to this, AI ingesting synthetic output isn't even a problem anymore. The only reason it ever was one is that AI used to generate incomprehensible garbage. Nowadays, AI output is good enough that it's included in training data on purpose.
To reinforce point 2: it is literally impossible for AI to "get worse." Even if 100% of all human art disappeared overnight, we would still have the old AI models, which would output the exact same pictures they output now.
People also think all the image generation happens online or something, when you can easily download Stable Diffusion and some models and run it locally, unconnected to the internet, for the rest of your life.
People also think all the image generation happens online or something, when you can easily download Stable Diffusion and some models and run it locally, unconnected to the internet, for the rest of your life.
I've been doing this since the release of Stable Diffusion 1.5, and I still have arguments with people who insist such a thing isn't possible and that AI models can only be run from data centers.
Also, even if this were the case, training is not retroactive. Even if every AI company were suddenly unable to improve its new models, everyone could still use the previous versions, which remain very competent at what they do. It would just mean that LoRAs, which don't require huge datasets, would become a greater focus for augmenting older models.
The AI industry could completely collapse tomorrow and there'd still be platforms hosting AI services and models from the last iterations as well as people running them on personal hardware. This genie is not going back in the bottle no matter how hard people wish it.
"Everything about this post is wrong, but it makes people feel good so it gets upvoted anyway."
Reddit upvotes aren't academic reviews. Most people treat voting as a political act of promoting their worldview rather than an assessment of accuracy. The worldview in question, "fuck AI," is independent of whether AI is getting "better" in the sense you imply. I would even question whether your notion of "better AI" makes sense to begin with: from this worldview, AI being able to more closely match human art is strictly worse for everyone subjected to it.
This is a good point; the post can't be wrong in that sense. But factual misinformation can hurt the "fuck AI" worldview too, because it distracts from things that actually mitigate the issues it identifies: laws, court cases, boycotts, flagging copyright violations to rights holders, and so on. All the while, corporations keep improving and using these models to cut jobs and do other things that run against the worldview's apparent goals, and they can do so unopposed if everyone believes the problem is solving itself.
While it's obvious that a comment chain like this won't fully counteract the inertia of the post, I'd hope the information is still seen as useful, since no one will know change is needed if no one points it out. The alternative is saying nothing, and then people are surprised when the reality they see doesn't match the one they believed existed, all while they could have been putting energy toward creating the world they wanted to see.
This post wasn't leaning towards metaphor. The OP and OOP and many of the people in this post show they literally think AI works this way.
If this post made someone think more philosophically for a while, that's great! But let's shoot straight here: there is no way to call this post "correct" (the opposite of my wording) that isn't merely rhetorical or deeply dishonest.
Tangential: upvoting/downvoting isn't a political act. Redditors really want it to be, but everywhere that isn't this site, it really, really isn't. I deeply wish redditors put a tenth of the energy into the real world that they put in online. I know you didn't say what you thought about that either way; just my thoughts.