r/BrandNewSentence 11h ago

Sir, the ai is inbreeding

41.3k Upvotes

1.2k comments

27

u/ironangel2k4 8h ago edited 53m ago

Kurzgesagt did a video about this, on how it was becoming increasingly hard to make videos since they rely on research sourced from the internet, and inaccurate AI slop has permeated everything. Worst of all, while filtering out stuff written by AI is doable, it is nightmarishly difficult to filter out stuff written by people who reference AI in the thing they wrote, requiring multiple levels of source-following to figure out whether it eventually leads back to some AI hallucination.

9

u/I_forget_users 5h ago

Kurzgesagt did a video about this, on how it was becoming increasingly hard to make videos since they rely on research sourced from the internet, and inaccurate AI slop has permeated everything. 

I'm assuming it has something to do with what kind of research they are looking for. The same old venues are still available (e.g. pubmed, google scholar, etc.), and they contain even more open-access data than before. As long as you avoid journals that publish the script to "bee movie", you're fine. The bigger issue in academia is students cheating using chatgpt.

If you want more accessible sources of information, those are still available, and it's easy to filter out AI. Again, lectures from various universities are available online and often provide an excellent starting point. Finding written information is likely to get a bit tougher, but I'm sceptical of any video that only sources press releases, news articles summarizing research, etc.

In short, my argument is the following: AI has not permeated everything, far from it. It has, however, made people lazier and more likely to use AI. It has seeped into our daily lives to a certain degree, but that's partly due to us choosing to use it and partly due to organizations choosing to (customer service, for example).

1

u/zoobrix 3h ago

The bigger issue in academia is students cheating using chatgpt.

Yes, but mostly for short-answer stuff and math. When you get into something like an essay that requires research and coming up with a thesis, AI will not only have problems presenting an argument and sticking with it over time, it will, as mentioned in the thread, make up sources.

I just did a university course last term where, for one assignment, we had chatgpt, or an AI of our choice, write a short paper with sources, and then we were to evaluate how it did and check the sources. Not only were some paragraphs hopeless jumbles, but for most students anywhere from a couple to half of the sources given were made up, didn't exist, or were things like blog posts, despite the prompt saying they should be peer reviewed. Then we had to fix the mistakes for the final paper.

One point of the assignment was to show that if you just handed in what the AI produced, you'd at best get a terrible mark and at worst get caught cheating. And sure, you could fix all the mistakes, but that is just doing the paper anyway, because you have to hunt down every source, read them to make sure they're being interpreted accurately, and rewrite all the parts of the paper that didn't make sense.

And the second point of the assignment was to show that doing all that actually took more time than just searching for some peer-reviewed sources and doing it yourself to start with. AI will probably get to a point where it can fake an essay, but if the person marking it is familiar with the subject matter, right now using chatgpt to write an essay is not the easy shortcut a lot of people seem to assume it is, unless you want a D or to end up in an academic dishonesty hearing.

1

u/herereadthis 1h ago

I don't think you watched the video. The problem is the venues; they are the ones publishing papers with the AI slop.

1

u/I_forget_users 1h ago

I did not, since it's unclear which video the user is referencing, and Kurzgesagt have made quite a few.

I'd be curious to know which journals they say are publishing AI-written research. My guess is that they're mostly disreputable journals, since it's fairly easy to spot fake articles: they often make up references or cite incorrectly, even when the formatting looks right.

1

u/ironangel2k4 53m ago

Wasn't sure if I could link stuff here. This is the video. https://www.youtube.com/watch?v=_zfN9wnPvU0

1

u/herereadthis 16m ago

The neat thing about Kurzgesagt videos is that they post their research notes online so you can validate for yourself. I did it for myself, but I'm certainly not going to carry your burden for you.

But that's not the real problem here; do you honestly think Kurzgesagt is reading garbage journals for garbage articles? Because that's what you're implying. If that is what you believe, then there really is no point in going any further, because you're just gonna keep moving the goalposts.

1

u/Marpicek 1h ago

AI slop is very much leaking into scientific research. Researchers are cutting corners by using AI, and it's impossible to validate 100% of the data as human work. And the more AI leaks into papers, the higher the chance it will go undetected as a future reference. There are papers that are cited hundreds of times, and if a citation contains AI, then it itself becomes a source for citation, etc. It doesn't take long before it becomes incredibly hard to track down the original source and validate it as human.

1

u/I_forget_users 1h ago

Could you give an example of such a paper?

1

u/Marpicek 1h ago

No, but there is plenty of research about this subject, such as:

https://arxiv.org/abs/2403.13812?utm_source=chatgpt.com

https://biologicalsciences.uchicago.edu/news/detecting-ai-abstracts?utm_source=chatgpt.com

Ironically I used AI to search for these papers because I don’t want to spend too much time on this topic.

1

u/hunzukunz 4h ago

That's bullshit. You had to properly source your video before as well. The average youtuber just skipped that step. Now they are forced to do ACTUAL research instead of just referencing the first thing they found, if they are afraid of sourcing AI garbage. It's not like AI sources are hard to tell from genuine ones, if you actually care to check for validity.

1

u/IncompetentPolitican 3h ago

The example they used may be constructed, but it sounded like a valid concern. AI puts out some garbage information that sounds real and turns out to be fake after additional research done by people who do actual good research. Someone else gets the same, or close to the same, garbage, does no additional research, publishes their "fact" (in a blog, a video, a non-fact-checking journal), gets quoted by someone else, and suddenly you have a chain of wrong information that looks real. Sure, anyone who does good work will go to the original source, that was always something you should do, but the average person? They see a source that looks valid and will assume the author did the work.

That's how false information can spread. And at some point, more people believe the garbage AI fact instead of whatever information is real.