AI model collapse risk underscores the need for reliable training data

AI models trained on data generated by artificial intelligence can break down. Scientists emphasize that the quality of the data fed to a model is crucial to how well it functions.

Artificial intelligence is changing all areas of life

25 July 2024 11:54

In an article published in the prestigious scientific journal "Nature," scientists argue that artificial intelligence (AI) models can suffer so-called "model collapse" when trained on data generated by other AI models. They stress that reliable, accurate data must be used when training AI models if the models are to function properly.

The foundation of their argument is the concept of "model collapse," the degradation that occurs when AI models are trained on datasets generated by other AI models. The scientists argue that such a process "contaminates" the results: the original content of the data is gradually replaced with unrelated nonsense, so that after several generations the models can begin producing output that makes no sense.

The scientists point to generative AI tools such as large language models (LLMs), which have gained popularity and were trained mainly on human-produced data. As these models proliferate on the internet, however, there is a growing risk that computer-generated content will be used to train other AI models, or even later versions of the same model. This process is known as a recursive loop.

Ilia Shumailov of the University of Oxford in the United Kingdom and his team used mathematical models to illustrate how AI models can experience "collapse." They showed that a model may skip less common elements of the training data (for example, rare text fragments), so that each generation is effectively trained on only part of the dataset.
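The intuition can be reproduced with a deliberately simple toy experiment in Python. This sketch illustrates the general idea only and is not the authors' actual mathematical model: each generation fits a Gaussian to a small sample drawn from the previous generation's fit, and the compounding estimation error makes the fitted spread drift, erasing the tails of the original distribution.

```python
# Toy illustration only (an assumption, not the study's model): recursive
# fitting of a Gaussian. Each generation "trains" on data sampled from the
# previous generation's fit; with small samples, the fitted standard deviation
# performs a random walk and tends to shrink, so rare tail events disappear.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the original "human" data distribution
n = 10                 # a small sample keeps the estimation error visible

for generation in range(1, 61):
    data = rng.normal(mu, sigma, n)      # sample from the previous generation
    mu, sigma = data.mean(), data.std()  # fit the next generation to it
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```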

Collapsing AI models

The researchers also analyzed how AI models respond to a training dataset created primarily by artificial intelligence. They found that feeding a model AI-generated data degrades the learning ability of each subsequent generation of models, ultimately leading to "model collapse."
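That recursive setup can be sketched with a tiny stand-in for a language model. The character-bigram model and seed text below are illustrative assumptions rather than the study's materials; the point is only that each generation trains exclusively on text sampled from the previous one, and diversity shrinks as rare patterns drop out.

```python
# Minimal sketch (assumed setup, far simpler than the LLMs in the study):
# each generation of a character-bigram model is trained only on text
# sampled from the previous generation.
import random
from collections import Counter, defaultdict

random.seed(0)

def train(text):
    """Count character-to-character transitions in the training text."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def sample(model, length):
    """Generate text by walking the learned transition counts."""
    ch = random.choice(list(model))
    out = [ch]
    for _ in range(length - 1):
        counter = model.get(ch)
        if not counter:                      # dead end: restart randomly
            ch = random.choice(list(model))
        else:
            chars, weights = zip(*counter.items())
            ch = random.choices(chars, weights=weights)[0]
        out.append(ch)
    return "".join(out)

# Illustrative seed text standing in for the original human-written corpus.
corpus = ("The gothic cathedral rose above the medieval town. Flying "
          "buttresses and pointed arches let masons open the walls into "
          "glass, while ribbed vaults carried the stone roof over the nave. ") * 5

for generation in range(1, 10):
    model = train(corpus)
    corpus = sample(model, len(corpus))      # next generation's training data
    print(f"generation {generation}: {len(set(corpus))} distinct characters")
```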

All of the recursively trained language models the scientists tested showed a tendency to repeat phrases. In one test, for example, the researchers used a text about medieval architecture as the initial training material; by the ninth generation, the model was generating content about hares instead of architecture.

The study's authors emphasize that "model collapse" is inevitable if AI training indiscriminately uses datasets generated by previous generations of models. Training on AI-generated data is not impossible in practice, they argue, but it requires careful filtering of that data, as sketched below. They also note that technology companies that use only human-generated content for AI training may gain a competitive advantage.
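One very simple form such filtering could take is shown below. The heuristic and thresholds are illustrative assumptions, not recommendations from the paper: synthetic samples are screened for degenerate repetition before being mixed with a human-written core.

```python
# Illustrative filter (assumed heuristic, not from the paper): reject synthetic
# samples whose character n-grams are highly repetitive, and always keep a
# human-written core in the training mix.
def repetition_score(text, n=3):
    """Fraction of repeated character n-grams: 0.0 = all distinct, near 1.0 = degenerate."""
    grams = [text[i:i + n] for i in range(max(len(text) - n + 1, 1))]
    return 1.0 - len(set(grams)) / len(grams)

def filter_synthetic(samples, max_repetition=0.5):
    """Keep only synthetic samples below the repetition threshold."""
    return [s for s in samples if repetition_score(s) <= max_repetition]

human_texts = ["Flying buttresses let masons open gothic walls into glass."]
synthetic_texts = ["The cathedral the cathedral the cathedral the cathedral.",
                   "Ribbed vaults carried the stone roof high over the nave."]

training_set = human_texts + filter_synthetic(synthetic_texts)
print(training_set)  # the degenerate, repetitive sample is dropped
```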

© Daily Wrap