Training an AI model on its own generated output destroys the model

Likening the situation to the dangers of genetic inbreeding, computer scientists Matyas Bohacek and Hany Farid have written a paper showing how AI image generators that are trained on their own generated data quickly begin to deteriorate.

'Nepotistically Trained Generative-AI Models Collapse' shows that training an AI image generator on AI-generated images quickly degrades the quality of its output, a decline that can only be reversed by re-introducing real images.

A more recent study in Nature found a similar effect in text generation, with models trained on synthetic data producing increasingly nonsensical results.
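The mechanism behind the collapse can be illustrated with a hypothetical toy sketch (this is not the papers' actual experimental setup): treat a "model" as something that can only reproduce items it saw in training, and train each generation on the previous generation's output by sampling with replacement. Rare items that happen not to be sampled vanish and can never be recovered, so diversity shrinks generation after generation, a minimal analogue of collapse on self-generated data.

```python
import random

random.seed(42)

# Generation 0: "real" training data with 100 distinct items.
vocab = list(range(100))
data = vocab[:]
diversity = [len(set(data))]

for generation in range(30):
    # The "model" can only emit items from its training set, so each
    # generation's output is a sample with replacement from the last.
    # Items missed by the sample are lost permanently.
    data = [random.choice(data) for _ in range(len(data))]
    diversity.append(len(set(data)))

print("distinct items per generation:", diversity)
```

Because each generation's outputs are a subset of the previous generation's, the count of distinct items can never increase, only fall; without an injection of fresh real data, the loss compounds, mirroring the papers' finding that only re-introducing real images halts the decline.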
