"Artificial Intelligence" is a generous marketing term for a data meatloaf that is gradually becoming all breadcrumbs. Round one of AI-generated content has been impressive. Sourced from 100% pure human-created data, it's been able to regurgitate surprisingly realistic content. But that will only happen once.
Synthetic generation is a better way to describe AI, because it's way more "A" than "I". With no laws or ground rules, and the promise of loads of money, this largely opaque technology is steamrolling its way online despite being dogged by claims of content theft and digital sweatshops.
Some experts warn that by 2026, just two years from now, 90% of online information could be the result of synthetic generation.
From this point forward, however, AI will be fed less and less human-created content. This is what's known as Model Collapse, or "Model Autophagy Disorder": AI trained on data generated by other AI quickly degrades into garbage, and with every round the share of synthetic content it re-consumes grows larger.
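The feedback loop can be sketched with a toy simulation (standard-library Python; the function names here are illustrative, not from any real training pipeline): fit a simple Gaussian "model" to some data, then train each successive model only on samples drawn from the previous one. Under these assumptions, the distribution's spread steadily collapses toward zero, a cartoon of how recursively trained models lose the variety and the tails of real human data.

```python
import random
import statistics

def fit_gaussian(samples):
    # "Train" a toy model: estimate mean and spread from the data.
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n, rng):
    # "Publish" n synthetic samples drawn from the fitted model.
    return [rng.gauss(mu, sigma) for _ in range(n)]

def simulate_collapse(generations=500, n=10, seed=42):
    # Each generation trains ONLY on the previous generation's output.
    rng = random.Random(seed)
    data = generate(0.0, 1.0, n, rng)  # generation 0: "human" data
    sigmas = []
    for _ in range(generations):
        mu, sigma = fit_gaussian(data)
        sigmas.append(sigma)
        data = generate(mu, sigma, n, rng)
    return sigmas

sigmas = simulate_collapse()
print(f"spread after 1 generation:    {sigmas[0]:.4f}")
print(f"spread after 500 generations: {sigmas[-1]:.2e}")
```

Because each fit is made from a small, noisy sample of the last model's output, estimation error compounds generation after generation and the measured spread shrinks by orders of magnitude; nothing in the loop can ever reintroduce the variety that was lost.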
Publishers and creative professionals are choosing to withhold content from the Web, starving future AI.
There are growing lists of documented AI risks to humanity, positing everything from survivorship bias to possible extinction. But the real risk to AI itself is that unless there's a huge uptick in the "I" part, it may just fall apart.