The biggest AI flops of 2024

AI slop infiltrated almost every corner of the internet

Generative AI makes creating reams of text, images, videos, and other types of material a breeze. Because only a few seconds pass between entering a prompt and your model of choice spitting out the result, these models have become a quick, easy way to produce content on a massive scale. And 2024 was the year we started calling this (generally poor quality) media what it is: AI slop.

Because producing AI slop is so low-effort, it can now be found in pretty much every corner of the internet: from the newsletters in your inbox and books sold on Amazon, to ads and articles across the web and shonky pictures on your social media feeds. The more emotionally evocative these pictures are (wounded veterans, crying children, a signal of support in the Israel-Palestine conflict), the more likely they are to be shared, resulting in higher engagement and ad revenue for their savvy creators.

AI slop isn’t just annoying—its rise poses a genuine problem for the future of the very models that helped to produce it. Because those models are trained on data scraped from the internet, the increasing number of junky websites containing AI garbage means there’s a very real danger models’ output and performance will get steadily worse. 

AI art is warping our expectations of real events

2024 was also the year that the effects of surreal AI images started seeping into our real lives. Willy’s Chocolate Experience, a wildly unofficial immersive event inspired by Roald Dahl’s Charlie and the Chocolate Factory, made headlines across the world in February after its fantastical AI-generated marketing materials gave visitors the impression it would be much grander than the sparsely decorated warehouse its producers created.

Similarly, hundreds of people lined the streets of Dublin for a Halloween parade that didn’t exist. A Pakistan-based website used AI to create a list of events in the city, which was shared widely across social media ahead of October 31. Although the SEO-baiting site (myspirithalloween.com) has since been taken down, both events illustrate how misplaced public trust in AI-generated material online can come back to haunt us.

Grok allows users to create images of pretty much any scenario

The vast majority of major AI image generators have guardrails (rules that dictate what AI models can and can’t do) to prevent users from creating violent, explicit, illegal, and other types of harmful content. Sometimes these guardrails are simply meant to ensure that no one makes blatant use of others’ intellectual property. But Grok, an assistant made by Elon Musk’s AI company xAI, ignores almost all of these principles, in line with Musk’s rejection of what he calls “woke AI.”
