AI slop is the new spam (and what to do about it)

Social media is flooded with artificial intelligence (AI) content. Generative AI images have been among the most-liked posts on Facebook, and a quick scroll through your timeline is likely to turn up plenty of animals, soldiers, religious figures or celebrities that were generated from a prompt. “AI slop” is the new spam. But what’s the point of it?

Introducing generative AI

Generative AI refers to any AI model designed to create content from user input. Tools such as ChatGPT, DeepAI and Canva can produce written content, images or videos based on prompts: the prompt is fed into a model trained on enormous amounts of existing material, which then assembles a result. If you’ve seen generative AI at work, you will know that the output ranges from basic and flawed to highly advanced. AI can produce a song that sounds almost exactly like Jimi Hendrix or Ed Sheeran in minutes. The question, however, is: should it? The most capable models can fool even trained experts at a glance, though the term “AI slop” generally refers to generated content that emerges with obvious flaws – and yet still manages to fool much of its audience.

Scratching the surface of AI slop

AI slop has often been compared to spam, and with good reason: generated content turns up on almost everyone’s social media feed at some point. Accounts and pages might post hundreds (sometimes thousands) of generated images per day – imagine the flood of owls with Hogwarts acceptance letters bursting through Harry Potter’s home, except electronic and with the wrong number of eyes per owl.

Usually, AI slop is easy to spot if you pay attention to the details. The truth, though, is that most people liking or sharing posts don’t. A generated image may look right at first glance, but there is almost always something that gives it away. Some AI-generated images look perfectly fine except for the person with seven fingers on one hand; others depict things that are impossible in reality – such as a horse made out of bread – and exist simply to harvest likes.

AI slop works because people have become used to scrolling through their timelines without really looking. It is particularly effective on anyone who isn’t internet-savvy: generated images of orphans or veterans are an easy way for page creators to pull in quick likes, or to trick users into donating to a cause that was made up in five minutes.

If you aren’t sure whether an image is AI-generated, run it through a reverse image search to look for duplicates online; AI slop images tend to reappear over and over (a small script for automating this lookup is sketched below). Failing that, the image’s details will usually fall apart on closer inspection. The visual “glitches” associated with generative AI can be as simple as a seven-fingered hand, but they can also be far stranger: look up the term “AI slop” on YouTube or Google Image Search and you will soon find things more bizarre than the average Escher drawing.
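For readers comfortable with a little scripting, the lookup can be automated. The sketch below is a minimal Python example, assuming the public search-by-URL patterns currently used by Google Lens and TinEye; both URL patterns and the example image address are illustrative and may change at any time. It simply opens the relevant search pages in your browser.

# Minimal sketch: open reverse image searches for a suspect image URL.
# The search-by-URL patterns below are assumptions based on the public
# Google Lens and TinEye web interfaces and may change at any time.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open browser tabs searching for other copies of the same image."""
    encoded = quote(image_url, safe="")
    search_pages = [
        f"https://lens.google.com/uploadbyurl?url={encoded}",  # Google Lens (assumed pattern)
        f"https://tineye.com/search?url={encoded}",            # TinEye (assumed pattern)
    ]
    for page in search_pages:
        webbrowser.open_new_tab(page)

if __name__ == "__main__":
    # Hypothetical example; replace with the image you want to check.
    reverse_image_search("https://example.com/suspicious-image.jpg")

If the searches show the same picture reposted across unrelated pages with slightly different captions, that is a strong hint you are looking at slop.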

Monetising nothing

Like spam, AI slop is made to spread and costs almost nothing to produce. Generative AI has made it easy for anyone to set up a monetised social media page that, for all practical purposes, posts absolute nonsense – and the more its content spreads from one user to the next via likes and shares, the more revenue the page creator earns.

A page that posts AI slop is usually, though not always, run by someone who intends to monetise it. Pages can also be set up to farm likes and shares and then be sold through online marketplaces – very, very unethical ones – to buyers who rename the page. Social media users who haven’t checked their page follows in a while might notice that their favourite “Black Sabbath fan page” has suddenly become “Jesus pictures and memes” or “Cars for sale”. (If they do, the best response is to unfollow and report the page.) AI slop is designed to scam users en masse while page creators, or unscrupulous sellers, make a quick buck.

Generative AI has also become a problem for retailers. Amazon is just one example of a marketplace flooded with badly generated content passed off as original. Spend some time scrolling through listings and you’ll find plenty of books hammered together from prompts. Like slop-filled pages, generative AI books flood the market with cheap content in the hope of a profit.

What users can do

Users whose timelines are flooded with AI slop can take steps to restrict it and stop the same content from coming back. First, report individual posts: most social media sites have an option in the corner of a post to mark it as spam. Next, report or unfollow any pages that repeatedly post cheap generated content. Advanced settings can hide specific tags or keywords from your timeline, which is useful when the algorithm has trapped you in a loop of ever-increasing spam. One more thing: don’t engage with it.

AI slop should be reported without commenting or liking – anger and frustration still count as engagement, and are often used to boost posts further. (This is known as rage-baiting: provoking users into a discussion with deliberately inflammatory statements, a whole separate marketplace in which individuals are hired for commercial, political and other reasons.) If you see family members or friends commenting on AI slop, it’s best to message them privately and explain that commenting only fuels the post’s engagement.

Generative AI has genuinely useful applications, such as serving as a visualisation aid, but social media users should remember that the same technology can be misused by anyone with five minutes and an internet connection. The moment something reminds you of an impossible staircase, you may well be looking at a fine example of modern AI slop.
