AI Labels Need to Be the New Norm in 2025

I’m an AI reporter, and next year, I want to be bored out of my mind. I don’t want to hear about increasing rates of AI-powered scams, messy boardroom power struggles or people abusing AI programs to create harmful, misleading or intentionally inflammatory photos and videos.

It’s a tall order, and I know I probably won’t get my wish. There are simply too many companies developing AI and too little guidance and regulation. But if I had to ask for one thing this holiday season, it’s this: 2025 needs to be the year we get meaningful AI content labels, especially for images and videos.


AI-generated images and videos have come a long way, especially over the past year. But the evolution of AI image generators is a double-edged sword. Improvements to the models mean that images come out with fewer hallucinations or flukes. But those oddities, like people with 12 fingers and disappearing objects, were among the few cues that let people flag an image and second-guess whether it was created by a human or AI. As AI generators improve and those tell-tale signs disappear, it’s going to be a major problem for all of us.

The legal power struggles and ethical debates over AI images will undoubtedly continue next year. But for now, AI image generators and editing services are legal and easy to use. That means AI content is going to continue to inundate our online experiences, and identifying the origins of an image is going to become harder — and more important — than ever. There’s no silver bullet, one-size-fits-all solution. But I’m confident that widespread adoption of AI content labels would go a long way toward helping.

The complicated history of AI art


If there’s one button you can push to send any artist into a blind rage, it’s bringing up AI image generators. The technology, powered by generative AI, can create entire images from a few words in a prompt. I’ve used and reviewed several of these generators for CNET, and it still surprises me how detailed and clear the images can be. (They’re not all winners, but they can be pretty good.)

As my former CNET colleague Stephen Shankland succinctly put it, “AI can let you lie with photos. But you don’t want a photo untouched by digital processing.” Striking a balance between retouching and editing away the truth is something that photojournalists, editors and creators have been dealing with for years. Generative AI and AI-powered editing only make it more complicated. 

Take Adobe, for example. This fall, Adobe introduced a ton of new features, many of them powered by generative AI. Photoshop can now remove distracting wires and cables from images, and Premiere Pro users can lengthen existing film clips with gen AI. Generative fill is one of the most popular Photoshop tools, on par with the crop tool, Adobe’s Deepa Subramaniam told me. Adobe has made it clear that generative editing is going to be the new norm. And because Adobe is the industry standard, that puts creators in a bind: Get on board with AI or fall behind.

Even though Adobe promises never to train on its users’ work — one of the biggest concerns with generative AI — not every company does or even discloses how its AI models are built. Creators who share their work online already have to deal with “art theft and plagiarism,” digital artist René Ramos told me earlier this year, noting how image generation tools grant access to the styles that artists have spent their lives honing.


What AI labels can do

AI labels are digital notices that flag when an image may have been created or significantly altered by AI. Some companies automatically add a digital watermark to their generations (like Meta AI’s Imagine), but many let users remove it by upgrading to a paid tier (like OpenAI’s DALL-E 3). Or users can simply crop the image to cut out the identifying mark.

There’s been a lot of good work done this past year to aid in this effort. Adobe’s Content Authenticity Initiative launched a new app this year called Content Credentials that lets anyone attach invisible digital signatures to their work. Creators can also use these credentials to disclose and track AI usage in their work. Adobe also has a Google Chrome extension that helps identify these credentials in content across the web.
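Content Credentials are built on the C2PA standard, which embeds a signed manifest inside the image file itself (in JPEGs, within metadata boxes labeled "c2pa"). As a rough illustration of why these labels are machine-readable rather than visible, here is a minimal sketch of a byte-level check for that label. This is an assumption-laden heuristic of my own, not Adobe's tooling: it cannot verify signatures, and it will miss files whose metadata was stripped by cropping or re-encoding, which is exactly the weakness the article describes.

```python
def looks_like_c2pa(path: str) -> bool:
    """Crude heuristic: scan a file's raw bytes for the "c2pa" label
    used by C2PA manifest stores. This is NOT real verification --
    a proper check needs a C2PA library that parses and validates
    the signed manifest.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data
```

A real verifier would parse the manifest and check its cryptographic signature; the point of the sketch is simply that the credential lives in the file's bytes, so any edit that rewrites those bytes can silently discard it.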

Google adopted a new standard for content credentials for images and ads in Google Search as part of the Coalition for Content Provenance and Authenticity, which Adobe co-founded. It also added a new section to image info on Google Search that highlights any AI editing for “greater transparency.” And SynthID, Google’s beta program for watermarking and identifying AI content, took a step forward this year when it was open-sourced for developers.

Social media companies have also been working on labeling AI content. People are twice as likely to encounter false or misleading images on social media as on any other channel, according to a report from Poynter’s MediaWise initiative. Meta, the parent company of Instagram and Facebook, rolled out automatic “Made with AI” labels for social posts, but the labels quickly and mistakenly flagged human-taken photographs as AI-generated. Meta later clarified that the labels are applied when it “detect[s] industry standard AI image indicators,” and changed the label to read “AI info” to avoid implying that an image was entirely generated by a computer program. Other social media platforms, like Pinterest and TikTok, have AI labels with varying degrees of success; in my experience, Pinterest has been overwhelmingly flooded with AI content, and TikTok’s AI labels are omnipresent but easy to overlook.

Adam Mosseri, head of Instagram, recently shared a series of posts on the subject, saying: “Our role as internet platforms is to label content generated as AI as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be generated with AI, so we must also provide context about who is sharing so you can assess for yourself how much you want to trust their content.”

If Mosseri has actionable advice beyond “consider the source,” which most of us are taught in high school English class, I’d love to hear it. More optimistically, his comments could hint at future features that give people more context, like X’s Community Notes. Tools like AI labels will matter even more if Meta goes through with its experiment to add AI-generated suggested posts to our feeds.

What we need in 2025

All of this is great, but we need more. We need consistent, glaringly obvious labels across every corner of the internet. Not buried in the metadata of a photograph, but slapped across it (or above or below it). The more obvious, the better.

There isn’t an easy solution to this. That kind of online infrastructure would take a lot of work and collaboration across tech companies, social platforms and probably government and civil society groups. But that kind of investment is essential if we want to distinguish raw images from entirely AI-generated ones, and everything in between. Teaching people to identify AI content is great, but as AI improves, it’s going to get harder for even experts like me to accurately assess images. So why not make it super freaking obvious and give people the information they need about an image’s origins — or at least help them second-guess when they see something weird?

My concern is that this issue is currently at the bottom of many AI companies’ to-do lists, especially as the tide seems to be turning to developing AI videos. But for the sake of my sanity, and everyone else’s, 2025 has to be the year we nail down a better system for identifying and labeling AI images. 
