Adobe envisions a future Internet awash in photos and videos accompanied by information about their provenance. The company's main purpose is to stop visual disinformation from spreading, but the method might also benefit content creators who want to link their identity to their work.
Adobe's Content Authenticity Initiative (CAI), introduced in 2019, has since produced a whitepaper on technology to achieve just that, integrated the system into the company's software, and worked with newsrooms and hardware manufacturers that can help realize its goal of broad adoption.
"Detection [for misinformation] is going to be an arms race, and, you know, frankly, the good guys will lose," Andy Parsons, senior director of Adobe's CAI, told TechCrunch. "We endeavoured instead to double down on content authenticity, which is this idea of proving what's real, how something was made, in cases where it makes sense, who made it."
The new standard also preserves information about how a file was made, recording how it was created and altered, much as EXIF data records specifics like aperture and shutter speed. That metadata, which Adobe refers to as "content credentials," would be publicly viewable on social networking platforms, image editors, and news sites if the company's vision is realized.
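To make the idea concrete, here is a minimal sketch of how a tamper-evident provenance record might work: a claim about the creator and edit history is bound to a hash of the image bytes and then signed. This is only an illustration of the principle; the names and the HMAC-with-shared-key scheme are assumptions for the demo, and the real C2PA format is a binary manifest signed with X.509 certificates, not JSON.

```python
import hashlib
import hmac
import json

# Demo-only shared key; real content credentials use certificate-based
# signatures, not a symmetric secret.
SECRET_KEY = b"demo-signing-key"

def make_manifest(image_bytes: bytes, creator: str, edits: list) -> dict:
    """Bind a claim about who made an image, and how it was edited,
    to a hash of the image itself."""
    claim = {
        "creator": creator,
        "edits": edits,  # e.g. ["crop", "exposure +0.3"]
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # The signature makes the claim tamper-evident: change the pixels
    # or the edit history and verification fails.
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["asset_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # the pixels no longer match the signed claim
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

img = b"\x89PNG...fake image bytes..."
m = make_manifest(img, "Jane Photographer", ["crop", "exposure +0.3"])
print(verify_manifest(img, m))         # True
print(verify_manifest(img + b"x", m))  # False: image was altered
```

The key design point is that the metadata is not merely attached to the file; it is cryptographically bound to the file's contents, so any alteration is detectable.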
The C2PA standard was created by Adobe's CAI in collaboration with companies such as Microsoft, Sony, Intel, Twitter, and the BBC. The Wall Street Journal, Nikon, and the Associated Press have joined Adobe's commitment to making content authentication publicly available.
The CAI's primary purpose is to combat visual misinformation on the Internet, such as recirculated old photographs misrepresenting the fighting in Ukraine or Nancy Pelosi's infamous "cheap fake." A digital chain of custody could also help content creators whose work is stolen or resold, a problem that has long plagued visual artists and is now causing headaches in NFT marketplaces.
According to Parsons, the CAI is also attracting a surprising amount of interest from companies that create synthetic images and videos. By embedding provenance metadata in AI-generated media, those companies can ensure that output from models like DALL-E is not readily mistaken for authentic imagery.
While the C2PA standard is similar to EXIF, Adobe claims that the new content attribution standard is less "fragile," making it harder to tamper with or strip the associated data. On a verification page Adobe launched last year, anyone can drag and drop an image with content credentials to verify its integrity, and image fingerprinting techniques can reattach credentials whose embedded data has been stripped. Adobe's goal for online content authentication is ambitious, but the company is also realistic about the project's constraints. People with bad intent will always find a way to mislead others. Still, the organization thinks that most average internet users will be willing to learn more about which content to trust online.
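The fingerprinting idea mentioned above can be sketched with a toy perceptual hash: a compact signature derived from the pixels themselves survives recompression or minor edits, so detached credentials can be looked up and re-attached by matching signatures. This average-hash example is purely illustrative; Adobe has not published the matching algorithm its verification service uses, and production systems are far more robust.

```python
def average_hash(pixels: list) -> int:
    """Toy perceptual hash over a grayscale pixel grid: each bit records
    whether a pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distance means 'same picture'."""
    return bin(a ^ b).count("1")

original = [[10, 200, 30], [220, 40, 210], [15, 190, 25]]
# The same image after slight brightening: embedded metadata may be
# gone, but the fingerprint barely changes, so stored credentials can
# still be matched and re-attached.
recompressed = [[12, 198, 33], [218, 44, 207], [18, 188, 28]]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
```

Unlike a cryptographic hash, which changes completely under any edit, a perceptual hash is designed to stay stable under the everyday transformations (resizing, recompression) that routinely strip metadata on the web.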
The company also expects that if the standard gains traction, image-heavy social media platforms will be more inclined to adopt it, even if they refuse at first. Flickr, for example, has traditionally displayed EXIF data alongside each image, but Instagram and most newer image-based social networks strip metadata, though some encourage users to re-add location tags.
Adobe has long invested in content authenticity and in educating the public about misleading visuals, but it is now attempting to lay the groundwork for widespread adoption, as it has done with widely used file formats like XMP and PDF, as well as its industry-standard imaging tools.