C2PA Content Credentials — the new metadata layer above EXIF
EXIF tells you what the camera saw. C2PA Content Credentials tell you what every tool in the chain did to the photo afterwards — captured by Sony Alpha, edited in Adobe Photoshop, exported, watermarked, published — with each step cryptographically signed. It is a fundamentally different kind of metadata from EXIF, and you'll start seeing it in more files as cameras and AI generators adopt it.
What C2PA is
The Coalition for Content Provenance and Authenticity (C2PA), founded by Adobe, Microsoft, BBC, and others in 2021, publishes an open specification for embedding signed provenance into media files. The format reuses the JUMBF container (JPEG Universal Metadata Box Format, ISO 19566-5) and lives inside JPEGs in the APP11 marker (0xFFEB). PNG, MP4, PDF, WAV, and other containers have analogous slots.
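To make the APP11 placement concrete, here is a minimal sketch of locating APP11 segments in a JPEG byte stream. It assumes a well-formed JPEG and stops at the entropy-coded data; a real C2PA payload is typically split across several APP11 segments that must be reassembled before the JUMBF box can be parsed, which this sketch does not attempt.

```python
import struct

def find_app11_segments(data: bytes):
    """Return (offset, payload) for each APP11 (0xFFEB) segment in a JPEG.

    C2PA embeds its JUMBF boxes in these segments. This only walks the
    marker segments before the scan data; it does no JUMBF parsing.
    """
    segments = []
    pos = 2  # skip the SOI marker (0xFFD8)
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            break  # reached entropy-coded data or a malformed stream
        marker = data[pos + 1]
        if marker == 0xD9:  # EOI
            break
        # The 2-byte length field counts itself but not the marker bytes.
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        payload = data[pos + 4:pos + 2 + length]
        if marker == 0xEB:  # APP11
            segments.append((pos, payload))
        pos += 2 + length
    return segments
```

Running this over a file with Content Credentials should yield one or more APP11 segments whose payloads together form the JUMBF tree.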
Adobe's consumer-facing branding for this is Content Credentials; the underlying bytes are C2PA. They are the same thing.
What's inside a manifest
A C2PA manifest is a structured document with three top-level pieces:
- Claim — what tool produced or modified this version of the file. "Captured on Sony α7 IV firmware 2.0", "Edited in Adobe Photoshop 25.5", "Generated by DALL·E 3".
- Assertions — statements of fact attached to the claim. A thumbnail of the image at this stage. A hash of the pixel data. A list of "ingredient" files that contributed to this one (the original RAW, an image used as a layer). User-provided fields like author or location.
- Signature — a COSE (CBOR Object Signing and Encryption) wrapper around the claim and assertions, signed with an X.509 certificate. The certificate chains up to a public C2PA trust list.
Every editing tool that supports C2PA appends a new manifest when it saves the file, and references the previous manifest in the chain. The result is a tamper-evident history: a verifier walks the chain, checks each signature against its certificate, and either confirms the chain is intact or flags exactly where it was broken.
Who actually signs photos right now
- Sony Alpha cameras — α7 IV, α1, α9 III, and α7S III received C2PA signing via firmware updates in 2024. Photos are signed at capture with a per-body certificate.
- Leica M11-P — first camera to ship with Content Credentials enabled by default in 2023.
- Adobe Creative Cloud — Photoshop, Lightroom, and Firefly add manifests on export when Content Credentials are enabled.
- OpenAI DALL·E and ChatGPT image generation — all generated images carry C2PA manifests labelling them as AI-generated.
- Microsoft Bing Image Creator, Google Imagen — similar AI-provenance manifests.
What it doesn't do
C2PA does not prove a photo is real. A manifest can claim "Captured on Sony α7 IV", but if the certificate it was signed with isn't in a trust list — or the signer is unknown — the claim is just text. Conversely, the absence of a manifest doesn't imply manipulation: most photos in the world today don't have one.
It also doesn't survive aggressive social-media re-encoding. Platforms that strip metadata strip the C2PA manifest along with EXIF. The manifest is intended to survive only between tools that preserve it deliberately.
How our viewer shows it
The viewer surfaces whatever the bundled ExifTool reads from the JUMBF box. ExifTool decodes the JUMBF tree in APP11 and emits the claim, the signing tool, the timestamp, and the assertions as individual tags, which appear in the standard tag table — searchable like any other field. ExifTool does not perform cryptographic signature verification (that needs a trust list and live certificate validation, neither of which is bundled), so the manifest is shown as data, not as a verified claim.
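As a sketch of what searching those tags looks like, the snippet below filters a record shaped like ExifTool's JSON output (`exiftool -json -G1 file.jpg`) down to the JUMBF group. The sample record and the specific tag names in it are illustrative, not copied from a real file.

```python
import json

# Hypothetical exiftool -json -G1 output; tag names are illustrative.
sample = json.loads("""{
  "SourceFile": "photo.jpg",
  "JUMBF:ClaimGenerator": "Adobe Photoshop 25.5",
  "JUMBF:SignatureDate": "2024-05-01T12:00:00Z",
  "EXIF:Model": "ILCE-7M4"
}""")

def jumbf_tags(record: dict) -> dict:
    """Keep only tags ExifTool placed in the JUMBF group,
    where the decoded C2PA manifest fields appear."""
    return {k: v for k, v in record.items() if k.startswith("JUMBF:")}
```

In the viewer these fields land in the standard tag table alongside EXIF, so the same search box covers both.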
If you're checking whether an image came from a specific AI generator or whether it's been edited since capture, C2PA is the field to look at. If it's missing, the photo predates C2PA adoption or was processed by a tool that didn't preserve manifests.