Identifying Digital Content Created Using Generative AI

Creating content with generative AI is now a lasting part of our lives. Sometimes content consumers know that AI tools were used, but often the source or provenance of AI-generated digital content is not transparent. This opacity enables deepfakes (manipulated images or video) and other misleading information.

The Content Authenticity Initiative (CAI) builds tools that can identify digital content created using generative AI. Many of these tools implement the technical standards set by the Coalition for Content Provenance and Authenticity (C2PA).

Digital Bedrock has joined the CAI community. Our Digital Preservation Application (DPA) already extracts C2PA metadata automatically from image files as part of our standard digital preservation processing, and we are testing and expanding our tooling to cover additional formats and content types. Please follow our website and social media as we continue to roll out services that help our clients identify generative AI content in their archives.
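To give a flavor of what such extraction involves: the C2PA specification embeds manifests in JPEG files as JUMBF boxes carried in APP11 segments. The sketch below is a simplified, hypothetical detector (not Digital Bedrock's actual implementation) that scans a JPEG's header segments for an APP11 payload containing a C2PA label; real validation of the manifest and its signatures requires a full C2PA SDK such as the open-source c2pa tooling.

```python
def has_c2pa_segment(data: bytes) -> bool:
    """Heuristically detect embedded C2PA metadata in JPEG bytes.

    Walks the JPEG marker segments and looks for an APP11 (0xFFEB)
    segment whose payload mentions the "c2pa" JUMBF label. This only
    detects presence; it does not parse or verify the manifest.
    """
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: header scan is done
            break
        length = int.from_bytes(data[i + 2 : i + 4], "big")
        payload = data[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 carries JUMBF
            return True
        i += 2 + length  # advance past marker bytes plus segment
    return False
```

A preservation workflow could run a check like this cheaply across an archive to flag candidate files for deeper manifest extraction and verification.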