
Backing OpenOrigins to Restore Trust in Digital Content

The Double-Edged Sword of Generative AI

There is no doubt that generative AI is revolutionary. However, we're entering a world where seeing is no longer believing. Perhaps the most significant question is: how can we ensure trust and verify authenticity in this rapidly evolving digital landscape?

According to Sumsub, last year saw a more than 1,000% increase in the number of deepfakes detected globally across all industries, while the cost of creating a deepfake has dropped to nearly zero. Unfortunately, the 2024 U.S. election further highlighted just how serious the deepfake epidemic has become - from Taylor Swift’s public denouncement of a deepfake post claiming her endorsement, to the FBI’s warning about fake videos alleging ballot fraud, to synthesized and manipulated voices of both presidential candidates spreading political disinformation.

Beyond the immediate risks, there is also a critical need to secure and authenticate real-world data used for training and fine-tuning AI models. Researchers from the Universities of Oxford and Cambridge have discovered that even the inclusion of 10% synthetic (or fake) training data can degrade AI performance significantly, leading to issues like “model collapse.”

Source: Adobe

Existing Solutions: Why They Fall Short

While there are notable attempts to combat deepfakes and offer authentication solutions for digital content, the approaches we've seen have significant limitations:

Approach 1: Detection (using AI to verify pixels). Detection is a reactive mechanism that relies on constantly updating models - a classic “whack-a-mole” tactic. Detection models must continuously evolve and retrain to keep pace with advances in deepfake generation, which will eventually produce undetectable output. Worse, detectors ironically help deepfake generators learn to disguise their anomalies. Take a close look at the gallery below - can you tell which image is authentic? Pixel-level differences are quickly becoming undetectable.

Source: Europol Innovation Lab

Approach 2: Crowd-sourcing. Some solutions rely on a crowd-sourcing strategy to verify content against decentralized databases or protocols. However, crowdsourced reporting often yields inconsistent results and struggles to keep pace with fast-evolving deepfake techniques. As such, these approaches fall short of enterprise-grade requirements, which demand scalable, timely, consistent, and high-confidence authentication across vast volumes of media.

Approach 3: Watermarking. One prevalent watermarking option is Content Credentials (CR), defined by the C2PA standard. While we closely monitor its development, we recognize its current limitations in capturing lower-level metadata and its strong association with the Adobe ecosystem - Photoshop is currently the sole compliant tool for editing images - which addresses only a fraction of the millions of images generated each year. Additionally, watermarks can be removed or copied by fraudsters, diluting the effectiveness of a genuine watermark.

"Declaration", not "Detection"

We are excited to support OpenOrigins in its full-stack, immutable approach to establishing trust in digital content. OpenOrigins’ innovative IP, developed by Dr. Manny (Mansoor) Ahmed, founder and CEO of OpenOrigins, represents the culmination of over eight years of research. Dr. Ahmed, who earned his PhD in Computer Security at the University of Cambridge and completed a postdoc there in Distributed Computing, began studying deepfakes in 2017. He soon realized that detection alone would only lead to an ever-escalating arms race and that a completely different approach was needed.

Instead of reactive detection, OpenOrigins offers a fundamentally different, proactive solution: declaring authenticity at the point of capture. It addresses the challenge with a uniquely multi-source, tamper-proof infrastructure that secures content in real time at the source of generation, establishing a root of trust for content archives. This approach combines trusted execution environments (TEEs), device-based sensors, cryptography, and blockchain to build trust in online content from the ground up.

One of the key innovations of OpenOrigins lies in the trusted compute layer: hardware-based security chips, such as the trusted platform module (TPM), embedded in smartphones, PCs, and other devices. OpenOrigins leverages existing motion sensors (e.g. gyroscope, accelerometer, magnetometer, pressure sensor) and/or depth sensors (LiDAR, radar, sonar, etc.) to capture metadata generated on the device, all of which is validated and encrypted by the TPM before being written into the company’s proprietary distributed system.
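
To make the idea concrete, here is a minimal sketch of capture-time attestation. It is purely illustrative: OpenOrigins’ actual protocol is proprietary, the function names are our own, and an HMAC with a symmetric key stands in for what a real device would do with an asymmetric key sealed inside the TPM/TEE.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a key sealed inside the device's TPM; a real
# implementation would sign with an asymmetric key that never leaves the chip.
DEVICE_KEY = b"tpm-sealed-device-key"

def attest_capture(image_bytes: bytes, sensor_metadata: dict) -> dict:
    """Bind an image to its capture-time sensor metadata and sign the bundle."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": sensor_metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC stands in here for a hardware-backed signature over the canonical record.
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(record: dict) -> bool:
    """Recompute the signature over everything except the signature itself."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The point of the sketch is the binding: because the sensor readings are hashed and signed together with the image at capture time, neither the pixels nor the metadata can later be altered without invalidating the signature.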

Source: OpenOrigins

Building on this robust foundation of device-level security, OpenOrigins offers Archive Anchoring as a Service for media and content archives. Assets and their metadata are registered on a scalable, decentralized network, immutably anchoring them and protecting them from tampering and manipulation while allowing their authenticity and provenance to be verified.

To give customers full control, each customer hosts a node that it can independently maintain, keeping the archive securely in its own hands. This configuration also allows external parties to independently verify content integrity.
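
A common way to build this kind of tamper-evident anchoring is a Merkle tree: hash every asset, commit a single root to the decentralized network, and let anyone verify an individual asset with a short inclusion proof. The sketch below shows that standard construction under our own assumptions; it is not OpenOrigins’ published design.

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Compute one root committing to every asset in an archive batch.

    Odd levels duplicate the last node so every node has a sibling.
    Anchoring this single root immutably commits to all the assets.
    """
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to prove one leaf is in the tree."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left_sibling)
        index //= 2
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify_proof(leaf, proof, root):
    """An external party needs only the leaf, the proof, and the anchored root."""
    node = _h(leaf)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root
```

Note that `verify_proof` needs no access to the archive itself, which matches the external-verification property described above: a third party holding only the anchored root can check any single asset.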

Once anchored and authenticated, media can be credibly licensed as provably human, copyright-compliant source data for AI training. With growing demand for high-quality, authentic, non-synthetic training data, OpenOrigins is well positioned at the center of a flywheel: as a data marketplace, it serves both the supply and demand sides of the AI training market.

Trusted by Enterprises and Customers

OpenOrigins has seen significant interest from global media companies. It has partnered with ITN, one of the UK’s leading content producers behind Channel 4 and ITV, to protect the human origins of ITN’s media archive, which spans the past 70 years and consists of over a million video and audio assets. ITN can now unlock a new revenue stream by licensing its secured archives, via OpenOrigins’ marketplace, to AI companies that responsibly train their models.

OpenOrigins’ ability to win the trust of media partners was also demonstrated by its recent partnership with Fathm (see the public announcement), a leading media innovation organization that works with global newsrooms and media platforms such as Graham Media Group, the Google News Initiative, and AFP. Through this partnership, OpenOrigins will help newsrooms of all sizes take advantage of new AI-driven licensing opportunities, ensuring that content remains secure, ethical, and profitable.

Last Line of Defense Against Deepfakes with Scalable AI and Blockchain

Our investment in OpenOrigins aligns with our vision of harnessing the transformative power of AI, device-based and distributed security, cryptography, and blockchain technology. OpenOrigins’ innovative and scalable technology has potential applications far beyond the media sector, in areas such as neural network authentication, customer onboarding/KYC, fraud mitigation, remote infrastructure monitoring, and visual inspection. While these innovations present exciting opportunities, it is essential to acknowledge that challenges remain in the fight against deepfakes and digital misinformation.

It’s often said that the best offense is a good defense. We believe OpenOrigins could be the last line of defense against the rapidly evolving threat of deepfake technology.

Feel free to reach out to [email protected] and [email protected] if you have any questions or insights about deepfakes, GenAI, decentralized infrastructure, or distributed computing. We welcome ideas, suggestions, and opportunities to work together to build a more trustworthy, connected, and intelligent internet.

*OpenOrigins is a Galaxy Interactive and Galaxy Ventures portfolio company.