
Microsoft’s AI Blueprint: Can Tech Fight Deepfakes & Disinformation?

by Lisa Park - Tech Editor

Microsoft is pushing a new blueprint for verifying the authenticity of online content, aiming to combat the growing problem of AI-generated disinformation. The initiative, detailed in a report shared with MIT Technology Review, proposes a series of technical standards designed to establish the provenance – the origin and history – of digital media, including content created by artificial intelligence.

The urgency stems from the increasing sophistication of AI-driven deception. Examples range from manipulated images shared by political figures to videos disseminated by Russian influence campaigns intended to discourage Ukrainian enlistment. The core idea, as explained by Andrew Jenks, Microsoft’s director of media provenance, is to address the “cognitive hacking” that deepfakes represent: a disruption of trust in reality itself.

Microsoft’s approach draws parallels to methods used to authenticate physical artifacts. Just as a Rembrandt painting’s authenticity is established through a detailed record of its provenance, the company envisions an equivalent paper trail for digital content: signed metadata documenting a file’s source, a record of its modifications, embedded watermarks, and a unique “fingerprint” derived from the content’s own characteristics. The company evaluated 60 different combinations of these methods, modeling their resilience against various manipulation attempts.
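The report doesn’t publish reference code, but the mechanics are easy to sketch. Below is a minimal, hypothetical Python illustration of two of the building blocks: a signed provenance record documenting source and edit history, and a content fingerprint tying that record to the exact bytes. The key, source strings, and helper names are all invented for the example; a production system (C2PA, for instance) uses asymmetric signatures with certificate chains, and often perceptual fingerprints that survive re-encoding.

```python
import hashlib
import hmac
import json

# Illustrative only: a shared HMAC key stands in for the asymmetric
# signatures (X.509 certificate chains) a real provenance system uses.
SIGNING_KEY = b"demo-key-not-for-production"

def fingerprint(content: bytes) -> str:
    """A content 'fingerprint': here a plain SHA-256 digest. Real systems
    often add perceptual hashes that survive re-encoding and resizing."""
    return hashlib.sha256(content).hexdigest()

def make_provenance_record(content: bytes, source: str, edits: list[str]) -> dict:
    """Bind the source and edit history to the content's fingerprint,
    then sign the whole record so tampering is detectable."""
    record = {
        "source": source,        # who produced the file (camera, AI model, ...)
        "edits": edits,          # declared modification history
        "fingerprint": fingerprint(content),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    """Check that the signature is intact and that the fingerprint
    still matches the bytes we were handed."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["fingerprint"] == fingerprint(content))

if __name__ == "__main__":
    image = b"...raw image bytes..."
    rec = make_provenance_record(image, source="ExampleCam 3.1", edits=["crop"])
    print(verify(image, rec))                # True: untouched content
    print(verify(image + b"tampered", rec))  # False: fingerprint mismatch
```

The property the sketch captures is the one the blueprint relies on: altering either the media bytes or the claimed history breaks verification, so a platform can mechanically separate documented content from content with no trail.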

This isn’t a new endeavor for Microsoft. The company co-founded the Coalition for Content Provenance and Authenticity (C2PA) in 2021, alongside Adobe, Arm, Intel, and Truepic. C2PA is focused on developing an open technical standard for content provenance and authentication, and it grew out of two foundational efforts: Project Origin, which Microsoft launched with the BBC, and the Adobe-led Content Authenticity Initiative (CAI). The current push represents an attempt to codify and expand these initiatives in response to rapidly evolving AI capabilities.
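For a sense of what that standard covers, here is a simplified, hypothetical rendering of a C2PA-style manifest as a Python structure. The labels paraphrase C2PA concepts (a claim generator, assertions such as an edit-action list and a content hash, and a claim signature); the real manifest is binary (CBOR-encoded), embedded in the asset itself, and signed with X.509 certificates.

```python
# A simplified, illustrative C2PA-style manifest. Labels paraphrase the
# standard's concepts; this is not the exact schema or encoding.
manifest = {
    "claim_generator": "ExampleEditor/2.0",  # hypothetical authoring tool
    "assertions": [
        {
            # Declared edit history: who created and modified the asset.
            "label": "c2pa.actions",
            "data": {"actions": [
                {"action": "c2pa.created", "softwareAgent": "ExampleCam"},
                {"action": "c2pa.edited", "softwareAgent": "ExampleEditor"},
            ]},
        },
        {
            # Hard binding: a hash tying the manifest to the exact bytes.
            "label": "c2pa.hash.data",
            "data": {"alg": "sha256", "hash": "<digest of the asset bytes>"},
        },
    ],
    # Signature over the claim, so tampering with the assertions
    # (or the bytes they hash) is detectable by a validator.
    "signature": "<claim signature from the signer's certificate>",
}
```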

However, the effectiveness of such measures remains a subject of debate. Hany Farid, a professor at UC Berkeley specializing in digital forensics, acknowledges that widespread adoption of Microsoft’s blueprint would “meaningfully” increase the difficulty of deceiving the public. He cautions, however, that sophisticated actors – including governments – will likely attempt to circumvent these tools. Nevertheless, he believes the standard could eliminate a significant portion of misleading material, stating, “I don’t think it solves the problem, but I think it takes a nice big chunk out of it.”

A key challenge lies in the persistence of belief in false information, even when debunked. Recent research suggests that individuals can be swayed by AI-generated content even when explicitly informed of its artificial origin. A study of pro-Russian AI-generated videos about the war in Ukraine found that comments acknowledging the videos’ AI creation received significantly less engagement than those treating them as genuine. This highlights a concerning trend: awareness of manipulation doesn’t necessarily negate its impact.

Farid points out that while some individuals will remain impervious to factual correction, a “vast majority” of people desire access to the truth. The question, then, is whether technological solutions can effectively reach and inform that majority.

While Microsoft is actively promoting these standards, their adoption isn’t guaranteed. Google has already begun adding watermarks to content generated by its AI tools, a step Farid considers helpful. Some platforms are also experimenting with C2PA. However, the full suite of changes proposed by Microsoft may face resistance if they are perceived to threaten the business models of AI companies or social media platforms. The incentive structure for these companies may not align with prioritizing content authentication, particularly if it impacts user engagement or advertising revenue.

The current landscape of AI-enabled deception is complex. Interactive deepfakes and hyperrealistic models are becoming increasingly accessible, making it easier to create and disseminate convincing but false content. Microsoft’s initiative represents a proactive attempt to address this challenge, but its success will depend on widespread industry adoption and a continued commitment to transparency and authenticity in the digital realm. The blueprint offers a technical path forward, but the ultimate outcome will hinge on a broader societal and economic willingness to prioritize truth over engagement.
