Moiré Video Authentication: A Physical Signature Against AI Video Generation

Yuan Qing1*, Kunyu Zheng1*, Lingxiao Li1*, Boqing Gong1, Chang Xiao1

*Equal contribution

1Boston University

Abstract

Recent advances in video generation have made AI-synthesized content increasingly difficult to distinguish from real footage. We propose a physics-based authentication signature that real cameras produce naturally, but that generative models cannot faithfully reproduce. Our approach exploits the Moiré effect: interference fringes formed when a camera views a compact two-layer grating structure. We derive the Moiré motion invariant, showing that fringe phase and grating image displacement are linearly coupled by optical geometry, independent of viewing distance and grating structure. A verifier extracts both signals from video and tests their correlation. We validate the invariant on both real-captured and AI-generated videos from multiple state-of-the-art generators, and find significantly different correlation signatures between real and generated content.

Demo Video

Introduction

AI video generation has reached a level where synthetic content can appear highly realistic, making reliable authenticity verification increasingly important. Existing approaches often rely on post-hoc artifact detection or digital metadata, both of which can be fragile as generation models improve or as media is re-encoded and redistributed.

This project introduces a physics-grounded alternative: a passive Moiré-based signature that naturally emerges during real optical capture and follows deterministic geometric constraints over time. By measuring the consistency between fringe-phase variation and image displacement, we obtain a verifiable signal that helps distinguish real-captured videos from AI-generated ones.
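The consistency test described above can be written compactly. The notation below is ours, chosen for illustration (the paper's exact symbols may differ): \(\phi(t)\) is the unwrapped fringe phase, \(u(t)\) the image-plane displacement of the grating assembly, and \(\kappa\) a slope fixed by the optical geometry.

```latex
% Linear coupling (the Moiré motion invariant, up to our notation):
\phi(t) \;=\; \kappa\, u(t) + \phi_0
% Verification statistic: Pearson correlation over the video,
% compared against a decision threshold \tau:
r \;=\; \frac{\operatorname{cov}(\phi, u)}{\sigma_\phi\, \sigma_u},
\qquad |r| \ge \tau \;\Rightarrow\; \text{physically consistent capture}
```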

Introduction to Moiré-based video authentication.

Method Overview

Our method introduces a passive, physics-grounded signature for video authenticity. A compact two-layer grating assembly is placed in the scene. Under normal camera or object motion, Moiré fringes shift according to deterministic optical laws.
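The deterministic coupling can be seen in a toy 1-D simulation. The sketch below assumes sinusoidal grating transmittances, multiplicative superposition, and illustrative spatial frequencies; none of these parameters come from the paper's actual build. Two gratings at nearby frequencies f1 and f2 produce a moiré beat at f2 − f1, and shifting one grating by d shifts the beat phase by 2·pi·f2·d, so the fringe moves f2/(f2 − f1) times farther than the grating, a linear, geometry-fixed relation.

```python
import numpy as np

# Illustrative parameters (not the paper's): two gratings with nearby
# spatial frequencies; their product carries a moiré beat at f2 - f1.
f1, f2 = 10.0, 11.0                       # cycles per unit length
f_beat = f2 - f1                          # moiré fringe frequency
x = np.linspace(0.0, 1.0, 4000, endpoint=False)

def moire_phase(d):
    """Fringe phase of the superposed pattern when layer 2 shifts by d."""
    t1 = 0.5 * (1.0 + np.cos(2 * np.pi * f1 * x))
    t2 = 0.5 * (1.0 + np.cos(2 * np.pi * f2 * (x - d)))
    # Project the combined pattern onto the beat frequency to read
    # off the fringe phase of the low-frequency moiré component.
    z = np.sum(t1 * t2 * np.exp(-2j * np.pi * f_beat * x))
    return np.angle(z)

shifts = np.linspace(0.0, 0.01, 5)        # small displacements of layer 2
phases = np.unwrap([moire_phase(d) for d in shifts])
slope = np.polyfit(shifts, phases, 1)[0]  # ~ -2*pi*f2: linear coupling
```

The fitted slope matches the geometric prediction to numerical precision, illustrating why the fringe acts as a displacement amplifier that a real camera cannot help but record.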

Given an input video, the verifier: (1) extracts the Moiré pattern and maps it into a canonical frame, (2) tracks fringe phase changes over time, and (3) computes correlation between fringe phase and image-plane displacement of the grating assembly. High correlation indicates physically consistent capture by a real camera; low correlation suggests synthetic generation.
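Step (3) reduces to a simple correlation test. The sketch below is a minimal stand-in for the verifier's final stage, assuming the phase and displacement tracks have already been extracted; the use of Pearson correlation and the threshold value `r_min` are our illustrative choices, not necessarily the paper's.

```python
import numpy as np

def verify(phase, displacement, r_min=0.9):
    """Toy verifier stage: test linear coupling between the unwrapped
    fringe phase and the grating's image-plane displacement.

    Returns (r, decision): the Pearson correlation and whether |r|
    clears the (illustrative) acceptance threshold r_min.
    """
    phase = np.unwrap(np.asarray(phase, dtype=float))
    displacement = np.asarray(displacement, dtype=float)
    r = float(np.corrcoef(phase, displacement)[0, 1])
    return r, abs(r) >= r_min
```

On a real capture the two tracks are linearly coupled and |r| sits near 1; on a generated video the fringe motion is decoupled from the grating's displacement and |r| falls well below the threshold.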

Method pipeline.

Main Contributions

  • A novel Moiré-based video authentication framework with a derived Moiré motion invariant.
  • A practical proof-of-concept built with off-the-shelf materials (lenticular sheet, printed patterns, and ArUco markers).
  • Threat-model analysis and empirical validation against multiple state-of-the-art AI video generators.

Why It Works

Modern video generators are optimized to synthesize plausible appearance, but they do not model the optical geometry that dictates how Moiré fringes must move from frame to frame. The proposed correlation invariant measures exactly this physical consistency, so the gap shows up directly in the verifier's statistic.

BibTeX