JPEG Trust Watermarking Benchmark

Launch: February 11, 2026  ·  Close: April 15, 2026  ·  Top-3 announced: April 30, 2026

Contact: deepayan.bhowmik@ncl.ac.uk; touradj.ebrahimi@epfl.ch; sabrina.caldwell@unsw.edu.au; frederik.temmermans@vub.be

Description & Goal

This grand challenge aims to assess watermarking performance (e.g., embedding distortion and robustness against attacks) along various evaluation criteria set out by the JPEG Trust Part 3: Media Asset Watermarking initiative. JPEG Trust (ISO/IEC 21617) is an international standardisation effort that provides a framework for establishing trust in media. This framework includes aspects of authenticity, provenance, attribution, intellectual property rights, and integrity of the media assets throughout their life cycle.

Motivation & Background

Digital watermarking, in use for several decades, has been increasingly adopted as a method for embedding information directly into media assets in a way that can be both imperceptible and robust. This technique establishes a link between the metadata and the content, one that is challenging to disrupt without compromising the intended usage of the media asset itself. Since the inception and rapid rise of generative AI, watermarking has increasingly gained popularity, both within the industry and among policymakers, as a solution to signal whether the media asset is AI-generated or AI-manipulated content. Such watermarking is equally beneficial for media assets generated outside the context of AI (e.g., photographs, edited images).

While efforts to develop watermarking technologies are growing, a standard framework is needed to signal the existence of a watermark and to assess its performance. To facilitate globally interoperable media asset authenticity, the JPEG Committee (a joint collaborative team between ISO/IEC JTC1/SC29/WG1 and ITU-T SG21), via the JPEG Trust project, initiated the development of a new international standard, JPEG Trust Part 3: Media Asset Watermarking, with the aim of strengthening the overall media trust ecosystem; it also complements the development of the annotation-based approach in JPEG Trust Part 1: Core foundation. JPEG Trust Part 3 focuses on the watermarking requirements for addressing provenance, authenticity, integrity, copyright, and the identification of assets and stakeholders.

Scope

Participating teams will benchmark and report their algorithm's performance on the provided benchmark dataset against the given evaluation test set (distributed as a Python package/library). The leaderboard will provide a comparison of team rankings.

Noteworthy features of this challenge include emerging evaluation criteria that have not previously been used in the literature. These include JPEG AI, JPEG XS and JPEG XL compression, as well as a new generative AI-based object manipulation/editing pipeline.

Rules for Participation

  • This challenge is open to all (academia and industry). Teams with student participants must include at least one academic member of staff.
  • Participants must register to join the challenge. After registration, they'll receive essential credentials and details about the different steps and actions expected from them.
  • Participants can form a team (maximum of 4) or submit individually.
  • Participants will receive the benchmark suite either as a Python package or as open-source code.
  • Competition data will be shared after completing the form and accepting the license agreement.
  • The first three teams on the leaderboard must submit their code to GitHub. The repository must be PUBLIC, readable, properly documented, and interactive.
  • The first three teams must upload their model files to the provided Google Drive link so that the reproducibility of their code can be checked, and must indicate the link on GitHub.
  • Only submissions that meet the deadlines and milestones in the timeline will be considered for the final leaderboard.
  • If any malicious attempt to influence the ranking is detected, the participant will be disqualified from the competition.
  • Participants agree to abide by the license for the dataset.
  • The organisers will make the final decision based on a transparent process.

Evaluation

To decide the winner of this Challenge, the organising committee will consider the following items:

Watermark Embedding

Algorithms must accept two input parameters for the embedding task: 1) one host image and 2) a 100-bit random binary sequence as the watermark. The output of the algorithm is the watermarked image. Performance will be evaluated using PSNR, wPSNR, SSIM, JND and FID scores.
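
The expected interface can be sketched as below. The function names (`embed`, `psnr`) and the toy embedding scheme are assumptions for illustration only; the actual package API will be defined by the organisers, and wPSNR, SSIM, JND and FID require their own implementations.

```python
import numpy as np

def embed(host: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """Hypothetical embedding interface: host image + 100-bit watermark -> watermarked image.
    Placeholder scheme: adds a tiny pseudo-random pattern keyed by the bits."""
    seed = int("".join(map(str, watermark_bits)), 2) % (2**32)
    pattern = np.random.default_rng(seed).integers(-1, 2, size=host.shape)  # values in {-1, 0, 1}
    return np.clip(host.astype(np.int16) + pattern, 0, 255).astype(np.uint8)

def psnr(original: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; one of the listed embedding-distortion metrics."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

host = np.full((64, 64), 128, dtype=np.uint8)           # flat grey test image
bits = np.random.default_rng(0).integers(0, 2, size=100)  # 100-bit random watermark
wm = embed(host, bits)
print(round(psnr(host, wm), 2))
```

Because the placeholder perturbation is at most ±1 per pixel, the embedding distortion stays above roughly 48 dB PSNR, illustrating the imperceptibility goal the metrics quantify.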

Watermarking Robustness

Robustness will be evaluated using the Bit Error Rate (BER) against a set of attacks. The algorithm must be blind in nature and must accept 1) one watermarked test image and 2) the original watermark. The output is the watermark BER. Robustness will be measured against a set of attacks with the default parameters provided by the organisers' attack source code. The following classes of attacks are considered:

Signal processing attacks

  • Gaussian noise
  • Speckle noise
  • Blurring
  • Brightness Adjustment
  • Sharpness Adjustments
  • Gamma Correction
  • Histogram Equalisation
  • Median Filtering

Geometric attacks

  • Rotation
  • Resizing
  • Scaling
  • Cropping
  • Flipping

Compression attacks

  • JPEG 1
  • JPEG 2000
  • JPEG XS
  • JPEG XL
  • JPEG AI

Generative AI manipulation attacks

  • AI-manipulation pipeline
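
A minimal sketch of the robustness measurement, assuming the extracted bits are already available: the `gaussian_noise_attack` helper stands in for one of the signal-processing attacks above, and `bit_error_rate` shows the BER computation. Both names and the fixed noise parameters are illustrative, not the organisers' actual attack code.

```python
import numpy as np

def gaussian_noise_attack(img: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Illustrative signal-processing attack: additive Gaussian noise, clipped to 8-bit range."""
    noisy = img.astype(np.float64) + np.random.default_rng(0).normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def bit_error_rate(original_bits: np.ndarray, recovered_bits: np.ndarray) -> float:
    """Fraction of watermark bits that differ between the embedded and recovered sequences."""
    return float(np.mean(original_bits != recovered_bits))

original  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
recovered = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])  # 2 of 10 bits flipped by an attack
print(bit_error_rate(original, recovered))  # → 0.2
```

In the challenge the recovered bits would come from the blind extractor run on the attacked watermarked image; a BER of 0 means perfect recovery, 0.5 means the watermark is effectively destroyed.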

Reproducibility

All results must be reproducible using the provided scripts/executables/models, which all participants must submit.

Class-wide performance

Balanced performance across all attacks is essential. The leaderboard and rankings will be published separately for embedding performance and for each attack category. The winner will be decided based on the best performance across all categories through a majority-voting criterion.
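
The exact voting rule will be defined by the organisers; one plausible reading of "majority voting" is sketched below, where the winner is the team ranked first in the most categories. The team names and per-category rankings are invented for illustration.

```python
from collections import Counter

# Hypothetical per-category rankings, best team first (categories mirror the challenge).
rankings = {
    "embedding":         ["teamA", "teamB", "teamC"],
    "signal_processing": ["teamB", "teamA", "teamC"],
    "geometric":         ["teamA", "teamC", "teamB"],
    "compression":       ["teamA", "teamB", "teamC"],
    "generative_ai":     ["teamB", "teamA", "teamC"],
}

# Count how many categories each team tops; the modal winner takes the challenge.
firsts = Counter(order[0] for order in rankings.values())
winner, wins = firsts.most_common(1)[0]
print(winner, wins)  # → teamA 3
```

A rule of this shape rewards balanced performance: a team that dominates a single category but ranks poorly elsewhere cannot win.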

Dataset

The organisers will provide participants with an evaluation dataset (a mix of real and synthetic images) on which to report their algorithm's performance. After the closing date, the top 10% of submissions (or the top 10, whichever is higher) will be tested by the organisers on an unseen dataset. The top three winners will be announced based on these algorithms' performance on the unseen dataset. The unseen dataset will be published on completion of the competition.

Deadlines

Milestone                                                       Date
Launch of the challenge                                         February 11, 2026
E-registration opens                                            February 11, 2026
Release of the dataset                                          After registration
Close of the competition                                        April 15, 2026
Announcement of the top three winning teams                     April 30, 2026
Paper submission (Grand Challenge Paper Submission)             May 13, 2026
Paper acceptance notification                                   June 10, 2026
Camera-ready                                                    July 1, 2026
Author registration                                             July 16, 2026
Presentation by the winning team (1st place only) – ICIP 2026   September 13–17, 2026
Sponsorship: To be confirmed.

Organizers

Dr Deepayan Bhowmik

Newcastle University, UK. deepayan.bhowmik@newcastle.ac.uk

Deepayan Bhowmik is a Senior Lecturer (Associate Professor) in Data Science and the Co-Director of Newcastle University Centre of Research Excellence in Data Science and AI, Newcastle University, UK. His research expertise includes fundamental image/signal processing, embedded imaging on heterogeneous hardware and related applications, e.g., media security, multimodal remote sensing for environmental monitoring, etc. Dr Bhowmik received his PhD in Electronic and Electrical Engineering (focused on media watermarking) from the University of Sheffield, UK, in 2011. He received a prestigious Dorothy Hodgkin postgraduate award and multiple research grants from various UK research councils, the EU, and industry. He is one of the key editors of the JPEG Trust international standard.

Prof Touradj Ebrahimi

EPFL, Switzerland. touradj.ebrahimi@epfl.ch

Touradj Ebrahimi is a professor of image processing at Ecole Polytechnique Fédérale de Lausanne (EPFL) and the current Convener of the JPEG standardisation Committee. He has founded several startups and spinoff companies in the past two decades, including the most recent RayShaper SA, a startup based in Switzerland involved in AI-powered multimedia. His areas of interest include image and video compression, media security, quality of experience in multimedia, and AI-based image and video processing and analysis. Prof. Ebrahimi is a Fellow of the IEEE, SPIE, EURASIP, and AAIA and has been the recipient of several awards and distinctions, including an IEEE Star Innovator Award in Multimedia, an Emmy Award on behalf of JPEG, and the SMPTE Progress medal. He has been active in media security and trust, including technologies for watermarking and copyright protection, integrity verification, privacy protection, and conditional access for more than twenty years, and in deepfakes and synthetic media using generative AI technologies since 2018.

Dr Sabrina Caldwell

University of New South Wales, Australia and Australian National University. sabrina.caldwell@unsw.edu.au

Sabrina Caldwell is Senior Lecturer, Ethics in Technology at the University of New South Wales, Australia. Her area of research interest centers on images, specifically photo credibility. Key to her research is the important distinction between photographs (photos of real people, places and events) and manipulated images or photoart (images created or changed by image manipulation software and/or AI). She works in the area of physiological signal processing using neural networks, deep learning and bio-inspired computing, particularly as it pertains to human recognition of deception and affective reasoning. She is a member of the JPEG Committee, Co-chair of JPEG Trust, and Co-editor of the new international JPEG ISO standard on image credibility under development.

Dr Frederik Temmermans

Vrije Universiteit Brussel & imec, Belgium · frederik.temmermans@vub.be

Frederik Temmermans graduated in 2006 with a Master's in Computer Science. He received a PhD in Engineering in 2014. His research focuses on image processing, interoperable access to image data and media privacy, security, authenticity and integrity. He has been involved in various research projects in the medical, mobile and cultural domains. Frederik is an active member of the JPEG standardization committee (ISO/IEC JTC1/SC29/WG1) where he contributed to several standards such as JPSearch (ISO/IEC 24800), the JPEG Universal Metadata Box Format (ISO/IEC 19566-5) and JPEG Privacy and Security (ISO/IEC 19566-4) and chaired exploration studies on Media Blockchain, Fake Media and NFT. Frederik is also co-founder of the VUB spin-off company Universum Digitalis.

Contact: Please email the organizers above with questions about registration, dataset access, evaluation, or rules.