SAFE: Synthetic Audio Forensics Evaluation Challenge

👉 All participants are required to register for the competition by filling out this Google Form.

📊 Overview • 🥇 Detailed Leaderboard • 🏆 Prize • 📜 Paper Submission and Dates • 📝 Tasks • 📈 Data • 🤖 Model Submission • 📂 Create Model Repo • 🔘 Submit • 🆘 Helpful Stuff • 🔍 Evaluation • ⚖️ Rules

📊 Overview

To advance the state of the art in audio forensics, we are launching a funded evaluation challenge at IH&MMSEC2025 to drive innovation in detecting and attributing synthetic and manipulated audio artifacts. This challenge will focus on several critical aspects, including generalizability across diverse audio sources, robustness against evolving synthesis techniques, and computational efficiency to enable real-world applications. The rapid advancements in audio synthesis, fueled by the increasing availability of new generators and techniques, underscore the urgent need for effective solutions to authenticate audio content and combat emerging threats. Sponsored by the ULRI Digital Safety Research Institute, this initiative aims to mobilize the research community to address this pressing issue.

Sign up here to participate and receive updates: Google Form

🥇 Detailed Leaderboard

Public Leaderboard

๐Ÿ† Prize

The most promising solutions may be eligible for research grants to further advance their development. A travel stipend will be available to the highest-performing teams to support attendance at the IH&MMSEC workshop, where they can present their technical approach and results.

All participants are required to register for the competition

📜 Paper Submission and Dates

All papers for this special session undergo the regular review procedure and must be submitted through the workshop paper submission system, following the link given on the home page: https://www.ihmmsec.org. For this special session in particular, authors must select the "COMPETITION TRACK" option on the submission website during submission.

For any questions regarding paper submission, please contact the chairs: acm.ihmmsec25@gmail.com.

๐Ÿ“ Tasks

The competition will consist of three detection tasks. In each task, the objective is to detect whether an audio file contains machine-generated speech. Not all tasks will be open at the same time.

📈 Data

The dataset will consist of human and machine-generated speech audio tracks.

🤖 Model Submission

This is a script-based competition. No data will be released before the competition; a subset of the data may be released after the competition. We will be using the Hugging Face competitions platform.

📂 Create Model Repo

Participants will be required to submit their model, to be evaluated on the dataset, by creating a Hugging Face model repository. Please use the example model repo as a template; a hypothetical sketch of what a submission might contain is shown below.
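The following is a minimal sketch only, not the official interface: the class and method names, the use of soundfile for audio I/O, and the placeholder heuristic are all assumptions for illustration. The example model repo defines the actual contract your submission must follow.

```python
# model.py -- hypothetical layout of a detector packaged for evaluation.
# Every name here is illustrative; mirror the example model repo instead.
import numpy as np
import soundfile as sf  # assumption: soundfile is used for audio I/O


class SyntheticSpeechDetector:
    """Toy detector returning 1 (machine-generated) or 0 (human)."""

    def __init__(self, threshold: float = 0.5):
        # A real submission would load trained weights here.
        self.threshold = threshold

    def score(self, path: str) -> float:
        """Return a probability-like score that the file is synthetic."""
        audio, sample_rate = sf.read(path)
        # Placeholder heuristic so the sketch runs end to end;
        # replace with actual model inference.
        return float(np.clip(np.abs(audio).mean(), 0.0, 1.0))

    def predict(self, path: str) -> int:
        return int(self.score(path) >= self.threshold)
```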

🔘 Submit

Once your model is ready, it's time to submit it through the competition page.
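If you push your repo to the Hub from a script, a generic sketch using huggingface_hub is below; the repo id and local folder are placeholders, and the submission itself is still made through the competition page.

```python
# Hypothetical sketch: publishing a submission repo with huggingface_hub.
from huggingface_hub import create_repo, upload_folder

repo_id = "your-username/safe-submission"  # placeholder repo name
create_repo(repo_id, repo_type="model", private=True, exist_ok=True)

# Upload the local files built from the template (model.py, weights, ...).
upload_folder(repo_id=repo_id, repo_type="model", folder_path="./safe-submission")
```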

🆘 Helpful Stuff

We provide an example model submission repo and a practice competition for troubleshooting.

๐Ÿ” Evaluation

All submissions will be evaluated using balanced accuracy, defined as the average of the true positive rate and the true negative rate.
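For concreteness, here is how that metric can be computed; the labels below are toy values, and scikit-learn's balanced_accuracy_score is one standard implementation.

```python
# Balanced accuracy on toy labels (1 = machine-generated, 0 = human).
from sklearn.metrics import balanced_accuracy_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0]  # illustrative ground truth
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]  # illustrative predictions

# Average of the true positive rate and the true negative rate.
tpr = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / y_true.count(1)
tnr = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred)) / y_true.count(0)
manual = 0.5 * (tpr + tnr)

assert abs(manual - balanced_accuracy_score(y_true, y_pred)) < 1e-9
print(f"balanced accuracy = {manual:.3f}")  # 0.633 on this toy data
```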

The competition page will maintain a public leaderboard and a private leaderboard. The data will be divided by source such that the sources on the public leaderboard are a subset of those on the private leaderboard. The public leaderboard will also show error rates for every source, although the specific source names will be anonymized. For example, the public leaderboard may show scores for 4 sources while the private leaderboard scores 4 additional sources, for 8 sources in total. See the following table as an example.

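An illustrative layout of that split (anonymized source names, structure only, no actual scores):

| Source | Public leaderboard | Private leaderboard |
|--------|--------------------|---------------------|
| Sources 1–4 | ✓ | ✓ |
| Sources 5–8 |  | ✓ |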

โš–๏ธ Rules

To ensure a fair and rigorous evaluation process for the Synthetic Audio Forensics Evaluation (SAFE) Challenge, the following rules must be adhered to by all participants:

  1. Leaderboard:
    • The competition will maintain both a public and a private leaderboard.
    • The public leaderboard will show error rates for each anonymized source.
    • The private leaderboard will be used for the final evaluation and will include additional data that does not overlap with the public leaderboard.
  2. Submission Limits:
    • Participants will be limited to a fixed number of submissions per day.
  3. Confidentiality:
    • Participants agree not to publicly compare their results with those of other participants until those participants' results have been published outside of the IH&MMSEC2025 venue.
    • Participants are free to use and publish their own results independently.
  4. Compliance:
    • Participants must comply with all rules and guidelines provided by the organizers.
    • Failure to comply with the rules may result in disqualification from the competition and exclusion from future evaluations.

By participating in the SAFE challenge, you agree to adhere to these evaluation rules and contribute to the collaborative effort to advance the field of audio forensics.