
๐Ÿ† FIDL Leaderboard ๐Ÿ†

Unified Ranking of Model Generalization across All Domains


This leaderboard follows a simplified version of the IFF-Protocol proposed in ForensicHub: models are trained only on 📂OpenMMSec, a dataset designed in the spirit of the IFF-Protocol and jointly released by ForensicHub and Ant Group. OpenMMSec contains real images, tampered images sampled from public datasets, and privately forged images. Details about the dataset composition can be found at the end of this page.

You can download OpenMMSec via Baidu Netdisk (百度网盘) or Google Drive.

For ranking, we select the most challenging datasets from each domain and evaluate WITHOUT fine-tuning to emphasize cross-domain generalization. The AUC scores within each domain are averaged, and the overall ranking is based on the mean of these four domain averages (a small sketch of this computation follows the dataset list below).

The datasets used for evaluation are listed below:

  • Deepfake Detection: FF++, DF40
  • Image Manipulation Detection and Localization: IMD2020, Autosplice
  • AI-generated Image Detection: DiffusionForensics, Chameleon
  • Document Tamper Detection and Localization: Doctamper, T-SROIE
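The scoring procedure can be summarized in a short sketch. This is only an illustration: `iter_dataset` and `model.predict_score` are hypothetical placeholders rather than ForensicHub APIs, and image-level AUC is computed here with `sklearn.metrics.roc_auc_score`.

```python
# Minimal sketch of the ranking protocol (illustration only): a model trained
# solely on OpenMMSec is evaluated WITHOUT fine-tuning on each benchmark
# dataset, AUC is averaged within each domain, and the four domain means are
# averaged for the overall score. `iter_dataset` and `model.predict_score`
# are hypothetical placeholders, not ForensicHub APIs.
from statistics import mean
from sklearn.metrics import roc_auc_score

DOMAINS = {
    "deepfake": ["FF++c40", "DF40_CollabDiff", "DF40_deepfacelab", "DF40_heygen"],
    "imdl": ["IMD2020", "Autosplice"],
    "aigc": ["DiffusionForensics", "Chameleon"],
    "doc": ["Doctamper", "T-SROIE"],
}

def rank_score(model):
    domain_avgs = {}
    for domain, datasets in DOMAINS.items():
        aucs = []
        for name in datasets:
            labels, preds = [], []
            for image, label in iter_dataset(name):       # hypothetical loader
                preds.append(model.predict_score(image))  # predicted forgery score
                labels.append(label)                      # 0 = real, 1 = forged
            aucs.append(roc_auc_score(labels, preds))
        domain_avgs[domain] = mean(aucs)                  # per-domain average AUC
    return domain_avgs, mean(domain_avgs.values())        # overall leaderboard Avg
```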

๐Ÿ† RankModelDeepfake ๐Ÿ–ผ๏ธIMDL ๐Ÿ“AIGC ๐Ÿค–Doc ๐Ÿ“„Avg โญ
๐Ÿฅ‡ 1Effort0.6140.5870.4100.7040.579
๐Ÿฅˆ 2UnivFD0.5340.4860.4630.6990.545
๐Ÿฅ‰ 3ConvNeXT0.6620.7040.3370.4660.542
4Mesorch0.5410.5620.4600.5670.532
5IML-ViT0.5810.5620.3250.6120.520
6Segformer-b30.5960.5670.3420.3340.460
...

More hyperparameter settings can be found in the ForensicHub repository. The detailed per-dataset performance of each model is listed below:


[
    {
        "model": "Effort",
        "deepfake": {"DF40_CollabDiff": 0.7686, "DF40_deepfacelab": 0.4292, "DF40_heygen": 0.7061, "FF++c40": 0.5506},
        "imdl": {"IMD2020": 0.5704, "Autosplice": 0.6035},
        "aigc": {"Chameleon": 0.4898, "DiffusionForensics": 0.3304},
        "doc": {"Doctamper": 0.4745, "T-SROIE": 0.9326}
    },
    {
        "model": "Segformer-b3",
        "deepfake": {"DF40_CollabDiff": 0.7953, "DF40_deepfacelab": 0.5563, "DF40_heygen": 0.5496, "FF++c40": 0.4813},
        "imdl": {"IMD2020": 0.5185, "Autosplice": 0.6157},
        "aigc": {"Chameleon": 0.4344, "DiffusionForensics": 0.2502},
        "doc": {"Doctamper": 0.4049, "T-SROIE": 0.2638}
    },
    {
        "model": "ConvNeXT",
        "deepfake": {"DF40_CollabDiff": 0.9572, "DF40_deepfacelab": 0.6103, "DF40_heygen": 0.5281, "FF++c40": 0.5512},
        "imdl": {"IMD2020": 0.8138, "Autosplice": 0.5945},
        "aigc": {"Chameleon": 0.3944, "DiffusionForensics": 0.2794},
        "doc": {"Doctamper": 0.4977, "T-SROIE": 0.4343}
    },
    {
        "model": "UnivFD",
        "deepfake": {"DF40_CollabDiff": 0.7458, "DF40_deepfacelab": 0.3962, "FF++c40": 0.4610},
        "imdl": {"IMD2020": 0.4887, "Autosplice": 0.4831},
        "aigc": {"Chameleon": 0.5727, "DiffusionForensics": 0.3537},
        "doc": {"Doctamper": 0.4834, "T-SROIE": 0.9136}
    }
]
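As a consistency check, the domain columns and the Avg column of the leaderboard table can be recomputed from the JSON above by averaging the per-dataset AUCs. A minimal sketch, assuming the JSON has been saved locally as `results.json` (a filename chosen here for illustration):

```python
# Recompute each model's leaderboard row from the per-dataset AUCs above:
# average within each domain, then average the four domain means.
import json
from statistics import mean

DOMAINS = ["deepfake", "imdl", "aigc", "doc"]

with open("results.json") as f:          # the JSON block above, saved locally
    results = json.load(f)

for entry in results:
    domain_avgs = {d: mean(entry[d].values()) for d in DOMAINS}
    overall = mean(domain_avgs.values())
    row = " ".join(f"{d}={domain_avgs[d]:.3f}" for d in DOMAINS)
    print(f'{entry["model"]}: {row} avg={overall:.3f}')
# For Effort, the domain averages come out to roughly 0.614, 0.587, 0.410 and
# 0.704 with an overall avg of 0.579, matching its row in the leaderboard table.
```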


OpenMMSec is composed of:

  • Real Images: COCO, Object365
  • Deepfake: two forgery types from DeepFakeFace and SFHQ
  • IMDL: small-object inpainting on COCO and Object365
  • AIGC: Community Forensics
  • Document: OSTF and selected images from the 2024 Tianchi competition
Last Updated: 11/5/25, 8:04 AM
Contributors: Bo Du, dddb11