ForensicHub Documentation
  • Basic Information

    • Introduction
    • Framework Design
  • Quick Start

    • Installation
    • Component Registration
    • Yaml Configuration
    • Data Preparation & JSON Generation
    • Running Training & Evaluation
  • Model

    • Model Summary
  • Metric

    • Evaluation Metrics
  • Rank

    • Rank

Evaluation Metrics

ForensicHub currently divides its evaluation metrics into two categories: image-level and pixel-level. The available metrics are:

  • Image-level: ImageAP, ImageMCC, ImageTNR, ImageTPR, ImageAUC, ImageAccuracy, ImageF1
  • Pixel-level: PixelAUC, PixelAccuracy, PixelF1, PixelIOU
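Image-level metrics score one prediction per image (real vs. tampered), while pixel-level metrics compare a predicted tampering mask against the ground-truth mask pixel by pixel. As a rough sketch of what these metrics compute, here are standalone NumPy re-implementations of the standard definitions behind `ImageAccuracy`, `PixelF1`, and `PixelIOU`. These are illustrative only and are not ForensicHub's actual `Metric` classes:

```python
import numpy as np

def image_accuracy(pred_labels: np.ndarray, gt_labels: np.ndarray) -> float:
    """Fraction of images whose binary real/fake label is predicted correctly."""
    return float(np.mean(pred_labels == gt_labels))

def pixel_f1(pred: np.ndarray, gt: np.ndarray) -> float:
    """F1 over binary masks: harmonic mean of pixel precision and recall."""
    tp = np.sum((pred == 1) & (gt == 1))  # tampered pixels found
    fp = np.sum((pred == 1) & (gt == 0))  # clean pixels flagged
    fn = np.sum((pred == 0) & (gt == 1))  # tampered pixels missed
    denom = 2 * tp + fp + fn
    return float(2 * tp / denom) if denom else 0.0

def pixel_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of the predicted and ground-truth tampered regions."""
    inter = np.sum((pred == 1) & (gt == 1))
    union = np.sum((pred == 1) | (gt == 1))
    return float(inter / union) if union else 0.0

# Toy 2x3 masks (1 = tampered pixel, 0 = authentic pixel)
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])

print(pixel_f1(pred, gt))   # 2 TP, 1 FP, 1 FN -> 4/6 ≈ 0.667
print(pixel_iou(pred, gt))  # intersection 2, union 4 -> 0.5
```

Threshold-free metrics such as `ImageAUC` and `PixelAUC` operate on raw prediction scores rather than binarized outputs, so they follow a different (ranking-based) computation not shown here.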
Last Updated: 6/22/25, 10:25 AM
Contributors: Sunnyhaze