BrightRate: Quality Assessment for User-Generated HDR Videos

Anonymous Authors
† Institute-1
‡ Institute-2
* Project Lead
Overall Figure

Abstract

High Dynamic Range (HDR) videos offer superior luminance and color fidelity compared to Standard Dynamic Range (SDR) content. The rapid growth of User-Generated Content (UGC) on platforms such as YouTube, Instagram, and TikTok has significantly increased the volume of streamed and shared UGC videos. This newer category of video poses challenges for No-Reference (NR) Video Quality Assessment (VQA) models specialized for HDR UGC, due to varied distortions arising from diverse capture, editing, and processing conditions. To address this issue, we introduce BrightVQ, a sizeable new psychometric data resource. It is the first large-scale subjective video quality database dedicated to quality modeling of HDR UGC videos. BrightVQ comprises 2,100 videos, on which we collected 73,794 perceptual quality ratings. Using this dataset, we also developed BrightRate, a novel video quality prediction model designed to capture UGC-specific distortions coexisting with HDR-specific artifacts. Extensive experimental results demonstrate that BrightRate achieves state-of-the-art performance on both the new BrightVQ data resource and on other existing HDR databases.

Distorted Frames in BrightVQ

We show sample frames from videos in our BrightVQ dataset, together with BrightRate's predicted scores and the corresponding ground-truth MOS.


How to Access the Dataset

Method 1: Visit the GitHub Repository

Access the metadata files from our official GitHub repository. See the supplementary material for detailed instructions.

Method 2: Download Video IDs

Download the video ID list from: BrightVQ.txt

Method 3: Download a Single Video

Use the following AWS CLI command to download a specific video, replacing VIDEO_ID with an ID from BrightVQ.txt:

aws s3 cp s3://ugchdrmturk/videos/VIDEO_ID.mp4 ./BrightVQ_Videos/

Method 4: Bulk Download All Videos

To download all videos in batch mode, run:

while read -r video; do
  aws s3 cp "s3://ugchdrmturk/videos/${video}.mp4" ./BrightVQ_Videos/
done < BrightVQ.txt

BrightRate

We introduce BrightRate, a novel No-Reference (NR) Video Quality Assessment (VQA) model designed to capture UGC-specific distortions and HDR-specific artifacts.

Main Modules for BrightRate:

  • Extract UGC-specific features using a pretrained CONTRIQUE model
  • Extract semantic features using a pretrained CLIP-based encoder
  • Extract HDR-specific features via a piecewise non-linear luminance transform followed by MSCN (mean-subtracted contrast-normalized) coefficients
  • Regress the concatenated features, along with their temporal differences, to MOS using a Support Vector Regressor (SVR)
BrightRate Model
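To make the HDR feature step above concrete, here is a minimal sketch of MSCN computation applied after a piecewise luminance expansion. The `piecewise_expand` split point and the Gaussian window size are illustrative assumptions, not the values used in BrightRate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(luma, sigma=7/6, C=1e-3):
    """Mean-Subtracted Contrast-Normalized (MSCN) coefficients of a
    luminance map (values assumed normalized to [0, 1])."""
    mu = gaussian_filter(luma, sigma)                    # local mean
    var = gaussian_filter(luma * luma, sigma) - mu * mu  # local variance
    sd = np.sqrt(np.clip(var, 0.0, None))                # local std-dev
    return (luma - mu) / (sd + C)

def piecewise_expand(luma, delta=0.2):
    """Hypothetical piecewise transform emphasizing the darkest and
    brightest luminance ranges, where HDR-specific artifacts tend to
    concentrate. The delta split point is an illustrative choice."""
    shadows = np.clip(luma / delta, 0.0, 1.0)
    highlights = np.clip((luma - (1.0 - delta)) / delta, 0.0, 1.0)
    return shadows, highlights

# Statistics of the MSCN maps of each band would then be pooled
# into a per-frame feature vector.
frame = np.random.default_rng(0).random((64, 64))
shadows, highlights = piecewise_expand(frame)
features = [mscn(band) for band in (shadows, highlights, frame)]
```

MSCN coefficients of natural, undistorted content follow well-characterized statistics, so deviations in their distribution are a standard cue for perceptual distortion.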

Performance Comparison:

We evaluated BrightRate on three publicly available HDR datasets: BrightVQ (ours), LIVE-HDR, and SFV+HDR. We report SROCC (Spearman's rank-order correlation coefficient) between predicted and subjective scores; higher is better.


Dataset COVER DOVER FastVQA HDRChipQA HIDROVQA BrightRate
BrightVQ 0.7609 0.7745 0.8094 0.6781 0.8526 0.8887
LIVE-HDR 0.5022 0.6303 0.5182 0.8250 0.8793 0.8907
SFV+HDR 0.6613 0.6001 0.7130 0.6296 0.7003 0.7328
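SROCC in the table measures how well a model's predicted ranking of videos agrees with the subjective ranking. A minimal computation with SciPy (the MOS and prediction values below are made up for illustration):

```python
from scipy.stats import spearmanr

# Hypothetical MOS values and model predictions for five videos.
mos = [55.2, 71.8, 40.1, 63.5, 80.0]
predicted = [0.52, 0.70, 0.35, 0.66, 0.81]

srocc, p_value = spearmanr(mos, predicted)
print(f"SROCC = {srocc:.4f}")  # 1.0 here, since the two rankings agree exactly
```

Because SROCC depends only on ranks, it is insensitive to any monotonic miscalibration of the predicted scores, which is why it is the standard headline metric for VQA benchmarks.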

Video Samples in BrightVQ

Sample Videos (Portraits)

Dance

Boat

Instruction Video

Singing in Room

Firegrill

Art

Sample Videos (Landscapes)

Nature

Dog

Monument

Graphics

Ornaments

Crowd

BibTeX

BibTex Code Here