High Dynamic Range (HDR) videos offer superior luminance and color fidelity compared to Standard Dynamic Range (SDR) content. The rapid growth of User-Generated Content (UGC) on platforms such as YouTube, Instagram, and TikTok has significantly increased the volume of streamed and shared UGC videos. This newer category of videos poses challenges for No-Reference (NR) Video Quality Assessment (VQA) models specialized for HDR UGC, owing to the varied distortions that arise from diverse capture, editing, and processing conditions. Towards addressing this issue, we introduce BrightVQ, a sizeable new psychometric data resource and the first large-scale subjective video quality database dedicated to the quality modeling of HDR UGC videos. BrightVQ comprises 2,100 videos, on which we collected 73,794 perceptual quality ratings. Using this dataset, we also developed BrightRate, a novel video quality prediction model designed to capture both UGC-specific distortions and HDR-specific artifacts. Extensive experimental results demonstrate that BrightRate achieves state-of-the-art performance on both the new BrightVQ data resource and other existing HDR databases.