Paper Title
Visual Mechanisms Inspired Efficient Transformers for Image and Video Quality Assessment
Paper Authors
Paper Abstract
Visual (image, video) quality assessment can be modelled by visual features in different domains, e.g., the spatial, frequency, and temporal domains. Perceptual mechanisms in the human visual system (HVS) play a crucial role in the generation of quality perception. This paper proposes a general framework for no-reference visual quality assessment using efficient windowed transformer architectures. A lightweight module for multi-stage channel attention is integrated into the Swin (shifted-window) Transformer. Such a module can represent appropriate perceptual mechanisms in image quality assessment (IQA) to build an accurate IQA model. Meanwhile, representative features for image quality perception in the spatial and frequency domains can also be derived from the IQA model; these are then fed into another windowed transformer architecture for video quality assessment (VQA). The VQA model efficiently reuses attention information across local windows to tackle the expensive time and memory complexity of the original transformer. Experimental results on both large-scale IQA and VQA databases demonstrate that the proposed quality assessment models outperform other state-of-the-art models by large margins. The complete source code will be published on GitHub.
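The abstract does not specify the exact form of the lightweight channel-attention module. A common design for this kind of module is squeeze-and-excitation-style gating (global pooling per channel, a bottleneck transform, and sigmoid gates that rescale each channel). The sketch below illustrates that general pattern in plain Python; the function and argument names are hypothetical and the weights would normally be learned, so this is an assumption about the module's shape, not the paper's implementation:

```python
import math

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch,
    not the paper's exact module).

    feat: list of C channels, each an H x W grid (list of lists) of floats.
    w1:   R x C weight matrix squeezing C channels to R bottleneck units.
    w2:   C x R weight matrix expanding back to C per-channel gates.
    Returns the features with each channel rescaled by its gate in (0, 1).
    """
    # Squeeze: global average pool each channel to a single scalar.
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in feat]
    # Excitation: bottleneck transform with ReLU, then sigmoid gating.
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Rescale: multiply every value in a channel by that channel's gate.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feat, gates)]
```

In the setting the abstract describes, such a module would be applied to the feature maps of several Swin Transformer stages ("multi-stage"), letting the model emphasize perceptually important channels at each scale.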