Paper Title

Color-Perception-Guided Display Power Reduction for Virtual Reality

Paper Authors

Budmonde Duinkharjav, Kenneth Chen, Abhishek Tyagi, Jiayi He, Yuhao Zhu, Qi Sun

Paper Abstract

Battery life is an increasingly urgent challenge for today's untethered VR and AR devices. However, the power efficiency of head-mounted displays is naturally at odds with growing computational requirements driven by better resolution, refresh rate, and dynamic ranges, all of which reduce the sustained usage time of untethered AR/VR devices. For instance, the Oculus Quest 2, under a fully-charged battery, can sustain only 2 to 3 hours of operation time. Prior display power reduction techniques mostly target smartphone displays. Directly applying smartphone display power reduction techniques, however, degrades the visual perception in AR/VR with noticeable artifacts. For instance, the "power-saving mode" on smartphones uniformly lowers the pixel luminance across the display and, as a result, presents an overall darkened visual perception to users if directly applied to VR content. Our key insight is that VR display power reduction must be cognizant of the gaze-contingent nature of high field-of-view VR displays. To that end, we present a gaze-contingent system that, without degrading luminance, minimizes the display power consumption while preserving high visual fidelity when users actively view immersive video sequences. This is enabled by constructing a gaze-contingent color discrimination model through psychophysical studies, and a display power model (with respect to pixel color) through real-device measurements. Critically, due to the careful design decisions made in constructing the two models, our algorithm is cast as a constrained optimization problem with a closed-form solution, which can be implemented as a real-time, image-space shader. We evaluate our system using a series of psychophysical studies and large-scale analyses on natural images. Experiment results show that our system reduces the display power by as much as 24% with little to no perceptual fidelity degradation.
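
To make the optimization the abstract describes more concrete, below is a minimal, illustrative per-pixel sketch: a linear display-power objective is minimized subject to an ellipsoidal color-discrimination constraint, which admits a closed-form solution. The power weights (POWER_W), the discrimination metric (A), the threshold (EPS), and the function power_saving_shift are hypothetical placeholders chosen for demonstration; the paper instead builds a gaze-contingent discrimination model from psychophysical studies and a power model from real-device measurements.

```python
import numpy as np

# Illustrative sketch only. POWER_W, A, and EPS are assumed constants for
# demonstration; they are NOT the paper's measured, gaze-contingent models.

POWER_W = np.array([0.2, 0.3, 0.5])   # hypothetical per-channel power weights (linear RGB)
A = np.diag([1.0, 2.0, 1.5])          # hypothetical discrimination metric (ellipsoid shape)
EPS = 0.02                            # hypothetical color-shift budget (discrimination threshold)

def power_saving_shift(rgb: np.ndarray) -> np.ndarray:
    """Minimize POWER_W . c' subject to (c'-c)^T A (c'-c) <= EPS^2, in closed form."""
    # The minimizer of a linear objective over an ellipsoid lies on the boundary,
    # reached by stepping along the metric-scaled negative power gradient.
    d = np.linalg.solve(A, POWER_W)           # A^{-1} w
    step = EPS * d / np.sqrt(POWER_W @ d)     # scale the step to the ellipsoid boundary
    return np.clip(rgb - step, 0.0, 1.0)      # keep the result inside the displayable gamut

if __name__ == "__main__":
    c = np.array([0.6, 0.5, 0.7])
    c_new = power_saving_shift(c)
    print("original:", c, "power:", POWER_W @ c)
    print("shifted :", c_new, "power:", POWER_W @ c_new)
```

Because the objective is linear and the constraint is a fixed quadratic per pixel, the shift reduces to a single matrix solve and scaling, which is why such a scheme can run as a real-time, image-space shader.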
