Paper Title
Neural Generation of Blocks for Video Coding
Paper Authors
Paper Abstract
Well-trained generative neural networks (GNNs) are very efficient at compressing the visual information of static images into their learned parameters, but they are not as efficient as inter- and intra-prediction for most video content. However, for content entering a frame, such as during panning or zooming out, and for content with curves, irregular shapes, or fine detail, generation by a GNN can give better compression efficiency (a lower rate-distortion cost). This paper proposes encoding content-specific learned parameters of a GNN within a video bitstream at specific times and using the GNN to generate content for specific ranges of blocks and frames. Only those blocks for which generation compresses more efficiently than inter- or intra-prediction are generated. This approach maximizes the usefulness of the information contained in the learned parameters.
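The abstract describes a per-block mode decision: a block is coded by GNN generation only when that gives a better compression efficiency than inter- or intra-prediction. A minimal sketch of such a decision, using the standard Lagrangian rate-distortion cost J = D + λR, might look as follows; the function names, mode labels, and example numbers are illustrative assumptions, not the paper's actual implementation.

```python
def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R (standard form,
    assumed here; the paper does not specify its cost function)."""
    return distortion + lam * rate_bits


def choose_mode(costs: dict[str, tuple[float, float]], lam: float) -> str:
    """Pick the coding mode with the lowest RD cost.

    costs maps a mode name (hypothetical labels 'intra', 'inter', 'gnn')
    to a (distortion, rate_in_bits) pair measured for the block.
    """
    return min(costs, key=lambda mode: rd_cost(*costs[mode], lam))


# Illustrative numbers only: generation wins for a detailed block that has
# just entered the frame, where prediction lacks a good reference.
block_costs = {
    "intra": (120.0, 96.0),   # high rate: fine detail is costly to code
    "inter": (400.0, 8.0),    # high distortion: content just entered the frame
    "gnn":   (60.0, 24.0),    # generated from already-signalled GNN parameters
}
print(choose_mode(block_costs, lam=1.0))  # -> gnn
```

With these example costs the GNN mode has J = 60 + 24 = 84, beating intra (216) and inter (408), so the encoder would signal generation for this block.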