Paper Title

Hybrid Model-based / Data-driven Graph Transform for Image Coding

Paper Authors

Saghar Bagheri, Tam Thuc Do, Gene Cheung, Antonio Ortega

Abstract

Transform coding to sparsify signal representations remains crucial in an image compression pipeline. While the Karhunen-Loève transform (KLT) computed from an empirical covariance matrix $\bar{C}$ is theoretically optimal for a stationary process, in practice, collecting sufficient statistics from a non-stationary image to reliably estimate $\bar{C}$ can be difficult. In this paper, to encode an intra-prediction residual block, we pursue a hybrid model-based / data-driven approach: the first $K$ eigenvectors of a transform matrix are derived from a statistical model, e.g., the asymmetric discrete sine transform (ADST), for stability, while the remaining $N-K$ are computed from $\bar{C}$ for performance. The transform computation is posed as a graph learning problem, where we seek a graph Laplacian matrix minimizing a graphical lasso objective inside a convex cone sharing the first $K$ eigenvectors in a Hilbert space of real symmetric matrices. We efficiently solve the problem via augmented Lagrangian relaxation and proximal gradient (PG). Using WebP as a baseline image codec, experimental results show that our hybrid graph transform achieved better energy compaction than the default discrete cosine transform (DCT) and better stability than the KLT.
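To make the hybrid idea concrete, below is a minimal NumPy sketch under stated assumptions: it fixes the first $K$ basis vectors to an ADST (DST-VII) basis and takes the remaining $N-K$ as eigenvectors of the empirical covariance restricted to the orthogonal complement. This is only an illustration of the eigenvector split described in the abstract, not the paper's actual graph-learning optimization (graphical lasso objective solved via augmented Lagrangian relaxation and proximal gradient); the names adst_basis and hybrid_transform are hypothetical.

```python
# A minimal sketch, assuming N x N blocks and a DST-VII/ADST convention from the
# video-coding literature. NOT the paper's graph-learning formulation; it only
# illustrates the hybrid split: fix the first K basis vectors from a model-based
# transform, fill the remaining N-K from the empirical covariance C_bar
# restricted to the orthogonal complement. Function names are illustrative.
import numpy as np

def adst_basis(N):
    # Orthonormal ADST/DST-VII basis, rows are basis vectors:
    # S[j, i] = 2/sqrt(2N+1) * sin((2j-1) * i * pi / (2N+1)), i, j = 1..N
    i = np.arange(1, N + 1)                      # sample index (columns)
    j = np.arange(1, N + 1).reshape(-1, 1)       # frequency index (rows)
    return 2.0 / np.sqrt(2 * N + 1) * np.sin((2 * j - 1) * i * np.pi / (2 * N + 1))

def hybrid_transform(C_bar, K):
    # First K rows: model-based (ADST) for stability.
    # Remaining N-K rows: eigenvectors of C_bar projected onto the orthogonal
    # complement of the ADST rows, ordered by decreasing variance, for performance.
    N = C_bar.shape[0]
    U_model = adst_basis(N)[:K]                  # K x N, orthonormal rows
    _, _, Vt = np.linalg.svd(U_model)            # last N-K rows of Vt span the complement
    Q = Vt[K:].T                                 # N x (N-K), orthonormal columns
    C_sub = Q.T @ C_bar @ Q                      # covariance restricted to complement
    _, V = np.linalg.eigh(C_sub)                 # eigenvalues in ascending order
    U_data = (Q @ V[:, ::-1]).T                  # (N-K) x N, largest variance first
    return np.vstack([U_model, U_data])          # N x N hybrid transform

# Toy usage: random data standing in for intra-prediction residual statistics.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
C_bar = X.T @ X / X.shape[0]                     # empirical covariance, 8 x 8
T = hybrid_transform(C_bar, K=3)
print(np.allclose(T @ T.T, np.eye(8)))           # True: hybrid transform is orthonormal
```

Because the ADST rows are orthonormal and the data-driven rows are built inside their orthogonal complement, the stacked matrix stays orthonormal, which is what the final allclose check verifies.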
