Paper Title
Tensor Factorization via Transformed Tensor-Tensor Product for Image Alignment
Paper Authors
Paper Abstract
In this paper, we study the problem of aligning a batch of linearly correlated images, where the observed images are deformed by unknown domain transformations and simultaneously corrupted by additive Gaussian noise and sparse noise. By stacking these images as the frontal slices of a third-order tensor, we propose to exploit the low-rankness of the underlying tensor via a tensor factorization method based on the transformed tensor-tensor product, in which the tensor is factorized into the product of two smaller tensors under an arbitrary unitary transformation. The main advantage of the transformed tensor-tensor product is that its computational complexity is lower than that of existing methods based on the transformed tensor nuclear norm. Moreover, the tensor $\ell_p$ $(0<p<1)$ norm is employed to characterize the sparsity of the sparse noise, and the tensor Frobenius norm is adopted to model the additive Gaussian noise. A generalized Gauss-Newton algorithm is designed to solve the resulting model by linearizing the domain transformations, and a proximal Gauss-Seidel algorithm is developed to solve the corresponding subproblem. Furthermore, the convergence of the proximal Gauss-Seidel algorithm is established, and its convergence rate is analyzed based on the Kurdyka-Łojasiewicz property. Extensive numerical experiments on real-world image datasets demonstrate the superior performance of the proposed method compared with several state-of-the-art methods in both accuracy and computational time.
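The transformed tensor-tensor product mentioned in the abstract can be sketched as follows: apply a unitary transform along the third mode, multiply matching frontal slices, then transform back. This is a minimal NumPy illustration of the general construction (the function name, shapes, and choice of transform are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def transformed_tprod(A, B, U):
    """Transformed tensor-tensor product of A (m x r x n3) and B (r x n x n3)
    under a unitary transform U (n3 x n3) applied along the third mode.
    Illustrative sketch; names and conventions are assumptions."""
    # Move to the transform domain: multiply the mode-3 fibers by U.
    Ahat = np.einsum('kt,ijt->ijk', U, A)
    Bhat = np.einsum('kt,ijt->ijk', U, B)
    # Slice-wise matrix products of the frontal slices.
    Chat = np.einsum('ijk,jlk->ilk', Ahat, Bhat)
    # Return to the original domain with the conjugate transpose of U.
    return np.einsum('kt,ijt->ijk', U.conj().T, Chat)

# Example: factorizing a tensor into two smaller tensors of width r
# bounds its transformed tubal rank by r, which is the low-rank
# structure the abstract exploits.
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 2, 3))   # m x r x n3
R = rng.standard_normal((2, 5, 3))   # r x n x n3
U = np.fft.fft(np.eye(3)) / np.sqrt(3)  # unitary DFT recovers the classical t-product
X = transformed_tprod(L.astype(complex), R.astype(complex), U)
```

With `U` equal to the identity, the operation reduces to independent slice-wise matrix products; with the normalized DFT matrix it recovers the classical t-product (circular convolution along the third mode).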