Paper title
Fast optimization of common basis for matrix set through Common Singular Value Decomposition
Paper authors
Paper abstract
SVD (singular value decomposition) is one of the basic tools of machine learning, allowing one to optimize a basis for a given matrix. However, sometimes we have a set of matrices $\{A_k\}_k$ instead, and would like to optimize a single common basis for them: find orthogonal matrices $U$, $V$ such that the set of matrices $\{U^T A_k V\}$ is somehow simpler. For example, DCT-II is an orthonormal basis of functions commonly used in image/video compression; as discussed here, this kind of basis can be quickly and automatically optimized for a given dataset. While gradient descent optimization, also discussed here, might be computationally costly, CSVD (common SVD) is proposed: a fast general approach based on SVD. Specifically, we choose $U$ as built of eigenvectors of $\sum_k (w_k)^q (A_k A_k^T)^p$ and $V$ of eigenvectors of $\sum_k (w_k)^q (A_k^T A_k)^p$, where $w_k$ are their weights and $p,q>0$ are some chosen powers, e.g. $1/2$, optionally with normalization, e.g. $A \to A - rc^T$ where $r_i=\sum_j A_{ij}$, $c_j=\sum_i A_{ij}$.
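A minimal NumPy sketch of the construction described in the abstract, assuming uniform weights $w_k=1$ and the suggested powers $p=q=1/2$ as defaults; the function name `csvd_basis` and the matrix-power helper are illustrative choices, and the optional normalization step is omitted:

```python
import numpy as np

def csvd_basis(As, ws=None, p=0.5, q=0.5):
    """Common orthogonal bases U, V for a set of matrices {A_k}:
    U from eigenvectors of sum_k w_k^q (A_k A_k^T)^p,
    V from eigenvectors of sum_k w_k^q (A_k^T A_k)^p."""
    if ws is None:
        ws = np.ones(len(As))  # assumed uniform weights

    def sym_power(M, power):
        # fractional power of a symmetric PSD matrix via eigendecomposition
        vals, vecs = np.linalg.eigh(M)
        vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
        return (vecs * vals**power) @ vecs.T

    SU = sum(w**q * sym_power(A @ A.T, p) for w, A in zip(ws, As))
    SV = sum(w**q * sym_power(A.T @ A, p) for w, A in zip(ws, As))

    # eigh sorts eigenvalues ascending; reverse so dominant directions come first
    _, U = np.linalg.eigh(SU)
    _, V = np.linalg.eigh(SV)
    return U[:, ::-1], V[:, ::-1]

# Usage: project each matrix onto the common basis
rng = np.random.default_rng(0)
As = [rng.standard_normal((8, 6)) for _ in range(5)]
U, V = csvd_basis(As)
Bs = [U.T @ A @ V for A in As]  # the transformed set, intended to be "simpler"
```

Note that for a single matrix with $p=1$ this reduces to the eigendecompositions of $AA^T$ and $A^TA$, i.e. the ordinary SVD bases, which is why the approach is a natural multi-matrix generalization.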