Paper Title
Transformation Coding: Simple Objectives for Equivariant Representations
Paper Authors
Abstract
We present a simple non-generative approach to deep representation learning that seeks equivariant deep embeddings through simple objectives. In contrast to existing equivariant networks, our transformation coding approach does not constrain the choice of feed-forward layers or the architecture, and allows for an unknown group action on the input space. We introduce several such transformation coding objectives for different Lie groups, such as the Euclidean, orthogonal, and unitary groups. When using product groups, the representation is decomposed and disentangled. We show that the presence of additional information on different transformations improves disentanglement in transformation coding. We evaluate the representations learnt by transformation coding both qualitatively and quantitatively on downstream tasks, including reinforcement learning.
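To make the idea of an equivariance objective concrete, the following is a minimal sketch, not the authors' actual method. It assumes a toy linear encoder, observation pairs (x, g·x) related by a known 2D rotation parameter, and a penalty for the mismatch between encoding the transformed input and rotating the encoding; all names (`encode`, `equivariance_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoder: a fixed linear map from 4-D inputs to a
# 2-D latent space (in practice the encoder would be a learned network).
W = rng.standard_normal((2, 4))

def encode(x):
    return W @ x

def latent_rotation(theta):
    """Representation rho(g) of a planar rotation acting on the latent space."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def equivariance_loss(x, g_x, theta):
    """Penalize || rho(g) f(x) - f(g x) ||^2, i.e. the failure of the
    encoder f to commute with the group action."""
    z, z_g = encode(x), encode(g_x)
    return float(np.sum((latent_rotation(theta) @ z - z_g) ** 2))
```

Minimizing such a loss over observation pairs pushes the embedding toward equivariance without constraining the encoder architecture, which is the sense in which the abstract describes the objectives as "simple".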