Paper Title
Learning Maximum Margin Channel Decoders
Paper Authors
Paper Abstract
The problem of learning a channel decoder is considered for two channel models. The first model is an additive noise channel whose noise distribution is unknown and nonparametric. The learner is provided with a fixed codebook and a dataset comprising independent samples of the noise, and is required to select a precision matrix for a nearest neighbor decoder in terms of the Mahalanobis distance. The second model is a non-linear channel with additive white Gaussian noise and an unknown channel transformation. The learner is provided with a fixed codebook and a dataset comprising independent input-output samples of the channel, and is required to select a matrix for a nearest neighbor decoder with a linear kernel. For both models, the objective of maximizing the margin of the decoder is addressed. Accordingly, for each channel model, a regularized loss minimization problem with a codebook-related regularization term and a hinge-like loss function is developed, inspired by the support vector machine paradigm for classification problems. Expected generalization error bounds for the error probability loss function are provided for both models, under an optimal choice of the regularization parameter. For the additive noise channel, theoretical guidance for choosing the training signal-to-noise ratio is proposed based on this bound. In addition, for the non-linear channel, a high-probability uniform generalization error bound is provided for the hypothesis class. For each channel, a stochastic sub-gradient descent algorithm for solving the regularized loss minimization problem is proposed, and an optimization error bound is stated. The performance of the proposed algorithms is demonstrated through several examples.
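To make the decoding rule concrete, below is a minimal sketch of the nearest neighbor decoder under the Mahalanobis distance described in the abstract: decode a received word y to argmin_i (y - x_i)^T P (y - x_i) over codewords x_i, for a positive semi-definite precision matrix P. The function name mahalanobis_nn_decode and the NumPy interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mahalanobis_nn_decode(y, codebook, P):
    """Decode a received word y to the nearest codeword under the
    Mahalanobis distance d_P(y, x) = (y - x)^T P (y - x).

    codebook: (m, d) array of codewords.
    P: (d, d) positive semi-definite precision matrix chosen by the learner.
    """
    diffs = codebook - y  # (m, d) residuals x_i - y; the quadratic form is sign-invariant
    dists = np.einsum('md,de,me->m', diffs, P, diffs)  # all m Mahalanobis distances at once
    return int(np.argmin(dists))  # index of the decoded codeword
```

For P equal to the identity, this reduces to ordinary Euclidean nearest neighbor decoding; learning P is what adapts the decoder to the unknown noise distribution.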
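The abstract states that a stochastic sub-gradient descent algorithm solves the regularized loss minimization problem, but does not spell out the loss or the regularizer. The sketch below is one plausible instantiation for the additive noise channel, assuming: a hinge-like loss max(0, 1 - margin) on the Mahalanobis-distance margin between each competing codeword and the transmitted one, a squared Frobenius-norm regularizer standing in for the paper's codebook-related term, and a projection onto the positive semi-definite cone. All function names and hyperparameters (lam, lr, epochs) are hypothetical.

```python
import numpy as np

def project_psd(P):
    # Clip negative eigenvalues so P remains positive semi-definite
    # and therefore induces a valid Mahalanobis distance.
    w, V = np.linalg.eigh((P + P.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def sgd_train_precision(codebook, noise_samples, lam=0.1, lr=0.01, epochs=10, rng=None):
    """Stochastic sub-gradient descent on a hinge-like regularized loss.

    codebook: (m, d) array of codewords.
    noise_samples: (n, d) array of i.i.d. noise samples (assumed loss/regularizer
    forms are stand-ins; the paper's exact objective may differ).
    """
    rng = np.random.default_rng(rng)
    m, d = codebook.shape
    P = np.eye(d)
    for _ in range(epochs):
        for z in noise_samples[rng.permutation(len(noise_samples))]:
            j = rng.integers(m)      # index of the transmitted codeword
            y = codebook[j] + z      # simulated received word
            g = 2 * lam * P          # sub-gradient of the Frobenius regularizer lam * ||P||_F^2
            ej = y - codebook[j]
            for i in range(m):
                if i == j:
                    continue
                ei = y - codebook[i]
                # Margin: distance to competitor minus distance to the true codeword.
                margin = ei @ P @ ei - ej @ P @ ej
                if margin < 1.0:     # hinge max(0, 1 - margin) is active
                    g -= np.outer(ei, ei) - np.outer(ej, ej)
            P = project_psd(P - lr * g)  # sub-gradient step, then PSD projection
    return P
```

The projection after every step is a common safeguard in metric learning; it guarantees that the learned matrix can always be plugged into the decoder sketched above.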