Paper Title
Towards a Theoretical Understanding of Word and Relation Representation
Paper Authors
Paper Abstract
Representing words by vectors, or embeddings, enables computational reasoning and is foundational to automating natural language tasks. For example, if word embeddings of similar words contain similar values, word similarity can be readily assessed, whereas judging it from spelling alone is often impossible (e.g. cat/feline), and predetermining and storing similarities between all word pairs is prohibitively time-consuming, memory-intensive and subjective. We focus on word embeddings learned from text corpora and knowledge graphs.

Several well-known algorithms, e.g. word2vec and GloVe, learn word embeddings from text on an unsupervised basis by learning to predict the words that occur around each word. The parameters of such word embeddings are known to reflect word co-occurrence statistics, but how they capture semantic meaning has been unclear. Knowledge graph representation models learn representations both of entities (words, people, places, etc.) and of the relations between them, typically by training a model to predict known facts in a supervised manner. Despite steady improvements in fact prediction accuracy, little is understood of the latent structure that enables this.

The limited understanding of how latent semantic structure is encoded in the geometry of word embeddings and knowledge graph representations leaves unclear any principled means of improving their performance, reliability or interpretability. To address this: 1. we theoretically justify the empirical observation that particular geometric relationships between word embeddings learned by algorithms such as word2vec and GloVe correspond to semantic relations between words; and 2. we extend this correspondence between semantics and geometry to the entities and relations of knowledge graphs, providing a model for the latent structure of knowledge graph representation that is linked to that of word embeddings.
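To make the co-occurrence connection mentioned in the abstract concrete, below is a minimal, self-contained sketch. It is not the algorithms analysed in the thesis: it builds word vectors by factorising positive PMI statistics of a tiny corpus with SVD (a standard stand-in for what word2vec/GloVe learn by prediction), then reads off word similarity with cosine similarity. The toy corpus, window size and embedding dimension are illustrative assumptions.

```python
# Sketch only: word vectors from co-occurrence statistics (PPMI + SVD),
# illustrating that word similarity becomes a cheap vector computation.
# The corpus, window size and dimension below are illustrative assumptions.
import numpy as np
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "a feline sat on a rug",
    "the dog lay on the mat",
    "a canine lay on a rug",
]
window = 2  # symmetric context window size (assumed)

# 1. Count word and word-context co-occurrences within the window.
word_counts, pair_counts = Counter(), Counter()
for sentence in corpus:
    tokens = sentence.split()
    word_counts.update(tokens)
    for i, w in enumerate(tokens):
        for c in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
            pair_counts[(w, c)] += 1

vocab = sorted(word_counts)
idx = {w: i for i, w in enumerate(vocab)}
total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

# 2. Positive PMI matrix: max(0, log p(w,c) / (p(w) p(c))).
pmi = np.zeros((len(vocab), len(vocab)))
for (w, c), n_wc in pair_counts.items():
    p_wc = n_wc / total_pairs
    p_w, p_c = word_counts[w] / total_words, word_counts[c] / total_words
    pmi[idx[w], idx[c]] = max(0.0, np.log(p_wc / (p_w * p_c)))

# 3. Low-rank factorisation of the PMI matrix yields dense word embeddings.
u, s, _ = np.linalg.svd(pmi)
dim = 4  # embedding dimension (assumed)
embeddings = u[:, :dim] * np.sqrt(s[:dim])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# With a realistic corpus, words used in similar contexts (e.g. cat/feline)
# end up with similar vectors even though their spellings share nothing;
# this toy corpus only sketches the pipeline.
print(cosine(embeddings[idx["cat"]], embeddings[idx["feline"]]))
print(cosine(embeddings[idx["cat"]], embeddings[idx["rug"]]))
```

In practice the statistics come from a large corpus, and word2vec and GloVe learn comparable vectors by prediction rather than explicit factorisation; the point here is only that, once embeddings exist, similarity is a vector computation rather than a precomputed look-up table.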