
What should be encoded by position embedding for neural network language models?

Published online by Cambridge University Press: 10 May 2023

Shuiyuan Yu
Affiliation:
Institute of Quantitative Linguistics, Beijing Language and Culture University, Beijing 100083, P. R. China
Zihao Zhang
Affiliation:
Institute of Quantitative Linguistics, Beijing Language and Culture University, Beijing 100083, P. R. China
Haitao Liu*
Affiliation:
Institute of Quantitative Linguistics, Beijing Language and Culture University, Beijing 100083, P. R. China; Department of Linguistics, Zhejiang University, Hangzhou 310058, P. R. China; Centre for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, Guangzhou 510006, P. R. China
Corresponding author: H. Liu; Email: lhtzju@yeah.net

Abstract

Word order is one of the most important grammatical devices and a basis for language understanding. However, the Transformer, one of the most popular NLP architectures, does not explicitly encode word order. A common solution to this problem is to incorporate position information by means of position encoding/embedding (PE). Although a variety of methods for incorporating position information have been proposed, the NLP community still lacks detailed statistical research on position information in real-life language. To understand in more detail how position information influences the correlation between words, we investigated the factors that affect the frequency of words and word sequences in large corpora. Our results show that absolute position, relative position, being at one of the two ends of a sentence and sentence length all significantly affect the frequency of words and word sequences. In addition, we observed that the frequency distribution of word sequences over relative position carries valuable grammatical information. Our study suggests that to capture word–word correlations accurately, it is not enough to focus merely on absolute and relative position: Transformers should have access to more types of position-related information, which may require improvements to the current architecture.
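For context, the sketch below shows the standard sinusoidal position encoding of Vaswani et al. (2017), the kind of absolute-position PE the abstract alludes to. It is illustrative background only, not the method proposed or evaluated in this paper, and it assumes an even embedding dimension.

```python
import numpy as np

def sinusoidal_position_encoding(max_len, d_model):
    """Sinusoidal PE (Vaswani et al., 2017): each absolute position is
    mapped to a d_model-dimensional vector of sines and cosines at
    geometrically spaced frequencies. Assumes d_model is even."""
    positions = np.arange(max_len)[:, np.newaxis]            # (max_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]           # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)   # (max_len, d_model/2)

    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe

# Example: encodings for a length-15 sentence with 64-dimensional embeddings.
pe = sinusoidal_position_encoding(max_len=15, d_model=64)
print(pe.shape)  # (15, 64)
```

A notable property of this scheme is that the encoding of position p + k can be written as a fixed linear transformation of the encoding of position p, which is why it is often argued to make relative offsets learnable; the paper's point is that absolute and relative position alone may not suffice.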

Information

Type
Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press
Figure 1. The position–frequency distribution of the top-3 most populated word clusters in sentences of length 15.

Figure 2. The relationship between sentence length and word frequency.

Figure 3. The relationship between absolute position and word frequency.

Figure 4. Quadratic polynomial regression with outlier detection (see the sketch after this list).

Figure 5. The relationship between relative position and bigram frequency in length-15 sub-corpora.

Figure 6. The relationship between sentence length and bigram frequency.

Figure 7. The relationship between relative position and bigram frequency.

Figure 8. Relative position–frequency distribution of three bigrams with different degrees of symmetry.

Figure 9. The frequency distribution over relative position of bigrams consisting of nominative and genitive variants of English pronouns.

Figure 10. Position–frequency distribution of words in different frequency bands.
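Figure 4's caption names quadratic polynomial regression with outlier detection, but the exact procedure is not reproduced on this page. As a point of reference, the following is a minimal, hypothetical sketch of one standard approach: fit a degree-2 polynomial, flag points whose residuals exceed a z-score threshold, and refit on the inliers. The function name, the residual-z-score criterion and the threshold `z_thresh=2.5` are assumptions, not the authors' settings.

```python
import numpy as np

def quadratic_fit_with_outliers(x, y, z_thresh=2.5):
    """Fit y ~ a*x^2 + b*x + c, flag points whose residuals exceed
    z_thresh standard deviations, then refit on the inliers only."""
    coeffs = np.polyfit(x, y, deg=2)             # initial quadratic fit
    residuals = y - np.polyval(coeffs, x)
    z = (residuals - residuals.mean()) / residuals.std()
    outliers = np.abs(z) > z_thresh
    refit = np.polyfit(x[~outliers], y[~outliers], deg=2)
    return refit, outliers

# Toy example: a quadratic frequency-position trend with one injected outlier.
rng = np.random.default_rng(0)
x = np.arange(1.0, 16.0)                         # positions 1..15
y = 0.5 * x**2 - 3.0 * x + 40.0 + rng.normal(0.0, 1.0, x.size)
y[7] += 50.0                                     # artificial outlier
coeffs, mask = quadratic_fit_with_outliers(x, y)
print(coeffs, np.where(mask)[0])
```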