
Backward and trigger-based language models for statistical machine translation

Published online by Cambridge University Press:  24 July 2013

DEYI XIONG
Affiliation:
School of Computer Science and Technology, Soochow University, Suzhou 215006, China email: dyxiong@suda.edu.cn
MIN ZHANG
Affiliation:
School of Computer Science and Technology, Soochow University, Suzhou 215006, China email: minzhang@suda.edu.cn

Abstract

The language model is one of the most important knowledge sources for statistical machine translation. In this article, we present two extensions to standard n-gram language models in statistical machine translation: a backward language model that augments the conventional forward language model, and a mutual information trigger model which captures long-distance dependencies that go beyond the scope of standard n-gram language models. We introduce algorithms to integrate the two proposed models into two kinds of state-of-the-art phrase-based decoders. Our experimental results on Chinese/Spanish/Vietnamese-to-English show that both models are able to significantly improve translation quality in terms of BLEU and METEOR over a competitive baseline.
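The two extensions can be illustrated with a minimal sketch. This is not the authors' implementation: the toy corpus, add-one smoothing, and function names are illustrative assumptions. A backward language model conditions each word on its successor rather than its predecessor, and a mutual information trigger model scores long-distance (trigger, triggered) word pairs by their pointwise mutual information:

```python
import math
from collections import Counter

# Toy monolingual corpus standing in for LM training data (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]

# Collect unigram and *backward* bigram counts: (word, successor) pairs,
# since a backward model conditions each word on the word to its right.
unigrams = Counter()
backward_bigrams = Counter()
for line in corpus:
    tokens = line.split() + ["</s>"]
    unigrams.update(tokens)
    for i in range(len(tokens) - 1):
        backward_bigrams[(tokens[i], tokens[i + 1])] += 1

VOCAB = len(unigrams)

def backward_bigram_logprob(sentence):
    """Score a sentence right-to-left: sum_i log P(w_i | w_{i+1}),
    with add-one smoothing for unseen pairs (illustrative choice)."""
    tokens = sentence.split() + ["</s>"]
    lp = 0.0
    for i in range(len(tokens) - 1):
        w, nxt = tokens[i], tokens[i + 1]
        lp += math.log((backward_bigrams[(w, nxt)] + 1) /
                       (unigrams[nxt] + VOCAB))
    return lp

def pmi(pair_count, count_a, count_b, total_pairs, total_words):
    """Pointwise mutual information of a (trigger, triggered) word pair;
    a trigger model sums such scores over long-distance pairs in a
    translation hypothesis, beyond the n-gram window."""
    p_ab = pair_count / total_pairs
    return math.log(p_ab / ((count_a / total_words) * (count_b / total_words)))
```

In a decoder, the backward score would be combined log-linearly with the forward LM score, and the PMI trigger score summed over word pairs whose distance exceeds the n-gram order; the smoothing scheme and combination weights here are placeholders.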

Information

Type
Articles
Copyright © Cambridge University Press 2013
