Comparing example-based and statistical machine translation

  • ANDY WAY and NANO GOUGH
  • Published online: 01 September 2005

In previous work (Gough and Way 2004), we showed that our Example-Based Machine Translation (EBMT) system improved with respect to both coverage and quality when seeded with increasing amounts of training data, so that it significantly outperformed the on-line MT system Logomedia according to a wide variety of automatic evaluation metrics. While it is perhaps unsurprising that system performance is correlated with the amount of training data, we address in this paper the question of whether a large-scale, robust EBMT system such as ours can outperform a Statistical Machine Translation (SMT) system. We obtained a large English-French translation memory from Sun Microsystems, from which we randomly extracted a test set of approximately 4,000 sentence pairs. The remaining data was split into three training sets of roughly 50K, 100K and 200K sentence pairs, in order to measure the effect of increasing the size of the training data on the performance of the two systems. Our main observation is that, contrary to received wisdom in the field, there appears to be little substance to the claim that SMT systems are guaranteed to outperform EBMT systems when confronted with 'enough' training data. Our tests on a 4.8 million word bitext indicate that while SMT appears to outperform our system for French-English on a number of metrics, for English-French, on all but one automatic evaluation metric, the performance of our EBMT system is superior to the baseline SMT model.
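The experimental setup described above (randomly holding out a test set, then carving nested training sets of increasing size from the remainder) can be sketched as follows. This is a minimal illustration, not the authors' actual preprocessing code; the function name `split_bitext` and the toy corpus are assumptions for the example.

```python
import random

def split_bitext(pairs, test_size, train_sizes, seed=0):
    """Randomly hold out `test_size` sentence pairs as a test set,
    then take nested training sets of the given sizes from the rest."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = pairs[:]                # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    test = shuffled[:test_size]
    rest = shuffled[test_size:]
    # Each larger training set contains the smaller ones, so growth in
    # performance can be attributed to the added data alone.
    trains = [rest[:n] for n in train_sizes if n <= len(rest)]
    return test, trains

# Tiny synthetic bitext standing in for a real translation memory
corpus = [(f"en sentence {i}", f"fr phrase {i}") for i in range(10_000)]
test, trains = split_bitext(corpus, test_size=400,
                            train_sizes=(2_000, 4_000, 8_000))
print(len(test), [len(t) for t in trains])   # 400 [2000, 4000, 8000]
```

At the scale reported in the paper the sizes would be roughly 4,000 test pairs and 50K/100K/200K training pairs; nesting the training sets keeps the comparison across sizes controlled.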

Natural Language Engineering
  • ISSN: 1351-3249
  • EISSN: 1469-8110
  • URL: /core/journals/natural-language-engineering