
Vision and language: from visual perception to content creation

Published online by Cambridge University Press:  30 March 2020

Tao Mei*
Affiliation:
JD AI Research, Building A, North-Star Century Center, 8 Beichen West Road, Beijing, China
Wei Zhang
Affiliation:
JD AI Research, Building A, North-Star Century Center, 8 Beichen West Road, Beijing, China
Ting Yao
Affiliation:
JD AI Research, Building A, North-Star Century Center, 8 Beichen West Road, Beijing, China
*Corresponding author: Tao Mei. Email: tmei@jd.com

Abstract

Vision and language are two fundamental capabilities of human intelligence. Humans routinely perform tasks through the interaction between vision and language, which supports the uniquely human capacity to talk about what they see or to hallucinate a picture from a natural-language description. The question of how language interacts with vision motivates researchers to expand the horizons of computer vision. In particular, “vision to language” has arguably been one of the most popular topics of the past five years, with significant growth in both the volume of publications and the range of applications, e.g. captioning, visual question answering, visual dialog, and language navigation. Such tasks boost visual perception with more comprehensive understanding and diverse linguistic representations. Going beyond the progress made in “vision to language,” language can also contribute to vision understanding and open up new possibilities for visual content creation, i.e. “language to vision.” Here, language acts as a prism through which visual content is created, conditioned on the language inputs. This paper reviews recent advances along these two dimensions: “vision to language” and “language to vision.” More concretely, the former focuses mainly on the development of image/video captioning, together with typical encoder–decoder structures and benchmarks, while the latter summarizes the technologies of visual content creation. Real-world deployments and services of vision and language are elaborated as well.
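
To make the “vision to language” direction concrete, the sketch below illustrates the classic CNN-encoder-plus-LSTM-decoder captioner that the abstract (and Fig. 2a) refers to. It is a minimal PyTorch sketch, not the authors' exact configuration: the ResNet-50 backbone, embedding sizes, and single-layer LSTM are illustrative assumptions.

```python
# Minimal sketch of a CNN-encoder + LSTM-decoder captioner (cf. Fig. 2a).
# Backbone, dimensions, and vocabulary size are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class CNNEncoder(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        resnet = models.resnet50(weights=None)  # load pretrained weights in practice
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the fc layer
        self.fc = nn.Linear(resnet.fc.in_features, embed_dim)

    def forward(self, images):                    # images: (B, 3, H, W)
        feats = self.backbone(images).flatten(1)  # global pooled features, (B, 2048)
        return self.fc(feats)                     # image embedding, (B, embed_dim)

class LSTMDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_embed, captions):     # captions: (B, T) word indices
        # Prepend the image embedding as the first "token" (teacher forcing),
        # then predict each caption word from the preceding context.
        word_embeds = self.embed(captions)                              # (B, T, E)
        inputs = torch.cat([image_embed.unsqueeze(1), word_embeds], 1)  # (B, T+1, E)
        hidden, _ = self.lstm(inputs)                                   # (B, T+1, H)
        return self.out(hidden)                                         # word logits
```

At inference time, decoding would start from the image embedding alone and feed each sampled word back into the LSTM step by step, typically with greedy or beam search.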

Information

Type
Industrial Technology Advances
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Authors, 2020. Published by Cambridge University Press in association with the Asia Pacific Signal and Information Processing Association
Fig. 1. A road map of the techniques and datasets for vision (image/video) to language over the past 10 years.

Fig. 2. The typical architectures of (a) CNN encoder plus LSTM decoder and (b) transformer-based encoder–decoder for image captioning.

Table 1. The reported performance (%) of image captioning on the COCO testing server with 5 reference captions (c5) and 40 reference captions (c40).

Fig. 3. The road map of “language to vision” over the past five years, with milestone techniques marked along the year axis. Top: single-object generation. Bottom: multiple-object scene generation.
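
As a concrete counterpart for the “language to vision” direction, the sketch below outlines a text-conditioned image generator in the spirit of the early conditional-GAN milestones on this road map (e.g. GAN-INT-CLS). It is a minimal illustrative sketch, not a specific published model: the sentence-embedding dimension, layer widths, and 64 × 64 output resolution are assumptions.

```python
# Minimal sketch of a text-conditioned generator for "language to vision".
# A sentence embedding is compressed, concatenated with noise, and
# upsampled to an RGB image. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, text_dim=256, noise_dim=100, cond_dim=128):
        super().__init__()
        self.condition = nn.Linear(text_dim, cond_dim)  # compress sentence embedding
        self.net = nn.Sequential(
            # project noise + condition to a 4x4 map, then upsample to 64x64
            nn.ConvTranspose2d(noise_dim + cond_dim, 512, 4, 1, 0),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1),  # 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),  # 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),   # 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1),     # 64x64
            nn.Tanh(),                              # RGB values in [-1, 1]
        )

    def forward(self, text_embed, noise):           # (B, text_dim), (B, noise_dim)
        cond = torch.relu(self.condition(text_embed))
        z = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return self.net(z)                          # generated image, (B, 3, 64, 64)
```

In a full system this generator would be trained adversarially against a discriminator that judges both image realism and image–text matching; later multi-object scene-generation work on the road map replaces the single global condition with structured layouts.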