
Advances in deep learning approaches for image tagging

  • Jianlong Fu and Yong Rui
Abstract

The advent of mobile devices and media cloud services has led to unprecedented growth of personal photo collections. One of the fundamental problems in managing this growing number of photos is automatic image tagging. Image tagging is the task of assigning human-friendly tags to an image so that the semantic tags better reflect the content of the image and therefore help users access that image. The quality of image tagging depends on the quality of concept modeling, which builds a mapping from concepts to visual images. While significant progress was made on image tagging in the past decade, previous approaches achieved only limited success because of the limited representation ability of hand-crafted features (e.g., Scale-Invariant Feature Transform, GIST, Histogram of Oriented Gradients). Further progress has been made since efficient and effective deep learning algorithms were developed. The purpose of this paper is to categorize and evaluate different image tagging approaches based on deep learning techniques. We also discuss problems and applications relevant to image tagging, including data collection, evaluation metrics, and existing commercial systems. We summarize the advantages of different image tagging paradigms and propose several promising research directions for future work.
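To make the deep-learning-based tagging paradigm described above concrete, the sketch below (not taken from the paper) assigns the top-k labels of a CNN pretrained on ImageNet to a photo as tags, in contrast to pipelines built on hand-crafted features such as SIFT, GIST, or HOG. It assumes a recent torchvision (0.13+) and uses generic, illustrative names (`tag_image`, `photo.jpg`); a real tagging system would use a tag vocabulary and training data suited to personal photos rather than the ImageNet classes.

```python
# Minimal, illustrative sketch: tag one image with the k most probable labels
# from an ImageNet-pretrained CNN. Assumes torchvision >= 0.13 and Pillow.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def tag_image(path: str, k: int = 5):
    """Return the k most probable labels (tag, probability) for one photo."""
    weights = models.ResNet50_Weights.IMAGENET1K_V1
    model = models.resnet50(weights=weights)
    model.eval()
    class_names = weights.meta["categories"]  # the 1000 ImageNet labels

    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    top = torch.topk(probs, k)
    return [(class_names[i], float(p)) for p, i in zip(top.values, top.indices)]

if __name__ == "__main__":
    # "photo.jpg" is a placeholder path for illustration only.
    for tag, prob in tag_image("photo.jpg"):
        print(f"{tag}: {prob:.3f}")
```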

Copyright
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Corresponding author
Corresponding author: J. Fu Email: jianf@microsoft.com