Unsupervised Generative Learning and Native Explanatory Frameworks

20 November 2020, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

A framework of native concepts that emerge during unsupervised generative training under the joint constraints of redundancy reduction and generative accuracy is investigated using an unsupervised generative neural network model and real-world image data. We measure the characteristics of concept distributions in the latent representations and demonstrate that effective learning is possible with the identified density structure. We discuss the potential of frameworks of native information clusters in the effective latent representations of learning models as a natural platform for explaining learning processes in machine and biological systems, based on the relations and criteria of native similarity that form during generative learning under such constraints. This approach can be general, intuitive, and independent of specific architectures and types of data.
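The pipeline described in the abstract can be illustrated with a minimal sketch (this is not the authors' code, and all architecture choices, data, and hyperparameters below are illustrative assumptions): an undercomplete autoencoder whose narrow latent layer enforces redundancy reduction while reconstruction loss enforces generative accuracy, followed by clustering of the latent codes to recover emergent concept-like density structure.

```python
# Minimal sketch, assuming a linear undercomplete autoencoder and k-means
# clustering as stand-ins for the generative model and the density analysis.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image data: two redundant clusters in 8-D space.
centers = np.array([[1.0] * 8, [-1.0] * 8])
labels = rng.integers(0, 2, size=200)
X = centers[labels] + 0.1 * rng.standard_normal((200, 8))

# Linear autoencoder 8 -> 2 -> 8, trained by gradient descent on MSE.
# The 2-D bottleneck is the redundancy-reduction constraint; the MSE loss
# is the generative-accuracy constraint.
W_enc = 0.1 * rng.standard_normal((8, 2))
W_dec = 0.1 * rng.standard_normal((2, 8))
lr = 0.05
for _ in range(500):
    Z = X @ W_enc                      # latent codes
    err = Z @ W_dec - X                # reconstruction error
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

# Identify density structure in the latent space with plain k-means.
Z = X @ W_enc
mu = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(10):
    assign = np.argmin(((Z[:, None] - mu[None]) ** 2).sum(-1), axis=1)
    mu = np.array([Z[assign == k].mean(0) if np.any(assign == k) else mu[k]
                   for k in range(2)])

# If concept-like clusters emerge, they should align (up to relabeling)
# with the generative factors that produced the data.
purity = max((assign == labels).mean(), (assign != labels).mean())
print(f"cluster purity: {purity:.2f}")
```

On this toy data the latent clusters align almost perfectly with the generative factors; on real images the same idea would apply to the latent codes of a deeper generative model, with a density-aware clustering method in place of k-means.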

Keywords

Artificial Intelligence
unsupervised learning
concept learning
