Neurocognition-inspired design with machine learning

Published online by Cambridge University Press:  17 December 2020

Pan Wang
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK
Shuo Wang
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK
Danlin Peng
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK
Liuqing Chen
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK
Chao Wu
Affiliation:
School of Public Affairs, Zhejiang University, Hangzhou, China
Zhen Wei
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK
Peter Childs
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK
Yike Guo
Affiliation:
Dyson School of Design Engineering/Data Science Institute, Imperial College London, London, UK Hong Kong Baptist University, Hong Kong, China
Ling Li*
Affiliation:
School of Computing, University of Kent, Canterbury, UK
*
Corresponding author L. Li c.li@kent.ac.uk

Abstract

Generating designs via machine learning has been an ongoing challenge in computer-aided design. Recently, deep learning methods have been applied to randomly generate images in fashion, furniture and product design. However, such deep generative methods usually require a large number of training images, and human aspects are not taken into account in the design process. In this work, we seek a way to involve human cognitive factors, through brain activity indicated by electroencephalographic (EEG) measurements, in the generative process. We propose a neuroscience-inspired design-with-machine-learning method in which EEG is used to capture preferred design features; these signals are used as a condition in a generative adversarial network (GAN). First, we employ a recurrent neural network, Long Short-Term Memory (LSTM), as an encoder to extract EEG features from raw EEG signals; these data are recorded from subjects viewing several categories of images from ImageNet. Second, we train a GAN model conditioned on the encoded EEG features to generate design images. Third, we use the model to generate design images from a subject's EEG-measured brain activity. To verify the proposed generative design method, we present a case study in which subjects imagine the products they prefer, and the corresponding EEG signals are recorded and reconstructed by our model for evaluation. The results indicate that a product image generated with preference EEG signals gains more preference than those generated without EEG signals. Overall, we propose a neuroscience-inspired artificial intelligence design method for generating designs that take human preference into account. The method could help improve communication between designers and clients when clients cannot express design requests clearly.
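The abstract describes conditioning a GAN on encoded EEG features. A common way to realize such conditioning is to feed the generator a random noise vector joined with the condition vector. The sketch below is illustrative only, assuming hypothetical dimensions; all names are ours, not the authors' implementation.

```python
import random

def conditioned_generator_input(eeg_features, noise_dim=100, seed=None):
    """Concatenate a Gaussian noise vector with encoded EEG features.

    In a conditional GAN, the generator typically receives the noise
    vector joined with the condition vector; the discriminator sees
    the same condition alongside the real or generated image.
    Dimensions here are hypothetical, for illustration only.
    """
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(noise_dim)]
    return noise + list(eeg_features)

# Example: a hypothetical 128-dimensional EEG feature vector
z = conditioned_generator_input([0.0] * 128, noise_dim=100, seed=0)
print(len(z))  # 228
```

In practice, the concatenated vector would be passed through the generator network to produce an image; the EEG component steers generation toward the features it encodes.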

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2020. Published by Cambridge University Press
Figure 1. Overview of the process of brain signal conditioned design image generation.

Figure 2. Training an EEG conditioned generative model.

Figure 3. Image presentation experiment. Images were presented in the centre of the display with a central fixation cross. Ten images were shown per block, including one repeated image; subjects were required to press a button when they saw the repeated image, to maintain their attention.

Figure 4. Preference imagery experiment. The onset of each block was signalled by a central fixation cross. The 8000 ms imagery periods were signalled by auditory beeps. Before the first beep, subjects were required to visualize their preferred product for 4000 ms in preparation for the subsequent imagery. At the end of each block, subjects rated the vividness of their imagery by pressing a button.

Figure 5. EEG feature encoder.

Figure 6. Confusion matrix for the EEG encoder and examples of misclassified images. The ($ i,j $) element of the confusion matrix represents the frequency with which a product from the $ i $th class is classified as the $ j $th class.
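The confusion matrix described in this caption can be computed directly from class labels. A minimal sketch, assuming integer labels in the range 0 to n_classes − 1 (names and data are illustrative, not from the paper):

```python
def confusion_matrix(true_labels, predicted_labels, n_classes):
    """Count how often class i was predicted as class j.

    Row i, column j holds the number of samples whose true class is i
    and whose predicted class is j; the diagonal counts correct
    classifications.
    """
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(true_labels, predicted_labels):
        m[t][p] += 1
    return m

# Toy example with 3 classes
cm = confusion_matrix([0, 0, 1, 2, 2], [0, 1, 1, 2, 0], 3)
print(cm)  # [[1, 1, 0], [0, 1, 0], [1, 0, 1]]
```

Normalizing each row by its sum turns the counts into the per-class classification frequencies that the figure reports.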

Figure 7. General view of the model architecture.

Table 1. Hyperparameters of the generator architecture

Figure 8. Training procedure for each epoch.

Figure 9. Seen-image reconstruction results (grey frame, left) and imagery preference design image reconstruction results (red frame, right).

Figure 10. Human study results of the design case study.