
Representations in design computing through 3-D deep generative models

Published online by Cambridge University Press:  10 December 2024

Başak Çakmak*
Affiliation:
Department of Digital Game Design, Istanbul Bilgi University, Istanbul, Turkey
Cihan Öngün
Affiliation:
Graduate School of Informatics, Information Systems, Middle East Technical University (METU), Ankara, Turkey
*
Corresponding author: Başak Çakmak; Email: basak.cakmak@bilgi.edu.tr

Abstract

This paper explores alternative representations of physical architecture derived from its real-world sensory data through artificial neural networks (ANNs). In the project developed for this research, a detailed 3-D point cloud model is produced by scanning a physical structure with LiDAR. The point cloud data and mesh models are then divided into parts, using various techniques, according to architectural references and part-whole relationships in order to create datasets. A deep learning model is trained on these datasets, and the new 3-D models produced by deep generative models are examined. These new 3-D models, embodied in different representations such as point clouds, mesh models, and bounding boxes, are used as a design vocabulary from which combinatorial formations are generated.
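The partitioning step described above can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the authors' implementation: it divides a point cloud's bounding box into a regular grid of axis-aligned cells and groups the points into 3-D subunits, one simple way the dataset-creation stage could be approached.

```python
import numpy as np

def partition_point_cloud(points, grid=(2, 2, 2)):
    """Split an (N, 3) point cloud into axis-aligned 3-D subunits.

    Illustrative only: the bounding box is divided into a regular
    grid and each point is assigned to the cell it falls in.
    """
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against a flat axis
    # Normalised coordinates in [0, 1], mapped to integer cell indices;
    # points on the upper boundary are clamped into the last cell.
    idx = np.minimum(((points - lo) / span * grid).astype(int),
                     np.array(grid) - 1)
    parts = {}
    for cell, pt in zip(map(tuple, idx), points):
        parts.setdefault(cell, []).append(pt)
    return {cell: np.array(pts) for cell, pts in parts.items()}

# Example: the 8 corners of a unit cube land in 8 distinct subunits.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
parts = partition_point_cloud(cube, grid=(2, 2, 2))
print(len(parts))  # 8
```

The paper's actual partitions follow architectural references rather than a uniform grid, so a faithful reproduction would replace the grid indexing with design-element boundaries.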

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Figure 1. Workflow diagram.


Figure 2. A 3-D mesh model of the Faculty of Architecture at METU.


Figure 3. Partitions according to the references of design elements.


Figure 4. Dividing the 3-D point cloud model and mesh model into 3-D subunits.


Figure 5. Methodology.


Figure 6. System architecture.


Figure 7. The AutoEncoder model.


Figure 8. The generative adversarial network (GAN) model.


Table 1. The best WGAN results from LPMNet (Öngün and Temizel, 2021) compared with our results.


Figure 9. 3-D productions of the ML model at different abstraction levels.


Figure 10. A sample scene with spatial configurations of the productions.