
3D Generative Adversarial Networks to Autonomously Generate Building Geometry

ML Tags

Generative Adversarial Networks

Neural Networks


Topic Tags

Generative Design

> Software & Plug-ins Used 


  • Python, using the Keras API for TensorFlow


> Summary


The purpose of this thesis is to investigate whether the output of 3D GANs can be increased in size and improved in clarity (i.e., with reduced noise) in order to create high-quality, detailed building geometry.


The AI model used in this thesis is a Generative Adversarial Network (GAN).


The steps followed in building the model are: 

  • Selecting a training data set consisting of 3D building models (one possible data-preparation step is sketched after this list)

  • Selecting current state-of-the-art 3D GAN methods to test with the training data

  • Systematically reviewing and adjusting hyperparameters to develop a new GAN architecture that successfully creates 3D building geometry

  • Adjusting the number of layers and channels to reduce noise in the output
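
As a concrete illustration of the data-preparation step, the sketch below rasterizes a building mesh into a fixed-size voxel occupancy grid, the input representation typically used by volumetric GANs. The trimesh library, the 32-voxel resolution, and the helper name mesh_to_voxels are assumptions for illustration; the thesis does not specify its exact pipeline.

import numpy as np
import trimesh  # one possible mesh-processing library; not confirmed by the thesis

def mesh_to_voxels(path, resolution=32):
    """Load a building mesh and rasterize it into a (resolution,)*3 occupancy grid."""
    mesh = trimesh.load(path, force="mesh")
    # Pick a voxel pitch so the longest bounding-box extent fits inside the grid.
    pitch = mesh.extents.max() / (resolution - 1)
    grid = mesh.voxelized(pitch).matrix.astype(np.float32)
    # Crop any one-cell overshoot, then zero-pad to an exact cube.
    grid = grid[:resolution, :resolution, :resolution]
    pad = [(0, resolution - s) for s in grid.shape]
    return np.pad(grid, pad)

# Example: stack all training meshes into one array of shape (N, 32, 32, 32)
# dataset = np.stack([mesh_to_voxels(p) for p in mesh_paths])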


The following conclusions were reached:

Key hyperparameters: The most successful architecture used the Wasserstein loss with gradient penalty (WGAN-GP), Leaky ReLU activations in both the generator and the critic, and the RMSprop optimizer with a learning rate of 0.00005.
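
A minimal sketch of these loss components in TensorFlow is shown below. It assumes a critic that outputs one unbounded score per voxel grid; the gradient-penalty weight of 10 is the common default from the WGAN-GP literature, not a value reported in the thesis.

import tensorflow as tf

def critic_loss(real_scores, fake_scores):
    # Wasserstein critic objective: score real samples high, fakes low.
    return tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores)

def generator_loss(fake_scores):
    # The generator tries to raise the critic's score on its samples.
    return -tf.reduce_mean(fake_scores)

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    # Penalize the critic's gradient norm on points interpolated between
    # real and generated voxel grids (5D tensors: batch, d, h, w, channel).
    batch_size = tf.shape(real)[0]
    eps = tf.random.uniform([batch_size, 1, 1, 1, 1], 0.0, 1.0)
    interpolated = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = critic(interpolated, training=True)
    grads = tape.gradient(scores, interpolated)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3, 4]) + 1e-12)
    return gp_weight * tf.reduce_mean(tf.square(norm - 1.0))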


Optimizers:

Two optimizers were tested. With Adam, architectures performed better when combined with learning-rate decay; with RMSprop, they performed better with a fixed learning rate. In the experiments, RMSprop outperformed Adam for this application when the outputs were compared against the models in the data set for size, shape, and proportion.
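
The two setups might look as follows in Keras. The fixed RMSprop learning rate (0.00005) is the value reported above; the Adam base rate and the exponential-decay schedule are illustrative assumptions.

from tensorflow import keras

# Option A: Adam, which performed better when paired with learning-rate decay.
# The base rate and decay schedule here are placeholders, not thesis values.
adam_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-4, decay_steps=1000, decay_rate=0.96)
adam_opt = keras.optimizers.Adam(learning_rate=adam_schedule)

# Option B: RMSprop with a fixed learning rate, which outperformed Adam here.
rmsprop_opt = keras.optimizers.RMSprop(learning_rate=0.00005)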


Scaling architectures:

Architectures that performed well relative to others when all networks had few layers and channels continued to perform well when the depth and width of the network were scaled up. Starting with small networks therefore makes it possible to test many candidate architectures quickly; once a well-performing architecture is identified, increasing its depth (layers) and width (channels) helps to reduce noise in the output. The balance between depth, width, and the size of the training data should always be kept in mind.
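
A hedged sketch of how this screen-small-then-scale-up workflow might be parameterized in Keras: n_upsample controls depth and base_channels controls width, so the same candidate can be evaluated small and then grown. The function, defaults, and resolutions are illustrative, not the thesis's exact architecture.

from tensorflow import keras
from tensorflow.keras import layers

def build_generator(latent_dim=128, base_channels=32, n_upsample=3):
    """Depth = n_upsample transposed-conv stages; width = base_channels."""
    start = 4  # starting spatial resolution (4x4x4)
    channels = base_channels * 2 ** (n_upsample - 1)
    model = keras.Sequential([keras.Input(shape=(latent_dim,))])
    model.add(layers.Dense(start ** 3 * channels))
    model.add(layers.Reshape((start, start, start, channels)))
    for _ in range(n_upsample - 1):  # each stage doubles the resolution
        channels //= 2
        model.add(layers.Conv3DTranspose(channels, 4, strides=2, padding="same"))
        model.add(layers.LeakyReLU(0.2))
    model.add(layers.Conv3DTranspose(1, 4, strides=2, padding="same",
                                     activation="sigmoid"))  # voxel occupancy
    return model

small = build_generator(base_channels=16, n_upsample=3)  # fast screening, 32^3 output
large = build_generator(base_channels=64, n_upsample=4)  # scaled up, 64^3 output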


Limitations: Because of limited availability of suitable data, the training data set was quite small. The approach needs to be scaled up with a much larger data set, and opportunities to incorporate memorization-rejection strategies should also be explored.


> Possible Applications


For ideas on how to implement some of the techniques mentioned above, please see
'Possible applications for students to try with Generative Deep Learning'.
