
Supervised Learning examples

Supervised Learning is a type of machine learning in which an algorithm learns from labeled data in order to make predictions on new inputs. This tutorial shows several examples of supervised learning using artificial neural networks (ANNs) in Grasshopper. One example is a regression problem, where a neural network predicts the values of new inputs. Another example is binary classification, where a neural network classifies points as inside or outside a specified shape. The tutorial also includes an image classification example using a convolutional neural network (CNN) on the MNIST dataset.

Contents

Regression problem examples
Classification problem examples
CNN Classification problem examples
LeNet example

> Regression problem examples

The two main types of supervised learning problems are classification and regression.

Here we will build a simple regression example in Grasshopper:


Example file: 02_SL_Regression


In this exercise, we will investigate the capacity and constraints of a neural network in predicting the values of new inputs based on a set of existing data.

Untitled (4).png

Generating random points along the Bezier curve.

Let’s take a look at the cluster. Here we use the Graph Mapper to generate a set of random points scattered around a Bezier curve.
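Outside of Grasshopper, the same idea can be sketched in a few lines of Python: sample random x values, evaluate a curve, and add a little noise. The sine curve below is only a stand-in for the Bezier curve produced by the Graph Mapper, so treat it as an illustration rather than the exact data in the example file.

import numpy as np

rng = np.random.default_rng(0)

def curve(x):
    # Stand-in for the Bezier curve drawn with the Graph Mapper (an assumption).
    return 0.5 + 0.4 * np.sin(2.0 * np.pi * x)

x_train = rng.uniform(0.0, 1.0, size=(200, 1))              # random inputs in the 0-1 range
y_train = curve(x_train) + rng.normal(0.0, 0.03, (200, 1))  # targets scattered around the curve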

Untitled (5).png

Inside the cluster

Untitled (6).png

Generated random points

Let’s have a look at the whole network architecture:

The first part of the network is the input layer, which is followed by three Dense layers.

Untitled (7).png

Overview of network structure.

First, we define the input size. It has to match both your training data inputs and the inputs you will later predict on.

 

In the first Keras Layer, we use Tanh as our activation function. An activation function determines the output of a neuron from its inputs; its essential feature is that it adds non-linearity to the neural network.


You can learn more about what an activation function is here.

Untitled (8).png

Setting input size and type of activation function.

After setting the initial input size, we define some Dense layers to create a multilayer perceptron (MLP). If you scroll down the menu, you will see the different components for Keras layers.
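For reference, here is a rough Python/Keras sketch of the stack these components build: one scalar input followed by three Dense layers. The layer widths are illustrative assumptions, not the exact values in the example file.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(1,)),              # input size 1: a single x value per sample
    layers.Dense(32, activation="tanh"),  # Dense layer with Tanh activation
    layers.Dense(32, activation="tanh"),
    layers.Dense(1),                      # linear output for the predicted y value
])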


We won’t cover them in detail, but if you are interested, you can find out more about the different kinds of layers here.

Choosing the type of component in a Keras Layer.png

Choosing the type of component in a Keras Layer.

At the end of the layer stack, we must declare a Keras Model. The model groups the layers into an object with training and inference features. Compiling a Keras model requires an optimizer: a function or algorithm that adjusts the attributes of the neural network, such as its weights and learning rate, in order to reduce the overall loss and improve accuracy.
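In plain Keras, declaring the model and choosing an optimizer corresponds roughly to a compile call on the sketch above. Adam and the mean-squared-error loss are assumptions here, not necessarily the defaults used by the Grasshopper component.

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # optimizer that updates the weights
    loss="mse",                                           # loss the optimizer tries to reduce
    metrics=["mae"],                                      # mean absolute error, reported during training
)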


You can learn more about what a Keras Model is here, and what an optimizer is here.

Defining a Keras Model.png

Defining a Keras Model.

We have finished setting up the network framework, so now we connect the network to the Supervised Learning (SL) solver and start training! We can further adjust the batch size and the number of epochs to improve accuracy. The metric shown here is the mean absolute error (MAE).


Epochs: One epoch is when the ENTIRE training dataset is passed forward and backward through the neural network exactly ONCE.


Batch Size: Total number of training examples present in a single batch.


You can learn more about what an MAE is here.
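Continuing the Keras sketch, training with a chosen batch size and number of epochs would look roughly like this; the numbers are placeholders to experiment with, not the settings from the example file.

history = model.fit(
    x_train, y_train,
    batch_size=32,   # training examples per gradient update
    epochs=200,      # complete passes over the whole dataset
    verbose=0,
)
mae_curve = history.history["mae"]  # per-epoch MAE (the key may be "mean_absolute_error" in some Keras versions)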

Start training the SL agent and see the MAE result progress..png

Start training the SL agent and see the MAE result progress.

After that, we generate some random input points and predict their Y values, which should land close to the Bezier curve. Notice that the input size corresponds to what we defined at the beginning of the network.
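Prediction on new random inputs, continuing the same sketch; the predicted values should land close to the stand-in curve.

x_new = rng.uniform(0.0, 1.0, size=(50, 1))  # new inputs with the same shape as the training inputs
y_pred = model.predict(x_new)                # predicted y values, ideally close to the curve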

Untitled (9).png
result of the prediction in pink dots.png

Result of the prediction in pink dots.

> Classification problem examples

Example file: 01_SL_Classification


The Pug plugin will train a network using supervised learning to perform binary classification of points: classifying each point as inside or outside a specified shape. This exercise serves as a simple demonstration of how machine learning can be applied to real-world classification problems. The neural network will be trained on labeled data and can then classify new, unlabeled data based on the knowledge acquired during training.
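Outside of Grasshopper, the equivalent workflow can be sketched in Keras as follows. The circle used to label the points is only a stand-in for the boundary curve defined in the example file, and the layer sizes, loss, and training settings are assumptions rather than what the Pug components use internally.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)

# 1000 random points in the unit square, labeled 1 if inside a circle (stand-in for the boundary curve).
points = rng.uniform(0.0, 1.0, size=(1000, 2))
labels = (np.linalg.norm(points - 0.5, axis=1) < 0.35).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(2,)),
    layers.Dense(16, activation="tanh"),
    layers.Dense(16, activation="tanh"),
    layers.Dense(1, activation="sigmoid"),   # probability that a point is inside
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(points, labels, batch_size=32, epochs=50, verbose=0)

# Classify new, unlabeled points with the trained network.
test_points = rng.uniform(0.0, 1.0, size=(20, 2))
inside = model.predict(test_points) > 0.5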

generate 1000 points.png

Generate 1000 points.

random generation points and remap  to 0-1.png

Randomly generate points and remap them to 0-1.

We will define a curve and a set of points, and determine whether each point is located inside or outside the curve.

define the boundary curve for inside outside condition.png

Define the boundary curve for inside/outside condition.

After defining the curve and points, we will initiate training by pressing the "run" button on the supervised learning component.

Untitled (10).png

To verify the accuracy of the trained model, we will generate random points and test if they are classified as inside or outside the curve.

Untitled (11).png

We will connect the tensor of test points to the prediction component to evaluate if they are classified as inside or outside the curve.

Untitled (12).png

> CNN Classification problem examples

Example file: 03_SL_Classification_CNN


A Convolutional Neural Network (CNN) is a neural network that processes and analyzes visual data. CNNs are commonly used for image classification and have proven effective in many other applications, such as object detection and image segmentation. The critical difference between a CNN and a Dense Neural Network (DNN) is that a CNN applies convolution and pooling operations to the input, which allows it to learn spatial hierarchies of features. A DNN, on the other hand, is a fully connected network that applies the same computations to every input value, regardless of the input's spatial structure. In the example of classifying hand-written digits, a CNN may therefore outperform a DNN due to its ability to identify and extract features from the spatial structure of the images.

MNIST database.png

MNIST database. (2023, January 14). In Wikipedia. https://en.wikipedia.org/wiki/MNIST_database

We import images of handwritten digits from the MNIST dataset, then reshape each image into a tensor with dimensions 28x28x1 (height, width, number of color channels). The number of color channels represents the depth of the image, typically either 1 for grayscale or 3 for RGB.
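In Python, loading, reshaping, and normalizing the MNIST images would look roughly like this (the Pug MNIST component presumably does the equivalent internally):

from tensorflow import keras

# Load MNIST and reshape each image to (height, width, channels); 1 channel because the images are grayscale.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0  # normalize pixel values to 0-1
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0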

Import and normalize the values.png

Import and normalize the values.

We will use Pug to build two network architectures: a Convolutional Neural Network (CNN) and a fully connected network. The main difference between them is their structure: the CNN's convolution and pooling layers are tailored to image classification tasks.
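As a rough Keras sketch of the two options (filter counts and layer widths are assumptions, not the exact settings in the example file):

from tensorflow import keras
from tensorflow.keras import layers

# CNN: convolution and pooling layers extract spatial features before the classifier head.
cnn = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # one output per digit class
])

# Fully connected network: flattens the image and ignores its spatial structure.
dense_net = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])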

Two different network architecture.png

Two different network architectures.

The labeled images from the MNIST data set are then fed into either a CNN or a fully connected neural network architecture using the Pug SL component. The architecture is defined using the Pug Object component, which is plugged into the ANN input of the Pug SL component. Running the component trains the network to classify the images based on their labels, which are numbers zero to nine.
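In Keras terms, and continuing the sketches above, training either network on the labeled digits corresponds roughly to the following; the loss matches the integer labels zero to nine, while the optimizer, batch size, and epoch count are assumptions.

cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",  # labels are the integers 0-9
            metrics=["accuracy"])
cnn.fit(x_train, y_train, batch_size=128, epochs=5)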

Untitled (13).png

After connecting the trained network to the prediction component, we will use the testing dataset from the MNIST component to evaluate the accuracy of the network. The Owl component can then be used to visualize the tensor.

Untitled (14).png

> LeNet example

Example file: LeNet-5.gh


LeNet is a famous Convolutional Neural Network architecture introduced in the late 1990s for recognizing hand-written digits. It comprises multiple Convolutional and Pooling layers followed by fully connected layers. This architecture is often used as a starting point for other image classification tasks. In this example, we will compare its performance against the previously used network architecture using the MNIST data set.


LeNet is a convolutional neural network structure.

 

You can learn more about LeNet here.

Data flow in LeNet.png

We will construct the LeNet architecture with the help of the Pug Keras layer component.
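For reference, here is a Keras sketch of the classic LeNet-5 layout. The filter counts and layer sizes follow the original 1998 design, while the tanh activations and 28x28 input are common modern substitutions and may differ from the example file.

from tensorflow import keras
from tensorflow.keras import layers

lenet5 = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, kernel_size=5, padding="same", activation="tanh"),   # C1: 6 feature maps
    layers.AveragePooling2D(pool_size=2),                                 # S2: subsampling
    layers.Conv2D(16, kernel_size=5, activation="tanh"),                  # C3: 16 feature maps
    layers.AveragePooling2D(pool_size=2),                                 # S4: subsampling
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                                 # C5
    layers.Dense(84, activation="tanh"),                                  # F6
    layers.Dense(10, activation="softmax"),                               # output: digits 0-9
])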

Untitled (15).png

Next, plug the LeNet architecture into the Pug SL component and hit the run button to train the network using the MNIST data set. Finally, connect the trained agent to the prediction component and test its accuracy using the testing dataset from the MNIST component. Training the LeNet architecture may take more time due to its deeper network structure.

Untitled (16).png