Self-Organizing Maps | Training process

As discussed in previous notebooks, when training a classifier it is important to understand how the training progresses. For SOMs, we can plot the quantization error and the topographic error at each step; this shows how many iterations are needed to obtain a trained model. The quantization error measures the quality of the learning and is the average distance between each input sample and the weights of its winning neuron (best matching unit, BMU). The topographic error measures the projection quality and is the proportion of samples whose first and second best matching units are not adjacent on the map. For both measures, lower is better.
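To make the two definitions concrete, here is a minimal NumPy-only sketch that computes both errors directly from their definitions, using hypothetical random data and a hypothetical fixed 5×5 weight grid (no training involved). Adjacency is taken in the Chebyshev sense, i.e. diagonal neighbours count as adjacent; exact library implementations may use a slightly different adjacency criterion.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(50, 3)        # hypothetical input samples
W = rng.rand(5, 5, 3)      # hypothetical 5x5 SOM weight grid

# Distance from every sample to every neuron, shape (50, 25)
flat = W.reshape(-1, 3)
d = np.linalg.norm(X[:, None, :] - flat[None, :, :], axis=2)

# Quantization error: mean distance to the best matching unit (BMU)
q_error = d.min(axis=1).mean()

# Topographic error: fraction of samples whose first and second BMUs
# are not adjacent on the 5x5 grid (Chebyshev distance > 1)
order = d.argsort(axis=1)[:, :2]                        # indices of 1st and 2nd BMU
coords = np.stack(np.unravel_index(order, (5, 5)), axis=-1)  # (50, 2, 2) grid coords
not_adjacent = np.abs(coords[:, 0] - coords[:, 1]).max(axis=1) > 1
t_error = not_adjacent.mean()

print(q_error, t_error)
```

With untrained random weights both errors are typically large; the training loop below drives them down over the iterations.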


import numpy as np
import matplotlib.pyplot as plt
from minisom import MiniSom

# X is the data matrix prepared in the previous notebooks
som = MiniSom(10, 20, X.shape[1], sigma=3., learning_rate=.7,
              neighborhood_function='gaussian', random_seed=10)

max_iter = 1000
q_error = []
t_error = []

for i in range(max_iter):
    # Pick a random sample and perform a single training step
    rand_i = np.random.randint(len(X))
    som.update(X[rand_i], som.winner(X[rand_i]), i, max_iter)
    # Record both errors over the whole dataset after each step
    q_error.append(som.quantization_error(X))
    t_error.append(som.topographic_error(X))

plt.plot(np.arange(max_iter), q_error, label='quantization error')
plt.plot(np.arange(max_iter), t_error, label='topographic error')
plt.ylabel('error')
plt.xlabel('iteration index')
plt.legend()
plt.show()
