Unsupervised Learning Final
Example: Consider a dataset of house prices. Each house is represented by features such as the number of
bedrooms, square footage, and location. The price of the house is the output (or label). In supervised learning,
the algorithm is trained on this data to predict the price of a new house based on its features.
Common supervised learning algorithms include:
Decision trees
Neural networks
Unsupervised Learning
Unsupervised learning is a type of machine learning where the algorithm is given data that is not labeled. The
algorithm must learn patterns, relationships, or structures in the data without explicit guidance on what the
output should be. It is mainly used for clustering or dimensionality reduction.
Example: Consider a dataset of customer data at a retail store with features like age, purchasing history, and
browsing time. There are no labels to indicate what group each customer belongs to. In unsupervised learning,
the algorithm can group similar customers together, which can be useful for targeting marketing campaigns.
Output: No predefined label; the algorithm finds patterns like customer segments.
Common unsupervised learning algorithms include:
K-means clustering
Hierarchical clustering
Autoencoders
Differences Between Supervised and Unsupervised Learning
In supervised learning, the algorithm is trained on labeled input-output pairs and learns to predict the correct output for new inputs. In unsupervised learning, there are no labels: the algorithm must discover structure on its own, typically in the form of clusters or a lower-dimensional representation of the data.
K-Means Algorithm
The K-Means algorithm is used to divide a given dataset into k distinct groups or clusters. Below is an
easy-to-follow explanation of the algorithm step by step:
Objective The goal of the K-Means algorithm is to group the data points into k clusters such that points in
the same cluster are more similar to each other than to points in other clusters.
Initial Setup You start by choosing the number of clusters (k ), which you need to decide beforehand. Initially,
the algorithm assigns k random points in the input space as the centers of the clusters (called centroids).
Distance Calculation To group data points, we need to measure how close a point is to a cluster center.
This is done using a distance measure like Euclidean distance (the straight-line distance between two points).
Assigning Points to Clusters For each data point, calculate its distance from all the cluster centers. Assign
each data point to the cluster whose center is closest to it. This means each point is grouped based on the
minimum distance to a cluster center.
Updating Cluster Centers After assigning all data points to clusters, compute the mean of the points
in each cluster. The new cluster center is placed at this mean point (the average position of all points in the
cluster). This step ensures that the cluster centers move towards the center of the group of points they are
responsible for.
Iterative Process Repeat the steps of reassigning points to clusters and updating the cluster centers. This
process continues until the cluster centers stop moving or change very little, meaning the algorithm has converged
and the clusters are stable.
Stopping Criteria The algorithm stops when the cluster centers don’t change their positions significantly,
meaning the best cluster centers for the data have been found.
Example Imagine a group of people (data points) being grouped around k tour guides (cluster centers).
Initially, the tour guides stand randomly, and people move toward the guide closest to them. As people gather,
the tour guides move to the center of their respective groups. This process repeats until the tour guides no
longer need to move.
K-Means Algorithm
1. Start by selecting k (number of clusters).
2. Randomly initialize k cluster centers.
3. For each point, calculate the distance to each cluster center and assign it to the closest one.
4. Recalculate the cluster center as the mean of the assigned points.
5. Repeat the process until cluster centers stop moving.
6. The output is k clusters where each point belongs to its nearest cluster.
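The numbered steps above translate almost line for line into code. Below is a minimal NumPy sketch of the algorithm, assuming the data is an (N, d) array and that k and the number of iterations are chosen by the user; the function and variable names are illustrative, and the initialisation simply picks k random data points as starting centres.

import numpy as np

def kmeans(data, k, max_iters=100, seed=0):
    """Minimal k-means: returns the cluster centres and each point's assignment."""
    rng = np.random.default_rng(seed)
    # Step 2: initialise the centres by picking k random data points.
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(max_iters):
        # Step 3: assign each point to its nearest centre (Euclidean distance).
        distances = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        assignments = distances.argmin(axis=1)
        # Step 4: move each centre to the mean of its assigned points.
        new_centres = np.array([
            data[assignments == j].mean(axis=0) if np.any(assignments == j) else centres[j]
            for j in range(k)
        ])
        # Step 5: stop when the centres no longer move.
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return centres, assignments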
The k-Means Algorithm
Initialisation
– Choose a value for k.
– Choose k random positions in the input space.
– Assign the cluster centres µj to those positions.
Learning
– Repeat
* For each datapoint xi :
Compute the distance to each cluster centre.
Assign the datapoint to the nearest cluster centre with distance di = minj d(xi , µj ).
* For each cluster centre:
Move the position of the centre to the mean of the points in that cluster:
\mu_j = \frac{1}{N_j} \sum_{i=1}^{N_j} x_i
where N_j is the number of points in cluster j.
– Until the cluster centres stop moving.
Usage
– For each test point:
* Compute the distance to each cluster centre.
* Assign the datapoint to the nearest cluster centre with distance di = minj d(xi , µj ).
Dealing with Noise
Clustering is often used to handle noisy data. Noisy data can occur for several reasons; it might be slightly
corrupted or even completely wrong. If we can group the data points into the right clusters, we can effectively
reduce the noise. This is because we can replace each noisy data point with the average of the points in its
cluster.
However, the k-means algorithm uses the average (mean) to find the cluster centers, which can be greatly
affected by outliers—these are extreme values that don’t fit with the rest of the data. For example, if we have
the numbers (1, 2, 1, 2, 100), the average is 21.2, but the middle value, known as the median, is 2. The median
is a robust statistic because it is not influenced by outliers.
To improve our clustering method, we can replace the average with the median when calculating the cluster
centers. This change makes the algorithm more resistant to noise. However, calculating the median can take
more time than finding the average, but it helps in effectively reducing noise in the data.
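A minimal sketch of the idea with NumPy: the outlier drags the mean far from the bulk of the data while the median stays put, which is why a k-medians style centre update is more robust to noise.

import numpy as np

values = np.array([1, 2, 1, 2, 100])
print(values.mean())      # 21.2 -- pulled towards the outlier
print(np.median(values))  # 2.0  -- unaffected by the outlier

# In a k-medians style update, the cluster centre would be the
# component-wise median of the assigned points instead of their mean:
# centre_j = np.median(data[assignments == j], axis=0)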
The k-Means Neural Network
In the k-means algorithm, each input identifies the closest cluster center by calculating the distance to all centers. We can
mimic this in a neural network. The position of each neuron corresponds to its weights in weight space. For
each input, we calculate the distance between the input and each neuron’s weight. During training, we adjust
the position of the neuron by changing its weights.
To implement the k-means algorithm using neurons, we will use one layer of neurons and some input nodes,
without a bias node. The first layer will consist of the input nodes that don’t perform any computations, while
the second layer will contain competitive neurons. Only one neuron will “fire” for each input, meaning that
only one cluster center can represent a particular input vector. The neuron that is closest to the input will be
chosen to fire. This approach is called winner-takes-all activation, which is a form of competitive learning. In
competitive learning, neurons compete to fire based on their closeness to the input. Each neuron may learn
to recognize a specific feature, firing only when it detects that feature. For example, you could have a neuron
trained to recognize your grandmother.
We will select k neurons and connect the inputs to the neurons fully, as usual. The activation of the neurons
will be calculated using a linear transfer function as follows:
h_i = \sum_j w_{ij} x_j .
When the inputs are normalized to have the same scale, this effectively measures the distance between the
input vector and the cluster center represented by that neuron. Higher activation values indicate that the input
and the cluster center are closer together.
The winning neuron is the one closest to the current input. The challenge is updating the position of that
neuron in weight space, or how to adjust its weights. In the original k-means algorithm, this was straightforward;
we simply set the cluster center to be the mean of all data points assigned to that center. However, in neural
network training, we often input just one vector at a time and change the weights accordingly (using online
learning rather than batch learning). This means we don’t know the mean because we only have information
about the current input.
To address this, we approximate the mean by moving the winning neuron closer to the current input. This
makes it more likely to be the best match for the next occurrence of that input. The adjustment is given by:
∆w_{ij} = η x_j .
However, this method might not be sufficient. We will explore why that is when we discuss normalization
further.
Neuronal Activation
The neuronal activation can be expressed as:
h_i = W_i^T · x,
where · denotes the inner product (or scalar product) between two vectors, and W_i^T is the transpose of the
i-th row of the weight matrix W .
The inner product equals ∥W_i∥ ∥x∥ cos θ, where θ is the angle between the two vectors, and
∥ · ∥ represents the magnitude of the vector. If the magnitudes of all vectors are normalized to one, then only
the angle θ influences the size of the dot product.
This means that the closer the two vectors point in the same direction, the larger the activation will be.
Therefore, the angle between the vectors is crucial in determining how effectively they match each other.
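A small sketch illustrating this point: once both the weight vectors and the input are normalized to unit length, the dot-product activation is simply cos θ, so the neuron with the highest activation is the one whose weights point in the direction closest to the input. The sizes and names below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                      # 4 neurons, 3 inputs
W /= np.linalg.norm(W, axis=1, keepdims=True)    # normalise each weight vector

x = rng.normal(size=3)
x /= np.linalg.norm(x)                           # normalise the input

activations = W @ x                  # h_i = W_i . x = cos(theta_i)
winner = activations.argmax()        # winner-takes-all: smallest angle to the input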
The important point is that we only update the weights of the neuron that is the closest match to the
input. In this method, unlike in supervised learning where we minimize the overall error across all weights, we
minimize the error for each weight independently. This makes the analysis of the algorithm more complex, but
competitive learning algorithms like this tend to work well in practice.
Key Points:
Weight Update Rule: Moves weights closer to inputs based on their distance.
Independence: Each weight is updated independently, which makes the analysis harder but works well in practice.
The On-Line k-Means Algorithm
Learning
– Normalize the data so that all points lie on the unit sphere.
– Repeat the following steps:
* For each datapoint:
· Compute the activations of all the nodes.
· Pick the winner as the node with the highest activation.
· Update the weights using the weight update rule.
– Continue until the number of iterations exceeds a predefined threshold.
Usage
– For each test point:
* Compute the activations of all the nodes.
* Pick the winner as the node with the highest activation.
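The learning loop above can be sketched in a few lines of NumPy. This is only an illustrative version: it assumes unit-normalised inputs and a fixed learning rate, and it uses the move-towards-the-input update w ← w + η(x − w) (the same incremental rule used in the Competitive K-Means section below) followed by renormalisation, rather than the bare ∆w = ηx rule; all names are made up for the example.

import numpy as np

def online_kmeans(data, k, eta=0.1, n_epochs=10, seed=0):
    """Sketch of winner-takes-all competitive learning for k cluster neurons."""
    rng = np.random.default_rng(seed)
    # Normalise the data so all points lie on the unit sphere.
    data = data / np.linalg.norm(data, axis=1, keepdims=True)
    weights = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(n_epochs):
        for x in data:
            activations = weights @ x       # h_i = sum_j w_ij x_j
            winner = activations.argmax()   # highest activation = closest centre
            # Move only the winning neuron towards the input.
            weights[winner] += eta * (x - weights[winner])
            weights[winner] /= np.linalg.norm(weights[winner])  # stay on the unit sphere
    return weights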
Difference between Normal K-Means and Competitive K-Means
Normal K-Means is the standard clustering algorithm where the assignment of data points to centroids is
purely based on the distance (usually Euclidean distance). The algorithm repeatedly computes the mean of
all points assigned to a cluster and updates the cluster centroids.
Competitive K-Means introduces the concept of competitive learning into K-Means, where centroids
”compete” for each data point. The key difference is how centroids are updated. In Competitive K-Means:
The centroids are updated not based on the average of the assigned points but in a way that the ”winning”
centroid (the closest one) adjusts itself incrementally toward the input data point.
This is similar to competitive learning in neural networks, where only the winning neuron (centroid) is
adjusted during each iteration.
1. Normal K-Means
In normal K-Means, we follow these steps:
Step 1: Assignment
For each point, compute the distance to both centroids and assign the point to the nearest centroid.
Step 2: Update
New Centroid 1: mean of the assigned points (2, 2) and (3, 4) → (2.5, 3)
Step 3: Repeat
Re-assign and update until convergence. This iterative process continues until the centroids stop moving
significantly.
2. Competitive K-Means
In Competitive K-Means, the centroids compete for each data point and only the ”winning” centroid is updated
incrementally towards the input data point, rather than updating based on the average of all assigned points.
Step 1: Assignment (Competition)
Like normal K-Means, we assign points to the nearest centroid based on distance.
Step 2: Incremental Update
Instead of calculating the mean, the winning centroid is incrementally adjusted towards each assigned data
point. This adjustment is often controlled by a learning rate (η).
Set learning rate η = 0.5. Let us go through each point:
New Centroid 1 = (2, 2) + 0.5 × ((2, 2) − (2, 2)) = (2, 2) (no change)
New Centroid 2 = (6, 6.5) + 0.5 × ((8, 8) − (6, 6.5)) = (7, 7.25)
Repeat
The process repeats for each new point with incremental updates to the centroids.
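These updates are easy to verify numerically; the sketch below simply replays the two calculations above with NumPy, using the same centroids and learning rate as the example.

import numpy as np

eta = 0.5
c1, c2 = np.array([2.0, 2.0]), np.array([6.0, 6.5])

c1 = c1 + eta * (np.array([2.0, 2.0]) - c1)   # winner for point (2, 2) -> stays (2, 2)
c2 = c2 + eta * (np.array([8.0, 8.0]) - c2)   # winner for point (8, 8) -> (7, 7.25)
print(c1, c2)   # [2. 2.] [7.   7.25]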
Key Differences
Centroid Update Method:
– Normal K-Means: Centroids are updated by averaging all assigned points.
– Competitive K-Means: The winning centroid is updated incrementally toward each data point
(using a learning rate).
Convergence Speed:
– Normal K-Means may converge faster as centroids move directly to the mean.
– Competitive K-Means may require more iterations due to incremental updates but could offer
better adaptability, especially for online data where new points arrive sequentially.
Suitability:
– Normal K-Means works well with batch data and when all data points are available.
– Competitive K-Means is more suitable for streaming or dynamic data, where centroids continu-
ously evolve based on incoming data points.
Choosing the Number of Clusters (k)
Elbow Method:
Plot the total within-cluster sum of squares against increasing values of k. The error always decreases as k grows, but beyond some point the improvement becomes marginal; this "elbow" in the curve is taken as a reasonable choice of k.
Silhouette Score:
The silhouette score measures how similar a data point is to its own cluster compared to other clusters.
A high silhouette score indicates that data points are well-clustered.
By calculating the silhouette score for different values of k, the k with the highest average score is
considered the best choice.
Domain Knowledge:
Often, the choice of k depends on prior knowledge about the data. For example, if you are clustering
customer groups in a retail setting, business objectives or existing customer segmentation models may
guide you in choosing a suitable k.
Cross-Validation:
In cases where you can validate the clustering performance with some labeled data, cross-validation
techniques can help determine the optimal k. You can assess the accuracy of clustering for different
values of k based on external validation metrics like purity or adjusted Rand index.
Gap Statistic:
The Gap Statistic compares the total within-cluster variation for different values of k with the expected
variation under a random distribution. The best k is the one with the largest gap, indicating that the
clustering structure is far from random.
Data Size: For large datasets, too many clusters can lead to overfitting, while too few clusters may not
capture the true underlying structure.
Computational Resources: The time complexity of k-means increases with k. Choosing a large k for
big datasets may be computationally expensive, so practical constraints should be considered.
Conclusion: Choosing k in k-means clustering is not straightforward, and multiple techniques should be
used in conjunction to select the most appropriate value. Methods like the Elbow Method, Silhouette Score,
and Gap Statistic, combined with domain knowledge and practical considerations, provide a robust approach
to determining k.
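As a concrete illustration of one of these criteria, the sketch below scores several candidate values of k with the silhouette score. It assumes scikit-learn is available and uses made-up toy data with three obvious blobs; it is one possible recipe, not the definitive way to pick k.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy data: three well-separated blobs, so k = 3 should score best.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in ((0, 0), (5, 5), (0, 5))])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # higher = better-separated clusters

best_k = max(scores, key=scores.get)
print(scores, "best k:", best_k)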
Vector Quantization
In Vector Quantization (VQ), we use competitive learning to compress data, a concept closely related to
noise removal. Data compression is essential in many applications, such as storing data or transmitting speech
and image data. The main idea is to reduce the number of data points by replacing them with a cluster center
or ”prototype vector” that the data point belongs to.
In noise reduction, the goal is to replace a noisy input with a cleaner, representative cluster center. In
data compression, the purpose is to minimize the amount of data being transmitted or stored. The method
for both involves using competitive learning to identify the closest cluster center to each data point and replacing
the input with this center.
Lossy Compression and Vector Quantization
One challenge arises when a data point is not in the codebook (the shared set of prototype vectors). In that case, we send the index of the closest
available prototype vector. This is the core idea behind vector quantization, which leads to lossy compres-
sion. In lossy compression, the data received may not be identical to the original, but it is close enough for
practical use.
The set of prototype vectors partitions the input space into regions, each containing the points closest to one prototype; this partition is the Voronoi tessellation, and it also captures the relationships between prototype vectors. If we connect the prototype vectors (or "cities") that share a
boundary in the Voronoi Tessellation, we create a set of triangles. These triangles form what is known as the
Delaunay Triangulation, which shows the optimal way to organize the space for certain tasks, like function
approximation.
In simpler terms, Delaunay Triangulation is like drawing lines between neighboring cities that share a
common boundary, creating a web of connections between them. This structure helps us in efficiently organizing
and processing data, especially in algorithms that need to approximate or interpolate between points.
Now, instead of sending the full data points, we only send indices referring to these prototypes:
(1, 2) → Prototype 1
(2, 3) → Prototype 1
(8, 9) → Prototype 2
(9, 10) → Prototype 2
By sending indices instead of full data points, the transmission becomes much more efficient.
This process is the basis of vector quantization and is used in various compression algorithms in modern
technology.
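A minimal sketch of the encode/decode step, assuming sender and receiver share the same small codebook; the two prototype vectors below are illustrative values chosen to sit near the example points, not prototypes given in the original text.

import numpy as np

# Assumed codebook shared by sender and receiver (two prototype vectors).
codebook = np.array([[1.5, 2.5],    # Prototype 1
                     [8.5, 9.5]])   # Prototype 2

def encode(points, codebook):
    """Replace each point by the index of its nearest prototype (lossy)."""
    d = np.linalg.norm(points[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def decode(indices, codebook):
    """The receiver looks the indices up in the same codebook."""
    return codebook[indices]

points = np.array([[1, 2], [2, 3], [8, 9], [9, 10]], dtype=float)
idx = encode(points, codebook)        # [0 0 1 1] -> only small indices are transmitted
print(idx, decode(idx, codebook))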
The Self-Organizing Feature Map (SOM)
The Self-Organizing Feature Map is a competitive learning network whose neurons are arranged on a grid; training organizes the grid so that it forms a low-dimensional, topology-preserving map of the input data.
Key Concepts
Feature Mapping: In SOM, the neurons are arranged in a grid, typically in a 2D grid (but can also be
1D). Neurons that are close to each other in this grid will respond to similar input patterns, while neurons
that are far apart will respond to very different inputs. This is known as topology preservation.
Topology Preservation: In simpler terms, topology preservation means that the structure of the data
(the relationship between different inputs) is maintained on the grid. If two inputs are similar, the neurons
that represent them will be near each other. If the inputs are very different, the neurons will be far apart.
However, since most input spaces have higher dimensions than the grid (which is usually 2D), it’s not
always possible to perfectly preserve the structure of the data.
Example
Let us say we have different sounds, such as:
Sound A: A low-pitch sound.
Sound B: A medium-pitch sound.
Sound C: A high-pitch sound.
In SOM, the neurons responding to Sound A and Sound B will be close together on the map because they are
similar in pitch. Neurons responding to Sound C will be farther away, as it is very different from Sound A and
Sound B.
Weight Adjustment
Once the BMU is identified, its weights are updated to become more similar to the input pattern. The
adjustment is based on the distance from the BMU to other neurons in the grid. The update rule can be
described as follows:
W_i(t+1) = W_i(t) + \alpha(t) \, h_{ci}(t) \, [X(t) - W_i(t)],
where:
– Wi (t) is the weight vector of neuron i at time t.
– α(t) is the learning rate, which decreases over time.
– hci (t) is the neighborhood function, which determines the influence of the BMU on its neighboring
neurons.
– X(t) is the input vector at time t.
Neurons that are close to the BMU also have their weights adjusted, but to a lesser extent, based on their
distance from the BMU. This is achieved through the Mexican Hat neighborhood function, described in more detail below.
Iteration and Convergence
The process of presenting input patterns, identifying the BMU, and adjusting weights is repeated for a
number of iterations. As training progresses, the learning rate and the size of the neighborhood decrease.
This allows the SOM to stabilize, clustering similar input patterns together while preserving the topological
relationships.
Result
After training, the SOM produces a 2D representation of the input space, where similar input patterns
are mapped to nearby neurons. This makes it easier to visualize and analyze complex data.
Example
Consider a dataset containing different types of fruits. Each fruit can be represented by features such as
color, size, and sweetness. When this dataset is fed into a SOM, fruits with similar characteristics will be
grouped together on the map. For example, apples and pears might be located close to each other, while
oranges are placed farther away, reflecting their different features.
SOM Process
1. Initialization: We start by initializing a grid of neurons, each with a random weight vector.
2. Input Presentation: For each input data point, we find the neuron whose weight vector is closest to
the input. This neuron is called the Best Matching Unit (BMU).
3. Update Weights: After finding the BMU, we adjust the weights of the BMU and its neighbors so that
they become closer to the input data point. The amount of adjustment depends on the distance from the
BMU: closer neurons are adjusted more, and farther neurons are adjusted less.
4. Repeat: This process is repeated for many input data points until the grid of neurons has organized itself
into a meaningful pattern that represents the structure of the data.
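The four steps translate into a compact training loop. The sketch below is a bare-bones SOM with a rectangular grid, a Gaussian neighbourhood, and exponentially decaying learning rate and neighbourhood width; the grid size, decay schedules, and function names are all illustrative choices rather than prescribed values.

import numpy as np

def train_som(data, grid_shape=(10, 10), n_iters=2000, eta0=0.5, sigma0=3.0, seed=0):
    """Minimal Self-Organizing Map: returns the trained grid of weight vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    dim = data.shape[1]
    # 1. Initialise each neuron with a random weight vector.
    weights = rng.normal(size=(rows, cols, dim))
    # Grid coordinates of every neuron, used for neighbourhood distances.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # 2. Best Matching Unit: the neuron whose weights are closest to the input.
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Decay the learning rate and neighbourhood width over time.
        frac = t / n_iters
        eta = eta0 * np.exp(-3.0 * frac)
        sigma = sigma0 * np.exp(-3.0 * frac)
        # 3. Update the BMU and its neighbours; neurons closer on the grid move more.
        grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))      # Gaussian neighbourhood
        weights += eta * h[..., None] * (x - weights)
    # 4. After many iterations the grid reflects the structure of the data.
    return weights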
Real-Time Example
Let us consider the example of classifying flower species (e.g., iris flowers) based on features like petal length,
petal width, sepal length, and sepal width. After training the SOM on flower data, the neurons that represent
similar species will be close to each other on the grid, while neurons representing very different species will be
farther apart. This helps us visualize the relationships between different species on a 2D map.
The Mexican Hat function resembles a sombrero, with a positive peak in the center that dips into a shallow negative "brim" before flattening out at larger distances.
Mathematically, it is defined as the second derivative of a Gaussian function, which means it captures
the local structure around a point.
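For reference, one common concrete form of such a function (the Ricker wavelet, proportional to the negative second derivative of a Gaussian) can be written in terms of the distance r from the centre as:

h(r) = \left(1 - \frac{r^2}{\sigma^2}\right) \exp\!\left(-\frac{r^2}{2\sigma^2}\right),

where σ sets the width of the central excitatory bump; the function is positive near the centre and dips negative further out, which is exactly the pull-close/push-away behaviour described next.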
Lateral Connections:
In SOM, when a neuron (let’s call it the Best Matching Unit, or BMU) is activated by an input,
it influences its neighboring neurons.
The strength of this influence diminishes with distance from the BMU. Neurons that are close to the
BMU receive a stronger pull, while those that are farther away experience weaker interactions.
Consider a scenario where the SOM is trained on images of animals, such as cats, dogs, and birds.
When an image of a cat is presented:
– The neuron that best matches the image (BMU) is activated.
– Nearby neurons, which may represent similar features (like whiskers or fur patterns), are pulled
closer in weight space, reinforcing their similarity to the cat image.
– Neurons further away, representing animals with very different features (like a bird), are pushed
away, preventing them from being wrongly associated with the cat.
Summary:
The Mexican Hat function in SOM embodies the principle of competitive learning and the importance
of local structure in data.
By establishing a mechanism for neurons to pull together similar features while pushing apart dis-
similar ones, it effectively allows the SOM to learn and represent complex relationships in high-
dimensional data spaces in a lower-dimensional grid.
Conclusion
The Self-Organizing Feature Map is a powerful tool for dimensionality reduction and pattern recognition. By
organizing neurons into a grid and preserving the topology of the input data, SOM allows us to visualize and
understand the relationships between different data points.
In summary:
SOM arranges neurons in a grid, where nearby neurons respond to similar inputs.
Topology preservation means that the grid tries to reflect the structure of the data.
Neurons interact through lateral connections, where winning neurons pull neighbors closer in weight space.
SOM is useful for tasks like data visualization and pattern recognition.
The Self-Organizing Feature Map Algorithm
Initialisation
– Choose a size (number of neurons) and number of dimensions d for the map.
– Either:
* Choose random values for the weight vectors so that they are all different, OR
* Set the weight values to increase in the direction of the first d principal components of the
dataset.
Learning
– Repeat:
* For each datapoint:
· Select the best-matching neuron nb using the minimum Euclidean distance between the
weights and the input:
n_b = \arg\min_j \| x - w_j^T \|
· Update the weight vector of all other neurons using:
w_j^T \leftarrow w_j^T + \eta_n(t) \, h(n_b, t) \, (x - w_j^T)
where η_n(t) is the learning rate for neighborhood nodes, and h(n_b, t) is the neighborhood
function that decides whether each neuron should be included in the neighborhood of the
winning neuron (so h = 1 for neighbors and h = 0 for non-neighbors).
· Reduce the learning rates and adjust the neighborhood function, typically by:
\eta(t+1) = \alpha \, \eta(t)^{k / k_{\max}}
where 0 ≤ α ≤ 1 decides how fast the size decreases, k is the number of iterations the
algorithm has been running for, and kmax is when learning is intended to stop. The same
equation is used for both learning rates (η, ηn ) and the neighborhood function h(nb, t).
– Until the map stops changing or some maximum number of iterations is exceeded.
Usage
– For each test point:
* Select the best-matching neuron nb using the minimum Euclidean distance between the weights
and the input:
n_b = \arg\min_j \| x - w_j^T \|
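For completeness, here is a small sketch of this usage step, assuming a trained weight grid of shape (rows, cols, dim) such as the one produced by the earlier training sketch; the function name is illustrative.

import numpy as np

def map_to_bmu(points, weights):
    """Return the grid coordinates of the best-matching neuron for each test point."""
    flat = weights.reshape(-1, weights.shape[-1])            # (rows*cols, dim)
    d = np.linalg.norm(points[:, None, :] - flat[None, :, :], axis=2)
    idx = d.argmin(axis=1)                                   # index of the nearest weight vector
    return np.stack(np.unravel_index(idx, weights.shape[:2]), axis=1)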
Self-Organization
The term self-organize in the context of Self-Organizing Maps (SOMs) refers to the ability of the algorithm to
automatically arrange and categorize input data based on its inherent structure, without the need for supervision
or explicit guidance. This self-organizing characteristic is a fundamental aspect of SOMs and contributes
significantly to their functionality and effectiveness in various applications.
The term also encompasses the emergence of complex behaviors from simple rules and interactions
among the neurons. Self-organization results in the emergence of patterns that represent the under-
lying structure of the data.
Feature Mapping:
The self-organizing property allows SOMs to learn meaningful representations of the input features.
Neurons that are activated by similar inputs become organized in close proximity on the map, making
it easier to interpret and analyze the relationships between different features.
Unsupervised Learning:
Self-organization enables SOMs to operate in an unsupervised learning framework. This is par-
ticularly advantageous when labeled data is scarce or unavailable. SOMs can discover underlying
structures and patterns in the data autonomously, which can then be used for further analysis or
modeling.
Robustness to Noise:
The self-organizing nature of SOMs makes them robust to noise and outliers in the data. By focus-
ing on the overall structure and relationships between data points, SOMs can effectively filter out
irrelevant variations and capture the essential patterns.
The concept of self-organization is central to the operation of Self-Organizing Maps. It indicates a form of
autonomous, adaptive learning that enables the algorithm to effectively capture and represent the structure of
complex data. By self-organizing, SOMs can cluster, reduce dimensions, and provide meaningful insights into
the data, making them valuable tools in various fields such as data analysis, image recognition, and pattern
recognition. Understanding the implications of self-organization enhances our ability to leverage SOMs for
practical applications and contributes to the advancement of unsupervised learning methodologies.
Key Points
The SOM organizes itself from random weights to structured groups after training, even on random data.
For real datasets like the Iris dataset, the SOM can separate different classes into distinct clusters, even
without knowing the class labels.
The Ecoli dataset presents a more challenging problem, but the SOM still manages to group data into
clusters, though less clearly than with simpler datasets.