Deep Learning Regularization Guide
Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and
capacity that overfitting can be a serious problem if the training dataset is not big enough: the model may do
well on the training set, but the learned network doesn't generalize to new examples that it has never
seen!
You will learn to: Use regularization in your deep learning models.
In [1]:
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They
would like you to recommend positions where France's goal keeper should kick the ball so that the French
team's players can then hit it with their head.
They give you the following 2D dataset from France's past 10 games.
In [3]:
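A plausible sketch of this cell (an assumption; load_2D_dataset is imported from reg_utils above and, under that assumption, also draws the scatter plot):

# Load the 2D dataset and split it into train/test sets
train_X, train_Y, test_X, test_Y = load_2D_dataset()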
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her
head after the French goal keeper has shot the ball from the left side of the football field.
If the dot is blue, it means the French player managed to hit the ball with his/her head
If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the
ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper
left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you
will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of
"lambda" because "lambda" is a reserved keyword in Python.
in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement L2 regularization and dropout.
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented.
Take a look at the code below to familiarize yourself with the model.
In [4]:
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
    learning_rate -- learning rate of the optimization
    num_iterations -- number of iterations of the optimization loop
    print_cost -- If True, print the cost every 10000 iterations
    lambd -- regularization hyperparameter, scalar
    keep_prob - probability of keeping a neuron active during drop-out, scalar.

    Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """

    grads = {}
    costs = []                            # to keep track of the cost
    m = X.shape[1]                        # number of examples
    layers_dims = [X.shape[0], 20, 3, 1]

    # Initialize parameters dictionary.
    parameters = initialize_parameters(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        if keep_prob == 1:
            a3, cache = forward_propagation(X, parameters)
        elif keep_prob < 1:
            a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)

        # Cost function
        if lambd == 0:
            cost = compute_cost(a3, Y)
        else:
            cost = compute_cost_with_regularization(a3, Y, parameters, lambd)

        # Backward propagation.
        assert(lambd == 0 or keep_prob == 1)   # it is possible to use both L2 regularization and dropout,
                                               # but this assignment will only explore one at a time
        if lambd == 0 and keep_prob == 1:
            grads = backward_propagation(X, Y, cache)
        elif lambd != 0:
            grads = backward_propagation_with_regularization(X, Y, cache, lambd)
        elif keep_prob < 1:
            grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 10000 iterations, and record it every 1000 iterations
        if print_cost and i % 10000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
        if print_cost and i % 1000 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (x1,000)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
In [5]:
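A plausible sketch of this cell (an assumption; model() is defined above and predict() is imported from reg_utils):

# Train the baseline model: lambd = 0 and keep_prob = 1, i.e. no regularization and no dropout
parameters = model(train_X, train_Y)
print("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)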
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe
the impact of regularization on this model). Run the following code to plot the decision boundary of your
model.
In [6]:
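A plausible sketch of the plotting cell (an assumption; plot_decision_boundary and predict_dec are imported from reg_utils above, and the axis limits are illustrative):

# Plot the decision boundary learned by the unregularized model
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75, 0.40])
axes.set_ylim([-0.75, 0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)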
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look
at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your
cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right) \tag{1}$$

To:

$$J_{regularized} = \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right)}_{\text{cross-entropy cost}} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l \sum\limits_k \sum\limits_j W_{k,j}^{[l]2}}_{\text{L2 regularization cost}} \tag{2}$$

To calculate $\sum\limits_k \sum\limits_j W_{k,j}^{[l]2}$, use np.sum(np.square(Wl)). Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $\frac{1}{m} \frac{\lambda}{2}$.

Exercise: Implement compute_cost_with_regularization(), which computes the cost given by formula (2).
In [11]:
# GRADED FUNCTION: compute_cost_with_regularization

def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """
    Implement the cost function with L2 regularization. See formula (2) above.

    Arguments:
    A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    parameters -- python dictionary containing parameters of the model

    Returns:
    cost - value of the regularized loss function (formula (2))
    """
    m = Y.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    W3 = parameters["W3"]

    cross_entropy_cost = compute_cost(A3, Y)  # this gives you the cross-entropy part of the cost
    L2_regularization_cost = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    cost = cross_entropy_cost + L2_regularization_cost

    return cost
In [12]:
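A plausible sketch of the test cell, assuming testCases provides a compute_cost_with_regularization_test_case() helper and that lambd = 0.1 was used:

A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))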
cost = 1.78648594516
Expected Output:
cost = 1.78648594516
Of course, because you changed the cost, you have to change backward propagation as well! All the
gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The
changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} \left( \frac{1}{2} \frac{\lambda}{m} W^2 \right) = \frac{\lambda}{m} W$).
In [13]:
# GRADED FUNCTION: backward_propagation_with_regularization

def backward_propagation_with_regularization(X, Y, cache, lambd):
    """
    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation()
    lambd -- regularization hyperparameter, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter,
                 activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd / m) * W3   # add the regularization term's gradient
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd / m) * W2   # add the regularization term's gradient
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T) + (lambd / m) * W1    # add the regularization term's gradient
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
    return gradients
In [14]:
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
Expected Output:
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
Let's now run the model with L2 regularization (λ = 0.7). The model() function will call compute_cost_with_regularization instead of compute_cost, and backward_propagation_with_regularization instead of backward_propagation.
In [19]:
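A plausible sketch of this cell (an assumption), reusing the earlier training call with lambd set to 0.7:

# Train the model with L2 regularization
parameters = model(train_X, train_Y, lambd = 0.7)
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)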
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
In [16]:
Observations:
The value of λ is a hyperparameter that you can tune using a dev set.
L2 regularization makes your decision boundary smoother. If λ is too large, it is also possible to
"oversmooth", resulting in a model with high bias.
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large
weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to
smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in
which the output changes more slowly as the input changes.
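One way to see the shrinking effect is to plug the extra gradient term into the gradient descent update. Writing $\alpha$ for the learning rate and $dW^{[l]}_{\text{data}}$ for the unregularized part of the gradient (both symbols are introduced here just for this illustration):

$$W^{[l]} := W^{[l]} - \alpha \left( dW^{[l]}_{\text{data}} + \frac{\lambda}{m} W^{[l]} \right) = \left( 1 - \frac{\alpha \lambda}{m} \right) W^{[l]} - \alpha \, dW^{[l]}_{\text{data}}$$

so every update first multiplies the weights by a factor slightly smaller than 1, which is why L2 regularization is also known as "weight decay".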
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning. It randomly shuts
down some neurons in each iteration. Watch these two videos to see what this means!
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at
each iteration, you train a different model that uses only a subset of your neurons. With dropout, your
neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron
might be shut down at any time.
Instructions: You would like to shut down some neurons in the first and second layers. To do that, you are
going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using
np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized
implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} \dots d^{[1](m)}]$ of the same dimension
as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (keep_prob), and 0 otherwise.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons.) You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
Hint (for Step 2): Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop
out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1
and about 20% are 0. This python statement:
X = (X < keep_prob).astype(int)
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :
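For illustration, one way to write that if-else loop (assuming x is a 1-D numpy array of random values between 0 and 1):

for i, v in enumerate(x):
    if v < keep_prob:
        x[i] = 1    # keep this unit
    else:
        x[i] = 0    # shut this unit down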
In [20]:
# GRADED FUNCTION: forward_propagation_with_dropout

def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    """
    Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob - probability of keeping a neuron active during drop-out, scalar

    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1, 1)
    cache -- tuple, information stored for computing the backward propagation
    """

    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)          # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(A1.shape[0], A1.shape[1])      # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = (D1 < keep_prob).astype(int)                  # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = np.multiply(A1, D1)                           # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob                                # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(A2.shape[0], A2.shape[1])      # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < keep_prob).astype(int)                  # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = np.multiply(A2, D2)                           # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob                                # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
In [21]:
Expected Output:
Instruction: Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask D[1]
to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same
mask D[1] to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll
therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if A[1] is scaled
by keep_prob, then its derivative dA[1] is also scaled by the same keep_prob).
In [22]:
# GRADED FUNCTION: backward_propagation_with_dropout

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob - probability of keeping a neuron active during drop-out, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter,
                 activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA2 = np.multiply(dA2, D2)    # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 / keep_prob         # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA1 = np.multiply(dA1, D1)    # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 / keep_prob         # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
    return gradients
In [23]:
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
Expected Output:
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
Let's now run the model with dropout (keep_prob = 0.86). That means at every iteration you shut down
each neuron of layers 1 and 2 with 14% probability. The function model() will now call forward_propagation_with_dropout instead of forward_propagation, and backward_propagation_with_dropout instead of backward_propagation.
In [24]:
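A plausible sketch of this cell (an assumption), reusing the earlier training call with keep_prob set to 0.86:

# Train the model with dropout
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)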
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the
training set and does a great job on the test set. The French football team will be forever grateful to you!
In [25]:
Note:
A common mistake when using dropout is to use it both in training and testing. You should use
dropout (randomly eliminate nodes) only in training.
Deep learning frameworks like tensorflow
(https://www.tensorflow.org/api_docs/python/tf/nn/dropout), PaddlePaddle
(http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), keras
(https://keras.io/layers/core/#dropout) or caffe
(http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer
implementation. Don't stress - you will soon learn some of these frameworks.
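For example, a minimal sketch using the Keras Dropout layer (the 0.2 rate, i.e. 1 - keep_prob, and the layer sizes are illustrative, not from this assignment):

import tensorflow as tf

# Dropout is only active during training; model(x, training=False) and model.predict() skip it.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dropout(0.2),   # randomly drop ~20% of the previous layer's activations while training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])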
4 - Conclusions
Note that regularization hurts training set performance! This is because it limits the ability of the network to
overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)