
NPTEL Online Certification Courses

Indian Institute of Technology Kharagpur

Course Name – Introduction To Machine Learning


Assignment – Week 6 (Neural Networks)
TYPE OF QUESTION: MCQ/MSQ

Number of Questions: 10 Total Marks: 10x2 = 20

1. In training a neural network, we notice that the loss does not decrease in the first few
epochs. What could be the reason for this?

A) The learning rate is low.
B) The regularization parameter is high.
C) The optimizer is stuck at a local minimum.
D) All of these could be the reason.

Answer: D
The problem can occur due to any one of the reasons above.
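As a minimal sketch of one of these causes, the toy loop below (a hypothetical example, not from the assignment) minimizes f(w) = w² with gradient descent; with a very low learning rate the loss is essentially flat over the first few epochs, mimicking a "stuck" training curve:

```python
def loss(w):
    return w ** 2

def grad(w):
    return 2 * w

w = 5.0
low_lr = 1e-6          # learning rate far too small
losses = []
for epoch in range(5):
    losses.append(loss(w))
    w -= low_lr * grad(w)

# The loss barely changes across the first epochs.
flat = max(losses) - min(losses)
print(losses)
print(f"loss change over 5 epochs: {flat:.6f}")
```

A high regularization penalty or a saddle/local minimum can produce the same flat-looking curve, which is why all three options are plausible.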

2. What is the sequence of the following tasks in a perceptron?

I) Initialize the weights of the perceptron randomly.
II) Go to the next batch of the data set.
III) If the prediction does not match the output, change the weights.
IV) For a sample input, compute an output.

A) I, II, III, IV
B) IV, III, II, I
C) III, I, II, IV
D) I, IV, III, II

Answer: D
D is the correct sequence: first initialize the weights randomly (I), compute an output for a sample (IV), change the weights if the prediction does not match (III), and then move to the next batch (II).
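The sequence above can be sketched as a minimal perceptron training loop. This is an illustrative example (the AND-gate data set and all names are assumptions, not from the assignment):

```python
import random

random.seed(0)

# Tiny AND-gate data set: (inputs, target)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# I) Initialize the weights (and bias) randomly.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
lr = 0.1

def predict(x):
    # IV) For a sample input, compute an output (step activation).
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s >= 0 else 0

for epoch in range(25):
    for x, target in data:           # II) go to the next sample
        y = predict(x)
        if y != target:              # III) mismatch -> change the weights
            err = target - y
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err

print([predict(x) for x, _ in data])   # learns AND: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches zero errors in a finite number of updates.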

3. Suppose you have inputs as x, y, and z with values -2, 5, and -4 respectively. You have a
neuron ‘q’ and neuron ‘f’ with functions:

q=x+y
f=q*z

Graphical representation of the functions is as follows:


What is the gradient of f with respect to x, y, and z?

A) (-3, 4, 4)
B) (4, 4, 3)
C) (-4, -4, 3)
D) (3, -4, -4)

Answer: C
To calculate the gradient, we find (df/dx), (df/dy) and (df/dz). By the chain rule, df/dx = z = -4, df/dy = z = -4, and df/dz = q = x + y = 3.
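The forward and backward passes can be written out directly (a worked sketch of the chain-rule computation above):

```python
x, y, z = -2, 5, -4

# Forward pass
q = x + y          # 3
f = q * z          # -12

# Backward pass (chain rule)
df_dq = z          # d(q*z)/dq = z
df_dz = q          # d(q*z)/dz = q
df_dx = df_dq * 1  # dq/dx = 1, so df/dx = df/dq * dq/dx
df_dy = df_dq * 1  # dq/dy = 1, so df/dy = df/dq * dq/dy

print((df_dx, df_dy, df_dz))   # → (-4, -4, 3)
```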

4. A neural network can be considered as multiple simple equations stacked together. Suppose
we want to replicate the function for the below mentioned decision boundary.

Using two simple inputs h1 and h2,

What will be the final equation?

A) (h1 AND NOT h2) OR (NOT h1 AND h2)
B) (h1 OR NOT h2) AND (NOT h1 OR h2)
C) (h1 AND h2) OR (h1 OR h2)
D) None of these

Answer: A
Combining h1 and h2 with simple logical operations yields the complex boundary; option A is the XOR of h1 and h2.
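A quick sketch confirms that option A behaves as XOR over the two inputs (this truth-table check is illustrative, not part of the assignment):

```python
def final(h1, h2):
    # Option A: (h1 AND NOT h2) OR (NOT h1 AND h2)
    return (h1 and not h2) or (not h1 and h2)

for h1 in (False, True):
    for h2 in (False, True):
        print(h1, h2, final(h1, h2))
# True exactly when h1 != h2, i.e. the XOR decision boundary.
```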

5. Which of the following is true about model capacity (where model capacity means the
ability of a neural network to approximate complex functions)?

A) As the number of hidden layers increases, model capacity increases.
B) As the dropout ratio increases, model capacity increases.
C) As the learning rate increases, model capacity increases.
D) None of these.

Answer: A
Option A is correct: adding hidden layers adds parameters and non-linear compositions, enlarging the family of functions the network can represent. A higher dropout ratio removes more units during training and reduces effective capacity, and the learning rate affects optimization, not capacity.

6. First-order gradient descent would not work correctly (i.e. may get stuck) in which of the
following graphs?

A)

B)

C)

D) None of These.

Answer: B
This is a classic example of the saddle-point problem of gradient descent: the gradient vanishes at a saddle point even though it is not a minimum.
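A small numerical sketch of the failure (an illustrative example, not from the assignment): on f(x, y) = x² − y² the gradient is (2x, −2y). Starting exactly on the ridge y = 0, plain gradient descent slides to the saddle at (0, 0) and stops there, even though f decreases without bound along the y axis:

```python
x, y = 1.0, 0.0   # start on the ridge y = 0
lr = 0.1
for step in range(200):
    gx, gy = 2 * x, -2 * y   # gradient of f(x, y) = x**2 - y**2
    x -= lr * gx
    y -= lr * gy

print(round(x, 6), round(y, 6))   # → 0.0 0.0 (stuck at the saddle)
```

In practice, noise from stochastic gradients or momentum usually perturbs the iterate off the ridge, which is one reason plain first-order descent is the method that gets stuck here.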

7. Which of the following is true?

Single-layer associative neural networks do not have the ability to:

I) Perform pattern recognition
II) Find the parity of a picture
III) Determine whether two or more shapes in a picture are connected or not

A) II and III are true
B) II is true
C) All of the above
D) None of the above

Answer: A
Single-layer associative networks can perform pattern recognition, but they lack the ability to find the parity of a picture or to determine whether two or more shapes are connected, so statements II and III are true.

8. The network that involves backward links from the outputs to the inputs and hidden layers is
called

A) Self-organizing Maps
B) Perceptron
C) Recurrent Neural Networks
D) Multi-Layered Perceptron

Answer: C
Recurrent neural networks contain feedback connections that carry outputs or hidden states back to earlier layers.

9. The intersection of linear hyperplanes in a three-layer network can produce both convex and
non-convex surfaces. Is this statement true?
A) Yes
B) No

Answer: B
The intersection of the half-spaces defined by linear hyperplanes can only produce convex surfaces.

10. What is meant by the statement “Backpropagation is a generalized delta rule”?


A) Because backpropagation can be extended to hidden layer units
B) Because delta is applied only to the input and output layers, thus making it more
generalized
C) It has no significance
D) None of the above

Answer: A
The term "generalized" is used because the delta rule can be extended to hidden-layer units.
