Machine Learning Test 2

This document records an exam attempt from the GO Classes DA Test Series 2025 (Machine Learning, Topic Wise Test 2). It includes summary statistics such as the number of questions attempted, correct answers, and penalty marks, along with per-question statistics and correct answers. The exam consists of 15 questions, with a total duration of 45 minutes and a total score of 20 marks.

Exam Summary (GO Classes DA Test Series 2025 | Machine Learning | Topic Wise Test 2)

Qs. Attempted:       0  (0 + 0)      Correct Marks:    0  (0 + 0)
Correct Attempts:    0  (0 + 0)      Penalty Marks:    0  (0 + 0)
Incorrect Attempts:  0  (0 + 0)      Resultant Marks:  0  (0 + 0)
Total Questions:     15 (10 + 5)     Total Marks:      20 (10 + 10)
Exam Duration:       45 Minutes      Time Taken:       0 Minutes

(The "a + b" splits separate the 1-mark and 2-mark questions: 10 one-mark plus 5 two-mark questions, worth 10 + 10 = 20 marks.)

Q #1 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

Consider a dataset with two features and three points: xq = (0, 0) is the query point, and the other points
are x1 = (1, 1) and x2 = (1.5, 0). The task is to determine the 1-Nearest-Neighbor (1-NN) for the query
point xq using two different distance metrics: Euclidean and Manhattan distances.
Which of the following statements is true about the nearest neighbor of the query point xq based on the
distance metric used?

A. x1 is the nearest neighbor according to both Euclidean and Manhattan distances.


B. x2 is the nearest neighbor according to both Euclidean and Manhattan distances.
C. x1 is the nearest neighbor according to Euclidean distance, and x2 is the nearest neighbor according
to Manhattan distance.
D. x2 is the nearest neighbor according to Euclidean distance, and x1 is the nearest neighbor according
to Manhattan distance.

Your Answer: Not Attempted | Correct Answer: C | Time taken: 00min 03sec
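(Added note, not part of the original exam: a minimal Python sketch to verify the answer. From xq = (0, 0), the Euclidean distances are √2 ≈ 1.414 to x1 and 1.5 to x2, while the Manhattan distances are 2 to x1 and 1.5 to x2, so the nearest neighbor flips between metrics, matching option C.)

import math

xq = (0, 0)
points = {"x1": (1, 1), "x2": (1.5, 0)}

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

for name, metric in (("Euclidean", euclidean), ("Manhattan", manhattan)):
    nearest = min(points, key=lambda k: metric(points[k], xq))
    print(name, "->", nearest)
# prints: Euclidean -> x1, then Manhattan -> x2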

Q #2 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

Consider the following training set in a two-dimensional feature space:


(x, y)     Class
(-1, 1)    -
(0, 1)     +
(0, 2)     -
(1, -1)    -
(1, 0)     +
(1, 2)     +
(2, 2)     -
(2, 3)     +

What are the predictions of the k-nearest-neighbor classifier at the point (1, 1) for different values of k?

A. 3-nearest: +, 5-nearest: +, 7-nearest: -


B. 3-nearest: -, 5-nearest: -, 7-nearest: -
C. 3-nearest: +, 5-nearest: -, 7-nearest: +
D. 3-nearest: -, 5-nearest: +, 7-nearest: +

Your Answer: Not Attempted | Correct Answer: A | Time taken: 00min 00sec
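(Added verification sketch, assuming Euclidean distance, the usual default when no metric is stated. The three closest points to (1, 1) are all +, the next four are all -, so the vote flips between k = 5 and k = 7, matching option A.)

from collections import Counter

train = [((-1, 1), "-"), ((0, 1), "+"), ((0, 2), "-"), ((1, -1), "-"),
         ((1, 0), "+"), ((1, 2), "+"), ((2, 2), "-"), ((2, 3), "+")]
q = (1, 1)

# Rank training points by squared Euclidean distance to the query point.
ranked = sorted(train, key=lambda t: (t[0][0] - q[0]) ** 2 + (t[0][1] - q[1]) ** 2)

for k in (3, 5, 7):
    votes = Counter(label for _, label in ranked[:k])
    print(k, "->", votes.most_common(1)[0][0])
# prints: 3 -> +, 5 -> +, 7 -> -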

Q #3 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

Imagine you are using a k-Nearest Neighbor classifier on a dataset with a lot of noise. You want your classifier to be less sensitive to the noise. Which of the following is likely to help, and with what side effect?

A. Increase the value of k → Increase in prediction time


B. Decrease the value of k → Increase in prediction time
C. Increase the value of k → Decrease in prediction time
D. Decrease the value of k → Decrease in prediction time

Your Answer: Not Attempted | Correct Answer: A | Time taken: 00min 00sec

Q #4 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

The figure (not reproduced here) shows the decision boundaries (DB) for two nearest-neighbour classifiers, 1-NN and 3-NN. Consider the following statements and choose the correct option (True/False for each of statements 1/2/3/4).
1. DB1 belongs to 3NN while DB2 belongs to 1NN .
2. DB1 belongs to 1NN while DB2 belongs to 3NN .
3. DB1 gives zero test error.
4. DB1 gives zero training error.

A. 1: True, 2: False, 3: True, 4: False


B. 1: False, 2: True, 3: False, 4: True
C. 1: False, 2: True, 3: True, 4: False
D. 1: True, 2: False, 3: False, 4: True

Your Answer: Not Attempted | Correct Answer: B | Time taken: 00min 00sec

Q #5 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

You are building a k-NN classifier to filter spam emails. You divide 100 labeled emails into a training set (60 emails), a validation set (20 emails), and a test set (20 emails). For k = 1, you get a training error of 0.0% and a validation error of 11.2%. For k = 2, you get a training error of 8.5% and a validation error of 9.4%. Which classifier should you expect to have a lower error on the test set, and why?

A. The k = 1 classifier, because it has a 0.0% training error.


B. The k = 2 classifier, because it has a lower validation error.
C. The k = 1 classifier, because it will generalize better with less training error.
D. Both classifiers are equally likely to have the same error on the test set.

Your Answer: Not Attempted | Correct Answer: B | Time taken: 00min 00sec

Q #6 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

You are using a 1-NN classifier to filter spam emails. Given a training set of size n and a validation set of
size m, and assuming that the distance calculation takes constant time, what is the asymptotic time
complexity of computing the validation error? (Use big-O notation in terms of n and m.)

A. O(n)
B. O(m)
C. O(n + m)
D. O(n ⋅ m)

Your Answer: Not Attempted | Correct Answer: D | Time taken: 00min 00sec
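(Added illustration: the O(n · m) cost is visible directly in the nested-loop structure of a naive 1-NN evaluation. A sketch with hypothetical 1-D data:)

def validation_error(train, val):
    # For each of the m validation points, scan all n training points:
    # n * m constant-time distance computations in total.
    errors = 0
    for xv, yv in val:
        _, y_pred = min(train, key=lambda t: abs(t[0] - xv))
        errors += (y_pred != yv)
    return errors / len(val)

train = [(0.0, 0), (1.0, 1), (2.0, 0)]   # n = 3 (made-up data)
val = [(0.2, 0), (1.1, 1)]               # m = 2
print(validation_error(train, val))      # 0.0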

Q #7 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

The table below shows the test set for a 1-nearest-neighbor classifier that uses Manhattan distance, i.e., the distance between two points at coordinates p and q is |p − q|. The only attribute, X, is real-valued, and the label, Y, has two classes, 0 and 1. Suppose a subset containing n ≤ 8 examples is selected from this set to train the classifier, and the accuracy of the classifier is 100 percent when tested on this set (with all 8 examples). What is the smallest possible value for n? In case of ties in distance, use the example with the smallest X value as the neighbor.

X:  -5  -4  -1   0   1   3   4   8
Y:   0   1   0   0   0   0   0   1
A. 2
B. 3
C. 4
D. 5

Your Answer: Not Attempted | Correct Answer: C | Time taken: 00min 00sec
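(Added brute-force check, not part of the exam: enumerate candidate subsets by size, applying the stated tie-breaking rule, and stop at the first size that classifies all 8 examples correctly. It reports n = 4, e.g. the subset {-5, -4, 0, 8}.)

from itertools import combinations

data = list(zip([-5, -4, -1, 0, 1, 3, 4, 8],
                [ 0,  1,  0, 0, 0, 0, 0, 1]))

def predict(subset, x):
    # 1-NN with Manhattan distance; ties go to the smaller X value.
    return min(subset, key=lambda p: (abs(p[0] - x), p[0]))[1]

for n in range(1, 9):
    for s in combinations(data, n):
        if all(predict(s, x) == y for x, y in data):
            print(n, s)   # 4 ((-5, 0), (-4, 1), (0, 0), (8, 1))
            break
    else:
        continue
    break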

Q #8 Multiple Select Type Award: 1 Penalty: 0 Machine Learning

Which of the following statements about k-Nearest Neighbor (k-NN) are true in a classification setting, and for all k? Select all that apply.

A. The training error of a 1-NN will always be better than that of a 5-NN.
B. The test error of a 1-NN will always be better than that of a 5-NN.
C. The decision boundary of the k-NN classifier is linear.
D. The time needed to classify a test example with the k-NN classifier grows with the size of the training set.

Your Answer: Not Attempted | Correct Answer: A;D | Time taken: 00min 00sec

Q #9 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

Which of the following is true about the k-nearest neighbors (KNN) algorithm?

A. It is a parametric model.
B. It learns a nonlinear decision boundary between classes.
C. It requires a separate training phase and testing phase for prediction.
D. It typically requires longer training compared to other ML algorithms.

Your Answer: Not Attempted | Correct Answer: B | Time taken: 00min 00sec

Q #10 Multiple Choice Type Award: 1 Penalty: 0.33 Machine Learning

In k-nearest-neighbor (kNN) regression, the prediction of Y at a point x0 is given by the average of the Y values at the k neighbors closest to x0. Denote the ℓ-th nearest neighbor to x0 by x(ℓ) and its corresponding Y value by y(ℓ). Which of the following expresses the prediction f̂(x0) in terms of y(ℓ), for 1 ≤ ℓ ≤ k?

A. f̂(x0) = (1/k) · Σℓ=1..k y(ℓ)
B. f̂(x0) = (1/k) · Σℓ=1..k y(ℓ) · e^(−||x(ℓ) − x0||)
C. f̂(x0) = (1/k) · Σℓ=1..k (y(ℓ) + (x(ℓ) − x0)²)
D. f̂(x0) = (1/k) · Σℓ=1..k (y(ℓ) − (x(ℓ) − x0))

Your Answer: Not Attempted | Correct Answer: A | Time taken: 00min 00sec
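(Added sketch: option A is simply the mean of the k nearest targets; made-up 1-D data.)

def knn_regress(train, x0, k):
    # Average the y-values of the k training points whose x is closest to x0.
    nearest = sorted(train, key=lambda p: abs(p[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

train = [(0.0, 1.0), (1.0, 2.0), (2.0, 4.0), (3.0, 8.0)]
print(knn_regress(train, x0=1.2, k=2))   # (2.0 + 4.0) / 2 = 3.0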

Q #11 Multiple Choice Type Award: 2 Penalty: 0.67 Machine Learning

Let S1 and S2 be two separate training data sets. Let hk(Si, x) be the binary classifier function which takes a training data set Si and a test example x, and outputs the result of the k-nearest-neighbor classifier trained on Si, evaluated on the test example x. (So, for example, h5(S1, x) is the output of the 5-nearest-neighbor classifier trained on S1 on the test example x.) Assume that the two labels are 0 and 1. Consider the following two statements:

(a) If h1(S1, x) = 1 and h1(S2, x) = 1, then h1(S1 ∪ S2, x) = 1.
(b) If h3(S1, x) = 1 and h3(S2, x) = 1, then h3(S1 ∪ S2, x) = 1.
Which of the following statements is correct?

A. Only (a) is correct.


B. Only (b) is correct.
C. Both (a) and (b) are correct.
D. Both (a) and (b) are incorrect.

Your Answer: Not Attempted | Correct Answer: B | Time taken: 00min 00sec

Q #12 Multiple Choice Type Award: 2 Penalty: 0.67 Machine Learning

Given the following training set of four labeled points:

(−0.8, 0), (−0.4, 1), (0.2, 1), (0.8, 0)

What is the decision boundary of a 1-NN classifier using this training set?

A. h(x) = 1 if −0.4 ≤ x ≤ 0.8, 0 otherwise
B. h(x) = 1 if −0.8 ≤ x ≤ 0.2, 0 otherwise
C. h(x) = 1 if −0.6 ≤ x ≤ 0.5, 0 if x < −0.6 or x > 0.5
D. h(x) = 1 if 0.2 ≤ x ≤ 0.8, 0 otherwise

Your Answer: Not Attempted | Correct Answer: C | Time taken: 00min 00sec
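(Added numeric check: in one dimension the 1-NN boundaries fall at the midpoints between adjacent opposite-label points, (−0.8 + (−0.4))/2 = −0.6 and (0.2 + 0.8)/2 = 0.5, matching option C.)

train = [(-0.8, 0), (-0.4, 1), (0.2, 1), (0.8, 0)]

def h(x):
    # 1-NN prediction in one dimension.
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Probe just inside and outside the claimed boundary [-0.6, 0.5]:
for x in (-0.61, -0.59, 0.49, 0.51):
    print(x, "->", h(x))
# prints: -0.61 -> 0, -0.59 -> 1, 0.49 -> 1, 0.51 -> 0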

Q #13 Multiple Choice Type Award: 2 Penalty: 0.67 Machine Learning

A 1-NN classifier will have non-zero training error on a data set with three points (x, y) if:

A. This is not possible for a 1-NN classifier with any data set.
B. Two or more points have the same x-value but different labels.
C. The points are linearly separable.
D. The distance between all points is equal.

Your Answer: Not Attempted | Correct Answer: B | Time taken: 00min 00sec
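(Added illustration of option B, with made-up points: when two coincident points carry different labels, whichever of them the tie-break selects, one of the pair is misclassified, so the training error is non-zero.)

train = [((0, 0), "a"), ((0, 0), "b"), ((5, 5), "a")]

def predict(x):
    # 1-NN; Python's min keeps the first of equally distant candidates.
    return min(train, key=lambda t: (t[0][0] - x[0]) ** 2 + (t[0][1] - x[1]) ** 2)[1]

errors = sum(predict(x) != y for x, y in train)
print(errors)   # 1: the duplicate labeled "b" is predicted as "a"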

Q #14 Multiple Select Type Award: 2 Penalty: 0 Machine Learning

Which of the following statements is/are TRUE about the k-NN classifier?

A. If point A is among point B's k-nearest neighbors, then point B is always among point A's k-nearest neighbors.
B. On a training set with 1000 positive items and 1000 negative items, it is possible for 1-NN to have a training set accuracy of 0%.
C. Multiplying each dimension of every item's feature vector by 0.1 (assuming Euclidean distance) will not change k-NN results.
D. As the value of k used in a k-NN classifier is incrementally increased from 1 to n (the total number of training examples), the classification accuracy on the training set will always increase.

Your Answer: Not Attempted | Correct Answer: C | Time taken: 00min 00sec
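(Added demonstration of option C, with made-up points: multiplying every feature by the same positive constant scales all pairwise Euclidean distances by that constant, so neighbor rankings, and hence k-NN outputs, are unchanged.)

import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

pts = [(1, 2), (3, 0), (0, 5)]
query = (2, 2)
scale = 0.1

order = sorted(pts, key=lambda p: euclidean(p, query))
scaled = sorted(pts, key=lambda p: euclidean(
    tuple(scale * v for v in p), tuple(scale * v for v in query)))
print(order == scaled)   # True: uniform scaling preserves the ranking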

Q #15 Multiple Choice Type Award: 2 Penalty: 0.67 Machine Learning

You are using a Probabilistic 3NN model for binary classification with labels {−1, +1}. The model predicts the label of a query point based on the relative proportion of labels among its 3 nearest neighbors. For example, if the 3 nearest neighbors have labels {−1, −1, +1}, the model assigns a probability of 2/3 to label −1 and 1/3 to label +1.
Given the 5 nearest neighbors and their distances from the query point q:

Neighbor    Distance from q    Label
N1          1.2                -1
N2          1.5                +1
N3          1.7                -1
N4          2.0                -1
N5          2.5                +1

What is the predicted probability P (y = −1) using the 3 nearest neighbors?

A. 1/3
B. 2/3
C. 3/4
D. 1/2

Your Answer: Not Attempted | Correct Answer: B | Time taken: 00min 00sec
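(Added check: the 3 nearest neighbors are N1, N2, N3 with labels −1, +1, −1, so P(y = −1) = 2/3, option B.)

from collections import Counter

neighbors = [(1.2, -1), (1.5, +1), (1.7, -1), (2.0, -1), (2.5, +1)]
top3 = sorted(neighbors)[:3]                  # three smallest distances
counts = Counter(label for _, label in top3)
print(counts[-1] / 3)                         # 0.666... = 2/3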
