Kytcs.Blogspot.com: Machine Learning Unit - 4 MCQ


1. For the analysis of ML algorithms, we need
(A) Computational learning theory
(B) Statistical learning theory
(C) Both A & B
(D) None of these

Correct option is C


2. PAC stands for
(A) Probably Approximately Correct
(B) Probably Approx Correct
(C) Probably Approximate Computation
(D) Probably Approx Computation

Correct option is A


3. The ___________ of hypothesis h with respect to target concept c and distribution D is the probability that h will misclassify an instance drawn at random according to D.
(A) True Error
(B) Type 1 Error
(C) Type 2 Error
(D) None of these

Correct option is A
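As an illustration (a hypothetical concept and hypothesis, with D taken as the uniform distribution over the integers 0..99), the true error can be estimated by sampling from D and counting disagreements:

```python
import random

# Hypothetical example: target concept c and hypothesis h over integers 0..99,
# with D the uniform distribution. The true error of h is the probability that
# h disagrees with c on an instance drawn from D; here we estimate it by sampling.

def c(x):          # target concept: x is "positive" if x < 50
    return x < 50

def h(x):          # hypothesis: misclassifies exactly the instances 40..49
    return x < 40

def estimated_true_error(h, c, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    mistakes = sum(1 for _ in range(n_samples)
                   if h(x := rng.randrange(100)) != c(x))
    return mistakes / n_samples

print(estimated_true_error(h, c))  # close to 0.10 (10 of 100 instances disagree)
```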


4. Statement: The true error is defined over the entire instance space, not just the training data.
(A) True
(B) False

Correct option is A


5. Which areas does CLT comprise?
(A) Sample Complexity
(B) Computational Complexity
(C) Mistake Bound
(D) All of these

Correct option is D


6. Which area of CLT asks “How many examples do we need to find a good hypothesis?”
(A) Sample Complexity
(B) Computational Complexity
(C) Mistake Bound
(D) None of these

Correct option is A


7. Which area of CLT asks “How much computational power do we need to find a good hypothesis?”
(A) Sample Complexity
(B) Computational Complexity
(C) Mistake Bound
(D) None of these

Correct option is B


8. Which area of CLT asks “How many mistakes will we make before finding a good hypothesis?”
(A) Sample Complexity
(B) Computational Complexity
(C) Mistake Bound
(D) None of these

Correct option is C


9. (For questions 9 and 10) Can we say that concepts described by conjunctions of Boolean literals are PAC-learnable?
(A) Yes
(B) No

Correct option is A


10. How large is the hypothesis space when we have n Boolean attributes?
(A) |H| = 3^n
(B) |H| = 2^n
(C) |H| = 1^n
(D) |H| = 4^n

Correct option is A
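The count follows because each of the n Boolean attributes can appear in a conjunction as a positive literal, as a negated literal, or not at all; a minimal sketch:

```python
# Each of the n Boolean attributes can appear in a conjunction in one of
# 3 ways: as a positive literal, as a negated literal, or not at all.
# Hence there are |H| = 3**n syntactically distinct conjunctions.

def hypothesis_space_size(n):
    return 3 ** n

for n in (1, 2, 10):
    print(n, hypothesis_space_size(n))  # 3, 9, 59049
```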


11. The VC dimension of hypothesis space H1 is larger than the VC dimension of hypothesis space H2. Which of the following can be inferred from this?
(A) The number of examples required for learning a hypothesis in H1 is larger than the number of examples required for H2
(B) The number of examples required for learning a hypothesis in H1 is smaller than the number of examples required for H2.
(C) No relation to number of samples required for PAC learning.

Correct option is A


12. For a particular learning task, if the required error parameter ε changes from 0.1 to 0.01, how many more samples will be required for PAC learning?
(A) Same
(B) 2 times
(C) 1000 times
(D) 10 times

Correct option is D
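This follows from the standard PAC sample-complexity bound for a finite, consistent hypothesis space, m ≥ (1/ε)(ln|H| + ln(1/δ)): m scales linearly with 1/ε, so dividing ε by 10 multiplies the bound by roughly 10. A sketch (the values |H| = 3^10 and δ = 0.05 are illustrative assumptions):

```python
import math

# PAC sample-complexity bound for a finite, consistent hypothesis space:
#   m >= (1/eps) * (ln|H| + ln(1/delta))
# The bound is linear in 1/eps, so eps: 0.1 -> 0.01 costs about 10x more samples.

def pac_sample_bound(h_size, eps, delta):
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

m1 = pac_sample_bound(3**10, eps=0.1, delta=0.05)
m2 = pac_sample_bound(3**10, eps=0.01, delta=0.05)
print(m1, m2, m2 / m1)  # roughly a 10x increase
```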


13. Computational complexity of classes of learning problems depends on which of the following?
(A) The size or complexity of the hypothesis space considered by learner
(B) The accuracy to which the target concept must be approximated
(C) The probability that the learner will output a successful hypothesis
(D) All of these

Correct option is D


14. The instance-based learner is a ____________
(A) Lazy-learner
(B) Eager learner
(C) Can’t say

Correct option is A


15. When should we consider nearest neighbour algorithms?
(A) Instances map to points in ℝⁿ
(B) Not more than 20 attributes per instance
(C) Lots of training data
(D) None of these
(E) A, B & C

Correct option is E


16. What are the advantages of the nearest neighbour algorithm?
(A) Training is very fast
(B) Can learn complex target functions
(C) Don’t lose information
(D) All of these

Correct option is D


17. What are the difficulties with the k-nearest neighbour algorithm?
(A) Calculating the distance of the test case from all training cases
(B) Curse of dimensionality
(C) Both A & B
(D) None of these

Correct option is C


18. What if the target function is real-valued in the kNN algorithm?
(A) Calculate the mean of the k nearest neighbours
(B) Calculate the standard deviation of the k nearest neighbours
(C) None of these

Correct option is A
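A minimal sketch of kNN for a real-valued target (toy 1-D data, not from the quiz): the prediction is the mean of the target values of the k nearest training points.

```python
# k-NN regression: predict the mean y of the k training points nearest to x.

def knn_regress(train, x_query, k=3):
    # train: list of (x, y) pairs with scalar x and real-valued y
    nearest = sorted(train, key=lambda p: abs(p[0] - x_query))[:k]
    return sum(y for _, y in nearest) / k

data = [(1, 1.0), (2, 2.1), (3, 2.9), (10, 10.2)]
print(knn_regress(data, 2.5, k=3))  # mean of y at x = 1, 2, 3 -> 2.0
```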


19. What is/are true about Distance-weighted KNN?
(A) The weight of the neighbour is considered
(B) The distance of the neighbour is considered
(C) Both A & B
(D) None of these

Correct option is C


20. What is/are advantage(s) of Distance-weighted k-NN over k-NN?
(A) Robust to noisy training data
(B) Quite effective when a sufficiently large set of training data is provided
(C) Both A & B
(D) None of these

Correct option is C
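A minimal sketch of distance-weighted kNN classification (toy 1-D data, hypothetical labels): each of the k nearest neighbours votes with weight 1/d², so closer points count more, which is what makes the method robust to noisy training data.

```python
from collections import defaultdict

# Distance-weighted k-NN classification: each of the k nearest neighbours
# votes for its class with weight 1/d**2, so closer points count more.

def dw_knn_classify(train, x_query, k=3):
    # train: list of (x, label) pairs with scalar x
    nearest = sorted(train, key=lambda p: abs(p[0] - x_query))[:k]
    votes = defaultdict(float)
    for x, label in nearest:
        d = abs(x - x_query)
        votes[label] += float('inf') if d == 0 else 1.0 / d**2
    return max(votes, key=votes.get)

data = [(1.0, 'a'), (1.2, 'a'), (5.0, 'b'), (5.1, 'b')]
print(dw_knn_classify(data, 1.1, k=3))  # 'a' (the two close 'a' points dominate)
```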


21. What is/are advantage(s) of Locally Weighted Regression?
(A) Pointwise approximation of complex target function
(B) Earlier data has no influence on new data
(C) Both A & B
(D) None of these

Correct option is C


22. In LWR, the quality of the result depends on
(A) Choice of the function
(B) Choice of the kernel function K
(C) Choice of the hypothesis space H
(D) All of these

Correct option is D


23. How many types of layers are there in radial basis function neural networks?
(A) 3
(B) 2
(C) 1
(D) 4

Correct option is A


24. The neurons in the hidden layer contain Gaussian transfer functions whose outputs are _____________ proportional to the distance from the centre of the neuron.
(A) Directly
(B) Inversely
(C) equal
(D) None of these

Correct option is B
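A minimal sketch of a Gaussian RBF hidden neuron (sigma is the neuron's spread parameter, an assumption of this example): the output is largest at the centre and falls off as the input moves away from it.

```python
import math

# Gaussian transfer function of an RBF hidden neuron: output peaks at the
# neuron's centre and decays with squared distance from it, i.e. the output
# varies inversely with distance.

def rbf_output(x, centre, sigma=1.0):
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, centre))
    return math.exp(-d2 / (2 * sigma ** 2))

centre = (0.0, 0.0)
for x in [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]:
    print(x, rbf_output(x, centre))  # 1.0, then smaller and smaller
```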


25. PNN/GRNN networks have one neuron for each point in the training file, while RBF networks have a variable number of neurons that is usually
(A) less than the number of training points.
(B) greater than the number of training points
(C) equal to the number of training points
(D) None of these

Correct option is A


26. Which network is more accurate when the size of the training set is small to medium?
(A) PNN/GRNN
(B) RBF
(C) K-means clustering
(D) None of these

Correct option is A


27. What is/are true about RBF networks?
(A) A kind of supervised learning
(B) Design of NN as curve fitting problem
(C) Use of multidimensional surface to interpolate the test data
(D) All of these

Correct option is D


28. What is/are application(s) of CBR?
(A) Design
(B) Planning
(C) Diagnosis
(D) All of these

Correct option is D


29. What is/are advantages of CBR?
(A) A local approximation is found for each test case
(B) Knowledge is in a form understandable to human
(C) Fast to train
(D) All of these

Correct option is D


30. In the k-NN algorithm, given a set of training examples and a value of k < n (the size of the training set), the algorithm predicts the class of a test example to be the:
(A) Least frequent class among the classes of k closest training examples.
(B) Most frequent class among the classes of k closest training examples.
(C) Class of the closest point.
(D) Most frequent class among the classes of the k farthest training examples.

Correct option is B
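A minimal sketch of plain kNN classification (toy 1-D data, hypothetical labels): the predicted class is the most frequent class among the k closest training examples.

```python
from collections import Counter

# Plain k-NN classification: majority vote among the k closest training points.

def knn_classify(train, x_query, k=3):
    # train: list of (x, label) pairs with scalar x
    nearest = sorted(train, key=lambda p: abs(p[0] - x_query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

data = [(1, 'a'), (2, 'a'), (3, 'b'), (8, 'b'), (9, 'b')]
print(knn_classify(data, 2.2, k=3))  # 'a' (two of the three closest are 'a')
```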


31. Which of the following statements is/are true about PCA?
(i) We must standardize the data before applying PCA.
(ii) We should select the principal components which explain the highest variance
(iii) We should select the principal components which explain the lowest variance
(iv) We can use PCA for visualizing the data in lower dimensions
(A) (i), (ii) and (iv).
(B) (ii) and (iv)
(C) (iii) and (iv)
(D) (i) and (iii)

Correct option is A
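The three true statements can be sketched in a few lines (toy random data, eigendecomposition of the covariance matrix; an assumption of this example rather than the only way to compute PCA):

```python
import numpy as np

# PCA sketch: (i) standardize, (ii) keep the components explaining the most
# variance, (iv) project onto them to visualize the data in 2-D.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # toy data: 100 samples, 5 features

Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # (i) standardize each feature
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]             # (ii) highest variance first
components = eigvecs[:, order[:2]]
Z = Xs @ components                           # (iv) 2-D projection
print(Z.shape)  # (100, 2)
```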

