Supervised Learning Details
Message from the Writer
Linear Regression Explained
Explanation
What are we trying to solve?
We are trying to find the best-fit line with minimal error. Given the height of a person, we can predict that person's weight with the help of the dataset.
Maths behind it
We fit a line y = mx + b that minimizes the sum of squared errors between the predicted and actual values. Given a person's height, this line predicts the person's weight.
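The idea above can be sketched as a tiny closed-form least-squares fit. The `fit_line` helper and the height/weight numbers below are illustrative assumptions, not from the original notes:

```python
# Fit y = m*x + b by minimizing the sum of squared errors (ordinary least squares).
# Closed-form solution: m = cov(x, y) / var(x), b = mean(y) - m * mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    m = cov_xy / var_x
    b = mean_y - m * mean_x
    return m, b

# Made-up sample data: heights (cm) -> weights (kg).
heights = [150, 160, 170, 180, 190]
weights = [50, 58, 66, 74, 82]

m, b = fit_line(heights, weights)
print(m, b)         # slope and intercept of the best-fit line
print(m * 175 + b)  # predicted weight for a 175 cm person
```

With a fitted slope and intercept in hand, predicting a weight is just evaluating the line at a new height.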
Quick Test
This is for you to check your understanding of the previous chapter.
Color defines the difficulty of the question:
Hard Question → RED
Medium Question → SKY
Easy Question → GREEN
Very Easy Question → YELLOW
- What is supervised learning?
💡Answer
Answer B
- Regression model in which more than one independent variable is used to predict the dependent variable is called
💡Answer
b) A multiple regression model
- A nearest neighbour approach is best used
💡Answer
Answer B
- Logistic regression is a ___________________ regression technique that is used to model data having a ________outcome.
💡Answer
d) non-linear, binary
- This supervised learning technique can process both numeric and categorical input attributes.
💡Answer
a) linear regression
- Machine learning techniques differ from statistical techniques in that machine learning methods
💡Answer
b) are better able to deal with missing and noisy data
- Which of the following methods do we use to best fit the data in Logistic Regression?
💡Answer
b) Maximum Likelihood
- Choose which of the following options is true regarding the One-vs-All method in Logistic Regression.
💡Answer
a) We need to fit n models in an n-class classification problem
- Suppose you applied a Logistic Regression model to a given dataset and got a training accuracy X and a testing accuracy Y. Now you want to add a few new features to the same data. Select the option(s) which is/are correct in such a case. Note: consider the remaining parameters to be the same.
💡Answer
A) Training accuracy increases
B) Training accuracy increases or remains the same
- Which of the following statements is true about outliers in Linear Regression?
💡Answer
a) Linear regression is sensitive to outliers
- Which of the following option is true?
💡Answer
a) In Linear Regression the error values have to be normally distributed, but in Logistic Regression this is not the case
- SVMs are less effective when:
💡Answer
c) The data is noisy and contains overlapping points
- The cost parameter in the SVM means:
💡Answer
c) The trade-off between misclassification and simplicity of the model
- Which of the following are real-world applications of the SVM?
💡Answer
d) All of these
- Which of the following is true about Naive Bayes?
💡Answer
c) Both A and B
- What do you mean by generalization error in terms of the SVM?
💡Answer
b) How accurately the SVM can predict outcomes for unseen data
- Which of the following is a disadvantage of decision trees?
💡Answer
c) Decision trees are prone to overfitting
- You run gradient descent for 15 iterations with α = 0.3 and compute J(θ) after each iteration. You find that the value of J(θ) decreases quickly and then levels off. Based on this, which of the following conclusions seems most plausible?
💡Answer
c) α = 0.3 is an effective choice of learning rate
- In linear regression, we try to __________ the least square error of the model to identify the line of best fit.
💡Answer
a) Minimize
- We can compute the coefficient of linear regression by using
💡Answer
a) gradient descent
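As a sketch of that answer, gradient descent can recover the same line-fitting coefficients by repeatedly stepping against the gradient of the mean squared error. The helper name, learning rate, and data below are illustrative assumptions; heights are centered first so one learning rate works for both parameters:

```python
# Minimal gradient-descent sketch for fitting y = m*x + b by minimizing MSE.
# Inputs are centered so the slope and intercept updates are decoupled.

def fit_line_gd(xs, ys, lr=0.001, steps=10000):
    n = len(xs)
    mean_x = sum(xs) / n
    xc = [x - mean_x for x in xs]  # centered inputs
    m, b = 0.0, 0.0
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((m*x + b - y)^2) w.r.t. m and b.
        grad_m = (2 / n) * sum((m * x + b - y) * x for x, y in zip(xc, ys))
        grad_b = (2 / n) * sum((m * x + b - y) for x, y in zip(xc, ys))
        m -= lr * grad_m
        b -= lr * grad_b
    # Undo the centering to recover the intercept on the original scale.
    return m, b - m * mean_x

# Made-up heights (cm) -> weights (kg); converges to the least-squares line.
heights = [150, 160, 170, 180, 190]
weights = [50, 58, 66, 74, 82]
m, b = fit_line_gd(heights, weights)
print(round(m, 3), round(b, 3))
```

The same coefficients can also be obtained in closed form; gradient descent is simply the iterative route, which scales better to many features.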
- What's the objective of the support vector machine algorithm?
💡Answer
d) None of these
- Which options are true for SVM?
💡Answer
d) SVM can solve the data points that are not linearly separable
- For SVM, which options are correct?
💡Answer
a) Support vectors are data points that are closer to the hyperplane and influence the position and orientation of the hyperplane
c) Deleting the support vectors will change the position of the hyperplane
- Decision trees are most powerful for
💡Answer
c) both (a) & (b)
- Which one of the following statements is TRUE for a Decision Tree?
💡Answer
b) In a decision tree, the entropy of a node decreases as we go down a decision tree.
- How do you choose the right node while constructing a decision tree?
💡Answer
d) An attribute having the highest information gain
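As a hedged illustration of that answer: information gain is the parent node's entropy minus the size-weighted entropy of the child nodes a split produces. The function names and toy labels below are made up for illustration:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(parent, children):
    """Entropy of the parent minus the size-weighted entropy of the children."""
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

# Toy example: a split that separates the classes perfectly gains the full entropy.
parent = ["yes", "yes", "no", "no"]
print(information_gain(parent, [["yes", "yes"], ["no", "no"]]))  # 1.0
```

When building the tree, the attribute whose split maximizes this quantity is chosen at each node.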
- Which of the following statements is False in the case of the KNN Algorithm?
💡Answer
c) KNN is used only for classification problem statements.
- In the Naive Bayes algorithm, suppose that the prior for class w1 is greater than that for class w2. Would the decision boundary shift towards region R1 (the region for deciding w1) or towards region R2 (the region for deciding w2)?
💡Answer
b) towards region R2