-
What is an alternative name for a Confusion Matrix? A. *Error Matrix B. Result Matrix C. Confidence Matrix D. Assessment Matrix
-
Which evaluation metric is better suited than plain accuracy for measuring performance on imbalanced datasets? A. *F1-score B. Precision C. MAE D. AUC-ROC Curve
-
Which of the following metrics measures how far the prediction is from the actual outcome? A. *Mean Absolute Error (MAE) B. Accuracy C. F1-score D. Precision
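A minimal sketch of the MAE calculation behind this question, using illustrative (made-up) actual and predicted values:

```python
# Mean Absolute Error: the average absolute difference between
# predictions and actual outcomes.
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [3.0, 5.0, 2.5, 7.0]     # illustrative ground-truth values
predicted = [2.5, 5.0, 4.0, 8.0]  # illustrative model predictions
print(mean_absolute_error(actual, predicted))  # 0.75
```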
-
What is the purpose of the AUC-ROC curve? A. Diagnose underfitting and overfitting B. Measure the variance of predictions C. *Measure the discriminative power of the model D. Determine accuracy and precision
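One way to see the "discriminative power" interpretation: AUC-ROC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A pairwise-comparison sketch with illustrative labels and scores:

```python
# AUC-ROC via pairwise comparison: fraction of (positive, negative)
# pairs where the positive example receives the higher score
# (ties count as half a win).
def auc_roc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_roc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```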
-
During the training of a Machine Learning model, if the model adapts too closely to the training data, it will perform well on that data but poorly on future, unseen data. This is called _____. A. Confusion B. From-To Error C. *Overfitting D. Undermining
-
A Confusion Matrix can be used to measure which of the following? A. *Precision and Recall B. Accuracy and MAE C. MAE and MSE D. Mean and Range
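A short sketch of how precision and recall fall out of confusion-matrix counts; the TP/FP/FN/TN values below are illustrative, not taken from the quiz:

```python
# Precision and recall computed from confusion-matrix counts.
tp, fp, fn, tn = 40, 10, 20, 30  # illustrative true/false positives/negatives

precision = tp / (tp + fp)  # of predicted positives, how many were correct
recall = tp / (tp + fn)     # of actual positives, how many were found

print(f"precision={precision:.3f}, recall={recall:.3f}")
```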
-
When dealing with imbalanced datasets, the F1-score balances what two metrics? A. Accuracy and Test Size B. *Precision and Recall C. Sensitivity and Specificity D. F1 Value and ROC
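The balancing in this question is the harmonic mean: the F1-score pulls toward the lower of precision and recall, so neither can be sacrificed for the other. A sketch with illustrative values:

```python
# F1-score: harmonic mean of precision and recall, which penalizes
# a large gap between the two (values are illustrative).
precision, recall = 0.8, 0.5
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.615
```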
-
Which of the following is used to analyze the variance of a model? A. *Bias-variance Tradeoff B. Confusion Matrix C. Caps rates D. Accuracy
-
If a machine learning model suffers from high bias, what should be done to improve accuracy? A. *Increase the complexity of the model B. Decrease the complexity of the model C. Remove categories D. Increase training data
-
What is the difference between overfitting and underfitting in Machine Learning? A. *Overfitting is when the model’s complexity is too high and it begins to train on noise, while Underfitting is when the model’s complexity is too low and, as a result, it cannot capture the relations between the input variables and the target variable. B. Overfitting is when the model begins to memorize the training data and performs poorly on validation and test data, while Underfitting is when the model cannot learn the underlying trends present in the data. C. Overfitting is when the model performs well on the training data but poorly on validation and test data, while Underfitting is when the model performs poorly on all the data. D. Overfitting is when the model’s complexity is too low and it considers only a few variables, while Underfitting is when the model’s complexity is too high and it considers too many variables.
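A toy sketch of the overfitting pattern described above: a "model" that simply memorizes its training pairs scores perfectly on training data but only guesses on unseen inputs. The data and fallback label are illustrative:

```python
# Memorization as extreme overfitting: perfect train accuracy,
# no generalization to unseen inputs.
train = {1: "spam", 2: "ham", 3: "spam", 4: "ham"}  # illustrative data

def memorizer(x):
    return train.get(x, "spam")  # unseen inputs get one fixed guess

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
test = {5: "ham", 6: "ham"}  # unseen examples
test_acc = sum(memorizer(x) == y for x, y in test.items()) / len(test)
print(train_acc, test_acc)  # 1.0 0.0
```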