Q1. You train a model that gets 99% training accuracy but 62% test accuracy. What is most likely happening, and what is the first fix to try?
- A. Underfitting — train a larger model.
- B. Overfitting — add regularization or more data. ✓ Correct
- C. Data leakage — the test set is too easy.
- D. The metric is wrong — switch to precision/recall.
Explanation
A large gap between training and test accuracy is the textbook signature of overfitting: the model has memorized the training set rather than learning patterns that generalize. The first line of defence is regularization (L1/L2 penalties, dropout) or collecting more training data. Underfitting would show low accuracy on both sets, and data leakage usually inflates test accuracy rather than depressing it.
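The diagnosis can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn and a synthetic dataset; the specific estimator and hyperparameters are stand-ins for whatever model the question describes. An unconstrained decision tree memorizes the training set (a large train/test gap), while capping its depth acts as a capacity penalty, shrinking the gap:

```python
# Sketch: diagnose overfitting via the train/test gap, then regularize.
# Dataset and hyperparameters are illustrative, not from the question.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unconstrained tree: fits the training data (near-)perfectly.
big = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Depth-limited tree: a simple form of regularization.
small = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

for name, m in [("unconstrained", big), ("max_depth=3", small)]:
    gap = m.score(X_tr, y_tr) - m.score(X_te, y_te)
    print(f"{name}: train-test gap = {gap:.2f}")
```

The same logic applies with weight decay on a neural network: watch the gap between the two curves, not the training score alone.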