Classical ML Mastery
From visualization to interpretability to time-series forecasting — the durable ML skills
A 10-week path focused on classical ML skills that don't expire with the next model release. You'll build algorithms from scratch, learn to interpret any model, and finish with a forecasting project that pulls it all together.
Difficulty: Intermediate · Total time: 10 weeks
Stage 1 — ML Concepts, Visualized
Sequential · 4 weeks
Run each notebook. Tweak the sliders. Notice when the math stops being abstract.
Stage 2 — Reference vs. Build (parallel tracks)
Parallel · pick one or do both
Track A: Reference (2 weeks)
Track B: Build (3 weeks)
✓ Checkpoint — Implement and explain a decision tree classifier
Write a decision-tree classifier from scratch in pure Python. Train it on a dataset of your choice. Then explain to a non-ML friend why it splits where it splits.
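To make the checkpoint concrete, here is one possible shape for a from-scratch tree — a minimal sketch, not a reference solution. It assumes numeric features, uses Gini impurity to pick splits, and greedily grows to a fixed depth; all function names and the toy dataset are illustrative.

```python
# Minimal decision-tree classifier sketch in pure Python (no libraries).
# Assumption: numeric features; splits chosen by minimizing weighted Gini impurity.
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Return the (feature, threshold) pair that most reduces impurity, or None."""
    best, best_score = None, gini(y)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best, best_score = (f, t), score
    return best

def build_tree(X, y, depth=0, max_depth=3):
    """Grow the tree greedily; a leaf stores the majority class."""
    split = best_split(X, y) if depth < max_depth else None
    if split is None:
        return Counter(y).most_common(1)[0][0]
    f, t = split
    left = [(row, yi) for row, yi in zip(X, y) if row[f] <= t]
    right = [(row, yi) for row, yi in zip(X, y) if row[f] > t]
    return {
        "feature": f, "threshold": t,
        "left": build_tree([r for r, _ in left], [yi for _, yi in left], depth + 1, max_depth),
        "right": build_tree([r for r, _ in right], [yi for _, yi in right], depth + 1, max_depth),
    }

def predict(tree, row):
    """Walk from the root to a leaf and return its class."""
    while isinstance(tree, dict):
        tree = tree["left"] if row[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree

# Toy data: the label is 1 exactly when the first feature exceeds 5.
X = [[2, 0], [3, 1], [6, 0], [8, 1], [1, 1], [9, 0]]
y = [0, 0, 1, 1, 0, 1]
tree = build_tree(X, y)
print(predict(tree, [7, 0]))  # → 1
```

Printing the nested dict is also the cheapest way to answer the "why does it split there" question: each node records the feature and threshold it chose, which you can read back to your non-ML friend.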
Stage 3 — Interpretable ML, the XAI Bible
Sequential · 4 weeks
Stage 4 — Random Forest Hyperparameters, Deeply
Sequential · 1 day
Stage 5 — Time Series, Forecasting Principles & Practice
Sequential · 4 weeks
✓ Checkpoint — Deliver a forecasting model you can defend
Pick a real time series — your own data if you have it, otherwise something messy from FRED or Kaggle. Build a forecast. Produce the diagnostic plots. Defend why you chose ARIMA vs. ETS vs. a regression model with calendar features. Cross-validate with a rolling-origin scheme. Defending the choice matters more than the RMSE.
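A rolling-origin scheme can be sketched in a few lines — shown here with a naive last-value forecaster standing in for whichever of ARIMA, ETS, or a calendar-feature regression you end up defending. The function names and the expanding-window variant are one choice among several; a sliding window is equally valid.

```python
# Rolling-origin (expanding-window) cross-validation sketch, pure Python.
# Assumption: a univariate series; the forecaster is any callable with
# the signature forecaster(history, horizon) -> list of predictions.

def naive_forecast(history, horizon):
    """Baseline: repeat the last observed value over the horizon."""
    return [history[-1]] * horizon

def rolling_origin_cv(series, initial, horizon, forecaster):
    """Train on series[:t], test on series[t:t+horizon], for each origin t."""
    errors = []
    for t in range(initial, len(series) - horizon + 1):
        history, actual = series[:t], series[t:t + horizon]
        preds = forecaster(history, horizon)
        errors.extend(p - a for p, a in zip(preds, actual))
    mse = sum(e * e for e in errors) / len(errors)
    return mse ** 0.5  # RMSE pooled across all origins

series = [10, 12, 13, 12, 15, 16, 18, 17, 19, 21]
rmse = rolling_origin_cv(series, initial=5, horizon=2, forecaster=naive_forecast)
print(round(rmse, 3))
```

Comparing candidate models with the same origins and horizon is what makes the defense honest: every model sees exactly the same train/test splits, so the pooled RMSE differences reflect the models, not the splits.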