# Autonomous Donkeycar Research Hub
| # | Date | Score | CTE 10s/20s | Trials | Status | Description |
|---|---|---|---|---|---|---|
| 5 | 2026-03-13 17:35 | 0.00 | -8.06/-8.06 | 0/3 | discard | batch norm + flip augmentation + LR scheduler.<br>Hypothesis: flip augmentation removes steering bias, batch norm stabilizes training, and the LR scheduler helps convergence.<br>Summary: worse than experiment 2 (distance 4.7 vs 8.8). BatchNorm running stats may differ between train and eval modes. Model predictions are very accurate on training data, confirming the real issue is compounding error in the feedback loop. |
| 4 | 2026-03-13 17:32 | 0.00 | 8.40/8.40 | 0/3 | discard | moved normalization inside model forward().<br>Hypothesis: putting the /255 normalization inside the model ensures consistency between the train and eval pipelines.<br>Summary: val loss improved dramatically (0.0004 vs 0.006), confirming the normalization fix works, but the car still goes off track (CTE=+8.4, sign flipped from -8). Distance improved from 5 to 8.8. Steering bias in the data may be the next issue. |
| 3 | 2026-03-13 17:29 | 0.00 | -8.08/-8.08 | 0/3 | discard | removed /255 normalization to match evaluate.py.<br>Hypothesis: a normalization mismatch between train and eval is causing bad predictions.<br>Summary: predictions look reasonable in offline tests, but the car still goes off track. CTE=-8, same as the baseline; the normalization fix alone is not enough. |
| 2 | 2026-03-13 17:15 | 0.00 | -8.64/-8.64 | 0/3 | keep | baseline NVIDIA CNN, 5000 PID samples, all trials failed (car barely moves) |
| 1 | 2026-03-13 17:02 | 10.50 | 0.30/0.50 | 2/3 | keep | test recording |
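Experiment 5's diagnosis points at BatchNorm's running statistics: in train mode a BatchNorm layer normalizes with the current batch's mean and variance, while in eval mode it uses running averages accumulated during training, so the same input can produce different outputs until those averages converge. A minimal NumPy sketch of that mechanism (illustrative only, not the hub's actual model code; the momentum update mirrors PyTorch's default):

```python
import numpy as np

class BatchNorm1d:
    """Toy batch-norm layer to show the train/eval discrepancy."""

    def __init__(self, momentum=0.1, eps=1e-5):
        # Fresh layers start with running_mean=0, running_var=1.
        self.running_mean = 0.0
        self.running_var = 1.0
        self.momentum = momentum
        self.eps = eps

    def forward(self, x, training):
        if training:
            # Train mode: normalize with THIS batch's statistics...
            mean, var = x.mean(), x.var()
            # ...and nudge the running stats toward them (PyTorch-style update).
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            # Eval mode: normalize with the accumulated running statistics.
            mean, var = self.running_mean, self.running_var
        return (x - mean) / np.sqrt(var + self.eps)

bn = BatchNorm1d()
x = np.array([10.0, 12.0, 14.0])        # batch far from the initial running stats

y_train = bn.forward(x, training=True)   # centered near 0 by batch stats
y_eval = bn.forward(x, training=False)   # scaled by still-stale running stats

print(np.allclose(y_train, y_eval))      # False until running stats converge
```

This is one way a model can look accurate in offline checks (same mode as training) yet behave differently under the evaluation harness.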
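Experiment 4's fix can be sketched in a few lines: when the /255 scaling lives only in the training data loader, the eval pipeline can silently feed the model inputs 255x too large; baking the scaling into the model's forward pass makes both pipelines consistent by construction. The functions below are hypothetical stand-ins, not the hub's CNN:

```python
import numpy as np

def model(x, w=2.0):
    # Experiment 4's fix: normalization lives inside the model itself,
    # so callers always pass raw 0-255 frames. (Stand-in for the CNN.)
    return (x.astype(np.float32) / 255.0).mean() * w

def model_external_norm(x, w=2.0):
    # Earlier setup: the model silently expects pre-divided inputs.
    return x.mean() * w

frame = np.full((120, 160, 3), 128, dtype=np.uint8)  # fake camera frame

# Old failure mode: train pipeline normalized, eval pipeline did not.
train_old = model_external_norm(frame / 255.0)
eval_old = model_external_norm(frame.astype(np.float32))
print(np.isclose(train_old, eval_old))   # False: a 255x input mismatch

# After the fix both pipelines feed raw frames and agree trivially.
print(np.isclose(model(frame), model(frame)))  # True
```

The val-loss drop from 0.006 to 0.0004 in the log is consistent with exactly this kind of preprocessing mismatch being removed.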