Hey @gubbbe
The hallucination problem is one of the things that led me to build Section 11. The root cause is that LLMs can’t do math reliably, so if you feed them raw data and ask them to calculate TSS, CTL, zone distributions, etc., they’ll confidently return incorrect results.
The fix that worked for me: pre-calculate everything in the data pipeline before the AI ever sees it, then embed instructions telling the AI to use those values and never recalculate them. This works across ChatGPT, Claude, Grok, Gemini, and Mistral with the same data and the same structure.
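To make that concrete, here's a minimal sketch of the idea in Python (simplified, not the actual Section 11 code): compute the metrics deterministically, then hand the LLM only final values behind a "never recalculate" instruction. The TSS and CTL formulas are the standard TrainingPeaks ones.

```python
# Minimal sketch of the pre-calculation idea (simplified; not the actual
# Section 11 pipeline). Metrics are computed deterministically in code,
# then handed to the LLM as final values with a "do not recalculate" rule.
import math

def tss(duration_s: float, np_watts: float, ftp_watts: float) -> float:
    """Training Stress Score per the standard TrainingPeaks formula."""
    intensity_factor = np_watts / ftp_watts
    return (duration_s * np_watts * intensity_factor) / (ftp_watts * 3600) * 100

def ctl(daily_tss: list[float], time_constant: float = 42.0) -> float:
    """Chronic Training Load: exponentially weighted average of daily TSS."""
    decay = math.exp(-1.0 / time_constant)
    load = 0.0
    for t in daily_tss:
        load = load * decay + t * (1.0 - decay)
    return load

# Build the prompt block: values are already final, the instruction forbids math.
todays_tss = tss(duration_s=3600, np_watts=220, ftp_watts=250)
current_ctl = ctl([55, 70, 0, 90, 60, 0, 80])

prompt_block = f"""INSTRUCTION: All metrics below are pre-calculated.
Use them as-is. Never recalculate or estimate any value.

TSS (today): {todays_tss:.1f}
CTL: {current_ctl:.1f}
"""
print(prompt_block)
```

The model never sees raw power data, so there's nothing left for it to get wrong: it can only interpret the numbers, not produce them.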
Section 11 can auto-sync from Intervals.icu via GitHub, so there's no copy-paste or Google Sheets middleman. Setup is straightforward, and there are guides for every step.
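If anyone wants to script their own sync instead, pulling activities from the Intervals.icu API looks roughly like this. This is a sketch: the athlete ID and key are placeholders, and the endpoint and basic-auth scheme (literal username `API_KEY`) are from my reading of the public API docs, so verify against https://intervals.icu/api-docs.html before relying on it.

```python
# Rough sketch of the sync step (hypothetical script, not the Section 11
# implementation). Auth is HTTP Basic with the literal username "API_KEY"
# and your personal key as the password, per the public Intervals.icu docs.
import requests

ATHLETE_ID = "i123456"    # placeholder: your Intervals.icu athlete ID
API_KEY = "your-api-key"  # placeholder: Settings -> Developer on Intervals.icu

resp = requests.get(
    f"https://intervals.icu/api/v1/athlete/{ATHLETE_ID}/activities",
    params={"oldest": "2024-01-01", "newest": "2024-12-31"},
    auth=("API_KEY", API_KEY),
    timeout=30,
)
resp.raise_for_status()
activities = resp.json()  # list of activity dicts with power/HR summary fields
print(f"Fetched {len(activities)} activities")
```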
Free and open source: https://github.com/CrankAddict/section-11 (evidence-based endurance coaching protocol for AI and LLMs; deterministic training guidance with Intervals.icu integration).
More context and updates in my thread: https://forum.intervals.icu/t/section-11-open-protocol-for-ai-endurance-coaching-chatgpt-claude-grok-mistral