I've been using Gemini as a day-to-day coach while training for a mountain biking stage race, and I've been very impressed. I first used Gemini to create the training plan, then improved the experience by creating a Gemini Gem that acts as a day-to-day coach. As part of that process, I managed to integrate the Gem with Intervals.icu by using a Google Sheet as a simple interface. You can create a script that automatically pulls the data from Intervals.icu into the Sheet once a day, so the Gem knows your training history through the Sheet. I didn't know how to do it, so I asked Gemini to explain the steps and write the needed code.
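For anyone curious what that daily pull boils down to: my actual script lives in the Sheet as Apps Script, but the core is a single authenticated GET against the Intervals.icu API (which, as documented, uses HTTP Basic auth with the literal username `API_KEY` and your key as the password). A rough Python sketch, with the athlete ID, key, and date range as placeholders:

```python
import base64
import json
import urllib.request

API_KEY = "your-intervals-icu-api-key"  # from Intervals.icu Settings -> Developer
ATHLETE_ID = "i12345"                   # placeholder athlete id


def basic_auth_header(api_key: str) -> str:
    """Intervals.icu uses HTTP Basic auth with the literal username 'API_KEY'."""
    token = base64.b64encode(f"API_KEY:{api_key}".encode()).decode()
    return f"Basic {token}"


def fetch_activities(athlete_id: str, api_key: str, oldest: str, newest: str) -> list:
    """Fetch activities between two ISO dates as a list of dicts."""
    url = (f"https://intervals.icu/api/v1/athlete/{athlete_id}/activities"
           f"?oldest={oldest}&newest={newest}")
    req = urllib.request.Request(url, headers={"Authorization": basic_auth_header(api_key)})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())


if __name__ == "__main__":
    # One row per activity; write these into the Sheet (or print them).
    for act in fetch_activities(ATHLETE_ID, API_KEY, "2024-01-01", "2024-01-31"):
        print(act.get("start_date_local"), act.get("type"), act.get("name"))
```

In the Sheet version the same request runs from a time-driven trigger once a day, so the data stays fresh without any manual steps.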
Let me know if this helps. I'd be happy to record a demo video.
I'm trying to solve this by analyzing each workout individually, but in a way that simultaneously builds an athlete profile. So the whole thing works in a loop: first, an athlete profile is created from the available data, then the AI generates one or more workout analyses. Based on those analyses, the athlete profile gets updated, and the next workout analysis already takes the existing athlete profile into account, and so on.
This way, the analyses build on each other incrementally, and in the end everything is summarized. That's roughly how I'm working on it right now; feel free to take a look in this thread if you'd like.
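The loop above can be sketched in a few lines. This is only the control flow; `analyze_workout` and `update_profile` are stand-ins for the actual LLM calls, which aren't shown in the thread:

```python
from dataclasses import dataclass, field


@dataclass
class AthleteProfile:
    """Accumulated picture of the athlete, refined after each analysis."""
    notes: list = field(default_factory=list)


def analyze_workout(workout: dict, profile: AthleteProfile) -> str:
    # Placeholder for an LLM call that sees the workout *and* the current profile.
    return f"analysis of {workout['name']} given {len(profile.notes)} prior notes"


def update_profile(profile: AthleteProfile, analysis: str) -> AthleteProfile:
    # Placeholder for an LLM call that folds the new analysis into the profile.
    profile.notes.append(analysis)
    return profile


def run_loop(workouts: list, profile: AthleteProfile) -> tuple:
    """Each analysis uses the profile as it exists so far, then updates it."""
    analyses = []
    for w in workouts:
        a = analyze_workout(w, profile)       # analysis sees the current profile...
        profile = update_profile(profile, a)  # ...and the profile absorbs the analysis
        analyses.append(a)
    return profile, analyses
```

The point of the structure is that each workout is analyzed with small, focused context (one workout plus a compact profile) rather than the full history.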
Actually, as a developer working actively with AI, I've seen this quite a bit. There's a common misconception that more context equals better results, but when you feed an LLM (regardless of the model) a massive file, you're often just increasing the noise-to-signal ratio.
In my experience, once you cross the 45% mark of a context window, you usually start seeing a measurable degradation in data retrieval and an uptick in hallucinations. Compaction doesn't really solve this either, as it tends to strip out the granular data points that are actually useful for your questions.
If you're hitting those limits, here are a few ways to handle it:
Don't feed the "history": Do you actually need the entire historical dataset for the current query? It's better to feed small, relevant chunks. If you're using something like Claude, you can use separate agents with independent context windows.
Use Visuals: Feed it a chart of the data instead. Just be careful with image sizing; if the resolution is too low, the model is more likely to misinterpret the axes or data points.
Create Memories/RAG: Instead of a massive "dump" of data, use agents to fetch only the data relevant to the specific time window you're asking about.
Define Skills (Claude): Use .md files to give the agent "skills" so it understands real-world training patterns before it even looks at your specific numbers.
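To make the "small, relevant chunks" point concrete, here is a toy sketch (the names and data are mine, not from any specific library): filter the workout log down to the queried time window before anything reaches the prompt, instead of pasting the whole history.

```python
from datetime import date


def relevant_chunk(workouts: list, start: date, end: date) -> list:
    """Return only the workouts inside the queried time window,
    instead of dumping the whole history into the context."""
    return [w for w in workouts if start <= w["date"] <= end]


# Toy training log standing in for a full export.
log = [
    {"date": date(2024, 3, 1), "name": "Endurance ride"},
    {"date": date(2024, 3, 15), "name": "Threshold intervals"},
    {"date": date(2024, 6, 2), "name": "Race sim"},
]

# Question is about March, so only March workouts go into the prompt.
march = relevant_chunk(log, date(2024, 3, 1), date(2024, 3, 31))
prompt = "Analyse these workouts:\n" + "\n".join(w["name"] for w in march)
```

The June ride never enters the context at all, which keeps the signal high for a March-specific question.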
Really interesting thread. I work with AI as a developer, and I took an approach to this problem similar to what coding agents do.
The core idea: don't feed data into the model. Give the AI agent tools to search and retrieve only what's relevant to the current question. No massive dumps, no context window degradation.
Add a visualization layer on top, so the user sees the same charts the AI is reasoning from. If it says "your decoupling was 8.7%", you can see the power vs HR drift yourself. That kills the black-box problem and makes hallucinations immediately obvious.
Built this into an app (Thread) that integrates with intervals.icu.
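For readers who haven't built with tool calling before, the pattern looks roughly like this (a generic sketch in the JSON-schema style most tool-calling APIs use; the tool name, log, and fields are invented for illustration, not taken from the app above):

```python
import json

# A tool definition in the JSON-schema style that tool-calling APIs expect.
GET_WORKOUTS_TOOL = {
    "name": "get_workouts",
    "description": "Fetch workouts between two ISO dates from the training log.",
    "parameters": {
        "type": "object",
        "properties": {
            "oldest": {"type": "string", "description": "ISO date, inclusive"},
            "newest": {"type": "string", "description": "ISO date, inclusive"},
        },
        "required": ["oldest", "newest"],
    },
}

# Toy log standing in for the real database behind the tool.
TRAINING_LOG = [
    {"date": "2024-03-01", "name": "Endurance ride", "decoupling_pct": 4.2},
    {"date": "2024-03-15", "name": "Long ride", "decoupling_pct": 8.7},
]


def get_workouts(oldest: str, newest: str) -> str:
    """The actual tool: returns only the rows the model asked for, as JSON."""
    rows = [w for w in TRAINING_LOG if oldest <= w["date"] <= newest]
    return json.dumps(rows)


def dispatch(tool_call: dict) -> str:
    # When the model emits a tool call, run it and feed the result back in.
    if tool_call["name"] == "get_workouts":
        return get_workouts(**tool_call["arguments"])
    raise ValueError(f"unknown tool {tool_call['name']}")
```

The model never sees the full log; it only sees the rows returned by the calls it chose to make, which is what keeps the context window small.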
Nice idea. I've been pleasantly surprised at how well using Gemini as a coach works, but I'm still getting it dialed in. Gemini was also pretty helpful with a fueling plan, an area where I was lacking.
I've built a simple web-based tool, part of a series of tools at RunTK.com, that is specifically designed to generate a prompt from your intervals.icu data to copy and paste into any AI model you want.
It's both mobile and desktop friendly and requires no account or login. All you need is your intervals.icu API key to retrieve the data.
I built this tool (along with the other tools at RunTK.com) mainly out of frustration with all the "AI-powered" tools that are popping up every day, promising to solve all your training problems, when really they're only as good as whichever LLM they choose under the hood. Besides, I don't want them to solve all my problems. I just want to be able to easily get MY data into MY chosen LLM, and I shouldn't need another account or app to do that.
The tool is designed to walk you through step by step, so you can fill in as much or as little as you like depending on what you want the AI to give you at the end.
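The "fill in as much or as little as you like" behaviour amounts to conditional prompt assembly. A toy version of the idea (field names are made up for illustration, not the tool's actual fields):

```python
def build_prompt(goal: str = "", recent_workouts: list = None, constraints: str = "") -> str:
    """Assemble a copy-and-paste prompt from whatever the user chose to fill in.
    Empty fields are simply left out, so the prompt stays as small as the input."""
    parts = ["You are an endurance coach. Analyse my training data below."]
    if goal:
        parts.append(f"My goal: {goal}")
    if recent_workouts:
        parts.append("Recent workouts:")
        parts += [f"- {w}" for w in recent_workouts]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)


# A user who filled in a goal and two workouts, but no constraints.
prompt = build_prompt(
    goal="finish a mountain bike stage race",
    recent_workouts=["2h endurance ride, 180W avg", "4x8min threshold"],
)
```

Because the output is plain text, it works with any model the user pastes it into, which is the whole point of staying model-agnostic.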
I agree there's a sort of Goldilocks zone for context window usage, but for most exercise analysis this shouldn't really be the limiting factor.
The real problem arises when people try to use the same chat over many weeks or months, just feeding it new workout data as it comes in. This is often where "AI as a coach" falls apart, and it's a usage-pattern problem more than a context window problem.
The better approach is to ask targeted questions and provide all the data needed to answer that specific question. That's actually the idea I built my AI Prompt Generator Tool on.
Hi, I have been using Gemini as a sort of coach for the last three months. My approach is as follows: I follow certain coaches online (pros turned coaches) who publish blog posts, and I feed Gemini specific posts for specific cycles in my coaching phases.
Right now I am building torque for cycling. I gave Gemini three articles for structure and general guidance. Gemini asked for my training data and capacity and gave me a detailed training structure according to my goals.
There is nothing automatic about it. I have been training alone for 5 years now, I have developed a lot of knowledge about my training, and I enjoy spending a few minutes every day planning and reviewing my training.
What I find very useful is the feedback that Gemini gives me. I tell her what I did today, and she asks about it and suggests a weekly plan. The feedback, the questions, and in general the way the conversation forces me to review my sessions are what I find most important. It is like having a coach to chat with 24/7.
I guess all the automation will be great; for now I enjoy the conversation and what it allows me to do.
My other big tip is to force the AI to plan before diving into big tasks like building a training plan. Ask it to outline its approach first (which training blocks, what periodization logic, how it'll handle recovery weeks) before it writes a single workout. You get much better results, and it's easier to spot where its reasoning goes wrong before you're committed to a whole plan.
I'm actually working on an update to my AI Prompt Generator Tool to help with exactly this for building full training plans.
Yep, that's always what one should do: first look at the calendar for things like holidays and personal commitments that might influence the plan, then pin "A" and "B" races. "C" races can be added later on or last minute. Where I struggle next is how to plan the macro cycles (duration, sequence) and what mix of workout styles is needed in each block. I used ChatGPT for my last ultra trail plan, but for the next one I'm happy to try Gemini or other AI tools.
I've been experimenting with this quite a bit as well. One thing I've found particularly useful with AI is not just generating a training plan, but being able to adapt it quickly when real life gets in the way.
The situation I run into most often isn't "what's a good plan for the season", it's things like:
⢠a work trip cuts a long ride in half
⢠I miss a key workout earlier in the week
⢠I suddenly only have 45 minutes instead of 90
Being able to take an existing plan and ask "how should I adjust this week given X?" turns out to be surprisingly helpful.
That's actually the angle I've been exploring with a small tool I built for myself (PacePartner): keeping Intervals as the system of record but using AI to help adapt the plan to changing circumstances rather than rewriting it completely.