New AlphaHRV version

Thank you @canserbero and @MedTechCD for your last posts: we are gaining knowledge more quickly now that some bases are established.

The Polar H10 most likely does its internal signal processing by sampling the voltage 1024 times per second (if I had to design one I would do just that, so it is a bold statement from me). R peaks are detected by a simple internal routine (an R peak is rather easy to spot). ECG sampling is possibly done at 128 Hz, but who bothers? They state in the documentation that the ECG may drift ±3% due to temperature. In NeuroKit2 I tested processing the data at both 130 Hz and 128 Hz: the results are the same.

So my next step was to take an ‘easy’ session without activity (except different respiration tests) for about 9 minutes. From the Polar data logger I got separate RR and ECG files. After some gymnastics with Python data formats (mind that I do not program in Python), I managed to provide correct data and display charts, unflattering to the ECG, related to the Kubios peak-correction method.

This one is based on the ECG CSV file:

nk.signal_fixpeaks() reported the following errors: {'ectopic': [304], 'missed': [], 'extra': [], 'longshort': [53, 577]}
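For anyone who wants to reproduce this kind of check, here is a minimal sketch (my reconstruction, not the exact script used above; the file name is hypothetical, and the separator/column name follow the logger format quoted later in the thread):

import pandas as pd
import neurokit2 as nk

# Hypothetical file name; ';' separator and 'ecg [uV]' column as in the Polar logger export.
ecg = pd.read_csv("polar_h10_ecg.csv", sep=";")["ecg [uV]"]

# Clean and detect R-peaks at the H10's 130 Hz ECG sampling rate.
ecg_clean = nk.ecg_clean(ecg, sampling_rate=130)
_, info = nk.ecg_peaks(ecg_clean, sampling_rate=130)

# Kubios-style artifact detection/correction; 'artifacts' is the dict reported above.
artifacts, rpeaks_corrected = nk.signal_fixpeaks(info["ECG_R_Peaks"], sampling_rate=130, method="Kubios")
print(artifacts)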

The second chart is from the RR CSV file:

nk.signal_fixpeaks() reported no errors (I spotted 8 that I would correct, but something special might be necessary, like changing the array size to ‘fill in’ missing beats).
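And a similar sketch for the RR file, with that ‘array size’ point in mind: under my assumption that the file contains one RR value in ms per line, the corrected peak array returned by the Kubios method can have a different length than the input when missed beats are filled in, so the RR series has to be rebuilt from it:

import numpy as np
import neurokit2 as nk

rr_ms = np.loadtxt("polar_h10_rr.csv")      # assumed layout: one RR value (ms) per line
peaks = np.cumsum(rr_ms).astype(int)        # peak positions in ms, i.e. samples at 1000 Hz

artifacts, peaks_corrected = nk.signal_fixpeaks(peaks, sampling_rate=1000, method="Kubios")
rr_corrected = np.diff(peaks_corrected)     # array length changes when beats are inserted or removed
print(artifacts, len(rr_ms), len(rr_corrected))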

Comment the next day: there is already a patented method for reconstructing missing/distorted ECG and other physiological signals with CNNs (convolutional neural networks). The link to the patent is here: Systems and methods for imputing real-time physiological signals

Conclusion: do not use the ECG for HRV. The respiration rate from HRV can be spotted with one of the 4 methods I mentioned.

I checked the link you provided about the internals of the Polar H10, and IMHO Jukka Oikarinen is rather speculating himself about those ‘internals’. He should also know that the MTU can be adjusted.

After

“However, I cannot give you exact details of how the H10 internal algorithm works when it calculates the heart rate from sensor data, e.g. what sampling rate it uses to measure ECG.”

I suggest discarding his opinions. Again, the problem of ‘agency’: people [even experts] muse about causes and fill the wishes (of knowledge seekers) with random noise. :wink: Tell me when I’m doing the same!

The dev of this Android app has quite a bit of expertise on the matter:
https://kenevans.net/opensource/KEDotNetECG/Assets/kedotnetecg.html

This confirms what I suspected:

  • You get high-def RAW data (1024 Hz sampling) from the H10 over one BLE channel (or the exact same data over ANT+) in the form of Timestamp, HR and RRs. There is one data package each second containing the Timestamp, the HR as calculated by a proprietary Polar algorithm within the sensor (based on the last x RR intervals), and 0 or more RRs in msec. The RRs are those that occurred during that last second, separated by a space if more than one occurred.
  • ECG comes over a separate BLE channel as a single-column data format with only the ECG voltage in µV at a sampling rate of 130 Hz. There is NO timestamp. I have done a little hardware hack to short-circuit the sensor with a push-button, to get an idea of whether there is a lag in the ECG data relative to the creation timestamp of the file. If there is a lag, it isn’t significant. As @Retsam already said, the ECG trace can’t be used for any HRV calculation because it is low definition, but it can be useful for visually spotting artifacts. If you get the timeline value of a visually spotted artifact, it should be easy to find it in the RR values if you have an approximate timing.

The only thing we can work with is the RR intervals. The easiest way of getting those in the most consistent way is to record them with the head unit (HRV option enabled) in the FIT file. Then you can export them from Intervals with the Streams.csv button on the activity intervals page. David logs them with a : as separator. To recreate the timeline, each and every RR must be summed one after another.
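A small sketch of that timeline reconstruction (the column name 'HRV' and the exact Streams.csv layout are assumptions on my part):

import numpy as np
import pandas as pd

streams = pd.read_csv("Streams.csv")
rr_ms = np.concatenate([
    [float(v) for v in str(cell).split(":") if v]   # RRs logged with ':' as separator
    for cell in streams["HRV"].dropna()             # assumed column name
])
timeline_s = np.cumsum(rr_ms) / 1000.0              # seconds since the first beat
print(rr_ms[:5], timeline_s[:5])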
These RR values can be used for several things:

  • HRV and a1 post activity calculation
  • Extracting resp-rate from HRV.
  • Detecting abnormal beats/heart problems/anomalies

AlphaHRV already does the first two. We can have a play to see if the NeuroKit2 scripts give similar results.
If at some point we can get a resp-rate algorithm based on R-peak amplitude, the ECG trace might also become usable because it is the only trace that has voltage. I was close to a solution in Excel; I might pick it up again to compare with the above.
The latest article on Bruce’s website describes a different way of detecting the deflection point(s). It proposes using the 2nd derivative of a 6th-order polynomial fitted to the resp curve instead of regression lines. Could also be interesting :wink:
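If someone wants to try that idea, a rough sketch of the principle (my own illustration, not Bruce's code; the resp signal and sampling rate are placeholders):

import numpy as np

fs = 4.0                                            # placeholder sampling rate of the resp curve
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.3 * t) + 0.1 * t        # placeholder respiration-like signal

coeffs = np.polyfit(t, resp, deg=6)                 # 6th-order polynomial fit
curvature = np.polyder(np.poly1d(coeffs), m=2)(t)   # 2nd derivative of the fitted polynomial
deflections = t[np.where(np.diff(np.sign(curvature)) != 0)[0]]  # candidate deflection points
print(deflections)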

Moving target! I’ve just tested the nanosecond timer from the device and now it is obvious from the ECG timestamp differences:

      sensor timestamp [ns]
0                 7686285.0
1                 7686285.0
2                 7686285.0
3                 7686285.0
4                 7686285.0
                    ...
1019              7686285.0
1020              7686285.0
1021              7686285.0
1022              7666253.0
1023              7686564.0

But summing 130 records gives just about 1 second, or a multiple of that (_dnn is a DataFrame of the successive differences).

n = 2

np.sum(_dnn[200:200+130*n])
Out[40]: 
sensor timestamp [ns]    1.998503e+09
dtype: float64

n = 5

np.sum(_dnn[200:200+130*n])
Out[42]: 
sensor timestamp [ns]    4.996185e+09
dtype: float64
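For completeness, a sketch of how such a check could be reproduced (the file name is hypothetical; the column name matches the logger format quoted further down):

import pandas as pd

ecg = pd.read_csv("polar_h10_ecg.csv", sep=";")          # hypothetical file name
_dnn = ecg[["sensor timestamp [ns]"]].diff().dropna()    # successive timestamp differences in ns

for n in (2, 5):
    total_ns = _dnn.iloc[200:200 + 130 * n].sum().iloc[0]
    print(n, total_ns / 1e9)                             # ~2.0 s and ~5.0 s if the rate is really 130 Hz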

From that simple evidence, the ECG sampling rate is surely around 130 Hz, not the 128 Hz from my assumption of an ‘even divider’. The internal voltage sampling will then be 1000 Hz. However, when we look more attentively at the ECG sampling (and the associated ns timestamps), we find departures from the previous timestamp (in ns), which I can show below:

It means that the ECG sampling is uneven, like ‘catching up the lag’. For ECG from Polar it will not be so important; the jump of 20-30 µs is really below the signal noise. I present it only out of curiosity about the internal signal handling by the Polar processor.

On Bruce’s website I cannot find this quite new document: Validity of the Polar H10 Sensor for Heart Rate Variability Analysis during Resting State and Incremental Exercise in Recreational Men and Women, which contains some important DFA a1 results from Polar H10 RR data and 12-lead ECG equipment. Too much to summarize; alas, at least 20 minutes may be necessary to grasp the importance of that study for AlphaHRV.

Well, currently Polar logger app is providing ECG data in that format:

Phone timestamp;sensor timestamp [ns];timestamp [ms];ecg [uV]
2023-02-01T17:32:31.767;599616046781617847;0.0;5798
2023-02-01T17:32:31.774;599616046789304132;7.686285;17053
2023-02-01T17:32:31.782;599616046796990417;15.37257;19864
2023-02-01T17:32:31.790;599616046804676702;23.058855;19864

It looks like your link may be outdated…

Anybody else feel they have been slightly left behind on this thread? :grinning_face_with_smiling_eyes:


Sorry, it was just a warm up session. Dust was not intended. :smiling_imp:


I did a proper ramp according to the protocol suggested by @Luisma_Gallego_Soy_P :

  • warm up 15’@40%FTP
  • 6’@50/60/70/80/90%FTP

I was only interested in LT1 so I decided to stop the ramp at 90%FTP.

To provide some context: I started a taper week on 2/13 and the ramp was done on 2/15; I was decently fresh.

To provide some “tech info”:

  • ramp performed on rollers, so no erg mode, but, as you will see from the plots, I am decently good at providing a steady output; cadence during the intervals was around 90, but in the last step more like 93-94
  • Polar H10 monitor + Polar Pro strap
  • HRM connected to alphaHRV via ANT+
  • RR series duration in alphaHRV: 300
  • Time window for RespRate in alphaHRV: 60"
  • HRM connected to the head unit (Garmin Edge 530) via BLE; head unit set to record HRV data

The processed data I am going to show is the full time series of the various metrics retrieved from WKO5. I have to do this because I have some issues interpreting the format of the metrics alphaHRV writes to the .fit file once I import the file into the environment I use for the analysis.

As with the other ramp I did, I am still noticing some weird behavior in the respiration rate and alpha1 when we need them most, i.e., in the final stages of the ramp (in my case, 80% and 90% FTP). Although power output is increasing, as well as heart rate, respiration rate and alpha1 are not stable. At first I thought it was because, once we enter a new stage of the ramp, there is some delay in capturing what’s going on, perhaps due to the fact that respiration rate and alpha1 are estimated on moving windows of different lengths; because of this I have drawn the time series of the metrics removing the first 0/60/120/180 seconds of each stage of the ramp. I attach the figures I have produced so that you can make up your own mind.




Sticking to the same principle of dropping the first 0/60/120/180 seconds of each stage, I conducted a sort of sensitivity analysis to see how the results would change. In the figures below I depict the scatterplot of heart rate vs RespRate/Alpha1; points have different colors according to the stage of the ramp (a small sketch of the trimming step is included after the list below).

By considering

  • the full time series the estimated threshold would be 150bpm
  • the time series in which the first 60 seconds of each stage are dropped, the estimate would be 143bpm
  • the time series in which the first 120 seconds of each stage are dropped, the estimate would be 146bpm
  • the time series in which the first 180 second of each stage are dropped, the estimate would be 146bpm
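The trimming itself is trivial; a tiny sketch under hypothetical column names (not the actual WKO5 export):

import pandas as pd

df = pd.read_csv("ramp_metrics.csv")   # hypothetical per-second export with time, stage, hr, resprate, alpha1 columns

def drop_stage_start(frame, n_seconds):
    """Remove the first n_seconds of every ramp stage, relative to each stage's own start."""
    kept = []
    for _, stage in frame.groupby("stage"):
        kept.append(stage[stage["time"] >= stage["time"].min() + n_seconds])
    return pd.concat(kept)

for n in (0, 60, 120, 180):
    print(n, len(drop_stage_start(df, n)))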

Anyway, the exercise is not even that meaningful, because the point is that alpha1 and respiration rate behave weirdly in the last two stages.

I cannot tell whether any of the heart rate threshold estimates is somehow reliable: based on my recent experience with zone 2 training, I can pedal at around 138-142 bpm for two hours with reasonable heart rate drift and repeat the same session multiple times per week.

@Retsam @MedTechCD good stuff on the tech part. I need to carefully read the posts :wink:


Excellent test and post. If those were my figures I would take my Z2 range to be 130-145 bpm and try to do most of my Z2 work around 137-140 to give me some wiggle room.


Thanks. I have been working in that HR range when the session was ~2 hours and the goal was to work “at” LT1 to push it to the right. From my experience it is fine to devote some specific work to that range, but I find the results of the ramp unreliable.

In my previous post my intention was not to say that the theoretical foundations underlying the estimation of LT1 are wrong; I just think that the estimates of some quantities are not good enough for cases like mine; it is the second time I have experienced a drop in RespRate as the test progresses.
I am keen to send my fit file to @Inigo_Tolosa or @Luisma_Gallego_Soy_P so that, if they are willing, they can play a little bit with, for instance, the estimation windows used for Alpha1 and RespRate.

Hello,

Can you send me the fit file at luisma.gallego@gmail.com? Sorry for our disconnection from the post, but we are working hard on a validation with a very large N.


The use of a1 and resprate (derived from RR) accentuates the deflection points. But a flaw in this method is that an error in the RR stream influences both metrics and accentuates an error…
The artifacts stream gives you an idea of the detected/corrected errors and should always be as low as possible. Too many errors means bad data.
But the problem is that some errors are not detected and thus not corrected. You have no idea whether this is the case or not. Sometimes you can quite easily spot them in a steady effort because there is a jump in the trace with the exact width of a measurement window. That is most of the time caused by a single beat that is not correctly aligned. The cause can be a measurement artifact or an abnormal beat.
Having the RR stream allows reanalyzing that stream with another (different or more powerful) algorithm. The algorithm used by AlphaHRV is good but not as good as the existing algorithms in Kubios/Fatmaxxer/Runalyze… simply because AlphaHRV has limited processing power available on the head unit.
It could very well be that reanalyzing your data with one of those more robust algorithms shows better data.
And if we could get resprate from another source (anything other than HRV), that might also give us an opportunity to double-check the results from AlphaHRV.
At this point, the most important part is making sure that the underlying theory can produce the results we’re after.
When that’s done, we will have to find ways to make the measurements accessible to as many people as possible.

Great reply, and I think your explanation makes perfect sense. I did not process the HRV data so as not to add “noise” to the analysis: after all, this is a discussion regarding alphaHRV.

I do believe that the theoretical grounds of the idea underlying the estimation of thresholds are sound. The way I look at it is as a viable and non-invasive approach that resembles the determination of VT1/2 (done in the lab via gas exchange). From this perspective, what is really important is to have reliable estimates of RespRate and alpha1 to determine the inflection points; their actual values should be irrelevant, except for software companies interested in business.

Good work, a lot of effort, @canserbero! Let me also stay a bit sceptical [similarly to @MedTechCD] about DFA a1 and respiration rate combined, as DFA a1 is already a ‘noisy’ parameter, and in the latest Rogers paper [see my previous post] it is clear that even with clean signals there is individual variability from different sources.

Moreover, extracting an EDR signal from RR is not free of errors either. I did the following exercise using NeuroKit2:

Interpolate the RR data (from the H10 RR CSV file, 9 minutes of rest with a few breathing exercises) to 1000 Hz samples and convert it to HR, smoothed with the ‘monotone_cubic’ method [HR is necessary for EDR estimation].

Original RR data were used to calculate HRV parameters:

The EDR estimate is here, with 4 different methods:
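Here is a rough sketch of how I understand these two steps can be reproduced in NeuroKit2 (the RR file name and layout are my assumptions; the four method names are the ones NeuroKit2 offers for ECG-derived respiration):

import numpy as np
import neurokit2 as nk

rr_ms = np.loadtxt("polar_h10_rr.csv")                    # assumed layout: one RR value (ms) per line
peaks = np.cumsum(rr_ms).astype(int)                      # peak positions in ms = samples at 1000 Hz

# RR -> instantaneous HR at 1000 Hz, smoothed with monotone cubic interpolation
hr = nk.signal_rate(peaks, sampling_rate=1000,
                    desired_length=int(peaks[-1]),
                    interpolation_method="monotone_cubic")

# ECG-derived respiration (EDR) with four different methods
edr = {m: nk.ecg_rsp(hr, sampling_rate=1000, method=m)
       for m in ("vangent2019", "soni2019", "charlton2016", "sarkar2015")}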

Now, selecting the EDR method will add another source of noise: as I remember what I did, I will use ‘Soni 2019’ or ‘charlton2016’. The answer is in the strong baroreflex, obvious when I stopped nose breathing after 3 minutes and started ‘restricted breathing’ through a small device I printed in PETG on my 3D printer. After 3 more minutes I started to breathe with my mouth only, not restricting air flow: it is clearly visible with the ‘Soni’ method.

Doing an HRV frequency review in a 2-minute sliding window, I’ve seen a big shift in the power spectra, just in relation to the way of breathing (so the ANS is very sensitive in that respect).
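For anyone wanting to reproduce that kind of review, a sketch (my own, with an assumed 30-second slide and the same assumed RR file as above):

import numpy as np
import pandas as pd
import neurokit2 as nk

rr_ms = np.loadtxt("polar_h10_rr.csv")          # assumed layout: one RR value (ms) per line
peaks = np.cumsum(rr_ms).astype(int)            # peak positions in samples at 1000 Hz

fs, win, step = 1000, 120 * 1000, 30 * 1000     # 2-minute window sliding by 30 s (my choice)
rows = []
for start in range(0, int(peaks[-1]) - win, step):
    p = peaks[(peaks >= start) & (peaks < start + win)]
    hrv = nk.hrv_frequency(p, sampling_rate=fs)
    hrv["window_start_s"] = start / fs
    rows.append(hrv)
print(pd.concat(rows)[["window_start_s", "HRV_LF", "HRV_HF", "HRV_LFHF"]])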

What the impact of breathing type will be while still in the VT1 zone (where you do not need mouth breathing) is something to test (I will do it in the near future). I see that DFA a1 is sensitive to that breathing exercise at rest, so this new factor, as another source of noise, should be taken into account.

I will not go into whether it is noise or not; noise is to think that 0.75 and 0.5 are the thresholds. Our tests with lactate are proving positive and we are processing data from over 200 athletes to obtain the final reading. After all this you are free to use it or not; as I say, we have samples showing that, used as internal load control, it is working.

I don’t see anything unusual in the respiration rate, nor in the DFA, taking the averages of each step, as attached in the image.


This is the result that Matlab gives; in the watts there is some peak within the step, so you will have to weigh that.

Would I be correct in assuming VT2 is around 155 bpm from that data?


I cannot estimate the second threshold; I stopped the test at 90% of FTP.


First of all, thanks for taking the time to analyze the data. From the figures I posted of the time series of the metrics, it is crystal clear that RespRate/Alpha1 has an unexpected behavior during the last two stages of the ramp. For the sake of clarity, if we just look at the original time series plot (the one named “Remove first 0 seconds”)

  • in the second-to-last stage the metric takes a little while to rise, then seems to stabilize but then tends to decrease
  • in the last stage the metric takes a little while to rise, then rises abruptly but then decreases steadily

For your convenience, I attach here again the figure I am commenting on

We are looking for change points in the relation HR vs RespRate/Alpha1, and the rationale should be that the higher the power exerted on the pedals, the higher the HR and RespRate/Alpha1. In particular, I expect RespRate to increase, or at least to remain constant, as we progress in the ramp, because the harder the effort, the more oxygen the body needs to do what it is asked to. If the estimate of RespRate drops, then IMHO either something is not right from a computational standpoint or RespRate alone is not enough, meaning that, for instance, a proxy of tidal volume would be required (at least when the data looks like mine).

A scatter plot, i.e., the plot you are showing (analogous to the ones I posted yesterday), is not able to show, for instance, whether the higher values of RespRate/Alpha1 occurred in the last stages of the ramp. The bilinear regression you use to estimate LT1/2 is based solely on the pairs HR-RespRate/Alpha1 and does not take time into account, so it is prone to errors if something goes wrong during the ramp. For this reason, I am suggesting to inspect the time series as a qualitative check of the data.
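To make the point concrete, here is a toy sketch of a two-segment (“bilinear”) fit with a brute-force breakpoint search; this is only my illustration of the general idea, not the actual Matlab implementation used by the AlphaHRV team:

import numpy as np

def bilinear_breakpoint(hr, y):
    """Return the HR at which splitting into two independent linear fits minimizes the total SSE."""
    order = np.argsort(hr)
    hr, y = np.asarray(hr, float)[order], np.asarray(y, float)[order]
    best_bp, best_sse = None, np.inf
    for i in range(3, len(hr) - 3):                   # keep at least 3 points in each segment
        sse = 0.0
        for seg_x, seg_y in ((hr[:i], y[:i]), (hr[i:], y[i:])):
            coef = np.polyfit(seg_x, seg_y, 1)
            sse += float(np.sum((np.polyval(coef, seg_x) - seg_y) ** 2))
        if sse < best_sse:
            best_sse, best_bp = sse, hr[i]
    return best_bp

# Hypothetical usage with per-second arrays from the ride:
# lt1_hr = bilinear_breakpoint(hr_series, resprate_series)

As described above, such a fit only sees the (HR, RespRate/Alpha1) pairs; it has no notion of when in the ramp each pair occurred.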

In order to illustrate the point I was concerned about yesterday and to highlight why inspection of the time series would be useful, I am doing a scholarly exercise based on my data: I am simply filling the last two stages of the time series of RespRate/Alpha1 with fake data that abide by the rules

  1. RespRate/Alpha1 approximately constant within the stage.
  2. RespRate/Alpha1 of the last stage greater than RespRate/Alpha1 of the second-to-last stage.

The data could have been made more realistic, but the time I would spend making it look more realistic is not worth it. You can see the new time series below (just RespRate/Alpha1 has been changed)

You can compare this figure with the one above. I then performed the LT1 estimation on the tweaked data and the result is shown below

With this exercise, the LT1 estimate is 139 bpm vs 150 bpm. The latter value is based on the original time series data.

I hope that this example illustrates the point and helps. I truly believe in the theoretical foundations of the method, but I am concerned about how far we can go with the data provided by wearable devices. If I can help in some way, we can discuss here or you can contact me via email.

All the tests I do are done with lactate at the same time; unfortunately we don’t have permission to publish many of the tests. For now we are not going to publish anything more about this, because we are starting the validation tests with all the results we have, and ethically we will publish only very selectively. My reflection is the following: I think you are very obsessed with the mathematical noise that this may generate. We know it perfectly well and we know the limitations it may have; any model has its limitations, and the important thing is to know them and act accordingly. The Matlab software takes into account several things beyond a linear regression, so it is possible that we evaluate it in a different way.