FormCheck: Personal Trainer Assistant That Fixes Your Exercise Form
Like many males in their 20s, I decided I wanted to make some muscle gains, so I started lifting. However, as a student, I was too poor/frugal to afford a personal trainer; the national average cost of hiring one is $80 to $125 per hour. So, with a lot of newbie enthusiasm, I jumped straight into lifting without really knowing proper form or what to look for when trying to perfect it. Eventually, I hurt my lower back from lifting improperly and had to take a break from the gym. Then, when I took Electronics Design Lab and our professor announced that we could work on any project we wanted, I immediately thought of creating a system that gives beginner lifters quick, informative feedback on how to correct their lifting form. This idea eventually evolved from a small group project for a class assignment into a free trip to China, competing against many other makers for prizes.
During the first stage of this project, when it was just a school assignment, I worked with two other undergraduate students, Luke Robinson and Jack Ravekes, to refine the idea, define the objectives, build the system, and test it. It was great working with these two, as Luke knew electronics well and Jack was both a personal trainer and an engineer. Later that summer, my friend Thomas Keady and I further developed the project to compete in the 2019 China-US Young Maker Competition in Beijing. We had already competed in a hackathon together (previous project: YoloBike), so we knew we worked well together.
Recognize speech and respond (voice user interface):
The user should be able to interact with the system by speaking to it and receiving meaningful audible responses.
Detect improper form for squats and deadlifts:
Squat Errors to Detect:
Not reaching proper squat depth. Knee flexion (the angle between the upper and lower leg) for both knees at the bottom of the squat should be between 70 and 100 degrees.
Knees caving in. When viewing the lifter from the front, the horizontal distance between the center of the lifter's knee and the center of the lifter's ankle must be less than 10 cm.
Lifter should be distributing the weight of the load evenly between both legs. An imbalance error occurs if the difference in applied force between the two legs is greater than 20%.
The lifter's back should maintain its natural arch. From the top to the bottom of the rep, the bar cannot move more than 20 cm forward. (We had to use the bar path rather than the Kinect's reported spine positions, because the Kinect cannot actually see the person's back, so its spine estimates were inaccurate.)
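Most of these checks reduce to simple geometry on the Kinect's tracked joint positions. As a rough sketch (not the project's actual code, which isn't published), the knee-flexion depth check could look like this in C++, with the `Vec3` type and function names being my illustration:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Angle at the knee (degrees) between the thigh (knee->hip) and
// shank (knee->ankle) vectors, from tracked 3D joint positions.
float kneeFlexionDeg(const Vec3& hip, const Vec3& knee, const Vec3& ankle) {
    Vec3 thigh{hip.x - knee.x, hip.y - knee.y, hip.z - knee.z};
    Vec3 shank{ankle.x - knee.x, ankle.y - knee.y, ankle.z - knee.z};
    float dot  = thigh.x * shank.x + thigh.y * shank.y + thigh.z * shank.z;
    float magT = std::sqrt(thigh.x * thigh.x + thigh.y * thigh.y + thigh.z * thigh.z);
    float magS = std::sqrt(shank.x * shank.x + shank.y * shank.y + shank.z * shank.z);
    return std::acos(dot / (magT * magS)) * 180.0f / 3.14159265f;
}

// Depth check at the bottom of the squat: flexion should land in 70-100 degrees.
bool depthOk(float flexionDeg) {
    return flexionDeg >= 70.0f && flexionDeg <= 100.0f;
}
```

A fully straight leg gives about 180 degrees with this convention, and the angle shrinks as the lifter descends, so the other thresholds (knee cave-in, bar path) would be similar per-frame comparisons.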
Deadlift Errors to Detect:
Improper starting and ending position. Knee flexion (the angle between the upper and lower leg) when starting and ending the deadlift should be 90-100 degrees.
Neck should be straight to maintain a neutral spine.
Bar should be less than 10 cm from the lifter's legs during the deadlift.
At the top of the deadlift, the lifter's head should not go more than 5 cm backwards or his/her back would be bent.
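Like the squat rules, these deadlift checks boil down to a few per-frame threshold comparisons. Here's a hypothetical sketch in C++; the struct, field names, and error strings are my illustration of the data model, not the project's actual code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// One frame's worth of measurements (illustrative fields):
// kneeFlexionDeg comes from the tracked hip/knee/ankle joints,
// the distances from the tracked bar and head positions.
struct DeadliftFrame {
    float kneeFlexionDeg;  // angle between upper and lower leg
    float barToLegCm;      // horizontal distance from bar to the legs
    float headTravelCm;    // how far the head has moved backwards
};

// Apply the thresholds listed above and collect any violations.
std::vector<std::string> deadliftErrors(const DeadliftFrame& f) {
    std::vector<std::string> errs;
    if (f.kneeFlexionDeg < 90.0f || f.kneeFlexionDeg > 100.0f)
        errs.push_back("knee flexion outside 90-100 degrees");
    if (f.barToLegCm >= 10.0f)
        errs.push_back("bar drifting away from the legs");
    if (f.headTravelCm > 5.0f)
        errs.push_back("head moving backwards at the top");
    return errs;
}
```

In practice each check would only run during the relevant phase of the rep (the knee-flexion bound at the start/end positions, the head check at the top).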
FormCheck companion app:
Through the app, the user should be able to track his/her rep statistics (number of good/bad reps per day per exercise) over time. These rep statistics should be automatically updated whenever the user completes a rep. Also through the app, the user should be able to view tutorials and other information on how to properly do the exercise and learn about additional exercises.
We wanted a monitor where the user could see a camera-view and skeleton-representation of himself/herself lifting. Further, we wanted to give the user the ability to replay a video of his/her previous rep, so that if the user did a rep incorrectly, he/she could view it and make the correction.
(demo begins at 1:35 in the video)
*I would like to make a better video in the future, but I threw out the balance board, PCB, and foam barbell because I didn't have enough room in my luggage :( *
We ran out of time to clean up this visual feedback screen. Ideally it would have a nice graphical display without showing the code editor in the back. Also, that camera video shows both real-time and historical reps, depending on whether the user wants to play a replay.
Thomas and I submitted this project to the 2019 China-US Young Maker Competition which was hosted in Beijing and organized by Google, Chinese Service Center for Scholarly Exchange, Tsinghua University, Beijing Gehua Cultural Development Group, and China University Science and Technology Park Alliance.
We were excited to be picked as finalists and we flew to Beijing in July 2019. Google paid for our flights, hotels, visas, and food. The competition was insanely well-organized with media and posters everywhere.
Thomas and I arrived two days earlier with the intent to have more time to do touristy things, but instead, we barely slept and just coded throughout all of the days and nights leading up to the judging day. If we do this competition again, we'll definitely finish our project before arriving in China!
That work paid off: we presented through two rounds of judging and ended up tied for second place! We got about $7,500 in prize money :D (though we lost more than 50% of it to taxes, ugh)
Other Beijing Stuff:
Besides the competition, Google brought us to the Great Wall of China and took us out for Peking Duck!
After judging, we had an afternoon and night to ourselves, so Thomas and I explored the Forbidden City. Later that night, we hung out at a rooftop bar with a bunch of friends we made during the competition.
This trip was soooo much fun! Even with the stress of getting a project to work (it wasn't fully working until like 20 minutes before judging) and pitching it to many judges, I would 100% compete in this competition again.
Thank you to all the organizers that made this happen!
How We Made It - Software:
This entire project was written in C++ with the Kinect for Windows SDK 2.0. The Kinect is surprisingly powerful with its ability to do gesture, emotion, and voice recognition.
The logic follows this flowchart that I created below:
I'm not going to publish the code yet because we may want to come back to this project in an entrepreneurial manner in the future.
How We Made It - Electrical:
Why we couldn't simply use the Kinect to detect weight imbalances:
When squatting, the lifter should be distributing the weight of the load evenly between both legs to prevent injuries and to prevent an imbalance of muscle gain. Originally, we thought that we would be able to visually tell if someone was off-balanced, but we did a simple (and maybe inaccurate) test to see how large the percent difference in force applied between the legs has to be before we can visually tell that the person is imbalanced. The result was very unintuitive and interesting to me.
Also, I know the test below is janky but it was only used as an initial experiment and it wouldn't be an undergraduate engineering project if it wasn't a little janky.
Note: Ideally we would have used two equivalent analog bathroom scales, but we only had two different digital scales. When we put the same weight on each one, the clear one displayed a weight that was 3 lbs heavier. We forgot to account for this in the percent-difference calculations, but the point still stands.
Jack, the guy squatting in the video, is a personal trainer, so he has pretty good squatting form. During his non-weighted squats, we saw that even when he wasn't trying to have an imbalance of weight, the difference in applied force between his legs peaked at 16% during the squat. We also did another test where he tried to be slightly imbalanced on purpose, but not so much that we could visually tell. When he did this, the difference in applied force peaked at 27%. So we would need a threshold somewhere around 16-27% weight imbalance, past which we would tell the lifter to shift his/her weight to the other side. And because we need to detect force imbalances of 16-27% and we can't visually tell when they occur, the Kinect wouldn't be able to either. We needed some other sensor to measure the force applied through each leg. We first tried force-sensing resistors (FSRs), but we quickly realized how bad their part-to-part repeatability was. So we decided to use a Wii Balance Board, because it's actually meant to measure someone's balance and because it uses load cells rather than FSRs. Further, we found that used ones are extremely cheap.
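For concreteness, "percent difference" can be defined a few ways (relative to the total, the mean, or the larger leg). Here's one plausible version in C++, normalizing by the total load; the exact normalization is my assumption, and the 16-27% threshold discussion works the same way regardless:

```cpp
#include <cassert>
#include <cmath>

// Percent imbalance between the two legs, defined here as the force
// difference relative to the total load. (The normalization choice is
// an assumption, not necessarily the one used in the scale test.)
float imbalancePercent(float leftForce, float rightForce) {
    float total = leftForce + rightForce;
    if (total <= 0.0f) return 0.0f;  // nobody on the board
    return std::fabs(leftForce - rightForce) / total * 100.0f;
}
```

A 60/40 split between the legs comes out to a 20% imbalance under this definition, which would sit right around the detection threshold.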
I bought it for $3.40 + $11.06 shipping! Unlike FSRs, Wii Balance Boards use load cells, which work on a similar principle but have much better repeatability and accuracy. The downsides are that load cells are much bigger, more expensive, and require a Wheatstone bridge and op-amp to convert the small change in resistance into a larger, readable voltage.
Unfortunately, the load cell amplifier chip we wanted to use, the HX711, is manufactured in China and not available through Digikey, Mouser, or any other American suppliers. To get these chips within a week, we would have had to pay $40 for fast shipping from China. Instead, we designed our PCB to accept HX711 breakout boards. Since the Wii Balance Board has four load cells, we needed four HX711 breakout boards and an ATmega328 for the additional GPIO pins.
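One small gotcha when reading the HX711: it shifts out a 24-bit two's-complement conversion result, which has to be sign-extended before the ATmega328 (or anything else) can treat it as a normal signed number. A sketch of just that conversion step (the bit-banged clocking of the 24 bits off the chip is omitted):

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend a raw 24-bit two's-complement HX711 reading into a
// 32-bit signed integer.
int32_t hx711ToSigned(uint32_t raw24) {
    if (raw24 & 0x800000u)        // 24-bit sign bit set?
        raw24 |= 0xFF000000u;     // extend the sign into the top byte
    return (int32_t)raw24;
}
```

With this, 0xFFFFFF correctly maps to -1 instead of 16,777,215, which matters because a tared load cell idles near zero and wanders slightly negative.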
This is what the PCB ended up looking like! The matte black looks sick in real life.
This is what the modified Wii Balance Board looked like with our PCB and the balance board's load cells connected. We mounted the frame onto a sheet of wood because the Wii Balance Board was a bit too small to accommodate wide squatting stances.
This was definitely one of my favorite projects. It was a lot of work, but it paid off in the end. At the time of writing this, I'm still on my post-Beijing high and I just can't believe what an amazing experience this project and the competition was.
One question I got a lot was "how's the accuracy?" The system is really good at detecting "drastic" form issues, catching these errors almost 100% of the time. However, it doesn't work as well when the user makes slight form errors, and it's difficult to quantify the tolerance at which a "slight" form error turns into a "drastic" one.
Theoretically, we could set a tolerance: for instance, declare that the user hasn't squatted deep enough if the angle between his/her upper and lower legs is greater than 100 degrees. With this approach, we could ask a person to squat to 115 degrees, then 110, 105, 100, 95, and 90 degrees, many times for each angle, and then calculate the sensitivity and specificity of the system to quantify its performance. The issue is that it's very difficult for a person to repeatedly squat to the exact same angle every time, and we would need another system to establish the ground truth (i.e., accurately measure the angle between the person's upper and lower legs) to compare the Kinect's performance against, but this secondary system would have its own measurement variance as well. Also, there's no published definitive guide to good and bad form. Is 90 degrees good form? What about 91? 92? It's hard to declare that one angle is good but being one degree off is bad.
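For reference, the sensitivity/specificity computation itself is straightforward once you have confusion-matrix counts; a generic sketch, not tied to any actual FormCheck data:

```cpp
#include <cassert>
#include <cmath>

// Confusion-matrix counts for one error class, e.g. "insufficient depth":
// tp = bad reps flagged, fn = bad reps missed,
// tn = good reps passed, fp = good reps wrongly flagged.
struct Confusion { int tp, fp, fn, tn; };

// Sensitivity: fraction of truly bad reps the system catches.
double sensitivity(const Confusion& c) { return (double)c.tp / (c.tp + c.fn); }

// Specificity: fraction of truly good reps the system passes.
double specificity(const Confusion& c) { return (double)c.tn / (c.tn + c.fp); }
```

The hard part, as described above, is getting trustworthy ground-truth labels for each rep, not the arithmetic.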
I think the best approach would be to test the system against several personal trainers and use the majority's opinion on someone's form as the ground truth. For instance, we could get several people of varying squatting experience to squat and have the personal trainers record what error, if any, they saw. Then we'd compare FormCheck's output to the trainers' majority classification of each squat.