ExeChecker: Where Did I Go Wrong?
Movement Assessment, Contrastive Learning, Dataset

Abstract

We present ExeChecker, a contrastive-learning-based framework for the interpretation of rehabilitation exercises. Our work builds upon state-of-the-art advances in human pose estimation, graph-attention neural networks, and transformer interpretability. The downstream task is to assist rehabilitation by providing informative feedback to users while they perform prescribed exercises. We use a contrastive learning strategy during training: given a tuple of correctly and incorrectly executed exercises, our model identifies and highlights the joints that are involved in an incorrect movement and thus require the user's attention. We collected an in-house dataset, ExeCheck, with paired recordings of correct and incorrect executions of the same exercises. In our experiments, we evaluated our method on this dataset as well as on the UI-PRMD dataset and found that ExeChecker outperforms a baseline method based on pairwise sequence alignment in identifying joints of physical relevance in rehabilitation exercises.
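To make the training strategy concrete, below is a minimal sketch of a triplet-style contrastive objective over exercise embeddings. It is only an illustration under assumed names (triplet_contrastive_loss, the margin value, the embedding dimension); please refer to the paper for the actual formulation.

import torch
import torch.nn.functional as F

def triplet_contrastive_loss(anchor, positive, negative, margin=1.0):
    # anchor, positive: embeddings of correctly executed exercises;
    # negative: embedding of an incorrect execution. Shapes: (batch, dim).
    d_pos = F.pairwise_distance(anchor, positive)  # correct vs. correct
    d_neg = F.pairwise_distance(anchor, negative)  # correct vs. incorrect
    # Pull correct executions together; push incorrect ones at least
    # `margin` farther away than the correct pair.
    return F.relu(d_pos - d_neg + margin).mean()

# Example with random embeddings (batch of 8, 128-dim, both hypothetical):
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_contrastive_loss(a, p, n).item())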

Open Access Article - ACVR 2024

Y Gu, M Patel, M Betke. ExeChecker: Where Did I Go Wrong? Twelfth International Workshop on Assistive Computer Vision and Robotics (ACVR), in conjunction with ECCV 2024.

Yiwen Gu
Mahir Patel
Margrit Betke

ExeCheck Dataset (Dropbox)

  • The ExeCheck dataset consists of RGB-D videos of 10 rehabilitation exercises performed by 7 healthy subjects. Each exercise is performed by the same subject in paired correct and incorrect forms, with 5 movement repetitions per recording.
  • Inside the Dropbox folder:
    • raw_data: contains the RGB-D videos (.mkv) and skeletal joint information (.json), including both positions and orientations, for each participant. The JSON files were obtained using the Microsoft Azure Kinect Body Tracking SDK; more information about the tracked joints can be found here. A sketch of how these files might be parsed follows this list.
    • processing_scripts: contains meta files (*.txt), repetition annotations (RepSeg.csv), and scripts for generating the dataset from the raw JSON files (gendata.py), splitting it (split_train_val.py), and augmenting it (augmentation.py).
    • processed_dataset: contains the processed (segmented, mirrored) data we used in the paper, for both the ExeCheck and UI-PRMD datasets. Please see our paper for how we processed the UI-PRMD dataset, and cite the original UI-PRMD paper if you use it.
  • The table shown here describes how the incorrect exercises were conducted and lists the corresponding Joints of Attention (JoA). Please see our paper for more details.
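For convenience, here is a minimal sketch of how per-frame 3D joint positions might be read from one of the raw .json files. It assumes the frames -> bodies -> joint_positions schema produced by the Azure Kinect Body Tracking offline processor; the field names and file path below are assumptions, so treat gendata.py as the authoritative reference.

import json
import numpy as np

def load_joint_positions(json_path):
    # Assumed schema (offline-processor style): frames -> bodies ->
    # joint_positions, with 32 joints x 3 coordinates per tracked body.
    with open(json_path) as f:
        data = json.load(f)
    frames = []
    for frame in data["frames"]:
        if not frame["bodies"]:       # skip frames where no body was tracked
            continue
        body = frame["bodies"][0]     # each recording has a single subject
        frames.append(np.asarray(body["joint_positions"], dtype=np.float32))
    return np.stack(frames)           # shape: (num_frames, 32, 3)

positions = load_joint_positions("raw_data/sample_recording.json")  # hypothetical path
print(positions.shape)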

ExeChecker

Overview

Framework
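As described in the abstract, ExeChecker highlights the joints involved in an incorrect movement by examining attention inside the model. The toy snippet below illustrates that general idea only: run self-attention over skeleton-joint features and rank joints by the attention they receive. It is not the paper's architecture, which uses graph-attention layers over the skeleton graph; all names, shapes, and the saliency heuristic are assumptions.

import torch
import torch.nn as nn

num_joints, feat_dim = 32, 64  # Azure Kinect tracks 32 joints; feat_dim assumed
attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)

joint_feats = torch.randn(1, num_joints, feat_dim)   # (batch, joints, features)
_, weights = attn(joint_feats, joint_feats, joint_feats,
                  need_weights=True, average_attn_weights=True)
# weights: (batch, query_joint, key_joint); the column mean approximates
# how much attention each joint receives from all other joints.
saliency = weights.mean(dim=1).squeeze(0)            # (num_joints,)
top_joints = saliency.argsort(descending=True)[:5]
print("Most attended joints:", top_joints.tolist())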


Previous Work: ExerciseCheck

@inproceedings{pandit2019exercisecheck,
  title={Exercise{C}heck: {A} scalable platform for remote physical therapy deployed as a hybrid desktop and web application},
  author={Pandit, Shreya and Tran, San and Gu, Yiwen and Saraee, Elham and Jansen, Frederick and Singh, Saurabh and Cao, Shirene and Sadeghi, Arezoo and Shandelman, Eugenia and Ellis, Terry and others},
  booktitle={Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments},
  pages={101--109},
  year={2019},
  doi={10.1145/3316782.3321537}
}
  • ExerciseCheck is a scalable platform designed and developed for the remote monitoring and evaluation of physical therapy.
  • ExerciseCheck has been deployed as a hybrid desktop and web application at a Boston University rehabilitation clinic and has been employed by physical therapists in their sessions with individuals with Parkinson’s disease.
  • The video clip shown here demonstrates how a clinician uses the ExerciseCheck app to create a reference exercise for a patient, and how the patient then practices the exercise with real-time visual feedback and their reference displayed side by side.
  • We plan to incorporate the ExeChecker visualization into the ExerciseCheck platform for a user study.

Demo