Exploring the potential of computer vision and machine learning in enhancing the functionality of an EMG-controlled prosthetic hand

Published in Harvest, University of Saskatchewan, 2023

Machine learning techniques hold significant potential for developing prosthetic arms that can automatically perform hand gestures and grasp objects. Hands are essential to all vertebrates: many animals use their forelimbs for locomotion, but because humans are bipedal, we use our hands primarily for gripping and general manipulation. People who have lost a hand rely on prostheses, which must therefore be sophisticated enough to perform the functions of a natural hand. Many commercially available electronic prostheses come with sophisticated control methods, yet it may take several months of continuous training for a user to learn to accurately control the prosthetic fingers and perform tasks such as picking up objects. This research aims to alleviate this problem by proposing an automated method for performing hand gestures and grasping objects using computer vision and machine learning.

The feasibility of this approach is first demonstrated by training tree-based classifiers to interpret EMG signals; tree-based models were chosen because they provide a direct measure of feature importance. Of the two tree-based classifiers implemented, the decision tree classifier outperforms the random forest classifier in precision, recall, and F1-score, and the EMG signal from Channel 2 is the most important feature for both models.

Using an RGB-D camera mounted at the base of the gripper, which records observations in discrete steps, this research also demonstrates the effectiveness of machine learning for automated object grasping. Agents were trained with the Soft Actor-Critic (SAC), Deep Q-Network (DQN), and Proximal Policy Optimization (PPO) algorithms. The results show that SAC is the most effective approach for training agents to perform automated grasping, outperforming DQN and PPO in success rate. Agents were trained on three object types (a remote controller, a soap bar, and a mug), and the results show that object shape and size affect the agent's ability to converge to an optimal policy. SAC demonstrated remarkable resilience when tested on diverse environments and objects and across varied hyperparameters. While PPO showed greater adaptability than DQN, it did not match SAC in overall success rate or in its ability to handle diverse scenes and objects. The reasons behind these results are discussed.

The contributions of this thesis are the finding that the second channel of an eight-channel EMG device is the most significant feature when decision tree classifiers are used to interpret EMG signals, and the demonstration that the SAC algorithm has great potential for developing intelligent prosthetic arms with automatic object-grasping capabilities, paving the way for more advanced prosthetics in the future.
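As a rough illustration of the EMG-classification step, the sketch below trains both tree-based classifiers on per-channel features and prints their per-channel importances. The synthetic data, window features, and gesture labels are placeholders, not the thesis's actual dataset or pipeline.

```python
# Hypothetical sketch: classifying hand gestures from 8-channel EMG features
# with scikit-learn tree models and inspecting per-channel importance.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Placeholder data: one feature per EMG channel (e.g., mean absolute value
# over a time window) and integer gesture labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))        # 1000 windows x 8 channels
y = rng.integers(0, 4, size=1000)     # 4 example gesture classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

for name, model in [("decision_tree", DecisionTreeClassifier(random_state=0)),
                    ("random_forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
    # Per-channel importance; the thesis reports Channel 2 as most important.
    for ch, imp in enumerate(model.feature_importances_, start=1):
        print(f"  Channel {ch}: importance={imp:.3f}")
```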

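A minimal sketch of the reinforcement-learning side, assuming Stable-Baselines3 as the SAC implementation: the thesis trains on a custom grasping environment observed through an RGB-D camera at the base of the gripper, so the standard Pendulum-v1 environment and the hyperparameters below are stand-ins for illustration only.

```python
# Minimal Soft Actor-Critic training sketch with Stable-Baselines3.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")  # stand-in for the custom grasping environment

model = SAC(
    "MlpPolicy",        # a CNN policy would be needed for raw RGB-D observations
    env,
    learning_rate=3e-4,  # illustrative hyperparameters, not the thesis's values
    buffer_size=100_000,
    verbose=1,
)
model.learn(total_timesteps=50_000)

# Roll out the learned policy for one episode.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```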
Recommended citation: Odeyemi, J. (2023). Exploring the potential of computer vision and machine learning in enhancing the functionality of an EMG-controlled prosthetic hand. Master's thesis, University of Saskatchewan.
Download Paper