dc.description.abstract |
Electromyography (EMG) based trans-radial prostheses have revolutionized the
prosthetic industry through their ability to control a robotic hand using human intention.
Although recently developed EMG-based prosthetic hands can classify a significant
number of wrist motions, classifying grasping patterns in real time remains challenging.
Moreover, wrist motions alone cannot enable a prosthetic hand to grasp objects
properly without an appropriate grasping pattern. Combining EMG with vision has
addressed this problem to a certain extent; however, such approaches have not
achieved significant real-time performance.
This study proposed a vision-EMG fusion method that improves the real-time prediction
accuracy of the EMG classification system by merging a probability matrix that
represents how likely each of six grasping patterns is to be used for the targeted object.
The You Only Look Once (YOLO) object detection algorithm was used to retrieve the
probability matrix of the identified object, which was then applied through Bayesian
fusion to correct classification errors in the EMG classification system. Experiments
were carried out to collect EMG data from six muscles of 15 subjects during grasping
actions for classifier development. In addition, an online survey was conducted to
collect the data needed to calculate the conditional probability matrix for each selected
object. Finally, five optimized supervised learning EMG classifiers, namely Artificial
Neural Network (ANN), K-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA),
Naive Bayes (NB), and Decision Tree (DT), were compared to select the best classifier for fusion.
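The abstract does not spell out the fusion step itself; a minimal sketch of one common Bayesian (product-rule) fusion of the EMG classifier's class probabilities with the object-conditional grasp-pattern prior could look like the following. The function name, the six-pattern ordering, and the example numbers are assumptions for illustration, not the thesis's exact implementation.

    import numpy as np

    def bayesian_fusion(p_emg, p_vision_prior, eps=1e-9):
        # p_emg          : shape (6,), class probabilities from the EMG classifier
        # p_vision_prior : shape (6,), grasp-pattern probabilities for the object
        #                  identified by YOLO (survey-derived conditional matrix)
        fused = np.asarray(p_emg) * np.asarray(p_vision_prior)  # product rule
        return fused / max(fused.sum(), eps)                    # renormalize to a distribution

    # Hypothetical example: EMG weakly favours pattern 3, but the detected object
    # makes pattern 4 far more likely a priori, so the fused result shifts to pattern 4.
    p_emg = np.array([0.05, 0.10, 0.45, 0.30, 0.05, 0.05])
    p_prior = np.array([0.02, 0.03, 0.10, 0.70, 0.10, 0.05])
    print(bayesian_fusion(p_emg, p_prior))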
The real-time experimental results revealed that the ANN outperformed the other
classifiers, achieving the highest mean True Positive Rate (mTPR) of M = 72.86%
(SD = 17.89%) across all six grasping patterns. Furthermore, the user features identified
in the experiment (age, gender, and handedness) were shown to increase the mTPR of
the ANN by M = 16.05% (SD = 2.70%). The proposed system takes M = 393.89 ms
(SD = 178.23 ms) to produce a prediction, so users did not perceive a delay between
intention and execution. Furthermore, the proposed system allowed the user to apply
multiple suitable grasping patterns to a single object, as in real life. Future work should
extend the system to include wrist motions and evaluate it on amputees. |
en_US |