StairNet: visual recognition of stairs for human–robot locomotion

dc.contributor.author: Kurbis, Andrew Garrett
dc.contributor.author: Kuzmenko, Dmytro
dc.contributor.author: Ivanyuk-Skulskiy, Bogdan
dc.contributor.author: Mihailidis, Alex
dc.contributor.author: Laschowski, Brokoslaw
dc.date.accessioned: 2024-03-15T12:09:24Z
dc.date.available: 2024-03-15T12:09:24Z
dc.date.issued: 2024
dc.description.abstract: Human–robot walking with prosthetic legs and exoskeletons, especially over complex terrains, such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to develop the StairNet initiative to support the development of new deep learning models for visual perception of real-world stair environments. In this study, we present a comprehensive overview of the StairNet initiative and key research to date. First, we summarize the development of our large-scale data set with over 515,000 manually labeled images. We then provide a summary and detailed comparison of the performances achieved with different algorithms (i.e., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks), training methods (i.e., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images), and deployment methods (i.e., mobile and embedded computing), using the StairNet data set. Finally, we discuss the challenges and future directions. To date, our StairNet models have consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference speeds up to 2.8 ms. In comparison, when deployed on our custom-designed CPU-powered smart glasses, our models yielded slower inference speeds of 1.5 s, presenting a trade-off between human-centered design and performance. Overall, the results of numerous experiments presented herein provide consistent evidence that StairNet can be an effective platform to develop and study new deep learning models for visual perception of human–robot walking environments, with an emphasis on stair recognition. This research aims to support the development of next-generation vision-based control systems for robotic prosthetic legs, exoskeletons, and other mobility assistive technologies. [en_US]
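Note: to make the pipeline summarized in the abstract concrete, the sketch below shows one plausible way to train a lightweight stair-recognition image classifier and export it for on-device inference. This is an illustration only, not the authors' code: the MobileNetV2 backbone, the four class count, the directory path "stairnet/train", and all hyperparameters are assumptions rather than details taken from the paper or this record.

import tensorflow as tf

NUM_CLASSES = 4        # hypothetical, e.g. level-ground, stairs up, stairs down, transition
IMG_SIZE = (224, 224)  # assumed input resolution

# Load labeled frames from a directory tree (path is hypothetical; the actual
# StairNet data pipeline is not described in this record).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "stairnet/train", image_size=IMG_SIZE, batch_size=32)

# Lightweight MobileNet-style backbone pretrained on ImageNet,
# with a new classification head for the stair classes.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # train only the head first; fine-tune later if needed

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert to TensorFlow Lite for mobile (GPU/NPU-accelerated) inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
with open("stairnet_classifier.tflite", "wb") as f:
    f.write(converter.convert())

Post-training quantization of this kind is one common way to shrink a model for mobile targets; the accuracy-versus-size trade-offs mentioned in the abstract would be explored by swapping backbones (e.g., 3D CNN, CNN-LSTM, or ViT variants) within a setup like this.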
dc.identifier.citation: StairNet: visual recognition of stairs for human–robot locomotion / Andrew Garrett Kurbis, Dmytro Kuzmenko, Bogdan Ivanyuk-Skulskiy, Alex Mihailidis, Brokoslaw Laschowski // BioMedical Engineering OnLine. - 2024. - Vol. 23. - Article number: 20. - https://doi.org/10.1186/s12938-024-01216-0 [en_US]
dc.identifier.issn: 1475-925X
dc.identifier.uri: https://doi.org/10.1186/s12938-024-01216-0
dc.identifier.uri: https://ekmair.ukma.edu.ua/handle/123456789/28283
dc.language.iso: en [en_US]
dc.relation.source: BioMedical Engineering OnLine [en_US]
dc.status: first published [uk_UA]
dc.subject: Computer vision [en_US]
dc.subject: Deep learning [en_US]
dc.subject: Wearable robotics [en_US]
dc.subject: Prosthetics [en_US]
dc.subject: Exoskeletons [en_US]
dc.subject: article [en_US]
dc.title: StairNet: visual recognition of stairs for human–robot locomotion [en_US]
dc.type: Article [uk_UA]
Files
Original bundle
Name: StairNet visual recognition of stairs for human_robot locomotion.pdf
Size: 2.63 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission