Department Seminar of Eran Bamani: Intent Recognition in Natural Human-Robot Collaboration

17 February 2025, 14:00 - 15:00 
 
Intent Recognition in Natural Human-Robot Collaboration

 

Monday, February 17, 2025, at 14:00

Wolfson Building of Mechanical Engineering, Room 206

Abstract:

Human-robot collaboration relies on the ability of robots to intuitively recognize and respond to natural human gestures: non-verbal cues that convey intent and directives. Gestures such as pointing or holding an object play a crucial role in enabling seamless interaction between humans and robots in shared tasks. Key challenges in this domain include variability in environments, differences among users, and the need for robust systems that operate under dynamic conditions. Addressing these challenges is essential for advancing human-robot collaboration across fields such as healthcare, search and rescue, and industrial automation.

In this research, we proposed innovative frameworks to address key challenges in intent recognition. First, we developed a wearable Force-Myography (FMG) based system for recognizing objects held by users, utilizing the novel Flip-U-Net architecture for robust performance across diverse conditions and multi-user environments. Second, we introduced a framework for robust recognition and estimation of pointing gestures using a single web camera, leveraging a lightweight segmentation-based model to accurately detect gestures and estimate their position and direction. Third, we presented the Ultra-Range Gesture Recognition (URGR) framework, combining a High-Quality Network (HQ-Net) for super-resolution with a Graph-Vision Transformer (GViT) for gesture classification, enabling recognition at distances of up to 28 meters. Fourth, we developed the Diffusion in Ultra-Range (DUR) framework to generate high-fidelity synthetic datasets for training gesture recognition models, addressing data scarcity and improving performance across diverse scenarios. Finally, we introduced a robust dynamic gesture recognition framework based on the SlowFast-Transformer model, which achieves high accuracy in challenging conditions such as low light and occlusions, further advancing gesture recognition toward real-world deployment.
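
For readers unfamiliar with two-stage recognition pipelines, the following is a minimal PyTorch sketch of the ultra-range idea described above: first enhance a low-resolution crop of a distant user, then classify the gesture from the enhanced image. The modules SuperResolutionStub and GestureClassifierStub are hypothetical stand-ins, not the actual HQ-Net or GViT architectures, whose details are not given in this abstract.

import torch
import torch.nn as nn

class SuperResolutionStub(nn.Module):
    # Hypothetical stand-in for HQ-Net: upsamples a low-resolution crop
    # and applies a light refinement convolution.
    def __init__(self, scale: int = 4):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode="bilinear",
                                    align_corners=False)
        self.refine = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.refine(self.upsample(x))

class GestureClassifierStub(nn.Module):
    # Hypothetical stand-in for GViT: maps the enhanced image to
    # per-gesture logits via a small convolutional backbone.
    def __init__(self, num_gestures: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_gestures)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def recognize_gesture(crop, sr_model, classifier):
    # Two-stage pipeline: enhance the distant crop, then classify it.
    with torch.no_grad():
        enhanced = sr_model(crop)
        logits = classifier(enhanced)
    return logits.argmax(dim=1)

if __name__ == "__main__":
    sr = SuperResolutionStub().eval()
    clf = GestureClassifierStub().eval()
    low_res_crop = torch.rand(1, 3, 32, 32)  # e.g., a distant user cropped from a webcam frame
    print(recognize_gesture(low_res_crop, sr, clf))

Decoupling enhancement from classification in this way lets each stage be trained and evaluated independently, which is one plausible motivation for the two-network design described in the abstract.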

 

Bio:

Eran Bamani Beeri is a PhD candidate at the School of Mechanical Engineering, Tel Aviv University, under the supervision of Dr. Avishai Sintov. His research focuses on deep learning, computer vision, and human-robot interaction, aiming to develop scalable frameworks for natural and intuitive human-robot collaboration. Eran holds a B.Sc. and an M.Sc. in Electronic Engineering, with a specialization in image and signal processing. He has extensive experience in research and development in the fields of medical image processing, trajectory estimation, and gesture recognition, and his work has been published in leading journals. Eran is expected to graduate in March 2025 and will begin as a postdoctoral associate at MIT’s Lab 77, working on rehabilitation robotics within the broader field of human-robot collaboration.
