Item title:
Action based activities prediction by considering human-object relation
For effective human-robot collaboration, it is important that an assistive robot be able to forecast human actions. A new action recognition method for anticipating human activities on the basis of visual observation is presented. The spatio-temporal human-object relation, taking into account so-called affordances, is analyzed, and the action features are defined. We also deliver an RGB-D activity dataset obtained using new Senz3D vision sensors. To demonstrate the effectiveness of the proposed approach, we discuss experiments summarizing the anticipation results obtained on two different datasets.
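The abstract only sketches the idea of spatio-temporal human-object relation features. As a purely illustrative example (not the authors' actual feature definition), the snippet below computes simple hand-object distance and approach-rate statistics from tracked 3D positions, the kind of cue an RGB-D sensor such as the Senz3D could provide; the function name, the choice of a hand joint and an object centroid, and the feature set are assumptions made for this sketch.

```python
import numpy as np

def human_object_relation_features(hand_xyz, obj_xyz, fps=30.0):
    """Toy spatio-temporal human-object relation features (illustrative only).

    hand_xyz, obj_xyz: (T, 3) arrays of per-frame 3D positions, e.g. a tracked
    hand joint and an object centroid from an RGB-D sensor. Returns a small
    vector of distance and approach-rate statistics over the window.
    """
    hand_xyz = np.asarray(hand_xyz, dtype=float)
    obj_xyz = np.asarray(obj_xyz, dtype=float)

    # Frame-wise Euclidean hand-object distance (spatial relation).
    dist = np.linalg.norm(hand_xyz - obj_xyz, axis=1)

    # Approach rate: negative time derivative of the distance (temporal
    # relation); positive values mean the hand is moving towards the object,
    # hinting at an upcoming interaction.
    approach = -np.gradient(dist) * fps

    return np.array([
        dist.mean(), dist.min(), dist[-1],  # how close, on average and at the end
        approach.mean(), approach.max(),    # how fast the hand approaches
    ])

if __name__ == "__main__":
    # Synthetic example: hand moves along -x towards a static object at the origin.
    t = np.linspace(0.0, 1.0, 30)[:, None]
    hand = np.hstack([1.0 - t, np.zeros_like(t), np.zeros_like(t)])
    obj = np.zeros((30, 3))
    print(human_object_relation_features(hand, obj))
```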