
Graphing the Future: Activity and Next Active Object Prediction using Graph-based Activity Representations.

CoRR (2022)

Other publications of authors with the same name

Hobbit, a care robot supporting independent living at home: First prototype and lessons learned., , , , , , , , , and 1 other author(s). Robotics Auton. Syst., (2016)

VLMAH: Visual-Linguistic Modeling of Action History for Effective Action Anticipation., , , and . ICCV (Workshops), page 1909-1919. IEEE, (2023)

Exploring the Impact of Knowledge Graphs on Zero-Shot Visual Object State Classification., , , , and . VISIGRAPP (2): VISAPP, page 738-749. SCITEPRESS, (2024)

Results of Field Trials with a Mobile Service Robot for Older Adults in 16 Private Households., , , , , , , , , and 12 other author(s). ACM Trans. Hum. Robot Interact., 9 (2): 10:1-10:27 (2020)

Unsupervised co-segmentation of actions in motion capture data and videos. University of Crete, Greece, (2019). National Archive of PhD Theses: oai:10442/47237

Action Prediction During Human-Object Interaction Based on DTW and Early Fusion of Human and Object Representations., , and . ICVS, volume 12899 of Lecture Notes in Computer Science, page 169-179. Springer, (2021)

Complexity based investigation in collaborative assembly scenarios via non intrusive techniques., , , , , and . ISM, volume 217 of Procedia Computer Science, page 478-485. Elsevier, (2023)

Segmentation and classification of modeled actions in the context of unmodeled ones., , and . BMVC, BMVA Press, (2014)

Evaluating Method Design Options for Action Classification based on Bags of Visual Words., , and . VISIGRAPP (5: VISAPP), page 185-192. SciTePress, (2018)

Detection of physical strain and fatigue in industrial environments using visual and non-visual sensors., , , , , and . PETRA, page 270-271. ACM, (2021)