AI-based Activity Recognition “In the Wild” for Activation of Elderly People in Home Care Situations
Supervisors: Astrid Laubenheimer (HKA)
Faculty: Computer Science and Business Information Systems (HKA)
Institute: Intelligent Systems Research Group Institute (ISRG – HKA)
Elderly people with care needs and their relatives are usually dependent on voluntary and semi-professional support services. If these can no longer be taken advantage of - for example during a pandemic - this puts a strain on the often already fragile care arrangements. At the same time, limited contact leads to an increased risk of social isolation and, in consequence, may lead to self-neglect and even self-abandonment. Future assistance systems for home care arrangements are intended to alleviate this situation, for example through the targeted activation of these people, e.g. by motivating them to get in contact with other people or to pursue certain activities.
Activation through an interactive system, however, first requires robust perception of the activities that are already being carried out. Although there is a lot of work on activity recognition based on visual, acoustic and other sensor data, existing approaches tend to fail in the described situation due to certain structural requirements: a strong limitation on the number of sensors, constraints on their positioning - e.g. the retrofitting of visual sensors in an existing home is generally limited to the ceiling - and the fact that the observed situation is "in the wild", which entails a huge variety of environments. Moreover, many relevant activities involve quite similar body movements, such as "brushing teeth" and "shaving". Activities like these are not located "out of distribution" but rather "in between two distributions", which limits the usability of existing approaches in the described scenarios.
In this PhD project, the candidate is encouraged to define their own AI-based PhD topic, which should ultimately lead to better performance of existing and future multimodal (visual and acoustic) sensor systems in real environments. For reasons of robustness and to keep the proposed method independent of the specific sensors, high-level approaches such as 3D-keypoint- or 3D-model-based data analysis should be investigated.
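To give a flavour of what "3D-keypoint-based data analysis" can mean in practice (this is an illustration only, not part of the project description): pose sequences, i.e. per-frame 3D joint positions, can be reduced to sensor-independent motion features and then classified. The sketch below uses entirely synthetic data and a deliberately simple nearest-centroid classifier; the activity names, feature choice and data shapes are assumptions for illustration.

```python
import numpy as np

def velocity_features(seq):
    """Mean per-joint speed over a 3D keypoint sequence.

    seq: array of shape (T, J, 3) -- T frames, J joints, xyz coordinates.
    Returns a feature vector of length J (one mean speed per joint).
    """
    diffs = np.diff(seq, axis=0)             # (T-1, J, 3) frame-to-frame motion
    speeds = np.linalg.norm(diffs, axis=2)   # (T-1, J) per-joint speed
    return speeds.mean(axis=0)               # (J,)

def nearest_centroid(train_feats, train_labels, query_feat):
    """Classify by distance to the per-class mean feature vector."""
    labels = sorted(set(train_labels))
    centroids = {
        l: np.mean([f for f, y in zip(train_feats, train_labels) if y == l], axis=0)
        for l in labels
    }
    return min(labels, key=lambda l: np.linalg.norm(query_feat - centroids[l]))

# Synthetic data (hypothetical): "waving" oscillates joint 4, "standing" barely moves.
rng = np.random.default_rng(0)

def make_seq(fast_joint=None, T=30, J=5):
    seq = np.cumsum(rng.normal(0, 0.01, (T, J, 3)), axis=0)  # small random drift
    if fast_joint is not None:
        seq[:, fast_joint, 0] += np.sin(np.linspace(0, 6 * np.pi, T))  # oscillation
    return seq

train = [(velocity_features(make_seq(fast_joint=4)), "waving") for _ in range(5)] + \
        [(velocity_features(make_seq()), "standing") for _ in range(5)]
feats, labels = zip(*train)
pred = nearest_centroid(list(feats), list(labels),
                        velocity_features(make_seq(fast_joint=4)))
```

A real system would of course replace the hand-crafted feature and nearest-centroid step with learned models (e.g. in PyTorch or TensorFlow), but the key point carries over: operating on 3D keypoints rather than raw pixels or audio decouples the recognition method from the specific sensor setup.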
Desired qualifications of the PhD student:
University degree (M.Sc.) with excellent grades in Computer Science or related fields
Strong programming skills in at least one programming language (preferably Python and with experience in TensorFlow, PyTorch or similar)
Strong interest in all, and good knowledge of at least one of the following: deep learning, computer vision, activity recognition, 3D/4D modelling of humans
Good English language skills (your responsibilities include writing publications and giving international presentations)
Interest in human centered technology and AI-based assistive technologies
The student should be a team player, have good communication skills and take on responsibility in the research team of the ISRG and the KATE program.