Social and ethical consequences of being excluded from everyday AI technologies
Supervisors: Matthias Wölfel (HKA), Armin Grunwald (KIT), Linda Nierling (KIT)
Faculty: Humanities and Social Sciences
The social and ethical consequences of using artificial intelligence (AI) technology to support persons with special needs represent a complementary dimension to the technology perspective and must be properly investigated in the assessment of AI technologies. This also has a practical rationale, since more and more AI technology is applied in semi-public and public spaces such as airports and museums, but also in public transport and work settings. A comprehensive assessment of such “everyday AI technologies” must address not only how progress in AI can support persons with special needs, but also how AI technologies have the potential to contribute to further excluding them.
An example of such exclusion is the fact that AI systems rely on large amounts of data reflecting the properties of the “average human”. People with special needs, like many others, do not necessarily fit within the collected data and belong to the class of “outliers”. Consequently, AI systems often do not work properly for people with disabilities. For instance, AI-based input modalities such as speech recognition may fail to detect words, biometric access control may not grant authorization (e.g., fingerprint or palm-vein scanning cannot work if one has no hands), or autonomous cars might not detect persons due to their unexpected behavior.
In order to contribute to the inclusion of people with special needs rather than risking their exclusion, AI-based assistance systems must be designed to help overcome social barriers. For instance, a dialog system might be too complex and need to be adjusted so as not to overburden people with cognitive impairments. AI solutions therefore require a contextual and participatory analysis in order to be inclusive of people with disabilities. In this PhD project, such an analysis will be performed by assessing the exclusive and inclusive potential of “everyday AI technologies” in context-specific settings. This will be accomplished through a case analysis involving users as well as other relevant actors, evaluating how each case responds to the special needs of people with disabilities while respecting their everyday living conditions and social networks. The overall goal of this case analysis is to develop orientation and guidance for the future design of AI technologies in order to prevent exclusion through AI.
Methodologically, the case analysis can draw on user-oriented approaches to technology development, e.g., constructive technology assessment or value-sensitive design. Since the use cases are taken from real-life contexts, methods from real-world labs focusing on AI and digitalization may also be used and further developed.
The aim of this PhD study is to provide input for future AI expert committees and to draw their attention to people with special needs.
In this PhD project, the research fields include:
To analyse potential risks of exclusion arising from AI technologies applied and used in everyday settings.
To perform a context-specific and participatory case analysis in order to develop guidance for the barrier-free design of AI systems.
To develop a method for assessing AI systems for people with disabilities in real-world settings.
Desired qualifications of the PhD student:
University degree (M.Sc.) with excellent grades in social sciences, human-computer interaction, STS or related fields
Good English language skills (your responsibilities include writing publications and giving international presentations)
High interest in performing research in an interdisciplinary setting