About us

We are a group of researchers who aim to gain better insight into human behaviour and its underlying mechanisms in an increasingly complex world. We work in multiple areas of human factors, such as highly automated driving, the use of collaborative robotic systems, the automation of ships, and new software solutions. We provide fundamental as well as applied findings from a psychological and cognitive science perspective, with a focus on mental workload, adaptability, acceptance, and cognitive functions, as well as their influencing factors. Our findings form a basis for the human-centred development of new technologies and for measures that support users in interacting with new systems. Our work is funded by various national and international research grants as well as by business partners.


Blog

WHAT'S NEW

03.06.2025

IEEE CogSIMA 2025: A Week of Insightful Research and Inspiring Conversations

From June 2nd to 5th, our working group had the pleasure of attending the IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), hosted this year at the Mercator Haus of the University of Duisburg-Essen.

It was a special experience to take part in such a dynamic international conference right here in Duisburg. The program offered a wide range of talks, panels, and discussions on cognitive systems, intelligent technologies, and human-centered design. Beyond the academic exchange, it was a joy to reconnect with colleagues, engage in new conversations, and be part of an interdisciplinary community working toward safer, more adaptive, and more trustworthy systems. 

Our team contributed three papers to this year’s conference. The first contribution was presented by Eva Gößwein and explored how the acceptance of an eco-driving app is influenced by user trust, environmental awareness, and cognitive flexibility. The second contribution by Verena Staab, presented by Magnus Liebherr, examined how adaptable AI systems can foster trust, fairness, and user control, which are key aspects of responsible AI design. The third contribution, presented by Markus Nieradzik, introduced a collaborative robot system for semi-automated liquid cargo handling in inland shipping, with a focus on safety and situational awareness in human-robot interaction. 

A special thanks goes to all co-authors, collaborators, and the conference organizers for making CogSIMA 2025 such a successful and enriching event. We are returning with valuable insights and new inspiration, and we look forward to continuing the conversations and collaborations sparked during these exciting days in Duisburg. 

03.06.2025

Understanding the Factors Supporting Eco-Driving Decision-Making: How Technology, an Ecological Mindset, and Cognitive Flexibility Promote Sustainable Driving

How do we get people to drive more sustainably? That’s the question we tackled in one of our contributions to CogSIMA 2025. 

In our study, we looked at how people respond to an eco-driving app and what influences their willingness to actually use it.

Beyond technical factors like ease of use and usefulness, we found that trust in the technology, a person's environmental mindset, and their cognitive flexibility all play a major role in shaping sustainable driving intentions.

It was exciting to bring together insights from psychology and human-technology interaction and to see how mindset and design come together to support more sustainable behavior on the road. This contribution adds to the growing conversation on how digital tools can help us meet climate goals. 

The paper was co-authored by Eva Gößwein, Julia Braun, Jana Thin, and Magnus Liebherr.

03.06.2025

Empowering Trust: The Role of Adaptable Design in AI Systems

A key question addressed in our CogSIMA 2025 contribution was what builds trust in AI systems. The results highlight adaptability as a central element.

In our experiment we compared decision support systems that were either adaptable or fixed. People who were able to adjust the system to their needs felt more in control and saw it as fairer and more transparent. This also led to greater trust, which increased their willingness to use the system. 

The findings underline the relevance of user experience in the design of interactive AI systems. It's not just about making AI smarter or more explainable; it's about making it more responsive to the people using it.

Thanks to Verena Staab, Ilka Hein, Maike Ramrath, Lea Schlüter, Alina Stuckstätte, Maximilian Hohn, Philipp Sieberg, and Magnus Liebherr for their collaboration on this work. 


Collaborations

OUR PARTNERS