
JAHCAI Lab

The JAHCAI Lab (Jagiellonian Human-Centered Artificial Intelligence Lab) focuses on cutting-edge research concerning the influence of artificial intelligence (AI) development on human life. The research interests of the Lab include, on the one hand, the development of AI models, methods, and techniques and, on the other hand, building an interdisciplinary field for their extension and application in precisely defined areas outside of computer science. The AI methods we are interested in include machine decision support, knowledge modeling and symbolic knowledge extraction, data mining and intelligent analytics, methods of contextual inference and explanatory reasoning in AI systems (XAI), and methods of integrating symbolic knowledge with statistical models.

Areas of collaboration with other disciplines and applications include data and behavior analytics in video games and serious games, multimodal human-machine interaction, development of assistive systems for seniors and people with disabilities (Ambient Assisted Living, Quality of Life), affective computing, machine perception, assistive biomedical signal analysis, AI methods in Industry 5.0, law (including legal inference and argumentation systems, among others in the area of Responsible AI (RAI)), and finally digital humanities (DH).

JAHCAI consists of 5 working groups:

  1. Data Science in Games (DSG): research dedicated to data science for Human-Centered AI, including video games and new VR, AR, and MR applications, as well as computational narrative, connected with the development of emotive HCI/BCI interfaces based on multimodal fusion of physiological signals, voice, facial expressions, and contextual data.
  2. Human-Machine Interaction (HMI): research on HCI/BCI emotive interfaces, along with research on EEG analysis to assess cognitive processes, research on emotive and knowledge-based adaptation and personalization of UIs (recommendation systems, AfC, and ambient assisted living), and finally novel visual interfaces.
  3. Knowledge and Explanation (KNE): the KNE group works on developing eXplainable AI methods merging symbolic and ML models (in Industry 4.0, 5.0, and AfC), enhancing decision support systems, and increasing transparency and accountability; moreover, knowledge graph-based methods for reasoning in multiple domains are considered.
  4. Law, AI and Responsibility (LAR): the group focuses on researching methods of representing the legal system, legal knowledge, and legal reasoning, as well as the implications of this research for XAI and for assigning responsibility for AI systems (RAI).
  5. Machine Perception (MPR): MPR's research focuses on AI-based sensory substitution systems for blind persons, aiming to create visual information substitution systems which, thanks to AI, analyze the image taking into account the transformations made by the visual system of a sighted person. Moreover, the group works on the development of innovative multimodal analyses of biomedical and psychological data, as well as automatic methods for assessing human medical and ergonomic status. Another area of research is the modeling and analysis of perception within artificial systems using AI methods.

Members

  • Prof. dr hab. Grzegorz J. Nalepa (lab leader)
  • Dr Michał Araszkiewicz
  • Dr inż. Szymon Bobek
  • Dr inż. Krzysztof Kutt
  • Dr Jeremi K. Ochab
  • Dr hab. Paweł Węgrzyn
  • Prof. dr hab. Michał Wierzchoń

Projects

2023 - 2027 - PEER (hyPEr ExpeRt) EU HORIZON HORIZON-CL4-2022-HUMAN-02 (RIA) (Proposal 101120406)
2022 - 2024 - ODR e-Justice EU HORIZON JUST-2021-EJUSTICE Call (101046468)
2021 - 2024 - eXplainable Predictive Maintenance - CHIST-ERA 2019 XAI