Workshops Proceedings

Artificial Intelligence. ECAI 2023 International Workshops
Duration of free access: January 29, 2024 – February 27, 2024

ECAI 2023, Part I: CCIS 1947
ECAI 2023, Part II: CCIS 1948


Workshops Schedule

All tutorials and workshops are held on Saturday the 30th of September and on Sunday the 1st of October at the campus of the Jagiellonian University. 
 
All workshops follow the same session grid:

Session 1       9:00 - 10:30
Coffee break   10:30 - 11:00
Session 2      11:00 - 12:30
Lunch          12:30 - 13:30
Session 3      13:30 - 15:00
Coffee break   15:00 - 15:30
Session 4      15:30 - 17:00

Day 1 (Saturday, 30.09)

EDAI (one day)                 WMI 0006
EGAI (half day, afternoon)     WMI 0086
AI4S (two days)                WMI 0094
RAAIT (one day)                WMI 0174
XI-ML (one day)                WFAIS A-1-08
AI4AI (one day)                WMI 1094
NLPerspectives (one day)       WMI 1177
VALE (one day)                 WFAIS A-2-02
MRC (one day)                  WFAIS A-2-04
IMIS (half day)                WFAIS A-0-13
Doctoral Consortium            WFAIS A-1-06

Day 2 (Sunday, 1.10)

AREA (one day)                 WFAIS A-0-13
VeriLearn (one day)            WMI 0174
HYDRA (half day, afternoon)    WFAIS A-2-01
AI4S (two days)                WMI 0094
AEQUITAS (one day)             WMI 1094
MODeM (one day)                WMI 1177
IMIS (one day)                 WFAIS A-2-04
LAMAS&SR (one day)             WFAIS A-2-02
XAI^3 (one day)                WFAIS A-1-13
TACTFUL (half day, morning)    WFAIS A-2-01
QR (one day)                   WMI 0006
Awareness Inside (one day)     WFAIS A-1-03
SEDAMI (half day, morning)     WMI 0086
STAIRS                         WFAIS A-1-06

WFAIS - Faculty of Physics, Astronomy and Applied Computer Science 
WMI - Faculty of Mathematics and Computer Science


Agents and Robots for reliable Engineered Autonomy

Acronym: AREA

This workshop aims to bring together researchers from the autonomous agents and robotics communities, since combining knowledge from these two research areas may lead to innovative approaches that solve complex problems related to the verification and validation of autonomous robotic systems. Consequently, we encourage submissions that combine agents, robots, software engineering, and verification, but we also welcome papers focused on one of these areas, as long as their applicability to the other areas is clear.

WEBSITE



The Semantic Data Mining Workshop 


Acronym: SEDAMI

The theme of the SEDAMI workshop is semantic data mining. With this workshop we aim to gain insight into the current status of research in this area. We focus mainly on methods that incorporate and exploit semantic information and domain knowledge in machine learning and data mining, with an emphasis on domains and research questions that have not yet been deeply investigated, in order to improve standard machine learning solutions.
We encourage contributions on methods, techniques and applications that are either domain-specific or transversal to different application domains. In particular, we solicit contributions that use semantic data mining to provide or enhance interpretability, to introduce and update knowledge, and to enhance explanations.

WEBSITE




Verifying Learning AI Systems


Acronym: VeriLearn

As AI becomes more deeply integrated into our daily lives, there are increasing concerns about what AI systems will be able to do and what they should be allowed to do. This has raised a number of relevant questions about how to ensure that AI is used in a safe manner. For example, deployed AI models may have to conform to requirements (e.g., legal) or exhibit specific properties (e.g., fairness). That is, it is necessary to verify that a model complies with these requirements. Software engineering has long studied the problem of verifying whether software satisfies its expected requirements. A key open question is therefore how to combine software verification techniques with machine learning to provide strong guarantees about software that learns. Moreover, what are the boundaries of what can be verified, and how can and should system design be complemented by other mechanisms (e.g., procedural safeguards, accountability) to produce the desired properties? This workshop aims to bring together researchers interested in these questions.

WEBSITE



Evolutionary Dynamics in social, cooperative and hybrid AI  

Acronym: EDAI

Social, cooperative and hybrid AI have increasingly gained attention. Researchers imagine ecosystems wherein (artificial) intelligent agents and humans interact and make decisions, simultaneously addressing, either individually or collectively, a range of heterogeneous problems. To handle the complex dynamics inherent to such systems, and to ensure that results are globally beneficial, research into AI design and analysis methods is needed. This workshop aims to connect the traditions of single-agent AI research (reasoning, learning, ...) to the areas of (eco-)evolutionary dynamics, typically investigated in the context of complex systems. We hope to foster collaboration and cross-fertilisation of ideas among these communities to advance the areas of social, cooperative and hybrid AI.

WEBSITE




2nd International Workshop on HYbrid Models for Coupling Deductive and Inductive ReAsoning  

Acronym: HYDRA

The HYDRA workshop aims to combine deductive and inductive reasoning, two powerful methods in artificial intelligence, to create more robust and flexible AI systems that can reason effectively in various contexts. The workshop welcomes original research on theoretical frameworks, practical applications, and experimental results on hybrid deductive-inductive reasoning. Key challenges include integrating logical and statistical models, reasoning with incomplete or uncertain knowledge, and creating tools for explaining and interpreting hybrid models. Ethical and social implications are also addressed, including issues related to fairness, accountability, and transparency. The workshop invites theoretical and practical papers, summaries of recently published papers, and work-in-progress contributions. It aims to bring together the scientific community to discuss different scenarios for integrating and combining deductive and inductive systems.

WEBSITE



Artificial Intelligence for Sustainability 


Acronym: AI4S

Artificial Intelligence has a twofold effect: it produces various kinds of waste, but it also has the potential to help address sustainability goals and to produce smarter and greener hardware, software and applications. Sustainability requires solving complex problems, often with hybrid AI approaches. The objective of this multidisciplinary session is to gather both researchers and practitioners to discuss methodological, technical, organizational and environmental aspects of AI used for various facets of sustainability.

WEBSITE




Fairness and bias in AI  

Acronym: AEQUITAS

AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking to guide decisions in important societal spheres, including hiring decisions, university admissions, loan granting, medical diagnosis, and crime prediction. As our society is facing a dramatic increase in inequalities and intersectional discrimination, we need to prevent AI systems from amplifying this phenomenon and instead use them to mitigate it. As we use automated decision support systems to formalize, scale, and accelerate processes, we have the opportunity, as well as the duty, to revisit the existing processes for the better, avoiding perpetuating existing patterns of injustice by detecting, diagnosing and repairing them. For these systems to be trusted, domain experts and stakeholders need to trust their decisions. Despite the increased amount of work in this area in the last few years, we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI and which socio-technical options to combat bias and discrimination are both realistically possible and normatively justified. This workshop provides a forum for the exchange of ideas, presentation of results and preliminary work in all areas related to fairness and bias in AI.

WEBSITE



Multi-Objective Decision Making Workshop

Acronym: MODeM

In recent years there has been a growing awareness of the need for automated and assistive decision-making systems to move beyond single-objective formulations when dealing with complex real-world issues, which invariably involve multiple competing objectives.  The purpose of the Multi-Objective Decision Making Workshop is to promote collaboration and cross-fertilisation of ideas between researchers working in different areas of multi-objective decision-making in the context of intelligent systems, and to provide a forum for the dissemination of high-quality multi-objective decision-making research. Topics of interest include: multi-objective learning, planning and scheduling, multi-objective game theory, explainable MODeM, benchmarks and applications of MODeM, multi-objective metaheuristic optimisation for autonomous agents and multi-agent systems, preference elicitation and social choice in MODeM for autonomous agents and multi-agent systems.

WEBSITE




Workshop on Intelligent Management Information Systems

Acronym: IMIS

The ECAI 2023 Workshop on Intelligent Management Information Systems (IMIS 2023) is devoted to models, methods and approaches for developing artificial intelligence solutions that improve the functionality of information systems supporting management. We want to offer an opportunity for researchers and practitioners to identify new, promising interdisciplinary research directions as well as to publish recent advances in this area. The scope of IMIS includes, but is not limited to, the following topics: machine learning and deep learning for supporting business processes, agent-based systems in management, cognitive technologies for management, artificial intelligence for financial systems, artificial intelligence for cryptocurrencies, intelligent human-computer interfaces, intelligent personalization, knowledge management in business organizations, intelligent decision support, hybrid artificial intelligence and multiple criteria decision analysis methods.

WEBSITE



The Workshop on Ethics of Game Artificial Intelligence 

Acronym: EGAI 

AI technology is an enabler in games at every level, from game development to player experience to building and maintaining gamer communities. Using AI-powered procedural generation of content, recommender systems, or community moderation is very attractive, but calls for consideration of Responsible AI (RAI) principles to address possible risks of e.g., echo-chambers and lack of inclusivity.
The purpose of the Ethics of Game AI workshop is to bring together researchers from academia and industry who are interested in developing, applying and critiquing the ways in which principles of RAI have (or have not) been applied to AI in games. Our goal is to provide a forum where we can share, discuss, and propose ideas and solutions regarding the role of AI in all stages of game development, and the ethical implications that follow from it.

WEBSITE




Responsible Applied Artificial Intelligence


Acronym: RAAIT

Artificial Intelligence (AI) increasingly affects the way people work, live, and interact. It is applied in all kinds of domains, such as healthcare, education, media, the creative industry, retail, defense, transportation, law, and the financial sector. While AI has great potential to enhance well-being and help solve societal challenges, it also comes with severe risks of negative social and ethical consequences, such as discrimination, reinforcing existing biases, and generating a large carbon footprint. Over the past years, many high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been developed, and a lot of theoretical research on responsible AI has been done. However, this work often fails to address the challenges that arise when applying AI in practice. In this one-day workshop on Responsible Applied Artificial InTelligence (RAAIT), we aim to connect and share experiences with fellow researchers and AI practitioners who bring Responsible AI to practice. Contributions can address technological aspects of responsible applied AI, but may also include social or socio-technical factors, such as the design process (e.g., through co-creation) or the organizational governance needed to ensure responsible application of AI. Research in a broad range of application domains is encouraged. We invite case studies, position papers, and research papers that address elements of a Responsible Applied AI practice.

WEBSITE



International Workshop on Logical Aspects in Multi-Agent Systems and Strategic Reasoning

Acronym: LAMAS&SR

Logics and strategic reasoning play a central role in multi-agent systems (MAS). Logics can be used, for instance, to express agents' abilities, knowledge, and objectives. Strategic reasoning, on the other hand, refers to algorithmic methods that allow for developing good behaviour for agents of a system. At the intersection, we find logics that can express the existence of strategies or equilibria, and can be used to reason about them. The LAMAS&SR workshop merges two international workshops: LAMAS (Logical Aspects of Multi-Agent Systems), which focuses on all kinds of logical aspects of MAS from the perspectives of AI, computer science, and game theory, and SR (Strategic Reasoning), devoted to all aspects of strategic reasoning in formal methods and AI. Over the years the communities and research themes of both workshops got closer and closer, with a significant overlap in the participants and organisers of both events. For this reason, the next editions of LAMAS and SR will be unified under the same flag, formally joining the two communities.

Easychair: https://easychair.org/conferences/?conf=lamassr2023

WEBSITE




International Workshop on Explainable and Interpretable Machine Learning

Acronym: XI-ML

With the current scientific discourse on explainable AI (XAI), algorithmic transparency, interpretability, accountability and, finally, explainability of algorithmic models and decisions, this workshop on explainable and interpretable machine learning tackles these themes from the modeling and learning perspective; it targets interpretable methods and models that are able to explain themselves and their output. The workshop aims to provide an interdisciplinary forum to investigate fundamental issues in explainable and interpretable machine learning as well as to discuss recent advances, applications and challenges in this area.

WEBSITE



AI for AI education


Acronym: AI4AI

AI4AI aims to provide a platform for the exchange of ideas and experiences under the general theme of AI for education, specialising in university education in AI, but interpreting AI in a broad sense. This workshop held at ECAI 2023 brings together researchers involved in these diverse programs dedicated to investigating, developing and exploring AI techniques for AI and computer science education. We also wish to contribute towards forming a European community on the theme of AI for AI education, and to foster basic research as well as the development of intelligent assistance technology (e.g., intelligent tutoring systems) in a multi-disciplinary setting, in order to improve AI education by making use of AI technology itself: AI4AI. Besides researchers with a background in AI, we encourage interdisciplinary contributions from cognitive science and education technology.

WEBSITE




Joint workshops on XAI methods, challenges and applications 


Acronym: XAI^3

Welcome to the Joint workshops on XAI methods, challenges and applications (XAI^3), where we aim to discuss opportunities for a new generation of explainable AI (XAI) methods that are reliable, robust, and trustworthy. Explainability of AI models and systems is crucial for humans to trust and use intelligent systems, yet the utility of existing XAI methods in high-risk applications such as healthcare and industry has been severely limited. Our workshop will have three tracks: medical, industry, and future challenges, where we will explore the challenges and opportunities in creating useful XAI methods for medical applications, integrating explainability in highly automated industrial processes, and evaluating current and future XAI methods. We welcome contributions from researchers in academia and industry, primarily from a technical and application point of view, but also from an ethical and sociological perspective. Join us in discussing the latest developments in XAI and their practical applications at the 26th European Conference on Artificial Intelligence (ECAI 2023) in Kraków, Poland.

WEBSITE



Trustworthy AI for safe & secure traffic control in connected & autonomous vehicles


Acronym: TACTFUL

Connected and Autonomous Vehicle and System Technologies are already transforming the very way transport is perceived, mobility is serviced, travel eco-systems ‘behave’, and cities and societies as a whole operate. Expected benefits range from accident prevention, reduced traffic congestion and lessened greenhouse gas emissions to energy savings, improved surveillance, increased ease of use, and improved traffic management and control. However, these promising benefits are not without significant technological challenges. The TACTFUL workshop aims to provide a venue for approaches related to any aspect of autonomous driving and to the use of CAV/AV functionalities for traffic control, including driving algorithms, security vulnerabilities, exploit potential, and how to mitigate them by leveraging AI to increase the resilience and robustness of intelligent transport systems.

WEBSITE




2nd Workshop on Perspectivist Approaches to NLP (and Beyond)


Acronym: NLPerspectives

Until recently, the dominant paradigm in natural language processing and other areas of artificial intelligence has been to resolve observed label disagreement into a single “ground truth” or “gold standard” via aggregation, adjudication, or statistical means. However, in recent years, the field has increasingly focused on subjective tasks, such as abuse detection or quality estimation, in which multiple points of view may be equally valid.

The NLPerspectives workshop explores current and ongoing work on: the collection and labelling of non-aggregated datasets; and approaches to modelling and including perspectives, as well as evaluation and applications of multi-perspective Machine Learning models. 

WEBSITE



36th International Workshop on Qualitative Reasoning


Acronym: QR

The Qualitative Reasoning (QR) community is involved with the development and application of qualitative representations to understand the world from incomplete, imprecise, or uncertain data. Qualitative representations have been used to model natural systems (e.g., physics, biology, ecology, geology), social systems (e.g., economics, cultural decision-making), cognitive systems (e.g., conceptual learning, spatial reasoning, intelligent tutors, robotics), technical systems (e.g., manufacturing, robotics) and more. QR connects to several AI subfields commonly represented at AI conferences such as IJCAI, AAAI and ECAI. As QR strives to capture the everyday reasoning that comes naturally to humans, its methods contribute to explainable AI.

WEBSITE




Value Engineering in AI


Acronym: VALE 

With AI’s impact on our everyday lives becoming more and more tangible, the need for ethical and trustworthy AI has gained strong recognition by governments, industry, the general public, as well as academics. Ensuring AI is trustworthy and reliable requires ensuring that it fulfils human needs and respects human values. To achieve this, there is a need to develop software systems that reason about human values and norms, implement these values through norms, and ensure the alignment of behaviour with those values and norms. We argue that just as values guide our own morality, values can guide the morality of software agents and systems, bringing machine morality closer to reality. The result would be value-aware systems that take value-aligned decisions, interpret human behaviour in terms of values and enrich human reasoning by enhancing the human's value-awareness. VALE 2023 intends to bring together research on value engineering and foster in-depth discussions on the topic.

WEBSITE



Modelling and Representing Context


Acronym: MRC

Join us for the Fourteenth International Workshop on Modelling and Representing Context. As a fundamental topic in Artificial Intelligence, context plays a critical role in enabling AI systems to effectively integrate and utilize information, reason and communicate, and adapt to different scenarios and domains. With contextual AI, we can enhance the transparency, explainability, and interpretability of AI systems, process diverse data types, and achieve a better understanding of context and meaning, enabling more natural and rich interactions between humans and AI systems.

As an interdisciplinary topic, contextual AI has clear relations to linguistics and semiotics, cognitive science and psychology, mathematics and philosophy, as well as sociology and anthropology. The MRC workshop series is highly interactive and offers a platform for researchers and practitioners from different disciplines to exchange ideas and discuss the latest advancements in contextual AI. As context research is inherently transdisciplinary, we encourage a range of publication formats that may not be commonly seen in the AI community, including narrative reviews, ethnographic stories, and artistic creations, reflecting the diverse perspectives and approaches needed to fully understand the complexities of context. So, whether you're a researcher or artist, we invite you to share your unique insights and perspectives with the wider community through this workshop.

WEBSITE



Awareness Inside: Open Meeting of the EIC Pathfinder Challenge

Acronym: Awareness Inside

This is the first open meeting of the portfolio of projects funded by the EIC Pathfinder Challenge “Awareness Inside“. All are welcome.

Awareness and consciousness have been high on the Artificial Intelligence (AI) research agenda for decades. Progress has been difficult because it has been hard to agree on exactly what it means to be aware. Most researchers would agree, though, that we do not have any truly aware artificial system yet, that awareness is much more than sensorial sophistication, and that it is much more than any Artificial Intelligence as we know it. But what, then, would a user expect from a service or device that has ‘awareness inside’?

In this workshop we will present all eight “Awareness Inside” projects funded to explore this question. Join our meeting to see preliminary results, and debate the meaning of awareness.

WEBSITE


Honorary patronage

Jacek Majchrowski, Mayor of the City of Kraków


Co-organized and supported by

Jagiellonian University in Kraków
AGH University of Science and Technology
Krakow Technology Park


Silver Sponsors

Bronze Sponsors

Media Partners

Official Community Partner