Program
9:00-9:15 | Opening Session
9:15-10:30 | Elevator Pitches
10:30-11:30 | Poster Session + Coffee Break
11:30-12:30 | Invited Talk
12:30-13:30 | Lunch
13:30-14:15 | Plenary Discussion
14:15-15:30 | Elevator Pitches
15:30-16:30 | Poster Session + Coffee Break
16:30-17:30 | Career Panel
11:30-12:30: INVITED TALK
Virginia Dignum (Umeå University) [Website]
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also a senior advisor on AI policy to the Wallenberg Foundations and chair of the ACM's Technology Policy Council. She received her PhD in Artificial Intelligence from Utrecht University in 2004, was appointed Wallenberg Scholar in 2024, and is a member of the Royal Swedish Academy of Engineering Sciences (IVA), a Fellow of the European Artificial Intelligence Association (EURAI), and a Fellow of ELLIS (the European Laboratory for Learning and Intelligent Systems). She is a member of the United Nations Advisory Body on AI, the Global Partnership on AI (GPAI), UNESCO's expert group on the implementation of AI recommendations, and the OECD's expert group on AI; she is also the founder of ALLAI, the Dutch AI Alliance, and co-chair of the WEF's Global Future Council on AI. She was a member of the EU's High-Level Expert Group on Artificial Intelligence and led UNICEF's guidance on AI and children. Her new book, "The AI Paradox", is planned for publication in late 2024.
- Title: Beyond the hype: Responsible AI
- Abstract: AI can extend human capabilities but requires addressing challenges in education, jobs, and biases. Taking a responsible approach involves understanding AI's nature, design choices, societal role, and ethical considerations. Recent AI developments, including foundational models, transformer models, generative models, and large language models (LLMs), raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who develop and deploy AI systems. In all these developments, it is vital to understand that AI is not an autonomous entity but rather dependent on human responsibility and decision-making.
In this talk, I will further discuss the need for a responsible approach to AI that emphasizes trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but they require understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility means designing AI systems with values in mind and implementing regulations, governance, monitoring, agreements, and norms. Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation. Responsible AI is not an option but the only possible way forward in AI.
13:30-14:15: PLENARY DISCUSSION [Slides]
Marija Slavkovik (University of Bergen) [Website]
- Topic: Interdisciplinary collaboration: How does one go about it?
16:30-17:30: CAREER PANEL