EIaiV

Explainable and Interpretable AI through Visualization

Artificial Intelligence (AI) has become integral to decision-making across various domains, including healthcare, finance, and autonomous systems. However, the increasing complexity of AI models, particularly deep learning networks, has raised concerns about transparency, accountability, and interpretability. To build trust and foster responsible AI deployment, visualization techniques play a crucial role in making AI models understandable to both experts and non-experts.

This conference invites high-quality research contributions that explore novel visualization techniques to enhance AI interpretability, improve model transparency, and bridge the gap between AI decision-making and human understanding. We seek papers that present theoretical advances, practical applications, and case studies demonstrating how visualization can aid explainable AI (XAI).

Topics of Interest

We welcome contributions on, but not limited to, the following topics:

1. Techniques for Making AI Models Understandable to Non-Experts

  • Methods to simplify complex AI model decisions using visualization
  • Designing user-friendly interfaces for AI interpretation
  • Interactive tools to improve AI model comprehension for diverse audiences

2. Visualizing Model Decisions for Accountability and Transparency

  • Visualization techniques to illustrate model biases and fairness
  • Explainability tools for regulatory compliance and decision audits
  • Case studies on AI interpretability in high-stakes applications (e.g., healthcare, legal, finance)

3. Overcoming Challenges in Visualizing Complex Neural Networks

  • Novel approaches for rendering high-dimensional data comprehensibly
  • Comparison of different visualization techniques for neural networks
  • Tools and frameworks to enhance interpretability in deep learning

4. Human-in-the-Loop Approaches to Enhance AI Interpretability

  • Methods for integrating expert feedback into AI decision-making
  • Evaluating user trust and cognitive load in AI-augmented systems
  • AI visualization in collaborative and assistive decision-making processes

For submission guidelines, visit the submission page.

For general enquiries and submissions, contact the Conference Co-ordinator.

For Symposium-specific enquiries, reach out to:

Dr Alice Dong

University of Technology Sydney

Email: Xiaodan.Dong (@) uts.edu.au