A Visual Exploration of the Intellectual Landscape of Information Visualization: Hindsight, Insight, and Foresight
Professor Chaomei Chen
College of Computing and Informatics, Drexel University, USA
Information visualization aims to reveal insightful and inspirational patterns in abstract and complex information systems. Knowledge discovery and keeping abreast of the state of knowledge are, by their very nature, among the most abstract and complex of such activities. The growing adoption of visual analytics for assessing knowledge domains across diverse disciplines underscores a mutually beneficial relationship between information visualization as a versatile enabling technology and a wide variety of knowledge domains as rich sources of inspiration. I will highlight key historical trajectories, emerging trends and patterns shaping information visualization today, and early signs of potentially promising but currently uncharted areas. Insights gained from this intellectual exploration will provide an interdisciplinary roadmap for developing and maintaining enduring, productive, and impactful career paths.
Bio-Sketch
Dr. Chaomei Chen is a professor of information science at the College of Computing and Informatics at Drexel University in the USA. His expertise is in the visual analytic reasoning and assessment of critical information concerning the structure and dynamics of complex adaptive systems, including information visualization and scientometrics. Dr. Chen is the Editor-in-Chief of Information Visualization and the Field Chief Editor of Frontiers in Research Metrics and Analytics. He is the author of a series of books on visualizing the evolution of scientific knowledge as well as strategies and techniques for critical thinking, creativity, and discovery, including Representing Scientific Knowledge: The Role of Uncertainty (Springer, 2017), The Fitness of Information: Quantitative Assessments of Critical Information (Wiley, 2014), Turning Points: The Nature of Creativity (Springer, 2011), Information Visualization: Beyond the Horizon (Springer, 2004, 2006), and Mapping Scientific Frontiers: The Quest for Knowledge Visualization (Springer, 2003, 2013). He is the creator of the widely used visual analytics software CiteSpace for visualizing and analyzing structural and temporal patterns in scientific literature.
+++++++++++++++++++++++++++++++++++++++++
Visualization meets Immersive Computing: enabling more informed decisions
Professor Beatriz Sousa Santos, Department of Electronics, Telecommunications and Informatics, University of Aveiro, Portugal
In the ever-evolving landscape of science and technology, the convergence of Immersive Computing and Visualization opens possibilities for experiences that used to be in the realm of science fiction, supporting more immersive, collaborative, and engaging human data exploration and analysis, and ultimately enabling more insights and better decisions. Immersive Analytics has thrived in recent years, developing a growing body of knowledge poised to yield productive and effective applications, yet it has not achieved widespread adoption. Research shows that Virtual and Augmented Reality can bring enhanced sensory perception and embodied interaction to Visualization, but much remains to be devised and studied, from specific interaction and visualization methods and theory to aspects that come with maturity, such as ethics and societal impact. This talk will address fundamental concepts, challenges, and opportunities of Immersive Analytics and focus on Augmented Reality-based Situated Visualization, presenting examples in different scenarios.
Bio-Sketch
Beatriz Sousa Santos is an Associate Professor in the Department of Electronics, Telecommunications and Informatics (DETI/UA) at the University of Aveiro, Portugal, and a researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA). Her main research interests are currently Data and Information Visualization and Virtual and Augmented Reality.
+++++++++++++++++++++++++++++++++++++++++
TWIZ: The Multimodal Conversational Wizard of Complex Manual Tasks
Professor Joao Magalhaes
Department of Computer Science, Universidade NOVA de Lisboa, Portugal
Conversational agents have become an integral part of our daily routines, aiding humans in various tasks. Helping users with real-world manual tasks is a complex and challenging paradigm: it is necessary to leverage multiple information sources, provide several multimodal stimuli, and correctly ground the conversation in a helpful and robust manner. In this talk I will describe TWIZ, a conversational AI assistant that is helpful, multimodal, knowledgeable, and engaging, designed to guide users towards the successful completion of complex manual tasks. To achieve this, we focused our efforts on two main research questions: (1) Humanly-Shaped Conversations, providing information in a knowledgeable way; and (2) Multimodal Stimulus, making use of various modalities, including voice, images, and videos, to improve the robustness of the interaction in unseen scenarios. TWIZ is an assistant capable of supporting a wide range of unseen tasks; it leverages Generative AI methods to deliver several innovative features, such as creative cooking, video navigation through voice, and the robust TWIZ-LLM, a Large Language Model trained for dialoguing about complex manual tasks. TWIZ interacted with more than 150k users over six months, and its final version won first prize in the Amazon Alexa TaskBot Challenge.
Bio-Sketch
João Magalhães is a Full Professor in the Department of Computer Science at Universidade Nova de Lisboa. He holds a Ph.D. (2008) from Imperial College London, UK. His research aims to move vision-and-language AI closer to the way humans understand and communicate. He has made scientific contributions to the fields of multimedia search and summarization, multimodal conversational AI, data mining, and multimodal information representation. He has coordinated and participated in several research projects (national, EU-FP7, and H2020) in which he pursues robust and generalizable methods across different domains. He is regularly involved in review panels, the organization of international conferences, and program committees. His work and that of his group have been awarded, or nominated for, several honours and distinctions, most notably first prize in the Amazon Alexa TaskBot Challenge 2022. He was General Chair of ECIR 2020 and ACM Multimedia 2022, Honorary Chair of ACM Multimedia Asia 2021, and will be PC Chair of ACM Multimedia 2026.
+++++++++++++++++++++++++++++++++++++++++
Supporting remote guidance through augmented reality-based sharing of visual communication cues
Professor Tony Huang, The TD School, University of Technology Sydney, Australia
Many real-world scenarios require a remote expert to guide a local user in performing physical tasks, such as remote machine maintenance. Theories and systems have been developed to support this type of collaboration. This support is often provided by adding visual communication cues, including hand gestures, to a shared visual space. In these systems, hand gestures are shared in different formats, such as raw hands, projected hands, digital representations of gestures, and sketches. However, the effects of combining these gesturing formats have not been fully explored and understood. We have therefore developed a series of systems to meet the needs of different real-world working scenarios using emerging and wearable technologies. In this talk, I will introduce some innovative techniques and systems that we designed, developed, and evaluated to support remote guidance through augmented reality-based sharing of visual communication cues.
Bio-Sketch
Prof Tony Huang is currently the Director of Data Science and Innovation courses at the UTS TD School (formerly the Faculty of Transdisciplinary Innovation) and an Executive Committee member of the UTS Visualisation Institute. He is a data visualisation researcher with expertise in visual analytics and human-computer (data) interaction. He designs visualisations, user interfaces, and interaction methods that combine data values with human intelligence for effective data exploration, communication, and decision making.