Accessible Reporting of Spatiotemporal Geographic Information Leveraging Generated Text and Visualization (vgiReports)

Following the idea that a public good should also serve the public, our goal is to make volunteered geographic data and derived analysis results accessible to a wide audience.

  • Previous research has mostly focused on professional and expert users; this project instead investigates self-explaining representations of spatiotemporal geographic information.
  • Textual descriptions and visualizations can complement each other to provide self-contained explanations and information-rich representations.
  • Different types of spatiotemporal data require new summarization and reporting techniques that are applicable across application areas.


Research Highlights


The Interplay of Text and Visualization

In a detailed analysis of existing journalistic examples, we have focused on the interplay between textual narration and visualizations in data-driven stories, with a particular emphasis on geographic aspects. Through two qualitative studies [1, 4], we have identified categories of textual narrative and analyzed how they link to visualizations. We have also investigated the different ways in which visualizations show data and support the textual narrative, as well as the strategies used to weave visualizations and text into a coherent story. Combining textual and visual descriptions effectively can increase user engagement and improve users' understanding of the data. By studying high-quality data-driven stories from journalistic outlets, we have identified best practices that can guide designers and journalists in creating effective data-driven stories.


Authoring Interactive Reports

Creating data-driven stories involves analyzing data and presenting it in a visually appealing way. However, most content management systems do not support integrating textual and visual content into interactive documents. Various authoring tools have been developed to fill this gap, but they do not directly address the creation of explicit and interactive links between text and visualizations. To address this, we developed Kori [4], which provides an easy and efficient way to create meaningful links between text and visualizations. The system supports both manual and automatic creation of links, and a user study indicated that participants found the interface easy to use and were able to construct meaningful references.

The user interface of Kori consists of a chart gallery (1) and an editing interface (2). It supports manual creation of links through simple interactions (3). Users can choose the highlighting options and change their properties (4).
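
To illustrate how automatic link suggestion might work, the following Python sketch matches data labels mentioned in a sentence against the underlying chart data; the Link structure and the matching heuristic are simplified assumptions for illustration, not Kori's actual implementation.

    import re
    from dataclasses import dataclass

    @dataclass
    class Link:
        """A reference connecting a phrase in the text to marks in a chart."""
        phrase: str      # text span that refers to the chart
        chart_id: str    # identifier of the referenced chart
        selection: dict  # declarative description of the marks to highlight

    def suggest_links(sentence, chart_data, chart_id):
        """Suggest text-chart links by matching data labels mentioned in a sentence."""
        links = []
        for row in chart_data:
            label = str(row["label"])
            # A label mentioned verbatim in the sentence is a link candidate.
            if re.search(rf"\b{re.escape(label)}\b", sentence, re.IGNORECASE):
                links.append(Link(phrase=label, chart_id=chart_id,
                                  selection={"field": "label", "equals": label}))
        return links

    # Hypothetical example: link country names in a sentence to bars of a chart.
    data = [{"label": "Germany", "value": 83}, {"label": "France", "value": 67}]
    print(suggest_links("Germany has the largest population.", data, "chart-1"))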


Interactive Audio Guides in Virtual Reality

Virtual reality allows users to interact with digital environments in an immersive and lifelike way. One challenge in using virtual reality for data-driven storytelling is conveying information to users in a way that is engaging and immersive without being overwhelming. To address this challenge, we propose an approach called Talking Realities [3] that combines data visualizations with automatically generated audio narratives. It allows users to explore data in an immersive way while receiving guidance through an audio narrative that adapts to their interactions with the visualization. We introduce three modes of exploration, namely guided tour, guided exploration, and free exploration, to cater to different user preferences. We tested the approach with different immersive visualizations, such as multivariate statistical data and air traffic data projected onto a globe.

Scenes and audio explanations (here, transcribed) from our prototype implementing the Talking Realities approach for air traffic data. (Left) A description of the aggregated intercontinental flights for one day. (Right) Scenes reporting the longest flight from an airport and the most flights to any other airport.
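
As a sketch of how such an adaptive narrative could be driven, the following Python snippet selects the next utterance from text templates depending on the exploration mode and the user's current interaction; the mode names follow our approach, while the templates, event format, and flight figures are hypothetical.

    # Hypothetical sketch: template-based narration that adapts to the
    # exploration mode and the user's interaction (not the actual system).
    TEMPLATES = {
        "overview": "On {date}, {total} intercontinental flights were recorded.",
        "airport": "{airport} served {n_flights} flights; the longest went to {farthest}.",
    }

    def narrate(event, mode, data):
        """Return the next utterance of the audio guide, or None to stay silent."""
        if mode == "guided tour":
            # Follows a fixed script regardless of user interaction.
            return TEMPLATES["overview"].format(**data["summary"])
        if mode == "guided exploration" and event["type"] == "select_airport":
            # Reacts to what the user currently selects.
            return TEMPLATES["airport"].format(**data["airports"][event["id"]])
        if mode == "free exploration" and event["type"] == "ask":
            # Speaks only when explicitly asked.
            return TEMPLATES["airport"].format(**data["airports"][event["id"]])
        return None

    utterance = narrate(
        {"type": "select_airport", "id": "FRA"},
        "guided exploration",
        {"summary": {"date": "July 1, 2019", "total": 1843},
         "airports": {"FRA": {"airport": "Frankfurt", "n_flights": 712,
                              "farthest": "Buenos Aires"}}},
    )
    print(utterance)  # Frankfurt served 712 flights; the longest went to Buenos Aires.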


A Chatbot Interface Providing Visual and Textual Answers

In a collaboration with the project WorldKG [2], we developed a chatbot interface called VisKonnect that allows users to explore relationships among historical public figures by asking questions. The chatbot uses a rule-based approach to understand the intent of a question and extract meaningful entities, and it formulates a query to pull relevant data from an event knowledge graph. The resulting data is then visualized in multiple linked visualizations, with accompanying textual explanations that aim to answer the user's question. We believe that using chatbots to make first contact with the data can be a good starting point for data analysis and visualization, but we also need to be cautious not to raise false expectations or present misleading replies.

VisKonnect answers user questions with a mix of textual replies (left) and explorable visualizations (right). The cutout of the visualization shows a timeline for the two identified scientists; annotations were placed manually to highlight events that users might explore further.
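
The following Python sketch illustrates the flavor of such rule-based question parsing: regular-expression patterns map a question to an intent and entity slots, which would then parameterize a query against the event knowledge graph; the patterns and intent names are illustrative assumptions, not VisKonnect's actual rules.

    import re

    # Illustrative intent patterns (assumed, not the system's actual rules).
    INTENT_PATTERNS = [
        ("meeting", re.compile(r"did (?P<a>[\w\s.]+?) ever meet (?P<b>[\w\s.]+?)\??$", re.I)),
        ("relationship", re.compile(r"how are (?P<a>[\w\s.]+?) and (?P<b>[\w\s.]+?) (?:related|connected)\??$", re.I)),
    ]

    def parse_question(question):
        """Return (intent, entities) for a question, or (None, {}) if no rule fires."""
        for intent, pattern in INTENT_PATTERNS:
            match = pattern.search(question.strip())
            if match:
                # The extracted names would next be resolved to entities in the
                # event knowledge graph to retrieve their shared events.
                return intent, {k: v.strip() for k, v in match.groupdict().items()}
        return None, {}

    print(parse_question("Did Albert Einstein ever meet Marie Curie?"))
    # -> ('meeting', {'a': 'Albert Einstein', 'b': 'Marie Curie'})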


Summary and Discussion

In our research, we have investigated how geographic data and related information can be described and linked in both textual and visual representations. We have provided authoring support for creating data-driven stories as integrated reports, allowing authors to add links manually and to receive automatic link recommendations based on an analysis of the data-driven text. Our reporting solutions have demonstrated the flexibility and broad applicability of the approach: automatically generated descriptions of statistical maps, audio guides in virtual reality, and natural-language interfaces to knowledge graphs that respond with textual and visual data representations. We aim not only to guide users through data analysis insights but also to invite them to explore the data in depth. Our research emphasizes that citizen participation in research is not one-directional: reporting results back and providing options to explore the data supports an even higher level of participation.


Publications

  1. Latif, S., Chen, S., & Beck, F. (2021). A Deeper Understanding of Visualization-Text Interplay in Geographic Data-driven Stories. Computer Graphics Forum, 40(3), 311–322. DOI: 10.1111/cgf.14309
  2. Latif, S., Agarwal, S., Gottschalk, S., Chrosch, C., Feit, F., Jahn, J., Braun, T., Tchenko, Y. C., Demidova, E., & Beck, F. (2021). Visually Connecting Historical Figures Through Event Knowledge Graphs. 2021 IEEE Visualization Conference (VIS), 156–160. DOI: 10.1109/VIS49827.2021.9623313
  3. Latif, S., Tarner, H., & Beck, F. (2022). Talking Realities: Audio Guides in Virtual Reality Visualizations. IEEE Computer Graphics and Applications, 42(1), 73–83. DOI: 10.1109/MCG.2021.3058129
  4. Latif, S., Zhou, Z., Kim, Y., Beck, F., & Kim, N. W. (2022). Kori: Interactive Synthesis of Text and Charts in Data Documents. IEEE Transactions on Visualization and Computer Graphics, 28(1), 184–194. DOI: 10.1109/TVCG.2021.3114802
  5. Agarwal, S., Latif, S., Rothweiler, A., & Beck, F. (2022). Visualizing the Evolution of Multi-agent Game-playing Behaviors. EuroVis 2022 – Posters, 23–25. DOI: 10.2312/evp.20221111