Abstract
Videos have become a prevalent medium for storytellers to engage viewers. To further enhance narration, visualizations can be integrated into videos to present data-driven insights. However, manually crafting such data-driven videos is difficult and time-consuming. Thus, we present SmartShots, a system that facilitates the automatic integration of in-video visualizations. Specifically, we propose a computational framework that integrates non-verbal video clips, images, a melody, and a data table to create a video with embedded data visualizations. The system automatically translates the multimedia materials into shots and then combines the shots into a compelling video. In addition, we develop a set of post-editing interactions that incorporate users' design knowledge and help them re-edit the automatically generated videos.
| Original language | English |
| --- | --- |
| Pages | 4509-4511 |
| Number of pages | 3 |
| DOIs | |
| Publication status | Published - 12 Oct 2020 |
| Event | 28th ACM International Conference on Multimedia, MM 2020 - Virtual, Online, United States; 12 Oct 2020 → 16 Oct 2020 |
Conference
| Conference | 28th ACM International Conference on Multimedia, MM 2020 |
| --- | --- |
| Country/Territory | United States |
| City | Virtual, Online |
| Period | 12/10/20 → 16/10/20 |
Keywords
- data-driven videos
- storytelling
- visualization