
MediaDiver: Viewing and Annotating Multi-view Video

CHI '11 Extended Abstracts on Human Factors in Computing Systems, pages 1141–1146. New York, NY, USA, ACM, (2011)
DOI: 10.1145/1979742.1979711

Abstract

We present our novel rich media interface, MediaDiver, demonstrating new interaction techniques for viewing and annotating multi-view video. The demonstration allows attendees to experience novel moving-target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality-of-view analysis to switch viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content, and advanced context-sensitive transport and timeline functions. As users have become increasingly sophisticated at navigating and viewing hyper-documents, they transfer those expectations to new media. Our proposal is a demonstration of the technology required to meet these expectations for video. Users will thus be able to click directly on objects in the video to link to more information or other video, easily change camera views, and mark up the video with their own content. The applications of this technology stretch from home video management to broadcast-quality media production, which may be consumed on both desktop and mobile platforms.
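The abstract mentions automated quality-of-view analysis for switching viewpoints to follow a target, but gives no details of how the score is computed. The sketch below is a minimal illustration of that idea under assumed heuristics, not the paper's actual method: the View fields, the quality_of_view weighting, and the switch_margin hysteresis are all hypothetical choices made for this example.

from dataclasses import dataclass

@dataclass
class View:
    """Per-frame information about a tracked target in one camera view (illustrative only)."""
    camera_id: str
    target_visible: bool   # is the target inside this camera's field of view?
    target_area: float     # fraction of the frame covered by the target (0..1)
    center_offset: float   # normalized distance of the target from the frame center (0..1)

def quality_of_view(view: View) -> float:
    """Assumed heuristic: prefer views where the target is visible, large, and well centered."""
    if not view.target_visible:
        return 0.0
    return 0.7 * view.target_area + 0.3 * (1.0 - view.center_offset)

def select_view(views: list[View], current: str, switch_margin: float = 0.1) -> str:
    """Pick the camera to display; a candidate must beat the current view by
    switch_margin so the display does not flicker between near-equal views."""
    scores = {v.camera_id: quality_of_view(v) for v in views}
    best = max(scores, key=scores.get)
    if best != current and scores[best] < scores.get(current, 0.0) + switch_margin:
        return current
    return best

if __name__ == "__main__":
    frame = [
        View("cam-A", True, 0.10, 0.60),
        View("cam-B", True, 0.25, 0.20),
        View("cam-C", False, 0.0, 0.0),
    ]
    print(select_view(frame, current="cam-A"))  # -> cam-B

The hysteresis margin reflects a general design concern for viewpoint switching (avoiding rapid back-and-forth cuts); whether MediaDiver uses such a margin is not stated in the abstract.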
