

Page 1: THE CURRENT LANDSCAPE:

Eric Bailey, Peter Worth

ED 229 C Seminar in Learning, Design and Technology
Stanford University School of Education
January 21, 2004

THE CURRENT LANDSCAPE:
Media Streams: a video annotation and editing system
Designed by Marc Davis, Brian Williams, and Golan Levin
1991–1997, Machine Understanding Group of the MIT Media Laboratory and Interval Research Corporation

Page 2: THE CURRENT LANDSCAPE:

THE CURRENT LANDSCAPE:
Media Streams is a system for annotating, retrieving, repurposing, and automatically assembling digital video.

Page 3: THE CURRENT LANDSCAPE:

WHAT ARE THEY SOLVING?

Page 4: THE CURRENT LANDSCAPE:

WHAT ARE THEY SOLVING?
The problem of finding video information in a large and growing archive: examining how to annotate and describe video data in a way that is comprehensible to all people and searchable by computer.

Page 5: THE CURRENT LANDSCAPE:

WHAT ARE THEY SOLVING?
Their intent is to give access to video materials to users who want to repurpose or recompose them.

Page 6: THE CURRENT LANDSCAPE:

WHAT ARE THEY SOLVING?
Current annotation is limited by the choice of keywords, resulting in missed search opportunities. Keywords are often not specific enough for searching video.

Page 7: THE CURRENT LANDSCAPE:

WHAT ARE THEY SOLVING?
Current annotation has no universal guidelines. Annotation “language” is personal and varies from person to person.

Page 8: THE CURRENT LANDSCAPE:

HOW ARE THEY SOLVING IT?

Page 9: THE CURRENT LANDSCAPE:

HOW ARE THEY SOLVING IT?
Video-annotation software that allows multiple annotations of the same clip, including annotations that overlap one another to varying degrees.
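
Media Streams itself is not open source, but the core idea of multiple, overlapping annotations on one clip can be sketched as interval data. Everything below (the `Annotation` class, `annotations_at`, the labels) is an illustrative assumption, not the system's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One annotation over a span of frames; spans may overlap freely."""
    start: int   # first frame the annotation covers
    end: int     # last frame covered (inclusive)
    label: str

def annotations_at(annotations, frame):
    """Return every annotation whose span covers the given frame."""
    return [a for a in annotations if a.start <= frame <= a.end]

# Two annotations of the same clip that partially overlap:
clip = [
    Annotation(0, 120, "character: woman"),
    Annotation(60, 200, "action: walking"),
]

print([a.label for a in annotations_at(clip, 90)])
# → ['character: woman', 'action: walking']
```

Because spans are independent intervals rather than a single segmentation, any number of annotations can cover the same frames.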

Page 10: THE CURRENT LANDSCAPE:

HOW ARE THEY SOLVING IT?
To make search and annotation more reliable, Media Streams uses a system of visual icons that represent what is depicted in the video clip. It can both read and write computer-generated icons.

Page 11: THE CURRENT LANDSCAPE:

“…Media Timeline, on which iconic annotations of video are temporally indexed. Each stream in the Media Timeline contains annotations about a unique aspect of video content, such as settings, characters, objects, actions, camera motions, etc.”

Golan Levin, Principal Designer of Icon Visual Language
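
The stream structure Levin describes can be sketched as parallel, time-indexed tracks, one per aspect of content. The stream names and the `slice_at` helper below are hypothetical illustrations of the idea, not the actual Media Timeline design:

```python
# Parallel annotation streams over one clip; each stream holds
# (start, end, label) spans in seconds for a single aspect of content.
timeline = {
    "settings":      [(0, 45, "city street")],
    "characters":    [(0, 30, "adult male"), (20, 45, "adult female")],
    "camera_motion": [(0, 10, "pan left"), (10, 45, "static")],
}

def slice_at(timeline, t):
    """What every stream says about the video at time t."""
    return {
        stream: [label for (start, end, label) in spans if start <= t < end]
        for stream, spans in timeline.items()
    }

print(slice_at(timeline, 25))
# → {'settings': ['city street'],
#    'characters': ['adult male', 'adult female'],
#    'camera_motion': ['static']}
```

Separating aspects into their own streams means a query about, say, camera motion never has to disentangle it from character or setting annotations.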

Page 12: THE CURRENT LANDSCAPE:

HOW ARE THEY SOLVING IT?
Users select an individual icon or a combination of icons (a compound) to annotate a clip. Icons represent what is visually depicted in a scene, not the meaning of a scene.

Page 13: THE CURRENT LANDSCAPE:

“…Icon Space, an atemporal, hierarchically-indexed "dictionary" of iconic descriptors. The Icon Space incorporates utilities for icon construction and search.”

Golan Levin, Principal Designer of Icon Visual Language
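
The atemporal, hierarchically indexed "dictionary" can be sketched as a nested index walked by category path. The example categories and the `find_descriptors` helper are illustrative assumptions; the real Icon Space is iconic rather than textual:

```python
# A hierarchical descriptor "dictionary": categories nest inside
# categories, with concrete descriptors at the leaves.
icon_space = {
    "time": {
        "time_of_day": ["dawn", "noon", "dusk", "night"],
    },
    "characters": {
        "body_types": ["adult", "child"],
        "occupations": ["doctor", "farmer"],
    },
}

def find_descriptors(space, path):
    """Follow a category path (e.g. ['characters', 'occupations'])
    down the hierarchy and return the descriptors found there."""
    node = space
    for key in path:
        node = node[key]
    return node

print(find_descriptors(icon_space, ["characters", "occupations"]))
# → ['doctor', 'farmer']
```

The hierarchy is what makes annotation searchable: two annotators who pick different leaves under the same category can still be matched at the category level.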

Page 14: THE CURRENT LANDSCAPE:

HOW ARE THEY SOLVING IT?
Searching the archive lets users set parameters for what they want to appear in a clip. Media Streams locates existing footage matching that description, and can then recompose existing shots to create a clip meeting the desired parameters.
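
That parameterized search can be sketched as matching a set of required descriptors against each shot's annotations; matching shots become candidates for recomposition. All shot names and descriptors below are made up for illustration:

```python
# A toy annotated archive: each shot carries a set of iconic descriptors.
archive = [
    {"shot": "A", "icons": {"dog", "park", "daytime"}},
    {"shot": "B", "icons": {"dog", "beach", "daytime"}},
    {"shot": "C", "icons": {"cat", "park", "night"}},
]

def search(archive, required):
    """Return shots whose annotations include every required descriptor
    (set subset test: required <= icons)."""
    return [s["shot"] for s in archive if required <= s["icons"]]

print(search(archive, {"dog", "daytime"}))
# → ['A', 'B']
```

A recomposition step would then sequence the returned shots into a new clip; the search itself only narrows the archive to footage that depicts everything the query demands.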

Page 15: THE CURRENT LANDSCAPE:

“…cross-section of the icon hierarchies, including: Historic Period, Calendar Time, Time of Day, Functional Building Space, Topological Relationships, … Character Body Types, Occupations, Tools, Food, Animals, Weather, and a variety of other objects and cinematographic relationships.”

Golan Levin, Principal Designer of Icon Visual Language

Page 16: THE CURRENT LANDSCAPE:

WHAT KEY ISSUES WERE FOUND?

Page 17: THE CURRENT LANDSCAPE:

WHAT KEY ISSUES WERE FOUND?
Syntax and semantics: the meaning of video information is constructed from its relationship to the shots surrounding it. Annotating by physical description is effective; annotating by more complex meaning does not hold up.

Page 18: THE CURRENT LANDSCAPE:

HOW DOES IT RELATE?

Page 19: THE CURRENT LANDSCAPE:

HOW DOES IT RELATE?
Although Media Streams is solving a different problem than ours, the issue of examining and creating meaning from video texts is significant.

Page 20: THE CURRENT LANDSCAPE:

HOW DOES IT RELATE?
Media Timeline visually displays an entire clip. This is an interesting model for visualizing a film: users can quickly scan visually to find a relevant point instead of remembering timecode.

Page 21: THE CURRENT LANDSCAPE:

HOW DOES IT RELATE?
Media Timeline makes viewable the context of single clips within an entire work. Users can mark clips while recognizing and understanding their syntax.

Page 23: THE CURRENT LANDSCAPE:

HOW DOES IT RELATE?
Icons propose an interesting way of quickly flagging types of clips and could add meaning to the process of marking them. They could also be useful for analytical tasks such as identifying elements of film grammar.

Page 24: THE CURRENT LANDSCAPE:

REFERENCES:
Davis, Marc. "Media Streams: An Iconic Visual Language for Video Representation." In Readings in Human-Computer Interaction: Toward the Year 2000, ed. Ronald M. Baecker, Jonathan Grudin, William A. S. Buxton, and Saul Greenberg, 854-866. 2nd ed. San Francisco: Morgan Kaufmann Publishers, Inc., 1995.

http://acg.media.mit.edu/people/golan/mediastreams/

Page 25: THE CURRENT LANDSCAPE:

FIN