Suppose you’ve recorded a one-hour lecture on video and taken digital photographs of each blackboard used during the presentation. Both the video and the photograph files carry metadata with time stamps: we know when each video frame and each photograph was captured. How should these files be blended into a browser-based environment for studying the ideas in the lecture? My colleague Dror Bar-Natan has prototyped a remarkable software platform that combines this data to support online courses and seminars.
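The time stamps themselves are ordinary metadata fields. As a sketch of the idea (the field format here is the EXIF convention for capture times; the specific value is illustrative, not from the actual lecture files), they can be parsed with nothing but the standard library:

```python
from datetime import datetime

# EXIF "DateTimeOriginal" values use this colon-separated date format.
EXIF_FORMAT = "%Y:%m:%d %H:%M:%S"

def parse_capture_time(raw: str) -> datetime:
    """Parse an EXIF-style capture-time string into a datetime."""
    return datetime.strptime(raw, EXIF_FORMAT)

# Example: a photograph captured mid-lecture.
photo_time = parse_capture_time("2013:05:21 14:32:05")
print(photo_time.isoformat())  # 2013-05-21T14:32:05
```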
The system is built atop MediaWiki (the software that runs Wikipedia) and correlates metadata, such as comments and photographs, in time with an embedded video stream. Comments can be attached to individual video frames, so remote audience members can participate in the seminar frame by frame. In this way the wClips system goes beyond playback: it empowers a remote audience of the future to contribute to a past presentation.
The system was recently used to supplement a paper Dror is co-authoring with Zsuzsanna Dancso in the wClips seminar. Each section of the paper was discussed in detail in a seminar lecture captured on video, and every blackboard in the seminar series was photographed. Time links beside the photographs jump the video stream to the moment when that blackboard image was created, and the relative sizes of the elements can be adjusted by sliding the red line. In the final version of the paper, each section will contain hyperlinks to the associated seminar presentation. Clicking on a photograph reveals another layer of comments highlighting the elements on that blackboard.
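Linking a photograph to a point in the video is then simple timestamp arithmetic: subtract the video's start time from the photograph's capture time. A minimal sketch, with illustrative timestamps:

```python
from datetime import datetime

def seek_seconds(video_start: datetime, photo_time: datetime) -> float:
    """Seconds into the video at which the photographed board was current."""
    return (photo_time - video_start).total_seconds()

video_start = datetime(2013, 5, 21, 14, 10, 0)  # time of the first video frame
photo_time = datetime(2013, 5, 21, 14, 32, 5)   # blackboard photograph
print(seek_seconds(video_start, photo_time))    # 1325.0, i.e. 22 min 5 s in
```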
The wClips system runs on a Linux server using open-source software, so it is inexpensive to deploy. Since it is based on MediaWiki, it has the potential to scale to a massive audience. Getting the video content out of the camera and into the online course system still involves some manual work, but the process could be streamlined with the right scripts.
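Such a script might, for instance, scan a directory of camera files and emit a time link for each photograph. This is a hypothetical sketch under stated assumptions: it uses file modification times in place of real metadata, and the `[[Clip@...]]` link syntax is invented for illustration, not actual wClips markup.

```python
from datetime import datetime
from pathlib import Path

def wiki_links(photo_dir: str, video_start: datetime) -> list[str]:
    """Emit one wiki-style time link per photograph, ordered by capture time."""
    # Use file modification time as a stand-in for the capture time stamp.
    photos = sorted(Path(photo_dir).glob("*.jpg"),
                    key=lambda p: p.stat().st_mtime)
    links = []
    for p in photos:
        taken = datetime.fromtimestamp(p.stat().st_mtime)
        offset = int((taken - video_start).total_seconds())
        links.append(f"[[Clip@{offset}s|{p.name}]]")  # hypothetical link syntax
    return links
```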