The Semantic Analysis of Video research project is conducted at the Advanced Digital Sciences Center (ADSC) as part of the Interactive Digital Media (IDM) subprogram. The Principal Investigators on this project are Prof. Narendra Ahuja from the University of Illinois at Urbana-Champaign (UIUC) and Dr. Bernard Ghanem from King Abdullah University of Science and Technology.
This project investigates and develops a computational framework for the semantic analysis of complex activities in video. The project contributes to the main vision of the IDM by addressing the information-overload problem: it finds and models complex activities in video. By recognizing the actions and activities that take place in a scene, users can both quickly understand what happened and be made to feel virtually present in the scene. This research builds on our previous work on constrained multi-object motion analysis and modeling.
Representing action in dynamic video is challenging for several reasons; camera motion, object tracking, and activity recognition pose three of the main difficulties. We employ video registration, object tracking, and activity recognition techniques to address these problems (see our Research page for details). Video registration is the process of aligning successive video frames to a single absolute coordinate system. This is challenging in dynamic scenes because the camera may pan, tilt, and zoom. Tracking moving objects is difficult in part due to occlusion, motion blur, and the similar appearance of certain objects. These two modules are fundamental building blocks for the semantic analysis of video, and they are primary areas of our research. Object activity recognition, identifying the action an object is performing, is another important area of interest.
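For pan-tilt-zoom cameras viewing a roughly planar playing field, registration to an absolute coordinate system is commonly modeled as estimating a planar homography between frames. The sketch below is a minimal illustration of that general idea, not this project's actual pipeline: it estimates a 3x3 homography from point correspondences via the direct linear transform (DLT) and warps points into the reference frame. The function names are hypothetical.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points, N >= 4,
    in general position. Uses the direct linear transform (DLT):
    H is the null vector of the stacked constraint matrix A.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of constraints
        # derived from u = (h1 . p) / (h3 . p), v = (h2 . p) / (h3 . p).
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def warp_points(H, pts):
    """Map (N, 2) points through H into the reference coordinate system."""
    ones = np.ones((len(pts), 1))
    homog = np.hstack([pts, ones]) @ H.T  # homogeneous coordinates
    return homog[:, :2] / homog[:, 2:3]   # perspective divide
```

In practice, correspondences come from feature matching between frames and are filtered with a robust estimator such as RANSAC before the homography is fit; the sketch assumes clean correspondences for brevity.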
Currently, our primary application is sports video analysis. The goal is to detect important events and activities in sports video, such as a goal in soccer, a touchdown in American football, or a forehand winner in table tennis. Our new video analysis techniques are also used in other efforts, such as the Smart Office Integrative Project. The ability to track and subsequently analyze object motion across multiple registered cameras (e.g., employees moving through their office space) is useful in numerous applications.
Video activity analysis is well established in security surveillance and military applications, but the sports world remains extremely under-served. This semantic analysis research will benefit people across the sports domain: sports experts (coaches, teams, players), content providers (broadcasters and advertisers), and end users (via television or mobile devices). The ability to automatically or semi-automatically search, analyze, and understand large collections of sports video will aid users worldwide. Sports teams will understand and adjust strategy more efficiently; sports analysts will quickly grasp match activity and comment appropriately; and sports content providers will more quickly disseminate the relevant information that sports fans crave, enabling enhanced video streaming and advertising.