Figure 1: Two example clips. The left one is considered more interesting.
The number of videos available on the Web is growing explosively. While some videos are very interesting and receive high ratings from viewers, many others are less interesting or even boring.
A measure of video interestingness can be used to improve user satisfaction in many applications. For example, in Web video search, among videos with similar relevance to a query, it would be desirable to rank the more interesting ones higher. Similarly, this measure is useful in video recommendation: if the recommended videos are interesting, users will be more satisfied and, as a result, the stickiness of a video-sharing website will be greatly improved.
In this project, we conduct a pilot study on understanding human perception of video interestingness, and demonstrate a simple computational method to identify more interesting videos. To this end, we first construct two datasets of Flickr and YouTube videos, respectively. Human judgments of interestingness are collected and used as the ground truth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets.
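As a rough illustration of the pipeline described above, the sketch below trains a linear SVM on precomputed feature vectors to separate more interesting from less interesting videos. This is not the authors' code: the feature matrix and labels are synthetic stand-ins for the off-the-shelf visual/audio descriptors and human judgments used in the paper.

```python
# Minimal sketch (assumptions: linear SVM classifier, synthetic features
# standing in for real visual/audio descriptors and human labels).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical setup: one feature vector per video; label 1 means the
# video was judged "more interesting" by annotators.
n_videos, n_dims = 200, 64
X = rng.normal(size=(n_videos, n_dims))
w_true = rng.normal(size=n_dims)       # unknown direction of "interestingness"
y = (X @ w_true > 0).astype(int)       # synthetic ground-truth labels

# Train on the first 150 videos, evaluate on the held-out 50.
clf = LinearSVC(C=1.0).fit(X[:150], y[:150])
accuracy = clf.score(X[150:], y[150:])
```

In practice one would replace the synthetic `X` with real feature vectors (e.g., color histograms, SIFT, or audio descriptors) and `y` with the collected human judgments.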
Yu-Gang Jiang, Yanran Wang, Rui Feng, Xiangyang Xue, Yingbin Zheng, Hanfang Yang, Understanding and Predicting Interestingness of Videos, The 27th AAAI Conference on Artificial Intelligence (AAAI), Bellevue, Washington, USA, Jul. 2013.
To facilitate the study, we need benchmark datasets with ground-truth interestingness labels. Since no such dataset is publicly available, we collected two new datasets. The first dataset (1,200 videos) was collected from Flickr, which has a criterion called "interestingness" to rank its search results. The second dataset (420 videos) was collected from YouTube, which does not have a similar ranking criterion, so we hired 10 human annotators to provide interestingness ratings for the videos.
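One simple way to turn per-annotator ratings into ground truth, sketched below, is to average the 10 ratings for each video and rank videos by the mean. The averaging scheme and the example ratings are assumptions for illustration, not the paper's exact annotation protocol.

```python
# Hypothetical sketch: aggregate 10 annotators' ratings (higher = more
# interesting) into a single ground-truth score per video by averaging.
ratings = {
    "video_001": [4, 5, 3, 4, 4, 5, 4, 3, 4, 5],  # 10 annotator ratings
    "video_002": [2, 1, 2, 3, 2, 2, 1, 2, 2, 3],
}

# Mean rating per video, then a ranking from most to least interesting.
ground_truth = {vid: sum(r) / len(r) for vid, r in ratings.items()}
ranking = sorted(ground_truth, key=ground_truth.get, reverse=True)
```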
Click here to download the dataset (~7.3GB in total).
Note: People who download this dataset must agree that 1) the use of the data is restricted to research purposes only, and 2) the authors of the above AAAI'13 paper, and Fudan University, make no warranties regarding this dataset, including but not limited to warranties of non-infringement.