Talk:Video Content Structuring


    [Author] We would like to thank the three reviewers of this article for their valuable comments and revisions. We have revised the article according to their comments.


    The page covers the topic well. I would suggest adding the following references: Hari Sundaram's work on summarizing movies; Nevenka Dimitrova et al.'s work on movies and music videos; and the work on sports video by Li, Sezan et al., Divakaran et al., Changsheng Xu et al., etc. Xiong et al.'s work should be moved from recommended reading to the references, since it presents an overview of the problem. This is not a major criticism.

    [Author] Thanks. We have investigated the suggested references and added several related ones to the article. Xiong et al.'s work has been removed from recommended reading.


    Reviewer B:

    This is a very well-written article that provides good coverage of the video content structuring topic. In my view, the article could be made even better if revised and expanded according to the following suggestions:

    - remove the literature references from the "Basic Description" section, since that section introduces the topic and gives basic definitions of the terms. The literature references should be concentrated in the later sections where each aspect of video content structuring is treated in depth. There, however, they should be made complete, to cover all important contributions of the relevant previous work.

    [Author] Thanks for this suggestion. We have removed the literature references from the "Basic Description" section. As for the references in each part, we have tried to select several representative research efforts. Since a Scholarpedia article is an entry intended to give readers a quick but precise idea of the topic, comprehensively covering hundreds of references may not be a good idea. However, we will keep updating the article and introducing more interesting works in this field.

    - not all papers listed under "References" are mentioned in the article. They should be mentioned and put in the proper context in the text to reveal the entire spectrum of major solutions to the problem.

    [Author] Thanks for pointing this out. We have fixed the problem.

    - I would suggest expanding the Scene Grouping section and the list of references to mention and discuss the following contributions:

    1. Kender and Yeo, on video scene segmentation via continuous video coherence (CVPR98),

    2. Hanjalic et al., on video segmentation into logical story units (T-CSVT June 1999),

    3. Vendrig and Worring, on evaluation of logical story unit segmentation methods (T-MM Dec. 2002)

    [Author] Thanks. We have introduced these works in the article.

    Reviewer C:

    Dear Authors,

    I did not mean "remove" Xiong et al from Recommended reading. I was recommending moving it to the references section.

    Best Regards,

    Reviewer


    [Author] Thanks. We have now moved Xiong et al.'s book to the references section.


    Reviewer C:

    Dear Authors,

    Please check the article thoroughly for grammatical and spelling errors. I could find only one so far. Please replace "already becomes" with "has already become" in the section on shot detection. In general, I like the organization and thoroughness of the article. It is a very good survey of the state of the art.

    Best Regards, ajay

    [Author] Thanks, Ajay. We have checked the article and fixed the typos and errors we could find.

    Reviewer C:

    Dear All,

    Here are some suggested grammatical corrections. If you run a search on your article with some keywords, you will understand what I mean by the following list:

    - More and more video data has become available to …
    - One or more keyframes can be selected from …
    - Replace “terminologies” with “terms”
    - Learning methods can be made more robust …
    - A more comprehensive survey can be found in …
    - Using Gaussian Mixture Models
    - A Subshot is a …
    - They can be used as entry points into the video.
    - Typically, the shot, as defined earlier, is adopted as a basic unit …
    - Therefore, the shot is adopted as the unit for annotation.

    Best Regards, ajay

    Reviewer C:

    Dear All,


    If you could not understand my latest comment, please let me know. I will show you exactly where the changes have to be made. It is difficult to do with this interface.

    Best Regards, ajay

    [Author] Thanks. We have checked the article and corrected the errors. You can also edit the article directly (you have permission to edit it) if you think it needs any change, just like Reviewer A.

    Reviewer C:

    I directly edited the article and made two or three changes; I don't remember them all now. One last suggestion: please read Kevin Wilson's contribution to my book: http://www.merl.com/publications/TR2008-091/

    It provides a computationally simple, product-ready approach to content-adaptive temporal segmentation.

    I think referring to it in your article would be good. But I leave that up to you.

    These are my last comments. Good job with the article.

    ajay

    [Author] Thanks for your revision. We have read the article you mentioned and found it interesting. The work has been introduced in Section 3: "Recently, Wilson and Divakaran (2009) proposed a supervised learning approach to scene grouping. It classfies shot boundaries into scene boundaries and non-scene-boundaries by learning models from labeled training data, and in this way the discrimination rules can be made more robust and can deal with videos with varying content."
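
    To make the quoted idea concrete, the following is a minimal sketch of supervised scene-boundary classification of this kind: each shot boundary is described by a small feature vector, and a classifier trained on labeled boundaries decides whether it is also a scene boundary. The feature names, the synthetic training data, and the choice of an SVM below are illustrative assumptions only, not Wilson and Divakaran's actual features or model.

    # Minimal sketch: classify shot boundaries as scene vs. non-scene boundaries
    # from labeled examples. Features and labels are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)

    def make_boundary_features(n):
        # Stand-ins for per-boundary cues such as color-histogram dissimilarity
        # between adjacent shots, shot-length ratio, and audio change.
        visual_dissimilarity = rng.uniform(0.0, 1.0, n)
        shot_length_ratio = rng.uniform(0.1, 10.0, n)
        audio_change = rng.uniform(0.0, 1.0, n)
        X = np.column_stack([visual_dissimilarity, shot_length_ratio, audio_change])
        # Assumed rule for the synthetic labels: boundaries with large visual
        # and audio change are scene boundaries (1), the rest are not (0).
        y = ((visual_dissimilarity + audio_change) > 1.2).astype(int)
        return X, y

    X, y = make_boundary_features(2000)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # any standard classifier would do here
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["shot boundary only", "scene boundary"]))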

    Reviewer C:

    Thanks for your quick response. You have a typo in the text you added, which I edited: I replaced "classfies" with "classifies." This is my absolute last comment. The editor shows all spelling problems with red underlining, so please go through those carefully.

    Regards, ajay

    [Author] Thanks. We have further checked the article for typos and it seems OK now.
