Collaborative tagging systems have become popular for annotating resources of all kinds, ranging from electronic documents to real-world objects. In current tagging systems, resources are annotated and referenced as a whole by user-defined tags. For multimedia data such as video, however, individual scenes can be identified and annotated using MPEG-7 metadata. We propose a collaborative tagging system that is combined with an automated annotation system for synchronized multimedia presentations. MPEG-7 metadata is used to annotate individual scenes with user-compiled tagging information, in combination with metadata provided directly by the author or by other annotation systems. The resulting system can search within multimedia data and can further be extended to search within any kind of (partial) document, enabling a more tightly focused and personalized search.
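As a rough illustration of the idea (not an excerpt from the proposed system), user tags for a single scene could be attached to an MPEG-7 video segment along the following lines. The element names follow the MPEG-7 Multimedia Description Schemes; the time points, keywords, and free-text values are invented for this sketch:

```xml
<!-- Hypothetical MPEG-7 fragment: one annotated scene of a video.
     Element names are standard MPEG-7 MDS; all values are illustrative. -->
<Mpeg7 xmlns="urn:mpeg:mpeg7:schema:2001">
  <Description xsi:type="ContentEntityType"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <MultimediaContent xsi:type="VideoType">
      <Video>
        <TemporalDecomposition>
          <VideoSegment>
            <!-- Scene boundaries: start at 1:30, duration 15 s (invented) -->
            <MediaTime>
              <MediaTimePoint>T00:01:30</MediaTimePoint>
              <MediaDuration>PT15S</MediaDuration>
            </MediaTime>
            <!-- Author-provided description of the scene -->
            <TextAnnotation>
              <FreeTextAnnotation>Introductory slide on tagging</FreeTextAnnotation>
            </TextAnnotation>
            <!-- User-compiled tags attached to this scene only -->
            <TextAnnotation>
              <KeywordAnnotation>
                <Keyword>tagging</Keyword>
                <Keyword>folksonomy</Keyword>
              </KeywordAnnotation>
            </TextAnnotation>
          </VideoSegment>
        </TemporalDecomposition>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>
```

Because each tag is scoped to a `VideoSegment` rather than to the video as a whole, a search can return the matching scene directly instead of the entire resource.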