Record

  Automatic annotation suggestions for audiovisual archives: Evaluation aspects

Gazendam, L., Wartena, C., Malaise, V., Schreiber, G., De Jong, A., & Brugman, H. (2009). Automatic annotation suggestions for audiovisual archives: Evaluation aspects. Interdisciplinary Science Reviews, 34(2/3), 172-188. doi:10.1179/174327909X441090.

Files

Gazendam_Automatic_annotation_Int._Sci_Rev_2009.pdf (publisher's version), 284KB
Name: Gazendam_Automatic_annotation_Int._Sci_Rev_2009.pdf
Description: -
OA status:
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata:
Copyright date: -
Copyright info: -
License: -

Creators

Creators:
Gazendam, Luit (1), Author
Wartena, Christian (1), Author
Malaise, Veronique (2), Author
Schreiber, Guus (2), Author
De Jong, Annemieke (3), Author
Brugman, Hennie (4), Author
Affiliations:
(1) Novay, NL-7500 AN Enschede, Netherlands, ou_persistent22
(2) Department of Computer Science, Vrije Universiteit Amsterdam, The Netherlands, ou_persistent22
(3) Netherlands Institute for Sound and Vision, Hilversum, The Netherlands, ou_persistent22
(4) Technical Group, MPI for Psycholinguistics, Max Planck Society, ou_55220

Content

Keywords: -
Abstract: In the context of large and ever-growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time-consuming and expensive task of manual annotation, and it could help cataloguers attain a higher inter-annotator agreement. However, some questions arise in practice: what is the quality of the automatically produced annotations? How do they compare with manual annotations and with the requirements for annotation that were defined in the archive? If different from the manual annotations, are the automatic annotations wrong? In the CHOICE project, partially hosted at the Netherlands Institute for Sound and Vision, the Dutch public archive for audiovisual broadcasts, we automatically generate annotation suggestions for cataloguers. In this paper, we define three types of evaluation of these annotation suggestions: (1) a classic and strict evaluation measure expressing the overlap between automatically generated keywords and the manual annotations, (2) a loosened evaluation measure for which semantically very similar annotations are also considered as relevant matches, and (3) an in-use evaluation of the usefulness of manual versus automatic annotations in the context of serendipitous browsing. During serendipitous browsing, the annotations (manual or automatic) are used to retrieve and visualize semantically related documents.
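The first two evaluation measures described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names and the `related` term-pair set (standing in for the paper's semantic-similarity judgment) are assumptions.

```python
# Sketch of (1) strict and (2) loosened overlap evaluation of automatic
# keyword suggestions against manual annotations. Illustrative only;
# names and the `related` relation are assumptions, not from the paper.

def strict_overlap(auto, manual):
    """Precision/recall of automatic keywords vs. manual ones, exact match."""
    auto, manual = set(auto), set(manual)
    hits = auto & manual
    precision = len(hits) / len(auto) if auto else 0.0
    recall = len(hits) / len(manual) if manual else 0.0
    return precision, recall

def loosened_overlap(auto, manual, related):
    """As above, but a suggestion also counts as a match when it is
    semantically similar to a manual keyword, per a set of term pairs."""
    auto, manual = set(auto), set(manual)

    def sim(a, m):
        return a == m or (a, m) in related or (m, a) in related

    hits = [a for a in auto if any(sim(a, m) for m in manual)]
    covered = [m for m in manual if any(sim(a, m) for a in auto)]
    precision = len(hits) / len(auto) if auto else 0.0
    recall = len(covered) / len(manual) if manual else 0.0
    return precision, recall

auto_kw = ["war", "broadcast", "radio"]       # suggested keywords
manual_kw = ["war", "television"]             # cataloguer's annotations
related = {("broadcast", "television")}       # assumed similarity pairs
print(strict_overlap(auto_kw, manual_kw))     # strict: only "war" matches
print(loosened_overlap(auto_kw, manual_kw, related))
```

Under the loosened measure, "broadcast" also counts as a hit because it is related to the manual keyword "television", so both precision and recall rise, which is exactly the gap between measures (1) and (2) that the paper quantifies.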

Details

Language(s):
Date: 2009
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: Peer review
Identifiers: DOI: 10.1179/174327909X441090
Degree type: -


Source 1

Title: Interdisciplinary Science Reviews
Source genre: Journal
Place, publisher, edition: London : Heyden
Pages: -
Volume / issue: 34 (2/3)
Article number: -
Start / end page: 172 - 188
Identifier: Other: 954928564516
ISSN: 0308-0188