
Released

Journal Article

The SHARC Framework for Data Quality in Web Archiving

MPG Authors
/persons/resource/persons44297

Denev, Dimitar
Databases and Information Systems, MPI for Informatics, Max Planck Society;

/persons/resource/persons45016

Mazeika, Arturas
Databases and Information Systems, MPI for Informatics, Max Planck Society;

/persons/resource/persons45528

Spaniol, Marc
Databases and Information Systems, MPI for Informatics, Max Planck Society;

/persons/resource/persons45720

Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society;

External Resources
There are no external resources on record
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Denev, D., Mazeika, A., Spaniol, M., & Weikum, G. (2011). The SHARC Framework for Data Quality in Web Archiving. The VLDB Journal, 20(2), 183-207. doi:10.1007/s00778-011-0219-9.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0010-14D3-C
Abstract
Web archives preserve the history of born-digital content and offer great potential for sociologists, business analysts, and legal experts on intellectual property and compliance issues. Data quality is crucial for these purposes. Ideally, crawlers should gather coherent captures of entire Web sites, but the politeness etiquette and the completeness requirement mandate very slow, long-duration crawling while Web sites undergo changes. This paper presents the SHARC framework for assessing the data quality in Web archives and for tuning capturing strategies towards better quality with given resources. We define data-quality measures, characterize their properties, and develop a suite of quality-conscious scheduling strategies for archive crawling. Our framework includes single-visit and visit-revisit crawls. Single-visit crawls download every page of a site exactly once, in an order that aims to minimize the "blur" in capturing the site. Visit-revisit strategies revisit pages after their initial downloads to check for intermediate changes. The revisiting order aims to maximize the "coherence" of the site capture (the number of pages that did not change during the capture). The quality notions of blur and coherence are formalized in the paper. Blur is a stochastic notion that reflects the expected number of page changes that a time-travel access to a site capture would accidentally see, instead of the ideal view of an instantaneously captured, "sharp" site. Coherence is a deterministic quality measure that counts the number of unchanged, and thus coherently captured, pages in a site snapshot. Strategies that aim to either minimize blur or maximize coherence are based on prior knowledge of, or predictions for, the change rates of individual pages. Our framework includes fairly accurate classifiers for change predictions. All strategies are fully implemented in a testbed and shown to be effective by experiments with both synthetically generated sites and a periodic crawl series for different Web sites.
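The coherence notion described in the abstract can be made concrete with a small sketch. The snippet below is not the SHARC implementation; it is a minimal illustration, assuming that pages from a visit-revisit crawl are recorded with the content downloaded at each of the two downloads, and that intermediate changes are detected by comparing content checksums. A page counts toward coherence if its content is identical at the visit and the revisit. The `Capture` structure and the example URLs are hypothetical.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Capture:
    """One visit-revisit observation of a page (hypothetical structure)."""
    url: str
    visit_content: bytes    # content downloaded at the initial visit
    revisit_content: bytes  # content downloaded at the revisit


def checksum(content: bytes) -> str:
    """Fingerprint of page content, used to detect intermediate changes."""
    return hashlib.sha256(content).hexdigest()


def coherence(captures: list[Capture]) -> int:
    """Deterministic coherence of a site snapshot: the number of pages whose
    content is identical at visit and revisit, i.e. pages captured without
    an intermediate change."""
    return sum(
        1 for c in captures
        if checksum(c.visit_content) == checksum(c.revisit_content)
    )


# Example: a three-page site snapshot in which one page changed mid-crawl.
snapshot = [
    Capture("http://example.org/", b"home v1", b"home v1"),
    Capture("http://example.org/news", b"news v1", b"news v2"),   # changed
    Capture("http://example.org/about", b"about v1", b"about v1"),
]
print(coherence(snapshot))  # -> 2 coherently captured pages
```

Blur, by contrast, is a stochastic measure and would weight each page by its expected number of changes during the capture interval rather than by an observed change; that computation depends on the change-rate model defined in the paper and is omitted here.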