
Released

Conference Paper

Fallacies in evaluating decentralized systems

MPS-Authors

Haeberlen, Andreas
Group P. Druschel, Max Planck Institute for Software Systems, Max Planck Society

Mislove, Alan
Group P. Druschel, Max Planck Institute for Software Systems, Max Planck Society

Post, Ansley
Group P. Druschel, Max Planck Institute for Software Systems, Max Planck Society

Druschel, Peter
Group P. Druschel, Max Planck Institute for Software Systems, Max Planck Society

Citation

Haeberlen, A., Mislove, A., Post, A., & Druschel, P. (2006). Fallacies in evaluating decentralized systems. In 5th International Workshop on Peer-to-Peer Systems (IPTPS'06) (pp. 19-24).
Cite as: https://hdl.handle.net/11858/00-001M-0000-0028-8C86-4
Abstract
Research on decentralized systems such as peer-to-peer overlays and ad hoc networks has been hampered by the fact that few systems of this type are in production use, and the space of possible applications is still poorly understood. As a consequence, new ideas have mostly been evaluated using common synthetic workloads, traces from a few existing systems, testbeds like PlanetLab, and simulators like ns-2. Some of these methods have, in fact, become the "gold standard" for evaluating new systems, and are often a prerequisite for getting papers accepted at top conferences in the field. In this paper, we examine the current practice of evaluating decentralized systems under these specific sets of conditions and point out pitfalls associated with this practice. In particular, we argue that (i) despite authors' best intentions, results from such evaluations often end up being inappropriately generalized; (ii) there is an incentive not to deviate from the accepted standard of evaluation, even when deviating would be technically appropriate; (iii) research may gravitate towards systems that are feasible and perform well when evaluated in the accepted environments; and (iv) in the worst case, research may become ossified as a result. We close with a call to action for the community to develop tools, data, and best practices that allow systems to be evaluated across a space of workloads and environments.