A method for implementing lock-free shared data structures


Barnes, Greg
Algorithms and Complexity, MPI for Informatics, Max Planck Society;

Barnes, G.(1994). A method for implementing lock-free shared data structures (MPI-I-94-120). Saarbrücken: Max-Planck-Institut für Informatik.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0014-B78E-0
We are interested in implementing data structures on shared memory
multiprocessors. A natural model for these machines is an
asynchronous parallel machine, in which the processors are subject to
arbitrary delays. On such machines, it is desirable for algorithms to
be {\em lock-free}, that is, to allow concurrent access to data
without using mutual exclusion.
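The idea can be seen in miniature in a lock-free counter (our own
illustration, not code from the report): each update retries a
compare-and-swap until it succeeds, so no thread ever holds a lock and
an arbitrarily delayed thread cannot block the others.

```c
#include <assert.h>
#include <stdatomic.h>

/* A minimal lock-free counter: no mutual exclusion, only retried CAS. */
static _Atomic long counter = 0;

static void lock_free_increment(void) {
    long old;
    do {
        old = atomic_load(&counter);
        /* CAS fails if another thread changed the value meanwhile;
         * we simply retry with the freshly observed value. */
    } while (!atomic_compare_exchange_weak(&counter, &old, old + 1));
}
```

If the CAS fails, some other thread's increment must have succeeded, which is exactly the lock-free progress guarantee: system-wide progress without any thread holding exclusive access.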
Efficient lock-free implementations are known for some specific data
structures, but these algorithms do not generalize well to other
structures. For most data structures, the only previously known lock-free
algorithm is due to Herlihy. Herlihy presents a
simple methodology to create a lock-free implementation of a general
data structure, but his approach can be very expensive.
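Herlihy's methodology can be sketched roughly as follows (a minimal
illustration with our own names, not code from the report): the shared
structure is reached through a single root pointer; an operation copies
the entire structure, applies the sequential operation to its private
copy, and installs the copy with one compare-and-swap on the root. The
whole-structure copy is what makes the approach expensive.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for an arbitrary sequential data structure. */
typedef struct {
    int data[4];
} Obj;

static _Atomic(Obj *) root;

/* The sequential operation being made lock-free. */
static void apply_op(Obj *o) { o->data[0] += 1; }

static void herlihy_update(void) {
    Obj *old, *copy = malloc(sizeof *copy);
    do {
        old = atomic_load(&root);
        memcpy(copy, old, sizeof *copy);   /* logical copy: the cost C */
        apply_op(copy);                    /* sequential work: cost T  */
        /* Install the new version only if the root is unchanged. */
    } while (!atomic_compare_exchange_weak(&root, &old, copy));
    /* (A real implementation must also reclaim the displaced version.) */
}
```

Note that the copy-and-CAS loop serializes whole-structure updates at the root pointer, which is also why this style of implementation cannot support concurrent updates to disjoint parts of the structure.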

We present a technique that provides the semantics of
exclusive access to data without using mutual exclusion.
Using this technique,
we devise the {\em caching method},
a general method of implementing lock-free data
structures that is provably
better than Herlihy's methodology for many
well-known data structures.
The cost of one operation using the caching method
is proportional to $T \log T$, where $T$ is the sequential cost of the
operation. Under Herlihy's methodology, the
cost is proportional to $T + C$,
where $C$ is the time needed to make a logical copy of the data structure.
For many data structures, such as arrays and
{\em well connected} pointer-based structures (e.g., a doubly linked
list), the best known value for $C$ is
proportional to the size of the structure, making the copying time
much larger than the sequential cost of an operation.
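As a concrete comparison (our arithmetic, not the report's): for a
doubly linked list of $n$ nodes, inserting at a known position costs
$T = O(1)$ sequentially, so the two bounds diverge sharply.

```latex
% Doubly linked list of n nodes, insertion at a known position:
%   sequential cost:      T = O(1)
%   caching method:       T \log T = O(1)
%   Herlihy's method:     T + C = O(1) + \Theta(n) = \Theta(n)
% since the best known logical copy of the list costs C = \Theta(n).
```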
The new method can also allow {\em concurrent updates} to the data
structure; Herlihy's methodology cannot.
A correct lock-free implementation can be derived
from a correct sequential implementation in a straightforward manner
using this method.
The method is also flexible; a programmer can change many of the details
of the default implementation to optimize for a particular pattern of data
structure use.