
Released

Conference Paper

DIRAC - Distributed Infrastructure with Remote Agent Control

MPS-Authors

Schmelling, Michael
Division Prof. Dr. Werner Hofmann, MPI for Nuclear Physics, Max Planck Society

Fulltext (public)

2003_CHEP.pdf (Preprint), 695 KB

Citation

van Herwijnen, E., Closier, J., Frank, M., Gaspar, C., Loverre, F., Ponce, S., et al. (2003). DIRAC - Distributed Infrastructure with Remote Agent Control.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0011-81DD-A
Abstract
This paper describes DIRAC, the LHCb Monte Carlo production system. DIRAC has a client/server architecture based on:
· compute elements distributed among the collaborating institutes;
· databases for production management, bookkeeping (the metadata catalogue) and software configuration;
· monitoring and cataloguing services for updating and accessing the databases.
Locally installed software agents implemented in Python monitor the local batch queue, interrogate the production database for any outstanding production requests using the XML-RPC protocol, and initiate job submission. The agent checks and, if necessary, installs any required software automatically. After the job has processed the events, the agent transfers the output data and updates the metadata catalogue. DIRAC has been successfully installed at 18 collaborating institutes, including the DataGrid, and has been used in recent Physics Data Challenges. In the near- to medium-term future we must use a mixed environment with different types of grid middleware or no middleware. We describe how this flexibility has been achieved and how ubiquitously available grid middleware would improve DIRAC.
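The polling cycle the abstract describes (check the local batch queue, ask the production service for outstanding requests over XML-RPC, submit jobs) can be sketched roughly as follows. This is a minimal illustration, not the actual DIRAC code: the service URL, method name, and request fields here are invented assumptions.

```python
import xmlrpc.client

# Hypothetical endpoint -- the real DIRAC production service URL
# and API are not specified in the abstract.
PRODUCTION_SERVICE_URL = "http://example.org:8080/production"

def run_agent_cycle(server, free_slots):
    """One polling cycle of a site agent.

    server     -- XML-RPC proxy (or stand-in) for the production service
    free_slots -- number of free slots observed in the local batch queue
    Returns the list of request IDs submitted this cycle.
    """
    if free_slots <= 0:
        return []  # local batch queue is full; try again on the next cycle
    # Interrogate the central production database for outstanding requests.
    requests = server.get_outstanding_requests(free_slots)
    submitted = []
    for req in requests:
        # A real agent would verify/install the required software version
        # here, then submit a batch job for the production request.
        submitted.append(req["request_id"])
    return submitted

# Creating the proxy performs no network I/O; only method calls do.
proxy = xmlrpc.client.ServerProxy(PRODUCTION_SERVICE_URL)
```

In the paper's scheme such a cycle would run periodically on each collaborating site, so the central service never needs to push work to sites behind firewalls.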