  Estimating the galaxy two-point correlation function using a split random catalog

Keihänen, E., Kurki-Suonio, H., Lindholm, V., Viitanen, A., Suur-Uski, A.-S., Allevato, V., et al. (2019). Estimating the galaxy two-point correlation function using a split random catalog. Astronomy and Astrophysics, 631: A73. doi:10.1051/0004-6361/201935828.

Files:
Estimating the galaxy two-point correlation function using a split random catalog.pdf (Any fulltext), 310KB
Visibility: Private
MIME-Type: application/pdf

Creators:
Keihänen, E., Author
Kurki-Suonio, H., Author
Lindholm, V., Author
Viitanen, A., Author
Suur-Uski, A.-S., Author
Allevato, V., Author
Branchini, E., Author
Marulli, F., Author
Norberg, P., Author
Tavagnacco, D., Author
de la Torre, S., Author
Valiviita, J., Author
Viel, M., Author
Bel, J., Author
Frailis, M., Author
Sanchez, A. G.1, Author
Affiliations:
1 Optical and Interpretative Astronomy, MPI for Extraterrestrial Physics, Max Planck Society, ou_159895

Content

Free keywords: -
Abstract: The two-point correlation function of the galaxy distribution is a key cosmological observable that allows us to constrain the dynamical and geometrical state of our Universe. To measure the correlation function we need to know both the galaxy positions and the expected galaxy density field. The expected field is commonly specified using a Monte Carlo sampling of the volume covered by the survey and, to minimize additional sampling errors, this random catalog has to be much larger than the data catalog. Correlation function estimators compare data–data pair counts to data–random and random–random pair counts, where random–random pairs usually dominate the computational cost. Future redshift surveys will deliver spectroscopic catalogs of tens of millions of galaxies. Given the large number of random objects required to guarantee sub-percent accuracy, it is of paramount importance to improve the efficiency of the algorithm without degrading its precision. We show both analytically and numerically that splitting the random catalog into subcatalogs of the same size as the data catalog when calculating random–random pairs, and excluding pairs across different subcatalogs, provides the optimal error at fixed computational cost. For a random catalog fifty times larger than the data catalog, this reduces the computation time by a factor of more than ten without affecting estimator variance or bias.
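
Because the abstract describes the split-random scheme at the level of pair counts, a short sketch may help make it concrete. The Python fragment below is an illustration under stated assumptions, not the authors' implementation: catalog sizes, bin edges, and the use of SciPy's cKDTree for toy-scale pair counting are choices made for this example. It counts random–random (RR) pairs only within each random subcatalog (each roughly the size of the data catalog), never across subcatalogs, and combines the normalized DD, DR, and RR counts in the standard Landy-Szalay estimator xi = (DD - 2 DR + RR) / RR.

# Illustrative sketch only (not the authors' code): Landy-Szalay xi(r) with the
# RR term computed from disjoint random subcatalogs, as described in the abstract.
import numpy as np
from scipy.spatial import cKDTree


def cross_pair_counts(a, b, edges):
    # Cross-pair counts between catalogs a and b, per separation bin.
    cum = cKDTree(a).count_neighbors(cKDTree(b), edges)  # cumulative counts, d <= r
    return np.diff(cum).astype(float)


def auto_pair_counts(a, edges):
    # Distinct pairs within catalog a; self-pairs (d = 0) fall below the first
    # edge and cancel in np.diff, and dividing by 2 removes double counting.
    return cross_pair_counts(a, a, edges) / 2.0


def xi_landy_szalay_split(data, randoms, edges, n_split):
    # Normalized DD and DR use the full catalogs, as in the standard estimator.
    nd, nr = len(data), len(randoms)
    dd = auto_pair_counts(data, edges) / (nd * (nd - 1) / 2.0)
    dr = cross_pair_counts(data, randoms, edges) / (nd * nr)

    # Split the random catalog into subcatalogs of roughly the data-catalog size
    # and count RR pairs only inside each subcatalog, never across subcatalogs.
    rr = np.zeros(len(edges) - 1)
    n_pairs = 0.0
    for sub in np.array_split(randoms, n_split):
        m = len(sub)
        rr += auto_pair_counts(sub, edges)
        n_pairs += m * (m - 1) / 2.0
    rr /= n_pairs

    # Landy-Szalay estimator with normalized pair counts.
    return (dd - 2.0 * dr + rr) / rr


# Toy usage: 2,000 data points, a 50x larger random catalog split into 50 parts.
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 100.0, size=(2000, 3))
randoms = rng.uniform(0.0, 100.0, size=(100000, 3))
edges = np.linspace(1.0, 20.0, 11)
xi = xi_landy_szalay_split(data, randoms, edges, n_split=50)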

Details
Language(s):
 Dates: 2019-10-22
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.1051/0004-6361/201935828
Other: LOCALID: 3222355
 Degree: -

Source 1
Title: Astronomy and Astrophysics
Other: Astron. Astrophys.
Source Genre: Journal
Creator(s):
Affiliations:
Publ. Info: France : EDP Sciences S. A.
Pages: -
Volume / Issue: 631
Sequence Number: A73
Start / End Page: -
Identifier: ISSN: 1432-0746
CoNE: https://pure.mpg.de/cone/journals/resource/954922828219_1