  Optimal Testing of Discrete Distributions with High Probability

Diakonikolas, I., Gouleakis, T., Kane, D. M., Peebles, J., & Price, E. (2020). Optimal Testing of Discrete Distributions with High Probability. Retrieved from https://arxiv.org/abs/2009.06540.

Files

arXiv:2009.06540.pdf (Preprint), 456 KB
Name: arXiv:2009.06540.pdf
Description: File downloaded from arXiv at 2020-12-09 13:35
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

 Creators:
Diakonikolas, Ilias 1, Author
Gouleakis, Themis 2, Author
Kane, Daniel M. 1, Author
Peebles, John 1, Author
Price, Eric 1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Algorithms and Complexity, MPI for Informatics, Max Planck Society, ou_24019

Content

Free keywords: Computer Science, Data Structures and Algorithms, cs.DS; Computer Science, Learning, cs.LG; Mathematics, Statistics, math.ST; Statistics, Machine Learning, stat.ML; Statistics, Statistics Theory, stat.TH
 Abstract: We study the problem of testing discrete distributions with a focus on the
high probability regime. Specifically, given samples from one or more discrete
distributions, a property $\mathcal{P}$, and parameters $0< \epsilon, \delta
<1$, we want to distinguish {\em with probability at least $1-\delta$} whether
these distributions satisfy $\mathcal{P}$ or are $\epsilon$-far from
$\mathcal{P}$ in total variation distance. Most prior work in distribution
testing studied the constant confidence case (corresponding to $\delta =
\Omega(1)$), and provided sample-optimal testers for a range of properties.
While one can always boost the confidence probability of any such tester by
black-box amplification, this generic boosting method typically leads to
sub-optimal sample bounds.
Here we study the following broad question: For a given property
$\mathcal{P}$, can we {\em characterize} the sample complexity of testing
$\mathcal{P}$ as a function of all relevant problem parameters, including the
error probability $\delta$? Prior to this work, uniformity testing was the only
statistical task whose sample complexity had been characterized in this
setting. As our main results, we provide the first algorithms for closeness and
independence testing that are sample-optimal, within constant factors, as a
function of all relevant parameters. We also show matching
information-theoretic lower bounds on the sample complexity of these problems.
Our techniques naturally extend to give optimal testers for related problems.
To illustrate the generality of our methods, we give optimal algorithms for
testing collections of distributions and testing closeness with unequal sized
samples.
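
The "black-box amplification" the abstract refers to is the standard trick of running a constant-confidence tester independently O(log(1/δ)) times and taking a majority vote; a Hoeffding bound then pushes the failure probability below δ. The sketch below illustrates only this generic boosting baseline, not the paper's algorithms; the tester interface, the 2/3 base success probability, and the toy uniformity check are assumptions made for illustration.

```python
import math
import random

def amplify(base_tester, draw_samples, n_base, delta):
    """Boost a tester that is correct with probability >= 2/3 (assumed)
    to correctness probability >= 1 - delta by independent repetition
    and majority vote. Each run uses n_base fresh samples, so the total
    sample cost is O(n_base * log(1/delta)) -- the multiplicative
    log(1/delta) overhead that the paper shows is often avoidable."""
    # Hoeffding: P(majority wrong) <= exp(-reps/18) for a 2/3-vs-1/3 vote,
    # so reps = ceil(18 * ln(1/delta)) repetitions suffice.
    reps = max(1, math.ceil(18 * math.log(1.0 / delta)))
    votes = sum(bool(base_tester(draw_samples(n_base))) for _ in range(reps))
    return votes > reps / 2  # accept iff a majority of runs accept


# Hypothetical usage with a toy collision-style uniformity check:
# accept iff no value repeats among the drawn samples.
def toy_uniformity_tester(samples):
    return len(set(samples)) == len(samples)

draw = lambda m: [random.randrange(1000) for _ in range(m)]
print(amplify(toy_uniformity_tester, draw, n_base=30, delta=0.01))
```

The paper's point is that, for properties such as closeness and independence, this multiplicative log(1/δ) blow-up is not necessary: the optimal sample complexity has a milder dependence on δ, and the authors give testers and matching lower bounds that pin it down.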

Details

Language(s): eng - English
Dates: 2020-09-14, 2020
 Publication Status: Published online
 Pages: 48 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2009.06540
BibTex Citekey: Diakonikolas_arXiv2009.06540
URI: https://arxiv.org/abs/2009.06540
 Degree: -
