Model selection, large deviations and consistency of data-driven tests


Langovoy, M. (2009). Model selection, large deviations and consistency of data-driven tests (2009-007).

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C59F-C
We consider three general classes of data-driven statistical tests. Neyman's smooth tests, data-driven score tests, and data-driven score tests for statistical inverse problems serve as important special examples of the classes of tests under consideration. Our tests additionally incorporate model selection rules. The rules are based on the penalization idea; most of the optimal penalties derived in the statistical literature can be used in our tests.
We prove general consistency theorems for the tests from those classes. Our proofs make use of
large deviations inequalities for deterministic and random quadratic forms.
The paper shows that the tests can be applied to simple and composite parametric, semiparametric, and nonparametric hypotheses. Applications to testing in statistical inverse problems and to statistics for stochastic processes are also presented.
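To illustrate the kind of construction the abstract describes, here is a minimal sketch of a data-driven Neyman smooth test for uniformity in the style of Ledwina, where the dimension of the smooth statistic is chosen by a Schwarz (BIC-type) penalty. The basis choice (orthonormal shifted Legendre polynomials), the penalty `k * log(n)`, and the cutoff `k_max` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def smooth_stat(x, k):
    """Neyman smooth statistic of order k for H0: X ~ Uniform(0, 1).

    Uses the orthonormal shifted Legendre polynomials on [0, 1]:
    phi_j(x) = sqrt(2j + 1) * P_j(2x - 1).
    """
    n = len(x)
    t = 2.0 * x - 1.0  # map [0, 1] onto [-1, 1] for Legendre polynomials
    stat = 0.0
    for j in range(1, k + 1):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0                       # select the degree-j polynomial
        phi = np.sqrt(2 * j + 1) * legval(t, coeffs)
        stat += (phi.sum() / np.sqrt(n)) ** 2  # squared standardized empirical score
    return stat

def data_driven_test(x, k_max=10):
    """Choose the dimension by a Schwarz-type penalty, then evaluate the
    smooth statistic at that data-driven dimension."""
    n = len(x)
    stats = [smooth_stat(x, k) for k in range(1, k_max + 1)]
    penalized = [s - k * np.log(n) for k, s in enumerate(stats, start=1)]
    k_hat = int(np.argmax(penalized)) + 1
    return k_hat, stats[k_hat - 1]

rng = np.random.default_rng(0)
k0, t0 = data_driven_test(rng.uniform(size=500))         # data from H0
k1, t1 = data_driven_test(rng.beta(2.0, 5.0, size=500))  # non-uniform data
```

Under the null hypothesis the penalty keeps the selected dimension small and the statistic behaves roughly like a low-dimensional chi-square, while under a smooth alternative (here Beta(2, 5)) the statistic grows with the sample size, which is the consistency behavior the paper's theorems establish in general.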