Released

Paper

Specifying and Testing k-Safety Properties for Machine-Learning Models

MPS-Authors
Christakis, Maria
Group M. Christakis, Max Planck Institute for Software Systems, Max Planck Society;

Eniser, Hassan Ferit
Group M. Christakis, Max Planck Institute for Software Systems, Max Planck Society;

Singla, Adish
Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society;

Citation

Christakis, M., Eniser, H. F., Hoffmann, J., Singla, A., & Wüstholz, V. (2022). Specifying and Testing k-Safety Properties for Machine-Learning Models. Retrieved from https://arxiv.org/abs/2206.06054.


Cite as: https://hdl.handle.net/21.11116/0000-000A-CFBA-C
Abstract
Machine-learning models are becoming increasingly prevalent in our lives, for
instance, assisting in image-classification or decision-making tasks.
Consequently, the reliability of these models is of critical importance and has
resulted in the development of numerous approaches for validating and verifying
their robustness and fairness. However, beyond such specific properties, it is
challenging to specify, let alone check, general functional-correctness
expectations from models. In this paper, we take inspiration from
specifications used in formal methods, expressing functional-correctness
properties by reasoning about $k$ different executions, so-called $k$-safety
properties. Considering a credit-screening model of a bank, the expected
property that "if a person is denied a loan and their income decreases, they
should still be denied the loan" is a 2-safety property. Here, we show the wide
applicability of $k$-safety properties for machine-learning models and present
the first specification language for expressing them. We also operationalize
the language in a framework for automatically validating such properties using
metamorphic testing. Our experiments show that our framework is effective in
identifying property violations, and that detected bugs could be used to train
better models.
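
Example
To illustrate the 2-safety property from the abstract, the following is a minimal Python sketch of a metamorphic test, not the paper's specification language or framework. The model.predict interface, the DENY label, the income feature index, the income-scaling transformation, and the test set X_test are all assumptions made for this example.

import numpy as np

DENY = 0      # assumed class label for a denied loan application
INCOME = 2    # assumed index of the income feature in the input vector

def violates_2_safety(model, applicant, rng):
    """Return True if decreasing the applicant's income flips a denial into an approval."""
    # The property only constrains applicants that are denied on the first execution.
    if model.predict(applicant.reshape(1, -1))[0] != DENY:
        return False
    # Metamorphic transformation: produce a follow-up input with a lower income.
    follow_up = applicant.copy()
    follow_up[INCOME] *= rng.uniform(0.5, 0.9)
    # Violation if the lower-income follow-up applicant is no longer denied.
    return model.predict(follow_up.reshape(1, -1))[0] != DENY

# Usage over a hypothetical test set X_test of applicant feature vectors:
# rng = np.random.default_rng(0)
# violations = sum(violates_2_safety(model, x, rng) for x in X_test)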