

Paper

Learning Interpretable Temporal Properties from Positive Examples Only

MPS-Authors
Roy, Rajarshi (/persons/resource/persons260725)
Group R. Majumdar, Max Planck Institute for Software Systems, Max Planck Society

Neider, Daniel (/persons/resource/persons215577)
Group R. Majumdar, Max Planck Institute for Software Systems, Max Planck Society

Fulltext (public)

arXiv:2209.02650.pdf (Preprint), 676 KB

Citation

Roy, R., Gaglione, J.-R., Baharisangari, N., Neider, D., Xu, Z., & Topcu, U. (2022). Learning Interpretable Temporal Properties from Positive Examples Only. Retrieved from https://arxiv.org/abs/2209.02650.


Cite as: https://hdl.handle.net/21.11116/0000-000B-4703-2
Abstract
We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. To this end, and in line with recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas. In contrast to most existing works on learning DFAs and LTL formulas, we rely only on positive examples. Our motivation is that negative examples are generally difficult to observe, in particular from black-box systems. To learn meaningful models from positive examples alone, we design algorithms that rely on the conciseness and the language minimality of models as regularizers. Our algorithms adopt two approaches: a symbolic one and a counterexample-guided one. While the symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, the counterexample-guided one relies on generating suitable negative examples to prune the search. Both approaches provide us with effective algorithms with theoretical guarantees on the learned models. To assess the effectiveness of our algorithms, we evaluate all of them on synthetic data.
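
The counterexample-guided idea described in the abstract can be made concrete with a small, self-contained Python sketch. The code below is an illustration only, not the authors' implementation: it assumes a binary alphabet, a fixed number of DFA states, brute-force candidate enumeration, and a bounded-length check for language containment, and all function names (accepts, enumerate_dfas, difference_witness, learn_language_minimal_dfa) are made up for this sketch. It enumerates complete DFAs that accept every positive word and then uses generated negative examples, i.e. words accepted by the current candidate but rejected by a candidate with a no-larger language, to prune the search until a language-minimal consistent candidate remains.

from itertools import product

ALPHABET = (0, 1)  # assumed binary alphabet (illustration-only choice)


def accepts(dfa, word):
    """Run a complete DFA (transition table, accepting-state set) on a word."""
    trans, accepting = dfa
    state = 0  # state 0 is the fixed initial state
    for symbol in word:
        state = trans[(state, symbol)]
    return state in accepting


def enumerate_dfas(n_states):
    """Brute-force every complete DFA with n_states states over ALPHABET."""
    keys = [(q, a) for q in range(n_states) for a in ALPHABET]
    for targets in product(range(n_states), repeat=len(keys)):
        trans = dict(zip(keys, targets))
        for bits in product((False, True), repeat=n_states):
            accepting = {q for q, b in enumerate(bits) if b}
            yield (trans, accepting)


def difference_witness(dfa_a, dfa_b, max_len=6):
    """Return a short word accepted by dfa_a but rejected by dfa_b, if any."""
    for length in range(max_len + 1):
        for word in product(ALPHABET, repeat=length):
            if accepts(dfa_a, word) and not accepts(dfa_b, word):
                return word
    return None


def learn_language_minimal_dfa(positives, n_states=2):
    """Counterexample-guided search: among all n_states-state DFAs accepting
    every positive word, repeatedly generate a negative example that prunes a
    candidate whose language is (up to the bounded check) strictly larger than
    that of another consistent candidate."""
    negatives = set()
    candidates = [d for d in enumerate_dfas(n_states)
                  if all(accepts(d, w) for w in positives)]
    while candidates:
        current = candidates[0]
        witness = None
        for other in candidates[1:]:
            gap = difference_witness(current, other)
            if gap is not None and difference_witness(other, current) is None:
                # `other` accepts no checked word that `current` rejects, yet
                # rejects `gap`; so `gap` serves as a generated negative example.
                witness = gap
                break
        if witness is None:
            return current  # no consistent candidate with a smaller language found
        negatives.add(witness)
        candidates = [d for d in candidates
                      if not any(accepts(d, w) for w in negatives)]
    return None


if __name__ == "__main__":
    # Hypothetical positive words, e.g. sampled from the language "ends with 1".
    positives = [(1,), (0, 1), (1, 1), (0, 0, 1)]
    print(learn_language_minimal_dfa(positives, n_states=2))

Running the sketch on a handful of positive words prints the transition table and accepting states of a 2-state DFA whose language is minimal, up to the bounded containment check, among the enumerated candidates consistent with those words; each generated negative example removes at least the current candidate, so the loop terminates.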