Paper (Released)

Optimizing ZX-Diagrams with Deep Reinforcement Learning

MPS-Authors

Nägele, Maximilian
Marquardt Division, Max Planck Institute for the Science of Light, Max Planck Society

Marquardt, Florian
Marquardt Division, Max Planck Institute for the Science of Light, Max Planck Society

Fulltext (public)
2311.18588.pdf (Any fulltext), 2MB

Supplementary Material (public)
Bildschirmfoto 2023-12-18 um 10.23.31.png (Supplementary material), 18KB

Citation

Nägele, M., & Marquardt, F. (2023). Optimizing ZX-Diagrams with Deep Reinforcement Learning. arXiv, 2311.18588.


Cite as: https://hdl.handle.net/21.11116/0000-000E-0E59-0
Abstract
ZX-diagrams are a powerful graphical language for describing quantum processes, with applications in fundamental quantum mechanics, quantum circuit optimization, tensor network simulation, and many other areas. The utility of ZX-diagrams relies on a set of local transformation rules that can be applied to them without changing the underlying quantum process they describe. These rules can be exploited to optimize the structure of ZX-diagrams for a range of applications. However, finding an optimal sequence of transformation rules is generally an open problem. In this work, we bring together ZX-diagrams and reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem, and show that a trained reinforcement learning agent can significantly outperform other optimization techniques such as a greedy strategy or simulated annealing. The use of graph neural networks to encode the agent's policy enables generalization to diagrams much larger than those seen during training.
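
Example: the local rewrites the abstract refers to can be explored with the open-source PyZX library. The sketch below is purely illustrative and is not the code accompanying this paper; it assumes PyZX is installed (pip install pyzx) and demonstrates a fixed, hand-crafted rewrite strategy (spider fusion followed by PyZX's full_reduce), i.e. the kind of baseline the learned agent is compared against, not the reinforcement learning agent or graph neural network themselves.

# Illustrative sketch using PyZX; an assumption for this record, not the authors' code.
import pyzx as zx

# Build a small random circuit and convert it to a ZX-diagram (graph).
c = zx.generate.CNOT_HAD_PHASE_circuit(qubits=3, depth=20)
g = c.to_graph()
print("vertices before:", g.num_vertices())

# Apply one family of local rules (spider fusion) until no match remains;
# each application leaves the underlying quantum process unchanged.
zx.simplify.spider_simp(g, quiet=True)
print("after spider fusion:", g.num_vertices())

# A fixed rewrite strategy combining several rule families.
zx.simplify.full_reduce(g, quiet=True)
print("after full_reduce:", g.num_vertices())

# Sanity check: the simplified diagram implements the same linear map
# (up to a global scalar) as the original circuit.
assert zx.compare_tensors(c, g)

In the paper's setting, the choice and ordering of such rule applications is made by a trained agent rather than a fixed pipeline like full_reduce.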