
Record


Released

Research Paper

Optimizing ZX-Diagrams with Deep Reinforcement Learning

MPG Authors

Nägele, Maximilian
Marquardt Division, Max Planck Institute for the Science of Light, Max Planck Society;


Marquardt, Florian
Marquardt Division, Max Planck Institute for the Science of Light, Max Planck Society;

External Resources
No external resources have been provided
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

2311.18588.pdf
(Any fulltext), 2MB

Supplementary material (freely accessible)

Bildschirmfoto 2023-12-18 um 10.23.31.png
(Supplementary material), 18KB

Citation

Nägele, M., & Marquardt, F. (2023). Optimizing ZX-Diagrams with Deep Reinforcement Learning. arXiv, 2311.18588.


Citation link: https://hdl.handle.net/21.11116/0000-000E-0E59-0
Abstract
ZX-diagrams are a powerful graphical language for describing quantum processes, with applications in fundamental quantum mechanics, quantum circuit optimization, tensor network simulation, and many other areas. The utility of ZX-diagrams relies on a set of local transformation rules that can be applied to them without changing the underlying quantum process they describe. These rules can be exploited to optimize the structure of ZX-diagrams for a range of applications, but finding an optimal sequence of transformation rules is generally an open problem. In this work, we bring together ZX-diagrams with reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem, and show that a trained reinforcement learning agent can significantly outperform other optimization techniques such as a greedy strategy or simulated annealing. Encoding the agent's policy with graph neural networks enables generalization to diagrams much larger than those seen during training.
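
The preprint details the actual network architecture and the set of allowed transformation rules. As a purely illustrative sketch of the core idea, a graph neural network can read a ZX-diagram as a graph and output per-node logits over local rewrite actions; all names and dimensions below (PolicyGNN, node_feat_dim, n_actions, the two GCNConv layers) are hypothetical choices for illustration, not the authors' implementation.

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class PolicyGNN(nn.Module):
    """Illustrative GNN policy: per-node logits over local ZX rewrite actions."""

    def __init__(self, node_feat_dim, hidden_dim, n_actions):
        super().__init__()
        self.conv1 = GCNConv(node_feat_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, x, edge_index):
        # x: [num_nodes, node_feat_dim] node features (e.g. spider type, phase)
        # edge_index: [2, num_edges] connectivity of the ZX-diagram
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h)  # [num_nodes, n_actions] logits

# Toy diagram: 4 nodes with 3-dimensional features and 3 edges (hypothetical encoding).
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]], dtype=torch.long)
logits = PolicyGNN(node_feat_dim=3, hidden_dim=32, n_actions=5)(x, edge_index)
# Policy distribution over (node, action) pairs; the agent samples one local rewrite.
probs = torch.softmax(logits.flatten(), dim=0)

Because the logits are computed per node with shared weights, the same trained network can be applied to diagrams of arbitrary size, which is the property behind the generalization claim in the abstract.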