Optimizing ZX-Diagrams with Deep Reinforcement Learning

Nägele, M., & Marquardt, F. (2023). Optimizing ZX-Diagrams with Deep Reinforcement Learning. arXiv, 2311.18588.

Files

2311.18588.pdf (Any fulltext), 2MB
Name: 2311.18588.pdf
Description: File downloaded from arXiv at 2023-12-18 10:20
OA-Status: Not specified
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Copyright Date: -
Copyright Info: -

Bildschirmfoto 2023-12-18 um 10.23.31.png (Supplementary material), 18KB
Name: Bildschirmfoto 2023-12-18 um 10.23.31.png
Description: -
OA-Status: Not specified
Visibility: Public
MIME-Type / Checksum: image/png / [MD5]
Copyright Date: -
Copyright Info: -
License: -

Creators

Nägele, Maximilian (1), Author
Marquardt, Florian (1), Author
Affiliations:
(1) Marquardt Division, Max Planck Institute for the Science of Light, Max Planck Society, ou_2421700

Content

Free keywords: Quantum Physics (quant-ph); Computer Science, Learning (cs.LG)
Abstract: ZX-diagrams are a powerful graphical language for the description of quantum processes, with applications in fundamental quantum mechanics, quantum circuit optimization, tensor network simulation, and many more. The utility of ZX-diagrams relies on a set of local transformation rules that can be applied to them without changing the underlying quantum process they describe. These rules can be exploited to optimize the structure of ZX-diagrams for a range of applications. However, finding an optimal sequence of transformation rules is generally an open problem. In this work, we bring together ZX-diagrams and reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem, and show that a trained reinforcement learning agent can significantly outperform other optimization techniques such as a greedy strategy or simulated annealing. Using graph neural networks to encode the agent's policy enables generalization to diagrams much larger than those seen during training.
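
Note: For readers unfamiliar with rule-based ZX-diagram simplification, the sketch below uses PyZX, an open-source ZX-calculus library unrelated to the paper's own code, to show what a fixed, non-learned simplification strategy looks like; this is the kind of baseline the abstract contrasts with the trained agent. The circuit size and depth are arbitrary illustrative choices.

    # Illustrative sketch only, not the authors' method: fixed rule-based
    # simplification of a ZX-diagram with the open-source PyZX library.
    import pyzx as zx

    # Generate a random Clifford+T circuit (4 qubits, depth 40; arbitrary
    # choices) and represent it as a ZX-diagram.
    graph = zx.generate.cliffordT(4, 40)
    print("spiders before:", graph.num_vertices())

    # full_reduce repeatedly applies local rewrite rules (spider fusion,
    # local complementation, pivoting, ...) in a hard-coded order until no
    # rule applies. The paper's agent instead learns which rule to apply
    # where, which is how it can beat such fixed strategies.
    zx.simplify.full_reduce(graph)
    print("spiders after:", graph.num_vertices())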

Details

Language(s): -
Dates: 2023-11-30
Publication Status: Published online
Pages: 12 pages, 7 figures
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2311.18588
Degree: -


Source 1

Title: arXiv
Source Genre: Commentary
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: 2311.18588
Start / End Page: -
Identifier: -