Neural network decoder for topological color codes with circuit level noise
Abstract
A quantum computer needs the assistance of a classical algorithm to detect and identify errors that affect encoded quantum information. At this interface of classical and quantum computing, the technique of machine learning has appeared as a way to tailor such an algorithm to the specific error processes of an experiment — without the need for a priori knowledge of the error model. Here, we apply this technique to topological color codes. We demonstrate that a recurrent neural network with long short-term memory cells can be trained to reduce the error rate of the encoded logical qubit to values much below the error rate of the physical qubits — fitting the expected power law scaling $\epsilon_L \propto \epsilon^{(d+1)/2}$, with $d$ the code distance. The neural network incorporates the information from "flag qubits" to avoid a reduction in the effective code distance caused by the circuit. As a test, we apply the neural network decoder to a density-matrix based simulation of a superconducting quantum computer, demonstrating that the logical qubit has a longer life-time than the constituent physical qubits with near-term experimental parameters.
I Introduction
In fault-tolerant quantum information processing, a topological code stores the logical qubits nonlocally on a lattice of physical qubits, thereby protecting the data from local sources of noise Lid13 ; Ter15 . To ensure that this protection is not spoiled by logical gate operations, these operations should act locally. A gate where the $i$-th qubit in a code block interacts only with the $i$-th qubit of another block is called "transversal" Got09 . Transversal gates are desirable both because they do not spread an error on one qubit to other qubits within a block, and because they can be implemented efficiently by parallel operations.
Two families of two-dimensional (2D) topological codes have been extensively investigated. The first family, derived from Kitaev's toric code Kit03 , is the surface code on a square lattice Bra98 ; Fow12b . It has a favorably high threshold error rate for fault tolerance, but only cnot, $X$, and $Z$ gates can be performed transversally Cam16 . The second family, introduced by Bombin and Martin-Delgado Bom06 ; Bom07 , is defined on a honeycomb lattice, or more generally on a trivalent face-three-colorable graph — hence the name color code.
While the threshold error rate is smaller than for surface codes And11 ; Lan11 , the advantage of color codes is that they allow for the transversal implementation of the full Clifford group of quantum gates (with Hadamard, phase gate, and cnot gate as generators) Bom15 ; Kub15 . This is not yet computationally universal, but can be rendered universal using gate teleportation Got99 and magic state distillation Bra05 . Moreover, color codes are particularly suitable for topological quantum computation with Majorana qubits, since high-fidelity Clifford gates are accessible by braiding Lit17a ; Lit17b .
A drawback of color codes is that quantum error correction is more complicated than for surface codes. The identification of errors in a surface code (the “decoding” problem) can be mapped onto a matching problem in a graph Den02 , for which there exists an efficient solution called the “blossom” algorithm Edm65 . This graph-theoretic approach does not carry over to color codes, motivating the search for a decoder with performance comparable to the blossom decoder Wan10 ; Duc10 ; Sar12 ; Del14 ; Ste14 .
An additional complication of color codes is that the parity checks are prone to “hook” errors, where single-qubit errors on the ancilla qubits propagate to higher weight errors on data qubits, reducing the effective distance of the code. There exist methods due to Shor Sho96 , Steane Ste97 , and Knill Kni05 to mitigate this, but these error correction methods come with much overhead because of the need for additional circuitry. An alternative scheme with reduced overhead uses dedicated ancillas (“flag qubits”) to signal the hook errors Cha171 ; Cha172 ; Cha17 ; Gut18 ; Tan18 .
Here we show that a neural network can be trained to fault-tolerantly decode a color code with high efficiency, using only measurable data as input. No a priori knowledge of the error model is required. This machine learning approach has been previously shown to be successful for the family of surface codes Tor16 ; Var17 ; Bai17 ; Kra17 ; Bre17 , and applications to color codes are now being investigated Dav18 ; Cha18 ; Mas18 . We adapt the recurrent neural network of Ref. Bai17 to decode color codes with distances up to 7, fully incorporating the information from flag qubits. A test on a density matrix-based simulator of a superconducting quantum computer Obr17 shows that the performance of the decoder is close to optimal, and would surpass the quantum memory threshold under realistic experimental conditions.
II Description of the problem
II.1 Color code
The color code belongs to the class of stabilizer codes Got97 , which operate by the following general scheme. We denote by $X_i$, $Y_i$, $Z_i$ the Pauli matrices on a single qubit $i$ and by $\mathcal{P}^n$ the Pauli group on $n$ qubits. A set of $k$ logical qubits is encoded as a $2^k$-dimensional Hilbert space $\mathcal{H}_L$ across $n$ noisy physical qubits (with $2^n$-dimensional Hilbert space $\mathcal{H}$). The logical Hilbert space is stabilized by the repeated measurement of parity checks that generate the stabilizer $\mathcal{S}$, defined as

$\mathcal{S} = \{ S \in \mathcal{B}(\mathcal{H}) : S|\psi\rangle = |\psi\rangle \;\; \forall\, |\psi\rangle \in \mathcal{H}_L \}$, (1)

where $\mathcal{B}(\mathcal{H})$ is the algebra of bounded operators on the physical Hilbert space.
As errors accumulate in the physical hardware, an initial state $|\psi\rangle \in \mathcal{H}_L$ may rotate out of $\mathcal{H}_L$. Measurement of the parity bits discretizes this rotation, either projecting back into $\mathcal{H}_L$, or into an error-detected subspace $\mathcal{H}_{\vec{s}}$. The syndrome $\vec{s}$ is determined by the measurement of the parity checks: $S_i |\psi\rangle = (-1)^{s_i} |\psi\rangle$. It is the job of a classical decoder to interpret the multiple syndrome cycles and determine a correction that maps $\mathcal{H}_{\vec{s}} \rightarrow \mathcal{H}_L$, such that the combined action of error accumulation and correction leaves the system unperturbed.
This job can be split into a computationally easy task of determining a unitary that maps $\mathcal{H}_{\vec{s}} \rightarrow \mathcal{H}_L$ (a so-called 'pure error' Pou08 ), and a computationally difficult task of determining a logical operation within $\mathcal{H}_L$ to undo any unwanted logical errors. The former task (known as 'excitation removal' Mas18 ) can be performed by a 'simple decoder' Var17 . The latter task is reduced, within the stabilizer formalism, to determining at most two parity bits per logical qubit, which is equivalent to determining the logical parity of the qubit upon measurement at the final time Bai17 .
We implement the color code Bom06 ; Bom07 on a hexagonal lattice inside a triangle, see Fig. 1. (This is the 6,6,6 color code of Ref. Lan11 .) One logical qubit is encoded by mapping vertices to data qubits, and tiles to the stabilizers, one $X$-type and one $Z$-type stabilizer per tile. The simultaneous eigenstate of all the stabilizers (the "code space") is twofold degenerate Kub15 , so it can be used to define a logical qubit. The number of data qubits that encodes one logical qubit is 7, 19, or 37 for a code with distance $d = 3$, 5, or 7, respectively. (For any odd integer $d$, a distance-$d$ code can correct $(d-1)/2$ errors.) Note that this number is less than for a surface code with the same $d$.
An $X$ error on a data qubit switches the parity of the surrounding $Z$ stabilizers, and similarly a $Z$ error switches the parity of the surrounding $X$ stabilizers. These parity switches are collected in the binary vector of syndrome increments $\delta\vec{s}(t)$, such that a nonzero entry signals an error on the qubits surrounding the corresponding ancilla. The syndrome increments themselves are sufficient for a classical decoder to infer the errors on the physical data qubits. Parity checks are performed by entangling ancilla qubits at the center of each tile with the data qubits around the border, and then measuring the ancilla qubits (see App. A for the quantum circuit).
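As a small illustration of one common convention for extracting syndrome increments, the following sketch (function name assumed, not from the text) takes the raw ancilla parity readouts of consecutive cycles and returns their mod-2 difference, so that only parity *switches* are flagged:

```python
import numpy as np

def syndrome_increments(parities):
    """Convert raw stabilizer parity readouts into syndrome increments.

    parities: (T, n_anc) binary array, one row of ancilla readouts per cycle.
    Returns a (T, n_anc) binary array whose row t flags the parity switches
    relative to cycle t-1 (the first cycle is compared to the all-zero
    reference of an error-free start).
    """
    p = np.asarray(parities, dtype=np.uint8)
    prev = np.vstack([np.zeros((1, p.shape[1]), dtype=np.uint8), p[:-1]])
    return p ^ prev  # XOR = mod-2 difference

# a single error next to ancillas 0 and 2 flips their parities from cycle 1 on
readouts = np.array([[0, 0, 0],
                     [1, 0, 1],
                     [1, 0, 1]])
print(syndrome_increments(readouts))
```

Only the cycle in which the error first appears produces nonzero increments; the persistent parity flip in later cycles cancels out.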
ii.2 Error model
We consider two types of circuit-level noise models, both of which incorporate flag qubits to signal hook errors. Firstly, a simple Pauli error model allows us to develop and test the codes up to distance $d = 7$. (For larger $d$ the training of the neural network becomes computationally too expensive.) Secondly, the code is applied to a realistic density-matrix error model derived for superconducting qubits.
In the Pauli error model, one error correction cycle of duration $t_{\rm cycle}$ consists of a sequence of steps of duration $t_{\rm step}$, in which a particular qubit is left idle, measured, or acted upon with a single-qubit rotation gate or a two-qubit conditional-phase gate. Before the first cycle we prepare all the qubits in an initial state, and we reset the ancilla qubits after each measurement. We allow for an error to appear at each step of the circuit and during the preparation, including the reset of the ancilla qubits, with probability $p$. For the preparation errors, idle errors, or rotation errors we introduce the possibility of an $X$, $Y$, or $Z$ error, each with probability $p/3$. Upon measurement, we record the wrong result with probability $p$. Finally, after the conditional-phase gate we apply with probability $p$ one of the two-qubit errors $P \otimes I$, $I \otimes P'$, or $P \otimes P'$, with $P, P' \in \{X, Y, Z\}$. We assume that all these error probabilities are equal and that all errors are independent, so that we can identify $p$ with the physical error rate per step.
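The error channels described above can be sketched as a sampling routine. The uniform splitting into 3 single-qubit and 15 two-qubit Pauli errors follows the text; the function names and the use of Python's `random` module are illustrative assumptions:

```python
import random

PAULIS = ["X", "Y", "Z"]
# the 15 nontrivial two-qubit Paulis: P(x)I, I(x)P', and P(x)P'
TWO_QUBIT_PAULIS = [a + b for a in "IXYZ" for b in "IXYZ" if a + b != "II"]

def sample_single_qubit_error(p, rng=random):
    """With total probability p return an X, Y, or Z error (p/3 each),
    as for preparation, idle, and rotation errors; else return None."""
    if rng.random() < p:
        return rng.choice(PAULIS)
    return None

def sample_two_qubit_error(p, rng=random):
    """After a conditional-phase gate, with total probability p return one
    of the 15 nontrivial two-qubit Pauli errors (p/15 each); else None."""
    if rng.random() < p:
        return rng.choice(TWO_QUBIT_PAULIS)
    return None
```

Drawing one sample per circuit step with the same $p$ everywhere reproduces the assumption that $p$ is the physical error rate per step.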
The density matrix simulation uses the quantumsim simulator of Ref. Obr17 . We adopt the experimental parameters from that work, which match the state-of-the-art performance of superconducting transmon qubits. In the density-matrix error model the qubits are not reset between cycles of error correction. Because of this, parity checks are determined by the difference between subsequent cycles of ancilla measurement. This error model cannot be parametrized by a single error rate; instead we compare to the decay rate of a resting, unencoded superconducting qubit.
II.3 Fault tolerance
The objective of quantum error correction is to arrive at a much smaller error rate $\epsilon_L$ of the encoded logical qubit. If error propagation through the syndrome measurement circuit is limited, and a "good" decoder is used, the logical error rate should exhibit the power law scaling Fow12b

$\epsilon_L = A\,\epsilon^{(d+1)/2}$, (2)

with a prefactor $A$ that depends on the distance $d$ of the code but not on the physical error rate $\epsilon$. The so-called "pseudothreshold" Svo05 ; note1 , defined by

$\epsilon_L(\epsilon_{\rm pseudo}) = \epsilon_{\rm pseudo}$, (3)

is the physical error rate below which the logical qubit can store information for a longer time than a single physical qubit.
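Assuming the power law takes the form $\epsilon_L = A\,\epsilon^{(d+1)/2}$, the fixed-point condition $\epsilon_L(\epsilon) = \epsilon$ has a closed-form solution, sketched below (function name is illustrative):

```python
def pseudothreshold(A, d):
    """Physical error rate at which logical and physical error rates
    coincide, assuming the power law eps_L = A * eps**((d+1)/2).
    Solving A * eps**((d+1)/2) = eps gives eps = A**(-2/(d-1)) for d > 1."""
    return A ** (-2.0 / (d - 1))

# consistency check of the fixed point for d = 3:
eps = pseudothreshold(100.0, 3)
print(eps, 100.0 * eps ** 2)  # both approximately 0.01
```

Below this rate, increasing the code distance reduces the logical error rate; above it, encoding does more harm than good.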
II.4 Flag qubits
During the measurement of a weight-$w$ parity check with a single ancilla qubit, an error on the ancilla qubit may propagate to as many as $\lfloor w/2 \rfloor$ errors on data qubits. This reduces the effective distance of the code in Eq. (2). The surface code can be made resilient to such hook errors, but the color code cannot: Hook errors reduce the effective distance of the color code by a factor of two.
To avoid this degradation of the code distance, we follow Refs. Cha171 ; Cha172 ; Cha17 ; Gut18 ; Tan18 by adding a small number of additional ancilla qubits, so-called "flag qubits", to detect hook errors. For our chosen color code with weight-6 parity checks, we require one flag qubit for each ancilla qubit used to make a stabilizer measurement. (This is a much reduced overhead in comparison to alternative approaches Sho96 ; Ste97 ; Kni05 .) Flag and ancilla qubits are entangled during measurement and read out simultaneously. The circuits are given in App. A.
III Neural network decoder
Consider a logical qubit, prepared in a state $|\psi\rangle$, kept for a certain time $T$, and then measured with outcome $m$ in the logical $Z$-basis. Upon measurement, phase information is lost. Hence, the only information needed in addition to $m$ is the parity of bit flips in the measurement basis. (A separate decoder is invoked for each measurement basis.) If the bit flip parity is odd, we correct the error by negating $m$.
The task of decoding amounts to the estimation of the probability that the logical qubit has had an odd number of bit flips. The experimentally accessible data for this estimation consists of measurements of ancilla and flag qubits, contained in the vectors $\delta\vec{s}(t)$ and $\vec{f}(t)$ of syndrome increments and flag measurements, and, at the end of the experiment, the readout of the data qubits. From this data qubit readout a final syndrome increment vector $\delta\vec{s}_{\rm final}$ can be calculated. Depending on the measurement basis, it will only contain the $X$ or the $Z$ stabilizers. Additionally, from the data qubit readout we obtain the true bit flip parity $\bar{p}$ by comparing the measured logical state to the initial logical state that was prepared at the beginning of the experiment.
An efficient decoder must be able to decode an arbitrary and unspecified number of error correction cycles. It is not possible, in practice, to create a training data set which contains all possible sequence lengths, let alone train a neural network decoder efficiently on such a data set. Therefore, the decoder must be a cycle-based algorithm that is translationally invariant in time. To achieve this, we follow Ref. Bai17 and use a recurrent neural network of long short-term memory (LSTM) layers Hoc97 — with one significant modification, which we now describe.

The time-translation invariance of the error propagation holds for the ancilla qubits, but it is broken by the final measurement of the data qubits — since any error in these qubits will not propagate forward in time. To extract the time-translation invariant part of the training data, in Ref. Bai17 two separate networks were trained in parallel, one with and one without the final measurement input. Here, we instead use a single network with two heads, as illustrated in Fig. 2. The upper head sees only the translationally invariant data, while the lower head solves the full decoding problem.
The switch from two parallel networks to a single network with two heads offers several advantages: (1) The number of LSTM layers and the computational cost is cut in half; (2) The network can be trained on a single large error rate, then used for smaller error rates without retraining; (3) The bit flip probability from the upper head provides a so-called Pauli frame decoder Ter15 .
In the training stage the bit flip probabilities $p_1$ and $p_2$ from the upper and lower head are compared with the true bit flip parity $\bar{p}$. By adjusting the weights of the network connections, a cost function is minimized in order to bring $p_1$ and $p_2$ close to $\bar{p}$. We carry out this machine learning procedure using the TensorFlow library tensorflow , see App. B for details of the implementation.
After the training of the neural network has been completed we test the decoder on a fresh data set. Only the lower head is active during the testing stage. If the output probability $p_2 < 1/2$, the parity of bit flip errors is predicted to be even, and otherwise odd. We then compare this to $\bar{p}$ and average over the test data set to obtain the logical fidelity $\mathcal{F}(t)$. Using a two-parameter fit to Obr17

$\mathcal{F}(t) = \tfrac{1}{2} + \tfrac{1}{2}\,(1 - 2\epsilon_L)^{t/t_{\rm step} - t_0}$, (4)

with fit parameters $\epsilon_L$ and $t_0$, we determine the logical error rate $\epsilon_L$ per step of the decoder.
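The two-parameter fit can be sketched as follows, assuming the decay form $\mathcal{F}(n) = \tfrac{1}{2} + \tfrac{1}{2}(1 - 2\epsilon_L)^{n - n_0}$ with $n$ the number of steps; the log-linear fitting route shown here is an illustrative choice, not necessarily the one used in the original analysis:

```python
import numpy as np

def fidelity_model(n, eps_L, n0):
    """Assumed decay form of Eq. (4): F(n) = 1/2 + 1/2*(1-2*eps_L)**(n-n0)."""
    return 0.5 + 0.5 * (1.0 - 2.0 * eps_L) ** (n - n0)

def fit_logical_error_rate(n, F):
    """Recover (eps_L, n0) from a fidelity curve via the linearization
    log(2F - 1) = (n - n0) * log(1 - 2*eps_L)."""
    slope, intercept = np.polyfit(n, np.log(2.0 * np.asarray(F) - 1.0), 1)
    eps_L = 0.5 * (1.0 - np.exp(slope))
    return eps_L, -intercept / slope

# round trip on a synthetic, noiseless decay curve
n = np.arange(1, 101)
eps_fit, n0_fit = fit_logical_error_rate(n, fidelity_model(n, 0.002, 0.5))
print(round(eps_fit, 6), round(n0_fit, 3))  # -> 0.002 0.5
```

On real (noisy) fidelity data a weighted nonlinear least-squares fit of `fidelity_model` would be more robust than the linearization, but the extracted $\epsilon_L$ is the same quantity.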
IV Neural network performance
IV.1 Power law scaling of the logical error rate
Results for the distance-3 color code are shown in Fig. 3 (with similar plots for distance-5 and distance-7 codes in App. C). These results demonstrate that the neural network decoder is able to decode a large number of consecutive error correction cycles. The dashed lines are fits to Eq. (4), which allow us to extract the logical error rate $\epsilon_L$ per step of the decoder for different physical error rates $\epsilon$ per step.
Figure 4 shows that the neural network decoder follows the power law scaling (2) with the exponent fixed by the code distance. This shows that the decoder, once trained at a single error rate, operates equally efficiently when the error rate is varied, and that our flag error correction scheme is indeed fault-tolerant. The corresponding pseudothresholds (3) are listed in Table 1.
distance | pseudothreshold
---|---
3 | 0.0034
5 | 0.0028
7 | 0.0023
IV.2 Implementation in a physical model
To assess the performance of the decoder in a realistic setting, we have applied it to a density matrix-based simulator of an array of superconducting transmon qubits Obr17 . In Fig. 5 we compare the decay of the fidelity of the logical qubit as it results from the neural network decoder with the fidelity extracted from the simulation Obr17 . The latter fidelity determines via Eq. (4) the logical error rate of an optimal decoder. For the distance-3 code we find and per microsecond, resulting in a decoder efficiency Obr17 of . The dashed gray line is the average fidelity (following Eq. (4)) of a single physical qubit at rest, corresponding to an error rate of Obr17 . This demonstrates that, even with realistic experimental parameters, a logical qubit encoded with the color code has a longer life-time than a physical qubit.
V Conclusion
We have presented a machine-learning based approach to quantum error correction for the topological color code. We believe that this approach to fault-tolerant quantum computation can be used efficiently in experiments on near-term quantum devices with relatively high physical error rates (so that the neural network can be trained with relatively small data sets). In support of this, we have presented a density matrix simulation Obr17 of superconducting transmon qubits (Fig. 5), where we obtain a decoder efficiency of .
Independently of our investigation, three recent works have shown how a neural network can be applied to color code decoding. Refs. Dav18 and Mas18 only consider single rounds of error correction, and cannot be extended to a multi-round experiment or circuit-level noise. Ref. Cha18 uses the Steane and Knill error correction schemes when considering color codes, which are also fault-tolerant against circuit-level noise, but have larger physical qubit requirements than flag error correction. None of these works includes a test on a simulation of physical hardware.
Acknowledgements.
We have benefited from discussions with Christopher Chamberland, Andrew Landahl, Daniel Litinski, and Barbara Terhal. This research was supported by the Netherlands Organization for Scientific Research (NWO/OCW) and by an ERC Synergy Grant.

Appendix A Quantum circuits
A.1 Circuits for the Pauli error model
Fig. 6 shows the circuits for the measurements of the $X$ and $Z$ stabilizers in the Pauli error model. To each stabilizer, measured with the aid of an ancilla qubit, we associate a second "flag" ancilla qubit with the task of spotting faults of the first ancilla Cha171 ; Cha172 ; Cha17 ; Gut18 ; Tan18 . This avoids hook errors (errors that propagate from a single ancilla qubit onto two data qubits), which would reduce the distance of the code. After the measurement of the $X$ stabilizers, all the ancillas are reset to $|0\rangle$ and reused for the measurement of the $Z$ stabilizers. Before finally measuring the data qubits, we allow the circuit to run for a number of cycles.
A.2 Measurement processing for the density-matrix error model
For the density matrix simulation, neither ancilla qubits nor flag qubits are reset between cycles, leading to a more involved extraction process of both the syndrome increments $\delta\vec{s}(t)$ and the flag measurements $\vec{f}(t)$, as we now explain.
Let $\vec{m}^{(a)}(t)$ and $\vec{m}^{(f)}(t)$ be the actual ancilla and flag qubit measurements taken in cycle $t$, and $\vec{c}^{(a)}(t)$, $\vec{c}^{(f)}(t)$ be compensation vectors of ancilla and flag measurements that would have been observed had no errors occurred in this cycle. Then,

$\delta\vec{s}(t) = \vec{m}^{(a)}(t) \oplus \vec{c}^{(a)}(t)$, (5)

$\vec{f}(t) = \vec{m}^{(f)}(t) \oplus \vec{c}^{(f)}(t)$. (6)
Calculation of the compensation vectors $\vec{c}^{(a)}(t)$ and $\vec{c}^{(f)}(t)$ requires knowledge of the measured stabilizer values and of the initialization of the ancilla and flag qubits, the compensation being the combination of the effects of the individual non-zero terms in each of these.

Note that a flag qubit being initialized in $|1\rangle$ will cause errors to propagate onto nearby data qubits, but these errors can be predicted and removed prior to decoding with the neural network. In particular, let us concatenate $\vec{m}^{(a)}(t)$, $\vec{m}^{(f)}(t)$, and $\vec{c}^{(f)}(t)$ to form a vector $\vec{v}(t)$. The update of the flag compensation vector may then be written as a matrix multiplication:

$\vec{c}^{(f)}(t+1) = M_f\,\vec{v}(t) \bmod 2$, (7)

where $M_f$ is a sparse, binary matrix. The syndrome compensation vector may be updated in a similar fashion,

$\vec{c}^{(a)}(t+1) = M_a\,\vec{v}(t) \bmod 2$, (8)

where $M_a$ is likewise sparse. Both $M_f$ and $M_a$ may be constructed by modeling the stabilizer measurement circuit in the absence of errors. The sparsity of both matrices reflects the connectivity between data and ancilla qubits; for a topological code, both $M_f$ and $M_a$ are local. The calculation of the syndrome increments via Eq. (5) does not require prior calculation of the flag increments $\vec{f}(t)$.
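The mod-2 linear algebra of Eqs. (5)-(8) can be sketched in a few lines; the matrix and vector contents below are placeholders, only the XOR and mod-2 matrix-product structure is taken from the text:

```python
import numpy as np

def increments(measured, compensation):
    """Eqs. (5)-(6): increments are the XOR (mod-2 difference) of the
    measured bits and the compensation bits."""
    return measured ^ compensation

def mod2_update(M, v):
    """Eqs. (7)-(8): update of a compensation vector as a sparse binary
    matrix acting on the concatenated measurement/compensation vector."""
    return M.dot(v) % 2

# toy example with an assumed 2x3 binary update matrix
M = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)
v = np.array([1, 1, 0], dtype=np.uint8)
print(increments(v, np.array([0, 1, 0], dtype=np.uint8)))  # -> [1 0 0]
print(mod2_update(M, v))  # -> [0 1]
```

In a real implementation the update matrices would be stored in a sparse format, since for a topological code each row couples only a constant number of neighboring qubits.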
Appendix B Details of the neural network decoder
B.1 Architecture
The decoder consists of a double headed network, see Fig. 2, which we implement using the TensorFlow library tensorflow . It maps a list of syndrome increments $\delta\vec{s}(t)$, $t = 1, \ldots, T$, to a pair of probabilities $(p_1, p_2)$. (In what follows we measure time in units of the cycle duration, so that $t$ is an integer cycle index.) The lower head gets as additional input a single final syndrome increment $\delta\vec{s}_{\rm final}$. The cost function that we seek to minimize by varying the weights and biases of the network is the cross-entropy
$C = \sum_{k=1,2} H(p_k, \bar{p}) + \lambda \sum_i w_i^2$ (9)

between these output probabilities and the true final parity of bit flip errors:

$H(p, \bar{p}) = -\bar{p} \ln p - (1 - \bar{p}) \ln(1 - p)$. (10)

The term with $\lambda$ is a regularizer, where $w_i$ are the weights of the evaluation layer.
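A numerical sketch of a two-head cross-entropy cost with L2 weight regularization, under the assumption that the cost takes this standard form (the function names are illustrative):

```python
import numpy as np

def cross_entropy(p, p_true):
    """H(p, p_true) = -p_true*log(p) - (1 - p_true)*log(1 - p)."""
    return -(p_true * np.log(p) + (1.0 - p_true) * np.log(1.0 - p))

def cost(p1, p2, p_true, weights, lam=1e-5):
    """Assumed total cost: cross-entropy of both heads against the true
    bit flip parity p_true (0 or 1), plus an L2 regularizer on the
    evaluation-layer weights with prefactor lam."""
    return (cross_entropy(p1, p_true) + cross_entropy(p2, p_true)
            + lam * np.sum(np.asarray(weights) ** 2))

# uninformative heads (p = 1/2) give cost 2*ln(2) when the weights are zero
print(cost(0.5, 0.5, 1, np.zeros(4)))  # -> about 1.386
```

The cost is minimal when both heads output the true parity with certainty; the regularizer only penalizes large evaluation-layer weights.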
The body of the double headed network is a recurrent neural network, consisting of two LSTM layers Hoc97 ; Zar14 . Each of the LSTM layers has two internal states, representing the long-term memory $\vec{c}(t)$ and the short-term memory $\vec{h}(t)$, whose size depends on the code distance. The first LSTM layer gets the syndrome increments $\delta\vec{s}(t)$ as input, and outputs its internal states. These states are in turn the input to the second LSTM layer.
The heads of the network consist of a single layer of rectified linear units, whose outputs are mapped onto a single probability using a sigmoid activation function. The input of the two heads is the last short-term memory state $\vec{h}(T)$ of the second LSTM layer, subject to a rectified linear activation function. For the lower head we concatenate this state with the final syndrome increment $\delta\vec{s}_{\rm final}$.
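A head of the network, as described, can be sketched in a few lines of numpy; the layer sizes and function names here are illustrative, not taken from the implementation:

```python
import numpy as np

def head(h, W, b, w_out, b_out):
    """One decoder head: a single layer of rectified linear units applied
    to the LSTM state h, followed by a sigmoid that maps the hidden
    activations onto a single bit flip probability."""
    hidden = np.maximum(0.0, W @ h + b)   # rectified linear units
    logit = w_out @ hidden + b_out
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid activation

# with all-zero weights the head is maximally uncertain: p = 1/2
print(head(np.ones(3), np.zeros((4, 3)), np.zeros(4), np.zeros(4), 0.0))
```

For the lower head, `h` would be the concatenation of the LSTM state with the final syndrome increment, so `W` has correspondingly more columns.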
B.2 Training and evaluation
We use three separate datasets for each code distance. The training dataset is used by the optimizer to optimize the trainable variables of the network. It consists of sequences of lengths between and at a large error rate of for distances and , and of sequences for distance . At the end of each sequence, it contains the final syndrome increment and the final parity of bit flip errors . After each training epoch, consisting of to mini-batches of size , we validate the network (using only the lower head) on a validation dataset consisting of sequences of different lengths between and cycles. The error rates in the validation datasets are , , for distances , , respectively, chosen such that they are the largest error rate for which the expected logical fidelity is larger than after cycles (see Fig. 7). If the logical error rate reaches a new minimum on the validation dataset, we store this instance of the network. To keep the computational effort tractable, for the density matrix-based simulation (Fig. 5) we only train on sequences of lengths between and cycles and validate on sequences of lengths between and cycles. For the density matrix-based simulation, all datasets have the same error rate.
We train using the Adam optimizer Kin14 with a learning rate of . To avoid over-fitting and reach a better generalization of the network to unseen data, we employ two additional regularization methods: Dropout and weight regularization. Dropout with a keep probability of is applied to the output of each LSTM layer and to the output of the hidden units of the evaluation layers. Weight regularization, with a prefactor of , is only applied to the weights of the evaluation layers, but not to the biases.
After training is complete we evaluate the decoder on a test dataset consisting of ( for the density matrix-based simulation) sequences of lengths such that the logical fidelity decays to approximately , but no more than cycles. Unlike for the training and validation datasets, for the test dataset we sample a final syndrome increment and the corresponding final parity of bit flip errors after each cycle. We then select an evenly distributed subset of cycles, where is the smallest integer for which the total number of points is less than 50, for evaluation. This is done in order to reduce the needed computational resources. The logical error rate per step is determined by a fit of the fidelity to Eq. (4).
B.3 Pauli frame updater
We operate the neural network as a bit-flip decoder, but we could have alternatively operated it as a Pauli frame updater. We briefly discuss the connection between the two modes of operation.
Generally, a decoder executes a classical algorithm that determines the operator $P$ (the so-called Pauli frame) which transforms the state back into the logical qubit space $\mathcal{H}_L$. Equivalently (with minimal overhead), a decoder may keep track of logical parity bits that determine whether the Pauli frame of a 'simple decoder' Var17 commutes with a set of chosen logical operators for each logical qubit.
The second approach of bit-flip decoding has two advantages over Pauli frame updates: Firstly, it removes the gauge degree of freedom of the Pauli frame ($PS$ is an equivalent Pauli frame for any stabilizer $S$). Secondly, the logical parity can be measured in an experiment, where no 'true' Pauli frame exists (due to the gauge degree of freedom).
Note that in the scheme where flag qubits are used without reset, the errors from flag qubits initialized in $|1\rangle$ may be removed by the simple decoder, without any additional input required by the neural network.
Appendix C Results for distance-5 and distance-7 codes
References
- (1) Quantum Error Correction, edited by D. A. Lidar and T. A. Brun (Cambridge University Press, 2013).
- (2) B. M. Terhal, Quantum error correction for quantum memories, Rev. Mod. Phys. 87, 307 (2015).
- (3) D. Gottesman, An introduction to quantum error correction and fault-tolerant quantum computation, Proc. Sympos. Appl. Math. 68, 13 (2010).
- (4) A. Yu. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303, 2 (2003).
- (5) S. B. Bravyi and A. Yu. Kitaev, Quantum codes on a lattice with boundary, arXiv:quant-ph/9811052.
- (6) A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, Surface codes: Towards practical large-scale quantum computation, Phys. Rev. A 86, 032324 (2012).
- (7) E. T. Campbell, B. M. Terhal, and C. Vuillot, The steep road towards robust and universal quantum computation, arXiv:1612.07330.
- (8) H. Bombin and M. A. Martin-Delgado, Topological quantum distillation, Phys. Rev. Lett. 97, 180501 (2006).
- (9) H. Bombin and M. A. Martin-Delgado, Topological computation without braiding, Phys. Rev. Lett. 98, 160502 (2007).
- (10) R. S. Andrist, H. G. Katzgraber, H. Bombin, and M. A. Martin-Delgado, Tricolored lattice gauge theory with randomness: Fault tolerance in topological color codes, New J. Phys. 13, 083006 (2011).
- (11) A. J. Landahl, J. T. Anderson, and P. R. Rice, Fault-tolerant quantum computing with color codes, arXiv:1108.5738.
- (12) H. Bombin, Gauge color codes: optimal transversal gates and gauge fixing in topological stabilizer codes, New J. Phys. 17, 083002 (2015).
- (13) A. Kubica and M. E. Beverland, Universal transversal gates with color codes: A simplified approach, Phys. Rev. A 91, 032330 (2015).
- (14) D. Gottesman and I. L. Chuang, Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations, Nature 402, 390 (1999).
- (15) S. Bravyi and A. Kitaev, Universal quantum computation with ideal Clifford gates and noisy ancillas, Phys. Rev. A 71, 022316 (2005).
- (16) D. Litinski, M. S. Kesselring, J. Eisert, and F. von Oppen, Combining topological hardware and topological software: color code quantum computing with topological superconductor networks, Phys. Rev. X 7, 031048 (2017).
- (17) D. Litinski and F. von Oppen, Braiding by Majorana tracking and long-range CNOT gates with color codes, Phys. Rev. B 96, 205413 (2017).
- (18) E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Topological quantum memory, J. Math. Phys. 43, 4452 (2002).
- (19) J. Edmonds, Paths, trees, and flowers, Canad. J. Math. 17, 449 (1965).
- (20) D. S. Wang, A. G. Fowler, C. D. Hill, and L. C. L. Hollenberg, Graphical algorithms and threshold error rates for the 2d colour code, Quantum. Inf. Comput. 10, 780 (2010).
- (21) G. Duclos-Cianci and D. Poulin, Fast decoders for topological quantum codes, Phys. Rev. Lett. 104, 050504 (2010).
- (22) P. Sarvepalli and R. Raussendorf, Efficient decoding of topological color codes, Phys. Rev. A 85, 022317 (2012).
- (23) N. Delfosse, Decoding color codes by projection onto surface codes, Phys. Rev. A 89, 012317 (2014).
- (24) A. M. Stephens, Efficient fault-tolerant decoding of topological color codes, arXiv:1402.3037.
- (25) P. W. Shor, Fault-tolerant quantum computation, Proceedings of 37th Conference on Foundations of Computer Science, pp. 56–65 (1996).
- (26) A. M. Steane, Active stabilization, quantum computation, and quantum state synthesis, Phys. Rev. Lett. 78, 2252 (1997).
- (27) E. Knill, Scalable quantum computing in the presence of large detected-error rates, Phys. Rev. A 71, 042322 (2005).
- (28) R. Chao and B. W. Reichardt, Quantum error correction with only two extra qubits, arXiv:1705.02329.
- (29) R. Chao and B. W. Reichardt, Fault-tolerant quantum computation with few qubits, arXiv:1705.05365.
- (30) C. Chamberland and M. E. Beverland, Flag fault-tolerant error correction with arbitrary distance codes, Quantum 2, 53 (2018).
- (31) M. Gutiérrez, M. Müller and A. Bermudez, Transversality and lattice surgery: exploring realistic routes towards coupled logical qubits with trapped-ion quantum processors, arXiv:1801.07035.
- (32) T. Tansuwannont, C. Chamberland, and D. Leung, Flag fault-tolerant error correction for cyclic CSS codes, arXiv:1803.09758.
- (33) G. Torlai and R. G. Melko, Neural decoder for topological codes, Phys. Rev. Lett. 119, 030501 (2017).
- (34) S. Varsamopoulos, B. Criger, and K. Bertels, Decoding small surface codes with feedforward neural networks, Quantum Sci. Technol. 3, 015004 (2018).
- (35) P. Baireuther, T. E. O’Brien, B. Tarasinski, and C. W. J. Beenakker, Machine-learning-assisted correction of correlated qubit errors in a topological code, Quantum 2, 48 (2018).
- (36) S. Krastanov and L. Jiang, Deep neural network probabilistic decoder for stabilizer codes, Sci. Rep. 7, 11003 (2017).
- (37) N. P. Breuckmann and X. Ni, Scalable neural network decoders for higher dimensional quantum codes, arXiv:1710.09489.
- (38) A. Davaasuren, Y. Suzuki, K. Fujii, and M. Koashi, General framework for constructing fast and near-optimal machine-learning-based decoder of the topological stabilizer codes, arXiv:1801.04377.
- (39) C. Chamberland and P. Ronagh, Deep neural decoders for near term fault-tolerant experiments, arXiv:1802.06441.
- (40) D. Poulin and Y. Chung, On the iterative decoding of sparse quantum codes, Quantum Inf. Comput. 8, 987 (2008).
- (41) N. Maskara, A. Kubica, and T. Jochym-O’Connor, Advantages of versatile neural-network decoding for topological codes, arXiv:1802.08680.
- (42) T. E. O’Brien, B. Tarasinski and L. DiCarlo, Density-matrix simulation of small surface codes under current and projected experimental noise, npj Quantum Information 3, 39 (2017).
- (43) D. Gottesman, Stabilizer Codes and Quantum Error Correction (Doctoral dissertation, California Institute of Technology, 1997).
- (44) K. M. Svore, B. M. Terhal, and D. P. DiVincenzo, Local fault-tolerant quantum computation, Phys. Rev. A 72, 022317 (2005).
- (45) The quantity defined in Eq. (3) is called a pseudo-threshold because it is $d$-dependent. In the limit $d \to \infty$ it converges to the true threshold.
- (46) S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Computation 9, 1735 (1997).
- (47) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, arXiv:1603.04467.
- (48) W. Zaremba, I. Sutskever, and O. Vinyals, Recurrent neural network regularization, arXiv:1409.2329.
- (49) D. P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, arXiv:1412.6980.