Open Access

Polynomial Complexity of Quantum Sample Tomography

26 May 2025

Figure 1.

The diagram illustrates the RNN architecture for sampling; FNN denotes a feedforward neural network. The output's 0-1 distribution allows a specific state to be sampled, which is then passed to the next step as a measurement result.
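The autoregressive sampling loop described in the caption can be sketched as follows. This is a minimal illustration with an untrained toy RNN cell and random weights; the hidden size, weight shapes, and the `sample_bitstring` helper are assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bitstring(n_qubits, hidden_dim=16):
    """Autoregressively sample one measurement outcome, qubit by qubit.

    A toy RNN cell updates a hidden state; an FNN head maps the state to
    a Bernoulli probability for the next qubit's 0/1 outcome, and the
    sampled bit is fed back as the input to the next step.
    """
    # Hypothetical, untrained weights -- stand-ins for a learned model.
    W_in = rng.normal(size=(hidden_dim, 1))
    W_h = rng.normal(size=(hidden_dim, hidden_dim)) / np.sqrt(hidden_dim)
    w_out = rng.normal(size=hidden_dim)

    h = np.zeros(hidden_dim)
    x = 0.0                                     # conventional start token
    bits = []
    for _ in range(n_qubits):
        h = np.tanh(W_in @ np.array([x]) + W_h @ h)
        p1 = 1.0 / (1.0 + np.exp(-w_out @ h))   # FNN head -> P(bit = 1)
        bit = int(rng.random() < p1)            # sample the 0-1 distribution
        bits.append(bit)
        x = float(bit)                          # feed outcome to next step
    return bits
```

In a trained model the weights would be fit so that the sampled bitstrings reproduce the measurement statistics of the target state.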

Figure 2.

Prediction fidelity for different numbers of qubits. The fidelity is computed by averaging over 400 random samples.
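The averaging procedure in Figures 2 and 6 can be sketched as below. The noisy "prediction" here is a placeholder standing in for the network's reconstruction; the `average_fidelity` helper and the noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two normalized pure states."""
    return abs(np.vdot(psi, phi)) ** 2

def average_fidelity(n_qubits, n_samples=400):
    """Average fidelity over randomly drawn target states.

    Each target is a Haar-like random pure state; the 'prediction' is a
    slightly perturbed copy, standing in for a model's reconstruction.
    """
    dim = 2 ** n_qubits
    total = 0.0
    for _ in range(n_samples):
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        psi /= np.linalg.norm(psi)
        # Placeholder prediction: the target plus small complex noise.
        phi = psi + 0.01 * (rng.normal(size=dim) + 1j * rng.normal(size=dim))
        phi /= np.linalg.norm(phi)
        total += fidelity(psi, phi)
    return total / n_samples
```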

Figure 3.

Performance of the neural network on TFIM states: (a) a 6-qubit system and (b) a 12-qubit system. Both panels show the accuracy of the network's predictions on randomly sampled unitary matrices.

Figure 4.

Average prediction accuracy as the number of samples increases. In general, the average prediction accuracy improves as the number of samples grows.

Figure 5.

Relation between qubit number and sample size. The overall trend suggests a power-law relationship between the two; the dashed lines are the fitted curves for the corresponding data.
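A power-law fit of the kind shown by the dashed lines is usually obtained by linear regression in log-log space, where a power law becomes a straight line whose slope is the exponent. The data and the `fit_power_law` helper below are hypothetical; the paper's actual fitted values are not reproduced here.

```python
import numpy as np

def fit_power_law(qubits, samples):
    """Fit samples ~ a * qubits**b by linear regression in log-log space.

    Taking logs turns the power law into log(samples) = b*log(qubits) +
    log(a), so an ordinary least-squares line gives exponent b (slope)
    and prefactor a (exp of the intercept).
    """
    logn = np.log(np.asarray(qubits, dtype=float))
    logs = np.log(np.asarray(samples, dtype=float))
    b, loga = np.polyfit(logn, logs, 1)   # slope, intercept
    return np.exp(loga), b

# Hypothetical data following N_s = 100 * n**2 exactly.
a, b = fit_power_law([2, 4, 6, 8, 12], [400, 1600, 3600, 6400, 14400])
```

Because the sample data lie exactly on a power law, the fit recovers the prefactor 100 and the exponent 2 up to numerical precision.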

Figure 6.

Performance of different neural network architectures. The fidelity is computed by averaging over 400 random samples. Our LSTM model performs well, whereas the attention-based neural networks (ANN (1) and ANN (2)) are unstable.

Figure 7.

Performance of the neural network on different quantum states. Each panel displays the accuracy of the network's predictions on randomly sampled unitary matrices. The first row shows 6-qubit systems and the second row 12-qubit systems; from left to right, the columns correspond to a cat state, a random state, and a W state.
Language:
English
Publication frequency:
1 issue per year
Journal subjects:
Physics, Quantum Physics