
Polynomial Complexity of Quantum Sample Tomography

May 26, 2025


Introduction

Quantum computing represents a revolutionary computational paradigm by offering substantial advantages in various applications, including quantum system simulation [1], linear equations [2], and combinatorial optimization [3]. Although fully functional quantum computers remain unrealized, their potential impact on computational capabilities is profound. One of the central challenges in quantum computing is the reconstruction of quantum states. Specifically, a system with n qubits requires approximately $4^n$ measurements for complete state reconstruction, resulting in exponential computational costs. For example, reconstructing a 10-qubit system requires 8,192 measurements across 59,049 distinct quantum circuits, taking roughly 130 hours on IBM quantum processors [4]. Consequently, developing efficient methods for quantum state reconstruction is crucial for practical quantum computing applications.

Several approaches have been developed to address the challenges of quantum state reconstruction. These include maximum likelihood estimation [5], projected gradient descent [6], and compressed sensing [7]. For instance, compressed sensing reduces the computational complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d \log d)$, where $d = 2^n$ is the dimension of the density matrix. Some works also address quantum state tomography for specific cases, such as matrix product density operators [8] and low-rank density matrices [9]. While these methods have demonstrated promising results, they primarily focus on full density matrix reconstruction and fail to achieve exponential acceleration. Furthermore, their reliance on iterative procedures complicates both convergence analysis and complexity estimation.

Recently, neural network-based approaches have emerged as promising tools for quantum state reconstruction. Beginning with the application of restricted Boltzmann machines (RBM) [10], this field has expanded to incorporate deep learning methods [11,12], adaptive measurement base selection techniques [13], and quantum machine learning methods [14]. These approaches compress quantum states into statistical learning models, enabling information extraction without exhaustive measurements. A key advantage of these methods is their ability to infer measurement outcomes from partial data, eliminating the need for complete state reconstruction. For example, when computing the trace of the product between the density matrix ρ of an n-qubit quantum state and a given matrix M, full density matrix reconstruction can be avoided if M can be decomposed as $M = \sum_{i=1}^{k} \alpha_i M_i$, where k is significantly smaller than $\mathcal{O}(4^n)$ and each $\mathrm{tr}(\rho M_i)$ is computable in polynomial time.

Despite these algorithmic advances, theoretical justifications for such approaches remain scarce. An important theoretical foundation for neural network applications in quantum state reconstruction was established by Aaronson [15], who introduced the concept of quantum shadow tomography. While his work demonstrated that two-outcome measurements could be predicted using polynomially many samples, it left open the question of whether distributional measurement outcomes, similar to those in classical shadow tomography [16,17], can be predicted with polynomial complexity. In this paper, we extend this theoretical framework by employing local Rademacher complexity analysis to prove the effectiveness of quantum sample tomography for measurements with distributional outcomes. Relying on the theorem from Bartlett et al. [18], we demonstrate that the required number of measurements scales polynomially with the qubit number, as shown in Theorem 1. By incorporating random projection techniques, we show that this relationship holds even for high-dimensional quantum systems. To validate our findings, we conduct numerical experiments using RNNs. We configure the sample size and the number of RNN parameters to scale polynomially with the qubit number and introduce a penalty term in the loss function to satisfy the conditions of our theorem. Our experiments on the ground state of the transverse field Ising model (TFIM) show agreement between theoretical predictions and numerical results. Furthermore, we compare our RNN approach against other models and show its effectiveness on various quantum states, including cat states, random states, and W states.

The remainder of this paper is structured as follows. In Section 2, we introduce the fundamentals of quantum sample tomography and provide the main result. Sections 3 and 4 address the theoretical challenges using our new framework and give a rigorous proof of the main theorem. In Sections 5 and 6, we validate the theoretical findings through several quantum models, with particular emphasis on the performance of the proposed neural network. Section 7 concludes the paper.

Preliminary

We first review the fundamental concepts in quantum computation and give some necessary notations.

A quantum system is mathematically represented by a density matrix ρ, which is characterized by three essential properties: Hermiticity ($\rho = \rho^{\dagger}$), positive semidefiniteness ($\rho \succeq 0$), and unit trace ($\mathrm{tr}\,\rho = 1$). The matrix elements may take complex values, reflecting the quantum mechanical nature of the system. Under physical operations implemented through quantum circuits, the system undergoes unitary transformations: when a unitary matrix U is applied, the density matrix transforms as $\rho \mapsto U\rho U^{\dagger}$. Modern quantum computers, despite their varied physical implementations, fundamentally operate by applying sequences of unitary transformations. According to the Solovay-Kitaev theorem [19], arbitrary unitary operations can be efficiently approximated to any desired precision using a finite universal set of elementary quantum gates.

A fundamental challenge in quantum state characterization is the reconstruction of an unknown quantum state ρ. The standard approach to this problem, known as quantum state tomography, relies on measurements using Pauli operators. The Pauli matrices are defined as follows: $X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$

The Pauli matrices, together with the identity matrix I, constitute a complete basis for two-dimensional Hermitian matrices. In quantum mechanical measurements, the expectation value of a Pauli operator M with respect to a quantum state ρ is given by tr(ρM).

The measurement principles extend naturally to multi-qubit systems, where measurements can be performed on individual qubits. For n-qubit systems, the density matrix can be expressed in terms of Pauli operator measurements through the following decomposition: $\rho = \sum_{\{v_1,\dots,v_n\}} \mathrm{tr}\!\left(\rho\, \sigma_1^{v_1}\otimes\sigma_2^{v_2}\otimes\cdots\otimes\sigma_n^{v_n}\right) \frac{\sigma_1^{v_1}\otimes\sigma_2^{v_2}\otimes\cdots\otimes\sigma_n^{v_n}}{2^n},$ where the sum runs over all $4^n$ choices of indices and $\sigma_i^{v_i}$ denotes the Pauli operator on the i-th qubit with $v_i \in \{1, 2, 3, 4\}$, corresponding to $\{\sigma_i^1 = I, \sigma_i^2 = X, \sigma_i^3 = Y, \sigma_i^4 = Z\}$. Complete reconstruction of the quantum state requires measuring all terms in this decomposition, which takes at least $\mathcal{O}(3^n)$ distinct measurement settings [20]. This exponential scaling with respect to qubit number presents a fundamental challenge for quantum state tomography of large-scale quantum systems.
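As a concrete illustration of this decomposition, the following NumPy sketch enumerates all $4^n$ Pauli coefficients of a small density matrix and rebuilds the state from them; the function names (`pauli_decomposition`, `reconstruct`) are our own illustrative choices and not part of any reference implementation.

```python
import itertools
import numpy as np

# Single-qubit Pauli basis: I, X, Y, Z.
PAULIS = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def pauli_decomposition(rho: np.ndarray, n: int) -> dict:
    """Return the 4^n coefficients tr(rho P) for every n-fold Pauli tensor P."""
    coeffs = {}
    for labels in itertools.product(range(4), repeat=n):
        P = np.array([[1.0 + 0j]])
        for v in labels:
            P = np.kron(P, PAULIS[v])
        coeffs[labels] = np.trace(rho @ P).real
    return coeffs

def reconstruct(coeffs: dict, n: int) -> np.ndarray:
    """Rebuild rho = sum_P tr(rho P) P / 2^n from its Pauli coefficients."""
    rho = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for labels, c in coeffs.items():
        P = np.array([[1.0 + 0j]])
        for v in labels:
            P = np.kron(P, PAULIS[v])
        rho += c * P / 2 ** n
    return rho

if __name__ == "__main__":
    n = 2
    # A random 2-qubit density matrix.
    A = np.random.randn(2 ** n, 2 ** n) + 1j * np.random.randn(2 ** n, 2 ** n)
    rho = A @ A.conj().T
    rho /= np.trace(rho)
    coeffs = pauli_decomposition(rho, n)
    print("number of coefficients:", len(coeffs))                       # 4^n = 16
    print("max reconstruction error:",
          np.max(np.abs(reconstruct(coeffs, n) - rho)))                 # ~1e-16
```

Even for this toy case the number of coefficients grows as $4^n$, which is exactly the scaling the rest of the paper seeks to avoid.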

To address the exponential scaling challenge, Aaronson [15] introduced the concept of shadow tomography. The problem can be formally stated as follows:

Problem 1

Given an unknown $D = 2^n$-dimensional quantum mixed state ρ and a distribution 𝒟 over two-outcome measurements, we sample m measurements $\{E_1, \dots, E_m\}$ independently from 𝒟 and obtain their measurement results $b_i = \mathrm{tr}(\rho E_i)$ for i = 1, …, m. For any chosen failure probability δ > 0, accuracy γ > 0, and error tolerance ε > 0, what is the minimum number of measurements m (depending on D, ε, and δ), such that any hypothesis state σ minimizing the quadratic functional $\sum_{i=1}^{m}\left(\mathrm{tr}(\sigma E_i) - b_i\right)^2$ satisfies $\Pr_{E\sim\mathcal{D}}\left(\left|\mathrm{tr}(\rho E) - \mathrm{tr}(\sigma E)\right| > \gamma\right) \le \varepsilon$ with probability at least 1 – δ?

This formulation essentially asks whether a statistical learning model exists that can extract exponentially large amounts of information from a relatively small dataset. Aaronson provided an affirmative answer by proving that only $m_{\min} = \mathcal{O}\!\left(\frac{1}{\gamma^2\varepsilon^2}\left(\frac{n}{\gamma^2\varepsilon^2}\ln^2\frac{1}{\gamma\varepsilon} + \ln\frac{1}{\delta}\right)\right)$ measurements are sufficient, given that γε ≥ 7η, with η > 0 being the upper bound of $\left|\mathrm{tr}(\sigma E_i) - \mathrm{tr}(\rho E_i)\right|$ for all i = 1, …, m. This result demonstrates that approximating an exponential number of two-outcome measurements requires only a number of quantum state measurements that scales linearly with n, offering a considerable improvement over the exponential complexity of traditional approaches.

While Aaronson’s result focuses on predicting two-outcome measurements, our work addresses quantum sample tomography, building on the experimental investigations by Smith et al. [4]. In the subsequent analysis, let 𝔼 denote the expectation. The problem can be formally defined as follows:

Problem 2

Given an unknown $D = 2^n$-dimensional quantum mixed state ρ and a distribution 𝒟 over unitary matrices, we sample m unitaries $\{U_1, \dots, U_m\}$ independently from 𝒟 and obtain their measurement results $b_i = \mathrm{diag}(U_i \rho U_i^{\dagger})$ for i = 1, …, m. For any chosen failure probability δ > 0 and error tolerance ε > 0, what is the minimum number of measurements m (depending on D, ε, and δ) such that any hypothesis state σ minimizing the quadratic functional $\sum_{i=1}^{m}\left\|\mathrm{diag}(U_i \sigma U_i^{\dagger}) - b_i\right\|^2$, where ∥·∥ denotes the $\ell_2$ norm, satisfies $\mathbb{E}_{U\sim\mathcal{D}}\left(\left\|\mathrm{diag}(U \sigma U^{\dagger}) - \mathrm{diag}(U \rho U^{\dagger})\right\|\right) \le \varepsilon$ with probability at least 1 – δ?

While structurally similar to Problem 1, our formulation differs in three key aspects. First, instead of two-outcome measurements, we consider unitary transformations as independent variables. Second, our output consists of complete probability distributions, which can be represented as vectors in practical implementations, rather than binary outcomes. Third, we employ expectation rather than probability, which is more general. In fact, using Markov's inequality, if we choose ε = ε′γ′, where ε′, γ′ > 0, then it holds that $\Pr_{U\sim\mathcal{D}}\left(\left\|\mathrm{diag}(U \sigma U^{\dagger}) - \mathrm{diag}(U \rho U^{\dagger})\right\| > \gamma'\right) \le \varepsilon'$, which is consistent with the conclusion of Problem 1. Thus, our formulation represents a natural generalization of Aaronson's original shadow tomography problem.
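To make the setting of Problem 2 concrete, here is a minimal NumPy sketch of the data-generation step and the empirical quadratic functional. The Haar sampling via QR decomposition, the pure test state, and all function names are our illustrative choices and not part of the paper's implementation.

```python
import numpy as np

def haar_unitary(dim: int, rng: np.random.Generator) -> np.ndarray:
    """Sample a Haar-random unitary via QR decomposition of a complex Ginibre matrix."""
    Z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(Z)
    phases = np.diag(R) / np.abs(np.diag(R))   # fix the column phases
    return Q * phases

def measurement_distribution(U: np.ndarray, rho: np.ndarray) -> np.ndarray:
    """b = diag(U rho U^dagger): outcome probabilities in the rotated basis."""
    return np.real(np.diag(U @ rho @ U.conj().T))

def empirical_loss(sigma: np.ndarray, unitaries, targets) -> float:
    """(1/m) sum_i || diag(U_i sigma U_i^dagger) - b_i ||_2^2."""
    losses = [np.sum((measurement_distribution(U, sigma) - b) ** 2)
              for U, b in zip(unitaries, targets)]
    return float(np.mean(losses))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 3, 50
    dim = 2 ** n
    # True state: a random pure state |psi><psi|.
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())

    unitaries = [haar_unitary(dim, rng) for _ in range(m)]
    targets = [measurement_distribution(U, rho) for U in unitaries]

    # The true state attains zero empirical loss; the maximally mixed state does not.
    print(empirical_loss(rho, unitaries, targets))                  # ~0
    print(empirical_loss(np.eye(dim) / dim, unitaries, targets))    # > 0
```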

In the subsequent analysis, we let the loss function be $\ell(x, y) = \|x - y\|_2^2$ unless otherwise specified. We denote by poly(n) a polynomial function of n. We provide an affirmative answer to Problem 2 under the assumption that ρ is sparse, which is stated in the following theorem:

Theorem 1

Let ρ be an n-qubit quantum state and 𝒟 be a distribution over unitary matrices. Assume that the number of nonzero elements in ρ is only polynomial in n. For any given failure probability 0 < δ < 1 and error tolerance γ > 0, consider a collection of unitary matrices $\{U_i\}_{i=1}^{m}$, where m = poly(n) with coefficient depending on δ and γ, sampled independently from distribution 𝒟, with the corresponding probability distributions $h(U_i) = \mathrm{diag}(U_i \rho U_i^{\dagger})$. Suppose there exists a learning model f from a sufficiently expressive hypothesis space ℱ such that:

f corresponds to some quantum state σ such that $f(U_i) = \mathrm{diag}(U_i \sigma U_i^{\dagger})$ for all i = 1, …, m,

f achieves an empirical error bound $\frac{1}{m}\sum_{i=1}^{m}\ell\left(h(U_i), f(U_i)\right) \le \eta$ for a given η > 0.

Then with probability at least 1 – δ, the expected loss satisfies: $\mathbb{E}_{\mathcal{D}}\,\ell\left(h(U), f(U)\right) \le \gamma + 2\eta.$

The theorem requires three assumptions. First, the number of nonzero elements in the density matrix should be polynomial in the qubit number, which enables dimension reduction of the feature map in Section 4. Second, the output of the learning algorithm should be consistent with measurement results from some quantum state, allowing us to express the algorithm's output as a functional of feature functions. In Section 5, we achieve this requirement for the neural network through a penalty term in the loss function. Third, the loss function should remain below a certain threshold, which is achievable for algorithms with sufficient learning capacity.

For the proof, we diverge from the method employed in [21], which relies on amplification techniques and adaptive protocols for iterative density matrix reconstruction. Instead, we follow the theoretical framework established in [15], which offers a more direct approach to estimate measurement statistics without requiring complete state reconstruction. In particular, we employ local Rademacher complexity, a fundamental concept in statistical learning theory [18], to establish our proof. Our proof strategy focuses on controlling the local Rademacher complexity of the loss function. We first decompose the local Rademacher complexity of the loss function into the local Rademacher complexities of its components. Next we construct a kernel function and the corresponding feature map such that the components can be expressed as functionals of the feature map. We then estimate the expected loss over the entire distribution using empirical loss and local Rademacher complexity. Finally, we employ random projection techniques to reduce the required dataset size to polynomial scale.

Rademacher Complexity

In order to prove Theorem 1, we need to bound the actual error of the model, which requires quantifying the learning capacity through the Rademacher complexity. This tool measures how effectively a function class can fit random noise. The definition of Rademacher complexity is given below.

Definition 1

(Rademacher Complexity). Let (χ, P) be a probability space and ℱ be a set of measurable functions mapping from χ to ℝ. Given a positive integer m, consider:

m i.i.d. samples from (χ, P), denoted by X1, X2, …, Xm,

m i.i.d. Rademacher variables with $\Pr(\sigma_i = \pm 1) = \frac{1}{2}$, denoted by σ = {σ1, σ2, …, σm}.

For any f ∈ ℱ, we define: $P_m f = \frac{1}{m}\sum_{i=1}^{m} f(X_i)$ (empirical mean), $Pf = \mathbb{E}\,f(X)$ (expected value), and $R_m f = \frac{1}{m}\sum_{i=1}^{m}\sigma_i f(X_i)$ (Rademacher average).

The empirical Rademacher complexity of ℱ is then defined as: $\mathbb{E}_\sigma R_m\mathcal{F} = \frac{1}{m}\,\mathbb{E}_\sigma\!\left[\sup_{f\in\mathcal{F}}\sum_{i=1}^{m}\sigma_i f(X_i)\,\Big|\,(X_1, X_2, \dots, X_m)\right]$, where $R_m\mathcal{F} = \sup_{f\in\mathcal{F}} R_m f$.

The local Rademacher complexity (LRC) is defined in the same way but with the additional constraint $P_m f^2 < r$, and is denoted as $\mathbb{E}_\sigma R_m\mathcal{F}(r)$. The LRC serves as a fundamental tool in theoretical machine learning proofs, especially through its sub-root property [18].
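For intuition, the empirical Rademacher complexity of a small finite function class can be estimated by Monte Carlo over the Rademacher signs σ. The threshold-function class below is a toy example of ours, unrelated to the quantum setting, used only to illustrate Definition 1.

```python
import numpy as np

def empirical_rademacher(values: np.ndarray, n_trials: int = 2000,
                         rng: np.random.Generator | None = None) -> float:
    """Monte Carlo estimate of (1/m) E_sigma sup_{f in F} sum_i sigma_i f(X_i).

    `values` has shape (num_functions, m): values[f, i] = f(X_i) on a fixed sample.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    num_f, m = values.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)   # Rademacher signs
        total += np.max(values @ sigma)           # sup over the (finite) class
    return total / (n_trials * m)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m = 200
    X = rng.uniform(-1.0, 1.0, size=m)
    # A small class of threshold functions f_t(x) = 1{x <= t}.
    thresholds = np.linspace(-1.0, 1.0, 21)
    values = np.array([(X <= t).astype(float) for t in thresholds])
    print("empirical Rademacher complexity ~", empirical_rademacher(values))
```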

Definition 2

A function φ : [0, +∞) → ℝ is called sub-root if it satisfies the following conditions:

φ is non-negative,

φ is non-decreasing,

$\varphi(r)/\sqrt{r}$ is non-increasing for r > 0.

This sub-root property ensures the existence of a unique fixed point for φ, which is shown in the following lemma.

Lemma 1

A sub-root function φ, which is not identically zero, possesses exactly one positive fixed point, denoted as r*. For any r > 0, the inequality φ(r) ≤ r holds if and only if r* ≤ r. Furthermore, given two sub-root functions φ1 and φ2 with φ1(r) < φ2(r) for all r > 0, the fixed point $r_1^*$ of φ1 must be smaller than the fixed point $r_2^*$ of φ2.

Proof

The proof begins by establishing the continuity of φ. For any r1 > r2 > 0, the non-decreasing property of φ implies |φ(r1) – φ(r2)| = φ(r1) – φ(r2). Given that $\varphi(r)/\sqrt{r}$ is non-increasing, we have $\varphi(r_1)/\sqrt{r_1} \le \varphi(r_2)/\sqrt{r_2}$, leading to the inequality: $\varphi(r_1) - \varphi(r_2) \le \varphi(r_2)\,\frac{\sqrt{r_1} - \sqrt{r_2}}{\sqrt{r_2}}.$

The continuity of φ follows since |φ(r1) – φ(r2)| approaches zero when r1 approaches r2 from either direction. The function $\varphi(r)/\sqrt{r}$ inherits continuity on (0, +∞) and remains non-negative. Since $1/\sqrt{r}$ is strictly decreasing on (0, +∞), the ratio φ(r)/r is also strictly decreasing.

If the ratio φ(r)/r consistently exceeded 1, then $\lim_{r\to+\infty}\varphi(r)/\sqrt{r}$ would diverge to infinity, which contradicts the sub-root property (3). Conversely, if φ(r)/r were consistently less than 1, then $\lim_{r\to 0^{+}}\varphi(r)/\sqrt{r} = 0$, which would imply that φ vanishes on [0, +∞), contradicting the non-triviality of φ. Therefore, the equation φ(r)/r = 1 has exactly one positive solution, guaranteed by monotonicity. When φ(r) ≤ r for some r > 0, we can deduce that φ(t)/t ≤ 1 for all t ≥ r, implying r* ≤ r. The converse follows analogously.

Finally, for two sub-root functions φ1 and φ2 with φ1(r) < φ2(r) for all r > 0, we have $\varphi_1(r_2^*) < \varphi_2(r_2^*) = r_2^*$, which implies $r_1^* < r_2^*$.
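As an aside, the characterization "φ(r) ≤ r if and only if r ≥ r*" also yields a simple numerical procedure for bounding the fixed point: bisection on the sign of φ(r) − r converges to r*. The sketch below uses an illustrative sub-root function of our choosing whose fixed point is known in closed form.

```python
import numpy as np

def fixed_point(phi, lo: float = 1e-12, hi: float = 1e12, iters: int = 200) -> float:
    """Locate the unique positive fixed point of a sub-root function by bisection.

    By Lemma 1, phi(r) <= r holds exactly when r >= r*, so the predicate is
    monotone in r and bisection applies.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid) <= mid:
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    a, b = 3.0, 0.5
    # Sub-root example: phi(r)/sqrt(r) = a + b/sqrt(r) is non-increasing.
    phi = lambda r: a * np.sqrt(r) + b
    r_star = fixed_point(phi)
    # Closed form: s = sqrt(r*) solves s^2 = a*s + b.
    s = 0.5 * (a + np.sqrt(a * a + 4.0 * b))
    print(r_star, s * s)    # both approximately 9.97
```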

Lemma 1 establishes a fundamental property of sub-root functions. It shows that a sub-root function has a unique fixed point. It also provides us with a method to bound this fixed point. It turns out that the LRC has a sub-root nature, as given in the following lemma [18].

Lemma 2

Let ℱ be a class of measurable functions. Consider a functional T : ℱ → [0, +∞) that satisfies $T(\alpha f) \le \alpha^2 T(f)$ for all f ∈ ℱ and α ∈ [0, 1]. Given $\hat f \in \mathcal{F}$, define a random function φ as the LRC on a subset of ℱ: $\varphi(r) = \mathbb{E}_\sigma R_m\left\{f \in \mathcal{F} : T(f - \hat f) \le r\right\}, \quad r \in [0, +\infty).$

Then both φ and its expectation 𝔼φ(r) are sub-root functions.

Now consider a supervised learning framework with input space χ and output space $\mathcal{Y}$, where the joint distribution P is defined on $\chi\times\mathcal{Y}$. Given a training sample $\{(X_i, Y_i)\}_{i=1}^{m}$ drawn independently from P, the objective is to identify a function $f : \chi\to\mathcal{Y}$ from a function class ℱ that minimizes the expected loss $\mathbb{E}\,\ell_f = \mathbb{E}\,\ell(f(X), Y)$, where ℓ is an arbitrary loss function on $\mathcal{Y}\times\mathcal{Y}$. Let $\ell_{\mathcal{F}}$ denote the loss class induced by ℱ, formally defined as: $\ell_{\mathcal{F}} = \left\{(x, y)\mapsto\ell(f(x), y) : f\in\mathcal{F}\right\}.$

Define a star-hull around zero as: $\mathrm{star}(\ell_{\mathcal{F}}, 0) = \left\{\alpha f : f \in \ell_{\mathcal{F}},\ \alpha \in [0, 1]\right\}.$

Building upon Lemma 2, one can establish the following theorem [18]:

Theorem 2

Consider a bounded loss function $\ell : \mathcal{Y}\times\mathcal{Y}\to[0, 1]$. Given x > 0, define the function $\hat\varphi_{m,x}$ as: $\hat\varphi_{m,x}(r) = 20\,\mathbb{E}_\sigma R_m\left\{f \in \mathrm{star}(\ell_{\mathcal{F}}, 0) : P_m f^2 \le 2r\right\} + \frac{13x}{m}.$

Let $\hat r^*$ be the fixed point of $\hat\varphi_{m,x}$. Then for any K > 1, the following inequality holds for all functions f ∈ ℱ: $Pf \le \frac{K}{K-1}\,P_m f + 6K\hat r^* + \frac{x(11 + 5K)}{m}$ with probability at least $1 - 3e^{-x}$.

This theorem establishes a relationship between the empirical error and the actual error through the local Rademacher complexity. When the Rademacher term on the right-hand side of equation (2) is sufficiently small and the fixed point $\hat r^*$ approaches the origin, the actual error Pf is expected to approach zero. To quantify this convergence, an estimation of the Rademacher complexity is required. This estimation is based on two results; the first concerns the Rademacher complexity of vector-valued functions [22].

Theorem 3

For any set χ and a sequence (X1, …, Xm) ∈ $\chi^m$, consider a class of functions ℱ mapping from χ to the $\ell_2$ space. Given a collection of functions $h_i : \ell_2 \to \mathbb{R}$ with Lipschitz constant L, the following inequality holds: $\mathbb{E}_\sigma\sup_{f\in\mathcal{F}}\sum_{i=1}^{m}\sigma_i h_i\!\left(f(X_i)\right) \le \sqrt{2}\,L\,\mathbb{E}_\sigma\sup_{f\in\mathcal{F}}\sum_{i=1}^{m}\sum_{k}\sigma_{ik} f_k(X_i),$ where $\sigma_{ik}$ represents an independent doubly indexed Rademacher sequence and $f_k(X_i)$ denotes the k-th component of the vector $f(X_i)$.

Theorem 3 establishes a fundamental relationship between the Rademacher complexity of vector-valued functions and their scalar components. In particular, it provides a practical way for complexity estimation by showing that the complexity of a composite vector-valued function can be bounded by the complexities of its components. To obtain a more accurate complexity bound, we adapt the theorem from Cortes et al. [23] with an additional norm scaling factor.

Theorem 4

Let $k(x, \tilde x)$ be a Mercer kernel (symmetric and positive semidefinite) with associated feature map $\Phi_k$. Consider its eigenvalue decomposition: $k(x, \tilde x) = \sum_{j=1}^{\infty}\lambda_j\varphi_j(x)^{T}\varphi_j(\tilde x),$ where $(\lambda_j)_{j=1}^{\infty}$ represents the sequence of eigenvalues in descending order with corresponding eigenfunctions $(\varphi_j)_{j=1}^{\infty}$. Given B > 0, define the function class: $\mathcal{F} = \left\{f_w = \left(x \mapsto \langle w, \Phi_k(x)\rangle\right) : \|w\|_2 \le B\right\}.$

Then, for any r > 0, the expected local Rademacher complexity satisfies the bound: $\mathbb{E}_\sigma R_m\mathcal{F}(r) \le \sqrt{\frac{2B^2}{m}\min_{\theta \ge 0}\left(\theta r + \sum_{j > \theta}\lambda_j\right)} = \sqrt{\frac{2B^2}{m}\sum_{j=1}^{\infty}\min(r, \lambda_j)}.$

Theorem 4 shows that if we know the eigenvalues of the kernel function, we can estimate the local Rademacher complexity of functionals of its feature map. The construction of such a feature map will be presented in Section 4.
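As a rough sanity check of how this bound behaves, the sketch below evaluates $\sqrt{\frac{2B^2}{m}\sum_j\min(r,\lambda_j)}$ using the spectrum of an empirical Gaussian kernel matrix (normalized by m) as a stand-in for the eigenvalues $(\lambda_j)$; the kernel, data, and radii are illustrative assumptions, not the feature map of Section 4.

```python
import numpy as np

def lrc_bound(eigenvalues: np.ndarray, r: float, m: int, B: float = 1.0) -> float:
    """Evaluate sqrt( (2 B^2 / m) * sum_j min(r, lambda_j) ) from Theorem 4."""
    return float(np.sqrt(2.0 * B ** 2 / m * np.sum(np.minimum(r, eigenvalues))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = 500
    X = rng.normal(size=(m, 5))
    # Gaussian (RBF) kernel matrix; its m-normalized spectrum stands in for (lambda_j).
    sq_dist = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * sq_dist)
    lam = np.sort(np.linalg.eigvalsh(K))[::-1] / m   # eigenvalues in descending order
    for r in (1e-1, 1e-2, 1e-3):
        print(f"r = {r:g}: local Rademacher bound = {lrc_bound(lam, r, m):.4f}")
```

The bound shrinks as the radius r decreases, which is exactly the localization effect exploited in the proof of Theorem 1.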

Polynomial Complexity

In the preceding sections, we introduced the local Rademacher complexity and related theorems. This section focuses on proving Theorem 1. We begin by establishing the Lipschitz continuity of the loss function class, which is needed in Theorem 3. In the subsequent analysis, 𝒢 represents the function class that maps the input space χ to the distribution space $\mathbb{R}^N$, where $N = 2^n$ is the dimension of the n-qubit Hilbert space.

Lemma 3

For a given distribution function $\bar f$, let us define a new function class $\mathcal{G}_{\bar f}$ as: $\mathcal{G}_{\bar f} = \left\{f(x) - \bar f(x) : f \in \mathcal{G}\right\}.$

Consider the function $\ell_G : \mathcal{G}_{\bar f} \to \mathbb{R}$ defined by: $\ell_G\!\left(g(x)\right) = \sum_{k=1}^{N}\left(g_k(x)\right)^2, \quad g \in \mathcal{G}_{\bar f},$ where $g_k(x)$ is the k-th component of g(x). Then every function in star(ℓ𝒢, 0) is Lipschitz continuous with Lipschitz constant 4.

Proof

It is sufficient to prove $\left|\ell_G\!\left(g(x)\right) - \ell_G\!\left(h(x)\right)\right| \le 4\left\|g(x) - h(x)\right\|$, where $g = f_1 - \bar f$ and $h = f_2 - \bar f$ both belong to $\mathcal{G}_{\bar f}$.

Let us expand the left-hand side of equation (5): $\left|\ell_G(g(x)) - \ell_G(h(x))\right| = \left|\sum_{i=1}^{N}(g_i(x))^2 - \sum_{i=1}^{N}(h_i(x))^2\right| = \left|\sum_{i=1}^{N}\left(g_i(x) - h_i(x)\right)\left(g_i(x) + h_i(x)\right)\right| \le \left|\sum_{i=1}^{N}f_{1,i}(x)\left(g_i(x) - h_i(x)\right)\right| + \left|\sum_{i=1}^{N}f_{2,i}(x)\left(g_i(x) - h_i(x)\right)\right| + \left|2\sum_{i=1}^{N}\bar f_i(x)\left(g_i(x) - h_i(x)\right)\right| \le \left\|g(x) - h(x)\right\|_{\infty}\left(\sum_{i=1}^{N}f_{1,i}(x) + \sum_{i=1}^{N}f_{2,i}(x) + 2\sum_{i=1}^{N}\bar f_i(x)\right) \le 4\sqrt{\sum_{i=1}^{N}\left(g_i(x) - h_i(x)\right)^2} = 4\left\|g - h\right\|_2.$

The second inequality uses the fact that all f1,i(x),f2,i(x),f¯i(x) are components of probability distributions.

We then introduce a specific feature map for unitary matrices that plays a crucial role in our estimation. Let SU(N) denote the special unitary group of degree $N = 2^n$, consisting of all N × N unitary matrices with determinant equal to 1. For a unitary matrix $U := (u_{ij}) \in SU(N)$ and 1 ≤ k ≤ N, we define the feature map $\Phi^{(k)}(U) = \left\{\left(u_{ki}\overline{u_{kj}}\right)_{(i,j)} : i, j = 1, \dots, N\right\} \in \mathbb{C}^{N^2}.$ Given any two unitary matrices U1, U2, we define the corresponding kernel function as their inner product in the feature space: $k(U_1, U_2) = \left\langle\Phi^{(k)}(U_1), \Phi^{(k)}(U_2)\right\rangle.$

By Theorem 4, this kernel admits a spectral decomposition: $k(U_1, U_2) = \sum_{i=1}^{K}\lambda_i\varphi_i(U_1)\varphi_i^{T}(U_2),$ where K represents the rank of the kernel matrix, bounded above by $N^2$. The construction of this feature map naturally leads to the following result.

Theorem 5

Let ρ be a density matrix of order $N = 2^n$ and $U \in SU(N)$. Then each element of the vector $h = \mathrm{diag}(U\rho U^{\dagger})$ can be expressed as an inner product between the feature map and a vector whose $\ell_2$-norm is bounded by 1.

Proof

For k = 1, …, N, we have $h_k(U) = \sum_{i,j=1}^{N} u_{ki}\overline{u_{kj}}\,\rho_{ij}.$

Let $w = (\rho_{ij}) \in \mathbb{C}^{N^2}$. Then we can express $h_k(U)$ as the inner product: $h_k(U) = \left\langle w, \Phi^{(k)}(U)\right\rangle.$

To establish the norm bound, observe that $\|w\|_2$ equals the Frobenius norm of ρ. Let $\{\sigma_i\}_{i=1}^{N}$ denote the singular values of ρ. By the properties of density matrices, it holds that $\sum_{i=1}^{N}\sigma_i = \mathrm{tr}\,\rho = 1$. Therefore: $\|\rho\|_F = \sqrt{\sum_{i=1}^{N}\sigma_i^2} \le \sum_{i=1}^{N}\sigma_i = 1,$ where the inequality follows from the fact that 0 ≤ σi ≤ 1. This establishes the conclusion that $\|w\|_2 \le 1$.
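The following NumPy sketch builds the feature map $\Phi^{(k)}(U)$ and numerically checks the statement of Theorem 5, namely that $h_k(U)$ is recovered by pairing the flattened density matrix with $\Phi^{(k)}(U)$ and that $\|w\|_2 \le 1$; the random test state and unitary are illustrative.

```python
import numpy as np

def feature_map(U: np.ndarray, k: int) -> np.ndarray:
    """Phi^(k)(U) with components u_{ki} * conj(u_{kj}), flattened to length N^2."""
    row = U[k, :]
    return np.outer(row, row.conj()).ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 4
    # Random density matrix and random unitary (QR of a complex Ginibre matrix).
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = A @ A.conj().T
    rho /= np.trace(rho)
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

    w = rho.ravel()                                  # the weight vector of Theorem 5
    h = np.real(np.diag(Q @ rho @ Q.conj().T))       # diag(U rho U^dagger)
    for k in range(N):
        # Bilinear pairing sum_{ij} rho_{ij} u_{ki} conj(u_{kj}).
        inner = np.sum(w * feature_map(Q, k))
        assert np.isclose(h[k], np.real(inner))
    print("||w||_2 =", np.linalg.norm(w), "<= 1")    # Frobenius norm of rho
```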

We also require the Johnson-Lindenstrauss (JL) lemma [24], which is fundamental to the dimensional reduction analysis.

Theorem 6

(Johnson-Lindenstrauss Lemma). For any sample size m ≥ 1, any finite point set $X = \{x_i \in \mathbb{R}^d\}_{i=1}^{m}$, and any error tolerance ε ∈ (0, 1), let q be a positive integer satisfying: $q \ge \frac{C\ln m}{\varepsilon^2(1 - \varepsilon)},$ where C is a positive constant that only depends on d. Then there exists a linear map $F : \mathbb{R}^d \to \mathbb{R}^q$ of the form $F(x) = V^{T}x$, where $V = (v_{ij}) \in \mathbb{R}^{d\times q}$ and the $v_{ij}$ are i.i.d. random variables with zero mean and scaled unit variance, such that $(1 - \varepsilon)\|x_i - x_j\|_2^2 \le \|F(x_i) - F(x_j)\|_2^2 \le (1 + \varepsilon)\|x_i - x_j\|_2^2$ for all pairs $x_i, x_j \in X$ with probability at least 1 – δ, where $\delta = 2e^{-\varepsilon^2(1 - \varepsilon)q/4}$.

Remark 1

A canonical example is simply choosing vij ~ 𝒩 (0, 1/q).
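As a quick illustration of the remark, the sketch below projects a set of high-dimensional vectors with a Gaussian matrix whose entries are drawn from 𝒩(0, 1/q) and checks that pairwise distances are approximately preserved; the dimensions and sample counts are arbitrary illustrative values.

```python
import numpy as np

def jl_project(points: np.ndarray, q: int, rng: np.random.Generator) -> np.ndarray:
    """Project the rows of `points` from R^d to R^q with a Gaussian matrix V ~ N(0, 1/q)."""
    d = points.shape[1]
    V = rng.normal(scale=1.0 / np.sqrt(q), size=(d, q))
    return points @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d, q = 200, 4096, 600
    X = rng.normal(size=(m, d))
    Y = jl_project(X, q, rng)

    # Pairwise distances are approximately preserved after the projection.
    pairs = rng.integers(0, m, size=(20, 2))
    ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
              for i, j in pairs if i != j]
    print("distortion ratios (should be close to 1):", np.round(ratios, 3))
```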

Now we have all the tools necessary to establish our main result. We begin by presenting a weaker variant of Theorem 1:

Theorem 7

Let ρ be an n-qubit quantum state and 𝒟 be a distribution over unitary matrices. For any given failure probability 0 < δ < 1 and error tolerance γ > 0, consider a collection of unitary matrices $\{U_i\}_{i=1}^{m}$, where m = poly($2^n$) with coefficient depending on δ and γ, sampled independently from distribution 𝒟, with the corresponding probability distributions $h(U_i) = \mathrm{diag}(U_i\rho U_i^{\dagger})$. Suppose there exists a learning model f from a sufficiently expressive hypothesis space ℱ such that:

f corresponds to some quantum state σ such that $f(U_i) = \mathrm{diag}(U_i\sigma U_i^{\dagger})$ for all i = 1, …, m,

f achieves an empirical error bound $\frac{1}{m}\sum_{i=1}^{m}\ell\left(h(U_i), f(U_i)\right) \le \eta$ for a given η > 0.

Then with probability at least 1 – δ, the expected loss satisfies: $\mathbb{E}_{\mathcal{D}}\,\ell\left(h(U), f(U)\right) \le \gamma + 2\eta.$

Remark 2

Theorems 1 and 7 differ only in their sample complexity: m = poly(n) in Theorem 1 versus m = poly($2^n$) in Theorem 7.

Proof

According to the assumption, we can represent each probability distribution h(Ui) with i = 1, …, m as an N-dimensional vector, where $N = 2^n$ corresponds to the dimension of the n-qubit Hilbert space. Let $h_k(U_i)$ denote the probability of measuring the system in the k-th computational basis state, where the states are encoded in binary representation, i.e., $|0\rangle^{\otimes n}$ corresponds to index 0, $|0\rangle^{\otimes(n-1)}|1\rangle$ corresponds to index 1, and so forth.

We denote the unitary matrices $U_i$ as input variables $x_i$. According to Theorem 3 and Lemma 3, it holds that $\mathbb{E}_\sigma R_m\ell_{\mathcal{G}} = \frac{1}{m}\,\mathbb{E}_\sigma\sup_{g\in\mathcal{G}}\sum_{i=1}^{m}\sigma_i\,\ell\!\left(g(x_i), h(x_i)\right) \le \frac{4\sqrt{2}}{m}\,\mathbb{E}_\sigma\sup_{g\in\mathcal{G}}\sum_{i=1}^{m}\sum_{k=1}^{N}\sigma_{ik}\left(g_k(x_i) - h_k(x_i)\right) \le \frac{4\sqrt{2}}{m}\sum_{k=1}^{N}\mathbb{E}_\sigma\sup_{g\in\mathcal{G}}\sum_{i=1}^{m}\sigma_{ik}\left(g_k(x_i) - h_k(x_i)\right) = 4\sqrt{2}\sum_{k=1}^{N}\mathbb{E}_\sigma R_m\mathcal{G}_k,$ where $g_k$ is the k-th component of g, and $\mathcal{G}_k = \{g_k - h_k : g \in \mathcal{G}\}$.

Equation (7) enables us to estimate the Rademacher complexity of the loss function through its component-wise Rademacher complexities. Given r > 0, considering the constraint $\frac{1}{m}\sum_{i=1}^{m}\left(\sum_{k=1}^{N}\left(g_k(x_i) - h_k(x_i)\right)^2\right)^2 \le r$, we can deduce that $\left(\sum_{k=1}^{N}\left(g_k(x_i) - h_k(x_i)\right)^2\right)^2 \le mr$ for all i = 1, …, m. This implies $\sum_{k=1}^{N}\left(g_k(x_i) - h_k(x_i)\right)^2 \le \sqrt{mr}$.

Summing over i yields $\sum_{i=1}^{m}\sum_{k=1}^{N}\left(g_k(x_i) - h_k(x_i)\right)^2 \le m\sqrt{mr}$, or equivalently, $\sum_{k=1}^{N}\frac{1}{m}\sum_{i=1}^{m}\left(g_k(x_i) - h_k(x_i)\right)^2 \le \sqrt{mr}$.

Denote $a_k = \frac{1}{m}\sum_{i=1}^{m}\left(g_k(x_i) - h_k(x_i)\right)^2$. For any interval $\left[\sqrt{mr}/2^{j}, \sqrt{mr}/2^{j-1}\right]$, the number of $a_k$ within this interval cannot exceed $2^j$, where 1 ≤ j ≤ n. For j = n, we extend the lower bound to 0 so that $0 \le a_k \le \sqrt{mr}/2^{n-1}$. Consequently, from equation (7), it follows that $\mathbb{E}_\sigma R_m\ell_{\mathcal{G}}(r) \le 4\sqrt{2}\sum_{j=1}^{n}2^{j}\,\mathbb{E}_\sigma R_m\mathcal{G}_k\!\left(\frac{\sqrt{rm}}{2^{j-1}}\right).$

Based on Theorem 5, we have $g_k = \langle w, \Phi^{(k)}(U)\rangle$ and $h_k = \langle v, \Phi^{(k)}(U)\rangle$, which implies $\|w - v\| \le \|w\| + \|v\| \le 2$.

From Theorem 4, we obtain $\mathbb{E}_\sigma R_m\mathcal{G}_k\!\left(\frac{\sqrt{rm}}{2^{j-1}}\right) \le 2\sqrt{\frac{2}{m}\sum_{k=1}^{K}\min\!\left(\frac{\sqrt{rm}}{2^{j-1}}, \lambda_k\right)},$ where $K \le N^2$. Since k ranges from 1 to K, this yields $\mathbb{E}_\sigma R_m\mathcal{G}_k\!\left(\frac{\sqrt{rm}}{2^{j-1}}\right) \le 2\sqrt{\frac{2K}{m}\cdot\frac{\sqrt{rm}}{2^{j-1}}}.$

Substituting equation (10) into equation (9) produces $\mathbb{E}_\sigma R_m\ell_{\mathcal{G}}(r) \le 8\sqrt{2}\,r^{\frac{1}{4}}m^{-\frac{1}{4}}\sqrt{K}\sum_{j=1}^{n}2^{\frac{j}{2}} \le 32\sqrt{2}\sqrt{2NK}\,m^{-\frac{1}{4}}r^{\frac{1}{4}}.$

Combining equation (11) and equation (1) in Theorem 2, we get that $\hat\varphi_{m,x}(r) \le 1280\,r^{\frac{1}{4}}m^{-\frac{1}{4}}N^{\frac{3}{2}} + \frac{13x}{m} =: \tilde\varphi_{m,x}(r).$

Based on Lemma 1, we now aim to find an upper bound for the fixed point $\tilde r$ of $\tilde\varphi_{m,x}(r)$, where $\tilde r$ satisfies the equation $1280\,\tilde r^{\frac{1}{4}}m^{-\frac{1}{4}}N^{\frac{3}{2}} + \frac{13x}{m} = \tilde r.$

This equation can be rewritten as $\tilde r^{\frac{1}{4}}\left(\tilde r^{\frac{3}{4}} - a\right) = b$, where $a = 1280\,N^{\frac{3}{2}}m^{-\frac{1}{4}}$ and $b = \frac{13x}{m}$. Let $\hat r = \left(1280\,N^{\frac{3}{2}}m^{-\frac{1}{4}} + \frac{1}{m}\right)^{\frac{4}{3}} + \left(\frac{13x}{m}\right)^{4}.$

Then $\hat r$ satisfies $\hat r^{\frac{3}{4}} \ge a + \frac{1}{m}$ and $\hat r^{\frac{1}{4}} \ge mb$, so using Lemma 1, we get $\hat r \ge \tilde r$.

Applying Theorem 2, we can establish: $Pf \le \frac{K}{K-1}P_m f + 6K\hat r + \frac{x(11+5K)}{m} \le \frac{K}{K-1}P_m f + 6K\left(\left(1280\,N^{\frac{3}{2}}m^{-\frac{1}{4}} + \frac{1}{m}\right)^{\frac{4}{3}} + \left(\frac{13x}{m}\right)^{4}\right) + \frac{x(11+5K)}{m}$ with probability $1 - 3e^{-x}$. For any 0 < δ < 1, setting $x = \ln\frac{3}{\delta}$, we can select an appropriate m = poly($2^n$) to ensure that the sum of the last two terms remains below a given error tolerance γ. In particular, it holds that $m = \mathcal{O}\!\left(N^{\frac{8}{3}}\left(\ln\frac{3}{\delta}\right)^{2}\frac{1}{\gamma^{3}}\right)$ from equation (14).

While our analysis shows that quantum state reconstruction is possible, the exponential scaling of measurement requirements, m = poly($2^n$), remains a fundamental challenge for practical applications. To address this limitation, we extend our analysis to Theorem 1 by incorporating sparsity constraints on the density matrix representation of quantum states, which yields a substantial reduction in measurement complexity.

Proof of Theorem 1

Here we follow the notations established in Theorem 7. From Theorem 7, for random samples $\{(x_i, y_i)\}_{i=1}^{m}$ drawn from distribution 𝒟, where m = poly($2^n$), we obtain equation (14). Let $\tilde{\mathcal{D}}$ denote the uniform distribution over these samples.

Choose a constant $\varepsilon = \min\left\{\frac{\gamma}{2}, \frac{1}{2}\right\}$. Following Theorem 6, we define $q = \frac{C_1 n}{\varepsilon^2(1 - \varepsilon)} \ge \frac{C\ln(2m)}{\varepsilon^2(1 - \varepsilon)}$, where C > 0, C1 > 0 are constants. We construct a random projection matrix $V \in \mathbb{R}^{q\times N}$ with entries independently sampled from $\mathcal{N}(0, \frac{1}{q})$. With probability $1 - 2(2m)^{-\frac{C}{4}}$, it holds that $(1 - \varepsilon)\|z_i - z_j\| \le \|Vz_i - Vz_j\| \le (1 + \varepsilon)\|z_i - z_j\|$ for all pairs $z_i, z_j \in \{y_1, \dots, y_m, f(x_1), \dots, f(x_m)\}$. Let $\tilde f(x) = Vf(x)$ and $\tilde y = Vy$. From equation (15), we derive: $(1 - \varepsilon)^2\ell(f, y) \le \ell(\tilde f, \tilde y) \le (1 + \varepsilon)^2\ell(f, y).$

Following the proof structure of Theorem 7 with distribution $\tilde{\mathcal{D}}$, we now determine the required sample size $\tilde m$. Given the compression to dimension q, equation (7) becomes: $\mathbb{E}_\sigma R_{\tilde m}\ell_{\mathcal{G}} \le 4\sqrt{2}\sum_{k=1}^{q}\mathbb{E}_\sigma R_{\tilde m}\mathcal{G}_k.$

Using the sparsity hypothesis, we can reduce the dimension of the feature map $\Phi^{(k)}$, thus reducing the dimensionality factor K in equation (10) to M (the number of nonzero elements in ρ, which is poly(n)). This yields the refined bound: $r^* \le \left(1280\sqrt{Mq}\,\tilde m^{-\frac{1}{4}} + \frac{1}{\tilde m}\right)^{\frac{4}{3}} + \left(\frac{13x}{\tilde m}\right)^{4}.$

Choosing $\tilde m = \mathcal{O}\!\left((Mq)^{\frac{4}{3}}\left(\ln\frac{3}{\delta}\right)^{2}\frac{1}{\tilde\gamma^{3}}\right) = \mathrm{poly}(n)$ ensures: $P\tilde f \le 2P_{\tilde m}\tilde f + \tilde\gamma,$ where $0 < \tilde\gamma < 1$. Since $\tilde{\mathcal{D}}$ is discrete, we have $P\tilde f = P_m\tilde f$. The desired result follows from equation (14) and equation (16).

Recurrent Neural Network (RNN)

In this section, we address the practical implementation of Theorem 1. A challenge lies in determining an efficient parameterization of unitary matrices that can serve as viable input to our model. Since the dimension of the unitary group SU(2n) grows exponentially with n, direct parameterization of arbitrary unitary matrices becomes impractical. Therefore, for practical implementation, we restrict our attention to a subset of unitary matrices that can be decomposed as Kronecker products of 2 × 2 unitary matrices:

$U = U_1 \otimes U_2 \otimes \cdots \otimes U_n, \quad U_i \in SU(2).$

This reduction to local operations simplifies our approach by focusing on the parameterization of SU(2) matrices. We express the unitary matrices as: $\begin{pmatrix} \cos\frac{\theta}{2}\,e^{i(\phi+\psi)/2} & \sin\frac{\theta}{2}\,e^{i(\phi-\psi)/2} \\ -\sin\frac{\theta}{2}\,e^{-i(\phi-\psi)/2} & \cos\frac{\theta}{2}\,e^{-i(\phi+\psi)/2} \end{pmatrix},$ where θ, φ, ψ all range from 0 to 2π. The volume form for this parameterized unitary matrix is proportional to $d\cos(\theta)\,d\phi\,d\psi$. While our analysis allows for arbitrary probability distributions of unitary matrices, we adopt the Haar measure for sampling to ensure a uniform error distribution across randomly sampled unitary matrices. This choice leads to the following parameterization: $\theta = \arccos\theta',\quad \phi = \phi',\quad \psi = \psi',$ where φ′, ψ′ are sampled uniformly from the originally defined interval, and θ′ is sampled uniformly in [–1, 1].
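A sketch of this sampling procedure: draw cos θ uniformly in [−1, 1] and φ, ψ uniformly in [0, 2π) for every qubit, build each 2 × 2 factor, and take the Kronecker product. The explicit sign and phase placement in the matrix follows the Euler-angle form written above, which is our reconstruction of the convention; only the unitarity of the result is checked here.

```python
import numpy as np

def sample_su2(rng: np.random.Generator) -> np.ndarray:
    """One 2x2 unitary with cos(theta) uniform in [-1, 1] and phi, psi uniform in [0, 2*pi)."""
    theta = np.arccos(rng.uniform(-1.0, 1.0))
    phi, psi = rng.uniform(0.0, 2.0 * np.pi, size=2)
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([
        [c * np.exp(1j * (phi + psi) / 2),   s * np.exp(1j * (phi - psi) / 2)],
        [-s * np.exp(-1j * (phi - psi) / 2), c * np.exp(-1j * (phi + psi) / 2)],
    ])

def sample_product_unitary(n: int, rng: np.random.Generator) -> np.ndarray:
    """U = U_1 (x) U_2 (x) ... (x) U_n, one local rotation per qubit."""
    U = np.array([[1.0 + 0j]])
    for _ in range(n):
        U = np.kron(U, sample_su2(rng))
    return U

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    U = sample_product_unitary(3, rng)
    print(U.shape)                                    # (8, 8)
    print(np.allclose(U @ U.conj().T, np.eye(8)))     # unitarity check -> True
```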

To validate Theorem 1, we design a series of numerical experiments using the Long Short-Term Memory (LSTM) as our main statistical learning model. The LSTM model, known for its capability to process sequential data with long-term dependencies, is particularly suitable for our application. In our framework, each measurement outcome can be represented as a sequence of binary digits (0 or 1), analogous to a sentence in natural language processing. The model generates conditional probabilities for each subsequent measurement outcome based on current and previous results. The joint probability of the entire measurement sequence is then computed as the product of these conditional probabilities.

The prediction phase follows a different process from the training phase. During prediction, the neural network accepts specified conditions as input and generates conditional probabilities at each sequential step. These probabilities guide a sampling process where each binary outcome (“word”) is selected according to the computed probability distribution. The selected outcome is then fed as input to the subsequent step. The final probability of the complete quantum state is computed as the product of these sequential conditional probabilities. This prediction mechanism is depicted in Figure 1.

Figure 1.

The diagram illustrates the RNN architecture for sampling. FNN represents feedforward neural network. The output’s 0-1 distribution allows for sampling a specific state, which is then passed on to the next step as a measurement result.

To enhance the predictive capability, we incorporate quantum measurement context through a dual-input structure: (1) the measurement outcome of the previous particle, and (2) the parameterization of the unitary operator U corresponding to the current particle's measurement basis. This design enables the network to learn the correlation between measurement outcomes and their corresponding basis transformations, thereby capturing the quantum state's behavior under different measurement bases. For sequence initialization, we employ a designated token '2', which appears exclusively at the beginning of each measurement sequence. This initialization token provides a clear marker of sequence boundaries in the training data.

We now introduce the loss function employed in this work. We adopt the cross-entropy loss as our objective function due to its effectiveness in distribution approximation tasks. The cross-entropy loss between the true distribution p and the predicted distribution q is defined as:

$\ell_{\text{cross-entropy}}(p, q) = \sum_i p_i\log\frac{1}{q_i}.$

Since our training samples $x_1, \dots, x_M$ are drawn independently from the true distribution p, by the law of large numbers, the empirical loss converges to the cross-entropy:

$\frac{1}{M}\sum_{j=1}^{M}\log\frac{1}{q(x_j)} \longrightarrow \mathbb{E}_{x\sim p}\left[\log\frac{1}{q(x)}\right] = \ell_{\text{cross-entropy}}(p, q).$

This formulation eliminates the need to compute the true probabilities pi, thereby reducing memory requirements.
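A toy sketch of this sample-based estimate: averaging $\log\frac{1}{q(x)}$ over samples drawn from p approaches the exact cross-entropy without ever forming p explicitly. The distributions below are illustrative, not taken from the experiments.

```python
import numpy as np

def cross_entropy(p: np.ndarray, q: np.ndarray) -> float:
    """Exact cross-entropy sum_i p_i * log(1 / q_i)."""
    return float(np.sum(p * np.log(1.0 / q)))

def sampled_cross_entropy(q: np.ndarray, samples: np.ndarray) -> float:
    """Empirical estimate (1/M) sum_j log(1 / q(x_j)); needs only samples from p."""
    return float(np.mean(np.log(1.0 / q[samples])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = np.array([0.5, 0.25, 0.125, 0.125])   # true measurement distribution
    q = np.array([0.4, 0.3, 0.2, 0.1])        # model prediction
    samples = rng.choice(len(p), size=100_000, p=p)
    print(cross_entropy(p, q), sampled_cross_entropy(q, samples))   # nearly equal
```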

To ensure that our neural network correctly represents the quantum state transformation $h(U) = \mathrm{diag}(U\rho U^{\dagger})$, we incorporate two constraints into the loss function. The first is a parameterization invariance constraint: the parameter sets {θ, φ, ψ} and {–θ, φ + π, ψ + π} must produce identical outputs, as they represent the same unitary transformation. The second is a constraint derived from the properties of $f(U) = \mathrm{diag}(U\rho U^{\dagger})$. For the single-qubit case, these properties are derived as follows; they extend naturally to multi-qubit systems through tensor products. Consider the following matrix representations:

$\rho = \begin{pmatrix} \rho_{11} & \rho_{12} \\ \rho_{21} & \rho_{22} \end{pmatrix}, \quad U = \begin{pmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{pmatrix}, \quad U^{-1} = \begin{pmatrix} \overline{u_{11}} & \overline{u_{21}} \\ \overline{u_{12}} & \overline{u_{22}} \end{pmatrix}.$

This yields the functional form:

$f(U) = \left(\rho_{11}|u_{11}|^2 + 2\,\mathrm{Re}\!\left(\rho_{12}\overline{u_{12}}u_{11}\right) + \rho_{22}|u_{12}|^2,\ \ \rho_{11}|u_{21}|^2 + 2\,\mathrm{Re}\!\left(\rho_{12}\overline{u_{21}}u_{22}\right) + \rho_{22}|u_{22}|^2\right)^{T}.$

Under the chosen parameterization, we derive a symmetric relation: $f(U) + f(U^{-1}) = 2\left(\cos^2\!\left(\tfrac{\theta}{2}\right)f(I) + \sin^2\!\left(\tfrac{\theta}{2}\right)f(X)\right),$ where

$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$

Furthermore, let

$H_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad H_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix},$

and $\bar f_1 = f(H_1) - \frac{f(I) + f(X)}{2}$, $\bar f_2 = f(H_2) - \frac{f(I) + f(X)}{2}$, we obtain the antisymmetric component:

$f(U) - f(U^{-1}) = 2\left(\sin(\theta)\cos(\psi)\,\bar f_1 - \sin(\theta)\sin(\psi)\,\bar f_2\right).$

These two constraints are incorporated into the total loss function via $\ell_1$ regularization: $\ell_{\text{total}} = \ell_{\text{cross-entropy}} + \lambda\left(\ell_{\text{symm}} + \ell_{\text{invariant}}\right),$ where λ serves as the regularization coefficient.

We have also investigated replacing the LSTM model with a multi-head attention mechanism, which has shown remarkable success in various sequence modeling tasks. However, LSTM maintains a higher accuracy in our case and offers easier training procedures. This observation suggests that the sequential nature of quantum measurements may be better modeled by the LSTM’s memory structure, which is demonstrated in the following section.

Numerical Experiments

Before proceeding with the experimental analysis, we detail the configuration of our numerical experiments. To quantify the performance of the model, we employ the classical fidelity measure between the predicted distribution p and the true distribution q:

$f(p, q) = \sum_{i=1}^{N}\sqrt{p_i q_i}.$

This metric provides a natural measure of the similarity between the probability distributions of the measurement.
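For reference, a direct implementation of this fidelity measure (a small illustrative sketch, not the evaluation script used in the experiments):

```python
import numpy as np

def classical_fidelity(p: np.ndarray, q: np.ndarray) -> float:
    """f(p, q) = sum_i sqrt(p_i * q_i); equals 1 exactly when the distributions coincide."""
    return float(np.sum(np.sqrt(p * q)))

if __name__ == "__main__":
    p = np.array([0.5, 0.3, 0.2])
    print(classical_fidelity(p, p))                            # 1.0
    print(classical_fidelity(p, np.array([0.4, 0.4, 0.2])))    # slightly below 1
```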

We now detail the hyperparameters employed in our implementation. The model was trained for 50 epochs across all experiments. While this choice lacks rigorous foundation, empirical evidence shows its consistency in achieving convergence across various system sizes. The regularization coefficient λ was scaled dynamically with the particle number, ranging from 5 to 50. For smaller systems, we found that a smaller λ value leads to faster loss convergence, while larger systems benefit from increased λ values to enforce stronger constraints on the neural network’s behavior.

To address the computational complexity of evaluating ℒsymm and ℒinvariant, we implement a stochastic index sampling strategy. Rather than computing losses over all indices, we randomly select 50 indices for loss calculation in each iteration. This approach reduces computational overhead while maintaining the effectiveness of the constraints. To ensure polynomial scaling of computational resources, we impose strategic constraints on the model architecture. The dimensionality of both the RNN hidden states and the word vector embeddings is set to $2n^2$, where n denotes the qubit number.

TFIM Models

For initial validation, we test our model against the ground state of the transverse field Ising model (TFIM), a canonical system for studying quantum phase transitions. The TFIM Hamiltonian is expressed as: $H = -J_z\sum_{\langle i,j\rangle}\sigma_i^{z}\sigma_j^{z} - J_x\sum_{i}\sigma_i^{x},$ where 〈i, j〉 denotes nearest-neighbor interactions, and $\sigma_i^{z}$ and $\sigma_i^{x}$ are Pauli operators acting on site i. We fix the ferromagnetic coupling strength $J_z = 1$ as our energy scale and vary the transverse field strength $J_x \in [0, 2]$ to probe different quantum phases. This parameterization enables us to investigate the system's behavior at the quantum critical point $J_x / J_z = 1$.
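For readers who wish to generate comparable target data, the sketch below builds an open-chain TFIM Hamiltonian with QuTiP and extracts the ground-state measurement distribution in the computational basis; the chain length, open boundary conditions, and function names are our illustrative choices and may differ from the exact setup used in the experiments.

```python
import numpy as np
import qutip as qt

def tfim_hamiltonian(n: int, Jz: float = 1.0, Jx: float = 1.0) -> qt.Qobj:
    """H = -Jz * sum_<i,i+1> sz_i sz_{i+1} - Jx * sum_i sx_i on an open chain."""
    def op(single: qt.Qobj, site: int) -> qt.Qobj:
        ops = [qt.qeye(2)] * n
        ops[site] = single
        return qt.tensor(ops)

    H = qt.qzero([2] * n)
    for i in range(n - 1):
        H = H - Jz * op(qt.sigmaz(), i) * op(qt.sigmaz(), i + 1)
    for i in range(n):
        H = H - Jx * op(qt.sigmax(), i)
    return H

if __name__ == "__main__":
    n = 6
    H = tfim_hamiltonian(n, Jz=1.0, Jx=1.0)          # critical point Jx / Jz = 1
    energy, ground = H.groundstate()
    rho = qt.ket2dm(ground)
    probs = np.real(rho.diag())                      # computational-basis distribution
    print("ground-state energy:", energy)
    print("probabilities sum to", probs.sum())
```

Measurement distributions in rotated bases, diag(UρU†), can then be obtained by combining this state with the local-unitary sampling sketch given earlier.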

The model’s performance, measured by fidelity and illustrated in Figure 2, consistently exceeds 90% across different parameter regimes. These results demonstrate the model’s robustness and its capability to capture essential features of the quantum state’s probability distribution.

Figure 2.

The prediction fidelity for different numbers of qubits. The fidelity is computed by averaging over 400 random samples.

We test the neural network on quantum systems of 6 and 12 qubits at Jx = 1 using 600 random unitary matrices. The prediction fidelity under these transformations is shown as heatmaps in Figure 3. The model achieves prediction fidelity above 90%, validating its effectiveness in capturing the characteristics of the quantum state.

Figure 3.

The performance of the neural network on TFIM states. (a) is for 6-qubit system, and (b) is for 12-qubit system. Both show the accuracy of the neural network predictions on randomly sampled unitary matrices.

We also examine the relationship between sample size and model performance at a fixed transverse field of 1.0. The model accuracy is evaluated using average fidelity across 400 random unitary matrices. Figure 4 indicates that high fidelity can be achieved with limited measurements: 6-qubit systems reach approximately 90% fidelity with 200 samples, while 12-qubit systems require approximately 600 samples for comparable performance. Overall, both the model parameters and required training samples scale polynomially with qubit number, indicating efficient scaling of the neural network architecture.

Figure 4.

The average prediction accuracy as the number of samples increases. Generally, as the number of samples increases, the average prediction accuracy also improves.

Finally, we show how the minimum required sample size varies with qubit number for a given fidelity threshold, as in Figure 5. We use the same method as in Figure 4 to generate results for quantum systems ranging from 6 to 12 qubits, and define the first intersection point with the fidelity level line as the minimum sample size. We select two fidelity levels, 0.9 and 0.95, to plot. Due to the inherent randomness in the data generation and training process, the result is not strictly monotonically increasing as expected. However, the trend follows an approximately power-law relation with a power less than 2.

Figure 5.

Relation between qubit number and sample size. The overall trend suggests a power-law relationship between the sample size and qubit number. The dashed lines represent the fitted curves for the corresponding data.

Comparison between Different Neural Network Models

Given that Cha et al. [25] showed the applicability of attention mechanisms to quantum state tomography, we conduct a comparative analysis between attention-based approaches and our proposed method. We investigate two distinct attention-based implementations: (1) reformulating our quantum state reconstruction task as a sequence-to-sequence translation problem, and (2) substituting the LSTM architecture with a multi-head attention mechanism while maintaining the overall framework. To ensure fair comparison and prevent information leakage, we use causal masking in the attention mechanisms to maintain the sequential nature of quantum measurements. Our empirical results, as illustrated in Figure 6, demonstrate that our LSTM-based architecture outperforms both attention-based variants.

Figure 6.

Performance of different neural network architectures. The fidelity is computed by averaging over 400 random samples. The LSTM model performs well, whereas the attention-based neural networks (ANN (1) and ANN (2)) are unstable.

Other States

To further validate our findings, we extend our analysis to a more complex quantum system: the cat state from quantum optics, which has the form $|\phi_{\mathrm{cat}}\rangle = |\alpha\rangle + |{-\alpha}\rangle$, where $|\alpha\rangle$ is a coherent state and $\alpha \in \mathbb{C}$. While theoretically infinite-dimensional, this state can be effectively studied through dimensional truncation, providing an ideal test case for our method. We generate quantum cat states using the QuTiP library [26]. As illustrated in Figure 7, the neural network demonstrates robust performance in accurately representing this quantum state, further verifying the generalizability of our approach.
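A minimal QuTiP sketch for generating such a truncated cat state; the truncation dimension $2^n$ and the value of α are illustrative choices of ours, not necessarily the settings used in the experiments.

```python
import numpy as np
import qutip as qt

# A cat state |alpha> + |-alpha>, truncated to a 2^n-dimensional Fock space so that
# it can be treated as an n-qubit state; alpha and n are illustrative values.
n = 6
dim = 2 ** n
alpha = 2.0

cat = (qt.coherent(dim, alpha) + qt.coherent(dim, -alpha)).unit()   # normalize
rho = qt.ket2dm(cat)

probs = np.real(rho.diag())          # computational-basis measurement distribution
print("trace:", rho.tr())            # 1.0
print("even Fock components dominate:", probs[:8])
```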

Figure 7.

Performance of the neural network on different quantum states. Each panel displays the accuracy of the neural network predictions on randomly sampled unitary matrices. The first row consists of 6-qubit systems, while the second row comprises 12-qubit systems. From left to right, each column represents a different quantum state: cat state, random state, and W state.

We also conduct experiments on two further quantum states. The first is a randomly generated state with a dense density matrix, produced using QuTiP. From the results in Figure 7, the neural network shows impressive performance on this state, with the fidelity of each measurement being very close to 1. The second is a W state with $n^2$ nonzero elements, where n is the qubit number. It has the following form:

$|\phi_w\rangle = \frac{1}{\sqrt{n}}\left(|100\cdots0\rangle + |010\cdots0\rangle + \cdots + |000\cdots1\rangle\right).$

The results for this state show that the fidelity remains above approximately 0.9.

Conclusion

This work presents a novel approach for quantum sample tomography that differentiates itself from conventional methods. We have developed both theoretical foundations and practical implementations for this challenging problem. Our analysis, grounded in local Rademacher complexity theory, establishes a fundamental theorem that rigorously justifies the application of machine learning method to quantum measurement prediction.

Building upon this mathematical foundation, we have developed a specialized Long Short-Term Memory (LSTM) architecture, incorporating symmetry-preserving constraints and invariance properties inherent to quantum systems. Our model maintains polynomial complexity with system size. Through numerical experiments across various quantum systems, including the TFIM, we demonstrate that our approach consistently achieves good accuracy compared to other methods, including recent attention-based models. However, RNNs have several limitations. One challenge is gradient propagation when processing long sequences of quantum states. Additionally, our model assumes noise-free conditions, and small perturbations in the quantum system may lead to large changes in the RNN parameters. Future work will focus on improving the neural network performance for large-scale quantum systems.
