The method of detection and localization of configuration defects in geodetic networks by means of Tikhonov regularization

Dec 17, 2021

The problem of defects in a geodetic network

The numerical elaboration of a geodetic network usually consists of two stages. First, an initial diagnostics of the data is performed, which allows the detection and removal of various types of observational errors and geometric defects. Second, the final elaboration of the network is performed, including the adjustment of observations and the calculation of the coordinates of points. Defects in a geodetic network, that is, situations requiring identification and elimination, can be divided into several types, as shown in Figure 1.

Figure 1

Defects in geodetic network: outliers and geometric (datum or configuration) defects.

A metric defect in observations (an outlier) means that the absolute value of the observational correction (error estimator) exceeds a certain threshold level (depending on the adopted probabilistic model and the significance level). Statistical methods are used to identify such errors, but a correct result depends on the network's local reliability (local redundancy) (Baarda, 1968; Prószyński and Kwaśniak, 2019). The reliability of a geodetic network is related to the conditioning of the normal matrix in the problem of adjusting the geodetic network by the least-squares method. In general, the point is that in ill-conditioned systems, detection of a possible metric defect (outlier) can be difficult or impossible. The difficulty in identifying outliers is also due to the least-squares principle itself, because this method causes a kind of ‘spilling’ of errors over the network elements located in the vicinity of the outlying observation. In addition, the identification task becomes more difficult as the size of the observational system increases. Therefore, other rules for observation adjustment have been formulated; the so-called robust estimation, in contrast to the averaging character of the least-squares method, differentiates the influence of individual observations. These methods replace the least-squares criterion with an objective function built of piecewise (spline) components (the proposals of Hampel (1971), Huber (1964) and many others) or of regular but non-quadratic components (Holland and Welsch, 1977; Kadaj, 1978, 1988). A comprehensive review of robust estimation methods can be found in the book by Wiśniewski (2013).

The outlier phenomenon may also concern the coordinates of the tie points. A straightforward method of detecting such errors was presented by Prof. Hausbrandt (Hausbrandt, 1954; Janusz, 1958). It assumes that the coordinates of reference points are treated as pseudo-observations with specific mean errors, estimated on the basis of information on the accuracy of point determination in the network of the appropriate class. The magnitude of the correction to such a pseudo-observation indicates the possible existence of an outlier. The principles of robust estimation can also be used for the same purpose, although the Hausbrandt method seems sufficiently effective.

This article is not a complete dissertation on the above subject. We will focus mainly on the problem of detecting configuration defects, which – as shown by practical experience – are a very common phenomenon and difficult to identify, especially in large data sets.

The geometric (topological) defect (u = uref + uobs) can be classified into two types, namely the datum defect (missing reference elements) and the configuration defect (a defect of the network's observational structure). The first type means that a certain number uref of elementary data items needed to fix the position of the object (network) in the coordinate system is missing. This may be an intended situation, in order to perform the so-called free network adjustment (Mittermayer, 1972), but it may also be the result of an unintentional mistake. A configuration defect means the lack of a certain number (uobs) of observations required for network determination. The reason may be not only the omission of certain observations, but also numbering errors of points in the measurement plans. While the reference defect is easily identifiable, as it is a matter of a few parameters, a configuration defect is usually difficult to spot, especially in geodetic networks of considerable size in terms of the number of points and observations. The main topic of this publication is the ability to identify network configuration defects automatically.

The functional and stochastic model of a geodetic network

The functional model of a geodetic network can be presented in the form of a vector equation (col = single-column matrix):

$$ L + V = F(X), \quad (1) $$

where:

L = col[Li : i = 1, 2, . . . , m] is an observational vector, with some covariance matrix C,

P = C−1 is the weight matrix,

V = col[Vi : i = 1, 2, . . . , m] is a vector of unknown corrections,

X = col[Xj : j = 1, 2, . . . , n] is a vector of unknown parameters (point coordinates),

F(X) = col[Fi(X) : i = 1, 2, . . . , m] is a vector function of the vector X.

Let X0 denote the vector of approximate coordinates – the approximation of the vector X. The linearized Equation (1) has the form:

$$ v = A \cdot x - l, \quad (2) $$

where:

A = (dF/dX)(0) = [aij](m×n); (m ≥ n); aij = (dFi/dXj)(0)

x = X − X0; l = L − F(X0)

v = vector of observational corrections in the linearized system (2).
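To make the linearization concrete, here is a minimal numerical sketch (with invented coordinates and a hypothetical helper `distance_row`; NumPy assumed) of how one row of the system v = A · x − l is built for a single measured distance:

```python
import numpy as np

# Hypothetical sketch, not from the paper: one row of the linearized
# observation equations for a measured distance between two points.

def distance_row(Xa, Xb):
    """Row of A for a distance A-B, unknowns ordered (dxA, dyA, dxB, dyB),
    plus the distance computed from the approximate coordinates."""
    dx, dy = Xb[0] - Xa[0], Xb[1] - Xa[1]
    d0 = np.hypot(dx, dy)
    cx, cy = dx / d0, dy / d0        # direction cosines
    return np.array([-cx, -cy, cx, cy]), d0

XA0 = np.array([0.0, 0.0])           # approximate coordinates X0 (invented)
XB0 = np.array([100.0, 0.0])
L_obs = 100.012                      # measured length (invented)

a_row, d0 = distance_row(XA0, XB0)
l = L_obs - d0                       # free term l = L - F(X0)
print(a_row, l)
```

For a real network, one such row per observation is stacked into the matrix A and the vector l.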

The least-squares solution for Equation (2) is the minimization of the function:

$$ \Phi(x) = v^T \cdot P \cdot v = (A \cdot x - l)^T \cdot P \cdot (A \cdot x - l). \quad (3) $$

From the necessary condition for the extremum, we get the normal equation:

$$ A^T \cdot P \cdot A \cdot x = A^T \cdot P \cdot l \quad \text{or} \quad B \cdot x = w, \quad (4) $$

where

$$ B = A^T \cdot P \cdot A, \qquad w = A^T \cdot P \cdot l, \quad (5) $$

B being the (n × n) symmetric normal matrix with rank(B) = n − u, u ≥ 0, and w the (n × 1) vector of absolute terms.

If u = uref + uobs > 0, then the normal equation is singular, generating a linear subspace of solutions. In particular, if uref > 0 (datum defect) and the value of the defect is known, while uobs = 0, then it is possible to apply ‘free adjustment’ (e.g. Mittermayer (1972)) using the so-called pseudo-inversion (generalized Moore–Penrose inversion; see more details in Bjerhammar (1973); Moore (1920); Penrose (1955)). An important feature of such a situation is that the size of the defect is usually known in advance. A completely different case occurs when a defect is unknown, both in terms of its value and its situational location in the network. We only know that it exists because the matrix of the system of equations is singular. Detection and localization of such a defect are the main issues of this work.
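The existence (though not the location) of such a defect can be checked numerically from the rank of the normal matrix; a minimal illustrative sketch in NumPy, with an invented two-observation system:

```python
import numpy as np

# Tiny illustration (invented data): the defect u can be detected,
# though not located, by comparing rank(B) with n.
A = np.array([[1.0, -1.0, 0.0],     # two observations, three unknowns,
              [0.0,  1.0, -1.0]])   # e.g. two height differences
P = np.eye(2)                        # unit weights
B = A.T @ P @ A                      # normal matrix, n = 3
u = B.shape[0] - np.linalg.matrix_rank(B)
print(u)                             # u = 1: the normal equation is singular
```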

Least-squares adjustment in the case of a known defect

In the case of a non-zero and known defect, an unequivocal solution of the normal equation (4) may be obtained after applying an additional condition (or conditions) to the vector of unknowns x. The universal condition used in applications has the form:

$$ x^T \cdot x = \min. \quad (6) $$

Then the unequivocal solution is written in matrix notation using the so-called generalized inversion (pseudo-inversion or Moore–Penrose inversion; see Moore (1920); Penrose (1955)):

$$ x = B^{\dagger} \cdot w. \quad (7) $$

The pseudo-inverse B†, for which the solution (7) satisfies condition (6), is defined as follows (Deutsch, 1965). Let M, N be matrices of size (r × n) with rank(M) = rank(N) = r that satisfy the condition B = M^T · N. Then

$$ B^{\dagger} = N^T \cdot (N \cdot N^T)^{-1} \cdot (M \cdot M^T)^{-1} \cdot M. \quad (8) $$

In particular, if we assume M = N = R, then B = R^T · R and

$$ B^{\dagger} = R^T \cdot (R \cdot R^T)^{-1} \cdot (R \cdot R^T)^{-1} \cdot R. \quad (9) $$

The above condition is fulfilled by the R matrix resulting from the Cholesky–Banachiewicz factorization of the B matrix. It has a ‘trapezoidal’ form (Figure 2).
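Formula (9) can be verified numerically against a general-purpose pseudo-inverse routine; the trapezoidal factor R below is invented for illustration (NumPy assumed):

```python
import numpy as np

# Sketch of formula (9) with an invented trapezoidal factor R (r = 2, n = 3),
# so that B = R^T·R is singular with defect u = 1.
R = np.array([[2.0, 1.0, 0.5],
              [0.0, 1.5, 1.0]])
B = R.T @ R

RRt_inv = np.linalg.inv(R @ R.T)
B_dagger = R.T @ RRt_inv @ RRt_inv @ R            # formula (9)

print(np.allclose(B_dagger, np.linalg.pinv(B)))   # True
```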

Figure 2

Structure of the R matrix (0 – elements assumed to be zero, X – other elements of the matrix), r – rank of the normal matrix B, u = nr = defect of the normal matrix.

Note that in the case of a zero defect (u = 0), B† = B⁻¹, and formula (9) takes the form:

$$ B^{\dagger} = R^T \cdot (R^T)^{-1} \cdot R^{-1} \cdot (R^T)^{-1} \cdot R^{-1} \cdot R = I \cdot R^{-1} \cdot (R^T)^{-1} \cdot I = (R^T \cdot R)^{-1} = B^{-1}. \quad (10) $$

Another way of determining the pseudo-inverse is the spectral decomposition of the normal matrix. This matrix is symmetric and non-negative definite, so the spectral decomposition takes the form:

$$ B = S \cdot \Lambda \cdot S^T; \quad \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_r, o_1, o_2, \ldots, o_u), \quad (11) $$

where

o1 = o2 = . . . = ou = 0,

S = matrix of eigenvectors as an orthonormal matrix: ST · S = I,

Λ = diagonal matrix of eigenvalues: λ1 ≥ λ2 ≥. . . ≥ λr > 0,

then

$$ B^{\dagger} = S \cdot \Lambda^{\dagger} \cdot S^T; \quad \Lambda^{\dagger} = \mathrm{diag}(\lambda_1^{-1}, \lambda_2^{-1}, \ldots, \lambda_r^{-1}, o_1, o_2, \ldots, o_u). \quad (12) $$
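The spectral route (11)–(12) admits a similar numerical check; the singular matrix below is invented for illustration:

```python
import numpy as np

# Sketch of the spectral decomposition route on an invented singular matrix.
R = np.array([[2.0, 1.0, 0.5],
              [0.0, 1.5, 1.0]])
B = R.T @ R                       # rank r = 2, n = 3, defect u = 1

lam, S = np.linalg.eigh(B)        # B = S · diag(lam) · S^T
lam_dag = np.array([1.0 / v if v > 1e-10 else 0.0 for v in lam])
B_dagger = S @ np.diag(lam_dag) @ S.T             # zero eigenvalues stay zero

print(np.allclose(B_dagger, np.linalg.pinv(B)))   # True
```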

Thus, using formula (8) or (9), we can adjust the network if the defect value is known. Typically, this situation is related to the datum defect and is intentional, in the context of the so-called free network adjustment, consisting in the optimal fit of the network into a figure defined by the approximate coordinates with simultaneous adjustment of the observations (Mittermayer, 1972). A troublesome problem arises, however, if a non-zero defect of a network is unknown and results from mistakes in the network elaboration process. This will mainly be a network configuration defect (uobs > 0); however, missing observational data or errors in point numbering may also result in missing reference elements, that is, a datum defect. Usually, such a situation is of an accidental nature, for example, the result of an inadvertent omission of certain observations, necessary for network determination, from the data sets, or the result of errors in point identifiers in the observation plans of the geodetic network.

Similarly to a gross observation error, a network configuration defect will generally not be easy to identify and locate, especially in large networks with a variety of observational data. For this purpose, procedures using the exact pseudo-inversion formula (8) or (9) will not (in principle) be effective. First of all, they require knowledge of the defect value itself, not to mention its location in the network. Is it possible, then, to resolve the above issues in an ‘automatic’ manner?

The problem of detecting and locating configuration defects can be solved using the Tikhonov regularization method. This method was formulated to solve singular, linear algebraic equations (Tikhonov, 1963, 1965). It should be mentioned here that the problem of regularization also refers to the original works: Levenberg (1944); Marquardt (1963, 1970); Phillips (1962). Phillips’ work formulated the problem of regularization when solving integral equations. The work of Levenberg and the first of Marquardt's works concerned the iterative solution of the systems of non-linear equations as a method optimising the convergence of the iterative process. Another work by Marquardt (1970) already referred to the problems of regularization in the iterative process.

Tikhonov procedure as an alternative to generalized inversion and as a method for detecting configuration defects
General formula for Tikhonov regularization

Tikhonov regularization (Tikhonov, 1963, 1965) is based on the condition:

$$ v^T \cdot P \cdot v + \alpha \cdot x^T \cdot x = \min, \quad (13) $$

from which:

$$ (B + \alpha I) \cdot x_\alpha = w \;\Rightarrow\; x_\alpha = (B + \alpha I)^{-1} \cdot w = B_\alpha^{-1} \cdot w; \qquad B_\alpha = B + \alpha I, \quad I = \text{unit matrix}, \quad (14) $$

where α > 0 is a ‘small’ number on which the difference between solutions (14) and (7) depends.

If α decreases to zero, then solution (14) approaches (7) (Bjerhammar, 1973):

$$ \lim_{\alpha \to 0} B_\alpha^{-1} \cdot w = B^{\dagger} \cdot w, \quad (15) $$

but the matrix Bα⁻¹ itself is not an appropriate approximation of the pseudo-inverse B†. This approximation is the matrix

$$ Q_\alpha = B_\alpha^{-1} \cdot B \cdot B_\alpha^{-1} = B_\alpha^{-1} \cdot (I - \alpha \cdot B_\alpha^{-1}) = B_\alpha^{-1} - \alpha \cdot (B_\alpha^{-1})^2 \approx B^{\dagger}, \quad (16) $$

based on a theorem compatible with (15):

$$ \lim_{\alpha \to 0} Q_\alpha = B^{\dagger} = Q \quad (17) $$

(matrix B and vector w are expressed by the formulas (4) and (5)).

The matrices Q and Qα are also the theoretical covariance matrices of the estimated parameters (coordinates of the network points) for the exact and the approximate (regularized) solution of the system, respectively, assuming that the standard deviation of an observation of unit weight is s0 = 1.

Influence of the regularization parameter on the results of Equation (13)

By subtracting the normal equation (4) (least squares) from the normal equation (14) (least squares with Tikhonov regularization), we can estimate the size of the difference between the solutions using the relative error η:

$$ B \cdot (x_\alpha - x) + \alpha \cdot x_\alpha = 0 \quad \text{or} \quad B \cdot \Delta x = -\alpha \cdot x_\alpha, \qquad \Delta x = x_\alpha - x, \quad (18) $$

$$ \Delta x = -\alpha \cdot B^{\dagger} \cdot x_\alpha, \quad (19) $$

$$ \eta = \|\Delta x\|_E / \|x_\alpha\|_E \le \alpha \cdot \|B^{\dagger}\|_S = \alpha \cdot \|Q\|_S = \alpha \cdot \lambda_r^{-1}, \quad (20) $$

where ‖ · ‖E is the Euclidean vector norm (associated with the spectral norm ‖ · ‖S of a matrix) and λr is the smallest non-zero eigenvalue of the matrix B; λr⁻¹ is then the largest eigenvalue of the matrix Q = B†. We can see that the relative error depends significantly on the value of the regularization parameter in relation to the smallest non-zero eigenvalue. In the literature, e.g. Bell and Roberts (1973), we also find the estimate η ≤ α · (λr + α)⁻¹, which, under the condition α ≪ λr, does not differ significantly from (20). In any case, however, an estimate using the eigenvalue λr is not practically verifiable unless we know the spectral decomposition of the matrix B, and such an operation is unnecessary and too expensive. The value of λr⁻¹ can instead be estimated with some excess by assuming an expected maximum value μx of the mean error of a point coordinate in the network. From the properties of the trace Tr of a matrix, we have:

$$ \lambda_r^{-1} \le \sum_{i=1,\ldots,r} \lambda_i^{-1} = \mathrm{Tr}(Q) \le n \cdot \mu_x^2. \quad (21) $$

We will now return to (20) using the last estimate:

$$ \eta \le \alpha \cdot n \cdot \mu_x^2. \quad (22) $$

If the relative error is to be smaller than ηmax, the parameter α should be taken as:

$$ \alpha < \eta_{max} \cdot (n \cdot \mu_x^2)^{-1} = \alpha_{max}. \quad (23) $$

For example, let ηmax = 0.01, μx = 0.1 m and n = 1000 (the number of unknowns); then α < 0.001.

The parameter α should also be limited from below because of rounding errors in digital computation (Wilkinson, 1994). For adding α to the diagonal elements of the matrix B to be a significant operation, its value should exceed the rounding-error level of those diagonal elements. Taking (23) into account, we get the selection range (Kadaj, 1979):

$$ \alpha \in \left( \max\{b_{ii}\} \cdot 2^{-t}, \; \alpha_{max} \right), \quad (24) $$

where bii is a diagonal element of the matrix B and t is the number of significant binary digits in the floating-point representation of a real number.
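As an illustration of the selection range (24), the following sketch computes both bounds for invented values of the normal-matrix diagonal and the accuracy parameters (IEEE-754 float64 assumed, so t = 53):

```python
import numpy as np

# Illustrative computation of the range (24) for alpha; the normal-matrix
# diagonal and the accuracy figures below are invented for the sketch.
b_diag_max = 25.0                    # assumed max diagonal element of B
t = np.finfo(float).nmant + 1        # 53 significant binary digits (float64)
eta_max, mu_x, n = 0.01, 0.02, 500   # assumed error budget

alpha_low = b_diag_max * 2.0 ** (-t)      # rounding-error floor
alpha_max = eta_max / (n * mu_x ** 2)     # upper bound, formula (23)
print(alpha_low, alpha_max)               # alpha must lie between these
```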

The estimation of the relative error of the Tikhonov regularization method given above concerns the case of a one-time solution according to formula (14) or, equivalently, formula (13), in relation to formulas (4) and (6). In practice, regularization is used in iterative processes that correct a one-time solution (e.g. Bakushinskii (1992); Bell and Roberts (1973); George (2010); Hanke and Groetsch (1998); Marquardt (1963); Mehsner (2013); Scherzer (1993)). In particular, this applies to non-linear least-squares tasks using the Gauss–Newton iterative procedure, and thus also to the adjustment of geodetic networks.

Regardless of whether the solution (14) is used once or is refined in an iterative process, the covariance matrix of the unknown vector determined by formula (13) is approximate. The assessment of the relative error of this matrix, in terms of the relative increase of the spectral norm, leads to a restriction on the parameter α analogous to (24), but with a ‘sharper’ upper limit (Kadaj, 1979):

$$ \alpha_{max} = \frac{1}{2} \cdot \eta_{max} \cdot (n \cdot \mu_x^2)^{-1}. \quad (25) $$

Using Tikhonov's regularization, is it possible to detect and locate a configuration defect in a geodetic network? The covariance matrix Qα expressed by formula (16) and the pseudo-inverse B† do not contain information that would allow the identification and location of the defect. Such information is provided by the matrix Bα⁻¹. Using the spectral decompositions, the traces of these matrices can be compared:

$$ \mathrm{Tr}(B^{\dagger}) = \lambda_1^{-1} + \ldots + \lambda_r^{-1} + 0 + 0 + \ldots + 0 = C_1 + 0, \quad (26) $$

$$ \mathrm{Tr}(Q_\alpha) = \lambda_1 \cdot (\lambda_1 + \alpha)^{-2} + \ldots + \lambda_r \cdot (\lambda_r + \alpha)^{-2} + 0 + 0 + \ldots + 0 = C_2 + 0, \quad (27) $$

$$ \mathrm{Tr}(B_\alpha^{-1}) = (\lambda_1 + \alpha)^{-1} + \ldots + (\lambda_r + \alpha)^{-1} + \alpha^{-1} + \alpha^{-1} + \ldots + \alpha^{-1} = C_3 + u \cdot \alpha^{-1}, \quad (28) $$

where u is the network defect. Since α is small in relation to λr, the approximate equality C1 ≈ C2 ≈ C3 holds, and the dominant large component u · α⁻¹ appears only in the trace of the matrix Bα⁻¹. It is located in the diagonal elements of the matrix inverse to Bα, or more precisely, in the elements corresponding to the coordinates of undetermined points. The configuration defect is thus identified by large values of the a posteriori mean errors of the respective coordinates (the diagonal elements of the matrix Bα⁻¹ are the squares of the mean errors). The next section focuses on the choice of the parameter α for this purpose.
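The diagnostic described above can be reproduced on a toy example: in the invented one-dimensional ‘levelling chain’ below, one height-difference observation is missing, so two heights are undetermined; after regularization, their a posteriori mean errors stand out by orders of magnitude (NumPy assumed):

```python
import numpy as np

# Toy illustration (all values invented): a levelling chain 1-2-3-4 in
# which the observation tying point 3 to point 2 is lost, so heights
# h3 and h4 float relative to the fixed part. Unknowns: h1..h4.
A = np.array([
    [1.0,  0.0,  0.0, 0.0],   # pseudo-obs: h1 = 0 (reference)
    [-1.0, 1.0,  0.0, 0.0],   # height difference measured 1 -> 2
    [0.0,  0.0, -1.0, 1.0],   # height difference measured 3 -> 4
])                            # (the link 2 -> 3 is missing!)
B = A.T @ A                   # singular: rank 3, n = 4, defect u = 1

alpha = 1e-4                  # i.e. mu_c = 100 in the interpretation above
d = np.diag(np.linalg.inv(B + alpha * np.eye(4)))
print(np.sqrt(d))             # huge values flag the undetermined points
```

The square roots of the diagonal elements stay near 1 for the determinable points and jump to about 70 for points 3 and 4, pinpointing the defect.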

Selecting the parameter α for detection and location of a configuration defect

Let the regularization parameter α be interpreted as α = 1/μc², where μc denotes the a priori expected mean error of the coordinates treated as unknown parameters (point coordinates). In a probabilistic and statistical sense, this also means applying the idea of Bayesian estimation. Contrary to the commonly used Gauss–Markov model of observation adjustment, Bayesian estimation additionally uses a priori information (a stochastic model) for the unknown parameters. In our case, on the one hand, this frees the task from the defect and the singularity of the normal matrix; on the other hand, it makes it possible to identify missing observations or observations given with erroneous point numbering. In the general model of Bayesian estimation (e.g. Bossler (1972)), instead of the scalar matrix α · I used here, a full covariance matrix CX may appear as additional a priori information about the determined unknowns.

Suppose the matrix B(n×n) of the system of normal equations is singular due to a network defect: rank(B) = r < n. Assuming, for example, a priori mean errors μx = μy = μc = 100 m for the coordinates of all points determined in the network, that is, the weight α = 1/μc² = 0.0001, we add pseudo-observations that do not significantly affect the results of the network adjustment but make the system of normal equations non-singular (the network becomes determinable; the normal matrix becomes Bα = B + αI and is strictly positive definite, with full rank n). Let us add that all eigenvalues of the matrix B (including the zero ones) are increased by the added parameter (λi′ = λi + α; i = 1, 2, . . . , n), so the inverse matrix Bα⁻¹ = (B + α · I)⁻¹ exists and has positive eigenvalues 1/λi′ = 1/(λi + α).

In our case, the upper limit results from the assumption that the diagonal elements of the inverse matrix Bα⁻¹ for indeterminate points differ significantly in size from the diagonal elements for fully determinable points. If the network were completely determinable (with zero defect u), then the diagonal elements of this matrix (assuming the use of standardized observations with a unit a priori mean error) would be the squares of the coordinate mean errors. For example, for a horizontal geodetic network, the use of the regularization formula means that we add to the existing system the equations dx = 0, dy = 0 (in the previously adopted notation, dx and dy are components of the vector x) with assumed a priori, artificially large standard deviations (mean errors), for example 100 m, and with correspondingly small weights (as above), so as to reveal undetermined points through sufficiently large values of the resulting mean errors. For this purpose, we assume the following condition for selecting the regularization parameter (the left part as in (24)):

$$ \max\{b_{ii}\} \cdot 2^{-t} < (k \cdot \mu_{max})^{-2} \le \alpha \ll \mu_{max}^{-2} \quad (\text{‘}\ll\text{’ signifies a strong inequality}), $$

where μmax is the expected (approximate) maximum error of a coordinate of a determinable point in the network, k is the multiple of the assumed mean error for the pseudo-observations dx = 0, dy = 0, for example k = 10, 100, 1000, . . . (see the attached numerical example), and t is the number of significant binary digits of a real number.
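A small sketch of this selection rule with invented figures (μmax, k and the largest diagonal element of B are assumptions made for the illustration):

```python
import numpy as np

# Sketch of the selection condition above: the pseudo-observation standard
# deviation is k·mu_max, hence the weight alpha = (k·mu_max)^-2.
mu_max, k = 0.1, 1000             # mu_max = 0.1 m, k = 1000 -> mu_c = 100 m
alpha = 1.0 / (k * mu_max) ** 2   # weight of the dx = 0, dy = 0 pseudo-obs

b_max = 1.0e6                     # assumed largest diagonal element of B
t = np.finfo(float).nmant + 1     # 53 significant binary digits (float64)
ok = b_max * 2.0 ** (-t) < alpha < mu_max ** (-2)
print(alpha, ok)                  # alpha = 1e-4 satisfies both inequalities
```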

Illustrative example of detection and location of configuration defects in a large traverse geodetic network

The following example illustrates the possibility of detecting configuration defects using the Tikhonov regularization. The object of the test is a relatively large geodetic network with a traverse structure (Figures 3 and 4). The network includes 6227 points with 448 reference points, 6298 angular observations, 1487 directional observations at 391 stations and 6829 length measurements.

Figure 3

A symbolic sketch of an extensive geodetic network in a district in Poland. In the Polish classification, it is a third-class network related to the network of classes I and II. The sketch shows a place in the network where configuration defects have been artificially introduced.

Figure 4

Localized configuration defects (numerical results in Table 1).

In the professional adjustment of this network, the following standard deviations of observations were assumed: for directional observations, 28 [cc]; for length measurements, a constant standard deviation of 0.016 m and a length-proportional component of 0.0002 m/km. The average side length is approx. 350 m. In the resulting report, the estimated average standard deviation of a point position was 0.038 m.

Artificial configuration defects were introduced in the network by excluding five angles at the vertices of the traverse marked in Figure 4. The classical least-squares adjustment algorithm signals the singularity of the normal matrix and the program is stopped. We know that the defect exists, but we are unable to locate it and quantify it. In practical situations, detecting configuration defects in such a large observation system would not be an easy task.

The Tikhonov regularization procedure becomes helpful here. In accordance with the example provided in section 4.3, we assume the parameter α = 1/10,000 (according to the assumption μx = μy = 100 m). Table 1 shows a fragment of the network adjustment report, where it is clearly visible that the resultant point error takes significantly large values precisely for the points at which the traverse angles are missing. These values, however, are smaller than 100 m because they occur in the vicinity of 5 points, and the observations (angles and lengths) measured at the neighbouring points ‘work’, in a sense, for the points with configuration defects.

Fragment of the result set (coordinates and accuracy parameters) of the adjustment of the network containing configuration defects, using Tikhonov regularization with the parameter α = 1/10,000 (assuming a priori standard deviations of the coordinates μx = μy = 100 m)

Point No x [m] y [m] s [m] a [m] b [m] θ [grad]
11351051 5734128.7313 7593963.7917 0.0229 0.0165 0.0159 114.89
11351052 5733836.1122 7593815.7940 0.0337 0.0265 0.0207 125.33
11351031 5734659.9622 7593795.2190 0.0338 0.0289 0.0175 47.35
11351030 5734794.1818 7593605.8420 0.0555 0.0500 0.0241 43.80
11351029 5734961.8797 7593605.2604 0.0691 0.0633 0.0277 57.73
11351028 5735150.9054 7593600.7769 0.0883 0.0820 0.0329 69.19
11351027 5735344.5530 7593600.1071 99.9993 99.9993 0.0506 99.78
11351026 5735593.1898 7593599.2057 90.0123 90.0123 0.0558 99.77
11351025 5735845.2708 7593593.3791 43.5952 43.5952 0.0460 101.09
11351024 5736163.6210 7593598.8395 0.0848 0.0792 0.0302 74.12
11351023 5736508.7176 7593598.4821 0.0569 0.0512 0.0249 55.53
11351022 5736792.5664 7593337.1697 0.0276 0.0214 0.0174 61.38
11351112 5737672.6161 7592856.1053 0.0273 0.0207 0.0178 56.84
11351102 5737876.8970 7592817.7828 0.0304 0.0232 0.0196 49.43
11351101 5738021.9520 7592509.6190 0.0357 0.0276 0.0226 49.04

Notations: a, b, θ – parameters of the standard error ellipse, where a, b (b ≤ a) are the semi-axes and θ is the directional angle of the semi-axis a; s = (μx² + μy²)^{1/2} – the estimated standard deviation of the point position; x, y – Cartesian Gauss–Krüger coordinates in the PL-1992 system.

Table 2 shows the analogous results of the actual network adjustment (without defects). Comparing the errors of points that do not have a direct observational relationship with the ‘defect’ points in Tables 1 and 2, we find that the Tikhonov regularization practically does not change the accuracy parameters of the network, except for the defective places.

Fragment of the result set (corresponding to the data in Table 1) without defects and without Tikhonov regularization

Point No x [m] y [m] s [m] a [m] b [m] θ [grad]
11351051 5734128.7307 7593963.7936 0.0209 0.0153 0.0143 20.80
11351052 5733836.1113 7593815.7965 0.0310 0.0237 0.0199 126.34
11351031 5734659.9605 7593795.2131 0.0263 0.0208 0.0162 53.54
11351030 5734794.1760 7593605.8299 0.0389 0.0322 0.0218 48.54
11351029 5734961.8766 7593605.2400 0.0445 0.0379 0.0234 58.27
11351028 5735150.9048 7593600.7438 0.0500 0.0431 0.0254 66.20
11351027 5735344.5553 7593600.1044 0.0538 0.0467 0.0266 71.41
11351026 5735593.1955 7593599.3810 0.0558 0.0488 0.0270 74.54
11351025 5735845.2670 7593593.0363 0.0545 0.0477 0.0263 74.50
11351024 5736163.6139 7593598.8498 0.0487 0.0421 0.0245 69.68
11351023 5736508.7137 7593598.4877 0.0392 0.0324 0.0221 57.19
11351022 5736792.5638 7593337.1718 0.0230 0.0166 0.0160 87.27
11351112 5737672.6161 7592856.1053 0.0262 0.0199 0.0171 56.84
11351102 5737876.8970 7592817.7828 0.0292 0.0223 0.0189 49.43
11351101 5738021.9520 7592509.6190 0.0343 0.0265 0.0217 49.04

Designations as in Table 1.

The described method of automatic detection and localization of configuration defects, based on the Tikhonov regularization idea, was implemented in 1993 in the GEONET geodetic calculation system (www.geonet.net.pl). The test calculations presented in this paper were performed by means of the programs of this system.

Conclusions

The article presents the methodology of automatic detection of configuration defects in geodetic networks. This issue is important, especially in large networks, at the stage of initial measurement processing, where detection of possible data defects is usually a very labour-intensive task. Computer programs for adjusting geodetic networks ‘recognise’ the geometric structure of the network based on the identifiers (numbers) of points in the observation plans, in the list of approximate coordinates and in the list of tie point coordinates. In case of errors in point identifiers, the network structure is incorrectly defined. In particular, errors in identifiers may result in situations similar to the case of missing data.

Both the theoretical basis and numerous practical tests performed with the GEONET system programs show that the described method, based on the Tikhonov regularization idea, makes it possible to automatically detect configuration defects in a geodetic network, which the computer program interprets as gaps in the data sets. In ‘raw’ observational data, as practice teaches, such gaps will often be the result of errors in point identifiers.

When choosing a regularization parameter, we follow its pseudo-probabilistic (pseudo-Bayesian) interpretation. In this interpretation, the regularization parameter is a very small value of the weight of a coordinate of a determined point (e.g. 0.0001), which corresponds to an ‘excessively’ large value of the standard deviation of that coordinate (e.g. 100 m). Such an assumption does not significantly affect the adjustment results for the part of the network that is free from configuration defects, but it makes it possible to identify defect sites through large values of the diagonal elements of the inverse of the regularized normal matrix.

The problem of configuration defects may occur in any real observation system, including hybrid networks (satellite-terrestrial, integrated). The given example of the classical network is only an illustration of the problem and its solution using the Tikhonov regularization.

Regardless of the possibility of applying the Tikhonov regularization, there are other methods supporting the identification of configuration defects in a geodetic network. One of them, implemented in the GEONET system, consists in analysing the topological structure of the network by counting the number of independent observations that determine each point of the network in relation to the other points. However, it is easy to give examples in which such an approach will not work, for example, a network fragment that is not connected to the rest of the network and has no tie points, yet satisfies the condition of correct (relative) determinability among its own points.

eISSN: 2391-8152
Language: English
Publication timeframe: Volume Open
Journal Subjects: Computer Sciences, other; Geosciences; Geodesy, Cartography and Photogrammetry