Bayesian-Informed Fatigue Life Prediction for Shallow Shell Structures

Jul 07, 2025

INTRODUCTION

Crack propagation analysis plays a critical role in assessing structural durability and informing damage tolerance design. This approach assumes that initial flaws, introduced during manufacturing or handling, may be present even before the structure enters service. With sufficient information regarding crack geometry, material properties, and loading spectrum, fracture mechanics approaches can be used to determine or optimise inspection intervals, ensuring that cracks will not grow to critical sizes between inspections (Davidson et al., 2003). Shell structures are widely used in the aerospace and marine industries due to their superior capacity to withstand pressure loads. Accurately predicting the fatigue life of such structures is therefore essential for guiding damage tolerance design and inspection protocols to ensure structural integrity. Ideally, fatigue life should be estimated from the actual initial flaw size to the critical crack length by accounting for both short and long crack propagation phases. However, two main challenges arise: (1) typical initial flaw sizes are extremely small and often undetectable by conventional non-destructive inspection (NDI) methods, making routine inspections impractical, especially in fleet-wide applications; and (2) short crack growth behaviour is highly sensitive to microstructural effects and is thus difficult to model accurately (Navarro, 1988).

To address these issues, the USAF MIL-A-83444 standard recommended a deterministic initial flaw size of 0.005 inch (Wood & Eagle, 1979). Later, the USAF DTDH-0016 (Miedlar et al., 2002) introduced a probabilistic approach that considers an Equivalent Initial Flaw Size Distribution (EIFSD), accounting not only for manufacturing defects but also for structural details and the stochastic nature of fatigue crack growth. EIFS is often estimated by back-calculating from a predefined total life to an assumed initial crack length at time zero. Earlier approaches employed the Kitagawa–Takahashi (KT) diagram (Kitagawa & Takahashi, 1976), which combines the fatigue limit and long crack growth threshold to define a non-propagating region. A modified KT diagram (Maierhofer et al., 2015) improves prediction accuracy by incorporating finite notch depths and the gradual build-up of crack closure. Related studies have also used the KT framework to analyse manufacturing and corrosion defects, providing insight into the transition from short to long crack growth and the underlying fatigue strength (Balbín et al., 2021; Bergant et al., 2023).

EIFS can also be estimated by performing reverse crack growth analysis using S–N data fitted with statistical distributions such as two-parameter lognormal or Weibull (Davidson et al., 2003). More recent studies have adopted Bayesian inference, treating EIFS as a model parameter that can be statistically calibrated using inspection or fatigue data. The work by Makeev et al. (2007) and Cross et al. (2007) introduced EIFS as a probabilistic parameter, applying Bayesian updating and Maximum Likelihood Estimation (MLE) to calibrate EIFSDs. Sankararaman et al. (2010) extended this methodology to more complex structural geometries and multiaxial variable amplitude loading, incorporating finite element analysis, surrogate modelling, and crack growth simulation. In later work, the same team (Sankararaman et al., 2011) proposed an improved approach by explicitly modelling various sources of uncertainty, such as loading variability, model error, and experimental noise, rather than aggregating them into a single noise term.

The calibration of EIFS or EIFSD typically relies on data from in-service maintenance or overhaul inspections. However, obtaining such data is often resource-intensive and impractical. In this study, the Dual Boundary Element Method (DBEM) is used to generate synthetic inspection data for shallow shell structures as a proof-of-concept, aiming to demonstrate the feasibility of the proposed approach when real inspection data are not available. DBEM is particularly suited for crack propagation modelling due to its computational efficiency and its ability to handle crack growth without the need for re-meshing. Unlike FEM, DBEM requires only boundary discretisation, resulting in reduced system size and faster simulation. Previous work using DBEM for EIFS inference has focused on plate structures (Morse et al., 2017, 2020) and, more recently, on shallow shells (Zhuang et al., 2024). In such studies, dense grid sampling was required to evaluate the posterior distribution over a predefined range of EIFSD parameters. Coarse parameter discretisation risks a significant loss of accuracy, while fine grids demand considerable computational effort, particularly when each likelihood evaluation involves Monte Carlo simulation. Moreover, the inference results are sensitive to the initial grid range and resolution. The present work addresses these limitations by introducing an iterative parameter space narrowing strategy. Rather than evaluating the entire parameter space simultaneously, the method begins with a coarse discretisation to locate the high probability regions within the EIFSD. These regions are then adaptively refined in subsequent iterations, allowing for focused likelihood evaluations and significantly reducing computational cost. This approach maintains high inference accuracy while improving computational efficiency.

In summary, this study extends previous work on EIFSD inference in shell structures by incorporating an adaptive grid refinement strategy. The proposed methodology builds upon prior work by the lead author (Zhuang et al., 2024). The remainder of the paper is organised as follows: Section 2 introduces the theoretical background of EIFSD and outlines the inference framework, as well as the numerical implementation using DBEM and the iterative parameter space narrowing strategy. Section 3 presents a numerical example based on the fuselage window of a Boeing 787 Dreamliner. Finally, Section 4 summarises the main findings and outlines potential directions for future work.

METHODOLOGY
The Equivalent Initial Flaw Size Approach

The Equivalent Initial Flaw Size (EIFS) is a calibration parameter used to simplify fatigue life estimation by enabling the use of a long crack growth model. It allows engineers to replace the complex behaviour of short cracks with more established long crack models, such as the Paris Law (Paris and Erdogan, 1963). While long crack models exhibit stable and well-characterised monotonic crack growth, short crack behaviour is often irregular due to microstructural influences and crack closure effects (Larrosa et al., 2015, 2017). The EIFS is defined such that the number of cycles required to grow a crack from the EIFS to a critical size using the long crack model is equivalent to the number of cycles needed for an actual short crack to grow to the same critical size. This approach eliminates the need to explicitly model short crack growth, making fatigue life prediction more practical and computationally efficient.

EIFS can be defined mathematically as (Sankararaman et al., 2011):

$$N = \int_{IFS}^{a_c} \frac{1}{g_s(a)}\,da = \int_{EIFS}^{a_c} \frac{1}{g_l(a)}\,da$$

where IFS is the actual initial flaw size and ac is some critical crack size. The function gs(a) represents a short crack growth model which can capture both short and long crack behaviour, while gl(a) corresponds to a long crack growth model. Figure 1 is a graphical demonstration of the concept of EIFS, where the area under both curves is equivalent, thereby representing the same fatigue life.

Figure 1.

Graphical illustration of the EIFS concept as a model calibration parameter compared to the physical parameter IFS (Sankararaman et al., 2011).
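The defining integral can be evaluated numerically: given a short-crack model and a long-crack model, the EIFS is the initial size at which the long-crack life matches the true life, found here by bisection. Both growth laws below are invented purely for this sketch; the paper's long-crack rates come from the Paris Law with DBEM-computed SIFs.

```python
import math

def cycles(g, a0, ac, n=2000):
    """N = integral of 1/g(a) da from a0 to ac (trapezoidal rule)."""
    step = (ac - a0) / n
    s = 0.5 * (1.0 / g(a0) + 1.0 / g(ac))
    for i in range(1, n):
        s += 1.0 / g(a0 + i * step)
    return s * step

# Hypothetical growth-rate models in mm/cycle: the short-crack model grows
# faster below ~1 mm, mimicking microstructurally short crack behaviour.
def g_long(a):
    return 1e-7 * a ** 1.5

def g_short(a):
    return g_long(a) * (1.0 + 2.0 * math.exp(-a / 0.5))

def find_eifs(ifs, ac, tol=1e-9):
    """Bisect for the EIFS whose long-crack life equals the short-crack life."""
    target = cycles(g_short, ifs, ac)   # 'true' fatigue life from the actual IFS
    lo, hi = ifs, ac
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cycles(g_long, mid, ac) > target:  # life still too long -> EIFS is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eifs = find_eifs(ifs=0.05, ac=10.0)   # mm; EIFS > IFS since short cracks grow faster
```

Because the short-crack model grows faster everywhere, the equivalent initial flaw comes out larger than the physical IFS, which is exactly the situation sketched in Figure 1.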

The long crack propagation in this study is modelled using the Paris Law in its simplest form. Its integral form is given as (Paris & Erdogan, 1963):

$$N = \int_{a_0}^{a} \frac{1}{da/dN}\,da$$

where N is the number of load cycles required for the crack to grow from the initial length a0 to the final crack length a, and da/dN denotes the crack growth rate. The Paris Law expresses this rate as:

$$\frac{da}{dN} = C\left(\Delta K_{eff}\right)^m$$

where C and m are material-dependent constants, and ΔKeff is the effective stress intensity factor range, defined as the difference between its maximum and minimum values.
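For a simple geometry the life integral can be checked against its closed form. The sketch below assumes a hypothetical through crack with ΔK = Δσ√(πa) (geometry factor of one) and constant-amplitude loading, reusing the Paris constants listed later in Table 1; all other values are illustrative.

```python
import math

# Illustrative inputs in MPa·√m and metres; C and m as in Table 1.
C, m = 1.6e-11, 3.59
dsigma = 100.0           # stress range, MPa
a0, af = 0.001, 0.02     # initial and final crack lengths, m

def dadN(a):
    """Paris-law growth rate da/dN = C (ΔK_eff)^m with ΔK = Δσ·√(πa)."""
    return C * (dsigma * math.sqrt(math.pi * a)) ** m

# Fatigue life by trapezoidal integration of N = ∫ da / (da/dN)
n = 20000
da = (af - a0) / n
N = 0.5 * (1.0 / dadN(a0) + 1.0 / dadN(af))
for i in range(1, n):
    N += 1.0 / dadN(a0 + i * da)
N *= da

# Closed form of the same integral (valid for m != 2), as a cross-check
N_exact = ((a0 ** (1.0 - m / 2.0) - af ** (1.0 - m / 2.0))
           / (C * (dsigma * math.sqrt(math.pi)) ** m * (m / 2.0 - 1.0)))
```

The two values agree to well within 0.1%, confirming that the numerical quadrature used throughout is adequate for this growth law.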

Evaluation of the Stress Intensity Factor using the DBEM

The Dual Boundary Element Method (DBEM), originally introduced for fracture mechanics by Portela et al. (1992) and further developed by Aliabadi and co-workers (Aliabadi, 2002), is a specialised variant of the classical Boundary Element Method (BEM), particularly suited for modelling crack problems without the need for re-meshing. Unlike domain-based approaches such as the Finite Element Method (FEM), DBEM requires only boundary discretisation, thereby significantly reducing computational cost, especially in problems involving multiple crack growth steps. For shallow shell structures, the DBEM has been extended by Dirgantara and Aliabadi (2001) and later enhanced with the Dual Reciprocity Method (DRM) (Wen et al., 1999) to account for domain integrals arising from body forces and bending behaviour.

In this study, the DBEM is employed to model fatigue crack growth in shallow shell structures. Stress intensity factors (SIFs) are extracted using the Crack Surface Displacement Extrapolation (CSDE) technique (Dirgantara & Aliabadi, 2002), which estimates the SIFs based on the displacement field near the crack tip. A full derivation of the DBEM formulation is beyond the scope of this paper; for detailed formulations involving membrane and bending stress resultants, the reader is referred to the works of Aliabadi and Dirgantara (Aliabadi, 2002; Dirgantara & Aliabadi, 2002).

For curved shell structures, the Mode I, II, and III stress intensity factors are expressed in terms of the membrane stress resultant intensity factors (K1m, K2m) and the bending stress resultant intensity factors (K1b, K2b, K3b) as follows:

$$\left[1 + \frac{x_3}{2}\left(\frac{1}{R_1} + \frac{1}{R_2}\right)\right] K_I = \frac{1}{h} K_{1m} + \frac{6}{h^2} K_{1b}$$

$$\left[1 + \frac{x_3}{2}\left(\frac{1}{R_1} + \frac{1}{R_2}\right)\right] K_{II} = \frac{1}{h} K_{2m} + \frac{6}{h^2} K_{2b}$$

$$\left[1 + \frac{x_3}{2}\left(\frac{1}{R_1} + \frac{1}{R_2}\right)\right] K_{III} = \frac{3}{2h}\left[1 - \left(\frac{2x_3}{h}\right)^2\right] K_{3b}$$

Here, h denotes the shell thickness, R1 and R2 are the principal radii of curvature in the x and y directions, respectively, and x3 is the through-thickness coordinate measured from the mid-surface. The subscripts m and b refer to the membrane and bending components of the stress resultant intensity factors, respectively. In this study, the maximum SIFs are evaluated at the upper surface by substituting x3 = h/2 into the expressions above. The effective stress intensity factor is then calculated as:

$$K_{eff} = \sqrt{E G_{eff}}$$

where Geff is the effective energy release rate:

$$G_{eff} = G_1 + \alpha\left(G_2 + G_3 + G_4 + G_5\right), \qquad \alpha = \sqrt{\frac{\left|\Delta K_{1b}\right|}{\left|\Delta K_{1b}\right| + \left|\Delta K_{1m}\right|}}$$

The energy release rate components are defined as:

$$G_1 = \frac{K_{1m}^2}{E},\quad G_2 = \frac{K_{2m}^2}{E},\quad G_3 = \frac{\pi K_{1b}^2}{E},\quad G_4 = \frac{\pi K_{2b}^2}{E},\quad G_5 = \frac{8\pi(1+\nu) K_{3b}^2}{5E}$$

where E is Young's modulus and ν is Poisson's ratio.
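Collecting the relations of this section gives a compact helper. This is only a sketch: the ΔK terms in α are represented by single-step K values, and the energy release rates follow the paper's normalised resultant-intensity-factor definitions.

```python
import math

def k_eff(K1m, K2m, K1b, K2b, K3b, h, R1, R2, E, nu, x3=None):
    """Effective SIF for a shallow shell from membrane (K1m, K2m) and bending
    (K1b, K2b, K3b) stress resultant intensity factors, evaluated at the upper
    surface x3 = h/2 by default. Assumes K1m and K1b are not both zero."""
    x3 = h / 2.0 if x3 is None else x3
    curv = 1.0 + (x3 / 2.0) * (1.0 / R1 + 1.0 / R2)
    # Mode I/II/III stress intensity factors through the thickness
    KI = (K1m / h + 6.0 * K1b / h ** 2) / curv
    KII = (K2m / h + 6.0 * K2b / h ** 2) / curv
    KIII = (3.0 / (2.0 * h)) * (1.0 - (2.0 * x3 / h) ** 2) * K3b / curv
    # Energy release rate components and the mode-mixity weight alpha
    G1 = K1m ** 2 / E
    G2 = K2m ** 2 / E
    G3 = math.pi * K1b ** 2 / E
    G4 = math.pi * K2b ** 2 / E
    G5 = 8.0 * math.pi * (1.0 + nu) * K3b ** 2 / (5.0 * E)
    alpha = math.sqrt(abs(K1b) / (abs(K1b) + abs(K1m)))
    G_eff = G1 + alpha * (G2 + G3 + G4 + G5)
    return {"KI": KI, "KII": KII, "KIII": KIII, "Keff": math.sqrt(E * G_eff)}
```

As a sanity check, for a pure membrane Mode I state (only K1m nonzero) the expression reduces to Keff = |K1m|, and KIII vanishes at the surface x3 = h/2, as the bracketed term in the third relation goes to zero.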

Bayesian Inference

Inferring the EIFSD relies on updating the prior distribution using inspection data to obtain a posterior distribution. In this work, the inspection data consist of the crack size detected at the inspection interval of Nins = 1×10⁶ cycles. The theory presented in this section follows the previous work by Sankararaman et al. (2010). It is assumed that the 'true' EIFSD follows a two-parameter lognormal distribution with mean μ and standard deviation σ, such that θtrue ~ lnN(μ, σ). A set of trial pairs (μ̂i, σ̂j) is defined within a possible range prior to Bayesian updating, with each trial distribution assumed to follow θij ~ lnN(μ̂i, σ̂j). Bayesian updating allows the posterior distribution to gradually converge towards one of these trial pairs as more inspection data become available. The likelihood that a given trial pair (μ̂i, σ̂j) corresponds to the true EIFSD (μ, σ) given the kth inspection observation is:

$$L\left(\hat{\mu}_i, \hat{\sigma}_j \mid N_k^{ins}, Y\right) = f_{N \mid \hat{\mu}_i, \hat{\sigma}_j, Y}\left(N_k^{ins} \mid \hat{\mu}_i, \hat{\sigma}_j, Y\right) = \frac{1}{N_k^{ins}\sqrt{2\pi\beta_{ij}^2}} \exp\left(-\frac{\left[\log\left(N_k^{ins}\right) - \alpha_{ij}\right]^2}{2\beta_{ij}^2}\right)$$

where αij and βij are the location and shape parameters of the lognormal distribution $f_{N \mid \hat{\mu}_i, \hat{\sigma}_j, Y}$. These parameters represent the distribution of crack length at Nins = 1×10⁶ cycles, resulting from the propagation of an initial crack drawn from the trial EIFSD θij.
The vector Y denotes the uncertainties in material properties, geometry, and loading conditions that affect the crack growth process. The normalised likelihood of the trial pair θij ~ lnN(μ̂i, σ̂j) after incorporating l inspection data can be written as:

$$L^{norm}\left(\hat{\mu}_i, \hat{\sigma}_j \mid N_{1:l}^{ins}, Y\right) = \frac{\prod_{k=1}^{l} L\left(\hat{\mu}_i, \hat{\sigma}_j \mid N_k^{ins}, Y\right)}{\iint L\left(\hat{\mu}, \hat{\sigma} \mid N_{1:l}^{ins}, Y\right) d\hat{\mu}\, d\hat{\sigma}}$$

Here, the denominator serves as a normalisation term. The likelihood that individual μ̂i and σ̂j correspond to the true mean and standard deviation can then be separated as follows (Sankararaman et al., 2010):

$$L^{norm}\left(\hat{\mu}_i \mid N_{1:l}^{ins}, Y\right) = \int L^{norm}\left(\hat{\mu}_i, \hat{\sigma} \mid N_{1:l}^{ins}, Y\right) f_{\hat{\Sigma} \mid N_{1:l-1}^{ins}, Y}\left(\hat{\sigma} \mid N_{1:l-1}^{ins}, Y\right) d\hat{\sigma}$$

$$L^{norm}\left(\hat{\sigma}_j \mid N_{1:l}^{ins}, Y\right) = \int L^{norm}\left(\hat{\mu}, \hat{\sigma}_j \mid N_{1:l}^{ins}, Y\right) f_{\hat{M} \mid N_{1:l-1}^{ins}, Y}\left(\hat{\mu} \mid N_{1:l-1}^{ins}, Y\right) d\hat{\mu}$$

Here, $f_{\hat{M} \mid N_{1:l-1}^{ins}, Y}$ and $f_{\hat{\Sigma} \mid N_{1:l-1}^{ins}, Y}$ represent the prior distributions. Accordingly, initial guesses of the prior distributions are required for the first iteration of Bayesian inference. These priors are typically selected based on engineering judgement or alternatively assumed to follow a normal distribution. The posterior distributions can then be updated iteratively from the prior terms as new inspection data become available. These posterior distributions are given as (Sankararaman et al., 2010):

$$f_{\hat{M} \mid N_{1:l}^{ins}, Y}\left(\hat{\mu}_i \mid N_{1:l}^{ins}, Y\right) = \frac{L\left(\hat{\mu}_i \mid N_{1:l}^{ins}, Y\right) f_{\hat{M} \mid N_{1:l-1}^{ins}, Y}\left(\hat{\mu}_i \mid N_{1:l-1}^{ins}, Y\right)}{\int L\left(\hat{\mu} \mid N_{1:l}^{ins}, Y\right) f_{\hat{M} \mid N_{1:l-1}^{ins}, Y}\left(\hat{\mu} \mid N_{1:l-1}^{ins}, Y\right) d\hat{\mu}}$$

$$f_{\hat{\Sigma} \mid N_{1:l}^{ins}, Y}\left(\hat{\sigma}_j \mid N_{1:l}^{ins}, Y\right) = \frac{L\left(\hat{\sigma}_j \mid N_{1:l}^{ins}, Y\right) f_{\hat{\Sigma} \mid N_{1:l-1}^{ins}, Y}\left(\hat{\sigma}_j \mid N_{1:l-1}^{ins}, Y\right)}{\int L\left(\hat{\sigma} \mid N_{1:l}^{ins}, Y\right) f_{\hat{\Sigma} \mid N_{1:l-1}^{ins}, Y}\left(\hat{\sigma} \mid N_{1:l-1}^{ins}, Y\right) d\hat{\sigma}}$$

The estimates of the mean and standard deviation of the EIFSD are obtained as follows (Sankararaman et al., 2010):

$$\mu_{BU,1:l} = E(\hat{M}) = \int \hat{\mu}\, f_{\hat{M} \mid N_{1:l}^{ins}, Y}\left(\hat{\mu} \mid N_{1:l}^{ins}, Y\right) d\hat{\mu}$$

$$\sigma_{BU,1:l} = E(\hat{\Sigma}) = \int \hat{\sigma}\, f_{\hat{\Sigma} \mid N_{1:l}^{ins}, Y}\left(\hat{\sigma} \mid N_{1:l}^{ins}, Y\right) d\hat{\sigma}$$

It is expected that the estimated values μBU,1:l and σBU,1:l will converge to the true values of μ and σ as more inspection data become available.
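The grid-based updating described in this section can be condensed into a short sketch. Here the MCS-derived parameters (αij, βij) are replaced by the trial parameters themselves purely for illustration, and synthetic observations are drawn from an assumed 'true' lognormal; the posterior means then play the role of the Bayesian-updating estimates.

```python
import math, random

def lognorm_logpdf(x, loc, shape):
    """Log-density of a lognormal with log-space mean `loc` and std `shape`."""
    return (-(math.log(x) - loc) ** 2 / (2.0 * shape ** 2)
            - math.log(x * shape * math.sqrt(2.0 * math.pi)))

# Synthetic observations from an assumed 'true' lognormal (illustrative values)
random.seed(0)
mu_true, sig_true = 2.0, 0.3
obs = [random.lognormvariate(mu_true, sig_true) for _ in range(200)]

mus = [1.0 + 0.05 * i for i in range(41)]    # trial grid for the mean
sigs = [0.1 + 0.02 * j for j in range(21)]   # trial grid for the std deviation

# Joint log-posterior on the grid (uniform prior), accumulated over all
# observations in log space for numerical stability, then normalised.
logpost = [[sum(lognorm_logpdf(x, m, s) for x in obs) for s in sigs] for m in mus]
mx = max(max(row) for row in logpost)
post = [[math.exp(v - mx) for v in row] for row in logpost]
Z = sum(sum(row) for row in post)
post = [[v / Z for v in row] for row in post]

# Posterior-mean estimates of the distribution parameters
mu_hat = sum(mus[i] * sum(post[i]) for i in range(len(mus)))
sig_hat = sum(sigs[j] * sum(post[i][j] for i in range(len(mus))) for j in range(len(sigs)))
```

With 200 observations, both posterior means land close to the generating values, illustrating the convergence behaviour described above.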

A noticeable source of error in the Bayesian updating method is that the precision and accuracy of the inferred results highly depend on the number of trial candidates used in the algorithm. These trial candidates are usually generated by uniformly dividing the possible range of mean and standard deviation into an n×n grid. In this setting, a small ‘n’ may fail to include the true value, while a large ‘n’ can lead to high computational cost, especially when high accuracy is required. To address this issue, an adaptive grid sampling method is proposed in this work. As more inspection data are sequentially incorporated, the posterior probability distribution gradually becomes more concentrated around a certain region in the mean–standard deviation space (Kurchin et al., 2019). Instead of using a dense grid from the beginning, the proposed strategy starts with a coarse grid, and only those trial pairs with posterior likelihood greater than a threshold Lthreshold are retained and further subdivided. The Bayesian updating is then repeated using the refined grid.
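The narrowing strategy can be sketched in a few lines. For brevity this toy version retains only the single best cell per pass (rather than all cells above Lthreshold), and a simple unimodal function peaked at the true parameters stands in for the expensive MCS-based likelihood.

```python
import math

# Toy posterior surrogate peaked at the true EIFSD parameters (8.47, 0.424),
# standing in for the Monte-Carlo-based likelihood evaluation.
def score(mu, sig):
    return math.exp(-((mu - 8.47) ** 2 / 0.5 + (sig - 0.424) ** 2 / 0.02))

def refine(mu_lo, mu_hi, s_lo, s_hi, n=8, steps=6):
    """Evaluate an n-by-n grid, keep the highest-scoring cell, and re-grid
    over its neighbourhood (one grid spacing either side)."""
    for _ in range(steps):
        dmu = (mu_hi - mu_lo) / (n - 1)
        ds = (s_hi - s_lo) / (n - 1)
        grid = [(mu_lo + i * dmu, s_lo + j * ds) for i in range(n) for j in range(n)]
        mu_b, s_b = max(grid, key=lambda p: score(*p))
        mu_lo, mu_hi = mu_b - dmu, mu_b + dmu    # narrowed bounds for next pass
        s_lo, s_hi = s_b - ds, s_b + ds
    return mu_b, s_b

mu_est, sig_est = refine(2.0, 12.0, 0.1, 1.0)
```

Each pass shrinks the bounds by a factor of roughly (n − 1)/2, so six passes of an 8×8 grid (384 evaluations) reach a resolution that a single uniform grid would need millions of points to match.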

NUMERICAL EXAMPLE

To demonstrate the effectiveness of the proposed method for shallow shell structures, a computational analysis of a fuselage window structure from the Boeing 787 Dreamliner is presented. An initial crack is assumed at the corner of the window, as shown in Figure 2a. The outer geometry is treated as deterministic, with W1 = 1 m and L1 = 1 m. The structure is made of Aluminium 2024-T3 and subjected to cabin pressure, with a Young's modulus of E = 73.1 GPa and Poisson's ratio ν = 0.33. A prior finite element analysis is conducted to identify the location of the highest stress concentration, which is used as the assumed crack initiation point, as shown in Figure 2b. Although DBEM is employed for crack propagation analysis in this study, FEA is more efficient for obtaining a full-field stress distribution in the uncracked structure. Using DBEM for this purpose would require a large number of domain points, reducing its computational advantage. The true EIFS value is assumed to follow a lognormal distribution, given by θtrue ~ lnN(8.47, 0.424) mm in this analysis. The dimensions of the window section are listed in Table 1, based on Soni et al. (2014). The selection of the coefficient of variation for each uncertain parameter follows the approach in Koh and See (1994). The applied pressure P is determined by the difference between the cabin pressure and the atmospheric pressure at cruising altitude. While ϕ is fixed for a given geometry, the parameter α, which represents the crack initiation angle measured from the window fillet, is treated as a random variable in the inference model to account for possible variations in the crack propagation direction.

Figure 2.

a) The geometry of the shallow shell fuselage window structure. b) FEA results indicating the stress concentration location with ϕ = 29.24°.

Table 1. Details of the shell structure parameters and the random variables used in the EIFS inference.

Parameter | Description | Distribution | Mean | COV
W2 | Inner width | Lognormal | 0.468 m | 0.01
L2 | Inner length | Lognormal | 0.273 m | 0.01
R2 | Inner radius | Lognormal | 0.127 m | 0.01
h | Thickness | Lognormal | 0.01 m | 0.01
RK | Radius of curvature | Lognormal | 2.73 m | 0.01
α | Crack initiation angle | Lognormal | 29.24° | 0.05
P | Applied pressure | Lognormal | 7.1 psi | 0.04
C | Paris law constant | Lognormal | 1.60×10⁻¹¹ (m/cycle)/(MPa√m)^m | 0.1
m | Paris law exponent | Lognormal | 3.59 | 0
Surrogate Model

A Co-Kriging model is used in the analysis to predict the effective stress intensity factor (SIF) at the crack tip. This model incorporates the correlation between a low-fidelity model and a limited number of high-fidelity samples to improve overall prediction accuracy. The theoretical background of Co-Kriging can be found in (Forrester et al., 2008), and the present analysis employs the ooDACE toolbox (Couckuyt et al., 2014) for model construction.
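Co-Kriging itself (Forrester et al., 2008) is considerably more involved; the sketch below illustrates only the underlying multi-fidelity idea it exploits: correcting a cheap low-fidelity model with a discrepancy term learned at a few high-fidelity samples. Both response functions are invented, and piecewise-linear interpolation stands in for the Gaussian-process machinery of the ooDACE toolbox.

```python
import bisect

def lo_fi(x):            # hypothetical coarse-mesh response (cheap, biased)
    return 1.1 * x ** 2 + 0.3

def hi_fi(x):            # hypothetical fine-mesh response (expensive, accurate)
    return x ** 2

xs_hf = [0.0, 0.5, 1.0, 1.5, 2.0]              # sparse high-fidelity samples
deltas = [hi_fi(x) - lo_fi(x) for x in xs_hf]  # discrepancy at the HF points

def predict(x):
    """Low-fidelity prediction plus a piecewise-linearly interpolated
    discrepancy term; exact at the high-fidelity sample points."""
    i = min(max(bisect.bisect_right(xs_hf, x) - 1, 0), len(xs_hf) - 2)
    t = (x - xs_hf[i]) / (xs_hf[i + 1] - xs_hf[i])
    return lo_fi(x) + (1.0 - t) * deltas[i] + t * deltas[i + 1]
```

Because the discrepancy varies more smoothly than the response itself, a handful of high-fidelity samples suffices to correct the bias of the cheap model over the whole input range, which is the same economy the Co-Kriging surrogate provides for the DBEM responses.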

The BEM meshes used in the analysis are shown in Figure 3, where two types of meshes are considered: a coarse mesh for the low-fidelity model (Figure 3a) and a fine mesh for the high-fidelity model (Figure 3b). The coarse mesh consists of 176 boundary nodes and 56 DRM points, with the crack tip meshed using 3 elements. The fine mesh consists of 264 boundary nodes and 268 DRM points, with the crack tip meshed using 5 elements. Details of the crack tip mesh and the definition of the crack initiation angle α are provided in Figure 3c. The two mesh configurations were selected based on a convergence study of the SIF at the crack tip. The high-fidelity model exhibits convergence, whereas the low-fidelity model predicts a SIF that differs by 7.20% from the high-fidelity model. Due to the exponential sensitivity of fatigue life to the SIF range – governed by the Paris Law exponent m – this discrepancy results in a 32.85% discrepancy in the estimated fatigue life.

Figure 3.

BEM mesh of the structure: a) coarse mesh used in the low-fidelity model; b) fine mesh used in the high-fidelity model. The DRM points are indicated by red crosses; c) detailed view of the crack tip region for both meshes, along with the definition of the crack initiation angle α.

The training dataset consists of responses from 32 high-fidelity models and 467 low-fidelity models, using the Matérn 5/2 kernel function. A total of 232 test samples were generated using DBEM to evaluate the prediction errors of the surrogate models. To evaluate the prediction accuracy of the Co-Kriging models, several common regression metrics are reported: the root relative squared error (RRSE), the mean absolute percentage error (MAPE), the mean absolute error (MAE), and the root mean squared error (RMSE). In addition, the coefficient of determination R² indicates how well the model explains the data, with values closer to 1 implying a better fit. A summary of the errors in the predicted effective SIF and the corresponding fatigue life is provided in Table 2, showing that the prediction error of the Keff model is within an acceptable range, while the fatigue life model for N exhibits larger errors. Again, this is mainly attributed to the effect of the exponential term in the Paris Law. Scatter plots of the actual versus predicted values are shown in Figure 4. The larger prediction errors in fatigue life N are mostly concentrated in the tail of the distribution, where very high fatigue lives are observed (i.e., N > 2×10⁶). However, good agreement is achieved in the region of interest around N = 1×10⁶, which corresponds to the assumed inspection interval.

Figure 4.

Prediction errors of the Co-Kriging model compared to the true values generated from DBEM for a) Keff and b) N.

Table 2. Model errors of the Co-Kriging predictions for Keff and N, compared to the test dataset.

Model | RRSE (%) | MAPE (%) | MAE | RMSE | R²
Keff | 4.613 | 0.621 | 0.0511 MPa√m | 0.065 MPa√m | 0.998
N | 12.376 | 78.035 | 6.887×10⁴ cycles | 1.112×10⁵ cycles | 0.985
Bayesian Inference of EIFSD

This section presents the procedure of the Bayesian updating method, assuming that an inspection is carried out at Nins = 1×10⁶ cycles and the observed crack length is recorded. Two sets of data are prepared prior to the Bayesian updating:

Inspection data simulated from DBEM. The inspection data are generated using the true EIFSD θtrue ~ lnN(8.47, 0.424), with uncertainties sampled based on Table 1. The initial crack length is sampled from θtrue and propagated using DBEM. The crack length at Nins is recorded as the inspection data. In total, 5,000 inspection samples are generated.

Trial samples and Monte Carlo simulation. Trial pairs of the mean and standard deviation for possible EIFSDs, denoted as θij ~ lnN(μ̂i, σ̂j), are defined before Bayesian updating. The candidate range for the mean is μ̂i ∈ [2, 12] mm, and for the standard deviation is σ̂j ∈ [0.1, 1] mm. These ranges are uniformly divided into ntrial samples to construct the trial space. For each trial pair, initial crack lengths are sampled from the corresponding θij distribution, along with uncertainties defined in Table 1. Following this, nMCS = 1×10⁵ Monte Carlo samples are generated and evaluated using the Co-Kriging model trained in the previous section. The resulting distribution of crack lengths at Nins is fitted to a lognormal distribution to obtain the parameters αij and βij in Eqn. 9.
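The second step can be sketched for a single trial pair. A toy Euler-forward Paris-law propagator stands in for the Co-Kriging surrogate, the sample count is reduced from the paper's 1×10⁵, and all parameter values are illustrative rather than those of the window example.

```python
import math, random

# Hypothetical trial pair (mu_t, sig_t) in log space, initial crack sizes in metres
random.seed(1)
mu_t, sig_t = math.log(8.0e-3), 0.4
n_mcs = 2000                          # reduced from the paper's 1e5 for the sketch

def grow(a0, n_cycles=1.0e6, C=1.6e-11, m=3.59, dsigma=20.0):
    """Toy Euler-forward Paris-law propagation (geometry factor 1, constant
    amplitude) standing in for the Co-Kriging surrogate of the DBEM model."""
    a, steps = a0, 100
    dn = n_cycles / steps
    for _ in range(steps):
        a += C * (dsigma * math.sqrt(math.pi * a)) ** m * dn
    return a

# Sample initial crack sizes from the trial EIFSD and propagate each to N_ins
a_ins = [grow(random.lognormvariate(mu_t, sig_t)) for _ in range(n_mcs)]

# Lognormal fit by matching moments in log space (the lognormal MLE),
# yielding the (alpha_ij, beta_ij) used in the likelihood of Eqn. 9
logs = [math.log(a) for a in a_ins]
alpha_ij = sum(logs) / n_mcs
beta_ij = math.sqrt(sum((x - alpha_ij) ** 2 for x in logs) / n_mcs)
```

Because every crack grows over the interval, the fitted location parameter exceeds that of the trial EIFSD, and the spread widens slightly since larger cracks grow faster under the Paris exponent m > 2.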

It is expected that as more inspection data are incorporated into the Bayesian updating process, the posterior estimation will gradually converge to the true value θtrue. To demonstrate the effectiveness of the adaptive grid sampling strategy, three cases are considered: trial sample spaces with ntrial = 30 and ntrial = 60 without adaptive refinement, and a coarse initial space with ntrial = 8 using adaptive sampling.

The refinement process of narrowing down the trial space is shown in Figure 5a, where the true EIFSD is marked with a red cross. Starting from an initial ntrial = 8, a total of 27 refinement steps are performed, and the trial space gradually focuses around the true value. Figure 5b shows the inferred EIFSD after applying 5,000 inspection data at each refinement step. Note that a0 is sampled from the EIFSD characterised by (μ̂, σ̂). It can be observed that, as the refinement progresses, the inferred EIFSD becomes increasingly closer to the true value.

Figure 5.

a) Schematic of the adaptive grid sampling strategy, showing the progressive subdivision of the trial space into regions with higher posterior probability; b) Comparison of the inferred EIFSD at the end of each refinement step.

The convergence of the inferred mean and standard deviation is shown in Figure 6. As more inspection data are incorporated into the Bayesian updating process, the inferred parameters gradually approach the actual mean and standard deviation in all cases. To highlight the advantage of the refinement strategy, two fixed trial sample sizes, ntrial = 30 and 60, without adaptive sampling, are compared with the adaptive refinement case. It can be observed that the case with ntrial = 30 shows a larger error in the inferred mean, while the adaptive refinement and the case with ntrial = 60 yield similar estimates. A similar trend is observed for the convergence of the standard deviation, where both the ntrial = 60 case and the adaptive refinement strategy achieve comparable precision relative to the true value. This result aligns with expectations: when the trial sample space is too coarse, it may not contain the true value, and the Bayesian updating process may converge to an inaccurate trial pair. In contrast, the adaptive refinement strategy enables continuous narrowing of the trial space toward regions of high posterior probability, leading to improved accuracy in the area of interest.

Figure 6.

Convergence of the EIFSD mean and standard deviation from Bayesian inference with different ntrial of the trial space.

A summary of the inferred parameters and their errors compared to the true values is given in Table 3. The inferred means show good agreement with the true mean in all cases, while the adaptive refinement strategy achieves higher precision. When fewer trial samples are used, the estimated standard deviations exhibit larger errors relative to the true value. The adaptive strategy produces results comparable to the ntrial = 60 case. However, it should be noted that the error between the inferred values and the true values is also influenced by the error introduced by the Co-Kriging model. This is because the inspection data are generated directly from DBEM, while the trial space predictions are obtained through MCS using the Co-Kriging model. The comparison of CPU time is also included, with contributions from two main sources: 1) the Bayesian updating scheme, and 2) the time required to generate trial samples through Monte Carlo simulation (MCS) using the Co-Kriging model.

Table 3. Convergence results of the inferred mean and standard deviation from Bayesian inference, along with the associated computational cost in terms of CPU time.

Model | θμ (mm) | Error (%) | θσ (mm) | Error (%) | Bayesian CPU time (s) | MCS CPU time (hrs)
True EIFSD | 8.470 | – | 0.424 | – | – | –
ntrial = 30 | 8.552 | 0.968 | 0.503 | 18.9 | 7.88 | 1.15
ntrial = 60 | 8.440 | 0.354 | 0.406 | 4.02 | 79.46 | 4.59
Adaptive (27 steps) | 8.465 | 0.059 | 0.401 | 5.20 | 75.12 | 2.21

In this study, the computational time for one MCS run was approximately 110.25 seconds. To improve efficiency, parallel computing with 24 cores was employed during the MCS process. It can be observed that coarser trial sample spaces require significantly less computational time for both Bayesian updating and MCS, due to the reduced number of trial pairs. In contrast, the fine grid trial space leads to the highest computational cost, as a larger number of trial pairs must be evaluated. The proposed adaptive strategy provides a balanced solution. While its computational cost is slightly higher than that of the coarse grid, it achieves a level of precision comparable to the fine grid case. This demonstrates that adaptive refinement not only improves accuracy but also maintains computational efficiency by focusing resources on regions with high posterior probability.

CONCLUSIONS

This study presents an adaptive sampling strategy for the Bayesian updating of the Equivalent Initial Flaw Size Distribution (EIFSD). The proposed method addresses the limitations of previous approaches, where the accuracy of the results could only be improved by exhaustively sampling the trial space with a very dense trial set, which results in high computational cost. The adaptive strategy achieves similar accuracy to that of a fine grid sampling approach, while requiring only slightly more computational effort than a coarse grid, making it both efficient and practical for EIFSD estimation.

The method was demonstrated through a numerical example of a Boeing 787 Dreamliner fuselage window, with crack propagation analysis performed using the DBEM. To reduce the cost of Monte Carlo simulation (MCS), Co-Kriging surrogate models were trained and used in place of the computationally expensive DBEM. The Bayesian updating procedure was shown to produce highly accurate results, with only 0.059% error in the inferred mean and 5.2% error in the inferred standard deviation compared to the true EIFSD. The accuracy can be further improved by: 1) reducing the discrepancy between the surrogate model and the DBEM using more advanced modelling techniques; 2) increasing the resolution of the trial sample space in the Bayesian inference process.
