
Metamodel-Based Inverse Design of a Composite Material with Prescribed Interval Effective Elastic Properties

26 June 2025

INTRODUCTION

The design of new engineering materials makes it possible to fill gaps in macroscopic property-space maps. Such materials include particulate and fibrous composites, sandwich structures, foams, lattice structures and others [1]. The components of these materials are described by their properties and shape at a lower (e.g. micro) scale. The macroscopic properties, mechanical or other, of non-homogeneous materials can be evaluated using homogenisation methods. Computational methods, mainly the finite and boundary element methods, have gained increasing attention from engineers and researchers due to their versatility, also in problems of multiscale modelling and homogenisation [2,3,4,5]. The homogenisation problem consists of evaluating the effective properties (e.g. elastic) of non-homogeneous materials at the macro scale. It involves the solution of a boundary value problem (BVP) at the microscale: the simulation of a representative volume element (RVE) with a given microstructural topology and parameters. Conversely, one can formulate and solve an inverse problem to identify the microstructure topology and parameters based on the macroscopic properties of the homogenised material [6,7]. Among the crucial factors related to the analysis of heterogeneous engineering materials, uncertainties are currently of interest due to their impact on the precise design and manufacturing of such structures. Therefore, the development of computational approaches capable of dealing with uncertainties is crucial for the successful design and manufacturing of new materials.

Observed uncertainty falls into one of two categories: aleatoric or epistemic [8,9]. Aleatoric uncertainty is natural, random, and irreducible. Epistemic uncertainty is due to incomplete scientific knowledge and can be reduced by new insights. From the probabilistic point of view, the uncertainty of engineering quantities can be grouped into stochastic uncertainty (known probability), incertitude (unknown probability) or ignorance (fixed values). Very common in engineering problems, including up-to-date experimental material characterisation methods, e.g. [10], is the second category (incertitude), which includes cases of intervals and fuzzy numbers. From the point of view of information processing, this representation of quantities is also known as information granularity [11,12,13]. In engineering practice, this representation is useful when the available data are insufficient to describe the process as stochastic. In the context of modelling, another classification of uncertainty can be introduced: uncertainty induced by data, component (model uncertainty), and structure (structural uncertainty) [9,14]. In general, incertitude uncertainties in geometry, material properties, loading, and boundary conditions significantly affect structural analysis, introducing variability in performance estimations and leading to potential inaccuracies [15]. By including imperfect data in the quantification of uncertainty, more reliable and relevant results can be obtained that better reflect the complexity and uncertainty present in real-world conditions. Note that at the preparation stage of metamodel-based computational homogenisation, the boundary conditions or loading for the RVE with uncertain parameters (geometry, material parameters and phase bonding conditions) can be precise, as they are driven by the homogenisation procedure that imposes macro strains (or stresses) and their increments for non-linear structures. However, at a possible later stage of analysing a macro-structure, which is beyond the scope of the present work, the macro strains or stresses transferred to the RVE as boundary conditions can be uncertain due to all the macro-structure uncertainties under consideration.

Recently, researchers have considered incertitude uncertainty to be of practical importance in engineering problems, including homogenisation and multi-scale modelling. One of the basic approaches applied to the numerical modelling of mechanical and micromechanical structures with incertitude properties is the interval and fuzzy finite element method. Note that its efficiency is deteriorated by a degree of conservatism (overestimation of the uncertainty in the output) introduced during the interval matrix assembly phase [16]. In [17], 2D composite structures were analysed with the application of Taylor expansions of effective elasticity matrices and sensitivity analysis. In [18], stochastic FEM was applied to the analysis of a random RVE to determine the respective limits and evaluate the fuzzy Young's modulus of a composite material. The modulus was applied to the dynamic simulation of multi-body structures to obtain fuzzy response curves. In [19], a fuzzy RVE was introduced that allows one to consider the spatial variability at the ply level in laminates; fuzzy FEM was coupled with a radial basis function metamodel, and global responses in dynamics and stability problems were obtained. In [20], fuzzy numbers were applied to the computational homogenisation of composite materials with fuzzy microstructure parameters.

One can observe that a recent approach to the solution of the BVP at the micro scale in complex problems involving homogenisation and multi-scale modelling is to shift the RVE numerical modelling to an offline stage, outside the main loop of the procedure under consideration: multiscale simulation, optimisation or identification. This preliminary stage prepares data and creates a model for metamodelling or a data-driven approach. Such an approach employs metamodels (surrogates) built on observations computed by high-fidelity numerical models, usually analysed by the FEM. The metamodels include polynomial response surfaces [20], radial basis functions [19,21], Kriging [22,23], artificial neural networks [24], combined analytical-numerical metamodels [25], and others. This leads to a substantial reduction in the overall computation time compared to the traditional approach based on time-consuming in-loop simulations of the RVE. The approach with offline metamodel-based RVE simulations also has potential advantages in inverse problems involving incertitude uncertainties, due to the substantial number of calculations related to interval arithmetic.

The goal of the present work is to formulate and solve a novel problem of the inverse design of composite materials with desired incertitude (interval) macroscopic properties and incertitude unknown microstructure data. To solve the problem efficiently, a metamodelling approach is applied with an appropriate design of experiment (DoE) for numerical calculations based on a high-fidelity finite element model built in Ansys software. The DoE takes into account the admissible ranges of the geometry and material parameters of the microstructure of the composite. The minimised objective function depends on the desired bounds of the macroscopic elastic properties of the designed material. In some situations, e.g. manufacturing, it may be more convenient to have wider ranges of the input microstructure parameters; therefore, these ranges are also incorporated into the objectives. For interval analysis, directed interval arithmetic is applied, which is considered efficient in solving engineering problems due to its cancellation laws for both addition and multiplication [26,27,28,29,30]. In identification problems, the interval parameters are represented as pairs of real numbers (components of the directed intervals). Such an approach allows for the application of regular evolutionary algorithms for real variables. The inverse problem is solved both by scalarisation with presumed weights for the objective functions and by the Pareto frontier approach. A problem concerning a unidirectional fibre-reinforced composite is solved and the results are presented. While similar methodologies for computational homogenisation are documented in the literature, we believe that no corresponding solutions of inverse design problems for inhomogeneous materials with epistemic incertitude structural parameters, including the multi-objective identification (optimisation) approach, have been reported.

The paper is organised as follows: Section 2 contains an introduction to computational homogenisation and the related inverse problem, as well as the design of experiment and a description of the applied metamodels. Section 3 introduces the notation and operators of interval arithmetic and directed interval arithmetic. Section 4 presents the idea and formulation of the inverse problem involving computational homogenisation with incertitude input parameters of the microstructure and the desired incertitude of the effective elastic constants. In Section 5, numerical examples are solved with the weighted objective function and Pareto frontier approaches. Section 6 contains conclusions.

COMPUTATIONAL HOMOGENISATION AND INVERSE DESIGN USING METAMODELS

Multiscale modelling allows the structure to be modelled at different length scales. One of the elements of multiscale modelling that enables the analysis of heterogeneous materials, such as composites or porous materials, at the microscale is homogenisation. Among the different homogenisation methods, computational homogenisation is the most versatile [2].

Computational homogenisation [31] allows equivalent macroscopic material properties to be calculated from microscopic data. The basic idea of computational homogenisation is to represent the microstructure of a material by a statistically representative piece of its geometry, called a representative volume element (RVE). Stress analysis for RVE provides detailed information about the material's macro-scale behaviour. The analysis can be carried out using numerical methods such as the finite element method (FEM) [32] or the boundary element method (BEM) [33].

The RVE represents the structure of the entire medium (or a portion of it in the case of local periodicity) and thus contains all the information required to fully describe both the structure and properties of that medium. The RVE must satisfy the scale separation condition, the Hill–Mandel condition, and the imposed boundary conditions [34].

The separation of scales condition assumes that the RVE geometry must be of an appropriate size: l_micro ≪ l_RVE ≪ l_macro, where l_micro, l_RVE and l_macro are the characteristic microscale, RVE-scale and macroscale dimensions, respectively.

The Hill–Mandel condition describes the equality of the averaged energy density at the microscale and the macroscopic energy density at the macrostructure point corresponding to the RVE location: ⟨σij εij⟩ = ⟨σij⟩⟨εij⟩, where σij, εij are the components of the stress and strain tensors, respectively, and ⟨·⟩ is the averaged value of the considered field: ⟨·⟩ = (1/|V|) ∫_V (·) dV, where V is the RVE volume.
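As a simple illustration of the averaging operator, the discrete volume average over a finite element mesh is a weighted mean of element-wise values; the following is a minimal Python/NumPy sketch under assumed per-element data (the array names are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def volume_average(field_e, vol_e):
    """Discrete RVE volume average <f> = (1/|V|) * sum_e f_e * V_e,
    with one value of the field and one volume per finite element."""
    return np.average(field_e, weights=vol_e)

# A numerical Hill-Mandel check would compare
#   volume_average(stress_e * strain_e, vol_e)
# with
#   volume_average(stress_e, vol_e) * volume_average(strain_e, vol_e).
```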

Boundary conditions have to satisfy the Hill–Mandel condition, e.g. in the form of periodic, linear displacement, or uniform traction boundary conditions. In this paper, periodic boundary conditions are used:

ui⁺ − ui⁻ = ⟨εij⟩ (xj⁺ − xj⁻) and ti⁺ = −ti⁻, for x ∈ Γ with ni⁺ = −ni⁻,

where ui⁺, ui⁻ are the displacements of the corresponding points on the opposite RVE boundaries, xj⁺, xj⁻ are the locations of these points, ti⁺, ti⁻ are the tractions at these points, Γ is the external boundary of the RVE, and ni⁺, ni⁻ are the normal vectors on the opposite RVE boundaries.

Periodic boundary conditions allow for the determination of equivalent material properties with greater accuracy and with fewer internal inclusions or voids in the RVE than displacement and traction boundary conditions [35].

A heterogeneous material is usually assumed to consist of two or more homogeneous phases that obey the laws of continuum mechanics. In this paper, it is assumed that there is an ideal contact between the phases. The behaviour of an elastic body made of a linear isotropic material (single phase) under external loads is described by [36]:

geometrical relations: εij = ½(ui,j + uj,i),

constitutive law (Hooke's law): σij = λ δij εkk + 2G εij,

equilibrium equations: σij,j + bi = 0, i, j = 1, 2, 3,

where ui represents the components of the displacement vector, λ is the Lamé parameter, δij is the Kronecker delta, G is the Kirchhoff (shear) modulus and bi denotes the volume forces.

Using Equations (5) and (6), the equilibrium equations expressed in terms of displacements (Navier–Lamé equations) are obtained: G ui,jj + (G + λ) uj,ji + bi = 0.

For linear elastic materials, there is no need to analyse the RVE for each point in the macrostructure, and equivalent material properties can be calculated prior to analysing the macroscopic model. In order to obtain equivalent properties, described by a fourth-order material tensor, a series of RVE analyses must be performed with specific boundary conditions.

For a homogenised orthotropic material, 6 analyses allow the calculation of the 9 stiffness coefficients Cij representing the stress–strain relationship. In this case, Eq. (6) can be written using the Voigt notation as [37]:

[σ11]   [C11 C12 C13  0   0   0 ] [ ε11]
[σ22]   [    C22 C23  0   0   0 ] [ ε22]
[σ33] = [        C33  0   0   0 ] [ ε33]
[σ23]   [            C44  0   0 ] [2ε23]
[σ13]   [  sym           C55  0 ] [2ε13]
[σ12]   [                    C66] [2ε12]
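For reference, the orthotropic stiffness matrix above can be assembled from its nine independent coefficients in a few lines; the following is a minimal NumPy sketch (illustrative, not part of the paper's code):

```python
import numpy as np

def orthotropic_stiffness(C11, C12, C13, C22, C23, C33, C44, C55, C66):
    """6x6 orthotropic stiffness matrix in Voigt notation (e.g. in GPa)."""
    C = np.zeros((6, 6))
    C[0, :3] = [C11, C12, C13]
    C[1, 1:3] = [C22, C23]
    C[2, 2] = C33
    C[3, 3], C[4, 4], C[5, 5] = C44, C55, C66
    return C + np.triu(C, 1).T  # mirror the upper triangle (symmetry)

# Voigt-notation stress for a given strain state, with engineering
# shear strains (2*eps23, 2*eps13, 2*eps12):
# sigma = orthotropic_stiffness(...) @ eps
```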

The problem of determining the values of microscopic properties for given macroscopic data is known as inverse design [6,7]. Inverse design belongs to the group of ill-posed problems, as different sets of microscopic parameters can fulfil the assumed macroscopic values. One of the techniques for solving such inverse design problems is the use of optimisation methods. Due to the presence of a large number of local extrema of the objective functions, global optimisation methods, such as evolutionary algorithms, artificial immune systems, particle swarm, and ant colony algorithms, are recommended [38]. In this paper, evolutionary algorithms (EAs) are applied to solve the inverse design problem [39].

The inverse design problem can be described as a constrained optimisation task: g(p) = ‖Cij(p) − Cij*‖ → min, where p is the vector of design variables, g(p) is the objective function, Cij(p) are the elastic constants dependent on the design variables, Cij* denotes the required stiffness coefficients and ‖·‖ is the Euclidean matrix norm.

Due to manufacturing and material property constraints, optimisation constraints are imposed on the design variables: bi^L ≤ pi ≤ bi^U, i = 1, 2, …, n, where n is the number of design variables and b^L, b^U are the vectors of lower and upper design variable constraints, respectively:

b^LU = [b^L b^U] =
[b1^L b1^U]
[b2^L b2^U]
[  ⋮    ⋮  ]
[bn^L bn^U]

Solving an inverse design problem requires multiple calculations of the objective function(s) value. This is particularly time-consuming when FEM is used to solve the boundary value problem. A way to reduce computational costs may be to use response surfaces (RS) as the metamodel [40].

The use of RS allows the highly nonlinear behaviour of the structure to be modelled, without the need to perform complex mathematical operations on the system matrices. RS is generated from precomputed sets of output parameters. The RS constitutes an approximate parametric model of the original system created from a precomputed set of models with different input parameter values. The quality of RS strongly depends on the shape of the exact response function being approximated, the number of data points, and the volume of the design space in which the model is built. To reduce the number of cases required for evaluation, an appropriate design of experiment (DoE) based on the type of expected response and the proposed RS should be used [41].

The calculated values of the output parameters are then used to construct the RS.

This paper adopts the 2nd-order polynomial method as the RS. This method uses an enhanced quadratic form to represent the relationship between inputs and outputs and is suitable for optimisation problems because it generates smooth functions with a single extremum.

The work assumes a very accurate fit of the metamodel, and the uncertainty of its parameters is ignored. As a result, only the uncertainty associated with the design variables is considered.

Once the RS has been created, an assessment of its quality should be carried out to ensure that the values of the output parameters are correctly represented. In this paper, the coefficient of determination R2, the predicted residual error sum of squares (PRESS) and the standard error of the estimate σest are used to assess the quality of the RS.
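A minimal sketch of how such a fit and its quality metrics can be computed is given below, assuming a full second-order polynomial basis and computing PRESS from the closed-form leave-one-out residuals (the paper additionally evaluates PRESS on an independent test set; the function names are illustrative):

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(P):
    """Full 2nd-order basis: 1, p_k, p_k*p_l (k <= l) for each sample row."""
    n, d = P.shape
    cols = [np.ones(n)] + [P[:, k] for k in range(d)]
    cols += [P[:, k] * P[:, l]
             for k, l in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

def fit_rs(P, y):
    """Least-squares RS fit plus the R2, PRESS and sigma_est metrics."""
    X = quadratic_design_matrix(P)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    ss_res, ss_tot = r @ r, ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)   # leverages
    press = ((r / (1.0 - h)) ** 2).sum()             # leave-one-out PRESS
    sigma_est = np.sqrt(ss_res / (len(y) - X.shape[1]))
    return beta, r2, press, sigma_est
```

For 5 input parameters the basis has 1 + 5 + 15 = 21 terms, which matches the number of polynomial coefficients reported later in the paper.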

DIRECTED INTERVALS AND DIRECTED INTERVAL ARITHMETIC

If the parameters of the system are not precisely determinable, they may be treated as uncertain and modelled as information granules [13]. In mechanical systems, uncertainties typically arise from both design and manufacturing processes. The most widely used granularity models for such uncertainties are rough sets, interval numbers, fuzzy numbers, and random variables.

Granular computing (GrC) can be introduced into both the numerical homogenisation and inverse design procedures considered in the paper. The uncertain model parameters can relate to geometry, material properties, loads, or boundary conditions. As a result of the analysis, ranges of the estimated quantity values can be obtained. When a specific range of parameter values is known, interval numbers allow the representation of the uncertainty.

Interval arithmetic is based on the interval representation of a number [42]: ā = [a⁻, a⁺] = {a ∈ ā : a⁻ ≤ a ≤ a⁺}, where a⁻ and a⁺ ∈ ℝ are the left and right ends of the interval ā. The central value of the interval is calculated as: cw(ā) = (a⁻ + a⁺)/2.

In the case where a⁻ = a⁺, the interval is called degenerate. Classical interval arithmetic is based on simple arithmetic for real numbers, which is extended to interval numbers. This results in the following operations on interval numbers: addition, subtraction, multiplication, division, multiplication by a scalar, and the inverse of an interval [43].

The main drawback of classical interval arithmetic is the lack of operations opposite to addition and inverse to multiplication [44]. As a result, e.g. when solving interval systems of equations, a widening of the intervals occurs. This effect can be significantly reduced by using directed interval numbers and directed interval arithmetic.

A directed interval number is defined as an ordered pair of real numbers: ā = [a⁻, a⁺] = {ā ∈ 𝔻 : a⁻, a⁺ ∈ ℝ}, where 𝔻 = ℙ ∪ 𝕀 is the set of all proper (ℙ) and all improper (𝕀) interval numbers with real ends [27].

The directed interval numbers are proper if a⁻ < a⁺ and improper if a⁻ ≥ a⁺. Additionally, the set ℤ = ℤ_ℙ ∪ ℤ_𝕀 contains all directed intervals containing the element 0: ℤ_ℙ = {ā ∈ ℙ : a⁻ ≤ 0 ≤ a⁺}, ℤ_𝕀 = {ā ∈ 𝕀 : a⁺ ≤ 0 ≤ a⁻}.

Each interval in set 𝔻 has two functionals [40]:

the 'direction' functional: τ(ā) = + if a⁻ ≤ a⁺, and τ(ā) = − if a⁻ > a⁺,

the 'sign' functional, defined for ā ∈ 𝔻∖ℤ: σ(ā) = + if a⁻, a⁺ > 0, and σ(ā) = − if a⁻, a⁺ < 0.

These functionals determine the result of an arithmetic operation performed on two directed intervals. Based on these, basic operations in directed interval arithmetic are defined as:

addition: ā, b̄ ∈ 𝔻: ā + b̄ = [a⁻ + b⁻, a⁺ + b⁺],

subtraction: ā, b̄ ∈ 𝔻: ā − b̄ = [a⁻ − b⁺, a⁺ − b⁻],

multiplication:

ā · b̄ =
[a^(−σ(b̄)) · b^(−σ(ā)), a^(σ(b̄)) · b^(σ(ā))], if ā, b̄ ∈ 𝔻∖ℤ,
[a^(σ(ā)τ(b̄)) · b^(−σ(ā)), a^(σ(ā)τ(b̄)) · b^(σ(ā))], if ā ∈ 𝔻∖ℤ, b̄ ∈ ℤ,
[a^(−σ(b̄)) · b^(σ(b̄)τ(ā)), a^(σ(b̄)) · b^(σ(b̄)τ(ā))], if ā ∈ ℤ, b̄ ∈ 𝔻∖ℤ,
[min{a⁻b⁺, a⁺b⁻}, max{a⁻b⁻, a⁺b⁺}], if ā, b̄ ∈ ℤ_ℙ,
[max{a⁻b⁻, a⁺b⁺}, min{a⁻b⁺, a⁺b⁻}], if ā, b̄ ∈ ℤ_𝕀,
0̄, if (ā ∈ ℤ_ℙ, b̄ ∈ ℤ_𝕀) or (ā ∈ ℤ_𝕀, b̄ ∈ ℤ_ℙ),

division (0 ∉ [b⁻, b⁺]):

ā / b̄ =
[a^(−σ(b̄)) / b^(σ(ā)), a^(σ(b̄)) / b^(−σ(ā))], if ā, b̄ ∈ 𝔻∖ℤ,
[a^(−σ(b̄)) / b^(σ(b̄)τ(ā)), a^(σ(b̄)) / b^(σ(b̄)τ(ā))], if ā ∈ ℤ, b̄ ∈ 𝔻∖ℤ.

The symbols σ and −σ occurring in the superscripts determine which end of the interval appears in the formula, depending on whether the number is proper or improper; −σ changes the particular end from left to right or vice versa.

The directed interval arithmetic defines two additional operators:

opposite of addition: ā ∈ 𝔻: −_D ā = [−a⁻, −a⁺],

inverse of multiplication: ā ∈ 𝔻∖ℤ: 1/_D ā = [1/a⁻, 1/a⁺],

which allow determining two additional directed operations:

directed subtraction: ā, b̄ ∈ 𝔻: ā −_D b̄ = [a⁻ − b⁻, a⁺ − b⁺],

directed division:

ā /_D b̄ =
[a^(−σ(b̄)) / b^(−σ(ā)), a^(σ(b̄)) / b^(σ(ā))], if ā, b̄ ∈ 𝔻∖ℤ,
[a^(−σ(b̄)) / b^(σ(b̄)), a^(σ(b̄)) / b^(σ(b̄))], if ā ∈ ℤ, b̄ ∈ 𝔻∖ℤ.

As a result, the operations ā −_D ā = 0̄ and ā /_D ā = 1̄ can be obtained. These operations allow the efficient application of subtraction and division with a significant decrease in unwanted interval widening [44]. Moreover, the difference of the interval numbers appearing in the objective function can take a value close to or equal to 0, which is advantageous in optimisation problems where the width of this difference can be minimised.
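The practical benefit of the directed operators is easy to reproduce; the following is a minimal Python sketch (illustrative, not the paper's implementation) contrasting classical and directed subtraction:

```python
from dataclasses import dataclass

@dataclass
class DInterval:
    lo: float  # left end a-
    hi: float  # right end a+

    def __sub__(self, b):            # classical subtraction
        return DInterval(self.lo - b.hi, self.hi - b.lo)

    def dsub(self, b):               # directed subtraction -_D
        return DInterval(self.lo - b.lo, self.hi - b.hi)

a = DInterval(15.2, 16.8)
print(a - a)       # DInterval(lo=-1.6, hi=1.6) -> classical widening
print(a.dsub(a))   # DInterval(lo=0.0, hi=0.0)  -> exact zero
```

The second result is exactly the property exploited by the objective functions in the next section, where the difference of two interval numbers must be able to approach zero.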

GRANULAR COMPUTATIONAL INVERSE DESIGN

In this paper, a granular approach is proposed to perform homogenisation and inverse design for inhomogeneous materials with uncertainties in their microstructure parameters. The Granular Computational Homogenisation (GCH) procedure is applied to perform the inverse design. The first step of GCH is related to the analysis of the material structure and the available property data. On the basis of the microstructure of the material, a geometrical model of the structure (RVE) is prepared and transferred to the FEM software. Material property data are used to create a numerical model of the microstructure, including the constitutive relationships. The decision about the geometry and the treatment of individual material properties as certain or uncertain determines the input parameters for the GCH procedure. The number of identified parameters affects the number of sample calculations in the DoE phase. The GCH procedure, together with numerical examples showing homogenisation results for granular linear and nonlinear heterogeneous materials, is described in detail in a previous publication [20].

The Granular Computational Inverse Design (GCID) problem is related to the search for ranges of properties of the model at the micro scale that result in a specific material behaviour at the macro scale. The problem is formulated as an optimisation task using granular computing and can be described as:

minimise f_g(p̄)
subject to: bi^L ≤ pi⁻, pi⁺ ≤ bi^U, i = 1, 2, …, n.

To create the objective function for the optimisation algorithm, the resulting interval stiffness coefficients are used to assess the consistency with the assumed uncertain values of the material properties. The measure of mismatch (distance) between two interval numbers is described by the norm: D([a⁻, a⁺], [b⁻, b⁺]) = √((a⁻ − b⁻)² + (a⁺ − b⁺)²).

This representation allows for a continuous and smooth objective function. The objective function for linear-elastic material properties is described by: fk(p̄) = f[D(C̄ij(p̄) −_D C̄ij*)], where C̄ij(p̄) is the interval function of a stiffness coefficient based on the RS, C̄ij* is the assumed output interval of the stiffness coefficient, and fk(p̄) is the function describing the k-th optimisation objective in multi-objective problems.

An additional objective of the material optimisation is related to the width of the design variables: w(p̄i) = |pi⁺ − pi⁻|.

A larger uncertainty width of the optimised design variables makes it easier to find a material with properties that meet the specified values. For multiple design variables, the widths of the parameters can vary considerably. To provide an equivalent treatment of the parameters, width normalisation is introduced. The normalised width w′(p̄i) of interval parameter p̄i is described as: w′(p̄i) = w(p̄i) / (bi^U − bi^L).

In the optimisation process, the minimum normalised width over all parameters is maximised: fk(p̄) = min_i w′(p̄i) → max, i = 1, …, n, where n is the number of design variables.
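A compact sketch of the two objectives described above, assuming design intervals stored as (lo, hi) pairs and a hypothetical callable c22_of that evaluates the interval response surface:

```python
import numpy as np

def distance(a, b):
    """Interval mismatch D([a-,a+],[b-,b+]) from the norm above."""
    return np.hypot(a[0] - b[0], a[1] - b[1])

def objectives(p, bounds, c22_of, c22_target):
    """g1: mismatch of the interval C22 (to be minimised);
    g2: smallest normalised design-variable width (to be maximised).
    p, bounds: lists of (lo, hi) tuples of equal length."""
    g1 = distance(c22_of(p), c22_target)
    g2 = min((hi - lo) / (bU - bL)
             for (lo, hi), (bL, bU) in zip(p, bounds))
    return g1, g2
```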

To solve optimisation tasks, global optimisation algorithms in the form of evolutionary algorithms are employed. For a single-objective optimisation task, the distributed evolutionary algorithm (DEA) is applied [45]. DEA is based on the concept of coevolutionary algorithms, where an entire population of individuals is divided into two or more subpopulations, usually with an identical number of individuals in each. Each subpopulation evolves almost independently, exchanging information with other subpopulations during the migration phase. DEA has been shown to be highly efficient and effective in many optimisation problems [38].

To solve a multi-objective optimisation problem, the MOOPTIM (MultiObjective OPTIMization tool) algorithm [46] is used. The algorithm is based on the Pareto concept and allows one to obtain a set of non-dominated solutions. MOOPTIM is an improved version of NSGA-II [47] with a different selection scheme and new as well as modified mutation and crossover operators. As presented in [48], MOOPTIM outperforms NSGA-II in some benchmark and engineering problems, especially for functions that are difficult to optimise, that is, strongly multimodal, with a non-convex or discontinuous Pareto front. To assess the quality of the resulting Pareto front, a weighted hypervolume indicator [49] is used.

The concept of GCID is illustrated in Fig. 1. In the first step, the optimisation problem is formulated (number of design variables, number of stiffness coefficients considered, constraints on the variables, number of criteria). Based on this information, an optimisation approach (use of a single- or multi-objective algorithm) is selected. The next step involves the Granular Computational Homogenisation procedure to calculate the output interval stiffness coefficients based on the interval variables. The output values are used to compute the values of the objective functions. These operations are performed until the stopping criterion of the optimisation algorithm is met.

Fig. 1.

Granular Computational Inverse Design scheme

A multi-criteria optimisation problem can be solved using multi-objective algorithms or single-criteria algorithms with additional assumptions, such as the use of a weighting method or changing objectives into optimisation constraints. The use of a multi-objective algorithm may require the use of additional nonlinear constraints related to the values of the objective function. As the result of the multi-objective algorithm is a Pareto front, the search area can be reduced by limiting the maximum distance between the required values of the stiffness coefficients and the values given by the optimisation algorithm. As a result of the optimisation, it is possible to obtain maximum ranges of uncertain microscopic properties of the homogeneous materials of which the final material is composed.

The GCID procedure assumes the use of regular optimisation algorithms (based on classical arithmetic) to solve the problem described by granular data. The application of an EA to problems with imprecise data is achieved by converting the design variables: two design variables representing the a⁻ and a⁺ ends of each proper interval input parameter are introduced. To ensure that proper intervals are maintained, the order of the end values is verified and adjusted as needed.
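A minimal sketch of this encoding step, assuming the chromosome stores the interval ends as a flat list (illustrative):

```python
def decode(x):
    """Map a flat EA chromosome [p1-, p1+, p2-, p2+, ...] to proper
    intervals, swapping the ends wherever the EA produced a reversed pair."""
    pairs = [(x[2 * i], x[2 * i + 1]) for i in range(len(x) // 2)]
    return [(min(lo, hi), max(lo, hi)) for lo, hi in pairs]
```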

NUMERICAL RESULTS

A fibre-reinforced composite with a uniform distribution of unidirectional fibres is considered. The RVE contains 9 uniformly distributed parallel fibres. The dimensions of the RVE are assumed as 30×30×30 μm (Fig. 2). The volume fraction of the reinforcement f depends on the diameter of the fibre df. The geometry is discretised into 29 484 high-quality hexahedral elements with quadratic shape functions (Hex20), resulting in 124 531 nodes and 373 593 DoFs.

Fig. 2.

The RVE model of the fibre-reinforced composite material: a) unit cell geometry, b) FEM mesh

Since a uniform fibre distribution is assumed, the material can be treated as orthotropic with two equivalent perpendicular directions (x2 and x3). As a result, the number of independent stiffness coefficients in Eq. (9) reduces to six due to the following equalities: C12 = C13, C22 = C33, C55 = C66.

The aim of the inverse design is to optimise the possible ranges of the input parameters at the microscale in order to obtain the presumed equivalent macroscopic linear material properties under normal strain loading in one direction. The microscopic material models are assumed to be isotropic.

The first objective is to obtain an uncertain value of the C22 coefficient of the stiffness matrix: C̄22* = [15.2, 16.8] GPa. This coefficient is related to the stress response to loading in the direction perpendicular to the fibre orientation and is crucial for determining the mechanical properties of the laminate. The second objective concerns the widths of the identified parameters' ranges within the assumed intervals.

The interval values of the elastic moduli and Poisson's ratios of the fibre and matrix materials, as well as the volume fraction of the reinforcement, are used as the design variables: p̄1 = Ēm [GPa], p̄2 = ν̄m [–], p̄3 = Ēf [GPa], p̄4 = ν̄f [–], p̄5 = f̄ [–].

It is assumed that the reinforcement volume fraction range is f ∈ [0.2, 0.4], while the assumed linearly-elastic material property ranges (variable constraints) are:

matrix Young's modulus Em ∈ [2, 8] GPa,

matrix Poisson's ratio νm ∈ [0.3, 0.4],

fibre Young's modulus Ef ∈ [50, 450] GPa,

fibre Poisson's ratio νf ∈ [0.2, 0.35].

As the introduced objectives are potentially contradictory, two approaches to the optimisation problem are applied. The first approach involves a single-objective evolutionary algorithm using a weighting method to combine both optimisation goals. In the second approach, a multi-objective evolutionary algorithm is used to determine the Pareto front for the two objective functions. To solve the inverse design problem, computational homogenisation based on the material microstructure representation, 2nd order polynomial RS for interval variables and evolutionary algorithms are used.

To solve the optimisation problem, an RS describing the material behaviour over a wide range of the identified parameters is created. To build the RS as a 2nd-order polynomial, a DoE is performed. The face-centred (CCF) variant of the Central Composite Design is used, resulting in 43 design points [41]. Numerical homogenisation with periodic boundary conditions is performed for each design point to calculate the output values.

All calculations were performed on a workstation notebook with an Intel Core i7-8750H 6-core 2.2 GHz processor and 32 GB of RAM. The computational time of the homogenisation for each design point was approximately 60 s, and the total computation time for all design points to obtain the response surface was approximately 45 min. The optimisation algorithms used within the methodology (population-based algorithms, here the evolutionary algorithm) require a very large number of objective function evaluations. The response surface, once constructed, provides almost instantaneous information about the value of the objective function.

After the design point calculations, the coefficients of the 2nd-order polynomial RS are calculated using the least squares method for each output parameter. The number of variable input parameters is 5, resulting in 21 polynomial coefficients. The RS is described by: C22^RS = β0 + Σ_{k=1..5} βk pk + Σ_{(k,l)} βkl pk pl, where (k,l) ∈ {(1,1), (1,2), (1,3), (1,4), (1,5), (2,2), (2,3), (2,4), (2,5), (3,3), (3,4), (3,5), (4,4), (4,5), (5,5)} and pξ is the ξ-th design variable value (p1 = Em, p2 = νm, p3 = Ef, p4 = νf, p5 = f).
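A minimal evaluator of this polynomial for crisp (scalar) inputs is sketched below; in the GCID procedure the same expression is evaluated with directed interval operations (the coefficient containers are illustrative):

```python
def c22_rs(p, beta0, beta_k, beta_kl):
    """Evaluate the 2nd-order RS at p = (Em, vm, Ef, vf, f).
    beta_k: 5 linear coefficients; beta_kl: dict keyed by 1-based
    index pairs (k, l) covering the 15 terms listed above."""
    val = beta0 + sum(b * x for b, x in zip(beta_k, p))
    for (k, l), b in beta_kl.items():
        val += b * p[k - 1] * p[l - 1]
    return val
```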

The values of the polynomial coefficients β and the quality metrics for C22^RS are collected in Tab. 1.

Polynomial coefficients for C22^RS and quality metrics

β0 2.663E+01
βk β1 = -8.969E-01; β2 = -1.410E+02; β3 = -3.450E-04; β4 = -3.622E-01; β5 = -4.214E+01
βkl β11 = 6.144E+00; β12 = 5.427E-04; β13 = 1.586E-01; β14 = 4.170E+00; β15 = 6.277E-03; β22 = 2.490E+00; β23 = 5.768E+01; β24 = 4.311E-03; β25 = 1.405E-02; β33 = 3.669E+00; β34 = -1.067E-02; β35 = 2.056E+02; β44 = -1.086E-05; β45 = -8.753E-01; β55 = 3.672E+01
R2 0.99586
PRESS 23.23325
σest 0.35107

Independent datasets were used to create and test the model. An additional 47 sets of random parameter combinations were used to calculate the PRESS metric. The quality metrics indicate a very good match between the samples and the RS.

Weighted sum approach

To solve the optimisation problem using a single-objective algorithm, a single objective function combining two optimisation goals was proposed: matching the obtained values of the stiffness coefficients with the expected ones (minimisation of g1(p̄)) and widening the normalised widths of the design parameters (maximisation of g2(p̄)). The optimisation problem is formulated as follows:

minimise f_g(p̄) = φ1 g1(p̄) − φ2 g2(p̄)
subject to:
g1(p̄) = D(C̄22(p̄) −_D C̄22*)
g2(p̄) = min_i w′(p̄i), i = 1, 2, …, 5
bi^L ≤ pi⁻, pi⁺ ≤ bi^U, i = 1, 2, …, 5

b^LU =
[ 2.0    8.0 ]
[ 0.3    0.4 ]
[ 50.0  450.0]
[ 0.2    0.35]
[ 0.2    0.4 ]

where w′(p̄i) is the normalised width of design parameter p̄i, and φ1 and φ2 are weighting coefficients. Three sets of weighting coefficients are considered: A: φ1 = 0.25, φ2 = 0.75; B: φ1 = 0.5, φ2 = 0.5; C: φ1 = 0.75, φ2 = 0.25. In order to apply a regular (non-interval) optimisation algorithm, both ends of each design parameter are treated as design variables, resulting in 10 design variables.
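Reusing the objectives() sketch from Section 4, the scalarised objective of the weighted-sum approach can be written as (illustrative):

```python
def weighted_objective(p, phi1, phi2, bounds, c22_of, c22_target):
    """f_g = phi1*g1 - phi2*g2, to be minimised; g2 enters with a minus
    sign because the normalised width is to be maximised."""
    g1, g2 = objectives(p, bounds, c22_of, c22_target)
    return phi1 * g1 - phi2 * g2
```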

The optimisation is performed by the DEA algorithm. The DEA parameters were set based on initial DEA runs and the authors' previous experience. The parameters of the DEA are:

number of subpopulations: sp_n = 2,

number of individuals in each subpopulation sp_size = 100,

simple crossover and Gaussian mutation probability psG = 1,

uniform mutation probability pfm = 0.1,

rank selection pressure: ps = 0.8,

number of generations (stopping criterion) gen_no = 100.

The results in the form of the best solution from 30 independent runs of the DEA for each set of weighting coefficients, with information about the average objective function value avg[f_g(p̄)] and its standard deviation σ[f_g(p̄)], the relative difference δw between the C̄22* and C̄22(p̄) widths: δw = |w(C̄22(p̄)) − w(C̄22*)| / w(C̄22*), and the relative difference δc between the C̄22* and C̄22(p̄) central values: δc = |cw(C̄22(p̄)) − cw(C̄22*)| / cw(C̄22*), are collected in Tab. 2.

The best results of the single-objective optimisation for different weighting coefficients

Variant | A | B | C
Weighting coefficients | φ1 = 0.25, φ2 = 0.75 | φ1 = 0.5, φ2 = 0.5 | φ1 = 0.75, φ2 = 0.25
p̄1 = Em [GPa] | [6.05064, 6.46002] | [4.91259, 5.15732] | [5.29892, 5.54251]
p̄2 = νm [–] | [0.37837, 0.38391] | [0.34833, 0.35504] | [0.36931, 0.37214]
p̄3 = Ef [GPa] | [443.933, 354.949] | [298.173, 331.484] | [303.429, 370.799]
p̄4 = νf [–] | [0.30482, 0.31628] | [0.24757, 0.27883] | [0.26779, 0.34706]
p̄5 = f [–] | [0.20564, 0.21752] | [0.37770, 0.38936] | [0.29828, 0.31968]
g1(p̄) [GPa] | 1.0987E-02 | 1.6975E-03 | 7.7079E-04
g2(p̄) [–] | 5.5390E-02 | 4.0799E-02 | 2.8280E-02
f_g(p̄) | -3.8796E-02 | -1.9551E-02 | -6.4919E-03
avg[f_g(p̄)] | -1.6084E-02 | -5.5497E-03 | 1.6779E-02
σ[f_g(p̄)] | 9.9484E-03 | 1.1636E-02 | 5.7618E-02
C̄22(p̄) [GPa] | [15.210668, 16.797376] | [15.200626, 16.801578] | [15.200105, 16.800763]
δw [%] | 2.5140E-02 | 6.8900E-03 | 2.7100E-03
δc [%] | 8.3075E-01 | 5.9500E-02 | 4.1120E-02

The values of the relative differences between C̄22* and C̄22(p̄), summarised in Tab. 2, indicate very good agreement with the assumed values of C̄22*. It can be seen that the weighting coefficients affect g1(p̄) and g2(p̄) in such a way that a higher φ1 value results in lower (better) values of g1(p̄) and lower (worse) values of g2(p̄), and vice versa. The standard deviation values for all combinations of φ1 and φ2 show the high repeatability of the f_g(p̄) results across all 30 DEA runs in each case.

Exemplary optimisation results for the five best solutions and the worst one for φ1 = 0.75 and φ2 = 0.25 are collected in Tab. 3. The results indicate that the objective function is multimodal, as different sets of design variables result in similar objective function values. In particular, the expected proportional effect of the value of parameter p̄3 on C̄22(p̄) is compensated for by the values of the other interval parameters.

Exemplary results of the single-objective optimisation for φ1 = 0.75, φ2 = 0.25

Rank | p̄1 = Em [GPa] | p̄2 = νm [–] | p̄3 = Ef [GPa] | p̄4 = νf [–] | p̄5 = f [–] | g1(p̄) [GPa] | g2(p̄) [–] | f_g(p̄) | C̄22(p̄) [GPa] | δw, δc [%]
1 | [5.29892, 5.54251] | [0.36931, 0.37214] | [303.429, 370.799] | [0.26779, 0.34706] | [0.29828, 0.31968] | 7.7079E-04 | 2.8280E-02 | -6.4919E-03 | [15.200105, 16.800763] | 2.7100E-03, 4.1120E-02
2 | [6.58579, 6.84576] | [0.31106, 0.31661] | [179.078, 257.930] | [0.23188, 0.34675] | [0.31161, 0.32033] | 1.0045E-02 | 4.3329E-02 | -3.2986E-03 | [15.195634, 16.790953] | 4.1916E-02, 2.9256E-01
3 | [6.02565, 6.17328] | [0.32825, 0.33579] | [260.926, 321.946] | [0.26363, 0.25501] | [0.33863, 0.31755] | 4.2133E-03 | 2.4606E-02 | -2.9916E-03 | [15.202722, 16.803215] | 1.8553E-02, 3.0813E-02
4 | [5.75597, 6.09119] | [0.37103, 0.36848] | [80.847, 99.605] | [0.22293, 0.29563] | [0.29018, 0.30173] | 4.9521E-03 | 2.5470E-02 | -2.6534E-03 | [15.204895, 16.800749] | 1.7638E-02, 2.5913E-01
5 | [5.79374, 6.38056] | [0.37613, 0.37806] | [317.142, 421.272] | [0.33603, 0.34437] | [0.23377, 0.23645] | 2.3205E-03 | 1.3425E-02 | -1.6159E-03 | [15.199783, 16.802310] | 6.5406E-03, 1.5794E-01
30 | [7.34717, 7.41631] | [0.34548, 0.34419] | [98.303, 450.000] | [0.28926, 0.29484] | [0.20000, 0.22936] | 5.8223E-02 | 1.1523E-02 | 4.0787E-02 | [15.194355, 16.857949] | 1.6345E-01, 3.9746E+00

The values of δw and δc indicate that there are very low discrepancies between the assumed and obtained widths and central values of interval stiffness coefficients.

Pareto approach

The optimisation problem for the Pareto approach is described as:

minimise f1(p̄), maximise f2(p̄)
subject to:
f1(p̄) = D(C̄22(p̄) −_D C̄22*)
f2(p̄) = min_i w′(p̄i), i = 1, 2, …, 5
bi^L ≤ pi⁻, pi⁺ ≤ bi^U, i = 1, …, 5

b^LU =
[ 2.0    8.0 ]
[ 0.3    0.4 ]
[ 50.0  450.0]
[ 0.2    0.35]
[ 0.2    0.4 ]

Nonlinear constraints in the form f1(p̄) < 2.5 GPa and f2(p̄) < 0.2 are introduced to limit the search area of the Pareto front. They have been implemented by means of an exterior penalty method [50].

Optimisation is performed by the MOOPTIM algorithm. The MOOPTIM parameters were set based on initial MOOPTIM runs and the authors' previous experience as:

number of individuals pop_size = 100,

Gaussian mutation probability pmG = 0.7,

Gaussian mutation range rmG = 0.2,

uniform mutation probability pmu = 0.1,

simple crossover probability psG = 0.1,

arithmetic crossover probability psG = 0.1,

number of generations (stopping criterion) gen_no = 200.

The results of the optimisation in the form of Pareto fronts for 10 executions of MOOPTIM are shown in Fig. 3. To compare the results with the weighted sum approach, the DEA results are also presented. Selected results for the Pareto approach are summarised in Tab. 4.

Fig. 3.

Pareto fronts and DEA results

Results of the multi-objective optimisation

Run | IH | Pareto point | p̄1 = Em [GPa] | p̄2 = νm [–] | p̄3 = Ef [GPa] | p̄4 = νf [–] | p̄5 = f [–] | f1(p̄) [GPa] | f2(p̄) [–] | C̄22(p̄) [GPa] | δw, δc [%]
1 | -0.26058 | min f1(p̄) | [6.21597, 6.45371] | [0.32362, 0.32760] | [149.183, 180.177] | [0.21958, 0.30889] | [0.32641, 0.34275] | 2.9865E-03 | 3.9623E-02 | [15.197072, 16.799410] | 1.0994E-02, 1.4612E-01
| | max f2(p̄) | [5.65513, 6.52451] | [0.30761, 0.32362] | [377.437, 450.000] | [0.31219, 0.33490] | [0.30399, 0.36323] | 2.4308E+00 | 1.4490E-01 | [13.106417, 18.035232] | 2.6823E+00, 2.0805E+02
2 | -0.28839 | min f1(p̄) | [6.64138, 6.87106] | [0.36113, 0.36698] | [71.940, 162.432] | [0.26203, 0.31893] | [0.22596, 0.23219] | 2.9716E-03 | 3.1145E-02 | [15.201644, 16.802470] | 1.2856E-02, 5.1625E-02
| | max f2(p̄) | [5.78269, 6.87106] | [0.32687, 0.34218] | [334.408, 450.000] | [0.29318, 0.35000] | [0.25825, 0.28928] | 2.2397E+00 | 1.5310E-01 | [13.032410, 17.363930] | 5.0114E+00, 1.7072E+02
3 | -0.29693 | min f1(p̄) | [6.02157, 6.24235] | [0.30924, 0.32460] | [313.892, 323.541] | [0.20476, 0.23224] | [0.34545, 0.35226] | 9.1474E-04 | 2.4120E-02 | [15.199641, 16.799159] | 3.7500E-03, 3.0125E-02
| | max f2(p̄) | [5.94323, 6.93317] | [0.30700, 0.32460] | [357.547, 450.000] | [0.28208, 0.35000] | [0.27794, 0.32512] | 2.3664E+00 | 1.6499E-01 | [13.038762, 17.763848] | 3.7418E+00, 1.9532E+02
4 | -0.28615 | min f1(p̄) | [5.67805, 5.81318] | [0.30305, 0.32223] | [318.653, 450.000] | [0.21592, 0.30478] | [0.38134, 0.38597] | 4.6047E-03 | 2.2522E-02 | [15.201917, 16.795817] | 7.0812E-03, 3.8125E-01
| | max f2(p̄) | [5.26435, 6.37166] | [0.34681, 0.36387] | [345.252, 450.000] | [0.31243, 0.35000] | [0.28161, 0.31635] | 2.4268E+00 | 1.7065E-01 | [13.389715, 18.416232] | 6.0642E-01, 2.1416E+02
5 | -0.27313 | min f1(p̄) | [5.49730, 5.83237] | [0.33979, 0.34532] | [377.152, 450.000] | [0.20000, 0.24683] | [0.33794, 0.34838] | 6.2008E-04 | 5.2240E-02 | [15.200134, 16.799395] | 1.4719E-03, 4.6188E-02
| | max f2(p̄) | [5.20406, 6.14267] | [0.31508, 0.33451] | [359.835, 450.000] | [0.30955, 0.33613] | [0.33212, 0.37497] | 2.4372E+00 | 1.5643E-01 | [13.098494, 18.034225] | 2.7103E+00, 2.0848E+02
6 | -0.30318 | min f1(p̄) | [6.74116, 7.13123] | [0.34609, 0.35073] | [326.580, 388.616] | [0.22717, 0.31078] | [0.22181, 0.23521] | 2.8838E-03 | 4.6350E-02 | [15.201307, 16.802570] | 1.2116E-02, 7.8937E-02
| | max f2(p̄) | [6.84060, 8.00000] | [0.31912, 0.34471] | [313.585, 450.000] | [0.28905, 0.35000] | [0.20547, 0.24324] | 2.4610E+00 | 1.8884E-01 | [13.588805, 18.660209] | 7.7817E-01, 2.1696E+02
7 | -0.29861 | min f1(p̄) | [6.07599, 6.13976] | [0.35545, 0.36251] | [66.934, 103.602] | [0.23487, 0.27334] | [0.29319, 0.31667] | 1.0141E-02 | 1.0628E-02 | [15.204597, 16.790961] | 1.3881E-02, 8.5225E-01
| | max f2(p̄) | [6.16085, 7.23273] | [0.30000, 0.31959] | [311.748, 381.206] | [0.24644, 0.27261] | [0.28919, 0.33169] | 2.3757E+00 | 1.7365E-01 | [13.514604, 18.474349] | 3.4522E-02, 2.0998E+02
8 | -0.28615 | min f1(p̄) | [7.06127, 7.38220] | [0.30000, 0.30279] | [357.869, 392.536] | [0.28429, 0.32087] | [0.27701, 0.29914] | 3.8161E-03 | -2.796E-02 | [15.201885, 16.803318] | 1.6259E-02, 8.9562E-02
| | max f2(p̄) | [5.11271, 6.11818] | [0.37069, 0.38602] | [326.957, 450.000] | [0.32087, 0.35000] | [0.24097, 0.27701] | 2.1827E+00 | 1.5331E-01 | [13.333662, 17.931799] | 2.2954E+00, 1.8738E+02
9 | -0.29202 | min f1(p̄) | [6.98408, 7.14029] | [0.32483, 0.33341] | [89.1451, 110.587] | [0.27858, 0.34763] | [0.27141, 0.28943] | 2.6195E-03 | 2.6035E-02 | [15.202586, 16.799585] | 6.7844E-03, 1.8756E-01
| | max f2(p̄) | [5.12698, 6.16768] | [0.36979, 0.38819] | [293.851, 450.000] | [0.30270, 0.33132] | [0.25029, 0.28474] | 2.3811E+00 | 1.7227E-01 | [13.529283, 18.496622] | 8.0953E-02, 2.1046E+02
10 | -0.29388 | min f1(p̄) | [6.41585, 6.79624] | [0.37605, 0.37959] | [50.000, 71.4883] | [0.20603, 0.24717] | [0.21665, 0.22781] | 2.9710E-03 | 3.5399E-02 | [15.199480, 16.802953] | 7.6031E-03, 2.1706E-01
| | max f2(p̄) | [6.41585, 7.39224] | [0.34290, 0.36104] | [346.451, 450.000] | [0.30325, 0.34305] | [0.22490, 0.25735] | 2.1655E+00 | 1.6221E-01 | [14.412384, 18.817226] | 3.8425E+00, 1.7530E+02

These include the minimum values of f1(p̄), the maximum values of f2(p̄), the values of the resulting design variables, and the values of the hypervolume indicator IH for the ideal point (0.0, 0.2) and nadir point (2.5, 0.0). As negative hypervolume values are generated by the algorithm described in [49], a lower IH value denotes a better approximation set.

The results show that the objective functions are contradictory; hence, an increase in the uncertainty of the design variables causes a drift away from the assumed value of the stiffness coefficient. However, the Pareto fronts show that there exists an individual with an f1(p̄) value close to 0 at a specific non-zero value of f2(p̄). The best result for the first objective function (f1(p̄) = 6.2008E-04) was obtained in the 5th run of the MOOPTIM algorithm, while the best result for the second objective function (f2(p̄) = 1.8884E-01) was obtained in the 6th run.

The average value of IH is -0.288657, while its standard deviation is 0.01291, indicating a high similarity of the hypervolume indicators across the MOOPTIM runs. The best IH value was obtained for the 6th run of the algorithm (purple triangles in Fig. 3), while the worst was achieved for the 1st run (red diamonds). The values of δw and δc for the Pareto points representing the best results for f1(p̄) show a very low difference between the assumed and obtained widths and central values of the interval stiffness coefficients, not exceeding 0.9% in the worst case.

The multi-objective algorithm also explores the possible stiffness coefficient matching for large uncertainties of the identified parameters. The highest values of f2(p̄) (limited to 0.2 by a nonlinear constraint) are in the range 0.14–0.19.

As in the case of the single-objective algorithm, the multi-objective algorithm calculates similar values of f1(p¯) and f2(p¯) for different sets of design parameters. The solved optimisation problem is multimodal, indicating that different materials can satisfy the optimisation objectives. This may be due to a relatively large number of design variables that describe the mechanical properties of the structure.

The presented approach belongs to the a-posteriori multi-criteria optimisation methods, in which a second phase is necessary to select a single solution. Since all solutions on the Pareto front are equivalent, additional criteria can be used, such as cost, manufacturability or material availability. Moreover, additional non-linear constraints may limit the search area and should be considered in the decision-making process.

CONCLUSIONS

The paper presents the Granular Computational Inverse Design procedure as a novel approach to the identification of interval microstructure parameters of inhomogeneous materials for desired incertitude macroscopic properties. As a result of the identification, interval parameters of the constituent geometry and the material properties at the microscale are obtained. Directed interval arithmetic is applied instead of the classical one to reduce the undesirable effect of interval widening caused by arithmetic operations. To speed up the calculations, response surfaces have been applied to build the metamodel. The Central Composite Design method has been used as the design of experiment because of its high precision. The quality of the obtained RS has been assessed using several quality metrics.

The GCID procedure has been verified on a fibre-reinforced composite with a uniform distribution of unidirectional fibres. Two approaches to the multi-objective optimisation are presented. A-priori methods (here the weighted sum approach) combine objectives using predefined weights and transform a multi-criteria problem into a single-criteria problem, which is solved here using a DEA algorithm. This method targets a specific region of the solution space, generating fewer candidate solutions. The second approach uses the MOOPTIM algorithm to generate Pareto optimal solutions without taking preferences into account. The decision maker can analyse different post-optimisation scenarios.

Single-criteria optimisation usually produces specific solutions that may not be optimal in different scenarios, whereas multi-criteria optimisation reveals a range of viable solutions. Multi-criteria algorithms like MOOPTIM explore a wider design space, exposing the multimodality of the optimisation problem, in which different sets of design variables can achieve similar values of the objective functions. The choice of approach depends on the needs of the researcher.

The results obtained show the high efficiency of the GCID procedure in both cases.

The presented methodology can be applied to parameter identification problems in material design and manufacturing processes with epistemic uncertainties. It is planned to extend the proposed approach to other types of information granularity (e.g. fuzzy numbers), novel hybrid materials including e.g. auxetics, and different material models (e.g. nonlinear materials). The extension may require the application of more sophisticated and accurate metamodels.
