Metamodel-Based Inverse Design of a Composite Material with Prescribed Interval Effective Elastic Properties
Published online: 26 June 2025
Page range: 268 - 278
Received: 17 Dec 2024
Accepted: 14 Apr 2025
DOI: https://doi.org/10.2478/ama-2025-0033
© 2025 Witold BELUCH et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The design of new engineering materials makes it possible to fill holes in macroscopic property-space maps. These new materials include particulate and fibrous composites, sandwich structures, foams, lattice structures and others [1]. The components of such materials are described by their properties and shape on a lower (e.g. micro) scale. The macroscopic properties of non-homogeneous materials, mechanical or other, can be evaluated using homogenisation methods. Computational methods, mainly the finite and boundary element methods, have gained increasing attention from engineers and researchers due to their versatility, also in problems of multiscale modelling and homogenisation [2,3,4,5]. Homogenisation problems consist in evaluating the effective properties (e.g. elastic) of non-homogeneous materials on the macro scale. This involves the solution of a boundary value problem (BVP) at the microscale: the simulation of a representative volume element (RVE) with a given microstructural topology and parameters. Conversely, one can formulate and solve an inverse problem to identify the microstructure topology and parameters based on the macroscopic properties of the homogenised material [6,7]. Among the crucial factors related to the analysis of heterogeneous engineering materials, uncertainties are currently of interest due to their impact on the precise design and manufacturing of such structures. Therefore, the development of computational approaches capable of dealing with uncertainties is crucial for the successful design and manufacturing of new materials.
Observed uncertainty falls into one of two categories: aleatoric or epistemic [8,9]. Aleatoric uncertainty is natural, random, and irreducible. Epistemic uncertainty is due to incomplete scientific knowledge and can be reduced by new insights. From the probabilistic point of view, the uncertainty of engineering quantities can be grouped into stochastic uncertainty (known probability), incertitude (unknown probability) or ignorance (fixed values). The second category (incertitude), which includes intervals and fuzzy numbers, is very common in engineering problems, including up-to-date experimental material characterisation methods, e.g. [10]. From the point of view of information processing, this representation of quantities is also known as information granularity [11,12,13]. In engineering practice, this representation is useful when insufficient data availability does not allow the engineer to describe the process as stochastic. In the context of modelling, another classification of uncertainty can be introduced: uncertainty induced by data, by components (model uncertainty), and by structure (structural uncertainty) [9,14]. In general, incertitude uncertainties in geometry, material properties, loading, and boundary conditions significantly affect structural analysis, introducing variability into performance estimations and leading to potential inaccuracies [15]. By including imperfect data in the quantification of uncertainty, more reliable and relevant results can be obtained that better reflect the complexity and uncertainty present in real-world conditions. Note that at the preparation stage of metamodel-based computational homogenisation, the boundary conditions or loading for the RVE with uncertain parameters (geometry, material parameters and phase bonding conditions) can be precise, as they are driven by the homogenisation procedure, which imposes macro strains (or stresses) and their increments for non-linear structures. However, at a possible later stage of analysing a macro-structure, which is beyond the scope of the present work, the macro strains or stresses transferred to the RVE as boundary conditions can be uncertain due to all the macro-structure uncertainties under consideration.
Recently, researchers have considered incertitude uncertainty to be of practical importance in engineering problems, including homogenisation and multi-scale modelling. One of the basic approaches applied to the numerical modelling of mechanical and micromechanical structures with incertitude properties is the interval and fuzzy finite element method. Note that its efficiency is deteriorated by a degree of conservatism (overestimation of the uncertainty in the output) introduced during the interval matrix assembly phase [16]. In [17], 2D composite structures were analysed with the application of Taylor expansions of effective elasticity matrices and sensitivity analysis. In [18], stochastic FEM was applied to the analysis of a random RVE to determine the respective limits and evaluate the fuzzy Young's modulus of a composite material. The modulus was applied to the dynamic simulation of multi-body structures to obtain fuzzy response curves. In [19], a fuzzy RVE was introduced that allows one to consider the spatial variability at the ply level in laminates. Fuzzy FEM was coupled with a radial basis function metamodel, and global responses in dynamics and stability problems were obtained. In [20], fuzzy numbers were applied to the computational homogenisation of composite materials with fuzzy microstructure parameters.
One can observe that a recent approach to the solution of the BVP on the micro scale in complex problems involving homogenisation and multi-scale modelling is to shift the RVE numerical modelling to an offline stage, outside the main loop of the procedure under consideration: multiscale simulation, optimisation or identification. This preliminary stage prepares data and creates a model for metamodelling or a data-driven approach. Such an approach employs metamodels (surrogates) built from observations computed with high-fidelity numerical models, usually analysed by the FEM. The metamodels include polynomial response surfaces [20], radial basis functions [19,21], Kriging [22,23], artificial neural networks [24], combined analytical-numerical metamodels [25], and others. This leads to a substantial reduction in the overall computation time compared to the traditional approach based on time-consuming in-loop simulations of the RVE. The approach with offline metamodel-based RVE simulations also has potential advantages in inverse problems involving incertitude uncertainties, due to the substantial number of calculations related to interval arithmetic.
The goal of the present work is to formulate and solve a novel problem of the inverse design of composite materials with desired incertitude (interval) macroscopic properties and unknown incertitude microstructure data. To solve the problem efficiently, a metamodelling approach is applied with an appropriate design of experiment (DoE) for numerical calculations based on a high-fidelity finite element model built in Ansys software. The DoE takes into account the admissible ranges of the geometry and material parameters of the composite microstructure. The minimised objective function depends on the desired bounds of the macroscopic elastic properties of the designed material. In some situations, e.g. manufacturing, it may be more convenient to have wider ranges of the input microstructure parameters; therefore, these ranges are also incorporated into the objectives. For interval analysis, directed interval arithmetic is applied, which is considered efficient in solving engineering problems because it includes cancellation laws for both addition and multiplication [26,27,28,29,30]. In identification problems, the interval parameters are represented as pairs of real numbers (components of the directed intervals). Such an approach allows for the application of regular evolutionary algorithms for real variables. The inverse problem is solved both by scalarisation with presumed weights for the objective functions and by the Pareto frontier approach. A problem concerning a unidirectional fibre-reinforced composite is solved and the results are presented. While similar methodologies for computational homogenisation are documented in the literature, we believe that no corresponding solutions of inverse design problems for inhomogeneous materials with epistemic incertitude structural parameters, including the multi-objective identification (optimisation) approach, have been reported.
The paper is organised as follows: Section 2 contains an introduction to computational homogenisation and the related inverse problem, as well as the design of experiment and a description of the applied metamodels. Section 3 introduces the notation and operators of interval arithmetic and directed interval arithmetic. In Section 4, the idea and formulation of the inverse problem involving computational homogenisation with incertitude input parameters of the microstructure and a desired incertitude of the effective elastic constants are presented. In Section 5, numerical examples are solved with the weighted objective function and Pareto frontier approaches. Section 6 contains conclusions.
Multiscale modelling allows the structure to be modelled at different length scales. One of the elements of multiscale modelling that enables the analysis of heterogeneous materials, such as composites or porous materials, at the microscale is homogenisation. Among the different homogenisation methods, computational homogenisation is the most versatile [2].
Computational homogenisation [31] allows equivalent macroscopic material properties to be calculated from microscopic data. The basic idea of computational homogenisation is to represent the microstructure of a material by a statistically representative piece of its geometry, called a representative volume element (RVE). Stress analysis for RVE provides detailed information about the material's macro-scale behaviour. The analysis can be carried out using numerical methods such as the finite element method (FEM) [32] or the boundary element method (BEM) [33].
The RVE represents the structure of the entire medium (or a portion of it in the case of local periodicity) and thus contains all the information required to fully describe both the structure and properties of that medium. The RVE must satisfy the scale separation condition, the Hill-Mandel condition, and the imposed boundary conditions [34].
The separation of scales condition assumes that the RVE geometry must be of an appropriate size:

$$l_{\text{micro}} \ll l_{\text{RVE}} \ll l_{\text{macro}} \tag{1}$$

where $l_{\text{micro}}$, $l_{\text{RVE}}$ and $l_{\text{macro}}$ are characteristic lengths of the microstructural heterogeneities, the RVE and the macrostructure, respectively.
The Hill-Mandel condition describes the equality of the average energy density at the microscale and the macroscopic energy density at the macrostructure point corresponding to the RVE location:

$$\frac{1}{V}\int_{V} \boldsymbol{\sigma} : \boldsymbol{\varepsilon} \, dV = \boldsymbol{\sigma}^{M} : \boldsymbol{\varepsilon}^{M} \tag{2}$$

where $V$ is the RVE volume and the superscript $M$ denotes the macroscopic (averaged) quantities.
Boundary conditions have to satisfy the Hill-Mandel condition, e.g. in the form of periodic, linear displacement, or uniform traction boundary conditions. In this paper, periodic boundary conditions are used:

$$\boldsymbol{u}(\boldsymbol{x}^{+}) - \boldsymbol{u}(\boldsymbol{x}^{-}) = \boldsymbol{\varepsilon}^{M}\,(\boldsymbol{x}^{+} - \boldsymbol{x}^{-}), \qquad \boldsymbol{t}(\boldsymbol{x}^{+}) = -\,\boldsymbol{t}(\boldsymbol{x}^{-}) \tag{3}$$

where $\boldsymbol{x}^{+}$ and $\boldsymbol{x}^{-}$ denote corresponding points on opposite faces of the RVE, $\boldsymbol{u}$ are displacements and $\boldsymbol{t}$ are tractions.
Periodic boundary conditions allow for the determination of equivalent material properties with greater accuracy and with fewer internal inclusions or voids in the RVE than displacement and traction boundary conditions [35].
A heterogeneous material is usually assumed to consist of two or more homogeneous phases that obey the laws of continuum mechanics. In this paper, it is assumed that there is an ideal contact between the phases. The behaviour of an elastic body made of a linear isotropic material (single phase) under external loads is described by [36]:
geometrical relations:

$$\varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right) \tag{4}$$

constitutive law (Hooke's law):

$$\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij} \tag{5}$$

equilibrium equations:

$$\sigma_{ij,j} + b_i = 0 \tag{6}$$

where $u_i$ denotes displacements, $\varepsilon_{ij}$ the strain tensor, $\sigma_{ij}$ the stress tensor, $b_i$ body forces, $\delta_{ij}$ the Kronecker delta, and $\lambda$ and $\mu$ the Lamé constants.

Using Equations (5) and (6), the equilibrium equations expressed by displacements (Navier-Lamé equations) are obtained:

$$\mu\,u_{i,jj} + (\lambda + \mu)\,u_{j,ji} + b_i = 0 \tag{7}$$
For linear elastic materials, there is no need to analyse the RVE for each point in the macrostructure, and equivalent material properties can be calculated prior to analysing the macroscopic model. In order to obtain equivalent properties, described by a fourth-order material tensor, a series of RVE analyses must be performed with specific boundary conditions.
For a homogenised orthotropic material, six analyses (three normal and three shear unit macro-strain load cases) allow the calculation of the nine independent stiffness coefficients $C_{ij}$ of the orthotropic elasticity matrix.
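As an illustration of this procedure, the following minimal Python sketch assembles the Voigt stiffness matrix column by column from the six unit macro-strain load cases; `solve_rve` is a hypothetical placeholder (not the Ansys implementation used in the paper) for the FEM solution of the RVE boundary value problem.

```python
import numpy as np

def effective_stiffness(solve_rve):
    """Assemble the 6x6 stiffness matrix (Voigt notation) of the
    homogenised material from six RVE analyses.

    `solve_rve(eps_macro)` is a hypothetical placeholder: it should solve
    the RVE boundary value problem under the imposed macro strain (with
    periodic boundary conditions) and return the volume-averaged stress
    vector of length 6.
    """
    C = np.zeros((6, 6))
    for k in range(6):            # three normal and three shear load cases
        eps = np.zeros(6)
        eps[k] = 1.0              # unit macro strain in the k-th component
        C[:, k] = solve_rve(eps)  # averaged stresses form the k-th column
    return C
```

For a linear-elastic RVE, $\bar{\boldsymbol{\sigma}} = \mathbf{C}\,\bar{\boldsymbol{\varepsilon}}$, so the averaged stress response to each unit strain state is directly one column of the effective stiffness matrix.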
The problem of determining the values of microscopic properties for given macroscopic data is known as inverse design [6,7]. Inverse design belongs to the group of ill-posed problems, as different sets of microscopic parameters can fulfil the assumed macroscopic values. One of the techniques for solving such inverse design problems is the use of optimisation methods. Due to the presence of a large number of local extrema of the objective functions, global optimisation methods, such as evolutionary algorithms, artificial immune systems, particle swarm, and ant colony algorithms, are recommended [38]. In this paper, evolutionary algorithms (EAs) are applied to solve the inverse design problem [39].
The inverse design problem can be described as a constrained optimisation task:

$$\min_{\mathbf{x}}\; F(\mathbf{x}), \qquad \mathbf{x} = \left[x_1, x_2, \ldots, x_n\right]$$

where $\mathbf{x}$ is the vector of design variables describing the microstructure and $F$ measures the mismatch between the computed and the prescribed macroscopic properties.
Due to manufacturing and material property constraints, the optimisation constraints are related to the design variables:

$$x_i^{\min} \le x_i \le x_i^{\max}, \qquad i = 1, \ldots, n$$
Solving an inverse design problem requires multiple calculations of the objective function(s) value. This is particularly time-consuming when FEM is used to solve the boundary value problem. A way to reduce computational costs may be to use response surfaces (RS) as the metamodel [40].
The use of RS allows the highly nonlinear behaviour of a structure to be modelled without the need to perform complex mathematical operations on the system matrices. The RS constitutes an approximate parametric model of the original system, created from a precomputed set of high-fidelity models with different input parameter values. The quality of the RS strongly depends on the shape of the exact response function being approximated, the number of data points, and the volume of the design space in which the model is built. To reduce the number of cases required for evaluation, an appropriate design of experiment (DoE) based on the type of expected response and the proposed RS should be used [41]. The output parameter values calculated at the DoE points are then used to fit the RS.
This paper adopts the 2nd-order polynomial method as the RS. This method uses an enhanced quadratic form to represent the relationship between inputs and outputs and is suitable for optimisation problems because it generates smooth functions with a single extremum.
The work assumes a very accurate fit of the metamodel, and the uncertainty of its parameters is ignored. As a result, only the uncertainty associated with the design variables is considered.
Once the RS has been created, an assessment of its quality should be carried out to ensure that the values of the output parameters are correctly represented. In this paper, the coefficient of determination $R^2$, among other quality metrics, is used.
If the parameters of the system are not precisely determinable, they may be treated as uncertain and modelled as information granules [13]. In mechanical systems, uncertainties typically arise from both design and manufacturing processes. The most widely used granularity models for such uncertainties are rough sets, interval numbers, fuzzy numbers, and random variables.
Granular computing (GrC) can be introduced into both the numerical homogenisation and the inverse design procedures considered in the paper. The uncertain model parameters can relate to geometry, material properties, loads, or boundary conditions. As a result of the analysis, ranges of the estimated quantity values can be obtained. When a specific range of parameter values is known, interval numbers allow the representation of uncertainty.
Interval arithmetic is based on the interval representation of a single number [42]:

$$\bar{x} = \left[\underline{x}, \overline{x}\right] = \left\{x \in \mathbb{R} : \underline{x} \le x \le \overline{x}\right\}$$
In the case where $\underline{x} = \overline{x}$, the interval degenerates to a single real (crisp) number.
The main drawback of classical interval arithmetic is the lack of operations opposite to addition and inverse to multiplication [44]. As a result, e.g. when solving interval systems of equations, a widening of the intervals occurs. This effect can be significantly reduced by using directed interval numbers and directed interval arithmetic.
A directed interval number is defined as an ordered pair of real numbers:

$$\bar{x} = \left[x^{-}, x^{+}\right], \qquad x^{-}, x^{+} \in \mathbb{R}$$

where the condition $x^{-} \le x^{+}$ is not required.
The directed interval numbers are proper if $x^{-} \le x^{+}$, degenerate if $x^{-} = x^{+}$, and improper if $x^{-} > x^{+}$. The set of all directed interval numbers is denoted by $\mathbb{D}$.
Each interval in the set $\mathbb{D}$ has two functionals [40]:

the 'direction' functional:

$$\tau(\bar{x}) = \begin{cases} +, & x^{-} \le x^{+} \\ -, & x^{-} > x^{+} \end{cases}$$

the 'sign' functional (defined for intervals that do not contain zero):

$$\sigma(\bar{x}) = \begin{cases} +, & x^{-} > 0 \ \text{and}\ x^{+} > 0 \\ -, & x^{-} < 0 \ \text{and}\ x^{+} < 0 \end{cases}$$
These functionals determine the result of an arithmetic operation performed on two directed intervals. Based on these, basic operations in directed interval arithmetic are defined as:
addition:

$$\bar{x} + \bar{y} = \left[x^{-} + y^{-},\; x^{+} + y^{+}\right]$$

subtraction:

$$\bar{x} - \bar{y} = \left[x^{-} - y^{+},\; x^{+} - y^{-}\right]$$

multiplication and division (with $0 \notin [y^{-}, y^{+}]$ for division): these are defined case-wise, with the endpoint combinations of the operands selected according to their sign and direction functionals; for proper operands they reduce to the classical interval operations.
The directed interval arithmetic defines two additional operators:

opposite of addition:

$$-_{D}\,\bar{x} = \left[-x^{-}, -x^{+}\right]$$

inverse of multiplication ($0 \notin \bar{x}$):

$$1/_{D}\,\bar{x} = \left[1/x^{-},\; 1/x^{+}\right]$$

which allow determining two additional directed operations:

directed subtraction:

$$\bar{x} -_{D} \bar{y} = \left[x^{-} - y^{-},\; x^{+} - y^{+}\right]$$

directed division (for sign-consistent operands):

$$\bar{x} \,/_{D}\, \bar{y} = \left[x^{-}/y^{-},\; x^{+}/y^{+}\right]$$
As a result, the cancellation laws $\bar{x} -_{D} \bar{x} = 0$ and $\bar{x} \,/_{D}\, \bar{x} = 1$ hold, which prevents the artificial widening of intervals in sequences of arithmetic operations.
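The cancellation property is easy to verify numerically. Below is a minimal Python sketch (not the authors' implementation) of directed intervals restricted to addition and the two subtractions; it contrasts the classical widening of $\bar{x} - \bar{x}$ with the exact cancellation of $\bar{x} -_{D} \bar{x}$.

```python
from dataclasses import dataclass

@dataclass
class DInterval:
    lo: float  # x^-
    hi: float  # x^+ (may be smaller than lo for improper intervals)

    def __add__(self, y):
        # classical/directed addition: [x^- + y^-, x^+ + y^+]
        return DInterval(self.lo + y.lo, self.hi + y.hi)

    def __sub__(self, y):
        # classical subtraction: [x^- - y^+, x^+ - y^-]
        return DInterval(self.lo - y.hi, self.hi - y.lo)

    def dsub(self, y):
        # directed subtraction: [x^- - y^-, x^+ - y^+]
        return DInterval(self.lo - y.lo, self.hi - y.hi)

x = DInterval(2.0, 3.0)
print(x - x)      # DInterval(lo=-1.0, hi=1.0) -> classical widening
print(x.dsub(x))  # DInterval(lo=0.0, hi=0.0)  -> exact cancellation
```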
In this paper, a granular approach is proposed to perform homogenisation and inverse design for inhomogeneous materials with uncertain microstructure parameters. The Granular Computational Homogenisation (GCH) procedure is applied to perform the inverse design. The first step of GCH is related to the analysis of the material structure and the available property data. Based on the microstructure of the material, a geometrical model of the structure (RVE) is prepared and transferred to the FEM software. Material property data are used to create a numerical model of the microstructure, including the constitutive relationships. The decision about the geometry and about treating individual material properties as certain or uncertain yields the input parameters of the GCH procedure. The number of identified parameters affects the number of sample calculations in the DoE phase. The GCH procedure, together with numerical examples showing homogenisation results for granular linear and nonlinear heterogeneous materials, is described in detail in the previous publication [20].
The Granular Computational Inverse Design (GCID) problem is related to the search for ranges of properties of the model on the micro scale that result in a specific material behaviour on the macro scale. The problem is formulated as an optimisation task using granular computing and can be described as: find the interval design variables $\bar{x}_i = [x_i^{-}, x_i^{+}]$ for which the resulting interval effective properties match the prescribed ones, subject to the admissible ranges of the microstructure parameters.
To create the objective function for the optimisation algorithm, the resulting interval stiffness coefficients are used to assess the consistency with the assumed uncertain values of the material properties. The measure of mismatch (distance) between two interval numbers is described by a norm over the interval endpoints, here written in the Euclidean form:

$$d\left(\bar{x}, \bar{y}\right) = \sqrt{\left(x^{-} - y^{-}\right)^2 + \left(x^{+} - y^{+}\right)^2}$$
This representation allows for a continuous and smooth objective function. The objective function for linear-elastic material properties is described by:

$$f_1 = d\left(\bar{C}, \bar{C}^{*}\right)$$

where $\bar{C}$ is the interval stiffness coefficient computed by the GCH procedure and $\bar{C}^{*}$ is its prescribed (target) interval value; for several coefficients, the distances are summed.
An additional objective of the material optimisation is related to the current width of the design variables:

$$w\left(\bar{x}\right) = \left|x^{+} - x^{-}\right|$$
A larger uncertainty width of the optimised design variables makes it easier to find a material with properties that meet the specified values. For multiple design variables, the widths of the parameters can vary considerably. To provide an equivalent treatment of the parameters, width normalisation is introduced. The normalized width $w_n$ relates the current width of a variable to its admissible range:

$$w_n\left(\bar{x}_i\right) = \frac{w\left(\bar{x}_i\right)}{x_i^{\max} - x_i^{\min}}$$
In the optimisation process, the minimum normalized width over all parameters is maximized:

$$f_2 = \min_i \; w_n\left(\bar{x}_i\right) \rightarrow \max$$
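A compact Python sketch of the two objectives, under the endpoint-norm form assumed above (the interval and bound values in the usage lines are illustrative, not results from the paper):

```python
import numpy as np

def interval_distance(a, b):
    """Euclidean distance between interval endpoints (assumed norm form)."""
    return np.hypot(a[0] - b[0], a[1] - b[1])

def f1(C_pred, C_target):
    """Mismatch between the computed and the prescribed interval stiffness."""
    return interval_distance(C_pred, C_target)

def f2(x_intervals, x_bounds):
    """Minimum normalised width of the interval design variables
    (to be maximised)."""
    return min(abs(hi - lo) / (bmax - bmin)
               for (lo, hi), (bmin, bmax) in zip(x_intervals, x_bounds))

# illustrative call: one stiffness target and two interval design variables
print(f1((15.20, 16.80), (15.20, 16.80)))   # 0.0 -> perfect match
print(f2([(5.3, 5.5), (0.30, 0.34)],        # widths 0.2/3.0 and 0.04/0.10
         [(5.0, 8.0), (0.30, 0.40)]))       # -> min is ~0.0667
```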
To solve optimisation tasks, global optimisation algorithms in the form of evolutionary algorithms are employed. For a single-objective optimisation task, the distributed evolutionary algorithm (DEA) is applied [45]. DEA is based on the concept of coevolutionary algorithms, where an entire population of individuals is divided into two or more subpopulations, usually with an identical number of individuals in each. Each subpopulation evolves almost independently, exchanging information with other subpopulations during the migration phase. DEA has been shown to be highly efficient and effective in many optimisation problems [38].
To solve the multi-objective optimisation problem, the MOOPTIM (MultiObjective OPTIMization tool) algorithm [46] is used. The algorithm is based on the Pareto concept and allows one to obtain a set of non-dominated solutions. MOOPTIM is an improved version of NSGA-II [47] with a different selection scheme as well as new and modified mutation and crossover operators. As presented in [48], MOOPTIM outperforms NSGA-II in some benchmark and engineering problems, especially for functions that are difficult to optimise, that is, strongly multimodal, with a non-convex or discontinuous Pareto front. To assess the quality of the resulting Pareto front, a weighted hypervolume indicator [49] is used.
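For reference, the Python sketch below computes the plain (unweighted) two-dimensional hypervolume with respect to a reference point, assuming both objectives are minimised; the weighted variant used in the paper additionally weights regions of the objective space.

```python
def hypervolume_2d(front, ref):
    """Plain 2D hypervolume of a Pareto front, both objectives minimised,
    measured against the reference point `ref`."""
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1v, f2v in pts:               # sweep along the first objective
        if f2v < prev_f2:              # skip dominated points
            hv += (ref[0] - f1v) * (prev_f2 - f2v)
            prev_f2 = f2v
    return hv

front = [(0.1, 0.9), (0.3, 0.5), (0.7, 0.2)]
print(hypervolume_2d(front, ref=(1.0, 1.0)))  # 0.09 + 0.28 + 0.09 = 0.46
```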
The concept of GCID is illustrated in Fig. 1. In the first step, the optimisation problem is formulated (number of design variables, number of stiffness coefficients considered, constraints on the variables, number of criteria). Based on this information, an optimisation approach (use of a single- or multi-objective algorithm) is selected. The next step involves the Granular Computational Homogenisation procedure to calculate the output interval stiffness coefficients based on the interval variables. The output values are used to compute the values of the objective functions. These operations are performed until the stopping criterion of the optimisation algorithm is met.

Fig. 1. Granular Computational Inverse Design scheme
A multi-criteria optimisation problem can be solved using multi-objective algorithms or single-criteria algorithms with additional assumptions, such as the use of a weighting method or changing objectives into optimisation constraints. The use of a multi-objective algorithm may require the use of additional nonlinear constraints related to the values of the objective function. As the result of the multi-objective algorithm is a Pareto front, the search area can be reduced by limiting the maximum distance between the required values of the stiffness coefficients and the values given by the optimisation algorithm. As a result of the optimisation, it is possible to obtain maximum ranges of uncertain microscopic properties of the homogeneous materials of which the final material is composed.
The GCID procedure assumes the use of regular optimisation algorithms (based on classical arithmetic) to solve the problem described by granular data. The application of an EA to problems with imprecise data is achieved by converting the design variables: each interval variable is represented by two real-valued design variables, its endpoints $x^{-}$ and $x^{+}$, which are processed by the standard evolutionary operators.
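A minimal Python sketch of this encoding (the variable names and gene values are illustrative): each consecutive pair of genes in the real-valued chromosome forms one directed interval variable.

```python
def decode(chromosome):
    """Group a flat real-valued EA chromosome into directed intervals:
    genes (2i, 2i+1) become the endpoints (x-, x+) of the i-th variable."""
    return [(chromosome[2 * i], chromosome[2 * i + 1])
            for i in range(len(chromosome) // 2)]

# five interval design variables -> ten real genes (values illustrative)
genes = [5.3, 5.5, 0.369, 0.372, 303.4, 370.8, 0.268, 0.347, 0.298, 0.320]
print(decode(genes))  # [(5.3, 5.5), (0.369, 0.372), (303.4, 370.8), ...]
```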
A fibre-reinforced composite with a uniform distribution of unidirectional fibres is considered. The RVE contains 9 uniformly distributed parallel fibres. The dimensions of the RVE are assumed as 30×30×30 μm (Fig. 2). The volume fraction of the reinforcement is treated as one of the uncertain design variables.

Fig. 2. The RVE model of the fibre-reinforced composite material: a) unit cell geometry, b) FEM mesh
Since a uniform fibre distribution is assumed, the material can be treated as orthotropic with two equivalent perpendicular directions (i.e. as transversely isotropic).
The aim of the inverse design is to optimise the possible ranges of input parameters at the microscale in order to obtain the presumed equivalent macroscopic linear material properties due to normal strain loading in one direction. Microscopic material models are assumed to be isotropic ones.
The first objective is to obtain an uncertain value of the effective stiffness coefficient $\bar{C}_{11}$ within the prescribed interval; the second one is to maximise the widths of the uncertain input parameters.
The interval values of elastic moduli, Poisson’s ratios of fibre and matrix materials, as well as the volume fraction of the reinforcement are used as the design variables:
the reinforcement volume fraction $\bar{V}_f$, the matrix Young's modulus $\bar{E}_m$ and Poisson's ratio $\bar{\nu}_m$, and the fibre Young's modulus $\bar{E}_f$ and Poisson's ratio $\bar{\nu}_f$. Admissible ranges are assumed for each of these variables.
As the introduced objectives are potentially contradictory, two approaches to the optimisation problem are applied. The first approach involves a single-objective evolutionary algorithm using a weighting method to combine both optimisation goals. In the second approach, a multi-objective evolutionary algorithm is used to determine the Pareto front for the two objective functions. To solve the inverse design problem, computational homogenisation based on the material microstructure representation, 2nd order polynomial RS for interval variables and evolutionary algorithms are used.
To solve the optimisation problem, an RS describing the material behaviour over a wide range of the identified parameters is created. To create the RS in terms of the 2nd-order polynomial, the DoE is performed. The CCF (face-centred) variant of the Central Composite Design is used, resulting in 43 design points ($2^5 = 32$ factorial corner points, $2 \cdot 5 = 10$ face-centred axial points and 1 centre point) [41]. The output values are calculated using periodic boundary conditions. Numerical homogenisation is performed for each design point.
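The point count follows directly from the CCF construction, as the Python sketch below shows (coded units in $[-1, 1]$):

```python
import itertools
import numpy as np

def ccf_design(n_factors):
    """Face-centred central composite design in coded units [-1, 1]:
    2^k factorial corners + 2k face-centred axial points + 1 centre."""
    corners = list(itertools.product((-1.0, 1.0), repeat=n_factors))
    axial = []
    for i in range(n_factors):
        for s in (-1.0, 1.0):
            p = [0.0] * n_factors
            p[i] = s                     # one factor at a face, rest centred
            axial.append(tuple(p))
    return np.array(corners + axial + [(0.0,) * n_factors])

print(len(ccf_design(5)))   # 32 + 10 + 1 = 43 design points
```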
All calculations were performed on a workstation notebook with an Intel Core i7-8750H 6-core 2.2 GHz processor and 32 GB of RAM. The computational time of the homogenisation for each design point was approximately 60 s, and the total computation time for all design points needed to obtain the response surface was approximately 45 min. The optimisation algorithms used within the methodology (population-based algorithms, here the evolutionary algorithm) require a very large number of objective function evaluations. The response surface, once constructed, provides almost instantaneous information about the value of the objective function.
After the design point calculations, the coefficients of the 2nd-order polynomial RS are calculated using the least squares method for each output parameter. The number of variable input parameters is equal to 5, resulting in 21 polynomial coefficients. The RS is described by:

$$\hat{y}(\mathbf{x}) = c_0 + \sum_{i=1}^{5} c_i\, x_i + \sum_{i=1}^{5} \sum_{j=i}^{5} c_{ij}\, x_i x_j$$

where the constant, linear and quadratic (including mixed) terms give $1 + 5 + 15 = 21$ coefficients.
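A minimal sketch of such a least squares fit in Python with NumPy (synthetic data standing in for the 43 DoE samples; not the Ansys implementation used in the paper):

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(X):
    """Full quadratic basis: constant, linear and second-order terms.
    For 5 inputs this gives 1 + 5 + 15 = 21 columns."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(n), 2)]
    return np.column_stack(cols)

# stand-ins for the 43 DoE points and one homogenised output (assumption)
rng = np.random.default_rng(0)
X = rng.random((43, 5))
y = X @ np.array([1.0, 2.0, 3.0, 4.0, 5.0]) + 0.1 * (X ** 2).sum(axis=1)

coeffs, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
print(coeffs.size)   # 21 polynomial coefficients
```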
The values of the polynomial coefficients computed for each output parameter are collected in Tab. 1.

Tab. 1. Polynomial coefficients of the 2nd-order response surface
Independent datasets were used to create and test the models. An additional 47 sets of random parameter combinations were used to calculate the PRESS metric values. The quality metrics indicate a very good match between the samples and the RS.
To solve the optimisation problem using a single-objective algorithm, a single objective function combining three optimisation goals was proposed: matching the obtained values of the stiffness coefficients with the expected ones (minimisation of the interval distance $f_1$), maximising the normalised widths of the design variables (maximisation of $f_2$) and keeping the obtained intervals within the prescribed bounds; the goals are aggregated using weighting coefficients.
The optimisation is performed by the DEA algorithm. The DEA parameters were set based on initial DEA runs and the authors' previous experience. They comprise: the number of subpopulations, the number of individuals in each subpopulation, the simple crossover and Gaussian mutation probabilities, the uniform mutation probability, the rank selection pressure, and the number of generations (stopping criterion).
The results in the form of the best solution from 30 independent runs of the DEA for each set of weighting coefficients, together with the average objective function value, are presented in Tab. 2.
Tab. 2. The best results of the single-objective optimisation for different weighting coefficients

| | Variant A | Variant B | Variant C |
|---|---|---|---|
| Weighting coefficients | | | |
| $\bar{E}_m$ | [6.05064, 6.46002] | [4.91259, 5.15732] | [5.29892, 5.54251] |
| $\bar{\nu}_m$ | [0.37837, 0.38391] | [0.34833, 0.35504] | [0.36931, 0.37214] |
| $\bar{E}_f$ | [443.933, 354.949] | [298.173, 331.484] | [303.429, 370.799] |
| $\bar{\nu}_f$ | [0.30482, 0.31628] | [0.24757, 0.27883] | [0.26779, 0.34706] |
| $\bar{V}_f$ | [0.20564, 0.21752] | [0.37770, 0.38936] | [0.29828, 0.31968] |
| | 1.0987E-02 | 1.6975E-03 | 7.7079E-04 |
| | 5.5390E-02 | 4.0799E-02 | 2.8280E-02 |
| | -3.8796E-02 | -1.9551E-02 | -6.4919E-03 |
| | -1.6084E-02 | -5.5497E-03 | 1.6779E-02 |
| | 9.9484E-03 | 1.1636E-02 | 5.7618E-02 |
| $\bar{C}_{11}$ | [15.210668, 16.797376] | [15.200626, 16.801578] | [15.200105, 16.800763] |
| | 2.5140E-02 | 6.8900E-03 | 2.7100E-03 |
| | 8.3075E-01 | 5.9500E-02 | 4.1120E-02 |
The values of the relative differences between the obtained and the prescribed bounds of the stiffness coefficient are also reported in Tab. 2.
Exemplary optimisation results, the first five best results and the worst one, for the selected set of weighting coefficients are presented in Tab. 3.
Tab. 3. Exemplary results of the single-objective optimisation

| Rank | $\bar{E}_m$ | $\bar{\nu}_m$ | $\bar{E}_f$ | $\bar{\nu}_f$ | $\bar{V}_f$ | | | | $\bar{C}_{11}$ | |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | [5.29892, 5.54251] | [0.36931, 0.37214] | [303.429, 370.799] | [0.26779, 0.34706] | [0.29828, 0.31968] | 7.7079E-04 | 2.8280E-02 | -6.4919E-03 | [15.200105, 16.800763] | 2.7100E-03 |
| 2 | [6.58579, 6.84576] | [0.31106, 0.31661] | [179.078, 257.930] | [0.23188, 0.34675] | [0.31161, 0.32033] | 1.0045E-02 | 4.3329E-02 | -3.2986E-03 | [15.195634, 16.790953] | 4.1916E-02 |
| 3 | [6.02565, 6.17328] | [0.32825, 0.33579] | [260.926, 321.946] | [0.26363, 0.25501] | [0.33863, 0.31755] | 4.2133E-03 | 2.4606E-02 | -2.9916E-03 | [15.202722, 16.803215] | 1.8553E-02 |
| 4 | [5.75597, 6.09119] | [0.37103, 0.36848] | [80.847, 99.605] | [0.22293, 0.29563] | [0.29018, …] | 4.9521E-03 | 2.5470E-02 | -2.6534E-03 | [15.204895, 16.800749] | 1.7638E-02 |
| 5 | [5.79374, 6.38056] | [0.37613, 0.37806] | [317.142, 421.272] | [0.33603, 0.34437] | [0.23377, 0.23645] | 2.3205E-03 | 1.3425E-02 | -1.6159E-03 | [15.199783, 16.802310] | 6.5406E-03 |
| 30 | [7.34717, 7.41631] | [0.34548, 0.34419] | [98.303, 450.000] | [0.28926, 0.29484] | [0.20000, 0.22936] | 5.8223E-02 | 1.1523E-02 | 4.0787E-02 | [15.194355, 16.857949] | 1.6345E-01 |
The values of $\bar{C}_{11}$ for the best-ranked solutions match the prescribed interval very closely, whereas the worst solution (rank 30) shows a noticeably larger mismatch, despite markedly different sets of design variables.
The optimisation problem for the Pareto approach is described as the simultaneous minimisation of the interval mismatch $f_1$ and maximisation of the minimum normalised width $f_2$.
Nonlinear constraints, in the form of an upper bound on the admissible mismatch, are introduced to limit the search area of the Pareto front.
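Conceptually, this restriction amounts to filtering the candidate set to feasible non-dominated points, as in the Python sketch below (`f1_max` is an assumed cap on the mismatch; the point values are illustrative):

```python
def dominates(o, s):
    """o dominates s when o is no worse in both objectives and strictly
    better in at least one (f1 minimised, f2 maximised)."""
    return o[0] <= s[0] and o[1] >= s[1] and (o[0] < s[0] or o[1] > s[1])

def constrained_front(solutions, f1_max):
    """Feasible (f1 <= f1_max) non-dominated subset of the candidate set."""
    feasible = [s for s in solutions if s[0] <= f1_max]
    return [s for s in feasible
            if not any(dominates(o, s) for o in feasible)]

# illustrative: cap the admissible mismatch and extract the front
pts = [(0.001, 0.28), (0.003, 0.30), (2.4, 0.16), (0.002, 0.25)]
print(constrained_front(pts, f1_max=0.01))  # [(0.001, 0.28), (0.003, 0.30)]
```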
Optimisation is performed by the MOOPTIM algorithm. The MOOPTIM parameters were set based on initial MOOPTIM runs and the authors' previous experience. They comprise: the number of individuals, the Gaussian mutation probability and range, the uniform mutation probability, the simple and arithmetic crossover probabilities, and the number of generations (stopping criterion).
The results of the optimisation in the form of Pareto fronts for 10 executions of MOOPTIM are shown in Fig. 3. To compare the results with the weighted sum approach, the DEA results are also presented. Selected results for the Pareto approach are summarised in Tab. 4.

Fig. 3. Pareto fronts and DEA results
Tab. 4. Results of the multi-objective optimisation

| Run | Pareto point | $\bar{E}_m$ | $\bar{\nu}_m$ | $\bar{E}_f$ | $\bar{\nu}_f$ | $\bar{V}_f$ | | | $\bar{C}_{11}$ | |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | -0.26058 | [6.21597, 6.45371] | [0.32362, 0.32760] | [149.183, 180.177] | [0.21958, 0.30889] | [0.32641, 0.34275] | 2.9865E-03 | 3.9623E-02 | [15.197072, 16.799410] | 1.0994E-02 |
| | | [5.65513, 6.52451] | [0.30761, 0.32362] | [377.437, 450.000] | [0.31219, 0.33490] | [0.30399, 0.36323] | 2.4308E+00 | 1.4490E-01 | [13.106417, 18.035232] | 2.6823E+00 |
| 2 | -0.28839 | [6.64138, 6.87106] | [0.36113, 0.36698] | [71.940, 162.432] | [0.26203, 0.31893] | [0.22596, 0.23219] | 2.9716E-03 | 3.1145E-02 | [15.201644, 16.802470] | 1.2856E-02 |
| | | [5.78269, 6.87106] | [0.32687, 0.34218] | [334.408, 450.000] | [0.29318, 0.35000] | [0.25825, 0.28928] | 2.2397E+00 | 1.5310E-01 | [13.032410, 17.363930] | 5.0114E+00 |
| 3 | -0.29693 | [6.02157, 6.24235] | [0.30924, 0.32460] | [313.892, 323.541] | [0.20476, 0.23224] | [0.34545, 0.35226] | 9.1474E-04 | 2.4120E-02 | [15.199641, 16.799159] | 3.7500E-03 |
| | | [5.94323, 6.93317] | [0.30700, 0.32460] | [357.547, 450.000] | [0.28208, 0.35000] | [0.27794, 0.32512] | 2.3664E+00 | 1.6499E-01 | [13.038762, …] | 3.7418E+00 |
| 4 | -0.28615 | [5.67805, 5.81318] | [0.30305, 0.32223] | [318.653, 450.000] | [0.21592, 0.30478] | [0.38134, 0.38597] | 4.6047E-03 | 2.2522E-02 | [15.201917, 16.795817] | 7.0812E-03 |
| | | [5.26435, 6.37166] | [0.34681, 0.36387] | [345.252, 450.000] | [0.31243, 0.35000] | [0.28161, 0.31635] | 2.4268E+00 | 1.7065E-01 | [13.389715, 18.416232] | 6.0642E-01 |
| 5 | -0.27313 | [5.49730, 5.83237] | [0.33979, 0.34532] | [377.152, 450.000] | [0.20000, 0.24683] | [0.33794, 0.34838] | 6.2008E-04 | 5.2240E-02 | [15.200134, 16.799395] | 1.4719E-03 |
| | | [5.20406, 6.14267] | [0.31508, 0.33451] | [359.835, 450.000] | [0.30955, 0.33613] | [0.33212, 0.37497] | 2.4372E+00 | 1.5643E-01 | [13.098494, 18.034225] | 2.7103E+00 |
| 6 | -0.30318 | [6.74116, 7.13123] | [0.34609, 0.35073] | [326.580, 388.616] | [0.22717, 0.31078] | [0.22181, 0.23521] | 2.8838E-03 | 4.6350E-02 | [15.201307, 16.802570] | 1.2116E-02 |
| | | [6.84060, 8.00000] | [0.31912, 0.34471] | [313.585, 450.000] | [0.28905, 0.35000] | [0.20547, 0.24324] | 2.4610E+00 | 1.8884E-01 | [13.588805, 18.660209] | 7.7817E-01 |
| 7 | -0.29861 | [6.07599, 6.13976] | [0.35545, 0.36251] | [66.934, 103.602] | [0.23487, 0.27334] | [0.29319, 0.31667] | 1.0141E-02 | 1.0628E-02 | [15.204597, 16.790961] | 1.3881E-02 |
| | | [6.16085, 7.23273] | [0.30000, 0.31959] | [311.748, 381.206] | [0.24644, 0.27261] | [0.28919, 0.33169] | 2.3757E+00 | 1.7365E-01 | [13.514604, 18.474349] | 3.4522E-02 |
| 8 | -0.28615 | [7.06127, 7.38220] | [0.30000, 0.30279] | [357.869, 392.536] | [0.28429, 0.32087] | [0.27701, 0.29914] | 3.8161E-03 | -2.796E-02 | [15.201885, 16.803318] | 1.6259E-02 |
| | | [5.11271, 6.11818] | [0.37069, 0.38602] | [326.957, 450.000] | [0.32087, 0.35000] | [0.24097, 0.27701] | 2.1827E+00 | 1.5331E-01 | [13.333662, 17.931799] | 2.2954E+00 |
| 9 | -0.29202 | [6.98408, 7.14029] | [0.32483, 0.33341] | [89.1451, 110.587] | [0.27858, 0.34763] | [0.27141, 0.28943] | 2.6195E-03 | 2.6035E-02 | [15.202586, 16.799585] | 6.7844E-03 |
| | | [5.12698, 6.16768] | [0.36979, 0.38819] | [293.851, …] | [0.30270, 0.33132] | [0.25029, 0.28474] | 2.3811E+00 | 1.7227E-01 | [13.529283, 18.496622] | 8.0953E-02 |
| 10 | -0.29388 | [6.41585, 6.79624] | [0.37605, 0.37959] | [50.000, 71.4883] | [0.20603, 0.24717] | [0.21665, 0.22781] | 2.9710E-03 | 3.5399E-02 | [15.199480, 16.802953] | 7.6031E-03 |
| | | [6.41585, 7.39224] | [0.34290, 0.36104] | [346.451, 450.000] | [0.30325, 0.34305] | [0.22490, 0.25735] | 2.1655E+00 | 1.6221E-01 | [14.412384, 18.817226] | 3.8425E+00 |
For each run, two Pareto points are reported: the solution with the minimum value of the mismatch objective and the solution with the widest intervals of the design variables.
The results show that the objective functions are contradictory; hence, an increase in the uncertainty of the design variables causes a drift beyond the assumed value of the stiffness coefficient. However, the Pareto fronts show that there exists an individual with a very small mismatch of the stiffness coefficient and, at the same time, non-zero widths of the design-variable intervals.
The average value of the objectives obtained across the independent runs confirms the repeatability of the results. The multi-objective algorithm also explores the possible stiffness coefficient matching for large uncertainties of the identified parameters. The highest values of the width objective are obtained at the cost of a substantially larger mismatch of the stiffness coefficient interval.
As in the case of the single-objective algorithm, the multi-objective algorithm finds similar values of the objective functions for considerably different sets of design variables, which confirms the multimodal character of the problem.
The presented approach belongs to the a-posteriori multi-criteria optimisation methods, in which a second phase is necessary to select a single solution. Since all solutions on the Pareto front are equivalent, additional criteria can be used, such as cost, manufacturability or material availability. Moreover, additional non-linear constraints may limit the search area and should be considered in the decision-making process.
The paper presents the Granular Computational Inverse Design procedure as a novel approach to the identification of interval microstructure parameters of inhomogeneous materials for desired incertitude macroscopic properties. As a result of the identification, interval parameters of the constituent geometry and the material properties on the microscale are obtained. Directed interval arithmetic is applied instead of the classical one to reduce the undesirable effect of interval widening caused by arithmetic operations. To speed up the calculations, response surfaces have been applied to build the metamodel. The Central Composite Design method has been used as the design of experiment because of its high precision. The quality of the obtained RS has been assessed using several quality metrics.
The GCID procedure has been verified on a fibre-reinforced composite with a uniform distribution of unidirectional fibres. Two approaches to the multi-objective optimisation are presented. A-priori methods (here the weighted sum approach) combine objectives using predefined weights and transform a multi-criteria problem into a single-criteria problem, which is solved here using a DEA algorithm. This method targets a specific region of the solution space, generating fewer candidate solutions. The second approach uses the MOOPTIM algorithm to generate Pareto optimal solutions without taking preferences into account. The decision maker can analyse different post-optimisation scenarios.
Single-criteria optimisation usually produces specific solutions that may not be optimal in different scenarios, whereas multi-criteria optimisation reveals a range of viable solutions. Multi-criteria algorithms like MOOPTIM explore a wider design space, resulting in a multimodal optimisation problem where different sets of design variables can achieve similar values of the objective function. The choice of approach depends on the researcher.
The results obtained show the high efficiency of the GCID procedure in both cases.
The presented methodology can be applied to parameter identification problems in material design and manufacturing processes with epistemic uncertainties. It is planned to extend the proposed approach to other types of information granularity (e.g. fuzzy numbers), novel hybrid materials, including e.g. auxetics, and different material models (e.g. nonlinear materials). The extension may require the application of more sophisticated and accurate metamodels.