Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (2024)

1. Introduction

Composite materials[1,2,3,4,5] represent a growing area of interest in engineering and materials science. Their ability to combine different properties from constituent materials makes them particularly attractive for a diverse range of applications, spanning from aerospace to automotive industries and construction. However, theoptimal design of these materials remains a major challenge, especially when it comes to finding alternative morphologies that exhibit similar properties as existingmaterials.

In this regard, theuse of bioinspired methods[6] such as machine learning[7,8] and evolutionary algorithms—notably, genetic algorithms—is emerging as a promising approach. These methods draw inspiration from biological processes and natural evolution to solve complex problems and find optimal solutions. Thecoupling of machine learning[9] and evolutionary algorithms[10,11] holds significant promise for addressing the challenges associated with the design of composite materials. Machine learning techniques, withtheir ability to analyze vast amounts of data and identify intricate patterns, offer valuable insights into the relationships between material properties, fabrication processes, andperformance characteristics. Byleveraging machine learning, researchers can uncover hidden correlations and optimize material design parameters to achieve desired outcomes. Onthe other hand, evolutionary algorithms provide a powerful optimization framework for exploring the vast design space of composite materials. Bymimicking the process of natural selection, genetic algorithms can efficiently search for optimal material configurations that meet specified criteria, such as mechanical strength, thermal conductivity, orweight reduction. Through iterative generations and selection mechanisms, evolutionary algorithms iteratively refine and improve candidate designs and converge towards high-performingsolutions.

When combined, machine learning and evolutionary algorithms create a synergistic framework for the discovery and optimization of composite material morphologies. Machine learning algorithms can be employed to analyze experimental data, identify promising design directions, andgenerate informative features for evolutionary optimization. Inturn, evolutionary algorithms can leverage machine learning predictions to guide the search towards regions of the design space that are more likely to yield desirableoutcomes.

Maridassetal.[12] employed a genetic algorithm and artificial neural networks to analyze blends comprising polypropylene and waste ground rubber tire powder. Their study focused on understanding how varying levels of ethylene–propylene–diene monomer EPDM and polypropylene grafted maleic anhydride PP-g-MA influence mechanical characteristics. Byoptimizing formulations to maximize tensile strength and contrasting them with blends optimized for maximum elongation at the breaking point, they observed that increased concentrations of PP-g-MA and EPDM enhanced properties: afinding corroborated through SEM investigations. Additionally, they established a quantitative correlation between polymer content and mechanical attributes. Azharetal.[13] used the same coupling model to optimize the turning of glass-fiber-reinforced polymer GFRP composites. They aimed to find the best cutting parameters for efficient machining. Bystudying spindle speed, feed rate, anddepth of cut effects on the material removal rate MRR, tool wear rate TWR, andsurface roughness, they identified significant parameters and interactions. Empirical models linked output responses and cutting parameters, while ANN with GA achieved multi-response optimization. Santanoetal.[14] integrated ANNs and GAs to study how stir-cast Al-Zn-Mg-Cu matrix composites behave under two-body abrasion. They examined the wear rate, coefficient of friction, androughness of abraded surface RAS across various input parameters. Multi-objective optimization via Pareto solutions was utilized and highlighted the importance of particle quantity and abrasive size. Experimentation validates these findings, while analysis of micromechanisms and surface attributes clarifies their roles. Theoptimal condition for minimizing the wear rate and coefficient of friction while maintaining moderate RAS was determined to be 15 ± 2 wt% particle quantity. The same method was used by Zhen-jieetal.[15] to forecast the electromagnetic characteristics of coatings. Traditional methods face challenges in this regard due to the complexity of the parameters involved. Their approach effectively predicted EM properties, even when dealing with mixed absorbents, surpassing conventional ANN techniques. Furthermore, they employed GA to optimize coatings, resulting in enhanced electromagnetic absorption capabilities. Aveenetal.[16] used GA and ANN to improve drilling in composite materials. They investigated factors affecting hole quality in glass-fiber-reinforced composites by varying filler volumes (0%, 3%, and6%) and drilling parameters. Data from drilling experiments were analyzed using Python-based neural networks, withGA optimizing conditions to reduce delamination. These method are not only valid for composite materials, but they can also be used for bio-composites. Madanietal.[17] employed an ANN-GA model to investigate surface roughness during milling of alfa/epoxy biocomposites, spanning both composite and bio-composite materials. Through 100 trials, they assessed the surface quality and the influence of the cutting parameters and chemical treatment. Their hybrid approach, merging ANN and GA, refined a predictive model for surface roughness and demonstrated enhanced accuracy compared to conventional methods. This findings underscored the importance of feed per revolution and chemical treatment in affecting surface roughness. Also, deeper in the nanocomposite field, Chi-Huaetal.[18] fused GA with deep learning DL to enhance nanocomposite toughness. Their AutoComp Designer algorithm merges machine learning and AI-improved genetic algorithms to enable the creation of new material designs with decreased computational expense. Through neural network training on diverse material combinations, they predicted the properties of graphene nanocomposites without conventional simulations. This technique not only forecasts properties but also enhances fracture toughness by adjusting the material distribution. Validation via molecular dynamics simulations verified the improved performance. In addition to ANNs, Adaboost has also been coupled to genetic algorithms: for example, by Weidongetal.[19] in order to optimize nanocomposite adsorption capacity. This approach aimed to streamline experiments and reduce costs by developing predictive models. They correlated adsorption parameters with capacity and used simple models due to having a small dataset. AdaBoost and GA optimization improved modeling efficiency. Data from the literature were utilized, resulting in highly efficient models with low errorrates.

Our research aims to utilize ML-GA coupling to address the issue of equivalent morphology, which is aconcept explored by Elmoumenetal.[20]. They compared two morphologies: one featuring overlapping identical spherical inclusions and the other with identical hard inclusions. Using numerical simulations and statistical analysis, they evaluate the representativeness of these structures. Their findings revealed a disparity in the integral range: while microstructures with hard spheres have an integral range equivalent to one inclusion’s volume, those with overlapping spheres exhibit an integral range eight times larger. This observation prompted the introduction of the equivalent morphology concept EMC. The EMC was also utilized by Khdiretal.[21,22] to develop a computational hom*ogenization approach for porousmedia.

In this paper, we develop an innovative methodology to identify equivalent morphologies within a microstructure containing a circular inclusion using a coupling model based on the predictions of machine learning algorithms and the optimization of genetic algorithms. Our goal is to find alternative microstructures with different shapes, volume fractions, andphase contrasts that exhibit the same linear elastic and thermal behavior as the original microstructure. We employ advanced computational techniques to thoroughly analyze the microstructure’s linear elastic and thermal properties. Atthe core of our methodology is a model that integrates various machine learning algorithms, including artificial neural networks[23] and a highly effective gradient boosting framework[24] that is known for its performance with structured data. These algorithms significantly improve the predictive accuracy and efficiency of our analysis. Tooptimize the model’s parameters and accurately identify equivalent morphologies, we use a genetic algorithm. This algorithm, inspired by natural selection, iteratively refines a population of solutions to converge on an optimal set.

Section 1 focuses on acquiring diverse microstructures with varying shapes, volume fractions V f , andcontrast values C to compile a comprehensive database. Finite element simulations are used to determine mechanical properties. InSection 2, thedataset is split into two subsets. Thefirst subset, withfixed V f and C values, feeds into an XGBoost model for each inclusion shape. Thesecond subset, comprising circular microstructures with varying V f and C values, trains an artificial neural network model with integrated dropout layers. Inthe final section, by leveraging input parameters, thealgorithms generate morphologies with equivalent mechanical properties but differing geometries. Additionally, circular microstructures with varied V f and C values aresynthesized.

2. Data CollectionProcess

The profound complexity of the human brain offers compelling insights for enhancing machine learning models, including ANNs and XGBoost, owing to its unparalleled ability to learn from extensive datasets via observation, analysis, anditerative refinement. Insimulating this cognitive paradigm, theefficacy of those models is intricately tied to the scale and diversity of the training data, wherein larger and more diverse datasets contribute to greater alignment between predicted and actualoutcomes.

Our model’s input configuration integrates the spatial coordinates (x, y), the orientation angle θ relevant to elliptical with three different values of aspect ratios (1/3, 1/2, 2/3), square andtriangular inclusions, the volume fraction V f , andthe phase contrast C. Thecontrast quantifies the level of heterogeneity within a heterogeneous material. When the value of C is significantly high (C >> 1) or extremely low (C << 1), it indicates a material with pronounced heterogeneity. Conversely, when C = 1, thematerial achieves complete hom*ogeneity. Interms of elastic properties, thecontrast is defined as the ratio between the Young’s modulus of the inclusion and that of the matrix (C = E i E m ). Incontrast, forthermal properties, this ratio is determined by the thermal conductivity of both phases (C = λ i λ m ). These meticulously selected parameters are tailored for each 2D microstructure and serve as indispensable descriptors for a thorough characterization of the microstructural geometric attributes. Conversely, theoutput framework of our model encompasses distinct material properties, such as the bulk modulus, shear modulus, andthermal conductivity. These parameters serve as fundamental metrics for assessing the mechanical and thermal properties inherent to compositematerials.

2.1. MicrostructureGeneration

2.1.1. Multi-Shape 2D MicrostructureGeneration

To address the primary optimization challenge, we divide the first dataset into four discrete sub-databases, each delineated by a distinct geometric entity: namely, circle, ellipse, square, andtriangle, as shown in Figure 1. Withineach sub-database, datasets are structured to encompass spatial coordinates (x and y) alongside inclusion orientation θ data. Moreover, consistent values are set for the volume fraction and phase contrast parameters to ensure uniformity throughout the analyses, withvarious scenarios tested involving volume fractions ranging from 10% to 30% and contrasts from 10 to 200. Through this iterative process, we aim to elucidate their nuanced effects on system performance. This rigorous approach not only seeks to optimize system efficiency but also aims to validate its reliability across the full range of input values. Byconducting this exhaustive analysis, we anticipate uncovering optimal configurations across various scenarios, thereby enhancing our ability to effectively address the optimization challenge athand.

2.1.2. Circular 2D MicrostructureGeneration

The second database constitutes a pivotal stage within our analytical and optimization framework. Diverging from its predecessor, it incorporates more nuanced variable parameters tailored to exhaustively probe system performance. These parameters encompass four fundamental input variables: the abscissa (x), theordinate (y), thevolume fraction spanning from 10% to 67%, and,concurrently, thecontrast between the two phases ranging from 10 to 200. It is imperative to note that the inclusion shapes consistently maintain circularity. Moreover, toascertain the generation of random x and y values, we confine them within a specified interval predicated on the volume fraction, as shown in Figure 2. This methodological approach ensures the preservation of generated points within the confines of the predefinedmatrix.

2.2. Finite ElementCalculations

The subsequent phase in our experimental process involves the acquisition of output data, which we term “Labels” within our model. These labels serve as the repository for the computed parameters and encompass critical metrics such as bulk, shear, andthermal conductivity. It is worth noting that this rigorous data acquisition process remains consistent across both databases, ensuring uniformity and reliability in our analyses. Our pursuit of understanding the effective elastic and thermal attributes of composite materials leads us to employ numerical hom*ogenization, which is acornerstone technique in numerical analysis. This method allows us to unravel the complex interplay of material properties within the composite structure. Leveraging the finite element (FE) method, we embark on a detailed examination of composite material behavior. This entails subjecting a square matrix reinforced with a singular circular, elliptical, square, or triangular inclusion tometiculous simulation andscrutiny.

While the matrix component of the composite exhibits consistent linear elastic and thermal behavior, characterized by predetermined values for Young’s modulus, Poisson’s ratio, andthermal conductivity, thebehavior of the reinforcing inclusions varies markedly. This variability is contingent upon the chosen contrast value, which delineates the ratio of material properties between the inclusion and the matrix. Inour numerical framework, thecontrast value ranges from 10 to 200, representing a wide spectrum of material property disparities. Notably, we define the contrast value, denoted as C, asthe ratio of the Young’s modulus and thermal conductivity between the inclusion and the matrix ( C = E i E m = λ i λ m ). Here, E i and λ i represent the Young’s modulus and thermal conductivity of the reinforcing inclusion, respectively, while E m and λ m signify those of the matrixphase.

The evaluation of the bulk modulus k and shear modulus μ involves formulas that incorporate the Young’s modulus E and Poisson’s ratio ν asfollows:

μ = E 2 ( 1 + v ) and k = E 3 ( 1 2 v )

To ensure the accurate performance of finite element FE simulations, it is imperative to construct a mesh that faithfully captures the geometric intricacies of the analyzed system. Inthis particular investigation, we opted for a multi-phase element mesh configuration comprised of two-dimensional quadrilateral elements. This meshing strategy, depicted in Figure 3, was chosen to facilitate the simulation process and to enhance the reliability and fidelity of the results obtained from the FEanalysis.

BoundaryConditions

In finite element analyses, boundary conditions[25,26] are of paramount importance. These conditions define the behaviors that a system must adhere to at its spatial or temporal boundaries. They enable the representation of external constraints and boundary traits, which, in turn, facilitates the attainment of accurate and reliable solutions for the system’s equations. Byincorporating these conditions, we attain a holistic comprehension and predictive capability regarding the system’s overall behavior that encompasses external influences and interactions with the environment. Tocompute effective properties, we choose the same boundary conditions used in our previous papers[27,28].

Linear elasticity:

In this work, periodic boundary conditions to be prescribed on individual volume element V are considered:

The displacement field over the entire volume V takes the form:

u ̲ = E · x ̲ + v ̲ x ̲ V ,

where the fluctuation v ̲ is periodic. It takes the same values at two hom*ologous points on opposite faces of V. Thetraction vector σ · n ̲ takes opposite values at two hom*ologous points on opposite faces of V.

In our case, the periodic boundary conditions are considered as special cases forwhich specific values of E and σ are chosen. Tocompute effective properties in the case of periodic conditions, we choose an elementary volume with imposed macroscopic strain tensors as:

E k = 1 0 0 0 1 0 0 0 0 , E μ = 0 1 2 0 1 2 0 0 0 0 0

An apparent bulk modulus k app and an apparent shear modulus μ app can be defined as:

k app = 1 2 trace σ and μ app = σ 12

where: σ is the local stress tensor, σ 12 is the local shear tensor, and < > represents the average over the hole’s microstructure.

Thermal conductivity:

For the thermal problem, thetemperature, its gradient, and the heat flux vector are denoted by T , T , and q, respectively. Theheat flux vector and the temperature gradient are related by Fourier’s law, which reads:

q ̲ = λ T ̲

in the isotropic case. Thescalar λ is the thermal conductivity coefficient of the considered phase. A volume V of a heterogeneous material is considered again. For the linear elastic case, periodic boundary conditions are used in the study of the effective thermal conductivity:

The temperature field takes the form:

T = G ̲ · x ̲ + t x ̲ V

where t is periodic. Apparent conductivities coincide with the wanted effective properties for sufficiently large volumes V. To compute the effective thermal conductivity, thefollowing test temperature gradient G and flux Q are prescribed on the elementary volume as:

G ̲ = 1 1 0 and Q ̲ = 1 1 0

They are used, respectively, to define the following apparent conductivities:

λ app = 1 2 trace q ̲ and λ app = 1 2 trace T ̲

The periodic method was chosen for implementing the linear elasticity and thermal conductivity calculations due to its efficiency in terms of computation time while maintaining results comparable to alternative methods. Not only is this method the fastest, butit also offers a precise means of calculating the elastic properties of complex materials. Moreover, its relatively straightforward implementation makes it applicable to a diverse array of materials andgeometries.

As previously mentioned, theheterogeneous material under study consists of a matrix embedded with a singular inclusion. Thevolume fraction’s value is adjusted during the microstructure generation phase, while the contrast value necessitates modification to the calculation files prior to initiating simulations. This involves altering the Young’s modulus and thermal conductivity values to align with specific researchrequirements.

For conducting finite element calculations with the designated software, several files are imperative: a matrix file detailing the first phase’s properties, aninclusion file for the second phase’s properties, amesh file, andinput files. Specifically, our study utilizes three input files tailored to assess the bulk properties, shear properties, andthermal conductivity. These files collectively enable the accurate modeling of the material’s response under varied testingscenarios.

3. Machine LearningModels

3.1. Data Pre-Processing andSplitting

Prior to constructing neural networks, it is essential to undergo data preprocessing, which is acritical stage involving the normalization of input features such as the abscissa, ordinate, andinclusion orientation. Normalization plays a pivotal role in standardizing these feature values, thereby ensuring efficient and effective model convergence. This process is facilitated through the utilization of the

StandardScaler()

function, which is indispensable for adjusting the data by subtracting the mean μ and dividing by the standard deviation σ. Mathematically, this normalization technique is represented by the equation:

X normalized = X μ σ

Here, X represents the input data, μ denotes the mean, and σ signifies the standard deviation. By applying this normalization technique, all input features contribute equally to the neural network’s learning process, thereby promoting stability and enhancing performance throughout the training phase.

Furthermore, it is customary to divide the dataset into three distinct subsets: the training set, the validation set, and the test set. Each subset serves a unique purpose in the model development process. The training set is utilized to train the model’s parameters, enabling it to discern patterns and relationships within the data. The validation set assumes a crucial role in fine-tuning hyperparameters and assessing the model’s performance during training. Finally, the test set remains reserved for evaluating the model’s ability to generalize to unseen data, providing a reliable measure of its performance post-training and validation.

In our approach, we allocate 80% of the dataset for training, which includes the validation subset, while reserving the remaining 20% for testing. This partitioning strategy ensures an ample amount of data for model training and validation while also maintaining a separate portion for assessing the model’s performance on unseen data. Such meticulous partitioning is vital for ensuring the robustness and reliability of the trained model.

3.2. Prediction of the Thermomechanical Behavior of 2D Circular Microstructure with Various Volume Fraction and ContrastValues

3.2.1. ModelArchitecture

This intricately designed neural network architecture, which is constructed using Keras’s Sequential API, meticulously orchestrates a series of interconnected layers to proficiently process input data. At its core lies a dense layer housing 128 neurons, which are strategically selected for their ability to discern intricate patterns within the input dataset. By harnessing the ReLU activation function, this layer introduces crucial non-linearity, enhancing the model’s capacity to capture complex relationships.

Subsequently, a dropout layer is seamlessly incorporated, with the dropout rate set to 0.2. This regularization technique combats overfitting by selectively deactivating 20% of the neurons during training, fostering robust generalization. The deliberate inclusion of dropout layers at strategic junctures in the network underscores a meticulous effort to balance model intricacy with regularization. Continuing the network’s progression, another dense layer emerges, featuring 64 neurons and embracing the ReLU activation function. Like its precursor, this layer perpetuates the propagation of non-linear transformations throughout the network, enabling the extraction of intricate features from the input data. The subsequent dropout layer, matching its predecessor with a dropout rate of 0.2, fortifies the model against overfitting, thereby bolstering its capacity for resilient generalization. This methodical integration of dropout layers underscores a proactive approach to regularization, ensuring the model’s adeptness at capturing essential patterns while mitigating the risk of memorizing noise.

As the network unfolds, a third dense layer materializes, housing 64 neurons and employing the ReLU activation function. This layer serves as an additional conduit for non-linear transformations, facilitating the extraction of increasingly complex features from the input data.

Finally, the output layer takes form as a dense layer housing a single neuron, representing the model’s ultimate objective of direct linear regression. In this configuration, no activation function is specified, signifying a straightforward mapping of input features to output predictions. Figure 4 below explains the detailed architecture of the ANN model.

3.2.2. ModelCompilation

The compilation of a model stands as the final stage in its creation and is crucial for predicting optimal optimization decisions. This process involves setting three key parameters: the optimizer, loss function, and metrics.

The optimizer dictates the learning rate, influencing the speed at which the model computes optimal weights. In our case, “Adam” serves as our optimizer and dynamically adjusts the learning rate during training. Adam integrates two gradient descent methodologies:

Momentum: Momentum is utilized to expedite the gradient descent process by incorporating an exponentially weighted average of gradients. This accelerates convergence towards the minimal value. The update rule involves the current weight being adjusted by subtracting the product of the learning rate and the momentum.

w t + 1 = w t α m t

where:

-

w t represents the weights at time t;

-

m t denotes the aggregate of gradients at time t;

-

α signifies the learning rate;

-

δ L δ w t is the derivative of the loss function with respect to the weights at time t;

-

β is the moving average parameter (typically 0.9).

Root mean square propagation RMSP: RMSP is an adaptive learning algorithm that improves upon AdaGrad. Unlike AdaGrad, which accumulates squared gradients, RMSP computes an exponential moving average.

w t + 1 = w t α t v t + ε 1 / 2 δ L δ w t

where:

-

w t represents the weights at time t;

-

v t is the sum of the squares of past gradients;

-

α t signifies the learning rate at time t;

-

δ L δ w t is the derivative of the loss function with respect to the weights at time t;

-

β is the moving average parameter (typically 0.9);

-

ε is a small positive constant (usually 10 8 ).

Loss function: The loss function serves to measure the discrepancy between the predicted and actual values during the learning phase. In constructing our model, we opt for the mean absolute percentage error MAPE, which is a widely adopted metric for regression tasks. A lower MAPE value indicates better model performance, as it signifies a smaller deviation between predicted and true values. Its formula is expressed as:

MAPE = 1 n i = 1 n T i P i T i

where:

-

n is the number of fitted points;

-

T i is the actual value;

-

P i is the predicted value.

Metrics: The metrics serve as the criteria for assessing the model’s performance, akin to the loss function, albeit they are not utilized during training. Within our model, we select the mean absolute error (MAE) as our metric of choice. MAE quantifies the accuracy of the model by evaluating the average absolute difference between predicted and actual values.

MAE = 1 n j = 1 n T j P j

where:

-

n is the number of fitted points;

-

T j is the actual value;

-

P j is the predicted value.

3.2.3. Training andEvaluation

Figure 5 visually presents the loss function, which is a pivotal measure utilized to assess our model’s performance. This function serves to quantify the disparity between the model’s predicted values and the actual data values within the dataset. By tracking the behavior of the loss function throughout the training process, we gain valuable insights into the model’s learning dynamics and its ability to adapt to the dataset.

The blue curve delineates the trajectory of the training data’s error rate as the model progresses through successive epochs of training. Initially, this error rate is notably high, reflecting substantial disparities between the model’s initial predictions and the actual target values. This disparity primarily stems from the random initialization of weights at the onset of training. However, as the model iteratively learns from the training data and adjusts its parameters, the error rate gradually diminishes, signifying an enhancement in the model’s predictive accuracy. This gradual reduction continues until the error rate converges towards its minimum value, indicating the model’s proficiency in capturing the underlying data patterns. Conversely, the red curve depicts the prediction error observed on the validation dataset, which comprises data that the model did not encounter during training. Mirroring the trend observed in the training curve, the validation error also exhibits a decreasing trajectory over epochs, suggesting the model’s capacity to generalize effectively to unseen data instances. The convergence of the validation error towards the minimum value observed in the training curve reinforces the model’s capability to make precise predictions across various datasets.

Upon reaching convergence, the error function stabilizes within a narrow range: typically equal to 75 epochs. This stability signifies that the model has attained a consistent level of accuracy in predicting the target variables, underscoring its reliability and efficacy.

The final phase is prediction, which represents the culmination of our model generation process and marks the critical moment when the anticipated outcomes of our model are realized. It involves applying the prediction model to the test dataset, followed by an intricate comparative analysis between the resulting predictions and the actual values of the shear, bulk, and thermal moduli. This meticulous examination serves as the foundation for computing the coefficient of determination R 2 , which is a fundamental metric for evaluating the model’s predictive power.

R 2 = 1 i = 1 n y i y ^ i 2 i = 1 n y i y ¯ 2

where:

y i is the actual value;

y ^ i is the predicted value.

Moreover, the R 2 metric, also known as the Pearson linear coefficient of determination, serves as a cornerstone in evaluating the efficacy of our linear regression predictions. Its value ranges from 0 to 1, with higher values indicating a stronger correlation between the predicted and actual values, reflecting the model’s predictive accuracy.

Upon subjecting our model to rigorous testing using the initial test images from the principle database, we achieved an impressive R 2 score of 0.98. This noteworthy result underscores the robustness and appropriateness of the chosen model parameters, affirming our confidence in its predictive capabilities. To enhance our understanding and interpretation of the prediction outcomes, we employ regression curves as visual aids. These curves offer a clear representation of the relationship between the predicted and actual values, facilitating a more intuitive comprehension of the underlying predictive patterns. By plotting the regression lines against the dataset’s actual values, we gain valuable insights into the model’s performance and its ability to accurately predict the target variables.

The accompanying Figure 6 and Figure 7 provide a comprehensive visualization of the prediction outcomes for various modulus values, including bulk, shear, and thermal conductivity. Notably, the close alignment of the majority of data points with the linear regression line suggests strong correspondence between the predicted and actual values. This observation further bolsters our confidence in the regression model’s ability to accurately capture the complex relationships inherent in the dataset, thereby validating its utility in other cases.

3.3. Prediction of the Thermomechanical Behavior of a Multishape 2D Microstructure with Fixed Volume Fraction and ContrastValues

3.3.1. ModelDefinition

The gradient boosting regressor (GBR) is an iterative ensemble learning technique used primarily for regression tasks. It operates by sequentially training a series of weak learners—typically, decision trees—to enhance the overall predictive performance. At each iteration, a new weak learner is trained to minimize the negative gradient of the loss function relative to the previous model’s predictions. This is aimed at reducing the overall loss. Mathematically, the prediction at iteration m for a given input x i is calculated as:

y ^ i ( m ) = y ^ i ( m 1 ) + ν · tree m ( x i )

where y ^ i ( m ) represents the predicted value at iteration m, y ^ i ( m 1 ) is the prediction from the previous iteration, ν denotes the learning rate, and tree m ( x i ) signifies the prediction of the m-th decision tree for input x i .

The GBR algorithm aims to minimize the loss function L ( y , y ^ ) with respect to the predicted values y ^ , where y signifies the true target values. The final prediction is derived by summing the predictions of all weak learners:

y ^ i = m = 1 M y ^ i ( m )

where M represents the total number of iterations.

The training process of the GBR algorithm involves initializing the model with a constant value and refining it iteratively. At each iteration, the algorithm computes the residuals, trains a regression tree to predict these residuals, computes optimal weights for each terminal node of the tree, and updates the model accordingly. This iterative process continues for M iterations, resulting in an ensemble model that minimizes the mean squared error loss function L ( y , y ^ ) . This enables the model to effectively capture complex relationships within the data and make accurate predictions.

Algorithm:

  • Initialize model with a constant value:

    F 0 ( x ) = argmin γ i = 1 n L y i , γ

  • For m = 1 toM(number of iterations):

    (a)

    Compute residuals:

    r i m = L y i , F x i F x i F ( x ) = F m 1 ( x ) for i = 1 , , n

    (b)

    Train regression tree: Train a regression tree with features x against residuals r and create terminal node regions R j m for j = 1 , , J m

    (c)

    Compute optimal weights:

    γ j m = argmin γ x i R j m L y i , F m 1 x i + γ for j = 1 , , J m

    (d)

    Update the model:

    F m ( x ) = F m 1 ( x ) + ν j = 1 J m γ j m 1 x R j m

where:

  • F m ( x ) represents the prediction at iteration m for input x;

  • ν is the learning rate;

  • L ( y , F ( x ) ) is the loss function measuring the discrepancy between the true target values y and the predicted values F ( x ) ;

  • r i m are the residuals at iteration m;

  • R j m are the terminal node regions of the regression tree at iteration m;

  • γ j m are the optimal weights for each terminal node region at iteration m;

  • J m is the number of terminal node regions in the regression tree at iteration m.

In our study, we meticulously optimized the GBR model to enhance its performance in predicting continuous values within our dataset. Utilizing the grid search technique, we systematically explored a broad spectrum of hyperparameter configurations. This involved a comprehensive examination of key hyperparameters, including the number of estimators (n_estimators), where we scrutinized values of 200, 300, 400, and 500, representing the number of boosting stages (trees) within the ensemble. Similarly, we carefully assessed various learning rates (learning_rate), including 0.05, 0.1, 0.15, and 0.2, to regulate the contribution of each tree to the overall ensemble. Moreover, depths spanning from 4 to 7 were considered for the maximum depth of trees (max_depth) to manage the complexity of individual trees within the ensemble. We also evaluated split thresholds of 3, 5, 7, and 10 samples for the minimum number of samples for splitting nodes (min_samples_split), significantly impacting the process of constructing trees. Additionally, our exploration encompassed the minimum number of samples required to form a leaf node (min_samples_leaf), with values ranging from 2 to 5. Furthermore, we meticulously examined subsampling rates (subsample) of 0.7, 0.8, 0.9, and 1.0 to introduce randomness and alleviate overfitting. Employing a rigorous five-fold cross-validation approach, we thoroughly assessed the model’s performance across an array of hyperparameter combinations, ensuring robustness and reliability in our evaluations. Following this exhaustive search, we pinpointed the optimal hyperparameter values that maximized performance metrics such as the R-squared score and minimized the mean squared error. Armed with these optimal configurations, we proceeded to train a new GBR model on the complete training dataset. Subsequently, we meticulously evaluated the performance of this tuned model on an independent test dataset, thereby gauging its predictive accuracy and generalization capability. Through this meticulous optimization process, we successfully crafted a highly effective and dependable predictive model tailored to the unique characteristics of our dataset.

3.3.2. ModelCompilation

The process of compiling and training the model closely resembles that of the previously employed ANN, with one significant departure: the substitution of the the mean squared error MSE loss function for MAPE. This preference for MSE in XGBoost models addressing regression issues stems from its unbiased treatment of positive and negative errors as well as its differentiability, which facilitates the optimization of the gradient boosting algorithm. The MSE is computed using the formula:

MSE = 1 n i = 1 n ( Y i Y ^ i ) 2

where Y i represents the actual values, Y ^ i denotes the predicted values, and n refers to the total sample count.

3.3.3. Training andEvaluation

Figure 8 offers a nuanced analysis of the fluctuation in training and validation error across six distinct geometric forms, contingent upon the sample size employed during the learning process. This graphical representation serves to elucidate the manner in which model efficacy varies in relation to dataset magnitude. A primary observation centers on the rapid convergence of curves toward values proximal to zero for each geometric configuration. This convergence intimates the models’ adeptness in assimilating information from the available data. As the sample size expands, both training and validation errors exhibit a gradual decline, indicative of the models’ adept adaptation to data intricacies and their proficient extrapolation across heterogeneous data landscapes.

Moreover, upon scrutinizing the MSE values, a discernible correlation with the complexity of the geometric shapes emerges. The least intricate form, the circle, manifests the lowest MSE, owing to the model’s reliance on solely two input variables (x, y) for its characterization, thereby streamlining the learning process. Conversely, the apex of complexity, embodied by the triangle, evinces the highest MSE, primarily due to its more stringent constraints: particularly, the limitation of angles to less than 60 degrees. This augmented complexity renders the capture of data variability more arduous, consequently engendering heightened predictive errors. These observations underscore the paramount significance of accounting for data complexity in the development and selection of machine learning models. Shapes of greater complexity may necessitate more nuanced models or augmented dataset sizes to attain optimal performance. Conversely, shapes of lesser complexity may be effectively encapsulated by simpler models, fostering superior overall performance and diminished error margins.

Transitioning to the prediction phase, we utilized the coefficient of determination R 2 as our primary evaluation metric. Figure 9, Figure 10 and Figure 11 below present the prediction results of the bulk modulus through three different scenarios. Across our investigation encompassing three distinct scenarios, where we systematically adjusted contrast values and volumetric fractions, we uncovered compelling predictive capabilities. Notably, our assessments unveiled a consistent trend of high-quality predictions across various geometric shapes within each scenario.

Upon closer examination of the results, a noteworthy observation emerges: the test data points closely adhere to the y = x line, indicating robust predictive accuracy. This alignment translates into impressive prediction scores ranging from 0.91 to 0.99 across the different scenarios. Notably, these scores fall within the performance spectrum observed between the triangular and circular shapes, echoing the complexities identified during model training.

The commendable predictive performances underscore the meticulous selection of hyperparameters within the XGBoost framework. Particularly, the adoption of the grid search methodology proved instrumental in identifying optimal parameter configurations, regardless of the dataset’s geometric complexity. This meticulous approach to hyperparameter tuning not only ensures robust generalization but also instills confidence in the model’s consistent and reliable performance across datasets exhibiting diverse geometric shapes.

4. Optimal Generation of Equivalent 2DMicrostructure

4.1. Defining Desired MicrostructureBehaviors

The primary objective of this study is to elucidate the equivalent morphology of a microstructure embedded within a composite material featuring a randomly positioned circular inclusion. This investigation rigorously considers the influence of fixed values of contrast and volumetric fraction on the material’s behavior across both the elastic and thermal domains.

The overarching aim is to leverage two distinct databases as shown in Figure 12 to accomplish the following dual-fold objectives:

  • Determine an alternative morphology that mirrors the original microstructure’s essential characteristics in terms of volumetric fraction and contrast, albeit manifesting a different geometric configuration. This exploration encompasses geometric variations such as elliptical, square, or triangular shapes.

  • Explore the possibility of identifying an alternative morphology with a comparable geometric shape to the original microstructure while manifesting distinct values of volumetric fraction and contrast.

Within this context, the optimization problem can be articulated in a detailed manner:

4.2. Objective FunctionFormulation

We designate K pr , μ pr , λ pr as the elastic and thermal moduli characterizing the base microstructure. Our objective is to determine the optimal position ( x , y ) , orientation θ, and morphology of the inclusion, ensuring it emulates the behavior of the base microstructure. Additionally, we aim to precisely identify the inclusion’s spatial coordinates ( x , y ) , alongside quantifying the volume fraction and contrast between the two phases to accurately replicate this desired behavior.

The optimization task is geared towards maximizing the objective function within the confines of constraints dictated by the predetermined values of the elastic parameters K, μ, and λ. This comprehensive optimization process must meticulously consider the intricate interplay of elastic and thermal properties all while maintaining strict adherence to the specified parameter values. Consider the optimization problems:

Find the best set ( X , Y , V f , C ) subject to : 0 < X , Y < 1000 5 % < V f < 67 % , 10 < C < 200 leading to prescribed values : K = K pr , μ = μ pr , λ = λ pr Find the best set ( X , Y , θ ) subject to : X 1 , Y 1 < X , Y < X 2 , Y 2 , 0 < θ < 180 leading to prescribed values : K = K pr , μ = μ pr , λ = λ pr

where K pr , μ pr , λ pr represents the prescribed values.

To achieve the first goal, we maximize the fitness function f ( A ) , defined as:

f ( A ) = 1 A A p r + 1

This function evaluates the proximity of the inclusion parameters A to the optimal state A p r , where a higher value of f ( A ) indicates a closer resemblance to the optimal behavior.

Simultaneously, our endeavor involves minimizing the discrepancy A A p r , which reflects the deviation of the inclusion parameters from their optimal values.

Through this optimization process, we aim to achieve both objectives effectively, aligning the inclusion properties with the desired performance criteria.

4.3. GeneticAlgorithm

Initially, our goal is to tackle the challenge of achieving equivalent morphology using bioinspired methods, including genetic algorithms. The decision to employ genetic algorithms was driven by specific criteria relevant to our problem. GAs were chosen to determine composite morphology due to unique characteristics inherent to our research problem that other optimization methods struggle to handle effectively. The behavior of composites is influenced by multiple variables including the distribution, shape, orientation, and interaction of different material phases. These relationships are typically non-linear and involve complex interactions and synergistic effects that are challenging to model using conventional optimization techniques. Moreover, the response surface for the optimal morphology of a composite may feature numerous local optima. GAs, which directly manipulate real composite design parameters using specific genetic operators, are particularly adept at avoiding local optima and seeking the global optimum, which is a critical capability in our scenario, where straightforward solutions are insufficient. Genetic algorithms GAs are a subset of evolutionary algorithms EAs, which are widely recognized for their effectiveness in handling complex, multi-modal, and large-scale optimization problems. These stochastic search methods draw inspiration from the natural evolutionary process [29,30] and evolve a population of candidate solutions over successive generations. In this study, we delve into the genetic algorithm approach and utilize it to navigate through the optimization landscape of the problem at hand.

The foundational principle of GAs is rooted in the mechanisms of natural selection, where individuals within a population, each representing a potential solution characterized by an objective function or fitness, undergo evolutionary processes. The core idea is that the fittest individuals have a higher likelihood of being chosen to produce offspring for the next generation. This selection process is augmented by genetic operators that mimic biological evolution, including selection, crossover, and mutation [30,31]:

-

**Selection**: a stage that prioritizes individuals with higher fitness scores, granting them a greater chance to contribute their genetic material to the next generation.

-

**Crossover**: a recombination process that merges the genetic information of selected parents to generate offspring, thereby exploring new regions of the solution space.

-

**Mutation**: introduces random genetic variations to some offspring, preventing premature convergence to local optima and encouraging diversity within the population [32,33].

This simulated evolutionary cycle, marked by selection, reproduction, and mutation, is iterated over numerous generations. The GA’s objective is to identify the most optimal solution—i.e., the individual with the highest fitness—across all generations.

While traditional approaches often employ a binary representation for optimization parameters, our study adopts the direct manipulation of integer and real-number parameters, in alignment with contemporary practices. The optimization begins with the generation of an initial, random population, followed by the application of genetic operations, which were carefully selected based on precedents from the literature [30]. Key operations include tournament selection for choosing parents, whole arithmetical crossover for creating offspring, and random uniform mutation for introducing genetic diversity.

The GA process incorporates elitism to ensure the best solution from one generation is carried over to the next, thus preserving advantageous traits. Through iterations of these genetic operations, the GA progresses towards identifying the optimal solution.

The configuration of the genetic algorithm, including the specific genetic operators and their probabilities ( p c r o s s for crossover and p m u t for mutation) as well as the selection strategy for the crossover point ρ, follows the guidelines and recommendations outlined in the literature [30,31]. These parameters are finely tuned based on empirical evidence and preliminary testing to optimize the algorithm’s performance for the specific challenges presented by the optimization problem.

In numerical experiments, the choice of population size and number of generations is carefully tailored to the complexity of the optimization problems at hand [32,33]. By selecting 300 individuals and conducting 2000 generations, we aim to strike a balance between computational efficiency and thorough exploration of the solution space. These values are carefully selected based on considerations aligned with the nature and intricacy of the optimization problems under investigation (Figure 13).

4.4. Search for Equivalent Morphology Using the Coupling of Machine Learning and the GeneticAlgorithm

In order to solve the optimization challenge, an innovative approach has been devised to integrate two bioinspired methodologies seamlessly. In this novel strategy, the GA assumes a central role overseeing the evaluation of individuals’ fitness in each generation. Traditionally, fitness computation for analogous mechanical problems involves using simulation software like finite element methods, which can be time-intensive, especially with large datasets. To overcome this challenge, machine learning has been integrated into the GA framework. Unlike conventional methods, ML models pre-trained on fitness data are employed. These ML models exhibit remarkable accuracy, with prediction scores ranging from 0.92 to 0.99, thereby eliminating the need for resource-intensive simulations. During the GA execution, these ML-generated predictions seamlessly blend into the crossover and mutation processes. Consequently, this hybrid approach optimally leverages the synergies between GA and ML, resulting in a significant acceleration of the optimization process while ensuring precise and meticulous fitness evaluations.

4.4.1. Equivalent Over-FormInclusion

To address the primary optimization challenge, we implement a genetic algorithm in conjunction with XGBoost models trained on diverse datasets featuring various shape configurations, as shown in Figure 14. The objective is to identify the optimal parameters x, y, and θ that maximize a specific fitness function while simultaneously minimizing the discrepancies between the predicted values for the circular shape A p r and those for other shapes. The genetic algorithm systematically explores different parameter combinations x, y, and θ by leveraging XGBoost models to predict properties associated with different shapes. Through iterative adjustments to these parameters, the algorithm aims to enhance the agreement between predicted and target values for the circular shape. Essentially, it seeks parameter configurations that yield predictions most closely resembling the properties of a circular shape. Upon completion of the optimization process, the genetic algorithm provides the most effective parameter combinations, resulting in the best fits for various shapes. These combinations represent parameters conducive to achieving property values closely aligned with those of the circular shape, thereby fulfilling the optimization objective.

4.4.2. Equivalent CircularInclusions

Moving on to the second optimization challenge, we encounter a distinct scenario where only a single neural network model is utilized. Within this framework, the fitness function invokes an artificial neural network model that is specifically configured to accommodate randomly generated circular microstructures exhibiting diverse contrast values and volume fractions, as shown in Figure 15. The primary objective of the genetic algorithm in this context is to ascertain the optimal parameters (X, Y, V f , and C) that minimize the discrepancy between the prescribed value and the output provided by the model. To mitigate the risk of the GA converging towards identical solutions across iterations, a sophisticated algorithm has been developed. This algorithm operates at each iteration and introduces a penalty to the fitness function equivalent to the value of the previous solution. By imposing this penalty, the algorithm encourages the model to explore alternative solutions, thereby enhancing the diversity of the parameter space explored. For comprehensive insights into the intricacies of this algorithm, readers are directed to consult the preceding paper, where the algorithm is meticulously elucidated. The coupling of the GA with the ANN model stands poised to yield optimal parameter configurations capable of maximizing fitness across a spectrum of microstructural variations. This nuanced approach endeavors to achieve behaviors closely aligned with the prescribed criteria while allowing for nuanced adjustments in parameters such as X, Y, V f , and C. Through this methodical exploration of the parameter space, the GA effectively navigates the landscape of potential solutions, facilitating the identification of configurations that manifest the desired behavioral traits with precision and reliability.

4.4.3. Results andDiscussions

To assess the efficacy of combining machine learning with genetic algorithms to discover equivalent morphologies, diverse scenarios were crafted, as shown in Figure 16, with each manipulating different variables. The initial scenario centers around an inclusion positioned at the center of a 1000 × 1000 matrix that is firmly fixed at coordinates (500, 500). Here, the volume fraction stands at 25% and is accompanied by a contrast of 80 between the two phases. In the subsequent scenario, the inclusion is randomly situated within the matrix, mirroring real-world conditions more closely. With the volume fraction raised to 30% and the contrast lowered to 50, this adjustment evaluates the system’s capability to manage scenarios where the inclusion’s range is constrained and contrasts are less pronounced. Continuing this investigation, the third scenario reduces the volume fraction to 20% and further diminishes the contrast to 20. The inclusion’s placement remains randomized, reflecting scenarios where inclusion characteristics may be less defined. Each scenario is meticulously designed to scrutinize the ML-GA coupling’s aptitude in recognizing equivalent morphologies across a spectrum of conditions, spanning from ideal scenarios to more intricate and realistic environments.

Scenario A:

-

Inclusion position: centered inclusion with coordinates (500, 500);

-

Volume fraction V f : 25%;

-

Contrast C: 80.

Scenario B:

-

Inclusion position: randomly placed with coordinates (450, 550);

-

Volume fraction V f : 30%;

-

Contrast C: 50.

Scenario C:

-

Inclusion position: randomly placed with coordinates (250, 400);

-

Volume fraction V f : 20%;

-

Contrast C: 20

Interpretation

Figure 17 below illustrates the results obtained from various scenarios explored by the coupling model. Each scenario corresponds to a specific geometric configuration of the inclusion, thus influencing its behavior within the matrix. In the first scenario, where the inclusion is a centered circle, a unique solution is identified in the database. However, finding an equivalent morphology proves to be complex. Despite this, the model manages to discover a satisfactory solution with adaptation values close to one. This adequacy is particularly remarkable in the elliptical case, with a shape ratio of 2/3, as well as in the square case, for which the shape is similar to that of a circle. Other solutions were found with less adaptation in the elliptical case (aspect ratio = 1/2) and triangular case, but they had even lower adaptation in the case of an elongated elliptical inclusion (aspect ratio = 1/3), where the shape is far from circular, and where the degree of freedom in the matrix is limited. In the second scenario, where the volume fraction is fixed at 30%, the solutions do not differ considerably from those of the first scenario. This is because the inclusion has less freedom in the matrix, restricting the set of possible solutions. Conversely, in scenario C, where the volume fraction is reduced to 20% and the inclusion is randomly positioned, the solutions show increased accuracy. The adaptation values range between 0.86 and 0.98. This improvement is attributable to the greater freedom granted to the inclusion, promoting a more hom*ogeneous distribution within the matrix.

In the second figure (Figure 18), we scrutinize the outcomes derived from employing a secondary coupling model predicated on an artificial neural network. This model is specifically tailored to seek analogous morphological representations within the identical database as the input configuration. To ensure exhaustive exploration of the solution space, we implement a penalization algorithm. This mechanism, coupled with the stipulation of a fixed quantity of solutions for exploration, facilitates the generation of multiple potential solutions subsequent to each instance of penalization.

Upon meticulous examination of the results, we observe that the coupling model exhibits commendable efficacy across all scrutinized scenarios (A, B, and C). The obtained adaptation values fall within the range of 0.94 to 0.99, affirming the efficacy of the adopted coupling methodology. This substantiates the model’s adeptness in accurately delineating intricate interrelations among diverse variables and the corresponding fine-tuning of the model’s parameters.

Nevertheless, it is imperative to underscore that the adaptation value undergoes diminution following each penalization iteration. This decline can be attributed to the excision of previously identified solutions. This iterative culling process, which is aimed at eliminating suboptimal solutions while accentuating more promising ones, facilitates the convergence of the model towards more resilient configurations that better align with observed data.

The integration of machine learning models and genetic algorithms presents a hopeful avenue for identifying equivalent morphologies within microstructures housing circular inclusions. This hybrid approach harnesses the respective advantages of each technique: machine learning’s capacity to discern significant patterns from data and genetic algorithms’ prowess in optimization. Yet obstacles like the intricate shapes of inclusions and constraints posed by elevated volume fractions may result in fitness values diverging from ideal outcomes. Nonetheless, the method’s efficacy in confronting these hurdles and furnishing a structured framework for microstructural analysis remains unequivocal.

5. Conclusions

In this study, we developed a novel methodology aimed at discerning equivalent morphologies within a microstructure housing a circular inclusion. Our approach relies on a comprehensive analysis of the microstructure’s linear elastic and thermal behavior, which is facilitated by a coupling model integrating machine learning methodologies with genetic algorithms.

Our investigative process commenced with meticulous data acquisition. We systematically generated a range of microstructures embodying diverse inclusion shapes such as circles, ellipses, squares, and triangles: each characterized by a varying volume fraction V f and contrast value C. The overarching goal was to assemble a robust database representative of the manifold structures encountered in practical scenarios.

Subsequently, this dataset served as the foundation for determining key mechanical properties: namely, the bulk and shear moduli and the thermal conductivity. This task was accomplished through the utilization of finite element simulation techniques, which afford precise insights into the mechanical and thermal responses of the microstructures under consideration.

The assembled dataset was then partitioned into two distinct subsets. The first subset comprises data subsets corresponding to each inclusion shape, with fixed V f and C values, serving as inputs for an XGBoost machine learning model. The second subset encompasses circular microstructures, exhibiting variations in V f and C, to train an artificial neural network model with integrated dropout layers to mitigate overfitting. Both developed models demonstrated promising predictive capabilities, significantly streamlining the task of genetic algorithms. Leveraging the prescribed input microstructure parameters, these algorithms yielded two distinct outcomes: first, the generation of morphologies with equivalent mechanical properties, characterized by comparable fractions and contrasts but differing inclusion geometries; second, the synthesis of circular microstructures exhibiting variations in V f and C values.

The outcomes of our study substantiate the effectiveness of the proposed methodology in tackling the multifaceted challenge at hand. Particularly noteworthy is the commendable adaptation observed across pivotal mechanical moduli—namely, the bulk modulus, shear modulus, and thermal conductivity. This underscores the significance and efficacy of amalgamating machine learning methodologies with genetic algorithms in the composite materials field.

Author Contributions

Methodology, T.K. and T.M.; Software, H.B.; Validation, H.B.; Writing—original draft, H.B.; Supervision, T.K. and T.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical and legal restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moumen, A.E. Prévision du Comportement des Matériaux Hétérogènes basée sur L’hom*ogénéisation Numérique: Modélisation, Visualisation et étude Morphologique. Ph.D. Thesis, University of Lille, Lille, France, 2014. [Google Scholar]
  2. Kovácik, J.; Simancik, F. Aluminium foam—Modulus of elasticity and electrical conductivity according to percolation theory. Scr. Mater. 1998, 39, 239–246. [Google Scholar] [CrossRef]
  3. Ding, Y. Analyse Morphologique de la Microstructure 3D de Réfractaires Électrofondus à Très Haute Teneur en Zircone: Relations avec les Propriétés Mécaniques, Chimiques et le Comportement Pendant la Transformation Quadratique-Monoclinique. Ph.D. Thesis, Ecole Nationale Supérieure des Mines de Paris, Paris, France, 2012. [Google Scholar]
  4. Zhou, Q.; Zhang, H.W.; Zheng, Y.G. A hom*ogenization technique for heat transfer in periodic granular materials. Adv. Powder Technol. 2012, 23, 104–114. [Google Scholar] [CrossRef]
  5. Chaboche, J.L. Le Concept de Contrainte Effective Appliqué à l’Élasticité et à la Viscoplasticité en Présence d’un Endommagement Anisotrope. In Mechanical Behavior of Anisotropic Solids/Comportment Méchanique des Solides Anisotropes; Boehler, J.P., Ed.; Springer: Dordrecht, The Netherlands, 1982; pp. 737–760. [Google Scholar] [CrossRef]
  6. Glier, M.W.; Tsenn, J.; Linsey, J.S.; McAdams, D.A. Methods for Supporting Bioinspired Design. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Denver, CO, USA, 11–17 November 2011; Volume 2. [Google Scholar]
  7. Paturi, U.M.R.; Cheruku, S.; Reddy, N.S. The Role of Artificial Neural Networks in Prediction of Mechanical and Tribological Properties of Composites—A Comprehensive Review. Arch. Comput. Methods Eng. 2022, 29, 3109–3149. [Google Scholar] [CrossRef]
  8. Han, S.; Ma, N.; Zheng, H.; Han, Q.; Li, C. Intelligently optimized arch-honeycomb metamaterial with superior bandgap and impact mitigation capacity. Compos. Part A Appl. Sci. Manuf. 2024, 185, 108298. [Google Scholar] [CrossRef]
  9. Kibrete, F.; Trzepiecinski, T.; Gebremedhen, H.S.; Woldemichael, D.E. Artificial Intelligence in Predicting Mechanical Properties of Composite Materials. J. Compos. Sci. 2023, 7, 364. [Google Scholar] [CrossRef]
  10. Nandi, A.K.; Deb, K.; Ganguly, S.; Datta, S. Investigating the role of metallic fillers in particulate reinforced flexible mould material composites using evolutionary algorithms. Appl. Soft Comput. 2012, 12, 28–39. [Google Scholar] [CrossRef]
  11. Tabakov, P.Y.; Moyo, S. A comparative analysis of evolutionary algorithms in the design of laminated composite structures. Sci. Eng. Compos. Mater. 2015, 24, 13–21. [Google Scholar] [CrossRef]
  12. Balasubramanian, M.; Paglicawan, M.A.; Zhang, Z.X.; Lee, S.H.; Xin, Z.X.; Kim, J.K. Prediction and Optimization of Mechanical Properties of Polypropylene/Waste Tire Powder Blends using a Hybrid Artificial Neural Network-Genetic Algorithm (GA-ANN). J. Thermoplast. Compos. Mater. 2008, 21, 51–69. [Google Scholar] [CrossRef]
  13. Equbal, A.; Shamim, M.; Badruddin, I.A.; Equbal, M.I.; Sood, A.K.; Nik Ghazali, N.N.; Khan, Z.A. Application of the Combined ANN and GA for Multi-Response Optimization of Cutting Parameters for the Turning of Glass Fiber-Reinforced Polymer Composites. Mathematics 2020, 8, 947. [Google Scholar] [CrossRef]
  14. Sardar, S.; Dey, S.; Das, D. Modelling of tribological responses of composites using integrated ANN-GA technique. J. Compos. Mater. 2021, 55, 873–896. [Google Scholar] [CrossRef]
  15. Guan, Z.J.; Li, R.; Jiang, J.T.; Song, B.; Gong, Y.X.; Zhen, L. Data mining and design of electromagnetic properties of Co/FeSi filled coatings based on genetic algorithms optimized artificial neural networks (GA-ANN). Compos. Part Eng. 2021, 226, 109383. [Google Scholar] [CrossRef]
  16. Aveen, K.P.; Londhe, N.; Ullal, V.N.; Pranesh Rao, K.M. Effect of aluminium filler concentration on delamination in GFRP composite with optimized machining conditions using ANN-genetic algorithm. Eng. Res. Express 2023, 5, 015074. [Google Scholar] [CrossRef]
  17. Grine, M.; Slamani, M.; Laouissi, A.; Arslane, M.; Rokbi, M.; Chatelain, J.F. Enhanced investigations and modeling of surface roughness of epoxy/Alfa fiber biocomposites using optimized neural network architecture with genetic algorithms. Int. J. Adv. Manuf. Technol. 2024, 130, 3115–3132. [Google Scholar] [CrossRef]
  18. Yu, C.H.; Qin, Z.; Buehler, M.J. Artificial intelligence design algorithm for nanocomposites optimized for shear crack resistance. Nano Futures 2019, 3, 035001. [Google Scholar] [CrossRef]
  19. Li, W.; Suhayb, M.K.; Thangavelu, L.; Abdulameer Marhoon, H.; Pustokhina, I.; Alqsair, U.F.; El-Shafay, A.; Alashwal, M. Implementation of AdaBoost and genetic algorithm machine learning models in prediction of adsorption capacity of nanocomposite materials. J. Mol. Liq. 2022, 350, 118527. [Google Scholar] [CrossRef]
  20. Moumen, A.E.; Kanit, T.; Imad, A.; Minor, H.E. Effect of overlapping inclusions on effective elastic properties of composites. Mech. Res. Commun. 2013, 53, 24–30. [Google Scholar] [CrossRef]
  21. Khdir, Y.K.; Kanit, T.; Zaïri, F.; Naït-Abdelaziz, M. Computational hom*ogenization of plastic porous media with two populations of voids. Mater. Sci. Eng. A 2014, 597, 324–330. [Google Scholar] [CrossRef]
  22. Khdir, Y.K.; Kanit, T.; Zaïri, F.; Naït-Abdelaziz, M. A computational hom*ogenization of random porous media: Effect of void shape and void content on the overall yield surface. Eur. J. Mech. A/Solids 2015, 49, 137–145. [Google Scholar] [CrossRef]
  23. Kabbani, M.S.; El Kadi, H.A. Predicting the effect of cooling rate on the mechanical properties of glass fiber-polypropylene composites using artificial neural networks. J. Thermoplast. Compos. Mater. 2019, 32, 1268–1281. [Google Scholar] [CrossRef]
  24. Jiang, J.; Zhang, Z.; Fu, J.; Ramakrishnan, K.R.; Wang, C.; Wang, H. Machine learning assisted prediction of mechanical properties of graphene/aluminium nanocomposite based on molecular dynamics simulation. Mater. Des. 2022, 213, 110334. [Google Scholar] [CrossRef]
  25. Kanit, T.; Forest, S.; Galliet, I.; Mounoury, V.; Jeulin, D. Determination of the size of the representative volume element for random composites: Statistical and numerical approach. Int. J. Solids Struct. 2003, 40, 3647–3679. [Google Scholar] [CrossRef]
  26. Ben-Ltaief, N.; NGuyen, F.; Kanit, T.; Imad, A.; Bel-Hadj-Ali, N. Effect of particles morphology on the effective elastic properties of bio–composites reinforced by seashells: Numerical investigations. J. Compos. Mater. 2023, 57, 002199832211389. [Google Scholar] [CrossRef]
  27. Béji, H.; Kanit, T.; Messager, T. Prediction of Effective Elastic and Thermal Properties of Heterogeneous Materials Using Convolutional Neural Networks. Appl. Mech. 2023, 4, 287–303. [Google Scholar] [CrossRef]
  28. Beji, H.; Kanit, T.; Messager, T.; Ben-Ltaief, N.; Ammar, A. Mathematical Models for Predicting the Elastic and Thermal Behavior of Heterogeneous Materials through Curve Fitting. Appl. Sci. 2023, 13, 13206. [Google Scholar] [CrossRef]
  29. Goldberg, D. Genetic Algorithms in Search, Optimization, and Machine Learning, 1st ed.; Addison-Wesley: Lebanon, IN, USA, 1989. [Google Scholar]
  30. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed.; Springer: Berlin, Germany, 1999. [Google Scholar]
  31. Evolutionary Algorithms in Engineering and Computer Science: Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary Programming, Genetic Programming and Industrial Applications; Wiley: New York, NY, USA, 1999.
  32. Messager, T.; Pyrz, M.; Gineste, B.; Chauchot, P. Optimal laminations of thin underwater composite cylindrical vessels. Compos. Struct. 2002, 58, 529–537. [Google Scholar] [CrossRef]
  33. Abdul-Hameed, H.; Messager, T.; Zaïri, F.; Naït-Abdelaziz, M. Large-strain viscoelastic–viscoplastic constitutive modeling of semi-crystalline polymers and model identification by deterministic/evolutionary approach. Comput. Mater. Sci. 2014, 90, 241–252. [Google Scholar] [CrossRef]

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (1)

Figure 1. Exploration of geometric characteristics of 2D microstructure images with multi-shape inclusion.

Figure 1. Exploration of geometric characteristics of 2D microstructure images with multi-shape inclusion.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (2)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (3)

Figure 2. Exploration of geometric characteristics of circular inclusion of 2D microstructure images with different volume fractions.

Figure 2. Exploration of geometric characteristics of circular inclusion of 2D microstructure images with different volume fractions.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (4)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (5)

Figure 3. Complete integration of two-dimensional multi-phase element mesh configuration.

Figure 3. Complete integration of two-dimensional multi-phase element mesh configuration.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (6)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (7)

Figure 4. Neural network architecture using dropout layers.

Figure 4. Neural network architecture using dropout layers.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (8)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (9)

Figure 5. Mean absolute percentage curves using training and validation data as a function of the training examples.

Figure 5. Mean absolute percentage curves using training and validation data as a function of the training examples.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (10)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (11)

Figure 6. Prediction results of the bulk modulus using the ANN model.

Figure 6. Prediction results of the bulk modulus using the ANN model.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (12)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (13)

Figure 7. Prediction results of the thermal conductivity using the ANN model.

Figure 7. Prediction results of the thermal conductivity using the ANN model.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (14)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (15)

Figure 8. Mean squared error curves using training and validation data as a function of the training examples.

Figure 8. Mean squared error curves using training and validation data as a function of the training examples.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (16)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (17)

Figure 9. Predicted bulk modulus results for different shapes with V f = 25% and C = 80.

Figure 9. Predicted bulk modulus results for different shapes with V f = 25% and C = 80.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (18)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (19)

Figure 10. Predicted bulk modulus results for different shapes with V f = 30% and C = 50.

Figure 10. Predicted bulk modulus results for different shapes with V f = 30% and C = 50.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (20)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (21)

Figure 11. Predicted bulk modulus results for different shapes with V f = 20% and C = 20.

Figure 11. Predicted bulk modulus results for different shapes with V f = 20% and C = 20.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (22)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (23)

Figure 12. An in-depth analysis of databases utilized for determining equivalent morphologies.

Figure 12. An in-depth analysis of databases utilized for determining equivalent morphologies.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (24)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (25)

Figure 13. Optimization approach using genetic algorithms.

Figure 13. Optimization approach using genetic algorithms.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (26)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (27)

Figure 14. Equivalent morphology search with multishape inclusion using XGBoost and genetic algorithm coupling.

Figure 14. Equivalent morphology search with multishape inclusion using XGBoost and genetic algorithm coupling.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (28)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (29)

Figure 15. Equivalent morphology search with circular inclusion using ANN and genetic algorithm coupling.

Figure 15. Equivalent morphology search with circular inclusion using ANN and genetic algorithm coupling.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (30)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (31)

Figure 16. Various scenarios to address (a) V f = 25% and C = 80, (b) V f = 30% and C = 50, and (c) V f = 20% and C = 20.

Figure 16. Various scenarios to address (a) V f = 25% and C = 80, (b) V f = 30% and C = 50, and (c) V f = 20% and C = 20.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (32)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (33)

Figure 17. Results of equivalent over-form inclusion through coupling model.

Figure 17. Results of equivalent over-form inclusion through coupling model.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (34)

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (35)

Figure 18. Results of equivalent circular inclusion through coupling model.

Figure 18. Results of equivalent circular inclusion through coupling model.

Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (36)

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Equivalent Morphology Concept in Composite Materials Using Machine Learning and Genetic Algorithm Coupling (2024)

References

Top Articles
Latest Posts
Article information

Author: Edwin Metz

Last Updated:

Views: 5965

Rating: 4.8 / 5 (58 voted)

Reviews: 81% of readers found this page helpful

Author information

Name: Edwin Metz

Birthday: 1997-04-16

Address: 51593 Leanne Light, Kuphalmouth, DE 50012-5183

Phone: +639107620957

Job: Corporate Banking Technician

Hobby: Reading, scrapbook, role-playing games, Fishing, Fishing, Scuba diving, Beekeeping

Introduction: My name is Edwin Metz, I am a fair, energetic, helpful, brave, outstanding, nice, helpful person who loves writing and wants to share my knowledge and understanding with you.