Analyzing the Impact of Population Size in AI-Based Reconstruction of the Thermal Parameter in Heat Conduction Modeling

The research presents a novel approach leveraging swarm algorithms, the artificial bee colony (ABC) and ant colony optimization (ACO), to reconstruct the heat transfer coefficient, specifically for the continuity boundary condition. The authors used their own application software for the numerical computations, employing classical variants of the swarm algorithms. The numerical calculations employed an error functional to assess the accuracy of the estimated result. The coefficient of the thermally conductive layer was reconstructed using swarm methods within the range of 900–1500 W/(m²K) and subsequently compared to a predetermined reference value. A finite element mesh consisting of 576 nodes was used for the calculations. The study involved simulations with populations of 5, 10, 15, and 20 individuals. Furthermore, each scenario also considered noise of 0%, 2%, and 5% of the reference values. The results make it evident that the reconstructed values of the κ coefficient, cooling curves, and temperatures for the ABC and ACO algorithms are physically correct, and they show strong agreement with the expected κ parameter values. The results of the numerical simulations demonstrate considerable promise for applying artificial intelligence algorithms to optimize production processes, analyze data, and facilitate data-driven decision-making. This contribution not only underscores the effectiveness of swarm intelligence in engineering applications but also opens new avenues for research in thermal process optimization.


INTRODUCTION
When examining the concepts of artificial intelligence and machine learning, it is natural to draw comparisons to human intellect [1]. The fundamental components of human intelligence encompass the practical application of knowledge, the acquisition and assimilation of information, and the cognitive and general capabilities involved. Conceptual and abstract thinking, interactive abilities, deductive reasoning, goal orientation, and proficiency in analysis and memorization are among the most noteworthy facets of the learning process.
This area of literature encompasses an array of models pertaining to intelligent machines, as expounded upon by Rutkowski and Slota in their respective works [2,3]. Moreover, artificial intelligence (AI) utilizes not just human behavioral patterns but also incorporates collective intelligence from various species, such as bees, ants, wolves, whales, and others, as well as genetic algorithms.
Artificial intelligence (AI) has played a significant role in effectively tackling a range of intricate challenges, as shown by its application in enhancing road safety. To augment the security of mobile vehicles, adaptive safety systems have been devised, necessitating prompt anticipation of probable hazards to prevent collisions or traffic-related occurrences. The AI-based method developed by Meier et al. [4] introduces an automated approach for learning a prediction function. Integrating adaptive safety systems with these models has enhanced performance and passenger security.
Swarm algorithms have made significant advances in artificial intelligence, drawing inspiration from biological processes found in a variety of animal groups, such as ant colonies, bee swarms, worm clusters, and bird flocks. Hackwood et al. in their seminal work [5] introduced the notion of intelligent swarming, highlighting the remarkable adaptability of these algorithms in handling varied constraints such as geographical limitations and variable independence.
As the world moves towards the era of Industry 4.0, the effective utilization of artificial intelligence algorithms takes on greater significance. Karaboga [6] suggested a solution to the difficulty of heat conduction with an unidentified heating source. The author handled this particular physical problem by framing it as an optimization task and exploring and identifying improved solutions using heuristic approaches such as genetic algorithms. To explore the available solutions, these algorithms use evolutionary mechanisms and notions of natural selection.
The cognitive abilities displayed by bees have been the basis for the swarming algorithms. Karaboga [7] introduced the artificial bee colony (ABC) algorithm, a model influenced by the foraging behavior of honey-producing bees. The proposed model has three main components: forager bees, food sources, and inactive bees. In addition, bees utilize a sophisticated dance procedure to improve their communication about food sources with other colony members.
Bee/ant swarm and gradient-driven algorithms are two separate approaches employed for optimization purposes. Gradient-driven methods are commonly used to tackle optimization problems in the conventional approach. The approaches mentioned above employ the function derivative to ascertain the precise point at which the function attains its minimum value. In contrast, the bee/ant swarm algorithms utilize natural evolution as a conceptual framework to choose the most advantageous solution, deriving insights from the known evolutionary mechanisms exhibited by organisms in their natural habitats. The utilization of gradient calculation is unnecessary in both bee and ant algorithms.
Gradient-driven algorithms are particularly effective in solving problems where the target function has a regular contour and the function derivatives are known. These algorithms are advantageous because they can quickly and efficiently find the local minimum of the cost function. According to the cited sources [8,9], bee and ant swarm algorithms exhibit greater efficacy in addressing problem scenarios characterized by irregular cost function profiles or by knowledge of function values limited solely to the mesh nodes.
The study illustrates that swarm algorithms possess more resistance to deviations, such as errors in input parameters and algorithm implementation issues, in comparison to gradient-driven algorithms. Furthermore, the ant colony optimization (ACO) and artificial bee colony methods exhibit better scalability when compared to gradient-driven techniques. The use of gradient-driven algorithms for complicated problems might pose challenges due to the necessity of computing gradients for all variables being optimized.
Swarm algorithms exhibit superior efficiency compared to gradient methods due to their ability to enable simultaneous problem-solving within the population. Conversely, gradient-driven algorithms may prove time-consuming when used for large-scale problems [10].
The algorithm selection is ultimately contingent upon the specific issue that requires resolution. When faced with a problem that exhibits numerous local minima, bee and ant swarm algorithms are considered superior to gradient approaches. Gradient-driven algorithms are a more favorable option for dealing with problems that possess a smooth profile and are readily solvable.
The notion of local minimum resilience concerns the ability of an optimization algorithm to bypass local minima that lack optimality on a global scale. Because gradient optimization techniques rely on tracing the route of the steepest descent of the cost function, they are prone to encountering local minima. Consequently, these algorithms proceed in the direction where the cost function exhibits the most rapid drop. If the solution space contains numerous local minima, it is possible for the gradient optimization process to become trapped within one of them, thereby missing the global minimum. The ABC and ACO algorithms have a reduced susceptibility to local minima due to their independence from the differentiation of the cost function.
As a result, these algorithms randomly explore the space of solutions, increasing the possibility of identifying a global minimum. However, resistance to local minima is only one of numerous factors to take into account when selecting an optimization strategy. Additional significant elements are the complexity and runtime of the algorithm.
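The contrast drawn above can be illustrated with a small, self-contained sketch (not taken from the study): on a multimodal one-dimensional cost function, gradient descent started near a local basin stays there, while derivative-free uniform random sampling of the domain locates the global basin. The cost function, learning rate, and all other parameter values below are illustrative assumptions.

```python
import math
import random

def cost(x):
    # Multimodal test function: global minimum at x = 0, local minima near the integers.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def grad(x):
    # Analytical derivative of the cost function above.
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def gradient_descent(x0, lr=0.002, steps=500):
    # Steepest-descent iteration; lr is kept small for stability.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def random_search(lo, hi, samples=2000, seed=0):
    # Derivative-free exploration: keep the best of many uniform samples.
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(samples)), key=cost)

# Started inside a local basin, gradient descent stays there...
x_gd = gradient_descent(x0=3.0)
# ...while random exploration of the whole domain finds the global basin.
x_rs = random_search(-5.0, 5.0)
print(x_gd, x_rs)
```

The sketch only demonstrates the local-minimum trap; it ignores the runtime and complexity trade-offs noted above.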
It should be noted that inverse problems exhibit variability, and in certain instances, the utilization of gradient optimization algorithms may prove to be a more advantageous alternative [11].
Literature from recent years shows examples of the successful use of swarm algorithms. Determining the most efficient route within an urban area connecting two places situated at a specified distance apart constitutes a distinct computational challenge. The problem was addressed using the ant colony optimization technique. This algorithm maximizes the utilization of distributed and large-scale systems. In a scholarly article by Komar [12], a comparative analysis was conducted to evaluate the efficiency of ant colony optimization in comparison to typical navigation techniques for determining the shortest path between two given places. The study's findings indicated that ACO exhibited superior performance in terms of efficiency.
Hetmaniok et al. [13,14] carried out research in which they used algorithms based on swarm intelligence to solve inverse heat transfer problems, with a focus on the boundary condition of heat exchange with the environment. The researchers reconstructed the temperature field inside the defined region and identified the heat transfer coefficient as a crucial stage in the problem-solving approach. The precision of the estimated solution was determined by minimizing the functional in the context of the heat conduction problem. The researchers highlighted the effectiveness of swarm algorithms in tackling inverse problems, with special emphasis on handling input errors and parameter selection.
The finite element method (FEM) is extensively employed in computer simulations for the numerical computation of many phenomena. FEM is a robust numerical technique utilized to solve partial differential equations. It is particularly prevalent in applications such as the continuous casting of steel and numerous other domains [15,16,17]. As a result, the authors use the finite element method in the numerical section of their study.
Conductivity-radiation transient phenomena are commonly observed in engineering contexts, such as when investigating heat transport in combustion chambers and designing thermal insulation. A number of unknown variables, such as absorption, emissivity, and thermal conductivity, frequently characterize these problems. The methodical process of deducing unknown parameters from empirical observations or experimental data is referred to as inverse analysis.
The researchers in the paper [18] introduce a novel approach to inverse analysis, which aims to determine the thermal characteristics of materials under transient conductivity-radiation scenarios. The method provided in this study is also founded upon the finite element method. The researchers also utilized the genetic algorithm (GA), a stochastic optimization tool, to investigate and determine the optimal values for thermal characteristics. The study demonstrates that the approach described in this research is capable of accurately and consistently estimating unknown factors using test data. Fourier's law is a commonly used theoretical framework for studying the transfer of heat in solid materials. Nevertheless, Fourier's law does not hold true in several scenarios, including cases involving significant temperature gradients or materials exhibiting non-uniform characteristics. It is crucial to use a more comprehensive model, such as non-Fourier heat conduction, in cases like these.
The authors of the study [19] introduce a methodology for inverse analysis that aims to estimate parameters in systems governed by non-Fourier heat conduction. As in the previous case, the approach is founded upon the FEM and the GA. The authors demonstrated the efficacy of the approach in the context of the 2D non-Fourier problem of conductivity and radiation. The researchers showed that the proposed methodology can reliably estimate parameters associated with the mentioned law based on experimental data.
The utilization of AI algorithms and their subsequent implementations in domains such as heat conduction exemplify the capacity for groundbreaking solutions across diverse disciplines.The ongoing advancement and exploration of AI approaches, such as swarm algorithms, in conjunction with mathematical models in the field of heat transfer and other related areas, will persistently propel scientific progress and create novel opportunities for further research and development.
Based on preliminary research [20,21,22], it was found that both the ABC and ACO optimization algorithms achieve good results in reconstructing the heat conduction coefficient of the separation layer. Now, the authors ask whether increasing the population size makes sense and what impact it might have on the final results. It is therefore essential to carefully balance the population size against the desired level of accuracy and the available resources. Increasing the number of individuals in a population can affect the efficiency of the algorithms, but there is a limit beyond which the benefits may be marginal or even invisible. With a small population, there is a risk of getting stuck in local minima, limiting the algorithm's ability to find an optimal solution. On the other hand, too large a population can result in excessive use of computing resources, which can be inefficient in terms of time and computing power. Increasing the number of individuals in the population can also adversely affect the computation time of the algorithm, which can be particularly important in the case of a large-scale problem. One way to accelerate the calculations was to consider only one-fourth of the cast-mold system, owing to the symmetry of the geometry. However, the final decision on the optimal number of individuals in the population should be based on the specifics of the problem under study, the available computational resources, and the evaluation of experimental results.
This article examines the suitability and efficiency of swarm intelligence algorithms, specifically the artificial bee colony and ant colony optimization, for optimization under the continuity boundary condition. Our study aims to reconstruct the heat transfer coefficient of the thermally conductive layer within a specific range and evaluate the accuracy of these estimations using numerical simulations. To the best of the authors' knowledge, this is one of the initial instances linking swarm algorithms with the reconstruction of selected thermal parameters in a continuity boundary condition.
In the subsequent sections of the article, three main research areas are presented sequentially. The first involves the mathematical model of heat conduction, which serves as the basis for analyzing thermal processes in the studied systems. The second part details the model and the operation of the ABC and ACO algorithms, which are used to find the best solutions for the designed systems. The third area presents the use of dedicated software for numerical modeling; this software allows the results of the constructed models to be verified and interpreted, giving a fuller picture of the processes studied. The article closes with conclusions drawn from the conducted research.

MATHEMATICAL MODEL

Heat transfer
Transient heat conduction is a phenomenon that takes place when bodies undergo heating or cooling processes in their attempt to attain thermal equilibrium with the surrounding environment. The process of heat transfer that occurs among parts of a body that are in contact with one another is referred to as conduction. The following equation delineates the mathematical representation of conduction in a single body:

cρ ∂T/∂t = ∇·(λ ∇T), (1)

where: λ is the thermal conductivity coefficient, c is the specific heat, ρ is the density, and ∂T/∂t is the time derivative of temperature.
It is customary in foundry engineering to use Eq. 1 to describe the heat flow during the cooling of castings. The cooling rate determines the properties of the final product; hence, determining accurate parameters is crucial from the engineer's perspective. The subject under consideration pertains to initial-boundary value problems, necessitating the inclusion of suitable initial and boundary conditions. The authors employed Cauchy conditions as the initial conditions, wherein specified temperature values are assigned at the beginning moment. The initial time, indicated as t₀, is equal to zero seconds and is required to define the beginning temperature distribution [23]:

T(r, t₀) = T₀(r), r ∈ Ω, (2)

where: r is the position vector of a given point.
There are four distinct categories of boundary conditions that are linked to the phenomenon of heat transfer:
• the first form of boundary condition (Dirichlet) specifies the temperature distribution on the boundary Γ_A of the region Ω:

T(r, t) = T_b(r, t), r ∈ Γ_A; (3)

• the second form of boundary condition (von Neumann) specifies that the heat flux is known on the boundary Γ_B:

q(r, t) = q_b(r, t), r ∈ Γ_B; (4)

• the third form of boundary condition (Newton's or Robin's) specifies that heat exchange with the environment occurs on the boundary Γ_C of the region Ω:

q = α (T − T_env), r ∈ Γ_C, (5)

where: α is the heat transfer coefficient of exchange with the environment, T is the temperature on the boundary Γ_C, T_env is the ambient temperature, and q denotes the heat flux (inflow into the area Ω for T < T_env or outflow from the area Ω for T > T_env);
• the fourth form of boundary condition (continuity condition) specifies that heat exchange occurs on the boundary Γ_D separating areas Ω₁ and Ω₂. Two cases are possible here:
− ideal contact between the areas,
− lack of ideal contact, with heat exchange through the separation layer described by the κ coefficient:

κ = λ_p / δ, (6)

where: λ_p is the thermal conductivity coefficient of the separation layer, and δ is the thickness of that layer [23,24].
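The third- and fourth-kind quantities can be evaluated directly, as in the following minimal sketch. The separation-layer conductivity and thickness values are hypothetical (the paper does not state them), chosen only so that κ = λ_p/δ matches the 1000 W/(m²K) reference value used later; α and T_env follow the values given in the numerical section.

```python
def kappa(lambda_p, delta):
    """Heat transfer coefficient of the separation layer, kappa = lambda_p / delta."""
    return lambda_p / delta

def robin_flux(alpha, T_boundary, T_env):
    """Boundary heat flux of the third-kind (Robin) condition, q = alpha * (T - T_env)."""
    return alpha * (T_boundary - T_env)

# Hypothetical layer: conductivity 0.5 W/(m K), thickness 0.5 mm -> kappa = 1000 W/(m^2 K).
print(kappa(0.5, 0.0005))
# Flux with the paper's alpha = 100 W/(m^2 K) and T_env = 300 K, at a 350 K boundary.
print(robin_flux(100.0, 350.0, 300.0))
```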

Artificial intelligence algorithms
Bee and ant algorithms are categorized as swarm algorithms and are classified within the domain of metaheuristic algorithms. A metaheuristic refers to a broad computational problem-solving approach that may be applied to address a wide range of problems as defined by the terms specified inside the algorithm. Frequently, these models draw upon comparisons to tangible phenomena in the fields of physics, chemistry, and biology, which can be analyzed via the lens of optimization principles [25]. Metaheuristics make it possible to find solutions that come close to the optimum, even without specific knowledge of a particular optimization problem. These methods show fast flexibility with respect to constraints and the size of the solution space, without depending on the number of variables. Gerardo et al. in their individual articles [26] and Hackwood [5] have proposed the notion of swarm intelligence. The algorithms were developed on the basis of practical studies of natural processes, such as the collective behavior of bird flocks, ant colonies, worm communities, and bee swarms.

Bee algorithm
The bee algorithm consists of two distinct groups of bees forming an artificial bee colony. The first part of the colony consists of worker bees. The remaining part of the swarm comprises a proportionate number of bees that are not involved in any type of occupation.
One of the underlying premises of the ABC algorithm is that the number of unemployed bees in the population is equivalent to the number of working bees. This implies that each employed bee is associated with a single food source within a specific environment. When employed bees deplete a food source, they become unemployed.
An effective solution to the problem encountered in the bee algorithm is the optimization of food-source locations. The amount of nectar present in the food source directly impacts the effectiveness of the solution, thereby determining its quality. The first phase of the bee algorithm entails the stochastic creation of the starting population and the exploration of a certain number of food sources (SN). In each iteration, a solution is considered equivalent to the location of a food source. The iterations are of fundamental importance in the process of updating the solution, since they involve many transitions that determine the coordinates of the source location after initiation. The employed bee's adjustment of the solution is contingent upon local knowledge, while the evaluation of the new source depends on the quantity of nectar available. The updated location of the food source is memorized on the condition that the quantity of nectar in subsequent rounds surpasses the value seen in earlier iterations; alternatively, the preceding state is retained. Thus, there is a mutual exchange of information between the worker and the unemployed bees. In the context of bee behavior, worker bees engage in a process known as foraging, during which they collect nectar from various sources. Subsequently, these employed bees share information regarding the quantity of nectar obtained. This information exchange is facilitated through the utilization of a formula that enables the calculation of the aforementioned nectar quantity:

fit_i = 1 / (1 + J(x_i)), (7)

where: J(x_i) is the quality (objective value) of a given source x_i.
The selection of a food source by the bees is contingent upon the nectar amount fit_i present in the food source. The primary criterion for selection by an unemployed bee is the probability value p_i associated with picking a food source, which is determined using the following formula [7,27]:

p_i = fit_i / Σ_{n=1}^{SN} fit_n. (8)

Subsequently, the food-source coordinates v_ij are updated in accordance with the given relation:

v_ij = x_ij + ϕ_ij (x_ij − x_kj), (9)

where: k ∈ {1,2,...,SN}, ϕ_ij ∈ [−1,1] is a random number, and j ∈ {1,2,...,D}.
The vector x_i is a solution vector, and the factor D represents the number of optimization criteria utilized in the bee algorithm. The index k must be distinct from i.
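The scheme described above can be sketched for a one-dimensional search in Python. This is an illustrative implementation, not the study's own code: the quadratic stand-in for the FEM error functional, the abandonment limit, and all other parameter values are assumptions.

```python
import random

def abc_minimize(cost, lo, hi, sn=10, limit=20, cycles=50, seed=1):
    """Minimal one-dimensional artificial bee colony sketch (after Karaboga's scheme).

    cost: objective J(x) to minimize; lo, hi: search range;
    sn: number of food sources (= employed bees); limit: abandonment threshold.
    """
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(sn)]   # food-source positions
    trials = [0] * sn
    best = min(x, key=cost)                        # best source found so far

    def fit(xi):
        # Eq. (7): fitness derived from the objective value.
        return 1.0 / (1.0 + cost(xi))

    def neighbour(i):
        # Eq. (9): v_i = x_i + phi * (x_i - x_k), k != i, phi random in [-1, 1].
        k = rng.choice([j for j in range(sn) if j != i])
        v = x[i] + rng.uniform(-1.0, 1.0) * (x[i] - x[k])
        return min(max(v, lo), hi)

    def greedy(i, v):
        # Keep the candidate position only if it improves the source.
        if cost(v) < cost(x[i]):
            x[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(sn):                        # employed-bee phase
            greedy(i, neighbour(i))
        fits = [fit(xi) for xi in x]
        total = sum(fits)
        for _ in range(sn):                        # onlooker phase: Eq. (8) roulette wheel
            r, acc, i = rng.uniform(0.0, total), 0.0, 0
            for j, f in enumerate(fits):
                acc += f
                if r <= acc:
                    i = j
                    break
            greedy(i, neighbour(i))
        for i in range(sn):                        # scout phase: abandon exhausted sources
            if trials[i] > limit:
                x[i], trials[i] = rng.uniform(lo, hi), 0
        best = min(x + [best], key=cost)
    return best

# Toy stand-in for the FEM error functional: minimum at kappa = 1000.
best_kappa = abc_minimize(lambda k: (k - 1000.0) ** 2, 900.0, 1500.0)
print(best_kappa)
```

In the study itself the objective is the FEM-based functional of Eq. (12) rather than this toy quadratic.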

Ant algorithm
The ant algorithm is commonly employed in the context of graph theory to address the task of identifying the shortest path. This approach draws inspiration from the behavior of real ants. The search process involves identifying the most efficient route connecting the anthill and the food source, with the objective of minimizing the distance traveled. The ants exhibit stochastic behavior in selecting the direction of their foraging activities while simultaneously depositing a trail of pheromones as they return to the anthill. The trail on a given path gradually vanishes if other ants fail to revisit it. The ants have a greater tendency to choose shorter routes because the pheromone trails along them are refreshed more frequently than along longer ones. It is noteworthy that in the context of ant behavior, positive feedback occurs when ants, upon discovering a more favorable path, exhibit an increased tendency to utilize it.
The artificial ants cooperate in order to collaboratively explore and find the most effective solution to intricate combinatorial problems. While searching for a solution, the ants draw on the collective knowledge they accumulate. Over time, the ants develop a shared repertoire of strategies, specifically in the form of the most efficient routes guiding them towards their objective. Nevertheless, there are distinctions between artificial ants and their natural counterparts. Artificial ants traverse the edges of the input graph, whereas natural ants can select any path. The efficacy of the solution within the ACO algorithm is intricately linked to the transmission of pheromones. An important property of a group of ants is that each ant can recognize the expected solution throughout each iteration. The method yields the optimal solution, which is determined by the most efficient ant. The pheromone pathway is modified during the artificial ant's exploration when it discovers a route more optimal than the previously constructed one. As a result, future ants have a higher propensity to select specific edges within the graph. The process of reinforcing traces is influenced by the distance between the anthill and the foraging region, as expressed by the path length in the graph. The likelihood of a subsequent ant adhering to the path established by its predecessor is positively correlated with the intensity of the pheromone trail.
The foraging paths of all ants adhere to a set of rules. Initially, the nodes that the ant will traverse are randomly generated, with each ant having a unique set of nodes. The number of ants is denoted by M, and each ant is assigned a distinct value of k ranging from 1 to M. The probability p_ij^k, which represents the likelihood of the k-th ant in node i selecting node j, is determined by the equation:

p_ij^k = (τ_ij)^α (η_ij)^β / Σ_{l ∈ G} (τ_il)^α (η_il)^β, (10)

where: η is the heuristic function, the constants α and β dictate the influence of the pheromone values and heuristic values on the decision-making process of the k-th ant, G represents the set of nodes within the graph that can be visited by the k-th ant, and τ_ij is the pheromone array, which stores information on the remaining amount of pheromone.
The optimal strategy for retaining a route in memory is applied when the subsequent path is superior to the preceding one. Once all ants have traversed all available paths, the pheromone array is updated according to the prescribed formula:

τ_ij(t+1) = ρ τ_ij(t) + Σ_{k=1}^{M} ∆τ_ij^k + ∆τ_ij^best, (11)

where: t is the time-step iteration, ∆τ_ij^k is the amount of pheromone left by the k-th ant on the movement path, ∆τ_ij^best is the amount of pheromone left by the best ant on the path of movement, and ρ is the evaporation coefficient in the range (0–1), which determines what part of the pheromone remains (0: everything evaporates, 1: nothing evaporates).
During the execution of the algorithm, a mechanism of pheromone evaporation is incorporated to prevent the uncontrolled proliferation of the pheromone trail. The roulette-wheel approach is used to introduce unpredictability into the node selection of each ant during the early iterations. The probability calculated using Eq. (10) is considered in the randomized selection procedure. The optimal route, as determined by the highest quality index, is identified along the trajectory connecting the anthill and the feeding area after the initial traversal of all ants. An ant's trajectory is retained only if its quality indicator reaches the highest grade. The ideal pathway for traversal is formed by probabilistically selecting new nodes inside each layer of the network. The path nodes serve as an approximation of the nodes present in each layer, specifically those possessing the highest quality index. Subsequently, the pheromone array is updated in each iteration of the calculation according to Eq. (11). After completing this task, the probability calculation is initiated using the established pheromone array τ, and the computation advances to subsequent iterations [28,29].
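The node-selection and pheromone-update steps described above can be sketched for a small layered graph. This is an illustrative toy problem, not the casting model: the graph costs, the parameters α, β, ρ, and the heuristic η = 1/cost are all assumptions.

```python
import random

def aco_shortest_path(layers, ants=10, iters=30, alpha=1.0, beta=2.0, rho=0.5, seed=2):
    """Minimal ant colony sketch on a layered graph.

    layers: list of lists of edge costs; each ant picks one node per layer.
    Selection follows the Eq. (10) roulette wheel (pheromone^alpha * heuristic^beta);
    the pheromone array is updated per Eq. (11), with rho the fraction retained.
    """
    rng = random.Random(seed)
    tau = [[1.0] * len(layer) for layer in layers]        # pheromone array
    best_path, best_cost = None, float("inf")

    for _ in range(iters):
        for _ in range(ants):
            path, path_cost = [], 0.0
            for li, layer in enumerate(layers):
                # Eq. (10): p_ij proportional to tau^alpha * eta^beta, eta = 1/cost.
                weights = [tau[li][j] ** alpha * (1.0 / layer[j]) ** beta
                           for j in range(len(layer))]
                r, acc, pick = rng.uniform(0.0, sum(weights)), 0.0, 0
                for j, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        pick = j
                        break
                path.append(pick)
                path_cost += layer[pick]
            if path_cost < best_cost:                     # memorize the best ant
                best_path, best_cost = path, path_cost
        # Eq. (11): evaporation (rho = fraction remaining) plus deposit by the best ant.
        for li in range(len(layers)):
            for j in range(len(layers[li])):
                tau[li][j] *= rho
            tau[li][best_path[li]] += 1.0 / best_cost
    return best_path, best_cost

# Three layers with three candidate moves each; the cheapest move in each layer is index 0.
path, path_cost = aco_shortest_path([[1.0, 4.0, 5.0], [2.0, 3.0, 6.0], [1.0, 2.0, 4.0]])
print(path, path_cost)
```

The deposit here reinforces only the best-so-far ant; the paper's Eq. (11) also sums per-ant deposits, which this sketch omits for brevity.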

Assumption of the research
The analysis focuses on the physical phenomenon of heat conduction, specifically the continuity boundary condition, which makes the problem at hand an important challenge for engineers and researchers in these two fields. The study focused on analyzing the impact of the swarm algorithms' input parameters on computer simulations of heat conduction. The results obtained are an important contribution to the development of the field, allowing us to better understand the dynamics of thermomechanical processes using advanced computational techniques based on artificial intelligence.
The purpose of this paper is to analyze in detail the results obtained from the numerical experiment, with a particular focus on whether increasing the population size used by the bee and ant algorithms is justified and what effects it may have. Attention is also paid to the effect of this variable on the reconstruction of the parameter κ, the heat conduction coefficient through the separation layer, which plays a crucial role in the analyzed process. The results and conclusions presented in this work are intended not only to increase knowledge in the field of thermomechanics and artificial intelligence algorithms but also to provide practical guidance for optimizing numerical processes in the context of thermal conductivity.
The GMSH software was used to construct the geometry model and the finite element mesh [30]. The numerical calculations in this study use the TalyFEM package and algorithms written in the C++ programming language [31]. The TalyFEM tool utilizes the finite element method to simulate specific physical processes. Utilizing data structures from the PETSc library, such as vectors, matrices, and pre-existing solvers, ensures good calculation performance [32]. The experiments were conducted on a computer system running Linux (Ubuntu).
The swarming algorithms were implemented in Python, with adaptations made to enable their integration with the TalyFEM framework [33]. The error of the approximate solution was reduced by employing the ABC and ACO algorithms, respectively. The reference temperature values were determined using a constant reference heat transfer coefficient, denoted as κ, and compared with the temperatures recorded during the simulation.
Simulations were conducted for a single parameter. The research used a finite element mesh with 576 nodes. The coefficient was optimized within a specific range of values, namely between 900 and 1500 W/(m²K). The reference temperatures were determined for the reference coefficient κ = 1000 W/(m²K).
The simulations were conducted for the Al-2%Cu alloy. The material properties are displayed in Table 1. The initial temperatures for the cast and the casting mold were T₀ = 960 K and T₀ = 590 K, respectively. The criterion for completing the computations was the number of iterations, which served as the basis for the article's computations. The algorithm's convergence was assessed by considering the functional value:

J = √( Σ_{i=1}^{N_i} Σ_{j=1}^{N_j} (T_ij − U_ij)² ), (12)

where: i is the index of a node in the FEM mesh, j is the index of a time step, N_i is the number of nodes in all node pairs considered, N_j is the number of time steps, T_ij are the benchmark temperatures generated at a constant benchmark heat transfer coefficient κ, and U_ij denotes the temperatures obtained during the simulation [34].
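Assuming the least-squares form of the functional in Eq. (12), its evaluation reduces to a double sum over nodes and time steps, as in the following sketch. The two-node, three-step temperature histories are hypothetical, chosen only to make the arithmetic checkable by hand.

```python
import math

def error_functional(T_ref, U_sim):
    """Least-squares error functional of Eq. (12).

    T_ref, U_sim: nested lists indexed [node][time step], holding the benchmark
    temperatures T_ij and the simulated temperatures U_ij.
    """
    total = 0.0
    for Ti, Ui in zip(T_ref, U_sim):
        for t, u in zip(Ti, Ui):
            total += (t - u) ** 2
    return math.sqrt(total)

# Hypothetical two-node, three-step temperature histories (values in K).
T = [[960.0, 900.0, 850.0], [590.0, 620.0, 640.0]]
U = [[958.0, 901.0, 849.0], [591.0, 619.0, 641.0]]
print(error_functional(T, U))  # sqrt(4 + 1 + 1 + 1 + 1 + 1) = 3.0
```

In the study this functional is what the ABC and ACO algorithms minimize over candidate κ values.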
The obtained results refer to the layer that separates the cast and the casting mold into two distinct tessellations, as depicted in Figure 1. The continuity boundary condition in the heat conduction model necessitates the presence of distinct nodes at the contact between the cast and the casting mold. The spatial coordinates of the nodes located at the interface between the cast and the casting mold are identical, which makes it much easier to implement this boundary condition in the code.
Figure 1a depicts a quadrilateral (casting) enclosed within a quadrilateral (mold), separated by a layer with a heat conduction coefficient κ. Considering the geometric symmetry, just one-fourth of the casting-mold system was considered. The right and top borders of the casting mold were assumed to have a boundary condition of the third kind; the left and bottom edges of the two areas were insulated, and a boundary condition of the fourth kind with non-ideal contact was assumed between the areas. In the boundary condition of the third kind, convective heat exchange with the surroundings was assumed, with the ambient temperature T_env = 300 K and the heat transfer coefficient with the surroundings α = 100 W/(m²K).

RESULTS AND DISCUSSION
The computations were conducted for the bee and ant algorithms on a tessellation consisting of 576 nodes. The populations considered contained 5, 10, 15, and 20 individuals, and all calculations were run for 6 iterations of the ABC or ACO algorithm. Because heuristic algorithms are stochastic and require repeated trials to yield reliable outcomes, the algorithms were run three times for each configuration. Each scenario additionally incorporated disturbances of 0%, 2%, and 5% with respect to the reference values.
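The three-repetition protocol can be sketched as follows. The optimizer below is a hypothetical stand-in (not the actual ABC/ACO implementation, and the reference value of 1200 W/(m²K) is an assumption); it only illustrates running a stochastic search three times over the 900-1500 W/(m²K) interval and summarizing the spread of the results.

```python
import random
import statistics

def run_optimizer(seed, low=900.0, high=1500.0, iterations=6):
    """Stand-in for one stochastic run: returns a kappa estimate from the
    search interval. A real ABC/ACO run would minimize the functional J."""
    rng = random.Random(seed)
    best = rng.uniform(low, high)
    for _ in range(iterations):
        candidate = rng.uniform(low, high)
        # keep candidates closer to an assumed reference of 1200 W/(m^2 K)
        if abs(candidate - 1200.0) < abs(best - 1200.0):
            best = candidate
    return best

runs = [run_optimizer(seed) for seed in range(3)]  # three repeated runs
print(statistics.mean(runs), statistics.stdev(runs))
```

The mean and standard deviation over such repeated runs are the quantities reported in Figures 2-3 and Table 2.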
Perturbations were added to the temperature values within the framework of our computations. In general, the accuracy of temperature measurement with a thermocouple during experimental procedures is high: it is generally accepted that thermocouple measurement accuracy falls within ±2 °C, while disturbances are expected to remain below 5%. Based on this empirical evidence, we chose three disturbance levels spanning 0% to 5%. Under a 5% disturbance, the input parameters of the ABC and ACO algorithms no longer guarantee that an optimal solution is reached relative to the reference value [35].
In our work, we modeled the disturbance with a uniform distribution, using the random.uniform function of the Python programming language. The distribution of the disturbance is symmetric, so a temperature perturbation of 5% corresponds to a range of −2.5% to +2.5%.
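A minimal sketch of this perturbation scheme, assuming the noise is applied multiplicatively to each temperature value (the function name is illustrative):

```python
import random

def perturb(temperature, noise_percent, rng=None):
    """Apply a symmetric uniform disturbance to a temperature value.
    A total noise level of noise_percent corresponds to the symmetric
    range [-noise_percent/2, +noise_percent/2] around the exact value."""
    rng = rng or random.Random()
    half = noise_percent / 200.0  # e.g. 5% -> +/-2.5% -> 0.025
    return temperature * (1.0 + rng.uniform(-half, half))

# a 5% disturbance applied to the cast's initial temperature of 960 K
print(perturb(960.0, 5.0))  # lies within [936.0, 984.0]
```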
For the ABC algorithm in Figure 2, a more varied picture can be observed. Populations of 5 or 10 individuals produced outcomes burdened with a high standard deviation. Only with 15 individuals did this value drop noticeably, and with 20 individuals the standard deviation reached values close to 0.
Figure 3 presents the case of the ACO algorithm, where the standard deviation of the reconstructed coefficient is low regardless of the number of individuals. Only a slight increase in the standard deviation is observed when the algorithm is run with just 5 individuals. It can therefore be concluded that, in the study of the effect of population size on result quality, the ACO algorithm gave reproducible results already from the lower value of 5 individuals, and from 10 individuals onward no further reduction in the standard deviation was observed.
Summarizing the discussion of Figures 2 and 3, the ACO and ABC algorithms respond differently to an increase in the number of individuals. While the ACO algorithm reached standard deviation values close to zero very quickly, the ABC algorithm needed 20 individuals (the maximum population size in the presented study) to reach comparable values.
Table 2 shows that, for both algorithms, the results become more stable as the population size increases. This stabilization is more pronounced for the ACO algorithm than for the ABC algorithm. For 20 individuals, the ABC algorithm obtains a standard deviation close to 0, which means the results are very stable. An increase in the level of disturbance degrades the quality of the results for both algorithms.
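Table 2 reports the relative error σ% of each reconstructed coefficient. A plausible sketch of that metric, assuming the common definition as the absolute deviation from the reference value expressed in percent (the reference value 1200 W/(m²K) in the example is hypothetical):

```python
def relative_error(kappa_rec, kappa_ref):
    """Relative error sigma% of a reconstructed coefficient kappa_rec
    against the reference kappa_ref, in percent (assumed definition)."""
    return abs(kappa_rec - kappa_ref) / kappa_ref * 100.0

# hypothetical: reconstruction of 1188 against an assumed reference of 1200
print(relative_error(1188.0, 1200.0))  # -> 1.0
```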
For the ABC algorithm, the standard deviation at a 5% perturbation level is higher than at 0%. As the number of individuals increases, the ABC results become more stable and more accurate, while an increase in the disturbance level degrades them. For the ABC algorithm, it is therefore advisable to use as many individuals as possible to obtain stable, good results; at high disturbance levels, additional care in choosing the control parameters is needed to ensure result quality. This is not as necessary for the ACO algorithm.
Based on the graphs presented in Figures 4 and 5, it can be concluded that both algorithms, ABC and ACO, provide very accurate reconstructions of the cooling curves. No significant differences can be seen between the curves obtained for the two algorithms, and all curves coincide very well with the reference temperature. There is no clear preference for either algorithm; the accuracy of the results depends on correctly chosen algorithm parameters.

The graphs of the cooling curves for the casting in Figures 6 and 7 show that the values for ABC and ACO are close to each other and to the reference cooling curve. The differences are small and do not affect the overall accuracy of the simulation. In the graphs of the cooling curves for the mold, the values for ABC and ACO are likewise close to each other and to the reference curve, with differences even smaller than for the casting. In the graphs of the temperature differences with respect to the reference temperatures, the differences between the curves for the two algorithms are also small.

In Figures 8 and 9, in the first time step for ABC, the difference reached 0.6 K for 5 individuals; after about 15 s, the differences in the cooling curves shrink to almost zero. For ACO with 5 individuals, the difference after the first time step did not exceed 0.25 K and, as for ABC, after about 10 s the differences in the cooling curves shrink to almost zero. Regardless of the population size, the differences between the observed temperatures and the reference were smaller for ACO than for ABC.
In the case of a 2% disturbance, ACO shows a larger difference relative to the reference temperatures for 5 individuals than ABC does. With 20 individuals, the two algorithms give similar results. At this disturbance level, the smallest differences in the cooling curves relative to the reference are observed for 10 individuals for both ABC and ACO.
In the case of the 5% perturbation, the differences in the initial time steps increased to about 1.25 K. For ABC, the smallest differences in the cooling curves relative to the reference are observed for 5 individuals, and they increase for 10 and 20 individuals, while for ACO no noticeable change in the temperature differences relative to the reference is observed across the population sizes considered. Based on the data presented in Figures 10 and 11, it can be concluded that both algorithms, ABC and ACO, are capable of accurately simulating the casting cooling curves and of reproducing the boundary condition parameters required to simulate the casting temperature waveforms. However, ACO is more robust to disturbances than ABC: for small disturbances (2%), the differences between the curves obtained for the two algorithms are small and do not affect the overall accuracy of the simulation, whereas for large disturbances (5%), ACO provides more accurate simulations than ABC.

CONCLUSIONS
Based on the results presented, an increase in the number of individuals is significant when calculating with the bee algorithm.For the ant algorithm, increasing the number of individuals does not significantly affect the accuracy of the results.
For the ABC algorithm, it is recommended to use a large number of individuals to obtain the most accurate and stable results.
The research investigated the performance of the bee and ant algorithms in reconstructing cooling curves on a tessellation of 576 nodes, with populations of 5, 10, 15, and 20 individuals. The findings indicated that both algorithms yielded precise simulations of the cooling curves, with no notable disparities between the curves. The obtained cooling curves are consistent with the reference ones, which suggests the physical correctness of the determined parameters of the continuity boundary condition. The ant algorithm demonstrated enhanced stability and consistent outcomes even with a reduced population size, whereas the bee algorithm exhibited a larger standard deviation when applied to smaller populations. The precision of the outcomes is contingent upon the selected parameters of the algorithms. For a population of five individuals and modest perturbations of 2%, the ant algorithm exhibited a larger deviation from the reference temperatures than the bee algorithm. In general, both techniques can provide precise simulations of the cooling curves of castings.

Figure 1. View of: a) dimensions of the geometry in mm, b) geometry, c) 576-node finite element mesh

Figure 2. The standard deviation and mean value of the coefficient a) 0%, b) 2%, c) 5% noise for algorithm ABC

Figure 3. The standard deviation and mean value of the coefficient a) 0%, b) 2%, c) 5% noise for algorithm ACO

Figure 4. Reconstructed cooling curves in a mold for the ABC algorithm with a) 0%, b) 2%, and c) 5% noise of the reference temperature value

Figure 5.

Figure 6. Reconstructed cooling curves in a cast for the ABC algorithm with a) 0%, b) 2%, and c) 5% noise of the reference temperature value

Figure 7.

Figure 8. Differences in cooling curves between reference and reconstructed values of parameter κ in a mold for the ABC algorithm with (a) 0%, (b) 2%, and (c) 5% noise of the reference temperature value

Figure 9. Differences in cooling curves between reference and reconstructed values of parameter κ in a mold for the ACO algorithm with (a) 0%, (b) 2%, and (c) 5% noise of the reference temperature value

Figure 10. Differences in cooling curves between reference and reconstructed values of parameter κ in a cast for the ABC algorithm with (a) 0%, (b) 2%, and (c) 5% noise of the reference temperature value

Figure 11.

Table 1. Material properties

Table 2. Reconstructed coefficient κ values and relative error (σ%) for 5, 10, 15, and 20 individuals using the ABC and ACO algorithms. Computations were conducted for six iterations of the bee and ant algorithms