Artículo Científico / Scientific Paper

https://doi.org/10.17163/ings.n20.2018.02

pISSN: 1390-650X / eISSN: 1390-860X

DESIGN OF A NEURAL NETWORK FOR THE PREDICTION OF THE COEFFICIENT OF PRIMARY LOSSES IN TURBULENT FLOW REGIME

DISEÑO DE UNA RED NEURONAL PARA LA PREDICCIÓN DEL COEFICIENTE DE PÉRDIDAS PRIMARIAS EN RÉGIMEN DE FLUJO TURBULENTO

Jairo Castillo-Calderón1,*, Byron Solórzano-Castillo1, José Moreno-Moreno2

 

Abstract

This investigation focuses on the design of a neural network for the prediction of the friction factor in turbulent flow regime, a factor that is indispensable for the calculation of primary losses in closed ducts or pipes. The MATLAB® Neural Networks Toolbox is used to design the artificial neural network (ANN) with backpropagation. The database comprises 724 points obtained from the Moody diagram. The Reynolds number and the relative roughness of the pipe are the input variables of the ANN; the output variable is the friction coefficient. The Levenberg-Marquardt algorithm is used to train the ANN with different topologies, varying the number of hidden layers and the number of hidden neurons in each layer. The best result was obtained with a 2-30-30-1 topology, exhibiting a mean squared error (MSE) of 1.75E-8 and a Pearson correlation coefficient R of 0.99999 between the neural network output and the desired output. Furthermore, a descriptive analysis of the relative error variable was performed in the SPSS® software, where the mean relative error obtained was 0.162%, indicating that the designed model is able to generalize with high accuracy.

Resumen

La presente investigación está orientada al diseño de una red neuronal para la predicción del factor de fricción en régimen de flujo turbulento, siendo este indispensable para el cálculo de pérdidas primarias en conductos cerrados o tuberías. Se utiliza Neural Networks Toolbox de MATLAB® para diseñar la red neuronal artificial (RNA), con retropropagación, cuya base de datos comprende 724 puntos obtenidos del diagrama de Moody. Las variables de entrada de la RNA son el número de Reynolds y la rugosidad relativa de la tubería; la variable de salida es el coeficiente de fricción. Utilizando el algoritmo de entrenamiento de Levenberg-Marquardt se entrena la RNA con distintas topologías, variando el número de capas ocultas y el número de neuronas ocultas en cada capa. Con una estructura 2-30-30-1 de la RNA se obtuvo el mejor resultado, exhibiendo un error cuadrático medio (ECM) de 1,75E-8 y un coeficiente de correlación de Pearson R de 0,99999 entre la salida de la red neuronal y la salida deseada. Además, mediante un análisis descriptivo de variable en el software SPSS®, se obtiene que el error relativo medio es de 0,162 %, indicando que el modelo diseñado es capaz de generalizar con alta precisión.

 

 

Keywords: Moody diagram, friction factor, head loss, artificial neural network, backpropagation, turbulent flow.

Palabras clave: diagrama de Moody, factor de fricción, pérdida de carga, red neuronal artificial, retropropagación, flujo turbulento.

1,* Facultad de la Energía, las Industrias y los Recursos Naturales no Renovables, Carrera de Ingeniería Electromecánica, Universidad Nacional de Loja, Ecuador. Corresponding author: jairocastilloc07@gmail.com. https://orcid.org/0000-0002-5321-4518, https://orcid.org/0000-0002-0071-2249

2 Carrera de Ingeniería Electromecánica, Universidad Nacional de Loja, Ecuador. https://orcid.org/0000-0002-0205-2635

Received: 13-02-2018, accepted after review: 28-05-2018

Suggested citation: Castillo-Calderón, J.; Solórzano-Castillo, B. and Moreno-Moreno, J. (2018). «Diseño de una red neuronal para la predicción del coeficiente de pérdidas primarias en régimen de flujo turbulento». Ingenius. N.° 20, (July-December). pp. 21-27. doi: https://doi.org/10.17163/ings.n20.2018.02.

 

 

 

1. Introduction

The most widely used method to transport fluids from one place to another is to drive them through a pipe system; circular sections are the most common for this purpose because, for the same outer perimeter, they provide greater structural strength and a larger cross-sectional area than any other shape [1].

The flow of a fluid in a pipeline is accompanied by a head loss, which is accounted for in terms of energy per unit weight of the fluid flowing through it [2].

The primary losses, or head losses in a straight conduit of constant section, are due to the friction of the fluid against itself and against the walls of the pipe that contains it. On the other hand, secondary losses are head losses caused by elements that modify the direction and speed of the fluid. In both types of loss, part of the energy of the system is converted into thermal energy (heat), which is dissipated through the walls of the pipeline and of devices such as valves and couplings [2, 3].

The estimation of friction head losses in pipes is an important task in the solution of many practical problems in different branches of engineering; hydraulic design and the analysis of water distribution systems are two clear examples.

In the calculation of pressure losses in pipes, whether the flow regime is laminar or turbulent plays a discriminating role [3]. The flow regime depends mainly on the ratio of inertial forces to viscous forces in the fluid, known as the Reynolds number (NR) [4]. Thus, if NR is less than 2000 the flow is laminar, and if it is greater than 4000 it is turbulent [2]. The majority of flows found in practice are turbulent [2–4]; for this reason, the present investigation is developed for this type of flow regime.

Equation 1, the Darcy-Weisbach equation, is valid for the calculation of friction losses in laminar and turbulent regimes in circular and non-circular pipes [2–4].

$h_L = f\,\dfrac{L}{D}\,\dfrac{v^2}{2g}$    (1)

 

Where:

h_L : energy loss due to friction (N·m/N).

f : friction factor.

L : length of the flow stream (m).

D : diameter of the pipe (m).

v : average flow velocity (m/s).

g : gravitational acceleration (m/s²).
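For illustration only, evaluating Equation 1 with assumed values f = 0.02, L = 100 m, D = 0.1 m, v = 2 m/s and g = 9.81 m/s² gives

$h_L = 0.02\,\dfrac{100}{0.1}\,\dfrac{2^2}{2(9.81)} \approx 4.08\ \text{m}$

that is, roughly 4 m of head are lost to friction along the 100 m of pipe.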

Equation 2, the implicit relationship known as the Colebrook equation, is universally used to calculate the friction factor in turbulent flow [3, 4]. Note that it must be solved iteratively.

$\dfrac{1}{\sqrt{f}} = -2\log_{10}\!\left(\dfrac{\varepsilon/D}{3.7} + \dfrac{2.51}{N_R\sqrt{f}}\right)$    (2)

Where:

ε/D : relative roughness. It represents the ratio of the average roughness height of the pipe wall to the diameter of the pipe.

An option for the direct calculation of the turbulent flow friction factor is Equation 3, developed by Swamee and Jain [2].

$f = \dfrac{0.25}{\left[\log_{10}\!\left(\dfrac{\varepsilon/D}{3.7} + \dfrac{5.74}{N_R^{0.9}}\right)\right]^2}$    (3)

 

Equations (2) and (3), and others such as those of Nikuradse, Kármán and Prandtl, Rouse and Haaland, were obtained experimentally and their use can be cumbersome. Thus, the Moody diagram is one of the most widely used means to determine the friction factor in turbulent flow [2–4]. It shows the friction factor as a function of the Reynolds number and the relative roughness. The use of the Moody diagram or the aforementioned equations is the traditional way of determining the friction factor when solving problems by manual calculation. However, this can be inefficient. To automate the calculations, it is necessary to incorporate the equations into a program or spreadsheet that obtains the solution.
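As an illustration of this kind of automation (a minimal MATLAB sketch, not code from the paper; the input values NR and rr are assumed), the friction factor can be obtained directly with Equation (3) and iteratively with Equation (2):

% Friction factor in turbulent flow from Eqs. (2) and (3).
NR = 1e5;    % Reynolds number (assumed, turbulent: NR > 4000)
rr = 1e-3;   % relative roughness eps/D (assumed)

% Swamee-Jain (Eq. 3): direct, explicit estimate
f_sj = 0.25 / log10(rr/3.7 + 5.74/NR^0.9)^2;

% Colebrook (Eq. 2): implicit, solved by fixed-point iteration
f = f_sj;    % explicit value used as the initial guess
for k = 1:50
    f_new = (-2*log10(rr/3.7 + 2.51/(NR*sqrt(f))))^(-2);
    if abs(f_new - f) < 1e-12, f = f_new; break; end
    f = f_new;
end
fprintf('Swamee-Jain: f = %.5f, Colebrook: f = %.5f\n', f_sj, f);

The explicit Swamee-Jain value serves as a convenient starting point for the Colebrook iteration, which typically converges in a handful of steps.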

This investigation presents an alternative proposal for the prediction of the friction factor using artificial intelligence, specifically an ANN that allows the calculation to be automatic and reliable, thus reducing time and avoiding errors that may occur when using the previously mentioned alternatives.

2. Materials and methods

2.1. ANN design

The multilayer network to be developed has forward (feedforward) connections and employs the backpropagation algorithm, which is a generalization of the least squares algorithm. It works through supervised learning and, therefore, it needs a training set that describes the response the network should generate for a given input [5].

2.1.1. ANN database

The initialization parameters of the ANN are obtained from a set of 724 data points tabulated in Microsoft Excel. These data were acquired from the Moody diagram, that is, through the graphical method, following a sequence of steps based on [2]. The data set comprises 43 Reynolds number values (4000 ≤ NR ≤ 1×10⁸), 20 relative roughness curves (1×10⁻⁶ ≤ ε/D ≤ 0.05), and the respective friction factors.

The Reynolds numbers used, shown in Table 1, correspond to those marked on the abscissa scale of Figure 1, so that the values can be read from the Moody diagram as exactly as possible.

The Reynolds number and the relative roughness are the input variables of the ANN, and the friction factor is the output variable, that is, the variable to be predicted. In order to establish an adequate database, only the friction factors resulting from a clear intersection of one of the 43 Reynolds numbers with one of the relative roughness curves are considered.
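As an illustration (a sketch only; the file name and column layout are assumptions, since the paper does not list them), the Excel table can be loaded and split into the input and target matrices expected by the MATLAB toolbox:

% Assumed layout: col 1 = Reynolds number, col 2 = relative roughness, col 3 = friction factor.
data = xlsread('moody_data.xlsx');   % 724 rows tabulated from the Moody diagram

inputs  = data(:, 1:2)';   % 2 x 724 matrix, one [NR; eps/D] pair per column
targets = data(:, 3)';     % 1 x 724 vector of friction factors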

 

 

Table 1. Reynolds numbers used

2.2. ANN topology

No concrete rules can be given to determine the number of hidden layers and the number of hidden neurons that a network must have to solve a specific problem; the size of the layers, both input and output, is usually determined by the nature of the application [7, 8].

Thus, the nature of the present problem dictates that the Reynolds number and the relative roughness are the two inputs applied to the first layer, and that the friction factor, the output, is assigned to the last layer of the network.

The number of hidden neurons intervenes in the learning and generalization efficiency of the network; in addition, a single hidden layer is usually sufficient for the convergence of the solution. However, there are occasions when a problem is easier to solve with more than one hidden layer [7, 8].

Therefore, the optimal number of hidden layers and neurons is determined through experimentation.

To be precise, the most appropriate topology of the ANN is selected by testing different configurations by varying the number of hidden layers from one to three and the number of neurons within each hidden layer from 5 to 40 with increments of 5.
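A minimal MATLAB sketch of such a sweep is shown below, reusing the inputs and targets matrices defined earlier; the use of feedforwardnet and the selection criterion shown here are assumptions for illustration, not the authors' exact procedure:

% Sweep: 1 to 3 hidden layers, 5 to 40 neurons per hidden layer in steps of 5.
bestMSE = Inf;
for nLayers = 1:3
    for nNeurons = 5:5:40
        hidden = repmat(nNeurons, 1, nLayers);      % e.g. [30 30] for topology 2-30-30-1
        net = feedforwardnet(hidden, 'trainlm');    % Levenberg-Marquardt training
        net.trainParam.showWindow = false;          % run silently
        [net, tr] = train(net, inputs, targets);
        y = net(inputs);
        mseVal = perform(net, targets, y);          % mean squared error, Eq. (4)
        if mseVal < bestMSE
            bestMSE = mseVal; bestNet = net; bestHidden = hidden;
        end
    end
end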

Figure 1. Moody’s diagram for the coefficient of friction in smooth and rough wall ducts [6].

2.2.1. ANN Training

The supervised learning of an ANN implies the existence of a training process controlled by an external agent, so that the inputs produce the desired outputs by strengthening the connections. One way to carry this out is the establishment of previously known synaptic weights [5]. For this reason, the set of input-output pairs is applied to the ANN, that is, examples of inputs and their corresponding outputs [5, 8, 9].

The network is trained with the Levenberg-Marquardt backpropagation algorithm, as it is stable, reliable and facilitates the training of standardized data sets [10–12]. The training is an iterative process, and the software, by default, divides the set of 724 data points into three groups: 70% training data, 15% test data and the remaining 15% validation data. In each iteration, when using new data from the training set, the backpropagation algorithm allows the output generated by the network to be compared with the desired output, and an error is obtained for each of the outputs. As the error propagates backward, from the output layer to the input layer, the synaptic weights of each neuron are modified for each example, so that the network converges to a state that allows all training patterns to be successfully classified [9]. That is to say, the ANN training is carried out by error correction. As the network is trained, it learns to identify different characteristics of the set of inputs, so that when presented with an arbitrary pattern after training, it has the ability to generalize, understood as the ease of giving satisfactory outputs for inputs not presented in the training phase [13].

Due to the nature of the input and output data of the multilayer network, the activation or transfer functions must be continuous, and may even be different for each layer, as long as they are differentiable [9–13]. Thus, the tansig activation function is applied in the hidden layers and the purelin activation function in the output layer. These functions are commonly used when working with the backpropagation algorithm.
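A minimal MATLAB sketch of this configuration for the selected 2-30-30-1 topology (assumed variable names, relying on the toolbox defaults described above; not the authors' actual script) could look as follows:

net = feedforwardnet([30 30], 'trainlm');   % two hidden layers of 30 neurons each
net.layers{1}.transferFcn = 'tansig';       % hidden layer 1
net.layers{2}.transferFcn = 'tansig';       % hidden layer 2
net.layers{3}.transferFcn = 'purelin';      % output layer

net.divideFcn = 'dividerand';               % random division of the 724 samples
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

net.performFcn = 'mse';                     % performance function, Eq. (4)
[net, tr] = train(net, inputs, targets);    % Levenberg-Marquardt training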

The ANN learning process stops when the error is acceptably small for each of the learned patterns or when the maximum number of iterations has been reached [10], [14], [15]. The performance function used to train the ANN is the mean squared error (MSE), given by Equation 4 [10–12]. The relative error, expressed by Equation 5, is also involved in the analysis [10–16].

$MSE = \dfrac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2$    (4)

$e_r = \dfrac{\left|y_i - \hat{y}_i\right|}{y_i} \times 100\,\%$    (5)

where $y_i$ is the desired output (the Moody diagram value), $\hat{y}_i$ is the output predicted by the network and $N$ is the number of data points.
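Continuing the sketch above (assumed variable names), both metrics can be evaluated on the full data set once training has finished:

y      = net(inputs);                          % predicted friction factors
mseVal = perform(net, targets, y);             % mean squared error, Eq. (4)
relErr = abs(targets - y) ./ targets * 100;    % relative error in %, Eq. (5)
fprintf('MSE = %.3e, mean relative error = %.3f %%\n', mseVal, mean(relErr));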

 

Summarizing the above, Table 2 contains the design characteristics of the ANN applied to the different topologies tested.

Table 2. Design features of the ANN

3. Results and discussion

3.1. ANN architecture selection

According to the proposed methodology, a total of 24 architectures are trained; the results are shown in Table 3. It is observed that the topologies 2-30-30-1 and 2-25-25-25-1 present the best results, with average relative errors of 0.1620% and 0.2282%, respectively, and a Pearson correlation coefficient of 0.9999 in both cases. However, the first one is selected because it shows a lower relative error of the predicted values with respect to the desired ones and demands a lower computational cost. An outline of the structure of the selected ANN is shown in Figure 2. It shows the two external inputs, Reynolds number and relative roughness, applied to the first layer, the two hidden layers with 30 neurons each, and, in the last layer, a single neuron whose output is the friction factor. The input layer is limited to the flow of information, while the processing is carried out in the hidden and output layers [5].

Table 3. Results of the different architectures tested

Using the IBM SPSS Statistics 22® software, a descriptive analysis of the relative error variable is performed for the 724 data points of the selected architecture. The histogram of Figure 3 represents the frequency distribution. The results show that the average relative error is 0.1620%, the minimum is 0% and the maximum is 4.2590%.

 

 

Figure 2. Structure of the designed ANN.

In addition, the standard deviation is 0.327, indicating that the dispersion of the data with respect to the mean is small. The distribution shows that the relative error is below 1% for 97% of the data analyzed. Supporting what is reflected in the histogram, Table 4 summarizes the values of the three quartiles obtained from the statistical analysis. Below Q1, the relative errors between the desired output and the network output are less than 0.0313%. Q2, the median, indicates that half of the relative errors are below 0.0720%. Q3 indicates that three quarters of the data have a relative error of less than 0.1758%. Above Q3 the relative errors remain low; there are outlying values greater than 1%, but these represent only 3% of the data analyzed. The above shows the quality of the approximation of the values predicted by the ANN with respect to those of the Moody diagram.

Figure 3. Relative error histogram.

Table 4. Measures of non-central position of the relative error

3.2. Model performance

The performance of the training, test and validation data sets compared to the desired output is shown in Figure 4. The sample intended for validation is used to measure the degree of generalization of the network, stopping the training when generalization no longer improves; this prevents overfitting [12], understood as poor performance of the model when predicting new values. It is noted that the training process of the ANN with topology 2-30-30-1 stops at 91 iterations, because that is when the lowest validation MSE, 1.7492 × 10⁻⁸, is obtained.

That is, the performance function has reached its minimum and no longer tends to decrease after 91 iterations. Because the MSE value is very small, close to zero, the ANN model is able to generalize with great precision.

Figure 4. Performance of the ANN training process.

Figure 5 shows the Pearson correlation coefficient R for the designed ANN structure. The line indicates the expected values and the black circles represent the predicted values. The prediction is efficient and a good performance of the network is observed, since a global index of 0.999999 is obtained, indicating a strong, positive linear relationship between the friction factors of the Moody diagram and those given by the ANN.

Figure 5. Correlation between expected and predicted values.

 

 

Several tests are performed with combinations of input pairs that have not been used during training in order to verify the correct performance of the model. Thus, Table 5 details the 36 combinations of input data applied to the ANN and the relative error reached by each of them.

According to Table 5 and Figure 6, the relative error is not distributed equally over the range of input values. In the generated 3D surface graph, a predominance of relative errors lower than 0.5% is observed, corresponding to 24 of the 36 combinations of input pairs applied to the ANN. In addition, there are only 2 relative errors above 1%, corresponding to the 2 most prominent peaks on the surface, with a maximum of 1.325% for NR = 1.5 × 10⁵ and ε/D = 0.006. The results derived from these 36 tests corroborate the correct functioning of the network and its capacity to generalize when presented with inputs different from those used in the training phase.
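For illustration (a sketch with assumed variable names, reusing the trained net from the earlier sketches), a single unseen input pair can be evaluated and compared against a reference value; here the explicit Swamee-Jain relation of Equation (3) is used as the reference, whereas the paper compares against readings from the Moody diagram:

f_pred = net([1.5e5; 0.006]);                            % unseen pair: NR = 1.5e5, eps/D = 0.006
f_ref  = 0.25 / log10(0.006/3.7 + 5.74/(1.5e5)^0.9)^2;   % Swamee-Jain reference, Eq. (3)
relErr = abs(f_ref - f_pred) / f_ref * 100;              % relative error in %, Eq. (5)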

Table 5. Relative error results for data not considered in training

Figure 6. Relative error distribution.

4. Conclusions

The ANN designed in this research represents a reliable and highly accurate alternative to predict the coefficient of primary losses in turbulent flow regime, giving an average relative error of 0.1620% and a Pearson R correlation coefficient of 0.99999 between the values of the Moody diagram and the predicted ones.

The training process was stopped at 91 iterations, reaching an MSE of 1.7492 × 10⁻⁸, which indicates the generalization capacity of the proposed ANN.

The results obtained show that the set of 724 data points was sufficiently large to allow the ANN, during training, to learn the relationship between the applied inputs and outputs.

The developed model makes it possible to solve flow problems involving friction factor calculations automatically, taking advantage of the computational speed that neural networks offer, reducing time and avoiding errors that may arise when using traditional alternatives.

References

[1] J. R. Calderón Córdova and C. X. Pozo Calva, “Diseño y construcción de un banco de pruebas para pérdidas de carga en tuberías y accesorios con simulación,” Tesis de Grado. Universidad Politécnica Salesiana. Ecuador, 2011. [Online]. Available: https://goo.gl/MiF65x

[2] R. L. Mott, Mecánica de Fluidos, 2006, ch. Ecuación general de la energía; número de Reynolds, flujo laminar, flujo turbulento y pérdidas de energía debido a la fricción, pp. 197–243. [Online]. Available: https://goo.gl/SkTHPd

[3] C. Mataix, Mecánica de fluidos y máquinas hidráulicas, 1986, ch. Resistencia de superficie: pérdidas primarias en conductos cerrados o tuberías, pp. 203–226. [Online]. Available: https://goo.gl/mW1mkL

[4] Y. A. Cengel and J. M. Cimbala, Mecánica de fluidos: fundamentos y aplicaciones, 2006, ch. Flujo en tuberías, pp. 223–342. [Online]. Available: https://goo.gl/DMttmi

[5] P. Ponce Cruz, Inteligencia Artificial con Aplicaciones a la Ingeniería, 2011, ch. Inteligencia Artificial, pp. 1–32. [Online]. Available: https://goo.gl/XED1Vo

[6] F. M. White, Mecánica de Fluidos, 5th ed., 2003, ch. Flujo viscoso en conductos, pp. 335–435. [Online]. Available: https://goo.gl/vULEcg

[7] A. Campos Ortiz, “Proceso de distribución aplicando redes neuronales artificiales con supervisión,” Master’s thesis, Universidad Autónoma de Nuevo León, México, 1998. [Online]. Available: https://goo.gl/io73HZ

[8] J. R. Coutiño Ozuna, “Aplicación de redes neuronales en la discriminación entre fallas y oscilaciones de potencia,” Master’s thesis, Universidad Autónoma de Nuevo León, 2002. [Online]. Available: https://goo.gl/yKvEFs

[9] N. Peláez Chávez, “Aprendizaje no supervisado y el algoritmo wake-sleep en redes neuronales,” Tesis de grado Universidad Tecnológica de la Mixteca, 2012. [Online]. Available: https://goo.gl/oeygXA

[10] U. Offor and S. Alabi, “Artificial neural network model for friction factor prediction,” Journal of Materials Science and Chemical Engineering, vol. 4, pp. 77–83, 2016. doi: http://dx.doi.org/10.4236/msce.2016.47011.

[11] M. R. G. Meireles, P. E. M. Almeida, and M. G. Simoes, “A comprehensive review for industrial applicability of artificial neural networks,” IEEE Transactions on Industrial Electronics, vol. 50, no. 3, pp. 585–601, June 2003. doi: https://doi.org/10.1109/TIE.2003.812470.

[12] D. Brkić and Ž. Ćojbašić, “Intelligent flow friction estimation,” Computational Intelligence and Neuroscience, vol. 2016, 2016. doi: https://doi.org/10.1155/2016/5242596.

 

 

[13] J. Hilera and V. Martínez, Redes neuronales artificiales: fundamentos, modelos y aplicaciones, 1994, ch. Redes neuronales con conexiones hacia adelante, pp. 101–180. [Online]. Available: https://goo.gl/rovX8y

[14] T. Manning, R. D. Sleator, and P. Walsh, “Biologically inspired intelligent decision making,” Bioengineered, vol. 5, no. 2, pp. 80–95, 2014. doi: https://doi.org/10.4161/bioe.26997, PMID: 24335433.

[15] R. Yousefian and S. Kamalasadan, “A review of neural network based machine learning approaches for rotor angle stability control,” CoRR, vol. abs/1701.01214, 2017. [Online]. Available: https://goo.gl/4RYRWs

[16] O. E. Turgut, M. Asker, and M. T. Çoban, “A review of non-iterative friction factor correlations for the calculation of pressure drop in pipes,” Bitlis Eren University Journal of Science and Technology, vol. 4, no. 1, pp. 1–8, 2014. doi: http://dx.doi.org/10.17678/beujst.90203.