Author: Liam Critchley
In this article, we look at what computer simulations are, how they draw on mathematical concepts, how (and why) they are used across various engineering industries, and what advantages computer simulations (or modelling) bring to the engineering community.
What are Computer Simulations?
Computer simulations, or models, are computer programs that predict outcomes or create a theoretical reality based on algorithms and statistical probabilities. The algorithms are designed to predict the most likely outcomes and interactions, making simulations a useful tool in many engineering industries for determining optimal working procedures. At their most basic, computer simulations are step-by-step mathematical approximations applied to a real-world or theoretical environment.
A simulation generally proceeds through a logical series of operations. Once a model is chosen, a method of implementing it is run on a computer, which calculates the algorithm's output and allows the data to be visualised. This complete series of steps provides the best theoretical result possible for a real-world example. Real-world systems can be unpredictable, depending on the environment, so theoretical results do not always coincide with actual outcomes.
The most common types of simulation are equation-based, agent-based, Monte Carlo and multiscale simulations. Even within these areas there are many subsets, depending on the attributes the simulation enforces: it can be stochastic or deterministic, discrete or continuous, and local or distributed. These distinctions produce drastically different operating behaviours. Stochastic simulations use random number generation to model events, whereas deterministic simulations always produce the same output from the same inputs and parameters. Continuous simulations build numerical models around differential and algebraic equations in which the state evolves smoothly over time, whereas discrete (or discrete-event) simulations advance the system through a sequence of distinct events at separate points in time. Distributed models run on a series of connected computers, whereas local simulations are limited to a single machine.
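The stochastic/deterministic distinction can be made concrete with a small Python sketch (the function names are illustrative, not from any particular package): both functions estimate the same quantity, π, but one samples random numbers while the other applies a fixed quadrature rule that always returns the same answer for the same step count.

```python
import math
import random

def monte_carlo_pi(n_samples, seed=0):
    """Stochastic: estimate pi by randomly sampling the unit square
    and counting the fraction of points inside the quarter circle."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

def midpoint_pi(n_steps):
    """Deterministic: integrate 4 / (1 + x^2) over [0, 1] with the
    midpoint rule; the same n_steps always gives the same result."""
    h = 1.0 / n_steps
    return sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n_steps)) * h
```

Note that the Monte Carlo estimate only becomes repeatable because the random generator is explicitly seeded; with a different seed, each run would scatter around the true value.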
Computer simulations in civil engineering take an approach that differs from most other engineering simulations. Where most simulations revolve around numerical solutions of partial differential equations, those used in civil engineering revolve around a series of actions. Rather than examining mathematically viable theories, civil engineering simulations look at real-world systems (or abstract models of them) and study their static, dynamic and functional behaviour patterns. They also account for the real-world environment: the environment impacts the system model, and the model reacts to systematic events through a multi-level distribution approach.
On the technical side, much of the construction and civil engineering industry now uses composite materials for their enhanced strength and durability. Composites are a mixture of two or more materials combined into a single material: one generally acts as the matrix and another as the filler. The ratio of the materials can significantly change a composite's composition and properties. Defects, structural failure, stress failure and thermal degradation are just a few of the problems that can occur if the ratio is wrong. Computational modelling of the ideal structure and precursor ratios can save a great deal of time and money in determining the ideal composite material. It can determine not only the internal molecular interactions, but also how the composite interacts with its environment, as well as its intrinsic and extrinsic properties. Modelling structural building materials allows more complex structures to be designed without the risk of real-world structural failure.
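A minimal example of the kind of first-pass estimate such models build on is the rule of mixtures, which relates a composite's stiffness to its fibre/matrix ratio. The sketch below uses illustrative material values (not figures from this article) to show how changing the filler fraction changes the predicted Young's modulus.

```python
def rule_of_mixtures(e_fibre, e_matrix, v_fibre):
    """Voigt (upper-bound) estimate of a composite's Young's modulus
    for loading parallel to the fibres: E_c = Vf*Ef + (1 - Vf)*Em."""
    return v_fibre * e_fibre + (1.0 - v_fibre) * e_matrix

def inverse_rule_of_mixtures(e_fibre, e_matrix, v_fibre):
    """Reuss (lower-bound) estimate for loading transverse to the fibres."""
    return 1.0 / (v_fibre / e_fibre + (1.0 - v_fibre) / e_matrix)

# Illustrative values: carbon fibre (~230 GPa) in an epoxy matrix (~3.5 GPa)
# at a 60 % fibre volume fraction.
upper = rule_of_mixtures(230.0, 3.5, 0.6)       # stiff along the fibres
lower = inverse_rule_of_mixtures(230.0, 3.5, 0.6)  # much softer across them
```

The large gap between the two bounds for the same mixture illustrates why getting the ratio (and orientation) wrong can lead to the failures described above.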
Many areas of aerospace engineering utilise computer simulations, from materials testing to flight simulators. Computational fluid dynamics (CFD) is regularly used to test the aerodynamic performance of wing systems and how their orientation, size and placement can improve an aircraft's aerodynamics. The physical limitations of many materials in non-standard conditions, e.g. at high altitude, can be deduced and compared, as many materials behave differently under changes in temperature and/or pressure. Simulations are also used to trial many other systems found within aircraft, including electrical networks, drives, control systems, fuel systems and sensor technologies. Aerospace production is an expensive and complex procedure, so the ability to limit material and technological failures by theoretically deducing the best processes is of great use to the industry. Programs such as MATLAB and ANSYS, amongst many others, are regularly used by aerospace engineers.
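As a small illustration of the altitude dependence mentioned above, the sketch below combines the standard drag equation with the International Standard Atmosphere density model for the troposphere (the constants are the published ISA sea-level values; the function names are our own, not from any aerospace package).

```python
import math

def isa_density(altitude_m):
    """Air density (kg/m^3) in the ISA troposphere (0-11 km):
    T = T0 - L*h  and  rho = rho0 * (T/T0)**(g/(R*L) - 1)."""
    t0, rho0 = 288.15, 1.225            # sea-level temperature (K), density
    lapse, g, r_gas = 0.0065, 9.80665, 287.05  # K/m, m/s^2, J/(kg*K)
    t_ratio = (t0 - lapse * altitude_m) / t0
    return rho0 * t_ratio ** (g / (r_gas * lapse) - 1.0)

def drag_force(rho, speed, c_d, area):
    """Aerodynamic drag F = 0.5 * rho * v^2 * C_d * A (newtons)."""
    return 0.5 * rho * speed ** 2 * c_d * area
```

At a cruise altitude of 11 km, the density, and hence the drag on a given airframe at a given speed, falls to roughly 30 % of its sea-level value, which is the kind of environmental dependence aerospace simulations must capture.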
Computational modelling of reaction vessel conditions, pump flow rates and chemical reactions is commonplace in chemical and process engineering. The computational software of choice for many chemical engineers is MATLAB, a multi-paradigm numerical computing program that supports matrix and vector operations, data manipulation, algorithm implementation and mathematical modelling.
Chemical engineers regularly use MATLAB not only to design the best experimental setup, but also to produce mathematical and statistical models that determine the optimal reactant concentrations, reaction conditions and theoretical yields of products. Simulations are vital to many industrial-scale processes, as the optimal conditions can be deduced at scale instead of through a series of 'trial and error' experiments, which can take many months to perfect. Computational fluid dynamics (CFD) is used whenever a reaction vessel contains a fluid flow, as the molecular-scale interactions are drastically different from those in the solid state. Nowadays, multiple phases of computational simulation are carried out before construction of any power or chemical plant even begins, making simulation an invaluable tool. As software becomes more sophisticated, the computational accuracy of reaction-system models increases, and they are now worth more to the industry than ever before.
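As a hedged sketch of what such a model computes (a generic first-order batch reaction, not any specific plant or MATLAB routine), the snippet below compares an explicit-Euler time integration of dCA/dt = −k·CA against the known analytic conversion; checking a numerical scheme against a case with an exact answer is the kind of validation engineers perform before trusting a larger simulation.

```python
import math

def analytic_conversion(k, t):
    """Exact conversion X = 1 - exp(-k*t) for a first-order batch
    reaction A -> B with rate constant k (1/s), starting from Ca = 1."""
    return 1.0 - math.exp(-k * t)

def simulated_conversion(k, t_end, dt=1e-3):
    """Explicit-Euler integration of dCa/dt = -k*Ca from Ca(0) = 1;
    the step size dt controls the trade-off between speed and accuracy."""
    ca, t = 1.0, 0.0
    while t < t_end:
        ca -= k * ca * dt
        t += dt
    return 1.0 - ca
```

With k = 0.5 s⁻¹ over 4 s, both routes give a conversion of about 86 %, and shrinking dt drives the numerical answer towards the analytic one.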
The fields of computational chemistry and biology regularly employ computer simulations, mainly in the form of molecular modelling. There is a plethora of software and simulation techniques available to theoretical scientists, but many use GaussView, Abalone and GAMESS, which employ methods such as Monte Carlo simulation, molecular dynamics (MD) and density functional theory (DFT). These programs and methods utilise computer simulations in different (sometimes vastly different) ways. At a basic level, Monte Carlo simulations build risk-analysis models using probability distributions, MD tracks the physical movement of atoms and molecules through N-body simulations, and DFT probes the electronic structure of many-body systems and condensed phases.
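On the MD side, the heart of most codes is a symplectic time integrator. The sketch below is a minimal velocity-Verlet step for a single particle (a generic illustration, not the algorithm of any one package), exercised here on a harmonic "bond-stretch" force in reduced units.

```python
def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Velocity-Verlet integration, the workhorse integrator of
    molecular-dynamics codes; returns the position trajectory."""
    a = force(x) / mass
    positions = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged)
        a = a_new
        positions.append(x)
    return positions

# Harmonic 'bond' with spring constant k = 1 in reduced units: f(x) = -x.
trajectory = velocity_verlet(1.0, 0.0, lambda x: -x, 1.0, 0.01, 1000)
```

The trajectory oscillates between roughly +1 and −1 without drifting in amplitude, which is the near-conservation of energy that makes this integrator suitable for long MD runs.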
Computational chemistry is commonly used to determine bond lengths and angles, distances between molecules, molecular and whole-system interactions, molecular orbitals, reaction energetics, intermolecular forces and how reactions occur. Computational biology, also known as bioinformatics, generally uses large data sets, algorithms and statistical methods to deduce genomes, evolutionary trends and neural interactions, and to predict the pharmacological effects of a drug on the human body before physical trials.
In physics, computational simulations commonly take the form of numerical models that solve problems for which a quantitative theory already exists. The majority of physics (especially theoretical physics) is built on mathematical concepts and statistical probabilities, which are used to predict and test theories concerning planetary distances, space, black holes, quantum physics and particle physics. Because most of physics relies on mathematical theories to understand and prove concepts, a large portion of physics research is spent on computational and mathematical modelling. However, physics problems can be mathematically complex and computationally expensive, so even relatively simple problems often require a high-performance computer.
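A textbook example of this pattern, where the governing theory (Newton's second law with quadratic air drag) is exact but has no closed-form solution, is sketched below. Setting drag to zero recovers the familiar vacuum range v₀²·sin(2θ)/g, which provides a convenient sanity check on the numerics.

```python
import math

def projectile_range(v0, angle_deg, drag=0.0, g=9.81, dt=1e-4):
    """Horizontal range of a projectile with quadratic air drag
    (coefficient per unit mass), integrated with semi-implicit Euler.
    With drag=0 this approaches the vacuum result v0^2*sin(2*theta)/g."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt          # drag decelerates both axes
        vy -= (g + drag * speed * vy) * dt    # gravity plus drag
        x += vx * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:             # landed
            return x
```

Even this toy model shows why numerical work dominates: adding one physically realistic term (drag) already forces a step-by-step integration rather than a formula.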
Design and Analysis of Computer Experiments (DACE)
Design and Analysis of Computer Experiments (DACE) is a MATLAB toolbox used to construct kriging approximations, and it has gained popularity in recent years as a useful tool. The aim of DACE is to fit a model to data generated by a computer experiment and use that fit as a surrogate for the expensive model: it is, in essence, a metamodel. Data generated by computer experiments are deterministic in nature, i.e. the same inputs always produce the same outputs, with no random measurement error. The software also supports a 'design of experiments' approach, in which multiple inputs, i.e. experimental parameters, are used to evaluate the model for the kriging approximation. DACE interpolates the response values at the sampled points to provide a 'best fit' based on a chosen metric: it matches the actual response exactly at a finite number of points, and over the surrounding domain it serves as a surrogate for the original model.
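To make the kriging idea concrete, the following is a bare-bones interpolator written in Python rather than the MATLAB toolbox itself: a zero-mean model with a Gaussian correlation function (DACE's actual implementation additionally includes regression terms and fits the correlation parameters by maximum likelihood). It reproduces the training responses exactly at the sampled points, which is the interpolation property described above.

```python
import numpy as np

def kriging_predict(x_train, y_train, x_new, theta=10.0):
    """Zero-mean kriging predictor with a Gaussian correlation model
    R(a, b) = exp(-theta * (a - b)^2); interpolates y_train exactly."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    x_new = np.asarray(x_new, dtype=float)
    # Correlation among training points, with a tiny jitter for stability.
    R = np.exp(-theta * (x_train[:, None] - x_train[None, :]) ** 2)
    R += 1e-10 * np.eye(len(x_train))
    # Correlation between prediction points and training points.
    r = np.exp(-theta * (x_new[:, None] - x_train[None, :]) ** 2)
    return r @ np.linalg.solve(R, y_train)
```

Given a handful of sampled responses from an expensive simulation, this surrogate can then be evaluated cheaply anywhere in the domain in place of the original model.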