Dr Edward Kansa – Computational Method Cures The Curse Of Dimensionality
The Kansa method for engineers and physicists
Dr Kansa began his career by earning a PhD at Vanderbilt University with a thesis in many-body quantum physics. Following his PhD, he worked at the United States Bureau of Mines as a research physicist. There, he used his understanding of applied mathematics and physics to model coal dust fires and explosions. Later, his work at the Lawrence Livermore National Laboratory focused on modelling natural gas explosions, the spread of pollutants in groundwater, earthquakes, the safe disposal of spent nuclear reactor fuel, and tank ruptures and ammonia spills. During his career, he held teaching positions with the University of California’s Department of Mechanical and Aerospace Engineering and with Embry-Riddle Aeronautical University’s Department of Physical Sciences in Oakland, California.
Dr Kansa’s work confronted him with some of the most computationally complex and challenging problems in applied mathematics, namely dynamical systems and fluid dynamics. Apart from their countless applications in physics and engineering, fluid dynamics and dynamical systems are still open areas of research, which means they sometimes pose problems that are not readily solvable through known, closed-form methods. The main difference between fluid dynamics and dynamical systems is their domains of applicability. While fluid dynamics can model the flow of water through a pipe, dynamical systems address problems such as avalanches in sand piles, or other types of complex time evolution, for example attractor maps or the orbital motion of planets.
To appreciate the importance and difficulty of research in fluid dynamics, one has to know that it features among the Millennium Prize Problems. Of the seven problems in pure mathematics and mathematical physics considered worthy of this prize, six remain open. The mathematician who solves any of these open problems, among which is the existence and smoothness of solutions to the Navier–Stokes equations governing fluid dynamics, will receive a substantial cash reward and will most likely enter the history books. However, solving the open problems in fluid dynamics and dynamical systems does not guarantee that these problems become computationally easier to approach. In fact, they would still be extremely computationally intensive. And this is where Dr Kansa’s work comes in.
A numerical problem of the past
In the 1950s, when FORTRAN was just being developed, Turing had only just proposed his test and computers were the size of a room, it was nearly impossible for scientists to find numerical answers to engineering and physics questions. Given the very limited computing resources available at the time, the solution was to break continuous problems into component parts on local meshes. This led to the development of mesh generation, which has become an art form over complex domains. Various finite methodologies have evolved since the 1950s.
The finite difference method approximates the derivatives of continuous functions with finite difference equations. However, the basic notions of calculus show that the difference should tend to zero, rather than to a small but finite error. Naturally, these differences compound when calculations are large and not particularly well-behaved. At a time when computers themselves relied on inaccurate floating point approximations, round-off errors added to the difference between the numerical calculation and the actual value of the derivative. The most noteworthy problem was not necessarily that the calculations were approximate, but that they were unreliable. Sometimes it was difficult for scientists to tell whether they had obtained a reasonably precise result or whether the opposite was true, and the result was not even in the same ballpark as the value for which they were searching. This led to an ingrained caution against ‘ill-conditioning’. Simply stated, ill-conditioning is the accumulation of round-off errors on finite precision computers that can render calculations meaningless. Computer scientists devised methods to control rounding errors and developed extended precision arithmetic so that, at the end of a calculation, the results are reliable to at least ‘n’ significant digits.
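The trade-off between truncation error and round-off error described above can be reproduced in a few lines of Python (an illustrative sketch, not code from Dr Kansa’s work): shrinking the step size h at first improves a one-sided difference approximation of a derivative, but below an optimal step size the round-off error dominates and the answer gets worse.

```python
import math

def forward_diff(f, x, h):
    # One-sided finite difference: approximates f'(x) by (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # the true derivative of sin(x) at x = 1
errors = {h: abs(forward_diff(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-8, 1e-12)}

# h = 1e-1:  truncation error dominates
# h = 1e-8:  near the optimal step size
# h = 1e-12: round-off dominates -- the error GROWS as h shrinks further
```

In double precision the optimal step for a one-sided difference sits near the square root of machine epsilon (about 1e-8); pushing h below that only amplifies round-off, which is exactly the unreliability described above.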
The finite element and finite volume methods, on the other hand, split a surface or a volume into component parts and assign a finite function – on a mesh – to each of these pieces. In other words, the mathematical treatment involves applying a function to model the behaviour of each piece. However, most functions used for modelling mathematical surfaces are continuous functions. In advanced calculus, such functions can be understood through Taylor series, which sum the derivatives of a function to obtain an approximation that approaches the original function as the number of terms approaches infinity. The problem is that a Taylor series has a finite radius of convergence: the series can converge to the original function inside that radius but may be very far from the function outside this boundary. This situation hides a trap similar to that of the finite difference method. A smooth function has an infinite number of derivatives, and therefore its Taylor series has an infinite number of terms; the errors arising from the discarded terms of the expansion are known as truncation errors. These should go to zero as the mesh spacing goes to zero, but this does not happen in practice. Additionally, the unknowns come from the infinite tail of the series, not from the first few terms at the front. Cyclic functions such as the sine and cosine are particularly convenient because their derivatives repeat at fixed intervals, making the problem easy to solve. However, functions with large variations from one point to the next cannot be captured by generalising from the first few derivatives. It would then be left to the engineer or physicist needing the computation to check by pen and paper whether the calculation is correct. This is often impossible and, moreover, it defeats the purpose of using a computer.
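The finite radius of convergence can be seen in a deliberately simple Python sketch (illustrative only): the Taylor series of f(x) = 1/(1+x) about zero converges for |x| < 1, so its partial sums are excellent inside that radius and meaningless outside it.

```python
def taylor_partial_sum(x, n_terms):
    # Taylor series of f(x) = 1/(1+x) about x = 0:
    #   1 - x + x^2 - x^3 + ...   (radius of convergence = 1)
    return sum((-x) ** k for k in range(n_terms))

inside = taylor_partial_sum(0.5, 50)   # |x| < 1: converges to 1/1.5
outside = taylor_partial_sum(1.5, 50)  # |x| > 1: partial sums blow up

err_inside = abs(inside - 1 / 1.5)    # tiny: the series converged
err_outside = abs(outside - 1 / 2.5)  # enormous: the series diverged
```

Adding more terms makes matters worse outside the radius, which is the trap: accuracy inside a mesh cell says nothing about accuracy beyond its boundary.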
The finite difference and mesh methods described so far have direct applications in physics and engineering. As you will have gathered by now, Dr Kansa’s complex work confronted him with exactly these problems on a daily basis. He knew that scientists often had to sacrifice the accuracy of the physics to make calculations tractable with the available numerical methods. He felt that since the physics had to be adapted to the computational methods, it became slightly artificial. This artificiality arose because the functions known to behave well numerically are few compared to the functions that truly model the behaviour of a given system. When faced with such problems in the past, scientists had to approximate complex behaviours with somewhat unsuitable functions, which were guaranteed to work with the available numerical methods but were not necessarily the best modelling option. Scientists relied on brute force to obtain reliable numerical results, using massively parallel computers running arcane numerical methods designed in the 1950s.
‘I demonstrated that by moving points in space as exact differentials with source terms, no artificial physics needed to be introduced for time dependent problems’
From astrophysics to explosions and cancer growth
Examples illustrating the problems of approximation start in the two-dimensional plane and extend to the curse of dimensionality, a phenomenon occurring in higher dimensions. An irregular, snaking line in the plane is a visual representation of a spline, a piecewise sequence of mathematical functions held together at knots. Knots are points where the functions patched together have enough matching derivatives to ensure a smooth path from one component function to the next. A curve passing through more than a few points is difficult to model with a single function, because the fit oscillates too much near the endpoints under numerical approximation. When this problem is extended to more than one dimension, it grows with the number of dimensions – and the errors are not simply due to computational methods. Sometimes, the input of a function is a value obtained from a measurement, for example the length of an airplane wing. The function modelling the behaviour of the wing inside an air current may be so sensitive to the initial measurement error that its output errors appear in the second decimal digit even if the initial error was in the tenth decimal digit; the output error in this case would be 8 orders of magnitude larger than the input error. When a function, or a process such as solving a system of equations, accumulates too many errors too fast, the problem is called ill-conditioned. In some cases, this problem becomes exponentially worse as the degrees of freedom of the system increase – a phenomenon called the curse of dimensionality.
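A classic textbook illustration of ill-conditioning (a standard example, not specific to Dr Kansa’s applications) is the Hilbert matrix. The sketch below, which assumes NumPy, builds a linear system whose exact solution is a vector of ones; in double precision, round-off alone destroys most of the answer once the system is large enough.

```python
import numpy as np

def hilbert_solve_error(n):
    # H[i, j] = 1 / (i + j + 1): a notoriously ill-conditioned matrix.
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    b = H @ np.ones(n)              # built so the exact solution is all ones
    x = np.linalg.solve(H, b)
    return np.max(np.abs(x - 1.0))  # deviation is purely amplified round-off

small = hilbert_solve_error(4)   # modest condition number: tiny error
large = hilbert_solve_error(10)  # condition number ~1e13: error explodes
```

The input data are exact in both cases; only the conditioning of the system differs, yet the error grows by many orders of magnitude.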
For example, the quantum mechanical calculation of the electronic structure of H₂O is a 9-dimensional problem. With 10 discretisation points per dimension, the total of 10⁹ – one billion – points is still considered a very sparse discretisation. Sometimes the system has sparse matrices – with many entries that are zero – but the zeroes still need to be processed, increasing the calculation time. Naturally, the problem is even more difficult when the matrix is dense, meaning that it has many more non-zero entries.
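The arithmetic behind that figure is worth spelling out in a two-line sketch:

```python
points_per_dim = 10

# A tensor-product grid has points_per_dim ** d points in d dimensions.
grid_points = {d: points_per_dim ** d for d in (1, 3, 9)}

# Storing one double-precision value (8 bytes) per point in 9 dimensions
# already requires 8 GB of memory -- for a very coarse discretisation.
bytes_for_9d = grid_points[9] * 8
```

Doubling the resolution to 20 points per dimension multiplies the 9-dimensional count by 2⁹ = 512, which is the curse of dimensionality in its plainest form.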
Dr Kansa started looking for a solution to these cases, a solution that would make his and his colleagues’ lives easier and their calculations more accurate. ‘I became interested in highly convergent meshfree computational methods because the standard methods changed the physics to conform to the numerics, rather than changing the numerics to conform to the physics. I demonstrated that by moving points in space as exact differentials with source terms, no artificial physics needed to be introduced for time dependent problems’, Dr Kansa explains.
Dr Kansa’s work can be applied to many fields, from astrophysics and quantum mechanics to plasma fusion, cancer growth, and shocks and explosions. Shocks and explosions, for example, are difficult to compute directly from the theory. Firstly, this is because the governing theory, the Navier–Stokes equations, is incomplete in this area. Secondly, the calculations may become singular and therefore produce infinities, which are certainly not observed in practice. The solution is better numerical methods, as Dr Kansa observes: ‘The main idea in our recent work is that numerical methods of the 1950s were adapted to very primitive computers and have not evolved since then. The excuse that meshfree methods suffer from “ill-conditioning” is obsolete, and fast extended precision methods now exist. It seems counter-intuitive that global or widely banded methods based on C∞ functions requiring extended arithmetic precision are superior in both accuracy and execution speed on a specific computer. Exponential convergence always wins over the slow convergence rates of finite methods. Presently, these C∞ radial basis functions appear to be ideally suited for high dimensional problems.’
C∞ radial basis functions are global in nature and yield either full or broadly banded systems of linear or nonlinear equations. The infinity symbol indicates that such functions belong to the class of functions with an infinite number of derivatives, otherwise known as smooth functions. Dr Kansa extended the class of radial basis functions to include finite jump discontinuities in the normal propagation direction while remaining infinitely smooth in the tangential directions. C∞ functions converge exponentially, depending upon the shape ratio, defined as the ratio of the average shape parameter to the average minimum distance between points. The larger this ratio becomes, the faster the convergence rate. In contrast, standard linear finite difference and finite element schemes only converge at the typical quadratic rates.
C∞ functions are suitable for many types of equations – integral, partial differential, or integro-partial differential equations. Moreover, high dimensional systems are effectively reduced to one-dimensional problems, because the basis functions interact through the single scalar distance between the points of the original governing equation and the centres of the C∞ functions. Since C∞ functions depend only on the distance to a centre, they work in flat Euclidean geometry, in spherical polar geometry, or with geodesics such as those used in general relativity. Changing the norm – the notion of distance in the space – makes radial basis functions versatile and applicable to most computational physics problems.
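Because the basis depends only on pairwise distances, the same code serves any number of dimensions; only the distance computation sees the coordinates. A small NumPy sketch (using the Gaussian basis for simplicity; an illustrative choice, not a prescription from the original work):

```python
import numpy as np

def gaussian_rbf_matrix(points, c):
    # points has shape (n, d); the basis phi(r) = exp(-(r/c)^2) sees only
    # the pairwise distance r, so the dimension d never enters the formula.
    diffs = points[:, None, :] - points[None, :, :]
    r2 = np.sum(diffs ** 2, axis=-1)   # squared Euclidean distances
    return np.exp(-r2 / c ** 2)

rng = np.random.default_rng(1)
A_2d = gaussian_rbf_matrix(rng.random((20, 2)), 0.5)  # 20 centres in 2-D
A_5d = gaussian_rbf_matrix(rng.random((20, 5)), 0.5)  # same call in 5-D
```

Swapping the Euclidean distance for a great-circle or geodesic distance changes only the computation of r2, which is what makes the approach portable across geometries.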
In addition, Dr Kansa found another advantage of meshfree C∞ functions. For time dependent problems, one can find a local velocity at each evaluation point that transforms a complicated nonlinear equation with source terms into an exact differential for the expansion coefficients. The time dependent expansion coefficients are then solved for exactly. Any redistribution of evaluation points is rigorously constrained to conserve mass, momentum components, and total energy, without violating the basic principles of physics.
Radial basis functions have many advantages. Whereas the finite methods can only use the first few terms of the Taylor series and deteriorate as the number of dimensions increases, radial basis functions have been shown to converge exponentially to the original function, at a rate depending on the dimensionality of the problem and the shape ratio parameter. Since radial basis functions are continuous, their derivatives are not interrupted at the mesh boundaries seen in traditional finite methods. Finite methods have compact support; in other words, the value of the functions outside the mesh boundary is zero. In his recent work, Dr Kansa found a formulation that models solutions on the interior of a domain and on its boundary as a sum of radial basis functions with a set of time-dependent expansion coefficients and a forcing function. To extend this solution method to higher dimensions, he offers a second formulation in which the boundary, interior, and time dependent problem are cast into a ‘fuzzy’ global minimisation problem that works for both well-posed and ill-posed problems. The system of equations can be shown to be spectrally equivalent to the initial problem, and therefore the solution is mathematically correct.

Such systems of equations can still suffer some ill-conditioning arising from round-off errors due to the limited precision of floating point computation. Presently, computer chip manufacturers are concerned with providing hardware for social media and gaming devices, not for scientific computing. Consequently, alternative software remedies of varying quality are available. Dr Kansa and Pavel Holoborodko demonstrated applications of extended precision arithmetic in their recent work. The method can use multiprecision tools and eigenvalue decomposition, and can find global zeros of a function, displaying impressive numerical precision when compared to traditional methods.
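Dr Kansa and Holoborodko work with the Multiprecision Computing Toolbox for MATLAB; as an illustrative stand-in, the freely available Python library mpmath demonstrates the same remedy (the example below is this sketch’s own, not theirs). The ill-conditioned Hilbert system that defeats double precision is solved to dozens of correct digits once the working precision is raised.

```python
from mpmath import mp, matrix, lu_solve

mp.dps = 50  # carry 50 significant decimal digits throughout

n = 12  # cond(H) ~ 1e16: essentially hopeless in ordinary double precision
H = matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
b = H * matrix([1] * n)   # right-hand side whose exact solution is all ones
x = lu_solve(H, b)

max_err = max(abs(x[i] - 1) for i in range(n))
# the extra working digits absorb the ill-conditioning almost entirely
```

The ill-conditioning still consumes roughly sixteen digits; the point is that with fifty digits in hand, losing sixteen leaves an answer that is still far more accurate than double precision could ever deliver.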
Dr Kansa’s research opens new paths towards better numerical methods, yet challenges remain. One of the greatest is the need to distribute the calculations across massively parallel computer systems, which would greatly reduce the time for each computation. Dr Kansa has promised to continue his efforts in that direction: ‘My next research area will be exploring new methods to reduce the computational time to solve important multi-dimensional equation systems.’
A powerful numerical method offers a viable way to calculate computationally intensive problems in physics and engineering, without the errors plaguing finite volume and element methods
Meet the researcher
Dr Edward Kansa received his PhD from Vanderbilt University in 1972 in many-body quantum theory, after which he worked on applied mathematics and physics modelling problems at the U.S. Bureau of Mines and, from 1980 to 2004, at Lawrence Livermore National Laboratory. Dr Kansa held teaching positions at the Department of Mechanical and Aerospace Engineering of the University of California and the Department of Physical Sciences of Embry-Riddle Aeronautical University. The Kansa meshfree method has wide applications in engineering and physical science for solving differential equations in computationally intensive contexts. Dr Kansa has authored over 120 papers and received over 4800 citations.
Pavel Holoborodko, Multiprecision Computing Toolbox, Kanagawa, Japan
Alexandre I. Fedoseyev, Computational Fluid Dynamics Corp., Huntsville, Alabama, USA
Juergen Geiser, Ruhr-Universität Bochum, Bochum, Germany
Leevan Ling, Hong Kong Baptist University, Hong Kong
Yiu-Chung Hon, City University of Hong Kong, Hong Kong
António Ferreira, University of Porto, Porto, Portugal
Nicolas A. Libre, Missouri University of Science and Technology, Rolla, Missouri, USA
Anita Wong, Open University of Hong Kong, Hong Kong
Efim A. Galperin, University of Quebec at Montreal, Montreal, Quebec, Canada
Athena Makroglou, University of Portsmouth, Portsmouth, UK
Scott Sarra, Marshall University, Huntington, West Virginia, USA
National Science Foundation (United States)
Small Business Innovation Research (SBIR) program of the National Science Foundation
E. J. Kansa, Multiquadrics – A scattered data approximation scheme with applications to computational fluid dynamics: I. Surface approximations and partial derivative estimates, Comput. Math. Appl., 1990, 19, 127–145.
E. J. Kansa, Multiquadrics – A scattered data approximation scheme with applications to computational fluid dynamics: II. Solutions to parabolic, hyperbolic, and elliptic partial differential equations, Comput. Math. Appl., 1990, 19, 147–161.
E. J. Kansa, R. C. Aldredge and L. Ling, Numerical simulation of two-dimensional combustion using mesh-free methods, Eng. Anal. Bound. Elem., 2009, 33, 940–950.
E. J. Kansa and J. Geiser, Numerical solution to time-dependent 3D inviscid Burgers’ equations, Eng. Anal. Bound. Elem., 2013, 37, 637–645.
E. J. Kansa and P. Holoborodko, On the ill-conditioned nature of C∞ strong form collocation, Eng. Anal. Bound. Elem., submitted, 4 May 2016.