Numerical Analysis and Scientific Computing Seminars 2012/13
Semester Two (Spring 2013)
-
25 January 2013
An approach to adaptive stochastic Galerkin methods for random elliptic PDE
Claude J. Gittelson (Purdue University)
11.00 am - Frank Adams Room 1, Alan Turing Building
Abstract: Solutions of elliptic boundary value problems with random coefficients in the operator admit efficient approximations by polynomials on the parameter domain. Each coefficient in such an expansion is a spatially dependent function, and can be approximated within a hierarchy of finite element spaces. I will describe an adaptive strategy for selecting a finite set of active coefficients along with suitable finite element spaces, and discuss work in progress on simplifying practical and theoretical aspects of this algorithm.
-
8 February 2013
Distribution of aligned letter pairs in optimal alignments of random sequences
Raphael Hauser (University of Oxford)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: Considering the optimal alignment of two i.i.d. random sequences of length n, we show that when the scoring function is chosen randomly, almost surely the empirical distribution of aligned letter pairs in all optimal alignments converges to a unique limiting distribution as n tends to infinity. This result is interesting because it helps us understand the microscopic path structure of a special type of last passage percolation problem with correlated weights, an area of long-standing open problems. Furthermore, characterizing the microscopic path structure yields a robust alternative to optimal alignment scores for testing the relatedness of genetic sequences, and makes it possible to determine the exact asymptotic order of fluctuation of the alignment score in most cases.
-
22 February 2013
Practical performance modelling of scientific codes at scale
Robert Bird and David Beckingsale (University of Warwick)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: Understanding the performance of scientific codes provides opportunities to save time and money, and to make predictions about future hardware. As the performance of supercomputers marches towards exascale, predicting application performance becomes more complex. The number of processor cores in modern supercomputers continues to increase, and this trend looks set to accelerate in the race to exascale. Additional processors cause applications to spend more time transferring data. Understanding the structure of these communications, and being able to predict the time they take, is key to accurately predicting the performance of these applications. Performance modelling encompasses a range of techniques that allow practitioners to estimate numerous performance metrics for their codes. In the context of highly parallel scientific codes, designed for large supercomputers, we use performance models to predict how long a code will take to run with a given set of parameters, such as problem size and the number of cores used. This allows a user to know, a priori, how long their desired simulation will take to complete, without needing to waste time and money actually running the program at the desired scale. In this talk we motivate and describe the development of performance models for scientific codes running on many processors. We present a detailed example in which we apply these techniques to a magnetohydrodynamics application, before looking at future areas that will benefit from this approach.
-
8 March 2013
Linear barycentric rational interpolation: properties and applications
Georges Klein (University of Fribourg)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: Equispaced nodes are reputed to be a poor choice for one-dimensional interpolation. From data on such nodes it is not possible to construct an approximation scheme that converges very rapidly to the sampled function and is simultaneously computationally stable. Linear barycentric rational interpolation with the weights presented by Floater and Hormann turns out to be very efficient in practice, especially when the function is sampled at equispaced nodes. We review fundamental properties as well as applications of this rational approximation scheme.
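The Floater-Hormann scheme mentioned in the abstract can be sketched in a few lines. The weight formula below is the standard one for equispaced nodes with blending parameter d; the function names, node counts, and the choice d = 3 are illustrative, not taken from the talk.

```python
import numpy as np
from math import comb

def fh_weights(n, d):
    """Floater-Hormann barycentric weights for n+1 equispaced nodes,
    blending parameter d (0 <= d <= n)."""
    w = np.zeros(n + 1)
    for k in range(n + 1):
        for i in range(max(0, k - d), min(k, n - d) + 1):
            w[k] += comb(d, k - i)
        w[k] *= (-1) ** (k - d)
    return w

def bary_eval(x, nodes, fvals, w):
    """Evaluate the barycentric rational interpolant at the points x."""
    x = np.asarray(x, dtype=float)
    num = np.zeros_like(x)
    den = np.zeros_like(x)
    exact = np.full(x.shape, -1)              # index of a coinciding node, if any
    for k, xk in enumerate(nodes):
        diff = x - xk
        hit = diff == 0
        exact[hit] = k
        diff[hit] = 1.0                       # avoid division by zero
        num += w[k] * fvals[k] / diff
        den += w[k] / diff
    r = num / den
    r[exact >= 0] = fvals[exact[exact >= 0]]  # interpolation holds at the nodes
    return r

# interpolate exp on [0, 1] from 21 equispaced samples, d = 3
n, d = 20, 3
nodes = np.linspace(0.0, 1.0, n + 1)
w = fh_weights(n, d)
xs = np.linspace(0.0, 1.0, 501)
err = np.max(np.abs(bary_eval(xs, nodes, np.exp(nodes), w) - np.exp(xs)))
```

The interpolant converges like O(h^(d+1)) for smooth functions, so with 21 nodes and d = 3 the error `err` above is already of the order 1e-6, with no sign of the instability that plagues polynomial interpolation at equispaced nodes.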
-
22 March 2013
Energy-aware sparse and dense linear algebra
Enrique S. Quintana Ortí (Universidad Jaime I)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: Power is becoming a crucial challenge that the high performance computing community will have to face in order to efficiently leverage the Exascale systems that will be available at the end of this decade. In this talk we will address several aspects related to this issue on multicore and many-core (GPU) processors. Specifically, we introduce a simple model of power dissipation and a set of tools that compose a power-tracing framework. Then, using experimental data, we evaluate both the potential and the real energy reduction that can be attained for the task-parallel execution of dense and sparse linear algebra operations on multicore and many-core processors, when idle periods are leveraged by promoting CPU cores to a power-saving C-state.
-
26 April 2013
Adaptive inexact Newton methods with a posteriori stopping criteria for nonlinear diffusion PDEs
Martin Vohralík (INRIA Paris-Rocquencourt)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: We consider nonlinear algebraic systems resulting from numerical discretizations of nonlinear partial differential equations of diffusion type. To solve these systems, an iterative nonlinear solver is used and, on each step of this solver, an iterative linear solver. We derive adaptive stopping criteria for both iterative solvers. Our criteria are based on an a posteriori error estimate which distinguishes the different error components, namely the discretization error, the linearization error, and the algebraic error. We stop the iterations whenever the corresponding error no longer affects the overall error significantly. Our estimates also yield a guaranteed upper bound on the overall error at each step of the nonlinear and linear solvers. We prove the (local) efficiency and robustness of the estimates with respect to the size of the nonlinearity owing, in particular, to the error measure involving the dual norm of the residual. Our developments hinge on equilibrated flux reconstructions and yield a general framework. We show how to apply this framework to various discretization schemes like finite elements, nonconforming finite elements, discontinuous Galerkin, finite volumes, and mixed finite elements; to different linearizations like fixed point and Newton; and to arbitrary iterative linear solvers. Numerical experiments for the p-Laplacian illustrate the tight overall error control and important computational savings achieved in our approach.
-
3 May 2013
The periodic table of finite elements
Doug Arnold (University of Minnesota)
3.00 pm - Frank Adams Room 1, Alan Turing BuildingAbstract (click to view)Finite element methodology, reinforced by deep mathematical analysis, provides one of the most important and powerful toolsets for numerical simulation. Over the past forty years a bewildering variety of different finite element spaces have been invented to meet the demands of many different problems. The relationship between these finite elements has often not been clear, and the techniques developed to analyze them can seem like a collection of ad hoc tricks. The finite element exterior calculus, developed over the last decade, has elucidated the requirements for stable finite element methods for a large class of problems, clarifying and unifying this zoo of methods, and enabling the development of new finite elements suited to previously intractable problems. In this talk, we will discuss the big picture that emerges, providing a sort of periodic table of finite element methods.
-
10 May 2013
The Defect Correction Method and its Applications
Choi-Hong Lai (University of Greenwich)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: In many CFD software platforms the fully unsteady incompressible or compressible flow equations may be solved by means of Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), or the unsteady Reynolds Averaged Navier-Stokes equations (URANS). Very often high order schemes are used in LES and URANS in order to obtain fluid flow results on a coarser mesh. One typical difficulty in the implementation of high order schemes is that the sparse structure of the linear system of equations differs from that of second order difference schemes. This means that different linear solvers may be required in order to achieve optimal speed in solving such linear systems. On the other hand, incorporating high order schemes into an existing CFD software package complicates the nonlinear and linear routines: every time a new stencil of a high order scheme is formed, new routines and data structures are required. This causes problems for software automation. There are two objectives in this talk. First, an implementation method for high order schemes that is easy to adopt in an existing CFD software package, based on the mathematical concept of the defect correction principle, is discussed. Second, the filtering effect of LES obtained by using high order schemes with fine temporal and spatial meshes is examined. From this examination, the defect correction method may be used as a simple means of extracting information equivalent to that from a sub-grid model.
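As a toy illustration of the defect correction principle named in the title (my own sketch, not the speaker's code): a fourth-order discretisation of a 1D Poisson problem can be solved using only a second-order solver, by repeatedly inverting the low-order operator against the high-order residual. The problem, mesh size, and iteration count below are illustrative.

```python
import numpy as np

def poisson_ops(n):
    """Second-order (A2) and mostly fourth-order (A4) discretisations of
    -u'' on (0,1) with homogeneous Dirichlet BCs, n interior points."""
    h = 1.0 / (n + 1)
    A2 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
          - np.diag(np.ones(n - 1), -1)) / h**2
    A4 = (30.0 * np.diag(np.ones(n)) - 16.0 * np.diag(np.ones(n - 1), 1)
          - 16.0 * np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 2), 2)
          + np.diag(np.ones(n - 2), -2)) / (12.0 * h**2)
    # fall back to the second-order stencil in the first and last row,
    # where the five-point stencil would reach outside the domain
    A4[0, :] = A2[0, :]
    A4[-1, :] = A2[-1, :]
    return A2, A4

n = 50
A2, A4 = poisson_ops(n)
x = np.linspace(0, 1, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)          # -u'' = f, exact solution sin(pi x)

u = np.linalg.solve(A2, f)                # plain low-order solve as starting guess
for _ in range(30):                       # defect correction iterations
    u = u + np.linalg.solve(A2, f - A4 @ u)

res = np.linalg.norm(f - A4 @ u)          # residual of the HIGH-order system
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Only the second-order matrix is ever factored; the high-order operator appears solely through residual evaluations, which is exactly why the approach slots into an existing solver without new linear algebra routines.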
-
24 May 2013
A New Family of Eigenvalue Algorithms for Solving Linear and Non-Linear Problems with Applications
Eric Polizzi (University of Massachusetts)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: The FEAST algorithm for real symmetric and Hermitian problems, based on a density matrix and contour integration techniques, was proposed in 2009. FEAST offers many important and unique capabilities for achieving high performance, robustness, accuracy, and potential for exploiting multiple levels of parallelism on modern computer architectures. A numerical library package was subsequently developed and freely released in 2009, and upgraded with a second-generation version in 2012. In February 2013, the FEAST solver (SMP version) was integrated into the Intel Math Kernel Library (MKL v11.02), and it is now featured as the Intel library's main sparse eigenvalue solver. In this presentation, we aim to further expand the current capabilities of the FEAST eigenvalue algorithm and to develop a unified numerical approach for solving linear, non-linear, symmetric and non-symmetric eigenvalue problems. The resulting algorithms retain many of the properties of Hermitian FEAST, including the multi-level parallelism. Using a series of numerical examples, we outline how various applications in computational nanosciences (DFT and TDDFT) can directly benefit from our efforts to improve and expand the capabilities of the FEAST algorithm and solver.
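A minimal sketch of the contour-integration idea behind FEAST, for a real symmetric matrix (the quadrature rule, subspace size, and iteration count here are illustrative choices of mine, not the library's): the spectral projector onto the eigenvalues inside an interval is approximated by a quadrature of the resolvent along a circle, and the filtered subspace is refined by Rayleigh-Ritz.

```python
import numpy as np

def feast_sketch(A, a, b, m0=5, nquad=16, iters=4, seed=0):
    """Approximate the eigenvalues of symmetric A inside (a, b) by filtering
    a random subspace with a quadrature of the contour integral of the
    resolvent (zI - A)^{-1} around the interval."""
    n = A.shape[0]
    c, r = (a + b) / 2.0, (b - a) / 2.0
    theta = 2.0 * np.pi * (np.arange(nquad) + 0.5) / nquad
    z = c + r * np.exp(1j * theta)          # quadrature nodes on the circle
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, m0))
    I = np.eye(n)
    for _ in range(iters):
        Q = np.zeros((n, m0), dtype=complex)
        for zj, tj in zip(z, theta):
            # (1/(2*pi*i)) dz reduces to (r/nquad) * e^{i*theta_j} on a circle
            Q += (r / nquad) * np.exp(1j * tj) * np.linalg.solve(zj * I - A, X)
        Q, _ = np.linalg.qr(Q.real)         # symmetric contour => real filter
        Aq = Q.T @ A @ Q                    # Rayleigh-Ritz reduction
        lam, V = np.linalg.eigh(Aq)
        X = Q @ V                           # refined subspace for the next pass
    return np.sort(lam[(lam > a) & (lam < b)])

A = np.diag(np.arange(1.0, 11.0))           # eigenvalues 1, 2, ..., 10
lam_in = feast_sketch(A, 2.5, 5.5)          # should recover 3, 4, 5
```

Each quadrature node costs one linear solve with a shifted matrix, and the solves are independent, which is the source of the parallelism the abstract mentions.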
-
31 May 2013
Function Space MCMC Methods and Application to Bayesian Inverse Problems
Simon Cotter (University of Manchester)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: Markov chain Monte Carlo (MCMC) methods are a powerful family of tools that allow us to draw samples from any complex probability distribution we choose. These are particularly useful where we know the form of the probability distribution up to a normalising constant, whose value is computationally intractable. These methods are often used when tackling inverse problems, where we wish to characterise a posterior probability distribution. This posterior probability distribution arises from Bayes' theorem, which tells us how we can combine prior beliefs and observations in order to be able to analyse the dynamics which gave rise to the observations. In this talk, we introduce function space MCMC methods and show how they arise from discretisations of particular SPDEs. These methods have properties which lead to far better convergence rates when the quantity that we are trying to understand through observations is a member of a Hilbert space. If there is time, we shall explore an example of the use of these methods in shape registration, with application to biomedical image analysis.
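A standard example of such a function space method is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance rule involves only the likelihood and so remains well defined under mesh refinement. The sketch below is a finite-dimensional stand-in with illustrative parameter values (step size, chain length, toy data), not the speaker's example.

```python
import numpy as np

def pcn_mcmc(phi, sample_prior, u0, beta=0.2, n_iter=20000, seed=1):
    """pCN Metropolis-Hastings for a posterior with Gaussian prior N(0, C)
    and negative log-likelihood phi: proposal v = sqrt(1-beta^2)*u + beta*xi
    with xi ~ N(0, C); the acceptance ratio depends only on phi."""
    rng = np.random.default_rng(seed)
    u, phi_u = u0, phi(u0)
    samples = []
    for _ in range(n_iter):
        v = np.sqrt(1.0 - beta**2) * u + beta * sample_prior(rng)
        phi_v = phi(v)
        if np.log(rng.uniform()) < phi_u - phi_v:   # accept/reject step
            u, phi_u = v, phi_v
        samples.append(u.copy())
    return np.array(samples)

# toy Bayesian inverse problem: prior u ~ N(0, I), data y = u + N(0, sigma^2 I)
d, sigma = 3, 0.5
y = np.array([1.0, -0.5, 0.3])
phi = lambda u: 0.5 * np.sum((y - u) ** 2) / sigma**2
chain = pcn_mcmc(phi, lambda rng: rng.standard_normal(d), np.zeros(d))
post_mean = chain[5000:].mean(axis=0)               # discard burn-in
exact_mean = y / (1.0 + sigma**2)                   # conjugate Gaussian answer
```

Because the prior covariance enters only through the proposal, the same few lines apply verbatim when u is a discretised function and C a trace-class operator, which is the dimension-robustness the abstract alludes to.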
Semester One (Autumn 2012)
-
5 October 2012
The science of ice sheets: the mathematical modeling and computational simulation of ice flows
Max Gunzburger (Florida State University)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: The melting of ice in Greenland and Antarctica would, of course, be by far the major contributor to any possible sea level rise. Thus, to make science-based predictions about sea-level rise, it is crucial that the ice sheets covering those land masses be accurately mathematically modeled and computationally simulated. In fact, the 2007 IPCC report on the state of the climate did not include predictions about sea level rise because it was concluded there that the science of ice sheets was not developed to a sufficient degree. As a result, predictions could not be rationally and confidently made. In recent years, there has been much activity in trying to improve the state of the art of ice sheet modeling and simulation. In this lecture, we review a hierarchy of mathematical models for the flow of ice, pointing out the relative merits and demerits of each, showing how they are coupled to other climate system components (ocean and atmosphere), and discussing where further modeling work is needed. We then discuss algorithmic approaches for the approximate solution of ice sheet flow models and present and compare results obtained from simulations using the different mathematical models.
-
26 October 2012
On the choice of parameters in rational Krylov methods for f(A)b
Stefan Guettel (University of Manchester)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: We consider the problem of approximating the vector f(A)b, where f(A) is a function of a large square matrix and b is a vector, using rational Krylov methods. The choice of parameters for these methods is currently an active area of research. We provide an overview of different approaches for obtaining (in some sense) optimal parameters, with an emphasis on the resolvent function (being closely related to the exponential function), and variants of Markov functions like the square root or the logarithm.
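The simplest rational Krylov method uses a single repeated pole, i.e. the shift-and-invert Krylov method: build a Krylov space with (A - sigma*I)^{-1} and extract f(A)b from the Rayleigh quotient. The pole sigma, the sizes, and the test matrix below are illustrative choices, not recommendations from the talk (whose subject is precisely how to choose such parameters well).

```python
import numpy as np

def shift_invert_krylov_expm(A, b, m=20, sigma=1.0):
    """Approximate exp(A) @ b from an m-dimensional rational Krylov space
    with the single repeated pole sigma (Arnoldi on (A - sigma*I)^{-1}).
    Assumes A symmetric and sigma outside its spectrum."""
    n = A.shape[0]
    B = np.linalg.inv(A - sigma * np.eye(n))    # dense inverse: sketch only
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):                       # Arnoldi, modified Gram-Schmidt
        w = B @ V[:, j - 1]
        for i in range(j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                            # Rayleigh quotient of A itself
    lam, U = np.linalg.eigh((Am + Am.T) / 2)
    fAm = U @ np.diag(np.exp(lam)) @ U.T        # exp of the small matrix
    return np.linalg.norm(b) * (V @ fAm[:, 0])  # ||b|| * V * f(Am) * e_1

# symmetric negative definite test matrix: minus a 1D Laplacian stencil
n = 40
A = -(np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1))
b = np.ones(n) / np.sqrt(n)
approx = shift_invert_krylov_expm(A, b, m=20)
lam, Q = np.linalg.eigh(A)                      # exact answer by diagonalisation
exact = Q @ (np.exp(lam) * (Q.T @ b))
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Note that the pole is placed outside the spectrum (here the eigenvalues lie in (-4, 0) and sigma = 1); where exactly to place one or several poles for a given f is the parameter-choice question the talk addresses.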
-
9 November 2012
Fast solvers for time-dependent optimal control problems
Martin Stoll (MPI Magdeburg)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: This presentation focuses on the fast solution of optimal control problems subject to PDE-constraints. We introduce some of the basic methodologies for the steady control problem and will then show how these techniques can be extended for the solution of time-dependent problems. The bottleneck for all linear and nonlinear problems is the solution of the linear systems in saddle point form. Our goal is to derive efficient preconditioners that show robustness with respect to the crucial problem parameters such as mesh-size and regularization parameter. We will show how our approach can be carried over to a variety of problems (Heat Equation, Stokes, Reaction-Diffusion) and present numerical results illustrating the competitiveness of this methodology.
-
23 November 2012
New fast algorithm for polynomial root-finding
Pavel Zhlobich (University of Edinburgh)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: We will review several well-known relations between matrices and orthogonal polynomials to demonstrate how these two theories benefit from each other. Often, using a technique from orthogonal polynomials, one can obtain an answer to a problem in matrix theory more easily, and vice versa. We will then concentrate on one particular example: a new fast algorithm that computes the roots of a polynomial, represented by its coefficients in an arbitrary polynomial basis, as the eigenvalues of a rank-structured matrix. Results of preliminary numerical experiments demonstrate that the new algorithm is faster and more accurate than currently known analogues.
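The classical special case of this matrix-eigenvalue view of root-finding is the monomial-basis companion matrix; the talk's algorithm generalises it to arbitrary polynomial bases and exploits rank structure for speed. The sketch below is only the dense textbook version, for orientation.

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of p(x) = coeffs[0]*x^n + ... + coeffs[n] computed as the
    eigenvalues of the companion matrix of the monic polynomial p/coeffs[0]."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                      # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones
    C[:, -1] = -c[1:][::-1]           # last column: negated coefficients
    return np.sort(np.linalg.eigvals(C))

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
r = roots_via_companion([1.0, -6.0, 11.0, -6.0])
```

Dense eigensolvers make this O(n^3); the point of the rank-structured approach in the talk is that the companion-like matrices arising from other bases (e.g. colleague matrices for Chebyshev coefficients) are low-rank perturbations of structured matrices, which brings the cost down.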
-
7 December 2012
Phase transition phenomena in convex optimization and geometry
Martin Lotz (University of Manchester)
3.00 pm - Frank Adams Room 1, Alan Turing Building
Abstract: Convex optimization is a widely used approach for solving under-determined systems of equations in the presence of structural assumptions. Example problems are sparse signal recovery and low-rank matrix recovery. The performance of these and related methods exhibits a curious phenomenon: as the amount of information about the solution increases, the optimization method shifts rapidly from an extraordinarily small chance of success to an extraordinarily large chance of success. In this talk we explore the geometric foundations of such phase transitions. We identify a parameter, the statistical dimension of a convex cone, that determines exactly the location of such phase transitions for high-dimensional problems. A series of examples and applications will be discussed.
Further information
For further information please contact the seminar organiser Vanni Noferini.