Please note, this is archived content and is no longer being updated. It is provided for historical records. Links on the pages may be broken or redirect to our current site.

Numerical Analysis and Scientific Computing Seminars 2007/08

Semester Two

  • 01 Feb
    2008
    Elliptic Reconstruction in A Posteriori Error Analysis for Evolution Equations
    Omar Lakkis (University of Sussex)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    I will address the use of the elliptic reconstruction technique (ERT) in a posteriori error analysis for fully discrete schemes for evolution partial differential equations, with particular focus on parabolic equations. A posteriori error estimates are effective tools in error control and adaptivity, and a mathematically rigorous derivation justifies and improves their use in practical implementations. The flexibility of the ERT allows a virtually indiscriminate use of various parabolic PDE techniques, such as energy methods, duality methods and heat-kernel estimates, as opposed to direct approaches, which leave less room for manoeuvre. Thanks to the ERT, parabolic stability techniques can be combined with different elliptic a posteriori analysis techniques, such as residual or recovery estimators, to derive a posteriori error bounds. The method unifies and simplifies most previously known analyses, and it provides previously unknown error bounds (e.g., pointwise norm error bounds for the heat equation). [Results with Ch. Makridakis and A. Demlow.] A further feature of the ERT, which I would like to highlight, is its simplifying power: it allows one to derive estimates where a direct analysis would be dauntingly complicated. As an example, I will illustrate its use in the context of non-conforming methods, with a special eye on discontinuous Galerkin methods [with E. Georgoulis] and "ZZ" recovery-type estimators [with T. Pryer].

  • 15 Feb
    2008
    Numerical solution of nonlinear matrix equations arising in Markov chains
    Bruno Iannazzo (Università dell'Insubria)

    2.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    Numerical methods for structured Markov chains are often based on the solution of nonlinear matrix equations, such as the unilateral quadratic matrix equation or the nonsymmetric algebraic Riccati equation. We review the theoretical properties of these equations and present some of the reliable algorithms developed so far. In particular, we present some recent techniques that allow the algorithms to work well even in ill-conditioned cases, where the derivative of the operator defining the equation is singular or nearly singular.
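As a hedged illustration of the kind of equation involved (not one of the algorithms from the talk), the unilateral quadratic matrix equation A X^2 + B X + C = 0 can, for benign data, be attacked with a naive fixed-point iteration. The matrices below are invented test data:

```python
import numpy as np

def solve_uqme(A, B, C, tol=1e-12, max_iter=500):
    """Fixed-point iteration X_{k+1} = -B^{-1}(C + A X_k^2) for the
    unilateral quadratic matrix equation A X^2 + B X + C = 0.
    Converges only when the map is a contraction (A, C small relative
    to B); robust solvers use e.g. cyclic reduction or Newton's method."""
    X = np.zeros_like(C)
    for _ in range(max_iter):
        X_new = -np.linalg.solve(B, C + A @ X @ X)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

# Made-up, well-conditioned test problem
A = 0.1 * np.eye(2)
B = np.eye(2)
C = np.array([[0.2, 0.05], [0.05, 0.2]])
X = solve_uqme(A, B, C)
residual = np.linalg.norm(A @ X @ X + B @ X + C)
```

In the ill-conditioned cases mentioned in the abstract (nearly singular derivative), such a simple scheme degrades badly, which is precisely where the more recent techniques come in.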

  • 29 Feb
    2008
    How Many Random Projections Does One Need to Recover a k-sparse Vector?
    Jared Tanner (University of Edinburgh)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    The essential information contained in most large data sets is small when compared to the size of the data set. +-That is, the data can be well approximated using relatively few terms in a suitable transformation. +-This paradigm of Compressed Sensing suggests a revolution in how information is collected and processed. In this talk we consider a stronger notion of compressibility, sparsity, which measures the number of non-zero entries. +-For data sets which are sparse (possibly following a transformation), the data can often be recovered efficiently, with relatively few randomized measurements by utilizing highly non-linear optimization based reconstruction techniques. This work was joint with David L. Donoho.
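As an illustrative sketch (dimensions and data are invented, and this is not the speaker's own code), a k-sparse vector can often be recovered exactly from random Gaussian projections by l1 minimisation, posed here as a linear program via the standard splitting x = u - v with u, v >= 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3          # signal length, measurements, sparsity

# k-sparse signal and random Gaussian measurement matrix
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x0

# min ||x||_1  s.t.  A x = b, written as an LP in (u, v) with x = u - v
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_rec = res.x[:n] - res.x[n:]
error = np.linalg.norm(x_rec - x0)
```

With m = 25 measurements for a 3-sparse vector of length 50, this point sits well inside the success region of the Donoho-Tanner phase transition, so the recovery is exact up to solver precision.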

  • 14 Mar
    2008
    Particle-mesh methods for diffeomorphic matching of curves
    Colin Cotter (Imperial College London)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    Diffeomorphic matching is a technique for analysing shape information where one seeks a diffeomorphism that takes one shape to another and is "nearest to the identity" with respect to a chosen metric. The technique can be used to classify shapes and perform statistics in a parameterisation-free way. I will describe a particle-mesh discretisation for matching curves, and will discuss a few interesting properties of the method when a geometric integrator is used. I will also discuss current work on solver strategies and our applications to bioflow datasets being developed at Imperial College.

  • 11 April
    2008
    Polynomial Roots, Approximate Greatest Common Divisors and Matrix Rank Estimation
    Joab Winkler (University of Sheffield)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    The computation of the roots of a polynomial is a classical problem in mathematics, and numerous algorithms have been developed for their determination. Most of them, however, yield poor results for polynomials that have multiple roots, which is seen computationally by the break up of a multiple root into a cluster of closely spaced simple roots. When, however, structured perturbations that preserve the multiplicities of the roots are imposed on the coefficients, the roots are very well-conditioned because their structured condition numbers are small. This result forms the basis of a polynomial root finding algorithm that also requires approximate greatest common divisor (GCD) computations and the estimation of the rank of a matrix. These topics are discussed in detail, and in particular, it is shown that structured matrix methods applied to a Sylvester resultant matrix can be used to compute an approximate GCD of two inexact polynomials.
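A minimal sketch of the rank connection mentioned above: for exact polynomials p (degree m) and q (degree n), deg gcd(p, q) = m + n - rank S(p, q), where S is the Sylvester resultant matrix. The polynomials below are invented test data; the talk's structured matrix methods address the much harder case of inexact coefficients.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester resultant matrix of p (degree m) and q (degree n),
    coefficients listed from highest to lowest degree."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                  # n shifted rows of p
        S[i, i:i + m + 1] = p
    for i in range(m):                  # m shifted rows of q
        S[n + i, i:i + n + 1] = q
    return S

p = [1, -4, 5, -2]   # (x - 1)^2 (x - 2)
q = [1, -4, 3]       # (x - 1)(x - 3)
S = sylvester(p, q)
deg_gcd = S.shape[0] - np.linalg.matrix_rank(S)   # gcd is x - 1
```

For inexact polynomials the rank of S must itself be estimated in the presence of noise, which is where the rank-estimation theme of the talk enters.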

  • 25 April
    2008
    The Envelope Method
    Beresford Parlett (UC Berkeley)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    The task is to compute orthogonal eigenvectors (without Gram-Schmidt) of symmetric tridiagonals for isolated clusters of close eigenvalues. We review an "old" method, the Submatrix method, and describe an extension which significantly enlarges the scope to include several mini-clusters within the given cluster. An essential feature is to find the envelope of the associated invariant subspace.
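To illustrate the problem setting (not the Envelope method itself): Wilkinson's matrix W21 is a symmetric tridiagonal whose largest eigenvalues come in extremely close pairs, and one can check how well a standard LAPACK-based solver keeps the computed eigenvectors orthogonal. A small sketch, assuming scipy is available:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Wilkinson's W21: diagonal 10, 9, ..., 1, 0, 1, ..., 10; off-diagonals 1.
d = np.abs(np.arange(21) - 10).astype(float)
e = np.ones(20)
w, V = eigh_tridiagonal(d, e)

gap = w[-1] - w[-2]                       # the top pair is near-degenerate
ortho_err = np.max(np.abs(V.T @ V - np.eye(21)))
```

For such clustered eigenvalues, obtaining orthogonal eigenvectors cheaply (without explicit Gram-Schmidt) is exactly the difficulty the Submatrix and Envelope methods address.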

  • 2 May
    2008
    Numerical differential equations without discretization: chebfuns and chebops
    Toby Driscoll (University of Delaware)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    In 2004, Battles and Trefethen introduced the "chebfun" system, a MATLAB package that uses Chebyshev polynomial approximation algorithms to enable working with functions numerically as though they were simple atomic objects. Because the system is based on adaptive function sampling, it rather easily serves as the foundation for adaptive spectral solutions of differential equations. Just as chebfuns hide the details of Chebyshev approximation from the user, a new "chebop" package has been defined to hide the details of spectral DE methods. A chebop object serves as a linear operator on chebfuns, and the system-solving and eigenvalue functions of MATLAB have been extended to adaptively solve linear boundary-value and eigenvalue problems for DEs. In this talk I will introduce the two packages and provide many demonstrations of how they may be used to solve linear and nonlinear boundary-value and initial-value problems. This talk is complementary to the colloquium to be given on 11 June 2008 by Prof. L. N. Trefethen.
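Chebfun itself is a MATLAB package, but its core idea of adaptive sampling (increase the degree until the trailing Chebyshev coefficients are negligible) can be sketched in Python with numpy's Chebyshev module. This is a toy analogue, not the chebfun code:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebfun_like(f, tol=1e-13, max_deg=2**14):
    """Double the interpolation degree until the trailing Chebyshev
    coefficients are negligible -- a toy version of chebfun's
    adaptive construction on [-1, 1]."""
    deg = 8
    while deg <= max_deg:
        c = C.chebinterpolate(f, deg)
        if np.max(np.abs(c[-2:])) < tol * np.max(np.abs(c)):
            return c
        deg *= 2
    raise RuntimeError("function not resolved to tolerance")

c = chebfun_like(np.exp)                     # exp resolved to ~machine precision
x = np.linspace(-1, 1, 1001)
err = np.max(np.abs(C.chebval(x, c) - np.exp(x)))
```

Once a function is represented this way, operations such as differentiation or rootfinding reduce to manipulations of the coefficient vector, which is what lets chebops build spectral DE solvers on top.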

  • 8 May
    2008
    Coefficients of Ergodicity: An Introduction
    Ilse Ipsen (North Carolina State University)

    2.00 - Frank Adams Room 1, Alan Turing Building
  • 9 May
    2008
    Preconditioning for self-adjointness
    Andy Wathen (Oxford University)

    3.00 - Frank Adams Room 1, Alan Turing Building
    Abstract

    Linear algebra is a major component of many areas of numerical analysis, from PDEs to optimization. In the latter area in particular, saddle-point (or KKT) systems are ubiquitous. As problems of larger and larger dimension arise (for example from PDE-constrained optimization), iterative linear system solution methods provide the main viable solution approaches. No one wants slow convergence, so preconditioning is almost always needed to ensure adequately rapid convergence. For symmetric matrices, the Conjugate Gradient and MINRES algorithms are the methods of choice; they are underpinned by a descriptive convergence theory that gives clear guidance on desirable preconditioners to enable rapid convergence. For these methods, symmetric positive definite preconditioning is usually employed to preserve symmetry. In this talk I will describe, and illustrate with examples, useful situations where nonsymmetric preconditioning can be used to give self-adjoint preconditioned matrices in non-standard inner products. The most long-standing and widely used example is the method of Bramble and Pasciak, which arose in the context of the Stokes problem in PDEs - a saddle-point system. In these situations symmetric iterative methods can be employed in such inner products, with the consequent benefits. This is joint work with Martin Stoll.
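For contrast with the non-standard inner products of the talk, the standard symmetric setting looks like this: a symmetric positive definite preconditioner M applied inside CG. This is a generic sketch with a made-up 1-D Laplacian and a Jacobi preconditioner, not the Bramble-Pasciak method:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
# SPD test matrix: 1-D Laplacian stencil [-1, 2, -1]
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
# Jacobi (diagonal) preconditioner -- symmetric positive definite
M = diags(1.0 / A.diagonal())
b = np.ones(n)

x, info = cg(A, b, M=M)                # info == 0 signals convergence
relres = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

For an indefinite saddle-point matrix, plain CG with an SPD preconditioner is no longer applicable, which is what motivates nonsymmetric preconditioners that restore self-adjointness in a different inner product.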

  • 17 June
    2008
    Linearizations of Singular Matrix Polynomials and the Recovery of Minimal Indices
    Steve Mackey (Kalamazoo)

    10.00 - Frank Adams Room 1, Alan Turing Building

Further information

For further information please contact the seminar organiser Timo Betcke.
