
MATH50003NumericalAnalysis (2022–2023)

Notes and course material for MATH50003 Numerical Analysis

Lecturer: Dr Sheehan Olver

Office hour: Mondays 11am, Huxley 6M40

Notes (PDF)

Background material

  A. Introduction to Julia (PDF): we introduce the basic features of the Julia language.
  B. Asymptotics and Computational Cost (PDF): we review Big-O, little-o, and "asymptotic to" notation, and their usage in describing computational cost.
  C. Adjoints and Normal Matrices (PDF): we review complex inner-products, adjoints, normal matrices, and the spectral theorem.

I: Computing with numbers

  1. Integers (PDF): we discuss how computers represent integers using modular arithmetic.
  2. Reals (PDF): we discuss how computers represent reals using IEEE floating-point arithmetic.
  3. Divided Differences (PDF): we discuss how we can approximate derivatives using floating-point arithmetic, with error bounds.
  4. Dual Numbers (PDF): we discuss how a special commutative ring, the dual numbers, defined similarly to the complex numbers, facilitates fast and accurate computation of derivatives (see the sketch after this list).
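
To give a flavour of I.4, here is a minimal sketch of differentiation via dual numbers; the Dual type and derivative helper below are illustrative assumptions, not the course's implementation.

# A minimal dual number a + b*ϵ, where ϵ^2 == 0 (illustrative only).
struct Dual
    a::Float64  # value
    b::Float64  # derivative component
end

import Base: +, *
+(x::Dual, y::Dual) = Dual(x.a + y.a, x.b + y.b)
*(x::Dual, y::Dual) = Dual(x.a * y.a, x.a * y.b + x.b * y.a)

# Differentiate f at x by evaluating f(x + ϵ) and reading off the ϵ-component.
derivative(f, x) = f(Dual(x, 1.0)).b

derivative(x -> x * x * x, 2.0)  # returns 12.0, the exact derivative of x^3 at 2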

II: Computing with matrices

  1. Structured Matrices (PDF): we discuss types of structured matrices (dense, triangular, and banded).
  2. Orthogonal Matrices (PDF): we discuss types of orthogonal and unitary matrices (permutations, rotations, and reflections).
  3. Least Squares and QR (PDF): we discuss least squares and the computation of the QR factorisation (see the sketch after this list).
  4. PLU and Cholesky (PDF): we discuss the PLU and Cholesky factorisations, as well as symmetric positive definite matrices.
  5. Norms (PDF): we discuss vector and matrix norms.
  6. Singular Value Decomposition (PDF): we discuss the Singular Value Decomposition (SVD) and 2-norms of matrices.
  7. Condition Numbers (PDF): we discuss stability, backward error analysis, and condition numbers.
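
As a taste of II.3, here is a short sketch of solving a least squares problem via QR using Julia's standard LinearAlgebra library; the data are made up for illustration.

using LinearAlgebra

# Fit a line c[1] + c[2]*t through three points by least squares.
A = [1.0 1.0; 1.0 2.0; 1.0 3.0]  # design matrix with columns [1, t]
b = [1.0, 2.0, 2.0]

F = qr(A)  # QR factorisation, with A == F.Q * F.R
c = F \ b  # minimises ‖A*c - b‖₂ via the factorisation; here c ≈ [2/3, 1/2]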

III: Computing with functions

  1. Fourier Expansions (PDF): we discuss approximating Fourier expansions using the Trapezium Rule.
  2. Discrete Fourier Transform (PDF): we discuss how computing approximate Fourier coefficients can be recast as applying a unitary matrix, and how approximate Fourier expansions interpolate.
  3. Orthogonal Polynomials (PDF): we introduce orthogonal polynomials and discuss their properties including three-term recurrences and Jacobi matrices.
  4. Classical Orthogonal Polynomials (PDF): we discuss special families of orthogonal polynomials which arise in applications, in particular Chebyshev and Legendre polynomials.
  5. Interpolation and Quadrature (PDF): we discuss how interpolating a function by a polynomial leads naturally to a quadrature rule, for approximating integrals.
  6. Gaussian Quadrature (PDF): we discuss a powerful quadrature rule that comes from diagonalising truncations of Jacobi matrices, and is exact for polynomials of twice the degree one would expect (see the sketch after this list).
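
As a preview of III.6, the following sketch constructs an n-point Gauss–Legendre rule by diagonalising a truncated Jacobi matrix (the Golub–Welsch approach); the recurrence coefficients used are the standard ones for Legendre polynomials.

using LinearAlgebra

# n-point Gauss–Legendre quadrature on [-1, 1] from the truncated Jacobi matrix.
function gausslegendre(n)
    β = [k / sqrt(4k^2 - 1) for k in 1:n-1]  # Legendre three-term recurrence coefficients
    J = SymTridiagonal(zeros(n), β)          # truncated Jacobi matrix
    x, V = eigen(J)                          # the nodes are the eigenvalues
    w = 2 * V[1, :] .^ 2                     # weights from the first entries of the eigenvectors
    x, w
end

x, w = gausslegendre(3)
sum(w .* x .^ 4)  # ≈ 2/5, the exact integral of x^4 over [-1, 1]; exact for degree ≤ 2n - 1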

Non-examinable material: I.3 Example 1/2, II.2 Definition 7/Lemma 2, II.3.2, II.3 Example 2, II.4.1, II.4.2, III.2.3

Assessment

  1. Practice computer-based exam (Solutions)
  2. 2021–22 Computer-based exam (Solutions)
  3. 2022–23 Computer-based exam (Solutions)
  4. Practice final exam
  5. Final exam (pen-and-paper): TBA

Problem sheets (pen-and-paper)

  1. Integers and Reals (Solutions)
  2. Bounding Errors (Solutions)
  3. Divided Differences and Dual Numbers (Solutions)
  4. Structured and Orthogonal Matrices (Solutions)
  5. QR Factorisations (Solutions)
  6. Cholesky and Norms (Solutions)
  7. SVD and Condition Numbers (Solutions)
  8. Fourier Expansions (Solutions)
  9. Orthogonal Polynomials (Solutions)
  10. Quadrature (Solutions)
  11. Revision (Solutions)

Labs (Julia-based)

  1. Introduction to Julia (Raw) (Solutions)
  2. Interval Arithmetic (Raw) (Solutions)
  3. Divided Differences and Dual Numbers (Raw) (Solutions)
  4. Structured Matrices (Raw) (Solutions)
  5. Orthogonal Matrices (Raw) (Solutions)
  6. Least squares, QR, and Cholesky (Raw) (Solutions)
  7. Singular value decomposition (Raw) (Solutions)

Lecture material

  1. Integers: Notebook
  2. Floating Point: Notebook
  3. Arithmetic: Notebook
  4. Bounding Errors: Notebook
  5. Divided Differences: Notebook
  6. Dual Numbers: Notebook
  7. Structured Matrices: Notebook
  8. Orthogonal Matrices: Notebook
  9. Reflections: Notebook

Reading List

  1. Tobin A. Driscoll & Richard J. Braun, Fundamentals of Numerical Computation, Julia Edition, Chapters 1–3, 7
  2. Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, Chapters 1–3
  3. Michael L. Overton, Numerical Computing with IEEE Floating Point Arithmetic, Chapters 2–6
  4. Lloyd N. Trefethen & David Bau III, Numerical Linear Algebra, Chapters 1–4
  5. Lloyd N. Trefethen, Approximation Theory and Approximation Practice, Chapters 1–4, 17–19
  6. The Julia Documentation
  7. The Julia–Matlab–Python Cheatsheet
  8. Ben Lauwens, Think Julia
  9. David A. Ham, Just enough Git to get by

What is numerical analysis?

Broadly speaking, numerical analysis is the study of approximating solutions to continuous problems in mathematics, for example, integration, differentiation, and solving differential equations. There are three key topics in numerical analysis:

  1. Design of algorithms: we discuss the construction of algorithms and their implementation in software.
  2. Convergence of algorithms: proving under which conditions algorithms converge to the true solution, and the rate of convergence.
  3. Stability of algorithms: the study of the robustness of algorithms to small perturbations in data, for example, those arising from measurement error, from the data themselves being computed by other algorithms, or simply from the way computers represent real numbers.

The modern world is built on numerical algorithms:

  1. Fast Fourier Transform (FFT): Gives a highly efficient way of computing Fourier coefficients from function samples, and is used in many places, e.g., the MP3 format for compressing audio and the JPEG image format. (Though, in a bizarre twist, GIF, a far less efficient format, has made a remarkable comeback.)
  2. Singular Value Decomposition (SVD): Allows for approximating matrices by those of low rank. This is related to the PageRank algorithm underlying Google; see the sketch after this list.
  3. Stochastic Gradient Descent (SGD): Minima of high-dimensional functions can be computed effectively using gradients in a randomised algorithm. This is used in the training of machine learning models.
  4. Finite Element Method (FEM): Used to solve many partial differential equations, including in aerodynamics and weather prediction. Firedrake is a major project based at Imperial that utilises the finite element method.
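
As an example of point 2, the best low-rank approximation is a few lines of Julia using the standard library; the matrix here is arbitrary.

using LinearAlgebra

A = [4.0 0.0 1.0; 0.0 3.0 0.0; 1.0 0.0 2.0]
U, σ, V = svd(A)  # A == U * Diagonal(σ) * V'
k = 1             # target rank
Aₖ = U[:, 1:k] * Diagonal(σ[1:k]) * V[:, 1:k]'  # best rank-k approximation in the 2-norm
opnorm(A - Aₖ)    # equals σ[k+1], by the Eckart–Young theorem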

This is not to say that numerical analysis is only important in applied mathematics. It is playing an increasingly important role in pure mathematics, with important proofs based on numerical computations:

  1. The Kepler conjecture. This 400-year-old conjecture on optimal sphere packing was finally proved in 2005 by Thomas Hales using numerical linear programming.
  2. Numerical verification of the Riemann Hypothesis. It has been proved that the Riemann zeta function has no zeros off the critical line whose imaginary part is less than 30,610,046,000.
  3. Steve Smale's 14th problem on the stability of the Lorenz system was solved using interval arithmetic.

Note these proofs are rigorous: as we shall see, it is possible to obtain precise error bounds in numerical calculations, and the computations themselves can be computer-verified (à la the Lean theorem prover). As computer-verified proofs become increasingly important, the role of numerical analysis in pure mathematics will also increase, as it provides the theory for rigorously controlling errors in computations. Note that there are many computer-assisted proofs that do not fall under numerical analysis because they involve discrete problems with no errors of computation or approximation, for example, the proof of the Four Colour Theorem.

The Julia Programming Language

In this course we will use the programming language Julia. This is a modern, compiled, high-level, open-source language developed at MIT. It is becoming increasingly important in high-performance computing and AI, used by AstraZeneca, Moderna and Pfizer in drug development and clinical trial acceleration, by IBM for medical diagnosis, by MIT for robot locomotion, and elsewhere.

It is ideal for a course on numerical analysis because it both allows fast implementation of algorithms and has support for fast automatic differentiation, a feature of increasing importance in machine learning (see the sketch below). It is also low-level enough that we can really understand what the computer is doing. As a bonus, it is easy to read and fun to write.
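
As a quick taste, here is a hedged example of automatic differentiation; ForwardDiff.jl is a registered package that would need to be added separately (it is not part of the course setup below).

using ForwardDiff  # install via: ] add ForwardDiff

f(x) = exp(x) * sin(x)
ForwardDiff.derivative(f, 1.0)  # exp(1)*(sin(1) + cos(1)), computed via dual numbers to machine precision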

To run Julia in a Jupyter notebook on your own machine:

  1. Download Julia v1.8.4.
  2. Open the Julia app, which will launch a new window.
  3. Install the needed packages by typing (] will change the prompt to the package manager):
] add IJulia Plots ColorBitstring SetRounding
  4. Build Jupyter via:
] build IJulia
  5. Launch Jupyter by typing:
using IJulia
notebook()

License: MIT License