nikodallanoce / ComputationalMathematics


ComputationalMathematics

Project for the Computational Mathematics for Learning and Data Analysis course, academic year 2021/2022.

Group Members

Wildcard Project

(P) is the linear least squares problem $$\displaystyle \min_{w} \lVert \hat{X}w-\hat{y} \rVert$$ where

$$\hat{X}= \begin{bmatrix} X^T \newline \lambda I \end{bmatrix}, \hat{y} = \begin{bmatrix} y \newline 0 \end{bmatrix},$$

where $X$ is the (tall, thin) matrix from the ML-CUP dataset provided by prof. Micheli, and $y$ is a random vector.
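Expanding the stacked blocks shows that (P) is Tikhonov (ridge) regularization of the underlying system $X^T w \approx y$:

$$\lVert \hat{X}w-\hat{y} \rVert^2 = \lVert X^T w - y \rVert^2 + \lambda^2 \lVert w \rVert^2,$$

so the normal equations read $(X X^T + \lambda^2 I)\,w = X y$, and the system matrix is symmetric positive definite whenever $\lambda \neq 0$.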

  • (A1) is an algorithm of the class of limited-memory quasi-Newton methods.
  • (A2) is thin QR factorization with Householder reflectors, in the variant where one does not form the matrix $Q$, but stores the Householder vectors $u_k$ and uses them to perform (implicitly) products with $Q$ and $Q^T$.
  • (A3) is an algorithm of the class of Conjugate Gradient methods.
  • (A4) is a standard momentum descent (heavy ball) approach.
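As an illustration of the (A1) idea, here is a minimal pure-Python sketch of the L-BFGS two-loop recursion (the repository's actual implementation is `LBFGS.m` in MATLAB; the constants and the 1-D test problem below are invented for the example). On a quadratic $f(w) = \tfrac{1}{2} c\, w^2$, a single stored pair already recovers the exact Newton direction $g/c$.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def two_loop(g, pairs):
    """L-BFGS two-loop recursion: pairs is a list of (s_k, y_k) with
    s_k = w_{k+1} - w_k and y_k = grad f(w_{k+1}) - grad f(w_k)."""
    q = list(g)
    alphas = []
    for s, y in reversed(pairs):          # first loop: newest pair first
        rho = 1.0 / dot(y, s)
        a = rho * dot(s, q)
        alphas.append((a, rho))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    # scale by gamma = s^T y / y^T y (initial inverse-Hessian guess)
    s_last, y_last = pairs[-1]
    gamma = dot(s_last, y_last) / dot(y_last, y_last)
    r = [gamma * qi for qi in q]
    for (s, y), (a, rho) in zip(pairs, reversed(alphas)):  # second loop
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r                               # approximates H^{-1} g

c = 4.0
g = [8.0]                      # gradient c * w at w = 2
pairs = [([0.5], [c * 0.5])]   # one step s and its gradient difference y
d = two_loop(g, pairs)         # Newton direction g / c = 2.0
```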
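The implicit-$Q$ variant described in (A2) can be sketched in a few lines: store the Householder vector $u$ and apply $H = I - 2\,uu^T/(u^Tu)$ as a matrix-free product instead of ever forming $Q$. This is an illustrative Python sketch (the repository's versions are `householder_vector.m` and `thinqr.m` in MATLAB; the test vector is invented).

```python
def householder_vector(x):
    # Reflector mapping x onto a multiple of e_1; the sign choice
    # avoids cancellation in the first component.
    norm_x = sum(xi * xi for xi in x) ** 0.5
    u = list(x)
    u[0] += norm_x if x[0] >= 0 else -norm_x
    return u

def apply_householder(u, v):
    # Computes H v = v - 2 u (u^T v) / (u^T u) without forming H.
    uu = sum(ui * ui for ui in u)
    uv = sum(ui * vi for ui, vi in zip(u, v))
    return [vi - 2.0 * uv / uu * ui for ui, vi in zip(u, v)]

x = [3.0, 4.0]
u = householder_vector(x)
hx = apply_householder(u, x)   # maps x onto [-5, 0]
```

Products with $Q$ and $Q^T$ are then just sequences of such reflections, applied in reverse or forward order of the stored vectors $u_k$.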
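For (A3), a minimal conjugate gradient loop applied to a small SPD system standing in for the normal equations $(XX^T + \lambda^2 I)\,w = Xy$ of (P). This is a hypothetical pure-Python sketch, not the repository's `cg.m`/`cg_opt.m`; the $2\times 2$ system is invented for the example.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    w = [0.0] * n
    r = list(b)            # residual b - A w for w = 0
    p = list(r)            # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        w = [wi + alpha * pi for wi, pi in zip(w, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        # new direction is A-conjugate to the previous ones
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return w

A = [[4.0, 1.0], [1.0, 3.0]]   # SPD, as X X^T + lambda^2 I would be
b = [1.0, 2.0]
w = conjugate_gradient(A, b)   # exact solution [1/11, 7/11]
```

In exact arithmetic CG terminates in at most $n$ iterations; on this $2\times 2$ system it converges in two.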
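Finally, the heavy-ball update of (A4) in one line: $w_{k+1} = w_k - \alpha \nabla f(w_k) + \beta\,(w_k - w_{k-1})$. A hypothetical Python sketch on a 1-D quadratic (not the repository's `smd.m`; the step size $\alpha$, momentum $\beta$, and curvature $c$ below are invented for the example).

```python
def heavy_ball(grad, w0, alpha=0.1, beta=0.5, iters=200):
    # Standard momentum descent: gradient step plus a multiple of the
    # previous displacement (the "momentum" term).
    w_prev, w = w0, w0
    for _ in range(iters):
        w_next = w - alpha * grad(w) + beta * (w - w_prev)
        w_prev, w = w, w_next
    return w

c = 4.0                                    # f(w) = 0.5 * c * w^2
w_star = heavy_ball(lambda w: c * w, w0=5.0)   # converges to 0
```

With these values the iteration matrix has spectral radius $\sqrt{\beta} \approx 0.71$, so the iterates contract linearly toward the minimizer $w^\star = 0$.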

Repository structure

📂ComputationalMathematics
├── 📂1_LBFGS  # Limited-memory quasi-Newton method
│   ├── 📄LBFGS.m # implementation of limited-memory BFGS
│   ├── 📄run_lbfgs.m # choose the hyper-parameters and run L-BFGS
│   └── 📄...
├── 📂2_QR  # Thin QR factorization with Householder reflectors
│   ├── 📄check_accuracy_thinqr.m # computes the accuracy of our implementation
│   ├── 📄householder_vector.m # builds the Householder vectors
│   ├── 📄thinqr.m # implementation of thin QR factorization
│   ├── 📄run_qr.m # choose the hyper-parameters and run thin QR
│   └── 📄...
├── 📂3_CG  # Conjugate gradient method
│   ├── 📄cg.m # non-optimized version of conjugate gradient
│   ├── 📄cg_opt.m # optimized implementation of conjugate gradient
│   ├── 📄run_cg.m # choose the hyper-parameters and run conjugate gradient
│   └── 📄...
├── 📂4_SMD  # Standard momentum descent (heavy ball)
│   ├── 📄smd.m # implementation of standard momentum descent
│   ├── 📄run_smd.m # choose the hyper-parameters and run standard momentum descent
│   └── 📄...
├── 📂datasets  # Datasets used by the project
│   └── 🗃️ML-CUP21-TR.csv
├── 📂utilities  # Methods for building the matrices, functions and gradients
│   ├── 📄build_lls.m # builds the function and gradient of the LLS problem
│   ├── 📄build_matrices.m # builds the required matrices
│   ├── 📄callback.m # computes the metrics
│   └── 📄compare_scalability # compares the scalability of each method
└── 📄README.md

Languages

MATLAB 86.7%, Python 13.3%