FAHAD MOSTAFA's repositories
Arnoldi-s-iteration-and-GMRES-Method-
It is common to seek a solution x minimizing r(x) = ‖y − Ax‖², where A ∈ R^(n×n). Let the initial guess be x_0 = 0; then the residual vector is r_0 = y. The GMRES method solves y = Ax over the Krylov subspaces κ_m, where Arnoldi's iteration is applied to find an orthonormal basis for κ_m, for m = 1, 2, …
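A minimal sketch of the idea, assuming NumPy; the helper names and the small random test system are illustrative, not the repository's actual code:

```python
import numpy as np

def arnoldi(A, r0, m):
    """Build an orthonormal basis Q for the Krylov subspace K_m(A, r0)
    and the (m+1) x m upper Hessenberg matrix H with A Q[:, :m] = Q H."""
    n = len(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # lucky breakdown: exact solution in K_{j+1}
            return Q[:, :j + 2], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

def gmres(A, y, m):
    """GMRES with initial guess x0 = 0: minimize ||y - A x|| over K_m(A, y)."""
    beta = np.linalg.norm(y)
    Q, H = arnoldi(A, y, m)
    e1 = np.zeros(H.shape[0]); e1[0] = beta
    z, *_ = np.linalg.lstsq(H, e1, rcond=None)   # small (m+1) x m least-squares problem
    return Q[:, :H.shape[1]] @ z

A = np.random.rand(50, 50) + 10 * np.eye(50)     # illustrative well-conditioned system
y = np.random.rand(50)
x = gmres(A, y, 30)
print(np.linalg.norm(y - A @ x))                 # residual norm of the approximation
```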
Machine_learning_short_review
Machine learning techniques using Python.
QR_factorization
A simple and fast implementation of QR factorization
Bayesian_regression
https://en.wikipedia.org/wiki/Bayesian_linear_regression
ConjugateGradient_vs_SteepestDescent
Basic and algorithmic differences between CG (conjugate gradient) and SD (steepest descent)
data_reduction
A survey of data reduction techniques for image recognition
EM_algorithm
EM algorithm to estimate the parameter θ (theta1) iteratively
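As one illustration of the iterative E-step/M-step structure, here is a hedged sketch that estimates two coin biases from mixed coin-flip data; the data, the two-coin setup, and all names are assumptions for illustration, not the repository's model:

```python
import numpy as np
from scipy.stats import binom

# Each entry: number of heads out of n_flips tosses; which coin was used is unknown.
heads = np.array([5, 9, 8, 4, 7])
n_flips = 10

theta = np.array([0.6, 0.5])             # initial guesses for the two coin biases
for _ in range(50):
    # E-step: posterior probability that each sample came from coin A or coin B
    like = np.vstack([binom.pmf(heads, n_flips, t) for t in theta])   # 2 x N
    resp = like / like.sum(axis=0)
    # M-step: re-estimate each theta as a responsibility-weighted heads rate
    theta = (resp @ heads) / (resp.sum(axis=1) * n_flips)

print(theta)                              # converged estimates of the two biases
```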
Gaussian_Mixture
Gaussian mixture model
GMM_clustering
Simulate data from a mixture of two bivariate Gaussian distributions and recover the clusters
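A possible sketch using NumPy and scikit-learn's GaussianMixture; the component means, covariances, and sample sizes are made up for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulate a mixture of two bivariate Gaussians
x1 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=200)
x2 = rng.multivariate_normal([4, 4], [[1, -0.2], [-0.2, 1]], size=200)
X = np.vstack([x1, x2])

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
labels = gmm.fit_predict(X)          # cluster assignment for each simulated point
print(gmm.means_)                    # recovered component means
```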
MCMC_regression
https://events.mpifr-bonn.mpg.de/indico/event/30/material/slides/12.pdf
min_norm_solution
Consider a linear system of equations Ax=b. If the system is overdetermined, the least squares (approximate) solution minimizes ||b−Ax||_2; some sources also write ||b−Ax||. If the system is underdetermined, one can compute the minimum norm solution, which also minimizes ||b−Ax||.
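A small NumPy sketch of the underdetermined case; the example matrix and right-hand side are arbitrary:

```python
import numpy as np

# Underdetermined system: more unknowns than equations
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([1.0, 2.0])

# Minimum norm solution via the pseudoinverse: x = A^+ b
x_pinv = np.linalg.pinv(A) @ b
# lstsq returns the same minimum norm solution for underdetermined systems
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(A @ x_pinv, b))     # residual is zero, so ||b - Ax|| is minimized
print(np.linalg.norm(x_pinv), np.allclose(x_pinv, x_lstsq))
```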
minimum_norm_geo
A geometric way to compute the minimum norm solution
multinomial_logistic_regression-
Multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. those with more than two possible discrete outcomes.
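A hedged example using scikit-learn's LogisticRegression on the Iris data (three classes); this is a generic illustration, not the repository's implementation:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Three-class problem: more than two possible discrete outcomes
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)   # lbfgs solver, multinomial (softmax) formulation
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))              # accuracy on held-out data
print(clf.predict_proba(X_te[:3]))        # one probability per class for each sample
```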
multivariate_newton
A faster multivariate Newton method
Nelder-Mead-Method
The Nelder-Mead algorithm is built around a simplex, a generalized triangle in N dimensions, and follows an effective and computationally compact scheme.
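For illustration, a minimal call to SciPy's Nelder-Mead implementation on the Rosenbrock test function; the test function and starting point are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a standard derivative-free test problem
def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Nelder-Mead works on a simplex of N+1 points and needs no gradients
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method='Nelder-Mead')
print(result.x, result.fun)   # should approach the minimizer (1, 1) with value 0
```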
Power-Iteration
Power iteration estimates the dominant eigenvalue and eigenvector of a matrix by repeatedly applying the matrix to a vector and renormalizing.
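A short NumPy sketch of power iteration; the tolerance, iteration cap, and test matrix are illustrative:

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    """Estimate the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    v = np.random.rand(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new        # Rayleigh quotient estimate of the eigenvalue
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        v, lam = v_new, lam_new
    return lam, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam, v)        # compare with np.linalg.eig(A)
```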
Proximal-Gradient-Descent
Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems.
https://www.stat.cmu.edu/~ryantibs/convexopt/lectures/prox-grad.pdf
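As a hedged example of a proximal gradient method, the following sketch solves a lasso problem with ISTA-style soft-thresholding; the step size rule, penalty, and synthetic data are assumptions for illustration:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_proximal_gradient(A, b, lam, n_iters=500):
    """Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 with proximal gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)             # gradient step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal step on the l1 part
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[:3] = [3.0, -2.0, 1.5]   # sparse ground truth
b = A @ x_true + 0.1 * rng.standard_normal(100)
print(lasso_proximal_gradient(A, b, lam=1.0)[:5])      # first entries of the recovered x
```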
Simpson_rule
Simpson's rule uses a quadratic polynomial on each subinterval of a partition to approximate the function f(x) and to compute the definite integral
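A minimal composite Simpson's rule sketch in NumPy, checked against a known integral; the test integrand and interval are illustrative:

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule with an even number of subintervals n."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Quadratic fit on each pair of subintervals: weights 1, 4, 2, 4, ..., 4, 1
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(simpson(np.sin, 0.0, np.pi, 100))   # exact value of the integral is 2
```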
Spectral_clustering
Spectral clustering refers to a family of algorithms that cluster eigenvectors derived from a matrix representing the input data's graph. An important step is applying a kernel function to the input data to generate an N×N similarity matrix or graph (where N is the number of input observations). Subsequent steps include computing the normalised graph Laplacian from this similarity matrix, obtaining the eigensystem of this Laplacian, and finally applying k-means to the top K eigenvectors to get the K clusters. Clustering in this way adds flexibility in the range of data that may be analyzed, and spectral clustering will often outperform k-means.
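A hedged sketch of those steps with NumPy and scikit-learn utilities (rbf_kernel, KMeans, make_moons); the kernel width and dataset are arbitrary illustrations, not the repository's code:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# 1. N x N similarity matrix from a kernel applied to the input data
W = rbf_kernel(X, gamma=20.0)
np.fill_diagonal(W, 0.0)

# 2. Normalized graph Laplacian  L = I - D^{-1/2} W D^{-1/2}
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

# 3. Eigenvectors of the Laplacian; keep the K with the smallest eigenvalues
K = 2
eigvals, eigvecs = np.linalg.eigh(L)
U = eigvecs[:, :K]
U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize the embedding

# 4. k-means on the embedded points gives the K clusters
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(U)
print(np.bincount(labels))
```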
Support_Vector_Machine
SVM for Binary Classification
Tilted_Importance_Sampling_with_Classical_Monte_Carlo_simulation
Tilted Importance Sampling with Classical Monte Carlo simulation
Trapizoidal_rule
The trapezoidal rule gives a better approximation of a definite integral than a Riemann sum by summing the areas of the trapezoids connecting successive points on the curve.
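A minimal composite trapezoidal rule sketch in NumPy; the test integrand is illustrative:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n subintervals of [a, b]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Endpoints get weight 1/2, interior points weight 1
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

print(trapezoid(np.sin, 0.0, np.pi, 1000))   # exact value of the integral is 2
# NumPy's built-in equivalent: np.trapezoid(y, x) (np.trapz in older versions)
```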