FAHAD MOSTAFA (FahadMostafa91)



Company: Texas Tech University

Location: Lubbock, Texas


FAHAD MOSTAFA's repositories

Arnoldi-s-iteration-and-GMRES-Method-

It is common to seek a solution x minimizing r(x) = ‖y − Ax‖², where A ∈ ℝ^(n×n). Let the initial guess be x₀ = 0; then the residual vector is r₀ = y. GMRES solves y = Ax over the Krylov subspaces κ_m, where Arnoldi's iteration is applied to find an orthonormal basis for κ_m, for each m = 1, 2, …
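
A minimal sketch of the two pieces in Python with NumPy (the repository itself is MATLAB); the function names arnoldi and gmres and the test matrix are illustrative, not the repo's API:

```python
import numpy as np

def arnoldi(A, r0, m):
    """Build an orthonormal basis Q of the Krylov subspace K_m(A, r0)
    and the (m+1) x m upper Hessenberg H with A Q[:, :m] = Q H."""
    n = r0.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # happy breakdown: exact solution found
            return Q[:, :j + 2], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

def gmres(A, y, m):
    """GMRES with x0 = 0: minimize ||y - A x|| over x in K_m(A, y)."""
    beta = np.linalg.norm(y)
    Q, H = arnoldi(A, y, m)
    e1 = np.zeros(H.shape[0]); e1[0] = beta
    # small (m+1) x m least-squares problem: min ||beta e1 - H c||
    c, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :H.shape[1]] @ c

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 10 * np.eye(50)   # well-conditioned test matrix
y = rng.standard_normal(50)
x = gmres(A, y, 30)
print("residual norm:", np.linalg.norm(y - A @ x))
```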

Language: MATLAB | License: MIT | Stargazers: 3 | Issues: 2 | Issues: 0

Machine_learning_short_review

Machine Learning Techniques using python.

Language: Jupyter Notebook | License: GPL-3.0 | Stargazers: 1 | Issues: 2 | Issues: 0

QR_factorization

A simple and fast implementation of QR factorization.
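
One simple and fast route is modified Gram–Schmidt; a sketch in Python with NumPy (the repository itself is MATLAB), with illustrative names:

```python
import numpy as np

def qr_mgs(A):
    """QR factorization via modified Gram-Schmidt: A = Q R with
    orthonormal columns in Q and upper-triangular R."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j]
        for i in range(j):                 # subtract projections one at a time
            R[i, j] = Q[:, i] @ v
            v = v - R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.random.default_rng(1).standard_normal((5, 3))
Q, R = qr_mgs(A)
print(np.allclose(Q @ R, A))   # True: the factorization reproduces A
```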

Language: MATLAB | License: MIT | Stargazers: 1 | Issues: 2 | Issues: 0

Bayesian_regression

https://en.wikipedia.org/wiki/Bayesian_linear_regression

Language: R | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

BFGS_in_R

BFGS is an optimization method for multidimensional, nonlinear, unconstrained functions. It belongs to the family of quasi-Newton methods.
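
A compact sketch of the method in Python with NumPy (the repository itself is in R); the line-search constants and the Rosenbrock test problem are illustrative choices, not the repo's code:

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Quasi-Newton BFGS: maintain an approximation H of the inverse
    Hessian, updated from gradient differences, and line-search along
    the quasi-Newton direction."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                          # initial inverse-Hessian guess
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                         # search direction
        t = 1.0                            # backtracking (Armijo) line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        yk = g_new - g
        if yk @ s > 1e-12:                 # curvature guard keeps H positive definite
            rho = 1.0 / (yk @ s)
            I = np.eye(n)
            H = (I - rho * np.outer(s, yk)) @ H @ (I - rho * np.outer(yk, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock test problem
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, [-1.2, 1.0]))          # converges to approx [1, 1]
```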

Language: R | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

ConjugateGradient_vs_SteepestDescent

Basic and algorithmic differences between conjugate gradient (CG) and steepest descent (SD).
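
A side-by-side sketch in Python with NumPy on a symmetric positive-definite quadratic, the setting where the comparison is cleanest; the test matrix is an arbitrary illustration:

```python
import numpy as np

def steepest_descent(A, b, x, iters):
    """Minimize 0.5 x'Ax - b'x (A SPD): step along the residual."""
    for _ in range(iters):
        r = b - A @ x
        alpha = (r @ r) / (r @ A @ r)      # exact line search
        x = x + alpha * r
    return x

def conjugate_gradient(A, b, x, iters):
    """Same problem, but the search directions are A-conjugate, so
    convergence is far faster on ill-conditioned systems."""
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)              # SPD test matrix
b = rng.standard_normal(30)
x0 = np.zeros(30)
for name, solver in [("SD", steepest_descent), ("CG", conjugate_gradient)]:
    x = solver(A, b, x0.copy(), 25)
    print(name, np.linalg.norm(b - A @ x))  # CG's residual is much smaller
```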

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

data_reduction

A survey of data reduction techniques for image recognition.

Language: MATLAB | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

EM_algorithm

An EM algorithm to estimate the parameter theta (theta1) iteratively.

Language: MATLAB | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

Gaussian_Mixture

Gaussian mixture model.

Language: MATLAB | License: GPL-3.0 | Stargazers: 0 | Issues: 2 | Issues: 0

GMM_clustering

Simulate data from a mixture of two bivariate Gaussian distributions, and recover the clusters.
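
A sketch of the same experiment using NumPy and scikit-learn's GaussianMixture, which fits the model by EM; the means and covariances below are arbitrary illustration values:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Simulate a mixture of two bivariate Gaussians
n = 300
X = np.vstack([
    rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], n),
    rng.multivariate_normal([4, 4], [[1.0, -0.2], [-0.2, 0.5]], n),
])

# Fit a two-component GMM with EM and read off the cluster labels
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print("component means:\n", gmm.means_)
print("cluster sizes:", np.bincount(labels))
```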

Language: MATLAB | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

MCMC_regression

https://events.mpifr-bonn.mpg.de/indico/event/30/material/slides/12.pdf

Language: R | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

min_norm_solution

Consider a linear system of equations Ax = b. If the system is overdetermined, the least-squares (approximate) solution minimizes ‖b − Ax‖²; some sources equivalently minimize ‖b − Ax‖, which has the same minimizer. If the system is underdetermined, one can compute the minimum-norm solution, which also minimizes ‖b − Ax‖.
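
A short sketch of the underdetermined case in Python with NumPy; the matrix sizes are arbitrary illustration values. Both np.linalg.pinv and np.linalg.lstsq return the minimum-norm least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))   # underdetermined: 3 equations, 5 unknowns
b = rng.standard_normal(3)

# Minimum-norm solution via the pseudoinverse: x = A^+ b
x_pinv = np.linalg.pinv(A) @ b
# lstsq returns the same minimum-norm least-squares solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_pinv, x_lstsq))                   # True
print("residual:", np.linalg.norm(b - A @ x_pinv))    # ~0: the system is consistent
```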

Language: Python | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

minimum_norm_geo

A geometric way to compute the minimum-norm solution.

Language: Python | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

multinomial_logistic_regression-

Multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e., problems with more than two possible discrete outcomes.
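
A sketch using scikit-learn (the repository itself is in R); the iris data and split are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Three-class problem: predict iris species from four measurements
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# With more than two classes, the default lbfgs solver fits a
# multinomial (softmax) likelihood over all classes jointly
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
print("class probabilities:", clf.predict_proba(X_te[:1]))
```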

Language: R | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

multivariate_newton

A faster multivariate Newton method.
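
A sketch of the basic iteration in Python with NumPy (the repository is MATLAB); the example system and its Jacobian are illustrative:

```python
import numpy as np

def newton(F, J, x, tol=1e-12, max_iter=50):
    """Multivariate Newton: solve F(x) = 0 by repeatedly solving the
    linearized system J(x) dx = -F(x). Quadratic convergence near a root."""
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton(F, J, np.array([2.0, 0.5]))
print(root, F(root))   # F(root) ~ [0, 0]
```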

Language: MATLAB | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

Nelder-Mead-Method

The Nelder–Mead algorithm searches using a simplex, which is a generalized triangle in N dimensions, and follows an effective and computationally compact scheme.
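
A sketch using SciPy's built-in Nelder–Mead implementation; the Rosenbrock test function and tolerances are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free simplex search on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
res = minimize(f, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.fun)   # approx [1, 1] and ~0
```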

Stargazers: 0 | Issues: 2 | Issues: 0

Power-Iteration

The power iteration estimates the dominant eigenvalue and eigenvector of a matrix by repeatedly multiplying a vector by the matrix and normalizing the result.
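
A generic sketch of the iteration in Python with NumPy; the function name and test matrix are illustrative:

```python
import numpy as np

def power_iteration(A, iters=1000, tol=1e-10):
    """Estimate the dominant eigenpair of A by repeatedly multiplying
    a vector by A and normalizing it."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v                # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v
        lam = lam_new
    return lam, v

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, v = power_iteration(A)
print(lam)   # ~5.0, the dominant eigenvalue of A
```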

Stargazers: 0 | Issues: 2 | Issues: 0

Proximal-Gradient-Descent

https://www.stat.cmu.edu/~ryantibs/convexopt/lectures/prox-grad.pdf

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

Simpson_rule

Simpson's rule uses a quadratic polynomial on each subinterval of a partition to approximate the function f(x) and to compute the definite integral.
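
A composite-rule sketch in Python with NumPy; the function name simpson and the sin test integral are illustrative:

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule on n subintervals (n must be even):
    a quadratic is fit over each pair of adjacent subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

print(simpson(np.sin, 0, np.pi, 100))   # ~2.0, the exact value is 2
```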

Language: Jupyter Notebook | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

Spectral_clustering

Spectral clustering refers to a family of algorithms that cluster the eigenvectors derived from a matrix representing the input data's graph. An important step is applying a kernel function to the input data to generate an N × N similarity matrix or graph (where N is the number of input observations). Subsequent steps compute the normalized graph Laplacian from this similarity matrix, obtain its eigensystem, and finally apply k-means to the top K eigenvectors to get the K clusters. Clustering in this way adds flexibility in the range of data that may be analyzed, and spectral clustering will often outperform k-means.
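
A sketch of those steps in Python with NumPy and scikit-learn's KMeans (the repository itself is MATLAB); the RBF bandwidth sigma and the two-rings test data are illustrative choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """RBF similarity -> normalized graph Laplacian -> k-means on
    the eigenvectors for the k smallest eigenvalues."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))                 # N x N similarity matrix
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    U = vecs[:, :k]
    U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Two concentric rings: plain k-means fails, spectral clustering separates them
rng = np.random.default_rng(5)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
print(np.bincount(spectral_clustering(X, 2, sigma=0.5)))   # roughly [100, 100]
```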

Language: MATLAB | License: MIT | Stargazers: 0 | Issues: 2 | Issues: 0

Support_Vector_Machine

SVM for Binary Classification

Language: R | License: Apache-2.0 | Stargazers: 0 | Issues: 2 | Issues: 0

Tilted_Importance_Sampling_with_Classical_Monte_Carlo_simulation

Tilted Importance Sampling with Classical Monte Carlo simulation

Language: MATLAB | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

Trapizoidal_rule

The trapezoid rule gives a better approximation of a definite integral than a simple Riemann sum by summing the areas of the trapezoids connecting successive points.
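
A composite-rule sketch in Python with NumPy; the function name trapezoid and the test integral are illustrative:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule: sum the areas of the trapezoids
    joining successive sample points."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

print(trapezoid(np.sin, 0, np.pi, 1000))   # ~2.0 (exact value is 2)
```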

Language: Jupyter Notebook | License: GPL-3.0 | Stargazers: 0 | Issues: 0 | Issues: 0