Nathan Ankomah-Mensah's starred repositories
cs-video-courses
List of Computer Science courses with video lectures.
cyberpython2077
Using Python to Play Cyberpunk 2077
awesome-dnd
Resources for Dungeons & Dragons
PlotNeuralNet
LaTeX code for making neural network diagrams
CtCI-6th-Edition
Cracking the Coding Interview 6th Ed. Solutions
coding-interview-university
A complete computer science study plan to become a software engineer.
PrettyErrors
Prettify Python exception output to make it legible.
linkedin-skill-assessments-quizzes
Full reference of LinkedIn skill assessment answers (2024): AWS Lambda, REST API, JavaScript, React, Git, HTML, jQuery, MongoDB, Java, Go, Python, machine learning, PowerPoint, Excel, and more.
hpc-python
HPC Python lesson materials
OpenYandere
An open-source community rewrite of the game "Yandere Simulator" in C# (updated July 4, 2020; not the leaked source). Discord: https://discord.gg/e8RNBBw
COVID-EMDA
A Cross-Domain Data Hub with Electricity Market, Coronavirus Case, Mobility and Satellite Data in U.S.
bad-commit-message-blocker
Inhibits commits with bad messages from getting merged
deep_learning
Deep Learning - Code Hub: A repository for deep learning projects that includes basic functions, experimental projects, and paper implementations.
Structured_Data_Random_Features_for_Large-Scale_Kernel_Machines
Kernel machines such as the Support Vector Machine are widely used in machine learning because they can approximate any function or decision boundary arbitrarily well given enough training data. However, methods that operate on the kernel (Gram) matrix of the data scale poorly with the size of the training set: the computation and storage required grow quadratically with the number of samples, so training becomes intractable on large datasets. Specialized algorithms for linear Support Vector Machines run much faster when the dimensionality of the data is small, because they operate on the covariance matrix rather than the kernel matrix of the training data. The paper we've chosen proposes a way to combine the advantages of the linear and nonlinear approaches: it transforms the training and evaluation of any kernel machine by mapping the input data to a randomized low-dimensional feature space and then applying the corresponding linear-machine operations. The randomized features are designed so that the inner products of the transformed data closely approximate those in the feature space of a user-specified shift-invariant kernel. The method gives results competitive with state-of-the-art kernel-based classification and regression algorithms, avoids the cost of computing the full kernel matrix on large training sets, and achieves similar or even better test error.
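The mapping described above can be sketched with random Fourier features for the RBF kernel. This is a minimal illustration of the general idea, not code from the repository; the function name and parameters are chosen here for clarity.

```python
import numpy as np

def random_fourier_features(X, D, gamma, rng):
    """Map X (n, d) to a D-dimensional random feature space whose
    inner products approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies are drawn from the Fourier transform of the RBF kernel,
    # which is a Gaussian with standard deviation sqrt(2 * gamma).
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, D=2000, gamma=0.5, rng=rng)

# Linear inner products of the features approximate the kernel matrix,
# so a linear machine trained on Z behaves like a kernel machine on X.
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
```

Because the approximation error shrinks as the number of features D grows, the low-dimensional map trades a small, controllable loss in kernel accuracy for linear-time training.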
Awesome-PlayStation-Vita
List of awesome stuff for PlayStation Vita
opencv_contrib
Repository for OpenCV's extra modules
machinevision-toolbox-matlab
Machine Vision Toolbox for MATLAB
job-autofiller
🍻 Chrome Extension — autofill job applications
Startcraft_pysc2_minigames
StarCraft II machine learning research with DeepMind's PySC2 Python library: mini-games and agents.
CSC310-S20
Jupyter Notebooks for the CSC310 "Programming for Data Science" course at the University of Rhode Island.