AbdiHakim Husssein (Hisaack)

Company: Liquid Telecom

Location: Kenya

AbdiHakim Husssein's repositories

Regression_Template

A combined, ready-to-use template of my work on regression models.

Language: R | Stargazers: 0 | Issues: 0

Random_Forest_Classification

The explanation of the relationship between model variables and outputs is relatively easy for statistical models, such as linear regressions, thanks to the availability of model parameters and their statistical significance; it is harder for ensemble models such as random forests, which lack directly interpretable parameters.
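
By way of contrast, here is a minimal scikit-learn sketch (not taken from this repository) of fitting a random forest classifier and reading its feature importances, the closest analogue to linear-model coefficients; the synthetic data are purely illustrative.

# Minimal sketch: fit a random forest and inspect feature importances,
# the usual stand-in for linear-model coefficients. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)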

Language: Python | Stargazers: 0 | Issues: 0

Polynomial_Regression

A form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial in x.
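
A minimal sketch of the idea, written in Python with scikit-learn even though the repository itself is in R; the toy cubic data are invented for illustration.

# Fit y as an nth-degree polynomial in x (here n = 3) via a polynomial
# feature expansion followed by ordinary least squares.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100).reshape(-1, 1)
y = 0.5 * x.ravel() ** 3 - x.ravel() + rng.normal(scale=1.0, size=100)  # toy data

X_poly = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x)  # [x, x^2, x^3]
model = LinearRegression().fit(X_poly, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)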

Language: R | Stargazers: 0 | Issues: 0

Naive_Bayes

Naive Bayes is one of the most straightforward and fastest classification algorithms and is suitable for large volumes of data. The Naive Bayes classifier is used successfully in various applications such as spam filtering, text classification, sentiment analysis, and recommender systems. It uses Bayes' theorem of probability to predict the class of unknown data.
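
A minimal scikit-learn sketch of Bayes-theorem-based text classification (not this repository's code); the toy messages and labels are invented for illustration.

# Bag-of-words counts feed a multinomial Naive Bayes spam/ham classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting at noon tomorrow",
            "free cash offer", "project status update"]
labels = ["spam", "ham", "spam", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)
print(clf.predict(["free prize offer"]))  # expected: ['spam']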

Language: Python | Stargazers: 0 | Issues: 0

Multiple_Linear_Regression

Assumptions:
1. Regression residuals must be normally distributed.
2. A linear relationship is assumed between the dependent variable and the independent variables.
3. The residuals are homoscedastic and approximately rectangular-shaped.
4. Absence of multicollinearity is assumed, meaning that the independent variables are not too highly correlated.

At the center of multiple linear regression analysis is the task of fitting a single line through a scatter plot; more specifically, multiple linear regression fits a line through a multi-dimensional space of data points. The simplest form has one dependent and two independent variables. The dependent variable may also be referred to as the outcome variable or regressand, and the independent variables as the predictor variables or regressors.

There are three major uses for multiple linear regression analysis. First, it can be used to identify the strength of the effect that the independent variables have on the dependent variable. Second, it can be used to forecast the effects or impacts of changes, i.e., how much the dependent variable will change when we change the independent variables; for instance, a multiple linear regression can tell you how much GPA is expected to increase (or decrease) for every one-point increase (or decrease) in IQ. Third, it predicts trends and future values and can be used to get point estimates; an example question may be "what will the price of gold be six months from now?"

When selecting the model, another important consideration is the model fit. Adding independent variables to a multiple linear regression model will always increase the amount of explained variance in the dependent variable (typically expressed as R²), so adding too many independent variables without any theoretical justification may result in an over-fitted model.
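
A minimal scikit-learn sketch of a multiple linear regression with two predictors and an R² check; the variables (IQ, study hours, GPA) and data are synthetic stand-ins, not taken from this repository.

# Fit GPA ~ IQ + study_hours on toy data and read off the coefficients,
# which answer "how much does GPA change per one-unit change in each predictor?"
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
iq = rng.normal(100, 15, n)                                    # predictor 1
study_hours = rng.normal(10, 3, n)                             # predictor 2
gpa = 0.02 * iq + 0.05 * study_hours + rng.normal(0, 0.2, n)   # toy outcome

X = np.column_stack([iq, study_hours])
model = LinearRegression().fit(X, gpa)
print("coefficients:", model.coef_)
print("R^2:", model.score(X, gpa))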

Language: Python | Stargazers: 0 | Issues: 0

Arduino-Project

Contains web servers, servo control, and many other components of this IoT project.

Language: C | Stargazers: 0 | Issues: 0

Hierarchical-Clustering

Agglomerative clustering works in a “bottom-up” manner: each object is initially considered a single-element cluster (leaf), and at each step of the algorithm the two most similar clusters are combined into a new, bigger cluster (node). This procedure is iterated until all points are members of one single big cluster (root). The inverse of agglomerative clustering is divisive clustering, also known as DIANA (Divisive Analysis), which works in a “top-down” manner: it begins with the root, in which all objects are included in a single cluster, and at each iteration the most heterogeneous cluster is divided into two. The process is iterated until each object is in its own cluster.
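
A minimal scikit-learn sketch of agglomerative (“bottom-up”) clustering; the 2-D points are invented for illustration and this is not the repository's code.

# Each point starts as its own cluster; the closest clusters are merged
# repeatedly, and the tree is cut so that n_clusters groups remain.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1.0, 1.1], [1.2, 0.9], [5.0, 5.2],
              [5.1, 4.8], [9.0, 9.1], [8.8, 9.3]])

labels = AgglomerativeClustering(n_clusters=3, linkage="average").fit_predict(X)
print(labels)  # e.g. three groups of two points each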

Language: Python | Stargazers: 0 | Issues: 0

Decision_Tree_Regression

Decision tree regression observes the features of an object and trains a tree-structured model to predict future data, producing meaningful continuous output. Continuous output means that the output/result is not discrete, i.e., it is not represented just by a discrete, known set of numbers or values.
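
A minimal sketch of a decision tree producing continuous predictions, written in Python with scikit-learn even though the repository itself is in R; the noisy sine data are synthetic.

# A shallow regression tree fit to a noisy sine wave; predictions are
# real-valued, not class labels.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(tree.predict([[1.5], [3.0]]))  # continuous outputs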

Language: R | Stargazers: 0 | Issues: 0

Decision_Tree_Classification

Data mining information about people is becoming increasingly important in the data-driven society of the 21st century. Unfortunately, sometimes there are real-world considerations that conflict with the goals of data mining; sometimes the privacy of the people being data mined needs to be considered. This necessitates that the output of data mining algorithms be modified to preserve privacy while simultaneously not ruining the predictive power of the outputted model. Differential privacy is a strong, enforceable definition of privacy that can be used in data mining algorithms, guaranteeing that nothing will be learned about the people in the data that could not already be discovered without their participation. In this survey, we focus on one particular data mining algorithm -- decision trees -- and how differential privacy interacts with each of the components that constitute decision tree algorithms. We analyze both greedy and random decision trees, and the conflicts that arise when trying to balance privacy requirements with the accuracy of the model.
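
A minimal scikit-learn sketch of the ordinary, non-private greedy decision tree that the survey above takes as its starting point; the differential-privacy mechanisms it discusses are not implemented here, and the iris data are used only for illustration.

# Plain greedy decision tree classifier (no privacy guarantees).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))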

Language: Python | Stargazers: 1 | Issues: 0

Artificial-Neural-Networks

Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated. Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. (Neural networks can also extract features that are fed to other algorithms for clustering and classification; so you can think of deep neural networks as components of larger machine-learning applications involving algorithms for reinforcement learning, classification and regression.)
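
A minimal sketch of a small feed-forward network for classification; scikit-learn's MLPClassifier is assumed here as a stand-in for whatever framework the repository uses, and the two-moons data are toy input vectors.

# Two hidden layers learn a non-linear decision boundary from numerical inputs.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))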

Language: Python | Stargazers: 1 | Issues: 0

Kernerl-SVM

SVC and NuSVC are similar methods, but accept slightly different sets of parameters and have different mathematical formulations (see the Mathematical formulation section). On the other hand, LinearSVC is another implementation of Support Vector Classification for the case of a linear kernel. Note that LinearSVC does not accept the keyword kernel, as this is assumed to be linear. It also lacks some of the members of SVC and NuSVC, like support_. Like other classifiers, SVC, NuSVC and LinearSVC take as input two arrays: an array X of size [n_samples, n_features] holding the training samples, and an array y of class labels (strings or integers) of size [n_samples].
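
A minimal sketch of the X / y interface described above, contrasting SVC with an RBF kernel against LinearSVC (which takes no kernel keyword); the toy points are invented for illustration.

# X has shape [n_samples, n_features]; y holds the class labels.
import numpy as np
from sklearn.svm import SVC, LinearSVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 0, 1, 1])

rbf_svc = SVC(kernel="rbf", gamma="scale").fit(X, y)
linear_svc = LinearSVC().fit(X, y)        # the kernel is implicitly linear

print(rbf_svc.support_)                   # support-vector indices (SVC only)
print(linear_svc.predict([[0.9, 0.1]]))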

Language: Python | Stargazers: 0 | Issues: 0

K_Nearest_Neighbors

The K-Nearest Neighbors algorithm is a supervised machine learning algorithm for labeling an unknown data point given existing labeled data. The nearness of points is typically determined using distance measures such as the Euclidean distance formula, based on parameters of the data. The algorithm classifies a point based on the labels of its K nearest neighbor points, where the value of K can be specified.
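
A minimal scikit-learn sketch of K-Nearest Neighbors with Euclidean distance and K = 3; the labeled points are invented for illustration.

# New points are labeled by majority vote of their 3 nearest neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])
y = np.array(["A", "A", "A", "B", "B", "B"])

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(X, y)
print(knn.predict([[2, 2], [9, 9]]))  # expected: ['A' 'B']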

Language: Python | Stargazers: 1 | Issues: 0

K-Means

Clusters the data into k groups, where k is predefined:
1. Select k points at random as cluster centers.
2. Assign objects to their closest cluster center according to the Euclidean distance function.
3. Calculate the centroid or mean of all objects in each cluster.
4. Repeat steps 2 and 3 until the same points are assigned to each cluster in consecutive rounds.
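
A minimal scikit-learn sketch of the loop above; KMeans performs the assign-and-recenter iterations internally, and the toy points are invented for illustration.

# k = 2 clusters: centers are chosen, points assigned, centroids recomputed,
# and the process repeats until assignments stop changing.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.0], [1.5, 2.0], [0.5, 1.5],
              [8.0, 8.0], [8.5, 8.5], [9.0, 8.0]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("labels:", km.labels_)
print("centroids:", km.cluster_centers_)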

Language: Python | Stargazers: 0 | Issues: 0

Convolutional-Neural-Networks---Python

Convolutional neural networks are deep artificial neural networks used primarily to classify images (e.g. name what they see), cluster them by similarity (photo search), and perform object recognition within scenes. They are algorithms that can identify faces, individuals, street signs, tumors, platypuses and many other aspects of visual data. Convolutional networks perform optical character recognition (OCR) to digitize text and make natural-language processing possible on analog and hand-written documents, where the images are symbols to be transcribed. CNNs can also be applied to sound when it is represented visually as a spectrogram. More recently, convolutional networks have been applied directly to text analytics as well as to graph data with graph convolutional networks.
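
A minimal sketch of a small image-classification CNN; tensorflow.keras is assumed here rather than this repository's actual code, and the input shape and class count are placeholders.

# Convolution + pooling layers learn local visual filters; the final dense
# layer maps the flattened features to 10 class scores.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),           # e.g. 28x28 grayscale images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()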

Language: Python | Stargazers: 1 | Issues: 0

Logistics-Regressions---Machine-Learning

Logistic regression is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, logistic regression is a predictive analysis.
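
A minimal scikit-learn sketch of logistic regression on a dichotomous outcome; the breast-cancer dataset stands in for the repository's data purely for illustration.

# Binary target; the model outputs class probabilities via the logistic function.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print("class probabilities:", clf.predict_proba(X_test[:3]))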

Language: Python | Stargazers: 0 | Issues: 0

ScriptableRenderPipeline

Scriptable Render Pipeline

Language: C# | License: NOASSERTION | Stargazers: 0 | Issues: 0

ILSpy

.NET Decompiler

Language: C# | Stargazers: 0 | Issues: 0

resharper-unity

Unity support for both ReSharper and Rider

Language: C# | License: Apache-2.0 | Stargazers: 0 | Issues: 0

ml-agents

Unity Machine Learning Agents Toolkit

Language: C# | License: Apache-2.0 | Stargazers: 0 | Issues: 0

C-Sharp-Promise

Promises library for C# for management of asynchronous operations.

Language: C# | License: MIT | Stargazers: 0 | Issues: 0

PostProcessing

Post Processing Stack

Language: C# | License: NOASSERTION | Stargazers: 0 | Issues: 0

Awesome-Game-Networking

A Curated List of Game Network Programming Resources

Language: C++ | Stargazers: 0 | Issues: 0

xamarin-demos

This repository contains samples for the Syncfusion Xamarin UI controls and the guide to using them.

Language: C# | Stargazers: 0 | Issues: 0

Ocean_Community_Next_Gen

Next-gen iteration of the Unity community ocean shader.

Language: C# | License: NOASSERTION | Stargazers: 0 | Issues: 0

Unity-Script-Collection

A maintained collection of useful & free Unity scripts / libraries / plugins and extensions.

License: GPL-3.0 | Stargazers: 0 | Issues: 0

Unity-Design-Pattern

:tea: All Gang of Four Design Patterns written in Unity C# with many examples, plus some Game Programming Patterns written in Unity C#. | Implementations of various design patterns in Unity3D C#

Language: C# | Stargazers: 0 | Issues: 0

ar-drawing-java

A simple AR drawing experiment built in Java using ARCore.

License: Apache-2.0 | Stargazers: 0 | Issues: 0