Brian Wade's repositories

quadcopter_with_PID_controller

Quadcopter dynamics simulation with two proportional–integral–derivative (PID) controllers that adjust the motor speeds to control the quadcopter's position and orientation.

Language: Python · Stargazers: 26 · Issues: 0
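
The PID loop described above can be sketched in a few lines. The gains, timestep, and toy damped double-integrator "altitude" plant below are illustrative assumptions, not the repo's quadcopter model.

```python
class PID:
    """Textbook PID: output = kp*error + ki*integral(error) + kd*d(error)/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive the toy plant to a 1.0 m altitude setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
alt, vel = 0.0, 0.0
for _ in range(5000):                      # 50 s of simulated time
    thrust = pid.update(1.0, alt)
    vel += (thrust - 0.5 * vel) * 0.01     # velocity damping keeps the toy plant stable
    alt += vel * 0.01
```

In the actual quadcopter setting, an outer position loop and an inner attitude loop of this form would each feed corrections into the motor-speed mixing.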

Ballistics_Simulation

MATLAB simulation of the six-degree-of-freedom (6-DOF) ballistic flight of a 120 mm mortar round.

actor_critic_quadcopter

An Advantage Actor-Critic (A2C) reinforcement learning agent controls the motor speeds of a quadcopter to hold a stable hover after a random angular acceleration perturbation of 0–3 degrees per second about each control axis: pitch, roll, and yaw.

Language: MATLAB · Stargazers: 16 · Issues: 1
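
The advantage signal at the heart of A2C can be written down directly. The discount factor and the example values below are illustrative, not taken from the repo.

```python
GAMMA = 0.99  # discount factor (assumed value)

def advantage(reward, value_s, value_next, done):
    """A(s, a) = (r + gamma * V(s')) - V(s), dropping the bootstrap term at
    terminal states.  The actor's policy-gradient update is weighted by this
    quantity, and the critic regresses V(s) toward the same target."""
    target = reward + (0.0 if done else GAMMA * value_next)
    return target - value_s
```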

Bayesian_Opt_NN_Building_Heating_Loads

This project uses Bayesian optimization to find the optimal hyperparameters for a fully connected feed-forward neural network that estimates the heating load on a building from eight input features.

Language: MATLAB · Stargazers: 8 · Issues: 0

sonar_mine_NN_v_RF

This project uses a grid search to optimize and train two models, a fully connected feed-forward neural network and a random forest, which classify sonar returns as simulated mines (metal cylinders) or standard rocks (false mines).

Language: MATLAB · Stargazers: 5 · Issues: 0
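
The grid-search pattern used here is simple to sketch: evaluate every combination of hyperparameters against a validation score and keep the best. The grid values and the stand-in scoring function below are hypothetical, not the repo's NN or random-forest training code.

```python
from itertools import product

grid = {"hidden_units": [8, 16, 32], "learning_rate": [0.1, 0.01]}

def validation_score(hidden_units, learning_rate):
    # Toy surrogate for "train a model and score it on held-out data";
    # it peaks at 16 units and a learning rate of 0.01.
    return -abs(hidden_units - 16) - 100 * abs(learning_rate - 0.01)

best_score, best_params = float("-inf"), None
for values in product(*grid.values()):       # every combination in the grid
    params = dict(zip(grid.keys(), values))
    score = validation_score(**params)
    if score > best_score:
        best_score, best_params = score, params
```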

Bayesian_Optimization_NN_HeartFailure

This project predicts the likelihood of heart failure. The work proceeds in three parts: exploratory data analysis (EDA) and data preparation; the creation of three initial binary classification models (logistic regression, random forest, and a neural network); and Bayesian optimization of the neural network's hyperparameters.

Language: Jupyter Notebook · Stargazers: 2 · Issues: 0

ReinforcementLearning_ActorCritic_Practice

Actor-Critic model trained using value advantages on the OpenAI Gym CartPole-v0 environment.

Language: Python · Stargazers: 2 · Issues: 0

MATLAB_Regression_Results_Statistics_Plots

The function calculates fit statistics and creates charts that show how well a supervised learning regression model predicted the target data.

Language: MATLAB · Stargazers: 1 · Issues: 0

ML_model_shootout_MATLAB

Grid search over hyperparameters for multiple supervised learning models, including neural networks, random forests, and other tree-ensemble models.

Language: MATLAB · License: GPL-3.0 · Stargazers: 1 · Issues: 0

Gym_QLearning_and_RewardShaping

This repo demonstrates basic Q-learning for the Mountain Car Gym environment. It also shows how reward shaping can result in faster training of the agent.

Language: Python · Stargazers: 0 · Issues: 1
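
The reward-shaping idea can be illustrated without Gym. Below, a 10-state chain stands in for Mountain Car (sparse reward only at the goal), and a potential-based shaping term F = gamma·phi(s') − phi(s) with phi(s) = s rewards progress toward the goal; Ng et al.'s classic result says this form leaves the optimal policy unchanged. The environment and constants are illustrative, not the repo's code.

```python
import random

N, GAMMA, ALPHA, EPS = 10, 0.99, 0.5, 0.1

def train(shaped, episodes=300, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]        # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(100):
            if rng.random() < EPS:            # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == N - 1 else 0.0   # sparse goal reward
            if shaped:
                r += GAMMA * s2 - s           # shaping term F(s, s'), phi(s) = s
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
            if s == N - 1:
                break
    return Q
```

With shaping on, every rightward step earns an immediate positive signal, so the agent discovers the goal far sooner than it would from the sparse reward alone.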

PyGame_Tanks

Top-down tank shooter game built with PyGame. Players compete against enemy tanks that have basic decision-making abilities. The next phase will use reinforcement learning to give the enemy tanks better decision-making abilities.

Language: Python · Stargazers: 0 · Issues: 0

brianwade1

Config files for my GitHub profile.

Stargazers: 0 · Issues: 0

Synthetic_Terrain_and_Line_of_Sight

This program generates synthetic two-dimensional terrain and calculates line-of-sight along it. It can generate any number of synthetic terrain sets together with line-of-sight vectors that indicate whether line-of-sight exists between the observer and each point along the terrain.

Language: Python · Stargazers: 0 · Issues: 1
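
The standard sweep for line-of-sight over a terrain profile is short enough to sketch: a point is visible when its elevation angle from the observer's eye is at least the maximum angle of any closer terrain point. Unit horizontal spacing and the observer height below are assumptions, not the repo's parameters.

```python
import math

def line_of_sight(elev, observer_height=2.0):
    """0/1 visibility of each ground point from an observer standing at index 0."""
    eye = elev[0] + observer_height
    visible = [1]                              # the observer's own cell
    max_slope = -math.inf
    for i in range(1, len(elev)):
        slope = (elev[i] - eye) / i            # rise over run (unit spacing)
        visible.append(1 if slope >= max_slope else 0)
        max_slope = max(max_slope, slope)
    return visible
```

For example, a single hill hides everything behind it: `line_of_sight([0, 0, 0, 5, 0, 0])` marks the two points past the hill as not visible.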

terrain_line_of_sight_neuralnet

This program trains a fully connected feed-forward neural network to estimate whether line-of-sight exists (binary, 0 or 1) for equally spaced points along a two-dimensional terrain. The inputs to the model are the elevations of equally spaced points along the line-of-sight vector; the outputs are binary predictions of whether line-of-sight exists between the observer and each point on the ground.

Language: Python · Stargazers: 0 · Issues: 1

TicTacToe_QLearning

This repo teaches an agent to play tic-tac-toe using the standard Q-learning algorithm. It also includes a form of action masking in which the environment returns only feasible actions (board locations without an X or O) and the agent evaluates the Q-values of only those feasible actions.

Language: Python · Stargazers: 0 · Issues: 0
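
The masking pattern described above can be sketched as follows: both the greedy action choice and the bootstrap max range only over feasible actions. The board encoding (a 9-character string), function names, and constants are illustrative assumptions, not the repo's implementation.

```python
import random
from collections import defaultdict

Q = defaultdict(float)                         # Q[(board, action)]

def feasible_actions(board):
    """Indices of empty cells -- the mask the environment returns."""
    return [i for i, c in enumerate(board) if c == " "]

def choose_action(board, eps, rng):
    acts = feasible_actions(board)
    if rng.random() < eps:
        return rng.choice(acts)                     # exploration, still masked
    return max(acts, key=lambda a: Q[(board, a)])   # masked greedy step

def update(board, action, reward, next_board, alpha=0.5, gamma=0.9):
    nxt = feasible_actions(next_board)         # empty mask -> terminal, no bootstrap
    bootstrap = max((Q[(next_board, a)] for a in nxt), default=0.0)
    Q[(board, action)] += alpha * (reward + gamma * bootstrap - Q[(board, action)])
```

Because infeasible moves never enter the argmax or the target, the agent wastes no updates learning that occupied squares are bad.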

TicTacToe_Reinforcement_Learning

This repo teaches an agent to play tic-tac-toe using Ray RLlib. It is still a work in progress.

Language: Python · Stargazers: 0 · Issues: 0