Gradient descent is an optimization algorithm commonly used to train machine learning models and neural networks. Training data helps these models learn over time, and the cost function within gradient descent acts as a barometer, gauging the model's accuracy with each iteration of parameter updates. Until the cost function is close to or equal to zero, the model continues to adjust its parameters to yield the smallest possible error. Once machine learning models are optimized for accuracy, they can be powerful tools for artificial intelligence (AI) and computer science applications.
The purpose of this project is to demonstrate a practical implementation of the gradient descent algorithm.
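To make the idea concrete, here is a minimal sketch (not this project's code) of gradient descent fitting a line to synthetic data with NumPy; the variable names and hyperparameters are illustrative assumptions:

```python
import numpy as np

# Synthetic data for y = 3x + 2 with a little noise (hypothetical example)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * X + 2.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0        # initial parameters
learning_rate = 0.1    # step size (assumed value for this sketch)

for step in range(500):
    y_pred = w * X + b
    error = y_pred - y
    cost = np.mean(error ** 2)          # mean squared error: the "barometer"
    grad_w = 2.0 * np.mean(error * X)   # gradient of the cost w.r.t. w
    grad_b = 2.0 * np.mean(error)       # gradient of the cost w.r.t. b
    w -= learning_rate * grad_w         # update rule: step against the gradient
    b -= learning_rate * grad_b

print(f"w = {w:.3f}, b = {b:.3f}, cost = {cost:.5f}")
```

After the loop, the parameters should approach the true values w = 3 and b = 2, with the cost shrinking toward zero as described above.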
The following tools are required to run the software, along with how to install them:
- VSCode
- Python
- NumPy
To install the NumPy library, run the following command in a terminal:
pip install numpy
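As an optional check, the following command should print the installed NumPy version if the installation succeeded:

python -c "import numpy; print(numpy.__version__)"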
- Contributing to this project is fairly easy. This document shows you how to get started.
- The Codebase Structure section has detailed information about how the various files in this project are structured.
- Please ensure that any changes you make are in accordance with the Coding Guidelines of this repo.
- Fork the repo
- Check out a new branch and name it according to what you intend to do:
- Example:
$ git checkout -b BRANCH_NAME
If you get an error, you may need to fetch fooBar first by using
$ git remote update && git fetch
- Use one branch per fix / feature
- Commit your changes
- Please provide a commit message that explains what you've done
- Please make sure your commits follow the conventions
- Commit to the forked repository
- Example:
$ git commit -am 'Add some fooBar'
- Push to the branch
- Example:
$ git push origin BRANCH_NAME
- Make a pull request
- Make sure you send the PR to the fooBar branch - Travis CI is watching you!
If you follow these instructions, your PR will land pretty safely in the main repo!