Arctangent1759 / generic_repository

Me:
---
Name: Alexander Chu
SID: 23460953

Partner:
--------
Name: Leo Wu
SID: 23661771

Question 1
==========

To reproduce the results of question 1, run:

python q1.py <output>

which will train an SVM on 10,000 samples with parameters "-B 10000 -c 0.001 -s 2" and write the resulting model to the file at <output>.
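
A minimal sketch of what q1.py presumably does, assuming it wraps LIBLINEAR's Python bindings (the parameter string above is in LIBLINEAR's option format); the data path "train.data" and its LIBSVM sparse format are assumptions, not taken from this repository:

    # Sketch only: train an SVM with LIBLINEAR's python interface and save the model.
    from liblinearutil import svm_read_problem, train, save_model

    y, x = svm_read_problem('train.data')   # hypothetical training file in LIBSVM format

    # -B 10000: bias term, -c 0.001: cost, -s 2: L2-regularized L2-loss SVC (primal)
    model = train(y[:10000], x[:10000], '-B 10000 -c 0.001 -s 2')

    save_model('q1_svm.model', model)       # plays the role of <output>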

To train the SVM on other training set sizes, run:

python q1.py <c> <output>

where <c> is an integer from 0 to 6 specifying the size of the training set, and <output> is the path of the output file that will hold the resulting model.

To reproduce the training-set-size vs. error graph, the user needs matplotlib installed (see http://matplotlib.org/). To generate the graph, run:

python q1.py plot
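
The graph itself is a plain matplotlib line plot. A minimal sketch of the plotting step is below; the training-set sizes and error rates are placeholders for illustration only, not results from this assignment:

    # Sketch only: plot error rate against training set size.
    # The sizes and error values below are placeholders, not real results.
    import matplotlib.pyplot as plt

    sizes = [100, 200, 500, 1000, 2000, 5000, 10000]      # hypothetical sizes for <c> = 0..6
    errors = [0.30, 0.24, 0.19, 0.16, 0.14, 0.12, 0.11]   # placeholder error rates

    plt.plot(sizes, errors, marker='o')
    plt.xscale('log')
    plt.xlabel('Training set size')
    plt.ylabel('Error rate')
    plt.savefig('error_vs_training_size.png')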

For more detailed instructions on the usage of q1.py, run:

python q1.py

Usage Example:
python q1.py 6 foobar.model

Question 2
==========

To reproduce the results of question 2, run:

python q2.py <input>

where <input> is the name of the predictions file generated by q1.py. The default file is "q1_svm.model". Reproducing the confusion matrix also requires matplotlib.

Usage Example:
python q2.py q1_svm.model
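
For reference, a confusion matrix of the kind q2.py draws can be built with numpy and matplotlib as sketched below; the file names and the assumption that labels are small non-negative integers are hypothetical:

    # Sketch only: build and plot a confusion matrix.
    import numpy as np
    import matplotlib.pyplot as plt

    true_labels = np.loadtxt('true_labels.txt', dtype=int)   # hypothetical ground truth
    pred_labels = np.loadtxt('predictions.txt', dtype=int)   # hypothetical predictions

    n = int(max(true_labels.max(), pred_labels.max())) + 1
    conf = np.zeros((n, n), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        conf[t, p] += 1                                       # rows: true, columns: predicted

    plt.imshow(conf, interpolation='nearest', cmap='Blues')
    plt.xlabel('Predicted label')
    plt.ylabel('True label')
    plt.colorbar()
    plt.savefig('confusion_matrix.png')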

Question 3
==========

To reproduce the results of question 3, run:

python q3.py <filename> -c <c1> <c2> <c3> ... -B <B1> <B2> <B3> ... 

which will perform 10-fold cross-validation over the costs <c1>, <c2>, <c3>, ... and biases <B1>, <B2>, <B3>, ..., and write the resulting SVM model to the file at <filename>. If the parameter list for -c or -B is omitted, then c or B takes its default value of 0.001 or 10000, respectively.

To run cross-validation on only one value of c and/or B, run:

python q3.py <filename> -c <c> -B <B>

which will run cross-validation using parameters <c> and <B>, and write the result to <filename>.

Usage Example:
python q3.py foobar.model -c 0.1 0.01 0.001 -B 1 10 100
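
The parameter search presumably resembles the sketch below, which relies on LIBLINEAR's "-v 10" option to report 10-fold cross-validation accuracy for each (c, B) pair; the data path, the "-s 2" solver choice, and the output name are assumptions:

    # Sketch only: grid search over cost and bias with 10-fold cross-validation,
    # assuming LIBLINEAR's python interface.
    from liblinearutil import svm_read_problem, train, save_model

    y, x = svm_read_problem('train.data')    # hypothetical training file

    costs = [0.1, 0.01, 0.001]
    biases = [1, 10, 100]

    best = None
    for c in costs:
        for B in biases:
            # With '-v 10', train() returns 10-fold cross-validation accuracy
            # instead of a model.
            acc = train(y, x, '-s 2 -c {} -B {} -v 10 -q'.format(c, B))
            if best is None or acc > best[0]:
                best = (acc, c, B)

    # Retrain on the full training set with the best pair and save the model.
    _, c, B = best
    model = train(y, x, '-s 2 -c {} -B {}'.format(c, B))
    save_model('foobar.model', model)        # plays the role of <filename>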
