nfmcclure / tensorflow_cookbook

Code for Tensorflow Machine Learning Cookbook

Home Page: https://www.packtpub.com/big-data-and-business-intelligence/tensorflow-machine-learning-cookbook-second-edition


Some questions about the details of "05_nonlinear_svm.py"

hunterzju opened this issue · comments

The example works well, but I'd like to understand the mathematical theory behind the code — could you show me how the code solves the SVM problem?
To be specific, I tried the SVM in sklearn, but it was too slow. However, sklearn has a parameter "C" that controls the soft margin. I want to know whether this code supports the soft margin and how it works.
Thanks!

commented

Hi @hunterzju ,

Great question. And thanks for bringing this up. I just made a jupyter notebook out of the linear-svm code here:

https://github.com/nfmcclure/tensorflow_cookbook/blob/master/04_Support_Vector_Machines/02_Working_with_Linear_SVMs/02_linear_svm.ipynb

There is a constant in the loss function, alpha, which controls the flexibility of the soft margin. The higher it is, the more tolerance there is for misclassified points. If you want hard-margin behaviour, you can set alpha = 0.
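To make the role of alpha concrete, here is a minimal NumPy sketch of a soft-margin linear SVM loss of the form used in the notebook (mean hinge loss plus an alpha-weighted L2 penalty on the weights). The variable names `A` and `b` are illustrative; the notebook builds the same expression with TensorFlow ops. Note that alpha acts roughly like the inverse of sklearn's `C`: a larger alpha penalizes the weights more and tolerates more margin violations.

```python
import numpy as np

def svm_loss(A, b, X, y, alpha):
    """Soft-margin linear SVM loss: mean hinge loss plus an
    alpha-weighted L2 penalty on the weight vector A.

    X: one sample per row; y: labels in {+1, -1}.
    """
    margins = y * (X @ A - b)                # signed margins y * f(x)
    hinge = np.maximum(0.0, 1.0 - margins)   # zero when margin >= 1
    return hinge.mean() + alpha * np.sum(A ** 2)

# Toy data: two linearly separable points on a line
X = np.array([[2.0], [-2.0]])
y = np.array([1.0, -1.0])
A = np.array([1.0])
b = 0.0

# With alpha = 0 the penalty vanishes (hard-margin-like behaviour):
print(svm_loss(A, b, X, y, alpha=0.0))  # → 0.0
# With alpha > 0 the L2 term is added even though all points are classified:
print(svm_loss(A, b, X, y, alpha=0.1))  # → 0.1
```

With alpha = 0 the optimizer only drives the hinge term to zero, which recovers hard-margin behaviour on separable data; with alpha > 0 it trades classification errors against smaller weights, i.e. a wider, softer margin.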

Unfortunately I didn't illustrate it very well, because the dataset I chose is linearly separable, so the soft margin is hard to see here. If you try a different combination of Iris features, you should be able to see the soft margin at work. Let me know if you have any more questions or if it isn't working for you.

I'm going to close the issue for now, but please reopen if you have any more questions. Thanks!