- Collect training data (write your own code to create TrainingData)
- Scale the training data
- Select parameters and train with them
- Test/run classification (build the test data the same way)
fprintf(fp, "3 "); // class label: 1, 2, ...
for (j = 0; j < 12; j++)
    fprintf(fp, "%d:%f ", j + 1, mel_cep[j]); // LIBSVM feature indices start at 1
fprintf(fp, "\n");
svm-scale.exe -l -1 -u 1 -s range1 TrainingData > TrainingData.scale
python grid.py TrainingData.scale
Got the best parameters c, g: (2.0, 2.0)
svm-train.exe -c 2 -g 2 TrainingData.scale
The output is TrainingData.scale.model
fprintf(fp, "1 "); // if you know the true label, svm-predict can report accuracy
for (j = 0; j < 12; j++)
    fprintf(fp, "%d:%f ", j + 1, mel_cep[j]);
fprintf(fp, "\n");
Scale the test data with the same scaling factors as the training data:
svm-scale.exe -r range1 test_libsvm.t > test_libsvm.t.scale
Run the classification/prediction (write your own code here):
svm-predict.exe test_libsvm.t.scale TrainingData.scale.model test_libsvm.t.predict
===================
To use the cross validation to check the accuracy:
svm-train.exe -v 5 TrainingData
svm-train.exe -v 5 TrainingData.scale
svm-train.exe -c 2 -g 2 -v 5 TrainingData.scale
Without normalization (scaling), Cross Validation Accuracy = 87%
With scaling to (-1, 1), Cross Validation Accuracy = 92%
With parameter selection (-c 2 -g 2), Cross Validation Accuracy = 95%
To get the probability information, in training part, use:
svm-train.exe -c 2 -g 2 -b 1 TrainingData.scale
In the test/run code, set "int predict_probability = 1;" so the program calls "predict_label = svm_predict_probability(model, x, prob_estimates);", which writes the probability values to the output file.
[update:8/29/2011]:
To run step 3 successfully, make sure grid.py is in the same directory, and around line 22 of grid.py check that gnuplot is installed in the correct location and that svmtrain_exe points to the correct directory (you may copy it to a 'window' folder in the current directory).
In the file TrainingData.scale.model, take a look at the 'label' line: it could be "6 2 3 4 5 1" instead of "1 2 3 4 5 6", and the probability columns in the output follow that order.