I just created a model that does binary classification, with a final Dense layer of 1 unit and a sigmoid activation. However, I now get an error when I want to convert it to Core ML.
I tried changing the number of units to 2 and the activation to softmax, but it still didn't work.
import coremltools as ct
from PIL import Image

#1. define the image input (scale pixel values to [0, 1])
image_input = ct.ImageType(scale=1/255)
#2. give the classifier its class labels
classifier_config = ct.ClassifierConfig(class_labels=[0, 1]) #ERROR here
#3. convert the model
coreml_model = ct.convert("mask_detection_model_surgical_mask.h5",
                          inputs=[image_input], classifier_config=classifier_config)
#4. load and resize an example image
example_image = Image.open("Unknown3.jpg").resize((256, 256))
# Make a prediction using Core ML (mymodel is the original Keras model, loaded elsewhere)
out_dict = coreml_model.predict({mymodel.input_names[0]: example_image})
print(out_dict["classLabels"])
# save to disk
#coreml_model.save("FINALLY.mlmodel")
I found the answer to my question.
Use a softmax activation and 2 Dense units as the final layer, with either loss='binary_crossentropy' or loss='categorical_crossentropy'.
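For reference, a minimal Keras sketch of that final layer (the input shape and everything before the last layer are assumptions, not the original architecture):

from tensorflow.keras import layers, models

# a minimal sketch of the classifier head; layers before the last one are assumptions
model = models.Sequential([
    layers.Flatten(input_shape=(256, 256, 3)),
    layers.Dense(2, activation="softmax"),  # 2 units + softmax instead of 1 unit + sigmoid
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])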
Good luck to hundreds of people who posted a similar question but received no answer.
My project is to forecast the WTI crude oil price using an ANN. I already have the dataset, and I divided it into 70% training data and 30% testing data. That's the only basic thing I know and have done for my project. Now I don't know what to do next, since I don't have any tutorial or guidance to refer to. Can anyone tell me what to do next?
Consider that you have TrainData, TargetTrain, TestData and TargetTest.
TrainData and TestData hold samples in rows and features in columns.
TargetTrain and TargetTest are the class labels, either 0 or 1.
InputNum=size(TrainData,2);  % number of features
OutputNum=2;                 % two-class problem
Xtr=TrainData;
Ytr=full(ind2vec(double(TargetTrain+1)));  % one-hot targets (ind2vec needs indices >= 1)
Xts=TestData;
Yts=full(ind2vec(double(TargetTest+1)));
%% Network Structure
net = feedforwardnet(11);       % one hidden layer with 11 neurons
%% Training
net.trainParam.showWindow=1;    % show the training GUI
net.trainParam.max_fail=7;      % stop after 7 consecutive validation failures
net = train(net,Xtr',Ytr);      % train expects features x samples
For evaluation, you can test:
out_train=net(Xtr');   % network outputs on the training set
out_test=net(Xts');    % network outputs on the test set
This code creates an ANN with 11 hidden neurons.
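To turn those outputs into class labels and an accuracy figure, here is a minimal sketch, assuming the variables defined above:

% a minimal sketch, assuming the variables defined above
PredTrain = vec2ind(out_train) - 1;   % one-hot outputs back to 0/1 labels
PredTest  = vec2ind(out_test) - 1;
TrainAcc  = mean(PredTrain == double(TargetTrain(:)')) * 100;
TestAcc   = mean(PredTest  == double(TargetTest(:)')) * 100;
fprintf('Train accuracy: %.2f%%, Test accuracy: %.2f%%\n', TrainAcc, TestAcc);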
I have downloaded some datasets from UCI for an RVM classification task. However, I am not sure how to use them. I guess these datasets must be normalized, or need some other preprocessing, before being used for training and testing.
For example, I downloaded the 'banknote authentication Data Set' from UCI and used svmtrain in MATLAB to obtain an SVM model (the plan is to test the SVM model first, and then move on to the RVM code if the SVM classification result is OK).
>> load banknote
>> meas = banknote(:,1:4);
>> species = banknote(:,5);
>> data = meas; % columns 1-4 are already the feature matrix
>> groups = ismember(species,1);
>> [train, test] = crossvalind('holdOut',groups);
>> cp = classperf(groups);
>> svmStruct = svmtrain(data(train,:),groups(train),'showplot',true);
This is what I do in MATLAB, and I get the following message:
??? Error using ==> svmtrain at 470
Unable to solve the optimization problem:
Maximum number of iterations exceeded; increase options.MaxIter.
To continue solving the problem with the current solution as the
starting point, set x0 = x before calling quadprog.
And here is part of the dataset (1372 rows in total; some are used for training and the rest for testing):
3.6216,8.6661,-2.8073,-0.44699,0
4.5459,8.1674,-2.4586,-1.4621,0
3.866,-2.6383,1.9242,0.10645,0
3.4566,9.5228,-4.0112,-3.5944,0
0.32924,-4.4552,4.5718,-0.9888,0
4.3684,9.6718,-3.9606,-3.1625,0
3.5912,3.0129,0.72888,0.56421,0
2.0922,-6.81,8.4636,-0.60216,0
3.2032,5.7588,-0.75345,-0.61251,0
1.5356,9.1772,-2.2718,-0.73535,0
1.2247,8.7779,-2.2135,-0.80647,0
3.9899,-2.7066,2.3946,0.86291,0
1.8993,7.6625,0.15394,-3.1108,0
-1.5768,10.843,2.5462,-2.9362,0
3.404,8.7261,-2.9915,-0.57242,0
So, any good advice about this problem? Thank you all for helping.
Update: use a scaling function to normalize the features. And if the dataset has too many features, PCA can be used to reduce the dimensionality.
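A minimal sketch of that preprocessing, assuming the data, groups and train variables from the code above (zscore and pca are from the Statistics Toolbox):

% a minimal sketch, assuming the data/groups/train variables above
dataN = zscore(data);                        % zero mean, unit variance per feature
[~, score, ~, ~, explained] = pca(dataN);    % optional: PCA on the scaled features
k = find(cumsum(explained) >= 95, 1);        % keep components explaining 95% of variance
dataReduced = score(:, 1:k);
svmStruct = svmtrain(dataN(train,:), groups(train));  % retrain on scaled features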
I'm using LIBSVM and MATLAB to classify 34x5 data into 3 classes. I applied 10-fold cross-validation and an RBF kernel. The output is this confusion matrix with a 0.88 correct rate (88% accuracy). This is my confusion matrix:
9 0 0
0 3 0
0 4 18
I would like to know which SVM options, or which other machine-learning classification methods, I should consider to improve the accuracy. Any help?
Here is my SVM classification code:
load Turn180SVM1; % load data file
libsvm_options = '-s 1 -t 2 -d 3 -r 0 -c 1 -n 0.1 -p 0.1 -m 100 -e 0.000001 -h 1 -b 0 -wi 1 -q'; % svm options
C=size(Turn180SVM1,2);
% cross validation
for i = 1:10
indices = crossvalind('Kfold',Turn180SVM1(:,C),10);
cp = classperf(Turn180SVM1(:,C));
for j = 1:10
[X, Z] = find(indices(:,end)==j);%testing
[Y, Z] = find(indices(:,end)~=j);%training
feature_training = Turn180SVM1([Y'],[1:C-1]); feature_testing = Turn180SVM1([X'],[1:C-1]);
class_training = Turn180SVM1([Y'],end); class_testing = Turn180SVM1([X'], end);
% SVM Training
disp('training');
[feature_training,ps] = mapminmax(feature_training',0,1);
feature_training = feature_training';
feature_testing = mapminmax('apply',feature_testing',ps)';
model = svmtrain(class_training,feature_training,libsvm_options);
%
% SVM Prediction
disp('testing');
TestPredict = svmpredict(class_testing,sparse(feature_testing),model);
TestErrap = sum(TestPredict~=class_testing)./length(class_testing)*100;
cp = classperf(cp, TestPredict, X);
disp(((i-1)*10 )+j);
end;
end;
[ConMat,order] = confusionmat(class_testing,TestPredict); % true labels first, then predictions (last fold only)
cp.CorrectRate
cp.CountingMatrix
Many methods exist. If your tuning procedure is optimal (e.g. well-executed cross-validation), your choices include:
Improve preprocessing, perhaps tailor new aggregated features based on domain knowledge. Most importantly (and most effectively): make sure your inputs are standardized properly, for example by scaling every dimension onto [-1,1].
Use another kernel: RBF kernels are known to perform very well in a wide variety of settings, but specialised kernels exist for many tasks. Don't consider this unless you know what you are doing. Since you are dealing with a low-dimensional problem, RBF is probably a good choice if your data is not structured.
Reweigh training instances: particularly important when your data set is unbalanced (e.g. some classes have a lot less instances than others). You can do this with the -wX options in libsvm. All sorts of reweighting schemes exist, including variants of boosting. I'm not a major fan of this, since such approaches are prone to overfitting.
Change the cross-validation cost function to suit your exact needs. Is accuracy really what you are looking for, or do you want, say, a high F1 or high ROC-AUC? It is surprising how many people optimize a performance measure they are not really interested in; see the sketch after this list for computing F1 from a confusion matrix.
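For example, a minimal sketch that computes per-class precision/recall and macro-averaged F1, assuming the class_testing and TestPredict vectors from the question:

% a minimal sketch, assuming class_testing / TestPredict from the question
C = confusionmat(class_testing, TestPredict); % rows: true class, columns: predicted
precision = diag(C) ./ sum(C, 1)';            % per-class precision
recall    = diag(C) ./ sum(C, 2);             % per-class recall
f1 = 2 * (precision .* recall) ./ (precision + recall);
macroF1 = mean(f1, 'omitnan')                 % macro-averaged F1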
I tried an SVM with 4 features. I used LIBSVM to train the classifier, and then I wanted to draw the decision boundaries. I tried to draw the 1-vs-3 (one-vs-one) boundary in 2D space in MATLAB, where the 2D features were columns 1 and 3 of the Iris data, but it drew the wrong decision boundary. What is wrong? What should I do?
coef1v3 = [model.sv_coef(1:7,2); model.sv_coef(27:45,1)]; % dual coefficients of the 1-vs-3 subproblem
SVs1v3 = [model.SVs(1:7,:); model.SVs(27:45,:)];          % support vectors of classes 1 and 3
b=model.rho;
w1v3 = SVs1v3'*coef1v3;  % primal weight vector w = SVs' * alpha
b1v3=b(2);               % rho of the 1-vs-3 pair (pair order: 1v2, 1v3, 2v3)
xp=linspace(min(data(:,1)),max(data(:,1)));
yp1=(-w1v3(1)*xp+b1v3)/w1v3(3);  % solve w1*x1 + w3*x3 - b = 0 for x3
plot(xp , yp1);
Nothing is wrong. Just use dimensions 1 and 3 of the weight vector, the same two features you plotted; there is no need to try every dimension. I did this and got the correct result.
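As a more general version of the hard-coded support-vector index ranges above, here is a minimal sketch that derives them from model.nSV (assuming a 3-class one-vs-one LIBSVM model and the same data matrix):

% a minimal sketch, assuming a 3-class one-vs-one LIBSVM model
% (pair order in sv_coef/rho: 1v2, 1v3, 2v3)
nSV = model.nSV;                      % number of SVs per class
idx1 = 1:nSV(1);                      % SVs of class 1
idx3 = nSV(1)+nSV(2)+1 : sum(nSV);    % SVs of class 3
coef = [model.sv_coef(idx1,2); model.sv_coef(idx3,1)];
w = model.SVs([idx1, idx3],:)' * coef; % weight vector of the 1-vs-3 classifier
b = model.rho(2);
xp = linspace(min(data(:,1)), max(data(:,1)));
plot(xp, (-w(1)*xp + b) / w(3));      % boundary in the (feature 1, feature 3) plane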