Hi everybody. I'd like to use Caffe to train a 5-class detection task with "SSD: Single Shot MultiBox Detector", so I changed num_classes from 21 to 6. However, I get the following error:
"Check failed: num_priors_ * num_classes_ == bottom[1]->channels() (52392 vs. 183372) Number of priors must match number of confidence predictions."
I can understand this error, and I found that 52392/6 = 183372/21 = 8732, the number of priors. I did change num_classes to 6, but the number of confidence predictions is still 183372. How can I solve this problem? Thank you very much!
Since SSD depends on the number of labels not only for the classification output but also for the bounding-box prediction, you would need to change num_output in several other places in the model.
I would strongly suggest not doing that manually, but rather using the Python scripts provided in the 'examples/ssd' folder. For instance, you can change line 277 in 'examples/ssd/ssd_pascal_speed.py' to:
num_classes = 6 # instead of 21 (5 object classes plus background)
And then use the model files this script provides.
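As a quick sanity check on the numbers in the error message, here is a small sketch of the arithmetic (plain Python, not part of SSD itself):
# the failing check is: num_priors * num_classes == number of confidence channels
conf_channels_old = 183372            # produced by the unchanged conf layers (21 classes)
num_priors = conf_channels_old // 21  # 8732 priors, independent of num_classes
print(num_priors * 6)                 # 52392, what the loss layer expects after the change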
I just created a model that does binary classification, with a dense layer of 1 unit at the end and sigmoid activation. However, I now get this error when I want to convert it to Core ML.
I tried changing the number of units to 2 and the activation to softmax, but it still didn't work.
import coremltools as ct
from PIL import Image

# 1. define the input type (an image scaled to [0, 1])
image_input = ct.ImageType(scale=1/255)

# 2. give the classifier its class labels
classifier_config = ct.ClassifierConfig(class_labels=[0, 1]) # ERROR here

# 3. convert the model
coreml_model = ct.convert("mask_detection_model_surgical_mask.h5",
                          inputs=[image_input], classifier_config=classifier_config)

# 4. load and resize an example image
example_image = Image.open("Unknown3.jpg").resize((256, 256))

# make a prediction using Core ML (mymodel is the original Keras model)
out_dict = coreml_model.predict({mymodel.input_names[0]: example_image})
print(out_dict["classLabels"])

# save to disk
# coreml_model.save("FINALLY.mlmodel")
I found the answer to my question.
Use softmax activation and 2 Dense units as the final layer, with either loss='binary_crossentropy' or loss='categorical_crossentropy'.
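For anyone wanting to see that in context, here is a minimal sketch of such a model in Keras (the convolutional layers are placeholders for whatever feature extractor you already have, not the asker's actual architecture):
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(256, 256, 3)),
    layers.Conv2D(16, 3, activation="relu"),  # placeholder feature layers
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),    # 2 units + softmax, as described
])
# with 2 output units, one-hot labels pair with categorical_crossentropy
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])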
Good luck to the hundreds of people who posted a similar question but received no answer.
I'm familiarizing myself with PySpark and Spark ML at the moment. To do so, I'm using the Titanic dataset to train a GLM that predicts 'Fare' in that dataset.
I'm following the Spark documentation closely. I do get a working model (which I call glm_fare), but when I try to assess the trained model using summary, I get the following error message:
RuntimeError: No training summary available for this GeneralizedLinearRegressionModel
Why is this?
The training code was as follows:
from pyspark.ml.regression import GeneralizedLinearRegression

glm_fare = GeneralizedLinearRegression(
labelCol="Fare",
featuresCol="features",
predictionCol='prediction',
family='gamma',
link='log',
weightCol='wght',
maxIter=20
)
glm_fit = glm_fare.fit(training_df)
glm_fit.summary
Just in case someone comes across this question, I ran into this problem as well and it seems that this error occurs when the Hessian matrix is not invertible. This matrix is used in the maximization of the likelihood for estimating the coefficients.
The matrix is not invertible if one of the eigenvalues is 0, which occurs when there is multicollinearity in your variables. This means that one of the variables can be predicted with a linear combination of the other variables. Consequently, the effect of each of the variables cannot be identified with any significance.
A possible solution would be to find the variables that are (multi)collinear and remove one of them from the regression. Note however that multicollinearity is only a problem if you want to interpret the coefficients and not when the model is used for prediction.
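One rough way to spot pairwise collinearity before fitting is to inspect the feature correlation matrix. A sketch, assuming training_df has the same 'features' vector column as in the question (note that pairwise correlation won't catch every form of multicollinearity):
import numpy as np
from pyspark.ml.stat import Correlation

corr = Correlation.corr(training_df, "features").head()[0].toArray()
# off-diagonal entries close to +/-1 point at near-collinear feature pairs
mask = (np.abs(corr) > 0.99) & ~np.eye(corr.shape[0], dtype=bool)
print(list(zip(*np.where(mask))))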
The GeneralizedLinearRegressionModel docs note that a training summary may not be available for a model.
However, you can do an initial check to avoid the error via glm_fit.hasSummary, which is a public boolean attribute (a property in PySpark, a method in Scala).
Use it as:
if glm_fit.hasSummary:
    print(glm_fit.summary)
Here is a direct link to the PySpark source code, and to the GeneralizedLinearRegressionTrainingSummary class source code where the error is thrown.
Make sure your input variables for the one-hot encoder start from 0.
One mistake I made that caused the summary not to be created: I fed quarter (1, 2, 3, 4) directly into the one-hot encoder and got a vector of length 4 in which one column was always 0. I converted quarter to 0, 1, 2, 3 and the problem was solved.
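A sketch of that fix with a hypothetical 'quarter' column, assuming Spark 3.x (where OneHotEncoder is fitted before transforming):
from pyspark.sql import functions as F
from pyspark.ml.feature import OneHotEncoder

# shift the 1-based quarter to 0-based so the encoder produces no always-zero column
df = training_df.withColumn("quarter_idx", F.col("quarter") - 1.0)
encoder = OneHotEncoder(inputCols=["quarter_idx"], outputCols=["quarter_vec"])
df = encoder.fit(df).transform(df)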
I'm using the Caffe library to train a convolutional neural network (CNN). However, I'm getting the following error when using the concat layer to combine the output of two convolutional layers before applying it to an inner_product layer.
F1023 15:14:03.867435 2660 net.cpp:788] Check failed: target_blobs[j]->shape() == source_blob->shape() Cannot share param 0 weights from layer 'fc1'; shape mismatch. Source param shape is 400 800 (320000); target param shape is 400 400 (160000)
As far as I know, I am using the concat layer in exactly the same way as in BVLC_GoogLeNet. The concat layer can be found in my train.prototxt at pastebin under the name combined. The dimensions of my input blob are 256x8x7x24, where the data format in Caffe is batch_size x channels x height x width. I've tried training both through the pycaffe interface and from the console, and I get the same error. Below is the code for training from the console.
# shell out to the caffe binary; stderr (the training output) is redirected to the log file
solver_path = CAFFE_ROOT + 'build/tools/caffe train -solver '
model_path = self.run_dir + 'models/solver.prototxt'
log_path = self.run_dir + 'models/training.log'
p = subprocess.Popen("GLOG_logtostderr=1 {} {} 2> {}".format(solver_path, model_path, log_path),
                     shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
What is the meaning of this error? And how can it be resolved?
Update
As mentioned in the comments, the log contains nothing other than the error. The stack trace is the following:
# 0x7f231886e267 caffe::Net<>::ShareTrainedLayersWith()
# 0x7f231885c338 caffe::Solver<>::Test()
# 0x7f231885cc3e caffe::Solver<>::TestAll()
# 0x7f231885cd79 caffe::Solver<>::Step()
# 0x7f231885d6c5 caffe::Solver<>::Solve()
# 0x408d2b train()
# 0x4066f1 main
It should also be noted that my solver and code work fine for training the exact same CNN with only one "path" through the network, i.e. without the concat layer.
I believe the issue you're having is that your train net has been updated to have a concat layer while your test net hasn't.
That would explain the 400x400 vs. 400x800 mismatch, since your concat merges two 400-channel outputs into one 800-channel input to fc1. I can't know for certain without being able to see your test net.
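A quick way to confirm this from pycaffe (a sketch; the prototxt file names here are placeholders for your own):
import caffe

# load both phases and compare the fc1 weight shapes; a mismatch means the
# test prototxt was not updated to include the concat layer
train_net = caffe.Net('train.prototxt', caffe.TRAIN)
test_net = caffe.Net('test.prototxt', caffe.TEST)
print(train_net.params['fc1'][0].data.shape)  # e.g. (400, 800) with the concat
print(test_net.params['fc1'][0].data.shape)   # e.g. (400, 400) without it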
I'm teaching myself classification. I read and understood the MATLAB online help for the simple LDA classifier, which uses the Fisher iris dataset.
I have now moved on to SVM. But even though I use the exact syntax from the help page, I get an error about either not enough or too many input arguments.
I trained my SVM classifier using svmtrain via the command:
SVMStruct = svmtrain(training,labels);
where training is a 207-by-900 training matrix: 207 samples with 900 HoG descriptors (features) each. Similarly, labels is a 207-by-1 column vector consisting of either +1 or -1 for the respective samples.
I then wanted to test it and see if this works by calling:
Group = svmclassify(SVMStruct,sample,'Showplot',true)
where sample is a 2-by-900 matrix containing 2 test samples. I was expecting to get +1 and -1, as those are the labels the test samples should receive. But I get the error:
Too many input arguments.
And when I use the command
Group = svmclassify(SVMStruct,sample)
I get the error
Not enough input arguments.
You might have overloaded the svmclassify function.
Try
>> which svmclassify
to verify that you are actually calling the right function.
If you have overloaded the function (that is, created a different function with the same name svmclassify) and it is located higher in your path, then you'll need to rename the overloaded function and run svmclassify again.
I'm doing some cross-validation using a MATLAB-Weka interface that I got from the MATLAB File Exchange. My loop structure seems to work fine for Weka's Logistic classifier. However, when I try to do the exact same thing for AdaBoostM1, it throws the following error:
??? Java exception occurred: java.lang.ArrayIndexOutOfBoundsException
Error in ==> wekaClassify at 24 classProbs(t+1,:) = (classifier.distributionForInstance(testData.instance(t)))';
Error in ==> classifier_search at 225 [pred ~] = wekaClassify(matlab2weka('instance', featurelabels, tester), classifier);
I have determined through some testing that this only occurs when the number of instances in the training set is greater than the number of instances in the test set. I am sure you can see why that is a problem for me, since in most situations the training set is larger than the test set.
Is there something different about how I should format my inputs when using Adaboost rather than Logistic? Any information you can give regarding this problem would be so helpful.
I downloaded this code from this page: http://www.mathworks.com/matlabcentral/fileexchange/21204-matlab-weka-interface
Emails bounce from the account of the guy who made it, and he doesn't seem to respond to comments on the page - I'm hoping that maybe someone here has used this.
EDIT: Here is the code that I use to train and test the classifier:
classifier = trainWekaClassifier(matlab2weka('training', featurelabels, train), 'meta.AdaBoostM1', {['-P 100 -S 1 -I ', num2str(r), ' -W weka.classifiers.trees.DecisionStump']});
[pred ~] = wekaClassify(matlab2weka('instance', featurelabels, tester), classifier);
I haven't used this combination of software, so I can only take a guess at what could cause this.
Are your training/testing data matrices the right way round? They should be N-by-D (N instances, D features).
If you were passing in a D-by-N training matrix and a D-by-M testing matrix, then I would expect it to work only when M < N - which is what you describe - and even then, it wouldn't give a meaningful result.