In OrtCreateSession it fails trying to load an onnx model with message:
failed:[ShapeInferenceError] Attribute pads has incorrect size
What does it mean? Where do I look for the problem? Thanks for any ideas.
The error is coming from one of the convolution or maxpool operators. It means that the size of the pads attribute on that node is not compatible with what the operator expects (for an operator with N spatial dimensions, pads must contain 2*N values: one begin and one end value per axis).
You can refer to the ONNX operator documentation (https://github.com/onnx/onnx/blob/master/docs/Operators.md) to understand the expected size of the pads attribute.
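As a minimal sketch of that rule, using onnx's Python helpers (the tensor names and shapes here are made up for illustration), a 2D Conv needs 2 * 2 = 4 pads values:

```python
import onnx
from onnx import helper, TensorProto, shape_inference

# Hypothetical 2D convolution: pads must hold 4 values,
# [x1_begin, x2_begin, x1_end, x2_end]. A 2-value pads here is what
# produces "[ShapeInferenceError] Attribute pads has incorrect size".
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3, 32, 32])
W = helper.make_tensor_value_info("W", TensorProto.FLOAT, [8, 3, 3, 3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, None)

conv = helper.make_node("Conv", ["X", "W"], ["Y"],
                        kernel_shape=[3, 3],
                        pads=[1, 1, 1, 1])  # 4 values for 2 spatial dims

graph = helper.make_graph([conv], "pads_demo", [X, W], [Y])
model = helper.make_model(graph)
# Strict shape inference surfaces the same check that onnxruntime runs:
shape_inference.infer_shapes(model, strict_mode=True)
```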
Where is this model from? If you got it from the ONNX model zoo, please add the model name/link here, along with your current onnxruntime version.
Please use https://github.com/microsoft/onnxruntime/issues for this issue.
I'm writing neural networks using torch. Here's a little problem I can't solve.
I've moved both the network and its inputs to the GPU, but I get an error during training:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
I have solved the problem.
After reviewing the training function, I confirmed that the model and the input data were indeed being loaded onto the GPU, so the problem had to be in the network model itself. When I inspected the model, I noticed that I was not using registered container structures like nn.ModuleList for custom submodules; a custom submodule held in a plain Python structure is likely not moved to the GPU along with the rest of the model.
So I checked each layer of the network, forcing each one onto the GPU with .cuda(). By watching where the error was raised after each attempt, I pinned down the problem: the custom model was not loaded onto the GPU at the point where it was called.
Finally, the fix was simply to find where the custom model is called in the network's forward function and force it onto the GPU with .cuda().
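A minimal sketch of this failure mode, together with the registered-container alternative (nn.ModuleList) that avoids the in-forward .cuda() call; the class and variable names are made up for illustration:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Submodules held in a plain Python list are NOT registered, so
        # net.cuda() silently leaves their weights on the CPU, producing
        # "Input type (torch.cuda.FloatTensor) and weight type
        # (torch.FloatTensor) should be the same".
        self.broken_blocks = [Block()]
        # nn.ModuleList registers the submodules, so .cuda()/.to() moves
        # their parameters together with everything else.
        self.blocks = nn.ModuleList([Block()])

    def forward(self, x):
        for block in self.blocks:  # using self.broken_blocks here fails
            x = block(x)
        return x

# Requires a CUDA device:
net = Net().cuda()
out = net(torch.randn(1, 3, 32, 32).cuda())
```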
Is there an API to get the input and output nodes of a PyTorch network?
I tried model.features(), but that doesn't help.
Example: I have a PyTorch network, and here is its structure in Netron:
[image: Netron graph of the network]
The Conv2d, MaxPool2d, and Linear layers can be parsed easily. The trouble is getting information such as the name and size of the input node and the output node.
The input to a particular layer is the output of the previous layer.
So to get any information during the forward and backward passes, such as the output of a specific layer or a gradient, or to modify any of them, PyTorch has a mechanism known as hooks.
Find more information here: https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks
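A minimal sketch of a forward hook that captures a layer's input and output shapes (the choice of VGG16 and of classifier[0] is just for illustration):

```python
import torch
import torchvision.models as models

model = models.vgg16()
captured = {}

def shape_hook(module, inputs, output):
    # inputs is a tuple of tensors fed to the module; output is its result
    captured["in"] = [t.shape for t in inputs]
    captured["out"] = output.shape

# Attach the hook to any layer, e.g. the first Linear in the classifier
handle = model.classifier[0].register_forward_hook(shape_hook)
model(torch.randn(1, 3, 224, 224))
handle.remove()

print(captured)
# e.g. {'in': [torch.Size([1, 25088])], 'out': torch.Size([1, 4096])}
```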
I'm trying to build a classifier in Weka. I have two data sets, training and testing, and the two files have an identical structure: the same number and types of attributes. However, the Weka Explorer gives me an error saying "Train and test set are not compatible". How do I resolve this error?
Here is a snap of the two sets:
[screenshot: training set]
[screenshot: testing set]
Searching through their wiki, here's what I found:
One of Weka's fundamental assumptions is that the structure of the training and test sets is exactly the same. This means not only that you need the exact same number of attributes, but also the exact same types. In the case of nominal attributes, you must ensure that the number of labels and the order of the labels are the same.
https://weka.wikispaces.com/Why+do+I+get+the+error+message+%27training+and+test+set+are+not+compatible%27%3F
They may seem to be the same, but you never know; you should at least try one of the visual diff applications they suggest.
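As a hedged illustration (hypothetical relation and attribute names), two ARFF headers can declare the same labels yet be incompatible because the order of the nominal labels differs:

```
% train.arff
@relation demo
@attribute outlook {sunny,rainy}
@attribute class {yes,no}
@data
sunny,yes

% test.arff -- same labels, different order: "not compatible" with train.arff
@relation demo
@attribute outlook {sunny,rainy}
@attribute class {no,yes}
@data
rainy,no
```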
I hope this helped you in some way.
I tried to train my own neural net using my own image database, as described in
http://caffe.berkeleyvision.org/gathered/examples/imagenet.html
However, when I check the trained net on some standard images using the Matlab wrapper, I get the following output / error:
Done with init
Using GPU Mode
Done with set_mode
Elapsed time is 3.215971 seconds.
Error using caffe
Invalid input size
I used the Matlab wrapper before to extract CNN features with a pretrained model, and it worked. So I don't think the input size of my images is the problem (they are converted to the correct size internally by the function "prepare_image").
Does anyone have an idea what the error could be?
Found the solution: I was referencing the wrong ".prototxt" file (it's a little bit confusing because the files are quite similar).
So for computing features using the Matlab wrapper, one needs to reference the following two files in "matcaffe_demo.m":
models/bvlc_reference_caffenet/deploy.prototxt
models/bvlc_reference_caffenet/MyModel_caffenet_train_iter_450000.caffemodel
where "MyModel_caffenet_train_iter_450000.caffemodel" is the only file needed which is created during training.
In the beginning I was accidently referencing
models/bvlc_reference_caffenet/MyModel_train_val.prototxt
which was the ".prototxt" file used for training.
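For readers using Caffe's Python interface instead of the Matlab wrapper, a minimal sketch of the same pairing (a deploy prototxt with the trained weights; the paths are the ones from this thread):

```python
import caffe

# Pair the deploy prototxt (not the train_val one) with the trained
# weights; mixing them up leads to size/shape errors at init time.
net = caffe.Net(
    'models/bvlc_reference_caffenet/deploy.prototxt',
    'models/bvlc_reference_caffenet/MyModel_caffenet_train_iter_450000.caffemodel',
    caffe.TEST)
```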
Does anyone use the Matlab wrapper for the Caffe framework? Is there a way to extract a 4096-dimensional feature vector from an image?
I was already following
https://github.com/BVLC/caffe/issues/432
and also tried removing the last layers in imagenet_deploy.prototxt, as suggested in another thread on GitHub.
But still, when I run "matcaffe_demo(im, 1)" I only get a 1000-dimensional vector of scores (for the ImageNet classes).
Any help would be appreciated
Kind regards
It seems that you might not be referencing the correct prototxt file. If the last layer defined in the prototxt has a top blob of 4096 dimensions, there is no way for the output to be 1000-dimensional.
To be sure, try deliberately introducing an error into the prototxt file and see whether the program crashes. If it doesn't, then the program is indeed reading some other prototxt file.
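If the Python interface is an option, a minimal sketch of pulling the 4096-dimensional feature directly from the 'fc7' blob of the reference CaffeNet after a forward pass, without editing the prototxt at all (the model paths are assumptions based on the standard layout):

```python
import numpy as np
import caffe

net = caffe.Net(
    'models/bvlc_reference_caffenet/deploy.prototxt',
    'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
    caffe.TEST)

# Fill the input blob (random data here just to demonstrate shapes;
# real use would load and preprocess an image into this blob).
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
net.forward()

# 'fc7' is the 4096-dim layer in the reference CaffeNet; intermediate
# blobs stay readable even though the net's final output is 1000-dim.
feat = net.blobs['fc7'].data.copy()
print(feat.shape)  # (batch_size, 4096)
```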