Unexpected model output running Onnx model in Unity using Barracuda - unity3d

Context
I am trying to use a pre-trained model in ONNX format to do inference on image data in Unity. The model is linked to the executing component in Unity as an asset called modelAsset. I am using Barracuda version 1.0.0 for this and executing the model as follows:
// Initialisation
this.model = ModelLoader.Load(this.modelAsset);
this.worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpBurst, model);
// Loop
Tensor tensor = new Tensor(1, IMAGE_H, IMAGE_W, 3, data);
worker.Execute(tensor);
Tensor modelOutput = worker.PeekOutput(OUTPUT_NAME);
The data going into the input tensor (the model has only one input) is image data of size h * w with 3 channels, holding RGB values between -0.5 and 0.5. The model has multiple outputs, which I retrieve in the last line shown above.
Expected behavior
Given the same input data, the converted ONNX model should produce the same output in Barracuda in Unity as the PyTorch model and the ONNX model produce in Python (via PyTorch and ONNX Runtime).
Problem
In Python, both the ONNX and the PyTorch model produce the same output. However, the same ONNX model running in Barracuda produces a different output. The main difference is that we expect a heatmap, but Barracuda consistently produces values somewhere between -0.0004 and 0.001, arranged in repeating patterns.
This makes it almost seem like the model weights are not properly loaded.
What we found
When converting to ONNX as per the Barracuda manual, we found that if we did not set the PyTorch net to inference mode before conversion (link), ONNX Runtime in Python generated these same incorrect results. In other words, it looks like this inference mode is saved in the ONNX model and is recognized by ONNX Runtime in Python, but not by Barracuda.
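For reference, a minimal sketch of what "setting the net to inference mode before conversion" looks like in PyTorch (the network class, weight file and input size are illustrative, not taken from the original project):
import torch
model = MyNet()                                   # hypothetical trained PyTorch model
model.load_state_dict(torch.load("weights.pth"))
model.eval()                                      # inference mode: switches BatchNorm/Dropout to eval behaviour
dummy_input = torch.randn(1, 3, 256, 256)         # NCHW, the native ONNX layout
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])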
Our question
In general:
How do we get this model in Barracuda in Unity to produce the same results as ONNXRuntime/PyTorch in Python?
And potentially:
How does the inference mode get embedded into the ONNX file and how is it used in ONNXRuntime vs Barracuda?

It turned out that there were two problems.
First, the input data had been laid out according to the ONNX model's dimensions, but Barracuda expects differently oriented data: "The native ONNX data layout is NCHW, or channels-first. Barracuda automatically converts ONNX models to NHWC layout." Our data was flattened into a channels-first array, just like in the Python implementation, which created the first mismatch (see the sketch below).
Second, the Y-axis of the input image was inverted, which made the model unable to recognize any people.
After correcting for these issues, the implementation works fine!
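For illustration, a minimal NumPy sketch of the preprocessing this implies (the image source and array names are hypothetical; the actual Unity code fills the Barracuda Tensor in C#):
import numpy as np
# img: H x W x 3 image, loaded with any image library (hypothetical example data)
img = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
x = img.astype(np.float32) / 255.0 - 0.5          # normalise to the [-0.5, 0.5] range the model expects
# fix 1: Barracuda wants NHWC (channels-last); ONNX Runtime / PyTorch want NCHW (channels-first)
x_nhwc = x[np.newaxis, ...]                       # 1 x H x W x 3, the layout to flatten for Barracuda
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))       # 1 x 3 x H x W, the layout used in Python
# fix 2: flip the Y-axis if the image source delivers rows bottom-up (as in our case)
x_nhwc = x_nhwc[:, ::-1, :, :]
data = x_nhwc.flatten()                           # flat array for new Tensor(1, IMAGE_H, IMAGE_W, 3, data)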

Related

How to import deep learning models from MATLAB to PyTorch?

I’m trying to import a DNN trained model from MATLAB to PyTorch.
I’ve found solutions for the opposite case (from PyTorch to MATLAB), but no proposed solutions on how to import a trained model from MATLAB to PyTorch.
Any ideas, please?
You can first export your model to ONNX format, and then load it using ONNX; prerequisites are:
pip install onnx onnxruntime
Then,
import onnx
# Load the ONNX model
model = onnx.load('model.onnx')
# Check that the IR is well formed
onnx.checker.check_model(model)
Up to this point, you still don't have a PyTorch model. Converting the ONNX model to PyTorch can be done in several ways, since it's not natively supported.
A workaround (by loading only the model parameters)
import onnx
import torch
from onnx import numpy_helper
onnx_model = onnx.load('model.onnx')
graph = onnx_model.graph
# collect the weights stored in the ONNX graph, keyed by name
initializers = dict()
for init in graph.initializer:
    initializers[init.name] = numpy_helper.to_array(init)
# 'model' is your PyTorch model, built with parameter names matching the ONNX initializers
for name, p in model.named_parameters():
    p.data = torch.from_numpy(initializers[name]).data
Using onnx2pytorch
import onnx
from onnx2pytorch import ConvertModel
onnx_model = onnx.load('model.onnx')
pytorch_model = ConvertModel(onnx_model)
Note: this can be time-consuming.
Using onnx2keras, then MMdnn to convert from Keras to PyTorch (Examples)
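A rough sketch of the onnx2keras step (the onnx_to_keras entry point and the 'input' node name are assumptions; check the package documentation):
import onnx
from onnx2keras import onnx_to_keras
onnx_model = onnx.load('model.onnx')
# 'input' must match the name of the graph's input node (assumption)
k_model = onnx_to_keras(onnx_model, ['input'])
k_model.save('model.h5')    # Keras model that MMdnn can then convert to PyTorch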

No Model Summary For GLMs in Pyspark / SparkML

I'm familiarizing myself with Pyspark and SparkML at the moment. To do so I use the titanic dataset to train a GLM for predicting the 'Fare' in that dataset.
I'm following closely the Spark documentation. I do get a working model (which I call glm_fare) but when I try to assess the trained model using summary I get the following error message:
RuntimeError: No training summary available for this GeneralizedLinearRegressionModel
Why is this?
The code for training was as follows:
from pyspark.ml.regression import GeneralizedLinearRegression
glm_fare = GeneralizedLinearRegression(
    labelCol="Fare",
    featuresCol="features",
    predictionCol='prediction',
    family='gamma',
    link='log',
    weightCol='wght',
    maxIter=20
)
glm_fit = glm_fare.fit(training_df)
glm_fit.summary
Just in case someone comes across this question, I ran into this problem as well and it seems that this error occurs when the Hessian matrix is not invertible. This matrix is used in the maximization of the likelihood for estimating the coefficients.
The matrix is not invertible if one of the eigenvalues is 0, which occurs when there is multicollinearity in your variables. This means that one of the variables can be predicted with a linear combination of the other variables. Consequently, the effect of each of the variables cannot be identified with any significance.
A possible solution would be to find the variables that are (multi)collinear and remove one of them from the regression. Note however that multicollinearity is only a problem if you want to interpret the coefficients and not when the model is used for prediction.
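As a quick illustration of why an exactly collinear column breaks the fit, here is a small NumPy sketch (the feature matrix is made up) showing that X^T X, which drives the Hessian, becomes singular:
import numpy as np
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
x3 = 2.0 * x1 - x2                    # exact linear combination of the other columns
X = np.column_stack([x1, x2, x3])
xtx = X.T @ X
print(np.linalg.matrix_rank(xtx))     # 2 instead of 3 -> singular
print(np.linalg.eigvalsh(xtx))        # the smallest eigenvalue is (numerically) zero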
It is documented in the GeneralizedLinearRegressionModel docs that a summary may not be available for a model.
However, you can do an initial check to avoid the error using glm_fit.hasSummary, a public boolean property.
Using it as:
if glm_fit.hasSummary:
    print(glm_fit.summary)
Here is a direct link to the PySpark source code, and to the GeneralizedLinearRegressionTrainingSummary class source code where the error is thrown.
Make sure the input values for the one-hot encoder start from 0.
One mistake I made that prevented the summary from being created: I fed quarter values (1, 2, 3, 4) directly into the one-hot encoder and got a vector of length 4 in which one column was always 0. I converted quarter to 0, 1, 2, 3 and the problem was solved (a sketch of that fix follows below).
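A minimal sketch of that fix, assuming Spark 3.x; the DataFrame df and the column names are illustrative:
from pyspark.ml.feature import StringIndexer, OneHotEncoder
# map the raw categories (1-4) to 0-based indices, as OneHotEncoder expects
indexer = StringIndexer(inputCol="quarter", outputCol="quarter_idx")
encoder = OneHotEncoder(inputCols=["quarter_idx"], outputCols=["quarter_vec"])
indexed = indexer.fit(df).transform(df)
encoded = encoder.fit(indexed).transform(indexed)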

Using Weka NaiveBayes with Matlab

I created a NaiveBayes model in Weka. I exported the model to disk. I now want to inject this model into MATLAB 2018, so that I can check how it performs via some data that I am receiving.
I load my model in MATLAB, by stating something like this:
loadedModel = weka.core.SerializationHelper.read('myweka.model');
I then create a Weka Instance object, and let it contain this data:
instance = infrequent,low,high,medium-high,high,medium,medium,low,low
If I run these two commands:
loadedModel.distributionForInstance(instance)
loadedModel.classifyInstance(instance)
I see the following output:
0.0001
0.9999
1
This is odd to me because if I observe the same record in the WEKA UI, I see the same instance with probabilities 0.993 and 0.007, classified as '2'. (I can load the same model multiple times from disk in WEKA and reproduce this behavior, which is correct.) After further investigation, I noticed that regardless of the sequence of attributes in my Instance object, I always get the same probability output and the same classification when invoking the model via MATLAB.
There are some posts on the net that share the same problem, like these:
Always getting the same output
Weka - Classifier returns the same distribution for any input
However, the recommended solution of calling 'instance.setClassMissing()' did not solve my issue. Is there anything I am missing, or anything I can try in order to troubleshoot the issue further?
Does your test instance have the same structure as your training set? If not, you need to provide the same structure.
Weka indexes nominal attributes and stores the indices internally, so the order of nominal attribute values in the training file matters. For example, if an attribute value is mapped as low => 0, high => 1 during training, you need to map it the same way in your test set. Usually this is achieved by serializing the training header together with the model.
Sample code for creating the training header:
Instances trainHeader = new Instances(instances, 0);
trainHeader.setClassIndex(instances.classIndex());
When creating a new instance, set its dataset:
Instance instance = ...
instance.setDataset(trainHeader);

CoreML model yields different results between coremltools and Xcode

I've created a .mlmodel file based on a custom PyTorch CNN model by converting the PyTorch model first to ONNX and then to CoreML using onnx_coreml. Using dummy data (a 3 x 224 x 224 array where every single value is 1.0), I've verified that the PyTorch model, the ONNX model (run using the Caffe backend) and the CoreML model (using coremltools) all yield identical results.
However, when I import the same model into Xcode and run it on a phone, even using dummy data, the model outputs do not match up.
The device I'm using does not seem to make a difference (I've tried iPhones ranging from the XS Max all the way down to an SE). All are running iOS 12.2, and I'm using Xcode 10.2.1.
Here's the code (in Swift) I'm using to create the dummy data and get a prediction from my model:
let pixelsWide = Int(newImg.size.width)
let pixelsHigh = Int(newImg.size.height)
var pixelMLArray = try MLMultiArray(shape: [1, 1, 3, 224, 224], dataType: .float32)
for y in 0 ..< pixelsHigh {
    for x in 0 ..< pixelsWide {
        pixelMLArray[[0,0,0,x,y] as [NSNumber]] = 1.0
        pixelMLArray[[0,0,1,x,y] as [NSNumber]] = 1.0
        pixelMLArray[[0,0,2,x,y] as [NSNumber]] = 1.0
    }
}
do {
    let convModel = CNNModel()
    var thisConvOutput = try convModel.prediction(_0: pixelMLArray)._1161
} catch { print("Error") }
I've verified that the input and output tags are correct, etc. etc.
This runs smoothly, but the first three values of thisConvOutput are:
[0.000139, 0.000219, 0.003607]
For comparison, the first three values running the PyTorch model are:
[0.0002148, 0.00032246, 0.0035419]
And the exact same .mlmodel using coremltools:
[0.00021577, 0.00031877, 0.0035404]
Long story short, not being experienced with Swift, I'm wondering whether I'm doing something stupid in initializing / populating my "pixelMLArray" to run it through the model in Xcode on my device, since the .mlmodel results from coremltools are extremely close to the results I get using PyTorch. Can anyone help?
Your Core ML output on device: [0.000139, 0.000219, 0.003607]
Your output from coremltools: [0.00021577, 0.00031877, 0.0035404]
Note that these are very small numbers. When Core ML runs your model on the GPU (and possibly on the Neural Engine, not sure) it uses 16-bit floating point. This has much lower precision than 32-bit floating point.
Note how 0.000139 and 0.00021577 are not the same number but they are both around 1e-4. This is below the precision limit of 16-bit floats. But 0.003607 and 0.0035404 are almost the same number, because they're about 10x larger and therefore don't lose as much precision.
Try running your Core ML model on the device using the CPU (you can pass an option for this when instantiating your model). You'll probably see that you now get results that are much closer (and probably identical) to the coremltools version, because Core ML on the CPU uses 32-bit floats.
Conclusion: from what you've shown so far, it looks like your model is working as expected, taking into consideration you will lose precision due to the computations happening with 16-bit floating points.
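To get a feel for the size of the effect, here is a small NumPy sketch (independent of Core ML) of what 16-bit precision means for outputs of this magnitude:
import numpy as np
print(np.finfo(np.float16))                # resolution ~1e-3, smallest normal value ~6.1e-5
# outputs of order 1e-4 sit near the bottom of the fp16 range, so per-layer rounding
# errors accumulated during fp16 inference can be of the same order as the outputs
vals32 = np.array([0.00021577, 0.00031877, 0.0035404], dtype=np.float32)
print(vals32.astype(np.float16))           # even a single round-trip through fp16 perturbs the values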

How to combine 2 trained models in Keras

I want to concatenate the last layer before the output of 2 trained models and have a new model that uses the merged layer to give predictions. Below are the relevant parts of my code:
model1 = load_model("model1_location.model")
model2 = load_model("model1_location.model")
merged_model = Sequential(name='merged_model')
merged_model.add(merge([model1.layers[-1],model2.layers[-1]]))
merged_model.add(Dense(3, activation='softmax'))
The above code gives the following error:
ValueError: Layer merge_2 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.core.Dense'>.
What is the correct way to combine those models? Alternatively, how do I get a symbolic tensor from a layer?
You need to get the output attribute of those layers, like so:
merged_model.add(merge([model1.layers[-1].output, model2.layers[-1].output]))
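Note that a Sequential model still cannot start from a two-branch merge; here is a sketch using the Keras functional API instead (using concatenate from Keras 2 in place of the older merge; the choice of layers[-2] assumes you want the last layer before each model's output):
from keras.layers import concatenate, Dense
from keras.models import Model
# take the symbolic output tensors of the layer before each model's output
merged = concatenate([model1.layers[-2].output, model2.layers[-2].output])
out = Dense(3, activation='softmax')(merged)
merged_model = Model(inputs=[model1.input, model2.input], outputs=out)
merged_model.summary()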