Do I have to preprocess test data using neural networks? - neural-network

I am using Keras (version 2.0.0) and I'd like to make use of pretrained models such as VGG16.
In order to get started, I ran the example from the [Keras documentation site](https://keras.io/applications/) for extracting features with VGG16:
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
model = VGG16(weights='imagenet', include_top=False)
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x)
The preprocess_input() function used here bothers me
(as can be seen in the source code, it zero-centers the input by subtracting the mean pixel value).
Do I really have to preprocess input data (validation/test data) before using a trained model?
a)
If yes, does that mean one always has to know which preprocessing steps were performed during the training phase?
b)
If no: does preprocessing the validation/test data introduce a bias?
I appreciate your help.

Yes, you should use the same preprocessing step. You could retrain the model without it, but then the first layers would have to learn to center your data themselves, which is a waste of parameters.
If you do not recenter the data fed to a pretrained model, its performance will suffer.
There is a great thread about this on Reddit: https://www.reddit.com/r/MachineLearning/comments/3q7pjc/why_is_removing_the_mean_pixel_value_from_each/
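A quick way to see the effect is a minimal sketch like the one below (it reuses the elephant.jpg image from the example but loads the full classification head, include_top=True, just to make the difference visible):
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np

img = image.load_img('elephant.jpg', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img), axis=0)

model = VGG16(weights='imagenet', include_top=True)

# with the training-time preprocessing: a sensible top-1 prediction
print(decode_predictions(model.predict(preprocess_input(x.copy())), top=1))

# without it the raw pixel values are shifted relative to what the
# weights were trained on, and the prediction is usually much worse
print(decode_predictions(model.predict(x), top=1))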

Related

How to import deep learning models from MATLAB to PyTorch?

I’m trying to import a trained DNN model from MATLAB into PyTorch.
I’ve found solutions for the opposite case (from PyTorch to MATLAB), but no proposed solutions on how to import a trained model from MATLAB to PyTorch.
Any ideas, please?
You can first export your model to the ONNX format and then load it with the onnx package; the prerequisites are:
pip install onnx onnxruntime
Then,
import onnx

model = onnx.load('model.onnx')
# Check that the IR is well formed
onnx.checker.check_model(model)
At this point you still don't have a PyTorch model. Converting the ONNX graph to PyTorch is not natively supported, but it can be done in several ways.
A workaround (loading only the model parameters into a PyTorch model that you define yourself with matching parameter names):
import onnx
import torch
from onnx import numpy_helper

onnx_model = onnx.load('model.onnx')
graph = onnx_model.graph

# collect the weight tensors stored in the ONNX graph
initializers = dict()
for init in graph.initializer:
    initializers[init.name] = numpy_helper.to_array(init)

# copy them into a PyTorch model ("model") whose parameter names match the ONNX ones
for name, p in model.named_parameters():
    p.data = torch.from_numpy(initializers[name]).data
Using onnx2pytorch
import onnx
from onnx2pytorch import ConvertModel
onnx_model = onnx.load('model.onnx')
pytorch_model = ConvertModel(onnx_model)
Note: this conversion can be time-consuming.
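A minimal usage sketch for the converted model (the 1x3x224x224 dummy input here is a hypothetical shape; adjust it to whatever your network expects):
import torch

pytorch_model.eval()
with torch.no_grad():
    # dummy input only to check that the converted graph runs end to end
    dummy_input = torch.randn(1, 3, 224, 224)
    output = pytorch_model(dummy_input)
print(output.shape)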
Alternatively, use onnx2keras and then MMdnn to convert from Keras to PyTorch.

Feature Selection in Multivariate Linear Regression

import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
a = make_regression(n_samples=300,n_features=5,noise=5)
df1 = pd.DataFrame(a[0])
df1 = pd.concat([df1,pd.DataFrame(a[1].T)],axis=1,ignore_index=True)
df1.rename(columns={0:"X1",1:"X2",2:"X3",3:"X4",4:"X5",5:"Target"},inplace=True)
sns.heatmap(df1.corr(),annot=True);
[Figure: correlation matrix heatmap]
Now I can ask my question. How can I choose features that will be included in the model?
I am not that well-versed in python as I use R most of the time.
But it should be something like this:
# Create a model
model = LinearRegression()
# Call the .fit method and pass in your data
model.fit(Variables,Target)
# Or simply do
model = LinearRegression().fit(Variables,Target)
# So based on the dataset head provided, it should be
X = df1[['X1','X2','X3','X4','X5']]
Y = df1['Target']
model = LinearRegression().fit(X,Y)
In order to do feature selection, you need to fit the model first and then check the p-values. Typically, a p-value of 5% (.05) or less is a good cut-off point: if a variable's p-value is above .05, the variable is insignificant and you can remove it from your model. You will have to do this manually. You can also look at the correlation matrix to see which variables have low correlation with the target. AFAIK, there are no libs with built-in functionality to do this kind of feature selection automatically. In the end, statistics are just numbers; it is up to humans to interpret the results.
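Scikit-learn's LinearRegression does not report p-values, so one way to inspect them is with statsmodels (a minimal sketch, reusing df1 from the question):
import statsmodels.api as sm

X = df1[['X1','X2','X3','X4','X5']]
Y = df1['Target']

# add_constant adds the intercept term that LinearRegression fits implicitly
ols_model = sm.OLS(Y, sm.add_constant(X)).fit()

# per-coefficient p-values; drop variables well above 0.05 and refit
print(ols_model.pvalues)
print(ols_model.summary())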

How to test a trained Neural network in python?

I have trained a simple NN by modifying the following code
https://www.kaggle.com/ancientaxe/simple-neural-network-from-scratch-in-python
I would now like to test it on another sample dataset. How should I proceed?
I see that you are using a model built from scratch. In that case you should run the code below, as indicated in the notebook, after setting X and y to your new test set. For more details see the notebook, as I have not copied everything here:
l1 = 1/(1 + np.exp(-(np.dot(X, w1))))
l2 = 1/(1 + np.exp(-(np.dot(l1, w2))))
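To turn that forward pass into an actual evaluation, a minimal sketch (assuming X and y hold the features and binary labels of your new test set, with matching shapes, and w1, w2 are the weights learned in the notebook):
import numpy as np

# forward pass with the trained weights
l1 = 1 / (1 + np.exp(-np.dot(X, w1)))
l2 = 1 / (1 + np.exp(-np.dot(l1, w2)))

# threshold the sigmoid output and compare with the true labels
predictions = (l2 > 0.5).astype(int)
accuracy = np.mean(predictions == y)
print('test accuracy:', accuracy)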
That said, you would be better off using a library like TensorFlow for building neural networks: it is made for that, and moreover you can save your model and load it later in order to test it on new test sets.
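For example, with Keras this looks roughly like the following sketch (model, X_test and y_test are hypothetical placeholders for your trained model and new test data):
from keras.models import load_model

# after training, persist the model to disk
model.save('my_model.h5')

# later (e.g. in another script), reload it and evaluate on a new test set
restored = load_model('my_model.h5')
loss, acc = restored.evaluate(X_test, y_test)
print('test accuracy:', acc)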

pretrained densenet/vgg16/resnet50 + gp does not train on cifar10 data

I'm trying to train a hybrid model with a GP on top of pre-trained CNNs (DenseNet, VGG and ResNet) on CIFAR10 data, mimicking the ex2 function in the GPflow documentation. But the test accuracy is always between 0.1 and 0.2, which essentially means random guessing (the Wilson+2016 paper shows that a hybrid model on CIFAR10 should reach an accuracy of about 0.7). Could anyone give me a hint about what could be wrong?
I've tried the same code with simpler CNN models (2 or 4 conv layers) and both give reasonable results. I've tried different Keras applications (DenseNet121, VGG16, ResNet50) and none of them works. I've also tried freezing the weights in the pre-trained models, but it is still not working.
def cnn_dn(output_dim):
    base_model = DenseNet121(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
    bout = base_model.output
    fcl = GlobalAveragePooling2D()(bout)
    #for layer in base_model.layers:
    #    layer.trainable = False
    output = Dense(output_dim, activation='relu')(fcl)
    md = Model(inputs=base_model.input, outputs=output)
    return md

#add gp on top, reference: ex2() function in
#https://nbviewer.jupyter.org/github/GPflow/GPflow/blob/develop/doc/source/notebooks/tailor/gp_nn.ipynb
#needs to slightly change the graph-building part because Keras variable
#sharing is not the same as in TensorFlow
#......

## build graph
with tf.variable_scope('cnn'):
    md = cnn_dn(gp_dim)
f_X = tf.cast(md(X), dtype=float_type)
f_Xtest = tf.cast(md(Xtest), dtype=float_type)
#......

## predict
res = np.argmax(sess.run(my, feed_dict={Xtest: xts}), 1).reshape(yts.shape)
correct = res == yts.astype(int)
print(np.average(correct.astype(float)))
I finally figured out that the solution is to train for more iterations. In the original code I used only 50 iterations, as in the ex2() function for MNIST data, and that is not enough for a more complicated network and the CIFAR10 data. Adjusting some hyper-parameters (e.g. the learning rate and the activation function) also helps.
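As a rough sketch of the change (opt_step and loss are hypothetical names for the optimizer step and objective tensors built earlier in the graph; use whatever your own code defines):
# 50 iterations were enough for the MNIST example, but CIFAR10 with a
# pretrained backbone needs far more; monitor the objective as you go
for i in range(20000):
    sess.run(opt_step)
    if i % 1000 == 0:
        print('iteration', i, 'objective', sess.run(loss))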

CoreML network output not even close to correct output

I am using a Keras network that takes a 128x128 pixel input image; this network reached an accuracy of more than 85% on the Chars74K dataset. When I converted it to a CoreML model, the results are always 100% certain but always wrong, never the correct letter. The code for my Keras network can be found here: https://github.com/thijsheijden/chars74kCNN
The code I used to convert to a CoreMLModel is the following:
import coremltools
import h5py
import pandas

coreml_model = coremltools.converters.keras.convert(
    "chars74kV4.0.h5",
    class_labels="class_labels.txt",
    image_input_names=['input'],
    input_names=['input'])

coreml_model.author = 'Thijs van der Heijden'
coreml_model.license = 'MIT'
coreml_model.description = 'A basic Deep Convolutional Neural Network to classify handwritten letters.'
coreml_model.input_description['input'] = 'A 128x128 pixel Image'
coreml_model.save('chars74k.mlmodel')
The code for my iOS app can be found here: https://github.com/thijsheijden/Visionary
I would greatly appreciate any help as I am really stuck on this one! Thanks in advance!