I have this AlexNet model in MATLAB:
net = alexnet;
layers = net.Layers;
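% replace the final fully connected layer and the classification output to match the new classes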
layers(end-2) = fullyConnectedLayer(numClasses);
layers(end) = classificationLayer;
I'm using it to learn features from sequences of frames from videos of different classes. So I need to extract the learned features from the 'fc7' layer of this model, save these features as a vector, and pass it to an LSTM layer.
The training process of this model for transfer learning works fine.
I divided my data set into x_train and x_test sets using splitEachLabel() on my imageDatastore(), and used the function augmentedImageSource() to resize all the images for the network. Everything is OK so far!
But when I try to use the snippet of code shown below to resize images from my imageDatastore so they can be read by the function activations(), in order to save the features as a vector, I get an error:
imageSize = [227 227 3];
auimds = augmentedImageSource(imageSize, imds, 'ColorPreprocessing', 'gray2rgb');
Function activations:
layer = 'fc7';
fclayer = activations(mynet, auimds, layer,'OutputAs','columns');
The error:
Error using SeriesNetwork>iDataDispatcher (line 1113)
For an image input layer, the input data for predict must be a single image, a 4D array of images, or an imageDatastore with the correct size.
Error in SeriesNetwork/activations (line 791)
dispatcher = iDataDispatcher( X, miniBatchSize, precision, ...
Someone help me, please!
Thanks for the support!
Did you check the input size of that layer? The error you are getting is related to the input size of the current layer. Can you check your mynet structure and its fc7 layer's input size in your workspace in MATLAB?
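If the sizes look right, the error text itself suggests a workaround: this release's activations() accepts a single image, a 4-D array of images, or a plain imageDatastore, so you can build the 4-D array yourself. A minimal sketch, assuming the variable names from your question:
% Sketch, not a verified fix: resize the frames into a 4-D array and
% pass that to activations() instead of the augmentedImageSource.
inputSize = mynet.Layers(1).InputSize;      % should be [227 227 3] for AlexNet
numImages = numel(imds.Files);
X = zeros([inputSize numImages], 'uint8');
for k = 1:numImages
    img = readimage(imds, k);
    if size(img, 3) == 1                    % emulate 'gray2rgb'
        img = repmat(img, [1 1 3]);
    end
    X(:, :, :, k) = imresize(img, inputSize(1:2));
end
fclayer = activations(mynet, X, 'fc7', 'OutputAs', 'columns');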
I just created a model that does binary classification and has a Dense layer of 1 unit at the end. I used sigmoid activation. However, I now get this error when I want to convert it to Core ML.
I tried changing the number of units to 2 and the activation to softmax, but it still didn't work.
import coremltools as ct
from PIL import Image

# 1. define the input type
image_input = ct.ImageType(scale=1/255)
# 2. give the classifier configuration
classifier_config = ct.ClassifierConfig(class_labels=[0, 1])  # ERROR here
# 3. convert the model
coreml_model = ct.convert("mask_detection_model_surgical_mask.h5",
                          inputs=[image_input],
                          classifier_config=classifier_config)
#4. load and resize an example image
example_image = Image.open("Unknown3.jpg").resize((256, 256))
# Make a prediction using Core ML
out_dict = coreml_model.predict({mymodel.input_names[0]: example_image})
print(out_dict["classLabels"])
# save to disk
#coreml_model.save("FINALLY.mlmodel")
I found the answer to my question.
Use softmax activation and 2 Dense units as the final layer, with either loss='binary_crossentropy' or loss='categorical_crossentropy'.
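A minimal sketch of that final-layer setup (everything except the last layer and the loss is just illustrative here):
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Flatten(input_shape=(256, 256, 3)),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),    # 2 units + softmax instead of 1 unit + sigmoid
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # labels must be one-hot encoded for this loss
              metrics=["accuracy"])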
Good luck to hundreds of people who posted a similar question but received no answer.
I'm trying to train a hybrid model with a GP on top of a pre-trained CNN (DenseNet, VGG, and ResNet) on CIFAR10 data, mimicking the ex2 function in the GPflow documentation. But the test accuracy is always between 0.1 and 0.2, which generally means random guessing (the Wilson+ 2016 paper shows a hybrid model for CIFAR10 data should get an accuracy of about 0.7). Could anyone give me a hint about what could be wrong?
I've tried the same code with simpler CNN models (2 conv layers or 4 conv layers) and both give reasonable results. I've tried different Keras applications: DenseNet121, VGG16, ResNet50; none of them works. I've also tried freezing the weights in the pre-trained models, and it still doesn't work.
from keras.applications.densenet import DenseNet121
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

def cnn_dn(output_dim):
    # pre-trained DenseNet121 backbone without its classification head
    base_model = DenseNet121(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
    bout = base_model.output
    fcl = GlobalAveragePooling2D()(bout)
    # for layer in base_model.layers:
    #     layer.trainable = False
    output = Dense(output_dim, activation='relu')(fcl)
    md = Model(inputs=base_model.input, outputs=output)
    return md
# add GP on top; reference: the ex2() function in
# https://nbviewer.jupyter.org/github/GPflow/GPflow/blob/develop/doc/source/notebooks/tailor/gp_nn.ipynb
# the build-graph part needs slight changes because Keras variable
# sharing is not the same as in TensorFlow
# ......
## build graph
with tf.variable_scope('cnn'):
    md = cnn_dn(gp_dim)
    f_X = tf.cast(md(X), dtype=float_type)
    f_Xtest = tf.cast(md(Xtest), dtype=float_type)
#......
## predict
res = np.argmax(sess.run(my, feed_dict={Xtest: xts}), 1).reshape(yts.shape)
correct = res == yts.astype(int)
print(np.average(correct.astype(float)))
I finally figured out that the solution is to train for more iterations. In the original code, I just used 50 iterations, as in the ex2() function for the MNIST data, and that is not enough for the more complicated network and the CIFAR10 data. Adjusting some hyper-parameters (e.g. the learning rate and the activation function) also helps.
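Roughly, the change looks like this (a sketch only: loss stands for the objective tensor and sess for the session built in the graph code above, which I'm not repeating):
# Sketch: `loss` and `sess` come from the notebook-style graph code above.
opt = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
sess.run(tf.global_variables_initializer())
for i in range(5000):  # the 50 iterations used by ex2() for MNIST are far too few here
    sess.run(opt)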
I have 297 grayscale images and I would like to divide them into three parts (train, test, and validation).
Of course, I tried some sample code first, for example the following code from the MathWorks example "Object Detection Using Faster R-CNN Deep Learning":
% Split data into a training and test set.
idx = floor(0.6 * height(vehicleDataset));
trainingData = vehicleDataset(1:idx,:);
testData = vehicleDataset(idx:end,:);
But MATLAB 2018a shows the following error:
Error: "Undefined function 'height' for input arguments of type 'struct'."
I would like to detect objects in images using the "Faster R-CNN" method and determine their locations in the images.
Suppose your images are saved in the path "C:\Users\Student\Desktop\myImages"
First, create an imageDatastore object to manage a collection of image files.
datapath = "C:\Users\Student\Desktop\myImages";
imds = imageDatastore(datapath);%You may look at documentation for customizations.
[trainds,testds,valds] = splitEachLabel(imds,.6,.2);%Lets say 60% data for training, 20% for testing and 20% for validation
Now you have the training data in the variable trainds, the test data in testds, and the validation data in valds.
You can retrieve individual images using readimage, for example the 5th image from the training set:
im = readimage(trainds,5);
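To sanity-check the split sizes (an illustrative addition):
% Quick check that the three splits have the expected proportions
fprintf('train: %d, test: %d, val: %d\n', ...
    numel(trainds.Files), numel(testds.Files), numel(valds.Files));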
I know how to extract Caffe features/scores using the matcaffe_demo.m that is provided along with Caffe. However, when using this file one has to provide a prototxt file that determines not only the network architecture but also the input dimensions, including the batch_size.
Since I'm processing frames of videos with variable sequence lengths, I need a way to use matcaffe_demo.m with a variable batch size.
Does anyone know how to do that?
It would probably involve changing this line from matcaffe_demo.m
% Initialize a network
net = caffe.Net(net_model, net_weights, phase);
to something that dynamically passes the currently needed batch size to Caffe.
I ended up using the reshape function:
net = caffe.Net(net_model, net_weights, phase);
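% resize the input blob so the batch dimension matches the number of frames in the current sequence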
net.blobs('data').reshape([dim1 dim2 numChannels numFrames]);
scores = net.forward(inputData);
caffe.reset_all();
I had a hard time working with Caffe and HDF5 on image classification and regression tasks. For some reason, training on HDF5 always fails right at the beginning: the test and train loss very quickly drop close to zero. After trying all the tricks, such as reducing the learning rate and adding ReLU and dropout, nothing started to work, so I began to suspect that the HDF5 data I am feeding to Caffe is wrong.
So currently I am working on a universal dataset (the Oxford 102 category flower dataset, which also has public code). First I tried the ImageData and LMDB layers for the classification, and they both worked very well. Finally I used the HDF5 data layer for the fine-tuning; the training prototxt is unchanged except for the data layer, which uses HDF5 instead. And again, at the start of training, the loss drops from 5 to 0.14 at iteration 60 and 0.00146 at iteration 100, which seems to confirm that the HDF5 data is incorrect.
I have two image-and-label-to-HDF5 snippets from GitHub. Both of them seem to generate the HDF5 dataset, but for some reason these datasets don't work with Caffe.
I wonder whether anything is wrong with this data, or what would make this example run with HDF5. If you have some HDF5 examples for classification or regression, they would be very helpful to me.
One snippet is shown below:
import caffe
import h5py
import numpy as np

def generateHDF5FromText2(label_num):
    print '\nplease wait...'
    HDF5_FILE = ['hdf5_train.h5', 'hdf5_test1.h5']
    # store the training and testing data paths and labels
    LIST_FILE = ['train.txt', 'test.txt']
    for kk, list_file in enumerate(LIST_FILE):
        # read train.txt or test.txt to extract all the image paths and labels
        path_list = []
        label_list = []
        with open(list_file, buffering=1) as hosts_file:
            for line in hosts_file:
                line = line.rstrip()
                array = line.split(' ')
                lab = int(array[1])
                label_list.append(lab)
                path_list.append(array[0])
        print len(path_list), len(label_list)

        # init the temp data and label storage for HDF5
        datas = np.zeros((len(path_list), 3, 227, 227), dtype='f4')
        labels = np.zeros((len(path_list), 1), dtype='f4')

        for ii, _file in enumerate(path_list):
            # feed the image and label data into the temp arrays
            img = caffe.io.load_image(_file)
            img = caffe.io.resize(img, (227, 227, 3))  # resize to a fixed size
            img = np.transpose(img, (2, 0, 1))         # HWC -> CHW, as Caffe expects
            datas[ii] = img
            labels[ii] = int(label_list[ii])

        # store the temp data and labels in the HDF5 file
        # (the with statement closes the file, so no explicit f.close() is needed)
        with h5py.File("/data2/" + HDF5_FILE[kk], 'w') as f:
            f['data'] = datas
            f['label'] = labels
One input transformation that seems to happen in the original net and is missing from your HDF5 creation is mean subtraction.
You should obtain the mean file (it looks like "imagenet_mean.binaryproto" in your example), read it into Python, and subtract it from each image.
BTW, the mean file can give you a clue about the scale of the input images (whether pixel values should be in the [0..1] range or in [0..255]).
You might find caffe.io useful for converting binaryproto to a numpy array.
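A minimal sketch of that conversion and subtraction (the image path here is hypothetical; the mean file name comes from your example):
import caffe
import numpy as np

# read the binaryproto mean file into a numpy array (C x H x W)
blob = caffe.proto.caffe_pb2.BlobProto()
with open('imagenet_mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(blob)[0]   # e.g. shape (3, 256, 256)
channel_mean = mean.mean(axis=(1, 2))         # per-channel mean, shape (3,)

# caffe.io.load_image returns pixels in [0..1], while the ImageNet mean is
# stored in [0..255], so bring the image to the same scale before subtracting
img = caffe.io.load_image('some_image.jpg')   # hypothetical path
img = caffe.io.resize(img, (227, 227, 3))
img = np.transpose(img, (2, 0, 1)) * 255.0    # HWC [0..1] -> CHW [0..255]
img -= channel_mean[:, np.newaxis, np.newaxis]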