Non-image data with a CNN [MATLAB specific]

I am trying to use a CNN to build a classifier for my data.
The training set consists of 2D numerical matrices, which are not image data.
It seems that MATLAB's CNNs only work with image inputs:
https://uk.mathworks.com/help/nnet/ref/imageinputlayer-class.html
Does anyone have experience with CNNs and non-image data using MATLAB's Deep Learning Toolbox?
Thank you.

First, I would like to understand why you want to use a CNN with non-image data. CNNs are especially good because they take into account information in the neighbourhood. Unless your data has some kind of regional pattern (like pixels that group together to form a shape, or sentences where word order is relevant), a CNN would not be the best approach.
That being said, if you still want to use one, you could convert the matrices to images. I'm not sure whether that would help, though.
Function to convert: mat2gray
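If you do go that route, a minimal sketch could look like the following. The layer choices, sizes and variable names are illustrative assumptions; the key point is that an H-by-W numeric matrix can be fed to imageInputLayer as a one-channel "image" after rescaling with mat2gray:

% X assumed to be an H-by-W-by-1-by-N array (one matrix per sample), Y a categorical label vector.
X = double(X);
for i = 1:size(X, 4)
    X(:,:,1,i) = mat2gray(X(:,:,1,i));   % rescale each matrix to [0, 1]
end

layers = [
    imageInputLayer([size(X,1) size(X,2) 1])
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(numel(categories(Y)))
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm', 'MaxEpochs', 10);
net = trainNetwork(X, Y, layers, options);

Whether this beats a plain fully connected network depends entirely on whether neighbouring entries of your matrices are actually related, as discussed above.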

Related

Can we make a convolutional network that uses more than one image to make a prediction?

I cropped the following image from a tutorial.
This diagram shows the rough structure of a standard neural network: it takes one image as input and makes a prediction.
What I am thinking about is some kind of parallel structure; think about something like the following image.
It is not exactly as in the above image, but you can see I am trying to use two images to make one prediction; the image is just to give you an idea of what I am trying to ask.
Is it possible to use more than one image (two, three, ...) like this, or in any other way, in order to make one prediction? This is not meant for actual photo classification, but I think such a technique could be used in a field like audio classification, where a graphical representation of the data is classified with image classification techniques.
Any advice, guidance or opinion on this?
If we consider implementing exactly what is in the diagram: with a high-level API like Keras (keras.models.Sequential), all we can do is keep adding layers one after the other.
So what kind of technique can I use to implement the parallel structure?
Yes, you can use more than one image as input. See for example the Siamese neural network, which takes two images as input and passes them through a shared network architecture.
If instead you want an arbitrary, variable number of images as input, you can use an architecture based on recurrent neural networks, such as the convolutional LSTM, which essentially applies a CNN to every image of the input sequence within an LSTM recurrent network.
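In Keras, this parallel wiring is exactly what the functional API is for (a keras.Model built from multiple Input tensors) rather than Sequential. Since the rest of this page uses MATLAB, here is a rough sketch of the same two-branch idea in MATLAB's Deep Learning Toolbox; the layer names, sizes and the merge choice are illustrative assumptions, and only the topology matters:

% Two image inputs, each with its own small convolutional branch,
% merged (depth-concatenated) before a shared classification head.
branchA = [
    imageInputLayer([64 64 1], 'Name', 'inA')
    convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'convA')
    reluLayer('Name', 'reluA')];

branchB = [
    imageInputLayer([64 64 1], 'Name', 'inB')
    convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'convB')
    reluLayer('Name', 'reluB')];

head = [
    depthConcatenationLayer(2, 'Name', 'merge')
    fullyConnectedLayer(2, 'Name', 'fc')        % e.g. a 2-class prediction
    softmaxLayer('Name', 'sm')];

lgraph = layerGraph(branchA);
lgraph = addLayers(lgraph, branchB);
lgraph = addLayers(lgraph, head);
lgraph = connectLayers(lgraph, 'reluA', 'merge/in1');
lgraph = connectLayers(lgraph, 'reluB', 'merge/in2');
% analyzeNetwork(lgraph) visualises the parallel structure; training a
% multi-input graph needs a combined datastore or a custom training loop.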

Generating Images From Dataset Of Images Using A Neural Network

I'm not looking for a chunk of code as a solution, just the name of the model I'd need to implement or some links would be nice.
My problem is that I have a dataset I've made of a few hundred 128x128 images (abstract paintings). I'd like to simply generate more images similar to these using a neural network (preferably with no input needed for the network, except maybe random values?), but it's unclear to me how I'd go about this.
One solution I've thought about but haven't tried yet is an LSTM network: turn the paintings into 1D arrays of pixel values and feed the arrays to the network (LSTM networks are really good at learning sequences). But if I wanted to work with larger images, this might not be very practical.
Any info is greatly appreciated. Thanks!
GANs (generative adversarial networks) would be appropriate in this case. GANs rely on two separate neural networks and, when properly trained, can be used to generate new images that are similar to a collection of known images (a process sometimes called hallucination).
There are many examples of using GANs to generate new digit images from the canonical MNIST dataset; naturally, you can replace MNIST with your abstract paintings.
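For orientation, here is a very rough MATLAB sketch of the two networks a GAN needs, sized for 128x128 images (3 channels assumed); the depths, filter counts and latent size are arbitrary assumptions, and the adversarial training loop itself is omitted:

latentDim = 100;   % the "random values" that seed the generator

generatorLayers = [
    imageInputLayer([1 1 latentDim], 'Normalization', 'none')
    transposedConv2dLayer(4, 256)                                   % -> 4x4
    reluLayer
    transposedConv2dLayer(4, 128, 'Stride', 4, 'Cropping', 'same')  % -> 16x16
    reluLayer
    transposedConv2dLayer(4, 64,  'Stride', 4, 'Cropping', 'same')  % -> 64x64
    reluLayer
    transposedConv2dLayer(4, 3,   'Stride', 2, 'Cropping', 'same')  % -> 128x128x3
    tanhLayer];

discriminatorLayers = [
    imageInputLayer([128 128 3], 'Normalization', 'none')
    convolution2dLayer(4, 64,  'Stride', 2, 'Padding', 'same')      % -> 64x64
    leakyReluLayer(0.2)
    convolution2dLayer(4, 128, 'Stride', 4, 'Padding', 'same')      % -> 16x16
    leakyReluLayer(0.2)
    convolution2dLayer(4, 256, 'Stride', 4, 'Padding', 'same')      % -> 4x4
    leakyReluLayer(0.2)
    convolution2dLayer(4, 1)];                                      % -> 1x1 real/fake score

% Both arrays would be wrapped in dlnetwork objects and trained against each
% other in a custom loop that alternates generator and discriminator updates.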

Training a model for Latent-SVM

Good morning colleagues!
I am very keen to train a new model from my own dataset of faces.
I have found very little information about this topic, so I hope my write-up can help other people, and that I can get some answers as well.
I will try to explain the steps I needed to take to train my own model, and then ask some questions...
I downloaded the latent SVM code from: http://cs.brown.edu/~pff/latent-release4/
I downloaded the PASCAL VOC 2008 devkit from: http://host.robots.ox.ac.uk/pascal/VOC/voc2008/index.html
I replicated the PASCAL VOC file/folder structure for my own dataset:
Annotations: I created .xml annotations where I defined one object, face (in each image I only have one face). I didn't define difficulties or poses.
JPEGImages: where I stored all the images.
ImageSets: where I defined three files:
test.txt, listing the file names of my positive samples
train.txt, listing the file names of my negative samples
trainval.txt, listing the file names of my positive samples (exactly the same file as test.txt).
I changed a few things in globals.m and VOCinit.m (to tell the algorithm the paths and locations of some files).
Then I ran the training with the command: pascal('face', 1);
Following these steps, the training runs to completion without failing and I get my own model, BUT I have some doubts...
Can you see anything weird in my explanation? Could it work?
Must the files test.txt and trainval.txt be equal? Why? What does that mean?
Do I have to choose the number of parts I want in the model INSIDE the function?
Now imagine I have two kinds of samples (frontal faces and side faces) and I want to detect both. How can I address this? I think I have to train a model with two components, but how can I tell the training code which samples are frontal and which are side views? Via the pose label in the annotations? (I don't think so...) Is there another way to handle this?
Thank you for your time!!
I hope you can solve my doubts :)
I think test.txt should contain samples (images) that will be used to estimate how good the system is after learning the faces. However, trainval.txt is used during the learning stage (training) to fine-tune the parameters of the model; it is an essential part of supervised learning.
Also, it is very hard to have one single SVM classify faces that are both frontal and sideways. Here is my suggestion (a code sketch follows at the end of this answer):
Train one SVM to detect if the input image is a frontal face or a sideways face. Call this something like SVM-0.
Train another SVM for frontal faces. This SVM will classify all your individuals. Note, however, that an SVM is usually a binary classifier, so make sure you choose one that has a multiclass architecture. Call this SVM-F.
Train a final SVM for sideways faces. Again, use a multiclass SVM. Call it SVM-S.
Present the input image to SVM-0 and if it detects it is a frontal face, present the input again to SVM-F; otherwise, give the input to SVM-S.
In my experience, you should expect very low performance from SVM-S; it is a hard problem to solve. Frontal faces, however, are not a big deal, unless you are working with faces that vary in pose, illumination and expression (PIE). Face recognition is greatly affected by PIE variations in the images.
I also recommend this website; it contains very good information and tutorials for starters, with or without experience.
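A rough MATLAB sketch of that cascade, assuming you already have one feature vector per face image (e.g. HOG) stored row-wise, pose labels in yPose (categorical, classes "frontal" and "side"), and identity labels in yFrontalID / ySideID. All variable names and the fitcsvm/fitcecoc choices are illustrative assumptions, separate from the latent SVM detector itself:

% SVM-0: binary frontal-vs-sideways classifier.
svm0 = fitcsvm(Xpose, yPose);

% SVM-F and SVM-S: multiclass (one identity per class) via error-correcting
% output codes, which combines binary SVMs into a multiclass classifier.
svmF = fitcecoc(Xfrontal, yFrontalID);
svmS = fitcecoc(Xside, ySideID);

% At test time, route a query feature vector through the cascade:
if predict(svm0, xQuery) == "frontal"
    personID = predict(svmF, xQuery);
else
    personID = predict(svmS, xQuery);
end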

PCA on SIFT descriptors and Fisher Vectors

I was reading this particular paper http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf and I find the Fisher Vector with GMM vocabulary approach very interesting and I would like to test it myself.
However, it is totally unclear (to me) how they apply PCA dimensionality reduction to the data. Do they compute the feature space first and then perform PCA on it? Or do they perform PCA on each image's SIFT descriptors and then build the feature space?
Is this supposed to be done for both the training and test sets? To me the answer is an 'obvious yes', but it is not stated clearly.
I was thinking of building the feature space from the training set and then running PCA on it. Then I could use the PCA coefficients learned from the training set to reduce each image's SIFT descriptors before they are encoded into a Fisher Vector for later classification, whether it is a test or a training image.
EDIT 1:
Simplistic example:
[coef, reduced_feat_space] = pca(Feat_Space', 'NumComponents', 80);
and then (for both test and train images)
reduced_test_img = test_img * coef; (and then choose the first 80 dimensions of reduced_test_img)
What do you think? Cheers
It looks to me like they do SIFT first and then PCA. The article states in section 2.1: "The local descriptors are fixed in all experiments to be SIFT descriptors..."
Also, in the introduction: "the following three steps: (i) extraction of local image features (e.g., SIFT descriptors), (ii) encoding of the local features in an image descriptor (e.g., a histogram of the quantized local features), and (iii) classification ... Recently several authors have focused on improving the second component". So it looks to me like the dimensionality reduction occurs after SIFT, and the paper is simply discussing a few different ways of doing this and the performance of each.
I would also guess (as you did) that you have to run it on both sets of images. Otherwise you would be using two different metrics to classify the images, which really is like comparing apples to oranges. Comparing a reduced-dimensional representation to the full one (even for the exact same image) will show some variation. In fact, that is the whole premise of PCA: you give up some smaller features (usually) for computational efficiency. The real question with PCA, or any dimensionality reduction algorithm, is how much information you can give up and still reliably classify/segment different data sets.
And as a last point, you have to treat both sets of images the same way, because your end goal is to use the Fisher vector for classification, whether an image is a test or a training one. Now imagine you decided training images don't get PCA and test images do. If I give you some image X, what would you do with it? How could you treat one set of images differently from another BEFORE you've classified them? Using the same technique on both sets means you'd process my image X and then decide where to put it.
Anyway, I hope that helped and wasn't too rant-like. Good luck :-)
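To make that concrete, here is a minimal MATLAB sketch of the pipeline both the question and the answer converge on. Variable names are illustrative; note that pca centres the data by default and returns the mean as its sixth output, so that mean should also be subtracted before projecting new descriptors:

% trainDesc: all training SIFT descriptors stacked row-wise (nDesc-by-128).
[coef, ~, ~, ~, ~, mu] = pca(trainDesc, 'NumComponents', 80);

% Project the descriptors of ANY image (train or test) with the same basis:
reducedDesc = (imgDesc - mu) * coef;   % imgDesc is m-by-128, reducedDesc is m-by-80

% reducedDesc is then encoded into a Fisher Vector against a GMM vocabulary
% that was itself learned on the PCA-reduced training descriptors.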

Different accuracy with HOG, SIFT and dense SIFT descriptors for recognition

Hey guys,
I'm really confused about the results I got for recognition.
When I was using HOG, I got different recognition accuracies depending on the parameters I used. I can understand that.
Maybe it's because I have different viewpoints in the training images and HOG cannot handle this, so the maximum accuracy I got is only 40%.
Then I used SIFT and got a better result, around 70%.
And for dense SIFT, the maximum I got was only 38%.
I don't know why this happened, because dense SIFT should do better than SIFT.
So I tried using PCA to get the principal features from each descriptor, and then combined these principal features to do recognition. But I got an even worse result: only 30%.
PCA HOG (150, 4), PCA SIFT (150, 3) = PCA combination (50, 7)
Why did this happen? Why does dense SIFT give the worst result? And why, when I combine all those principal components (from HOG, SIFT and dense SIFT), do I get an even worse result?
For now, I'm just doing everything on a sample of images: I have 4000 images but I'm only using 150 of them.
I'm using this small sample to try different parameters, and once I have the best parameters I'll use them on the whole training set.
Will that give the same results (compared with the small sample)?
"Maybe it's because I have different viewpoints in the training images and HOG cannot handle this."
The same holds for dense SIFT: if you have different viewpoints, dense methods are not suitable. Try Hessian-Affine as the detector and SIFT as the descriptor; I expect you will get better performance than with plain SIFT.
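If you are in MATLAB, one way to try that combination is VLFeat's covariant feature detector. This is only a sketch: it assumes VLFeat is installed and on the path, the image name is a placeholder, and the vl_covdet option names are my assumption about its interface rather than something stated in this thread:

% Hessian-based detector with affine adaptation, described with SIFT.
im = im2single(rgb2gray(imread('face.jpg')));   % vl_covdet expects single-precision grayscale
[frames, descrs] = vl_covdet(im, ...
    'Method', 'HessianLaplace', ...     % multiscale Hessian detector
    'EstimateAffineShape', true, ...    % affine adaptation -> Hessian-Affine-style regions
    'Descriptor', 'SIFT');              % SIFT descriptor on each normalised patch

% descrs (128-by-N) then feed into the same encoding/classification
% pipeline you used for plain SIFT and dense SIFT.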