Using a CNN on non-image 1-D data

I have a 2-D matrix of dimensions (batch size x 1000). The values are not random and do have meaning in their structure - they are ordered by time.
I wanted to try using a CNN on this data. Is it possible to run a CNN on non-image data such as this? I've been having trouble with it, since tf-slim requires the rank of the input to be >= 3.
From what I understand, CNNs expect a rank-4 input of shape [batch size, height, width, channels]. How can I use tf.reshape() to convert what I have into that form, e.g. [batch size, 1, 1000, 1]?

Related

Matlab - How can I extract features from an image using Partial Least Squares (PLS) regression?

Can anyone help me understand how I can extract features from an image using the Partial Least Squares (PLS) regression function plsregress() in Matlab?
Before this, I used the PCA function princomp() in Matlab to extract features from images. What I have understood is, for example, if we have 20 images, each 50x50 pixels, the steps are:
(1) Construct the input matrix, where each row represents one image; that means the size of the input matrix is [20, 50x50] = [20, 2500].
(2) Call princomp() in Matlab:
[eigenvectors, score, variances] = princomp(inputMatrix);
This returns the eigenvectors, which are the principal components' coefficients (features), the scores of the principal components, and the variances (eigenvalues) of each principal component.
(3) To construct the dataset in the test stage, I used a test dataset consisting of one image of size [50, 50] (flattened to [1, 2500] so it can be multiplied with the coefficients), and used the principal components (eigenvectors) with the highest variances (for example the first 5 components) to reconstruct the dataset.
First, I used this equation to calculate the scores of the test dataset:
test_score = test_dataset * eigenvectors(:, 1:5);
newtest_data = test_score * eigenvectors(:, 1:5)'; % reconstructing the dataset using 5 PCs
My question is: how can I perform the same steps using Partial Least Squares (PLS) regression?
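A minimal sketch of the analogous steps with plsregress() (Statistics Toolbox) might look like the following. Note that PLS, unlike PCA, is supervised, so it needs a response variable Y (e.g. class labels); the variable names and the use of stats.W to project new data are assumptions based on the plsregress documentation (which states XS = X0*stats.W for the mean-centered training matrix X0):
% X: [20 x 2500] training matrix, one flattened image per row
% Y: [20 x 1] response, e.g. class labels - PLS needs one, PCA does not
ncomp = 5;
[XL, YL, XS, YS, BETA, PCTVAR, MSE, stats] = plsregress(X, Y, ncomp);
% XS plays the role of the PCA scores: [20 x 5] PLS features for the training set
test_row = reshape(test_dataset, 1, 2500);      % flatten the [50 x 50] test image
test_score = (test_row - mean(X,1)) * stats.W;  % [1 x 5] PLS features for the test image
newtest_data = test_score * XL' + mean(X,1);    % approximate reconstruction, mirroring the PCA step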

Making feature vector from Gabor filters for classification

My aim is to classify types of cars (sedans, SUVs, hatchbacks). Earlier I was using corner features for classification, but it didn't work out very well, so now I am trying Gabor features.
code from here
Now the features are extracted: when I give an image as input, for 5 scales and 8 orientations I get two [1x40] matrices:
1. 40 columns of squared energy.
2. 40 columns of mean amplitude.
The problem is that I want to use these two matrices for classification, and I have about 230 images of the 3 classes (SUV, sedan, hatchback).
I do not know how to create an [N x 230] matrix that can be taken as vInputs by the neural network in Matlab (where N is the total number of features for one image).
My questions:
How do I create a one-dimensional feature vector from the two [1x40] matrices for one image? (Should I append the mean amplitude to the squared energy matrix to get a [1x80] matrix, or something else?)
Should I be using these Gabor features for my classification purpose in the first place? If not, then what?
Thanks in advance
In general, there is nothing to think about - a simple neural network requires a one-dimensional feature vector and does not care about the ordering, so you can simply concatenate any number of feature vectors into one (even in an arbitrary order, as long as it is the same for every sample - which order you pick does not matter). In particular, if you have the same feature matrices for each image, you can concatenate their rows to create the vectorized format.
The only exception is when your data actually has some underlying geometric dependencies - for example, when the matrix is actually a matrix of pixels. In such cases, architectures like PyraNet, Convolutional Neural Networks and others, which apply receptive fields based on this 2-D structure, should work better. Those implementations simply accept the 2-D feature matrix as input.
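For example, a minimal Matlab sketch of building the [N x 230] matrix (the names energy and amplitude for the two [1x40] Gabor outputs, and the loading loop, are illustrative assumptions):
numImages = 230;
vInputs = zeros(80, numImages);            % N = 80 features per image
for i = 1:numImages
    % ... run the Gabor filter bank on image i to get the two [1x40] rows ...
    vInputs(:, i) = [energy, amplitude]';  % [1x80] concatenation, stored as one column
end
Matlab's neural network tools take one sample per column, which is why each image ends up as a column of vInputs here.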

Gaussian mixture model probability in Matlab

I have data of dimension 50x100000 (100000 data points, each of dimension 50).
I would like to fit a Gaussian mixture model to this data. I used the following code:
obj = gmdistribution.fit(X',3);
What I need is that when I am given new data Y, I should be able to get the likelihoods $p(Y|\theta)$, where $\theta$ are the Gaussian mixture model parameters.
I used the following code to get the probability values:
P = pdf(obj,X');
But I am getting very low values, all close to 0. Why is this happening? How can I get appropriate probability values?
In one dimension, the maximum value of the pdf of the (unit-variance) Gaussian distribution is 1/sqrt(2*pi). So in 50 dimensions, the maximum value is going to be 1/sqrt(2*pi)^50, which is around 1e-20. So the values of the pdf are all going to be of that order of magnitude, or smaller - they are densities, not probabilities, so there is nothing wrong with them being tiny.
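To see the scale concretely, and to compare likelihoods on a log scale (taking the log of the pdf values is a suggestion beyond the original answer, but it is the standard way around this kind of underflow):
1 / sqrt(2*pi)^50            % about 1e-20: the 50-D ceiling mentioned above
logP = log(pdf(obj, Y'));    % log-likelihoods, assuming Y is stored 50-by-M like X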

How to create a training example from an RGB image in Matlab?

I have RGB face images of size 60x60. They are matrices of size 60x60x3 in Matlab.
I need to apply some algorithms to these data, but first I need to create training examples from the images. Given one image of size 60x60x3, I will need to create a vector of size 1x10800. I am not sure: should I interleave the R, G and B values of the pixels, and should I go column by column or row by row?
Thanks
Usually, it does not matter. For instance, an SVM classifier or a neural network perceptron is invariant to any fixed permutation of the input (as long as you apply the same ordering to every image).
If your image is im, just use im(:) to transform it into a column vector.
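For example, a minimal sketch (the ordering described in the comments follows from Matlab's column-major storage of a 60x60x3 array):
% im is one 60x60x3 RGB face image
x = double(im(:))';   % [1 x 10800] training example
% im(:) is column-major: all red values column by column, then all green,
% then all blue - the channels end up in blocks, not interleaved.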

How can I use the princomp function of Matlab in the following case?

I have 10 images (18x18). I save these images inside an array named images[324][10], where 324 is the number of pixels in one image and 10 is the total number of images.
I would like to use these images for a neural network, but 324 is a big number to give as an input, so I would like to decrease it while retaining as much information as possible.
I heard that you can do this with the princomp function, which implements PCA.
The problem is that I haven't found any examples of how to use this function, especially for my case.
If I run
[COEFF, SCORE, latent] = princomp(images);
it runs fine but how can I then get the array newimages[number_of_desired_features][10]?
PCA could be a right choice here (though not the only one). Be aware, however, that PCA does not reduce the number of your input features automatically. I recommend reading this tutorial: http://arxiv.org/pdf/1404.1100v1.pdf - it is the one I used to understand PCA, and it is really good for beginners.
Getting back to your question: an image is a vector in a 324-dimensional space. In this space, the first basis vector is the image with a white pixel in the top-left corner and all other pixels black, the next one has only the second pixel white, and so on. It is probably not the best basis in which to represent the image data. PCA computes new basis vectors (the COEFF matrix - the new vectors expressed as values in the old vector space) and the new image vector values (the SCORE matrix). At that point you have not lost ANY data at all (no reduction in the number of features). But you can stop using some of the new basis vectors, because they are probably connected with noise rather than with the data itself. It is all described in detail in the tutorial.
images = rand(10,324);   % one image per row - princomp expects rows = observations
[COEFF, SCORE] = princomp(images);
reconstructed_images = SCORE / COEFF + repmat(mean(images,1), 10, 1);
images - reconstructed_images
% As you see, there are almost only zeros - the non-zero values are effects of
% small numerical errors. That is possible because you are only switching
% between the sets of basis vectors used to represent the data.
SCORE(:, 100:324) = 0;
% We remove features 100 to 324, leaving only the first 99.
% Obviously, you could take only the non-zero part of the SCORE matrix and use
% it somewhere else, e.g. for your neural network.
reconstructed_images_with_reduced_features = SCORE / COEFF + repmat(mean(images,1), 10, 1);
images - reconstructed_images_with_reduced_features
% There are fewer features, but the reconstruction is still pretty good.
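To get the array the question asks for, newimages[number_of_desired_features][10], you can simply take the retained score columns. Note that the example above stores one image per row (princomp expects rows to be observations), while the question stores one image per column, hence the transpose:
newimages = SCORE(:, 1:99)';   % [99 x 10]: 99 features, one column per image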