SURF feature input to neural network in MATLAB - matlab

Can I input the SURF features obtained by the MATLAB command (detectSURFFeatures) into a neural network, in order to train the network to classify/detect objects in an image? If yes, how can I cope with the multidimensional data produced by the descriptor? I am using an image set of the same resolution and almost the same orientation. I am using only MATLAB.

One way to do this is to use the bag-of-features approach. You discretize the space of SURF descriptors, then compute a histogram of the descriptors in your image. This gives you a single fixed-length vector that you can use as input to a neural network or any other classifier of your choice.
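A minimal sketch of that approach in MATLAB, assuming the Computer Vision Toolbox and a hypothetical folder of training images:

    % Hypothetical sketch: bagOfFeatures builds a visual vocabulary from SURF
    % descriptors, and encode() turns each image into a fixed-length histogram.
    imds = imageDatastore('trainingImages');                  % assumed image folder
    bag  = bagOfFeatures(imds, 'PointSelection', 'Detector'); % use SURF keypoint detection
    featureVector = encode(bag, readimage(imds, 1));          % 1-by-VocabularySize histogram

The resulting featureVector (one per image) is what you would feed to the network in place of the raw SURF descriptors.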

Related

Thresholding in intermediate layer using Gumbel Softmax

In a neural network, for an intermediate layer, I need to threshold the output. The output of each neuron in the layer is a real value, but I need to binarize it (to 0 or 1). But with hard thresholding, backpropagation won't work. Is there a way to achieve this?
Details:
I have a GAN-like network, i.e. two neural networks trained end-to-end. The output of the first neural network is real-valued, and I need it to be binary. I read that Gumbel Softmax (categorical reparameterization) is used to handle discrete variables in a neural network. Is there a way to use it for my use case? If yes, how? If not, is there another way?
From what I could gather on the internet, Gumbel is a probability distribution, and using it we can generate a discrete distribution. But for my use case, I need a function that takes a real input and outputs a binary value, so I need an activation function of that form. How can I achieve that?
Thanks!
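For reference, a minimal MATLAB sketch of the binary Gumbel-Softmax ("Gumbel-sigmoid") relaxation the question asks about; the function name and temperature are hypothetical:

    % Hypothetical sketch: relaxed binary sampling via the Gumbel-Softmax trick.
    % logits are the real-valued activations to binarize; tau is the temperature.
    function y = gumbelSigmoid(logits, tau)
        g1 = -log(-log(rand(size(logits))));   % Gumbel(0,1) noise for the "1" outcome
        g0 = -log(-log(rand(size(logits))));   % Gumbel(0,1) noise for the "0" outcome
        % Differentiable relaxation; approaches hard {0,1} values as tau -> 0
        y = 1 ./ (1 + exp(-(logits + g1 - g0) ./ tau));
    end

In practice the temperature is annealed toward zero during training, and the output can be hard-thresholded at test time.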

How to create a neural network that receives multiple input images in Matlab

I'd like to know if it's possible to create a neural network that receives multiple input images (imageInputLayer).
For example, a Siamese architecture for computing the disparity (stereo correspondence) from two image patches. The network input is two images and the output is a scalar that represents the disparity.
Currently MATLAB supports only a single imageInputLayer per neural network.
I'd like to classify a 3D object by projecting it from 3 angles, therefore converting the problem into the classification of 3 images.
I'm trying to create a network that looks like the attached image.
Please let me know what you think and how to work things out with the network input.
This is simply not possible in MATLAB R2018b.

3D coordinates as the output of a Neural Network

Neural Networks are mostly used to classify. So, the activation of a neuron in the output layer indicates the class of whatever you are classifying.
Is it possible (and correct) to design an NN to output 3D coordinates? That is, three output neurons, each with values in a range such as [-1000.0, 1000.0].
Yes. You can use a neural network to perform linear regression, and more complicated types of regression, where the output layer has multiple nodes that can be interpreted as a 3-D coordinate (or a much higher-dimensional tuple).
To achieve this in TensorFlow, you would create a final layer with three output neurons, each corresponding to a different dimension of your target coordinates, then minimize the root mean squared error between the current output and the known value for each example.
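In MATLAB's Deep Learning Toolbox, the same idea looks roughly like the sketch below; the layer sizes and the training arrays X and Y are hypothetical, with Y an N-by-3 matrix of target coordinates:

    % Hypothetical sketch: a network whose output layer regresses 3-D coordinates.
    layers = [
        imageInputLayer([64 64 1])
        convolution2dLayer(3, 16, 'Padding', 'same')
        reluLayer
        fullyConnectedLayer(3)      % one output neuron per coordinate (x, y, z)
        regressionLayer];           % mean-squared-error regression loss
    options = trainingOptions('adam', 'MaxEpochs', 20);
    net = trainNetwork(X, Y, layers, options);  % X: 4-D image array, Y: N-by-3 targets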

How to train a Matlab Neural Network using matrices as inputs?

I am making 8 x 8 tiles of images and I want to train an RBF neural network in MATLAB using those tiles as inputs. I understand that I can convert each matrix into a vector and use it. But is there a way to train them as matrices (to preserve locality)? Or is there any other technique to solve this problem?
There is no way to use a matrix as input to such a neural network, but that won't change anything anyway:
Assume you have any neural network with an image as input, one hidden layer, and the output layer. There will be one weight from every input pixel to every hidden unit. All weights are initialized randomly and then trained using backpropagation. The development of these weights does not depend on any local information - it only depends on the gradient of the output error with respect to the weight. Having a matrix input will therefore make no difference to having a vector input.
For example, you could make a vector out of the image, shuffle that vector in any way (as long as you do it the same way for all images) and the result would be (more or less, due to the random initialization) the same.
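As a concrete illustration, a minimal MATLAB sketch of the flattening step for an RBF network (newrb); the variables tiles (an 8x8xN array of patches) and targets are hypothetical:

    % Hypothetical sketch: flatten each 8x8 tile into a 64-element column vector.
    P = reshape(tiles, 64, []);     % one column per tile
    T = targets;                    % 1-by-N vector of desired outputs
    net = newrb(P, T, 0.0, 1.0);    % error goal = 0.0, spread = 1.0
    y = sim(net, P);                % network response for the training tiles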
The way to handle local structure in the input data is to use convolutional neural networks (CNNs).

Classification using Matlab neural networks toolbox - image input?

I have images of 4 different animals and need to do classification using the MATLAB Neural Networks Toolbox. In MATLAB's examples (Iris), the input data takes the form of a 4x1 vector (sepal width, etc.), but I want the input to be the original images. In other words, I want the neural network to extract the features from the images itself (e.g. the kernels in a convolutional NN), rather than me using something like a color histogram or SIFT. Is it possible for the toolbox to do the job? Thanks.
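For what it's worth, a minimal sketch of a CNN that learns its own features, using MATLAB's Deep Learning Toolbox; the folder name, image size, and layer sizes are assumptions, and the images are assumed to be stored one folder per class and already resized to 64x64x3:

    % Hypothetical sketch: a small CNN for 4-class image classification.
    imds = imageDatastore('animals', 'IncludeSubfolders', true, ...
                          'LabelSource', 'foldernames');
    layers = [
        imageInputLayer([64 64 3])
        convolution2dLayer(3, 16, 'Padding', 'same')
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(4)      % one output per animal class
        softmaxLayer
        classificationLayer];
    options = trainingOptions('sgdm', 'MaxEpochs', 10);
    net = trainNetwork(imds, layers, options);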