I have images of 4 different animals and need to classify them using the Matlab Neural Networks Toolbox. In Matlab's examples (e.g. the iris dataset), the input is a 4x1 feature vector (sepal width, etc.), but I want the input to be the original images. In other words, I want the network itself to extract the features from the images (e.g. via the kernels of a convolutional NN), rather than me computing something like a color histogram or SIFT descriptors myself. Is it possible for the toolbox to do this? Thanks.
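In recent MATLAB releases the Deep Learning Toolbox can train such a network end to end. A minimal sketch, assuming one subfolder per animal and images already resized to 64x64 (the folder name, image size, and layer sizes are illustrative, not from the question):

    % Hedged sketch of a 4-class image classifier where the network learns
    % its own features. 'animalPhotos' and all sizes are assumptions.
    imds = imageDatastore('animalPhotos', 'IncludeSubfolders', true, ...
                          'LabelSource', 'foldernames');   % one subfolder per animal

    layers = [
        imageInputLayer([64 64 3])                     % raw RGB image as input
        convolution2dLayer(3, 16, 'Padding', 'same')   % learned kernels replace hand-made features
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(4)                         % four animal classes
        softmaxLayer
        classificationLayer];

    % net = trainNetwork(imds, layers, trainingOptions('sgdm', 'MaxEpochs', 10));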
Related
I'm trying to create 1x1x198 inputs for my CNN from 100x100x198 data (a hyperspectral image).
I'm using hyperspectral imaging data (Jasper); however, to process the data and plot its endmembers with a convolutional neural network, I have to reshape it into 1x1x198 inputs.
I am trying to write the Matlab code for the following IEEE paper and would appreciate any help with it: Hyperspectral unmixing via deep convolutional neural networks.
Thank you
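A minimal sketch of that reshape, assuming the cube is stored as a plain 100x100x198 array (the variable names and the rand placeholder are illustrative, not from the Jasper data):

    cube = rand(100, 100, 198);      % stand-in for the hyperspectral cube
    [h, w, bands] = size(cube);

    % Move the spectral dimension first, then lay each pixel's spectrum out
    % as one 1x1x198 observation: X(:,:,:,k) is the spectrum of pixel k.
    X = reshape(permute(cube, [3 1 2]), 1, 1, bands, h*w);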
I'd like to know if it's possible to create a neural network that receives multiple input images (multiple imageInputLayers).
For example, a Siamese architecture for computing the disparity (stereo correspondence) from two image patches: the network's input is two images and its output is a scalar that represents the disparity.
Currently, Matlab supports only a single imageInputLayer per neural network.
I'd like to classify a 3D object by projecting it from 3 angles, thereby converting the problem into the classification of 3 images.
I'm trying to create a network that looks like the attached image.
Please let me know what you think and how to handle the network input.
This is simply not possible in Matlab R2018b.
I am making 8x8 tiles of images and I want to train an RBF neural network in Matlab using those tiles as inputs. I understand that I can convert each matrix into a vector and use that. But is there a way to train on them as matrices (to preserve the locality)? Or is there any other technique for solving this problem?
There is no way to use a matrix as the input to such a neural network, but this wouldn't change anything anyway:
Assume you have any neural network with an image as input, one hidden layer, and an output layer. There will be one weight from every input pixel to every hidden unit. All weights are initialized randomly and then trained using backpropagation. The development of these weights does not depend on any local information - it only depends on the gradient of the output error with respect to the weight. A matrix input therefore makes no difference compared to a vector input.
For example, you could make a vector out of the image, shuffle that vector in any way (as long as you do it the same way for all images) and the result would be (more or less, due to the random initialization) the same.
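To make the vectorization from the question concrete, a minimal sketch using the classic newrb RBF network; the tile array, the binary targets, and the newrb parameters are all illustrative assumptions:

    tiles  = rand(8, 8, 500);            % 500 example tiles (placeholder data)
    labels = double(rand(1, 500) > 0.5); % placeholder binary targets

    P = reshape(tiles, 64, []);          % each column is one flattened tile
    T = labels;

    net = newrb(P, T, 0.0, 1.0);         % goal = 0.0, spread = 1.0 (assumptions)
    y   = sim(net, P);                   % predictions on the training tiles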
The way to exploit local structure in the input data is to use a convolutional neural network (CNN), as in the sketch below.
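A minimal sketch of such a network for the 8x8 tiles (the filter count and the two-class output are assumptions); the 3x3 kernels are shared across all positions, which is exactly the locality a fully connected network ignores:

    layers = [
        imageInputLayer([8 8 1])                      % the tile stays a 2-D matrix
        convolution2dLayer(3, 8, 'Padding', 'same')   % 3x3 kernels shared across positions
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(2)                        % assuming two classes
        softmaxLayer
        classificationLayer];

    % net = trainNetwork(XTrain, YTrain, layers, trainingOptions('sgdm'));
    % XTrain: 8x8x1xN array of tiles, YTrain: Nx1 categorical labels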
I was wondering if it is possible to perform deconvolution of images in Caffe using the point spread function of the objective at a given focal point, something along the lines of this approach.
If so, what would be the best way to proceed?
It is possible to deconvolve images using Caffe (and CNNs in general), but the approach may not be as general as you hope it to be.
CNNs can take a blurry image as input and output a sharp image. As the networks are convolutional, the input can be of any size. This can be done easily in Caffe using convolutional layers and a Euclidean loss layer. Optionally, you can experiment with adding some pooling and deconvolution layers.
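The description above is Caffe-specific; since this page is otherwise MATLAB-centric, here is a rough MATLAB equivalent, where regressionLayer plays the role of Caffe's Euclidean loss (the patch size and filter counts are assumptions):

    % Image-to-image regression sketch: blurry patch in, sharp patch out.
    layers = [
        imageInputLayer([64 64 1], 'Normalization', 'none')
        convolution2dLayer(5, 32, 'Padding', 'same')
        reluLayer
        convolution2dLayer(5, 32, 'Padding', 'same')
        reluLayer
        convolution2dLayer(5, 1, 'Padding', 'same')   % reconstruct the sharp patch
        regressionLayer];                             % mean squared error, like Euclidean loss

    % net = trainNetwork(blurryPatches, sharpPatches, layers, ...
    %                    trainingOptions('adam', 'MaxEpochs', 20));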
CNNs can be trained to deconvolve images for a specific blur PSF, as in your link (see [Xu et al.: Deep Convolutional Neural Network for Image Deconvolution. NIPS 2014]). This works well, but you have to re-train the CNN for each new PSF (which takes a lot of time).
I've tried to train CNNs to do blind deconvolution (PSF not known) and it works very well for text documents. You can get trained nets and python-Caffe scripts at [Hradiš et al.: Convolutional Neural Networks for Direct Text Deblurring. BMVC 2015]. This approach could work for other types of images, but it would not work for unrestricted photographs and diverse blurs. For general photos, I would guess it could work only for a small range of blurs.
Another possibility is to do inverse filtering (e.g. using a Wiener filter) and post-process the output with a CNN. The advantage of this is that you can compute the inverse filter for a new PSF very quickly and the CNN stays the same [Schuler et al.: A machine learning approach for non-blind image deconvolution. CVPR 2013].
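A hedged sketch of that two-stage idea in MATLAB: deconvwnr (Image Processing Toolbox) performs the Wiener step, and a pre-trained network 'net' (assumed to exist) would clean up the result:

    psf     = fspecial('gaussian', 15, 2);       % example PSF (an assumption)
    blurred = im2double(imread('blurred.png'));  % placeholder file name
    nsr     = 0.01;                              % assumed noise-to-signal ratio

    prefiltered = deconvwnr(blurred, psf, nsr);  % stage 1: Wiener inverse filter
    % restored = predict(net, prefiltered);      % stage 2: CNN removes the artifacts;
    %                                            % 'net' is trained once, reused for any PSF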
Can I use the SURF features obtained with the MATLAB command detectSURFFeatures as input to a neural network, to train the network to classify/detect objects in images? If yes, how can I cope with the multidimensional data produced by the descriptor? I am using an image set of the same resolution and roughly similar orientation, and I am using only MATLAB.
One way to do this is to use the bag-of-features approach. You discretize the space of SURF descriptors and then compute a histogram of the descriptors in your image. This gives you a single vector that you can use as input to a neural network or any other classifier of your choice.
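A minimal sketch with the Computer Vision Toolbox's bagOfFeatures, which clusters SURF descriptors into a vocabulary and encodes each image as one histogram (the folder name and the pattern-net size are placeholders):

    imds = imageDatastore('myImageFolder', 'IncludeSubfolders', true, ...
                          'LabelSource', 'foldernames');

    bag = bagOfFeatures(imds);               % k-means vocabulary over SURF descriptors
    v   = encode(bag, readimage(imds, 1));   % one fixed-length histogram per image

    % The histograms can feed any classifier, e.g. a shallow pattern net:
    % net = patternnet(10);
    % net = train(net, features', targets');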