So here is the setup: I have a set of images (split into train and test) and I want to train a conv net that tells me whether or not a specific object is present in the image.
To do this, I followed the TensorFlow tutorial on MNIST, and I trained a simple conv net restricted to the area of interest (the object), training on images of size 128x128. The architecture is as follows: three successive blocks, each consisting of 2 conv layers and 1 max-pool down-sampling layer, followed by one fully connected softmax layer (with two classes, 0 and 1, for whether the object is present or not).
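For reference, here is a minimal Keras sketch of that architecture; only the layer widths (32, 64, 128, 1024) come from the post, while kernel sizes, activations, and the 3-channel input are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_net():
    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        # Block 1: two conv layers followed by max pooling
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        # Block 2
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        # Block 3
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        # Fully connected head: two classes (object present / absent)
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])
    return model
```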
I implemented it using TensorFlow, and it works quite well, but since I have enough computing power I was wondering how I could increase the complexity of the model:
- adding more layers ?
- adding more channels at each layer? (currently 32, 64, 128, and 1024 for the fully connected layer)
- anything else ?
But the most important part is that I now want to detect this same object in larger images (roughly 600x600, whereas the size of the object should be around 100x100).
I was wondering how I could use the previously trained "small" network, built for small images, to pretrain a larger network on the large images. One option would be to classify the image using a sliding window of size 128x128 and scan the whole image, as sketched below, but if possible I would like to try training a whole network on it.
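To make the sliding-window option concrete, here is a minimal sketch; the function name, window size, and stride are illustrative, and `model` is assumed to be the trained 128x128 classifier:

```python
import numpy as np

def sliding_window_scores(model, image, win=128, stride=32):
    """Scan a large image with a win x win window and return the
    per-window probability that the object is present."""
    crops, positions = [], []
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            crops.append(image[y:y + win, x:x + win])
            positions.append((y, x))
    # Batch all crops through the network in one call
    probs = model.predict(np.stack(crops))[:, 1]  # P(object present)
    return positions, probs
```

Overlapping positive windows can then be merged, e.g. by keeping only the highest-scoring ones.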
Any suggestion on how to proceed? Or an article / resource tackling this kind of problem? (I am really new to deep learning, so sorry if this is a stupid question...)
Thanks !
I suggest that you continue reading on the field overall. Your search keywords include CNN, image classification, neural net, AlexNet, GoogLeNet, and ResNet. These searches will return many articles, online classes and lectures, and other materials to help you learn about classification with neural nets.
Don't just add layers or filters: the complexity of the topology (net design) must be fitted to the task; a net that's too complex will overfit the training data. The one you've been using is probably LeNet; the three I cite above were built for the ImageNet image classification contest.
Since you are working on images, I would suggest you use a pretrained image classification network (like VGG, AlexNet, etc.) and fine-tune it with your 128x128 image data. In my experience, unless you have a very large data set, a fine-tuned network will give better accuracy and also save training time. After building a good image classifier on your data set, you can use any popular algorithm to generate region proposals from the image. Then pass each region proposal to the classification network one by one and check whether the network classifies it as positive or negative. If it classifies a proposal as positive, your object is most probably present in that region; otherwise it is not. If the classifier marks many region proposals as positive, you can use non-maximum suppression to reduce the number of positive proposals.
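As a hedged sketch of the fine-tuning step described above, using Keras with VGG16 (one of the suggested backbones); the head sizes, optimizer, and freezing strategy are assumptions, not a definitive recipe:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load VGG16 pretrained on ImageNet, without its classification head
base = VGG16(weights="imagenet", include_top=False, input_shape=(128, 128, 3))
base.trainable = False  # freeze the pretrained features at first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),  # object present / absent
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)
# Optionally unfreeze the top conv block afterwards and fine-tune
# with a low learning rate.
```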
I am a beginner in deep learning. I am using a deep neural network (DNN) for image segmentation, and I have a few doubts. My input image size is 512x512. 1. I want to select 6 kernels of 5x5 pixels. I could not understand how I should select these kernels; is there any standard kernel available? If yes, please tell me. 2. How can I take a patch of an image? Is it like manually cropping some part of the original image?
A very good paper for CNN-based segmentation is "Fully Convolutional Networks for Semantic Segmentation" by J. Long et al., and the authors released their pre-trained networks. They can be found on the Caffe model zoo page. They also released their code (in Caffe), so it is possible to train or fine-tune models on new segmentation problems.
Note that these models directly learn the "complete" segmentation of the images. They do not rely on sampled image patches with a single class as output like previous classification-based approaches.
Good afternoon! In the first stage, the input layer of a convolutional neural network receives a source image (here, an image of a handwritten English letter). We then use an nxn window that slides from left to right across the image, multiplying each patch by a kernel (convolution matrix) to build feature maps. But nowhere is it written what exact values the kernel should have (in other words, by what kernel values I should multiply the data retrieved from the nxn window). Is it suitable to multiply the data by a convolution kernel intended for edge detection? There are numerous convolution kernels (emboss, Gaussian filter, edge detection, angle detection, etc.), but nowhere is it written by what exact kernel the data should be multiplied for detecting handwritten symbols.
Sample 3x3 edge-detection kernel
Convolution operation with a kernel
In addition, if the size of the entire image is 30x30, is it possible to use a 5x5 window for building the feature maps? Would that be sufficient to reach optimal precision of letter detection?
By what exact kernel is it best to multiply the area of the image for maximum precision of letter recognition? Or are all values in the kernel initially equal to 0? Could I also ask what formula or rule is applied to determine the overall number of feature maps to be built? Or, if the task is letter recognition for the English language, must there be exactly 25 feature maps at each stage of the feature-map building process? Thank you for your reply!
In a CNN, the convolutional kernel is a shared weight matrix and is learned in a similar way to the other weights. It is initialized in the same way, with small random values, and the weight deltas from backpropagation are summed across all the features that receive its output (i.e. usually all "pixels" in the output of the convolutional layer).
A typical random kernel will perform a little like an edge detector.
After training, the first CNN layer can be displayed and will often have learned some kernels that can be interpreted if you are familiar with image processing.
There is a nice animated view of kernel features being learned here: http://cs.nyu.edu/~yann/research/sparse/
In short, the answer is this: there is no need to look for the correct kernels to use. Instead, look for a CNN library where you set parameters such as the number of convolutional layers, and research the way to view the kernels as they learn - most CNN libraries have a documented way to visualise them.
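As a minimal sketch of such a visualisation, assuming a trained Keras model whose first layer is a Conv2D:

```python
import matplotlib.pyplot as plt

# Assumes `model` is a trained Keras CNN whose first layer is a Conv2D
kernels, biases = model.layers[0].get_weights()
# `kernels` has shape (kh, kw, in_channels, out_channels)
n = kernels.shape[-1]
for i in range(n):
    plt.subplot(1, n, i + 1)
    # Show the learned kernel for the first input channel
    plt.imshow(kernels[:, :, 0, i], cmap="gray")
    plt.axis("off")
plt.show()
```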
I read some books but still cannot work out how I should organize the network. For example, I have a PGM image of size 120*100; what should the input look like (a one-dimensional array of size 120*100)? And how many nodes should I use?
It's typically best to organize your input image as a 2D matrix. The reason is that the layers at the lower levels of the neural networks used in machine perception tasks are typically locally connected. For example, each neuron of the first layer of such a neural net will only process the pixels of a small NxN patch of the input image. This naturally leads to a 2D structure which can be more easily described with 2D matrices.
For a detailed explanation I'll refer you to the DeepFace paper, which describes the state of the art in face recognition systems.
A 120*100 one-dimensional vector is fine. The locations of the pixel values in that vector do not matter, because all nodes are fully connected to the nodes in the next layer anyway. But you must be consistent with their locations between training, validation, and testing.
The most successful approach so far has been a convolutional neural network with 2D input, just as #benoitsteiner stated. For a far simpler example I'd refer you to LeNet-5, a small neural network developed for MNIST hand-written digit recognition. It is used in EBLearn for face recognition with quite good results.
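To make the two input layouts concrete, here is a minimal Keras sketch; the hidden sizes and the 10-class output are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Option 1: flatten the 120x100 image into a 12000-element vector for a
# fully connected net (pixel order must stay consistent across splits)
dense_net = models.Sequential([
    layers.Input(shape=(120 * 100,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# Option 2: keep the 2D structure for a convolutional net, which exploits
# the local neighbourhood of each pixel
conv_net = models.Sequential([
    layers.Input(shape=(120, 100, 1)),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
```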
I'm new to the topic of neural networks. I came across the two terms convolutional neural network and recurrent neural network.
I'm wondering if these two terms are referring to the same thing, or, if not, what would be the difference between them?
The differences between CNNs and RNNs are as follows:
CNN:
CNNs take fixed-size inputs and generate fixed-size outputs.
A CNN is a type of feed-forward artificial neural network - a variation of the multilayer perceptron designed to use minimal amounts of preprocessing.
CNNs use a connectivity pattern between their neurons that is inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field.
CNNs are ideal for image and video processing.
RNN:
RNNs can handle arbitrary input/output lengths (see the sketch after these lists).
RNNs - unlike feedforward neural networks - can use their internal memory to process arbitrary sequences of inputs.
Recurrent neural networks use time-series information, i.e. what I spoke last will impact what I speak next.
RNNs are ideal for text and speech analysis.
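A minimal Keras sketch of this fixed-size vs. arbitrary-length distinction (layer sizes are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# CNN: the input shape is fixed at build time (e.g. 32x32 RGB images)
cnn = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# RNN: `None` in the time dimension lets it consume sequences of any length
rnn = models.Sequential([
    layers.Input(shape=(None, 8)),   # (timesteps, features)
    layers.LSTM(32),
    layers.Dense(10, activation="softmax"),
])
```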
Convolutional neural networks (CNNs) are designed to recognize images. They have convolutions inside, which detect features such as the edges of an object in the image. Recurrent neural networks (RNNs) are designed to recognize sequences, for example a speech signal or a text. A recurrent network has cycles inside, which imply the presence of short-term memory in the net. We have applied both CNNs and RNNs, choosing an appropriate machine learning algorithm, to classify EEG signals for BCI: http://rnd.azoft.com/classification-eeg-signals-brain-computer-interface/
These architectures are completely different, so it is rather hard to say "what is the difference", as the only thing they have in common is that they are both neural networks.
Convolutional networks are networks with overlapping "receptive fields" performing convolution tasks.
Recurrent networks are networks with recurrent connections (going in the opposite direction of the "normal" signal flow) which form cycles in the network's topology.
Apart from what others have said: in a CNN we generally slide a 2D square window along the axes and convolve it (with the original 2D input image) to identify patterns.
In an RNN we use previously calculated memory. If you are interested, look at the LSTM (Long Short-Term Memory), which is a special kind of RNN.
Both CNNs and RNNs have one point in common: they detect patterns and sequences, which means you can't shuffle the bits of a single input.
Convolutional neural networks (CNNs) are used for computer vision, and recurrent neural networks (RNNs) for natural language processing.
Although both can be applied in other areas, RNNs have the advantage that signals can travel in both directions, thanks to loops introduced in the network.
Feedback networks are powerful and can become extremely complicated. Computations derived from the previous input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic: their state changes continuously until it reaches an equilibrium point.
First, we need to know that a recursive NN is different from a recurrent NN.
By Wikipedia's definition,
A recursive neural network (RNN) is a kind of deep neural network created by applying the same set of weights recursively over a structure
In this sense, a CNN is a type of recursive NN.
On the other hand, a recurrent NN is a type of recursive NN based on time differences.
Therefore, in my opinion, CNNs and recurrent NNs are different, but both are derived from recursive NNs.
This is the difference between CNNs and RNNs:
Convolutional Neural Network:
In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. ... They have applications in image and video recognition, recommender systems, image classification, medical image analysis, and natural language processing.
Recurrent Neural Networks:
A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs.
It is more helpful to describe the convolution and recurrent layers first.
Convolution layer:
Includes input, one or more filters (as well as subsampling).
The input can be one-dimensional or n-dimensional (n>1); for example, it can be a two-dimensional image. One or more filters are also defined in each layer, and the input is convolved with each filter. The method of convolution is almost the same as the convolution of filters in image processing. In general, the purpose of this part is to extract the features of each filter from the input. The output of each convolution is called a feature map.
For example, consider a filter for horizontal edges: the result of its convolution with the input is the extraction of the horizontal edges of the input image. Usually, in practice and especially in the first layers, a large number of filters (for example, 60 filters in one layer) are defined. Also, after the convolution, a subsampling operation is usually performed; for example, the maximum or the average of each pair of neighbouring values is selected.
The convolution layer allows important features and patterns to be extracted from the input, and removes (linear and nonlinear) dependencies in the input data.
[The following figure shows an example of the use of convolutional layers and pattern extraction for classification.][1]
[1]: https://i.stack.imgur.com/HS4U0.png "Kalhor, A. (2020). Classification and Regression NNs. Lecture."
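As a toy illustration of the convolution and subsampling described above, here is a sketch in TensorFlow; the Sobel-like horizontal-edge kernel is hand-crafted for the example, whereas a CNN would learn its kernels:

```python
import numpy as np
import tensorflow as tf

# A hand-made horizontal-edge filter (Sobel-like); in a CNN such
# filters are learned rather than fixed
kernel = np.array([[-1., -2., -1.],
                   [ 0.,  0.,  0.],
                   [ 1.,  2.,  1.]], dtype=np.float32)

image = np.random.rand(1, 64, 64, 1).astype(np.float32)  # dummy input
k = kernel.reshape(3, 3, 1, 1)                            # (kh, kw, in, out)

# Convolution produces a feature map; max pooling subsamples it
feature_map = tf.nn.conv2d(image, k, strides=1, padding="SAME")
pooled = tf.nn.max_pool2d(feature_map, ksize=2, strides=2, padding="SAME")
print(feature_map.shape, pooled.shape)  # (1, 64, 64, 1) (1, 32, 32, 1)
```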
Advantages of convolutional layers:
They can remove correlations and reduce input dimensions.
They improve the network's generalization.
They increase the network's robustness against changes, because they extract key features.
They are very powerful and widely used in supervised learning.
...
Recurrent layers:
In these layers, the output of the current layer, or the output of later layers, can also be used as the input of the layer. Such layers can also receive time series as input.
Without a recurrent layer, the output is as follows (a simple example):
y = f(W * x)
where x is the input, W the weights, and f the activation function.
But in recurrent networks it can be as follows:
y(1) = f(W * x)
y(2) = f(W * y(1))
y(3) = f(W * y(2))
... until convergence
This means that in these networks the generated output can be used as an input again, so the network has memory. Some recurrent networks, such as the discrete Hopfield net and the recurrent auto-associative net, are simple; others, such as the LSTM, are complex.
An example is shown in the image below.
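As a toy numeric illustration of the y = f(W * y) iteration above (the sizes, the tanh activation, and the convergence test are assumptions):

```python
import numpy as np

# The output is fed back as input until it stops changing
# (a Hopfield-style recurrence)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
x = rng.normal(size=4)

f = np.tanh                      # activation function
y = f(W @ x)                     # first step uses the external input
for _ in range(100):
    y_next = f(W @ y)            # later steps reuse the output
    if np.allclose(y, y_next, atol=1e-6):
        break                    # converged to an equilibrium point
    y = y_next
print(y)
```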
Advantages of Recurrent Layers:
They have memory capability
They can use time series as input.
They can use the generated output for later use.
They are widely used in machine translation, voice recognition, and image description.
...
Networks that use convolutional layers are called convolutional networks (CNNs). Similarly, networks that use recurrent layers are called recurrent networks (RNNs). It is also possible to use both kinds of layers in one network, depending on the desired application!