Bag of Words Representation [closed] - matlab

I would like to implement a bag-of-words representation for my project. I computed a codebook of visual words from the images' features and descriptors, then obtained the cluster centers using k-means. For the bag-of-words representation part, the assignment says to use the manually labeled segments provided as part of the dataset. In the dataset, there are three different binary masks for each image. Are those binary masks the labeled segments? If so, how do I use them together with the computed visual words?

The bag of words approach provides a concise representation of an image or a part of an image. That representation is typically used as an input to a classification algorithm which is used to estimate the class to which the image data belongs. Typically, the classifier is a supervised learning method which will require pairs (descriptor, label) from some training set during the training process. In your case, the descriptor is the BOW representation of the image data from your training set. Then, during testing you will feed the BOW descriptor of new image data to the classifier to infer the class.
From what I understand, the fact that you have three different masks for the images means that you also have three classes. Each mask then tells you which part of an image should be considered image data belonging to a particular class. This is your training data.
Under that assumption, you should extract the parts of the images that correspond to each mask, compute the BOW representation for those image parts (separately for each mask) and use those with the mask number as a label to train the classifier.
This will later allow you, for example, to use a sliding-window approach to classify parts of a test image as belonging to one of the 3 classes used during training. That would be a simple case of a detection problem.
I am not sure I understood your problem correctly, but I hope this helps you move forward a bit.
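As a minimal Python sketch of the steps described above (the names here are assumptions, not from the original post: codebook is the k-means result, while training_set and extract_descriptors are hypothetical placeholders for your data and for a helper that returns the local descriptors falling inside a binary mask):

    import numpy as np
    from scipy.cluster.vq import vq          # nearest-centroid assignment
    from sklearn.svm import SVC

    def bow_histogram(descriptors, codebook):
        """Map local descriptors to visual words; return a normalized histogram."""
        words, _ = vq(descriptors, codebook)  # index of the nearest cluster center
        hist, _ = np.histogram(words, bins=np.arange(len(codebook) + 1))
        return hist / max(hist.sum(), 1)

    # Hypothetical training loop: three binary masks per image, mask index = class label.
    X, y = [], []
    for image, masks in training_set:
        for label, mask in enumerate(masks):
            descs = extract_descriptors(image, mask)   # assumed helper
            X.append(bow_histogram(descs, codebook))
            y.append(label)

    clf = SVC(kernel='linear').fit(np.vstack(X), np.array(y))

The same classifier can then score BOW histograms computed over sliding windows of a test image.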


In this case, what's better: classification or clustering? [closed]

I collected data from different sources (Facebook, Twitter, LinkedIn) and converted it into a structured format. As a result, I now have a CSV file with 10000 rows (one per person) containing their names, ages, interests, and buying habits.
I'm really stuck on this step: CLASSIFICATION or CLUSTERING. For classification, I don't really have predefined classes or a model for classifying my users.
For clustering, I started computing similarities and running KMeans, but I still can't get the result I want. How do I decide which to choose before moving on to the next step, collaborative filtering?
Foremost, you have to understand that clustering is a pre-processing activity/task. The idea of clustering is to identify objects with similar properties and group them. You can picture the process as herding: loose cattle (read: data points) are driven into groups.
Note: the partitioning clustering algorithm family includes k-means, k-modes, k-prototypes, etc. K-means works only for numerical data, k-modes only for categorical data, and k-prototypes for both numerical and categorical data.
Question: is the data preprocessed? If the answer is no, then you may try the following steps:
Is the data (column values) all in categorical (= text) format, all numerical, or mixed?
a. If all categorical, then discretize or bin or interval-scale them.
b. If mixed, then discretize or bin or interval-scale the categorical values only.
c. Perform missing-value and outlier treatment for both numerical and categorical data. This helps retain maximum variance as well as reduce dimensionality.
d. Normalize the numerical values to a median of zero.
Now apply a suitable clustering algorithm (based on your problem) to find the patterns. Once you have found and labelled the patterns, a classification algorithm can then be used to assign any new incoming data points to the appropriate class.
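A minimal Python sketch of that cluster-then-classify flow (assuming the rows have already been preprocessed into a numeric matrix; the number of clusters and the choice of classifier are illustrative, not part of the question):

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    # X: preprocessed numeric feature matrix, one row per person (placeholder data).
    X = np.random.rand(10000, 8)

    X_scaled = StandardScaler().fit_transform(X)   # scale before distance-based clustering

    # Step 1: discover groups.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)
    labels = kmeans.labels_                        # pattern id per person

    # Step 2: after inspecting and naming the clusters, train a classifier on
    # them so new incoming users can be assigned to a group directly.
    clf = RandomForestClassifier(random_state=0).fit(X_scaled, labels)
    print(clf.predict(X_scaled[:1]))               # stand-in for a new data point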

Caffe CNN: diversity of filters within a conv layer [closed]

I have the following theoretical questions regarding the conv layer in a CNN. Imagine a conv layer with 6 filters (for example, a conv1 layer with 6 filters).
1) What guarantees the diversity of the learned filters within a conv layer? (I mean, how does the learning/optimization process make sure that it does not learn the same, or very similar, filters?)
2) Is diversity of filters within a conv layer a good thing or not? Is there any research on this?
3) During learning (the optimization process), is there any interaction between the filters of the same layer? If yes, how?
1.
Assuming you are training your net with SGD (or a similar backprop variant), the fact that the weights are initialized at random encourages them to be diverse: since the gradient of the loss w.r.t. each randomly initialized filter is usually different, the gradients will "pull" the weights in different directions, resulting in diverse filters.
However, nothing guarantees diversity. In fact, filters sometimes become tied to each other (see GrOWL and references therein) or drop to zero.
2.
Of course you want your filters to be as diverse as possible to capture all sorts of different aspects of your data. Suppose your first layer only has filters responding to vertical edges: how is your net going to cope with classes containing horizontal edges (or other types of texture)?
Moreover, if several filters are the same, why compute the same responses twice? That is highly inefficient.
3.
Using "out-of-the-box" optimizers, the learned filters of each layer are independent of each other (linearity of gradient). However, one can use more sophisticated loss functions/regularization methods to make them dependent.
For instance, using group Lasso regularization, can force some of the filters to zero while keeping the others informative.
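As a rough numpy sketch of both points (the filter shapes and the regularization weight are assumptions; this illustrates the idea, not the exact method of any cited paper):

    import numpy as np

    # Six 3x3 filters over 3 input channels, as in the conv1 example above.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((6, 3, 3, 3))

    # Diversity check (points 1-2): pairwise cosine similarity of flattened filters.
    F = W.reshape(6, -1)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    print(np.round(F @ F.T, 2))      # entries near +/-1 mean near-duplicate filters

    # Group Lasso penalty (point 3): one group per filter; adding the sum of
    # per-filter L2 norms to the loss pushes whole filters toward exactly zero.
    lam = 1e-3
    group_lasso = lam * sum(np.linalg.norm(W[i]) for i in range(W.shape[0]))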

How to choose the number of filters in each Convolutional Layer? [closed]

When building a convolutional neural network, how do you determine the number of filters used in each convolutional layer? I know there is no hard rule about the number of filters, but from your experience, papers you have read, etc., is there an intuition or observation about the number of filters to use?
For instance (I'm just making this up as an example):
use more/less filters as the network gets deeper.
use larger/smaller filters with large/small kernel sizes.
If the object of interest in the image is large/small, use ...
As you said, there are no hard rules for this.
But you can get inspiration from VGG16, for example.
It doubles the number of filters between conv blocks.
For the kernel size, I usually keep 3x3 or 5x5.
But you can also take a look at Inception by Google.
They use varying kernel sizes and then concatenate the results. Very interesting.
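For instance, a minimal PyTorch sketch of that VGG-style doubling pattern (the exact depths here are illustrative, not a full VGG16):

    import torch.nn as nn

    # Channel width doubles block by block, VGG-style: 64 -> 128 -> 256 -> 512.
    features = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(256, 512, kernel_size=3, padding=1), nn.ReLU(),
    )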
As far as I am concerned, there is no fixed rule for the number of filters in the convolutional layers. Just a few suggestions:
In CS231n they mention that using 3x3 or 5x5 filters with a stride of 1 or 2 is widely used practice.
How many of them: depends on the dataset. Also, consider using fine-tuning if the data is suitable.
How will the dataset affect the choice? A matter of experimentation.
What are the alternatives? Have a look at the Inception and ResNet papers for approaches that are close to the state of the art.

Facial features extraction in MATLAB [closed]

I have a project in which I need to make a neural network for face recognition.
The inputs of the network should be features of the face that needs to be recognized.
I searched a lot and found the SURF detector in MATLAB's Computer Vision Toolbox to be the one that will help me extract features of a face. But the SURF detector extracts keypoints of the face and, for each of them, produces a vector with 64 or 128 values. The problem is that the number of keypoints varies, and I need it to be the same for each face to be able to feed the inputs of the neural network.
So I thought to extract only some features that can be represented as a single number, like the proportions of the nose, mouth, and eyes to the face, or the distance between the eyes, etc.
How can I get these features, and will they be good enough to serve as inputs to a neural network that recognizes faces? The output of the neural network will have the same number of neurons as there are different people in the database, and in the training phase I'm going to feed the network with face features extracted from a photo; if it is a photo of, say, the third of five people in the database, my output layer will look like [0,0,1,0,0].
Is this a good approach, and can you give me some code that extracts those face features in MATLAB?
Proportions of the nose/mouth/eyes to the face and the distance between the eyes will give you very bad results. Those measures are not accurate or distinctive enough.
If you're looking for features for face recognition, you should consider LBP:
http://www.scholarpedia.org/article/Local_Binary_Patterns#Face_description_using_LBP
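The question asks for MATLAB, but here is a hedged sketch of the idea in Python with scikit-image (the grid size and LBP parameters are typical choices from the LBP face literature, not requirements). The concatenated per-cell histograms give every face a descriptor of the same fixed length, which addresses the varying-keypoint-count problem from the question:

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_face_descriptor(gray_face, grid=(7, 7), P=8, R=1):
        """Fixed-length descriptor: concatenated LBP histograms over a cell grid."""
        lbp = local_binary_pattern(gray_face, P, R, method='uniform')
        n_bins = P + 2                       # uniform patterns plus one catch-all bin
        h, w = gray_face.shape
        cells = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = lbp[i*h//grid[0]:(i+1)*h//grid[0],
                           j*w//grid[1]:(j+1)*w//grid[1]]
                hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
                cells.append(hist / max(hist.sum(), 1))
        return np.concatenate(cells)         # 7*7*10 = 490 values for every face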

Ideas for extracting features of an object using keypoints of image

I'd appreciate it if you could help me create a feature vector for a simple object using keypoints. For now, I use the ETH-80 dataset; the objects have a mostly blue background and the pictures are taken from different views.
After creating a feature vector, I want to train a neural network with this vector and use that neural network to recognize an input image of an object. I don't want to make it complex; input images will be as simple as the training images.
I asked similar questions before, and someone suggested using the average value of the 20x20 neighborhood of each keypoint. I tried it, but it doesn't seem to work with the ETH-80 images because of the different views. That's why I'm asking another question.
SURF or SIFT. Look for interest point detectors. A MATLAB SIFT implementation is freely available.
Update: Object Recognition from Local Scale-Invariant Features
SIFT and SURF features consist of two parts, the detector and the descriptor. The detector finds the point in some n-dimensional space (4D for SIFT); the descriptor is used to robustly describe the surroundings of said points. The latter is increasingly used for image categorization and identification in what is commonly known as the "bag of words" or "visual words" approach. In its simplest form, one can collect all data from all descriptors from all images and cluster them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters. The centroids of these clusters, i.e. the visual words, can be used as a new descriptor for the image. The VLFeat website contains a nice demo of this approach, classifying the Caltech 101 dataset:
http://www.vlfeat.org/applications/apps.html#apps.caltech-101
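A compact Python sketch of that pipeline (using OpenCV's SIFT; the file list and vocabulary size are placeholders, not from the answer):

    import numpy as np
    import cv2
    from scipy.cluster.vq import kmeans, vq

    sift = cv2.SIFT_create()

    def descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        return desc

    # 1. Collect the descriptors of all training images and cluster them.
    paths = [...]                            # ETH-80 image files (placeholder)
    all_desc = np.vstack([descriptors(p) for p in paths]).astype(np.float32)
    codebook, _ = kmeans(all_desc, 100)      # 100 visual words (assumption)

    # 2. Each image becomes a fixed-length histogram of visual words -- a
    #    suitable input vector for the neural network mentioned in the question.
    def bow_vector(path):
        words, _ = vq(descriptors(path).astype(np.float32), codebook)
        hist, _ = np.histogram(words, bins=np.arange(len(codebook) + 1))
        return hist / max(hist.sum(), 1)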