How can I find the rank for each user? [closed] - matlab

I am having trouble understanding how to sort users according to their rank. I'm using a Convolutional Neural Network (CNN) to develop an iris recognition system, and I have the output of the softmax classifier for both the left and the right iris.
What I plan to do is use one of the rank-level fusion methods (e.g. the highest rank method, the Borda count method, or the logistic regression method) to fuse the outputs of the left and right iris. I completely understand how each method works, but I am stuck on how to find the initial rank for each user. In other words, how can I find the rank for each user before feeding the ranks into any of the fusion methods?
Any explanation or idea on this would be highly appreciated. Thank you in advance.

I think that in your case you don't have a global rank for each user, just a ranking of users for each example.
You may treat the output of your classifier as a ranking method if it returns a vector of likelihoods of a given iris belonging to each of the users.
Then you may rank the users for the left and right iris separately and fuse the rankings.
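A minimal MATLAB sketch of that idea, assuming hypothetical score vectors scoresL and scoresR (one softmax probability per enrolled user, for the left and right iris): a user's rank is simply their position when the scores are sorted in descending order, and a Borda-count-style fusion then sums the two ranks.

    % Hypothetical softmax outputs for N enrolled users (one probability per user)
    scoresL = [0.10 0.60 0.05 0.25];   % left iris
    scoresR = [0.20 0.45 0.10 0.25];   % right iris

    % Rank users by descending score: rank 1 = most likely identity
    [~, orderL] = sort(scoresL, 'descend');
    rankL(orderL) = 1:numel(scoresL);  % rankL(u) is the rank of user u (left iris)

    [~, orderR] = sort(scoresR, 'descend');
    rankR(orderR) = 1:numel(scoresR);  % rankR(u) is the rank of user u (right iris)

    % Borda-count-style fusion: sum the ranks; the smallest total wins
    fusedScore = rankL + rankR;
    [~, bestUser] = min(fusedScore);

These per-modality rank vectors are the "initial ranks" you would feed into any of the fusion methods you listed.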

How to choose the number of filters in each Convolutional Layer? [closed]

When building a convolutional neural network, how do you determine the number of filters used in each convolutional layer? I know that there is no hard rule about the number of filters, but from your experience, papers you have read, etc., is there an intuition/observation about the number of filters used?
For instance (I'm just making this up as an example):
use more/fewer filters as the network gets deeper;
use larger/smaller filters with large/small kernel sizes;
if the object of interest in the image is large/small, use ...
As you said, there are no hard rules for this.
But you can get inspiration from VGG16 for example.
It doubles the number of filters between successive conv layers.
For the kernel size, I usually keep 3x3 or 5x5.
But you can also take a look at Inception by Google.
They use varying kernel sizes in parallel, then concatenate the results. Very interesting.
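As a rough illustration of that doubling pattern, here is a hypothetical small network written with MATLAB's Deep Learning Toolbox (the input size, filter counts, and class count are made up for the example):

    % Hypothetical layer stack: 3x3 kernels, filter count doubling with depth
    layers = [
        imageInputLayer([64 64 1])
        convolution2dLayer(3, 32, 'Padding', 'same')   % 32 filters
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        convolution2dLayer(3, 64, 'Padding', 'same')   % 64 filters (doubled)
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        convolution2dLayer(3, 128, 'Padding', 'same')  % 128 filters (doubled again)
        reluLayer
        fullyConnectedLayer(10)                        % 10 classes, as an example
        softmaxLayer
        classificationLayer];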
As far as I am concerned, there is no fixed depth (number of filters) for the convolutional layers. Just a few suggestions:
In CS231n they mention that using 3x3 or 5x5 filters with a stride of 1 or 2 is a widely used practice.
How many of them: it depends on the dataset. Also, consider fine-tuning a pretrained network if the data is suitable.
How will the dataset affect the choice? That is a matter of experimentation.
What are the alternatives? Have a look at the Inception and ResNet papers for approaches which are close to the state of the art.

Can clustering be used for predictive analytics? [closed]

I'm still not sure how clustering can be used for predictive analytics.
Can someone tell me how to predict future outcomes from extracted clusters?
Generally, clustering isn't used for prediction but for labeling or analyzing an existing set of data points.
After you use clustering to label your data points and divide them into groups based on common traits, you can run other prediction algorithms on that labeled data to get predictions, as sketched below.
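A minimal MATLAB sketch of that cluster-then-predict idea, using k-means to label the data and nearest-centroid assignment for new points (the data here is made up):

    % Made-up data: 100 points in 2-D
    X = rand(100, 2);
    k = 3;
    [idx, C] = kmeans(X, k);   % idx labels each existing point with a cluster

    % "Predict" for new points by assigning them to the nearest centroid
    Xnew = rand(5, 2);
    [~, labels] = pdist2(C, Xnew, 'euclidean', 'Smallest', 1);

The cluster labels idx could equally be used as targets to train a supervised classifier (e.g. fitcknn) if you need a more flexible predictor than nearest-centroid assignment.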
I don't think clustering leads directly to predictions, except in cases where the clusters are well separated and can be used to make inferences about the data points and the properties of the clusters.

Sentiment Analysis for product rating [closed]

Hi, I am working on a project based on sentiment analysis for product rating.
I have a data set of positive words and negative words. When any user comments on a product on the website, the comment should automatically be rated out of 10.
I am confused about which clustering technique or algorithm would solve my problem. Please help.
Thanks in advance.
You are basically asking us what would be best to use as a classifier for your program while we have no idea how your data is stored.
However, it seems you only have two classes, positive and negative, and you want to classify new data based on word analysis of the data.
I have worked on such a problem before; I used a Rocchio TF-IDF classifier for this kind of classification. You give it a set of training data (negative and positive words) and it classifies whatever later comes into the system.
It is based on vector classification and the cosine similarity distance measure.
For more information you can read this paper.
You can find an example of how the method works (on very small data) here.
Note: the provided example is a section of a project I worked on.
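This is not the Rocchio/TF-IDF method itself, but as a much simpler starting point, here is a minimal MATLAB sketch that scores a comment against positive/negative word lists and maps the balance to a 0-10 rating (the word lists and comment are made up):

    % Made-up word lists and comment
    posWords = ["good" "great" "excellent"];
    negWords = ["bad" "poor" "terrible"];
    comment  = lower("This product is great but the battery is bad and poor");

    tokens  = split(comment);                 % string array of words
    posHits = sum(ismember(tokens, posWords));
    negHits = sum(ismember(tokens, negWords));

    % Map the share of positive hits to a 0-10 rating
    rating = 10 * posHits / max(posHits + negHits, 1);

A vector-space method like Rocchio's would replace the raw hit counts with TF-IDF-weighted vectors and compare them to class centroids with cosine similarity.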

Determining the movie popularity for upcoming movies with neural network [closed]

I have a CSV data set consisting of one movie's details per line.
These are: name, budget, revenue, popularity, runtime, rating, votes, date released.
I'm wondering how to split the data set into training, validation and testing sets?
Then of course, how to get some results?
It would be nice to get a brief step-by-step intro on where/how I should begin.
You should use nntool. In your case I guess curve fitting is appropriate, so use nftool.
Define your input and output in nftool; then you can randomly divide your data into training, validation, and testing sets within the tool. In the nftool GUI you can choose how to divide your data (80-10-10 or anything else). Then you just follow the interface, set the specifics of the network (e.g. the number of hidden neurons), and train it. After training you can plot the training performance, and depending on the performance you can retrain or change the number of hidden neurons, the percentage of training data, and so on.
You can also check this :
http://www.mathworks.com/help/toolbox/nnet/gs/f9-35958.html
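If you prefer to script it rather than click through the GUI, a rough programmatic equivalent uses fitnet (the data and split ratios below are placeholders):

    % Placeholder data: 6 numeric features per movie, popularity as the target
    X = rand(6, 200);                   % e.g. budget, revenue, runtime, rating, votes, year
    T = rand(1, 200);                   % popularity

    net = fitnet(10);                   % 10 hidden neurons
    net.divideParam.trainRatio = 0.8;   % 80-10-10 split, as in the GUI
    net.divideParam.valRatio   = 0.1;
    net.divideParam.testRatio  = 0.1;

    [net, tr] = train(net, X, T);       % tr records which samples went into each set
    Y    = net(X);                      % predictions
    perf = perform(net, T, Y);          % mean squared error by default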

Determining weight matrix [closed]

I need to design a neural network which has the following behavior:
p(1)={0,1,1,1} outputs a(1)={0,1,0,0}
p(2)={1,1,0,1} outputs a(2)={0,0,1,0}
p(3)={0,0,1,0} outputs a(3)={0,0,0,1}
p(4)={0,0,1,1} outputs a(4)={1,1,0,1}
How can I do this? Which type of neural network should I use? Which learning method can be used here?
Thanks.
At first glance it seems as though you could use a simple feedforward neural network with one input layer, one hidden layer, and one output layer. You can use your training data to train the network with the backpropagation algorithm.
See this page for more details:
http://en.wikipedia.org/wiki/Backpropagation
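A minimal MATLAB sketch of that suggestion, training a small feedforward network on the four patterns from the question (the hidden-layer size of 4 is an arbitrary choice):

    % Each column is one pattern from the question
    P = [0 1 0 0;
         1 1 0 0;
         1 0 1 1;
         1 1 0 1];                    % inputs p(1)..p(4)
    A = [0 0 0 1;
         1 0 0 1;
         0 1 0 0;
         0 0 1 1];                    % targets a(1)..a(4)

    net = feedforwardnet(4);          % one hidden layer with 4 neurons
    net.divideFcn = 'dividetrain';    % only 4 samples, so train on all of them
    net = train(net, P, A);           % backpropagation-based training
    Y = round(net(P));                % should reproduce A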