Determining movie popularity for upcoming movies with a neural network [closed] - matlab

I have a CSV data set with one movie's details per line: name, budget, revenue, popularity, runtime, rating, votes, date released.
I'm wondering how to split the data set into training, validation, and testing sets, and then how to get some results.
It would be nice to get a brief step-by-step intro on where and how I should begin.

You should use MATLAB's nntool. In your case curve fitting is probably appropriate, so use nftool.
Define your input and output in nftool; it can then randomly divide your data into training, validation, and testing sets. In the nftool GUI you choose how to divide your data (80-10-10 or any other split). Then you follow the interface and set the specifics of the network (e.g. the number of hidden neurons), and train it. After training you can plot the training performance and, depending on the result, retrain or change the number of hidden neurons, the percentage of training data, and so on. (A programmatic version of the split is sketched below.)
You can also check this:
http://www.mathworks.com/help/toolbox/nnet/gs/f9-35958.html
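
For illustration, here is a minimal sketch of the same kind of 80/10/10 random split done programmatically. It is written in Python with pandas and NumPy (an assumption, since the question is MATLAB-based, but the idea carries over), and the file name and seed are made up:

```python
import numpy as np
import pandas as pd

# Hypothetical file name; adjust to your actual CSV of movie details.
df = pd.read_csv("movies.csv")

# Shuffle the row indices once so the split is random but reproducible.
rng = np.random.default_rng(seed=42)
idx = rng.permutation(len(df))

# 80/10/10 boundaries.
n_train = int(0.8 * len(df))
n_val = int(0.9 * len(df))

train = df.iloc[idx[:n_train]]
val = df.iloc[idx[n_train:n_val]]
test = df.iloc[idx[n_val:]]

print(len(train), len(val), len(test))
```

Train only on the training set, use the validation set to decide which settings to keep, and touch the test set once at the end.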

Related

How to compare different models configurations [closed]

I am implementing a neural network model for text classification, and I am trying different RNN and LSTM configurations.
My question: how should I compare these configurations? Should I compare the models using training set accuracy, validation accuracy, or test set accuracy?
I will explain how I finally compared my different RNN models.
First, I used my CPU for model training. This ensures I get the same model parameters on each run, since some GPU computations are non-deterministic.
Second, I used the same TensorFlow seed for each run, to make sure the random numbers generated in each run are the same.
Finally, I used validation accuracy to optimize my hyper-parameters: each run used a different combination of parameters, and I chose the model with the highest validation accuracy as my best model.
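For illustration, here is a minimal sketch of that workflow, assuming TensorFlow 2.x with Keras; the model architecture, hyper-parameter grid, and dummy data are all made up for the example:

```python
import random
import numpy as np
import tensorflow as tf

def set_seeds(seed=0):
    # Fix all relevant RNGs so repeated CPU runs produce the same weights.
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

def build_model(units):
    # Hypothetical small LSTM text classifier, just for illustration.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 32),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Dummy data standing in for your tokenized text (invented for the sketch).
x_train = np.random.randint(0, 10000, (256, 20))
y_train = np.random.randint(0, 2, 256)
x_val = np.random.randint(0, 10000, (64, 20))
y_val = np.random.randint(0, 2, 64)

best_units, best_val_acc = None, -1.0
for units in (32, 64):            # hypothetical hyper-parameter grid
    set_seeds(0)                  # same seed for every configuration
    model = build_model(units)
    hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=2, verbose=0)
    val_acc = max(hist.history["val_accuracy"])
    if val_acc > best_val_acc:
        best_units, best_val_acc = units, val_acc

print("best configuration:", best_units, "val acc:", best_val_acc)
```

The test set stays untouched during this loop; you evaluate it once, on the single configuration that won on validation accuracy.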

Neural network to check plastic parts [closed]

Neural networks are used to generalize and classify.
I have a little experience with classifying digits:
Using neural nets to recognize handwritten digits
I want to use a network to check plastic parts. I have a video stream of the production of these plastic parts.
Should I train the network with many videos of correct plastic parts to get a positive output, and random videos to get a negative output?
If you have any books or links, I would be happy to see them.
EDIT
It looks like I phrased my question badly...
During production, defective plastic parts can be created, and these should be recognized by the network. There are many different mistakes that can happen during production, so I think
it only makes sense to train the network with correct plastic parts.
A convolutional neural network would be my recommendation.
You should show it individual parts with a similar background and lighting.
The training has to be done on both good and bad parts - a sufficiently large random sample of both. You should also set aside a test set, once your CNN is trained, so you can evaluate it.
You'll want to generate a confusion matrix from the test data so you'll know the rates of false positives, false negatives, and correct and incorrect classifications.
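For illustration, here is a minimal sketch of that evaluation step using scikit-learn's confusion_matrix (the library choice and the example labels are assumptions; in practice y_true and y_pred would come from your test set and trained CNN):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 0 = good part, 1 = defective part.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]   # ground-truth test labels
y_pred = [0, 0, 1, 0, 0, 1, 1, 1]   # the CNN's predictions on the same parts

# For two classes, ravel() unpacks the 2x2 matrix into the four counts.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"true negatives={tn}, false positives={fp}, "
      f"false negatives={fn}, true positives={tp}")
```

For a defect detector, the false-negative rate (defective parts classified as good) is usually the number to watch.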

How to choose the number of filters in each Convolutional Layer? [closed]

When building a convolutional neural network, how do you determine the number of filters used in each convolutional layer? I know that there is no hard rule about the number of filters, but from your experience, papers you have read, etc., is there an intuition or observation about the number of filters to use?
For instance (I'm just making this up as an example):
use more/fewer filters as the network gets deeper;
use filters with larger/smaller kernel sizes;
if the object of interest in the image is large/small, use ...
As you said, there are no hard rules for this.
But you can get inspiration from VGG16, for example: it doubles the number of filters from one convolutional block to the next.
For the kernel size, I usually keep 3x3 or 5x5.
But you can also take a look at Inception by Google. They use varying kernel sizes in parallel, then concatenate the results. Very interesting.
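To make the VGG16 pattern concrete, here is a minimal Keras sketch (the framework choice, input size, and class count are assumptions); the filter count doubles from one block to the next:

```python
import tensorflow as tf
from tensorflow.keras import layers

# VGG-style pattern: 3x3 convolutions, pooling between blocks,
# and the number of filters doubling block by block.
model = tf.keras.Sequential([tf.keras.Input(shape=(224, 224, 3))])
for filters in (64, 128, 256, 512):   # doubles at each block
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D())
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(10, activation="softmax"))  # hypothetical 10 classes
model.summary()
```

The intuition behind the doubling: pooling halves the spatial resolution, so widening the channel dimension keeps the capacity per layer roughly balanced.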
As far as I am concerned, there is no fixed depth for the convolutional layers. Just a few suggestions:
In CS231n they mention that using 3x3 or 5x5 filters with a stride of 1 or 2 is a widely used practice.
How many of them: it depends on the dataset. Also, consider using fine-tuning if the data is suitable.
How will the dataset affect the choice? That is a matter of experiment.
What are the alternatives? Have a look at the Inception and ResNet papers for approaches close to the state of the art.

Sentiment Analysis for product rating [closed]

Hi, I am working on a project based on sentiment analysis for product rating.
I have a data set of positive words and negative words. When a user comments on a product on the website, the comment should automatically be rated out of 10.
I am confused about which clustering technique or algorithm would solve my problem. Please help.
Thanks in advance.
You are basically asking us what would be the best classifier for your program, while we have no idea how your data is stored.
However, it seems you only have two classes, positive and negative, and you want to classify new data based on a word-level analysis of it.
I have worked on such a problem before; I used Rocchio's TF-IDF algorithm for the classification. You give it a set of training data (negative and positive words) and it classifies whatever later comes into the system.
It is based on vector classification and the cosine similarity distance measure.
For more information you can read this paper.
You can find an example of how the method works (on very small data) here.
Note: the provided example is a section of a project I worked on.
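For illustration, here is a minimal sketch of the TF-IDF plus cosine-similarity idea in the Rocchio style (scikit-learn and the tiny made-up training texts are assumptions, not the code from the linked project):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny invented training set: comment text plus a sentiment label.
train_texts = ["great product love it", "excellent quality works well",
               "terrible waste of money", "broke after one day awful"]
train_labels = ["positive", "positive", "negative", "negative"]

vectorizer = TfidfVectorizer()
train_vecs = vectorizer.fit_transform(train_texts)

# Rocchio style: represent each class by the centroid of its TF-IDF vectors.
classes = sorted(set(train_labels))
centroids = []
for c in classes:
    rows = [i for i, lbl in enumerate(train_labels) if lbl == c]
    centroids.append(np.asarray(train_vecs[rows].mean(axis=0)).ravel())

# Classify a new comment by cosine similarity to each class centroid.
new_vec = vectorizer.transform(["really great quality"]).toarray()
sims = cosine_similarity(new_vec, np.vstack(centroids))[0]
print(classes[int(sims.argmax())])   # expected: "positive"
```

The similarity score itself could then be rescaled to produce the 0-10 rating the question asks about.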

How can I find the rank for each user? [closed]

I am having trouble understanding how to sort the users according to their rank. I'm using a Convolutional Neural Network (CNN) to develop an iris recognition system, and I have the output of the softmax classifier for the left and the right iris.
What I'm going to do is use one of the rank-fusion methods (e.g. the highest rank method, the Borda count method, or the logistic regression method) to fuse the outputs for the left and right iris. I completely understand how each method works, but I'm stuck on how to find the initial rank for each user. In other words, how can I find the rank for each user before feeding them into any of the ranking methods?
Please, any explanation or idea about this would be highly appreciated. Thank you in advance.
I think that in your case you don't have a global rank for each user, just a ranking of users for each example.
You may treat your classifier's output as a ranking method if it returns a vector of likelihoods that a given iris belongs to each of the users.
Then you can rank the users for the left and right iris separately and fuse the two rankings.
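As an illustration of that fusion step, here is a minimal sketch of a Borda count over two made-up softmax outputs (the scores and the number of enrolled users are invented for the example):

```python
import numpy as np

# Invented softmax outputs over 4 enrolled users, one vector per eye.
left_probs = np.array([0.50, 0.20, 0.20, 0.10])
right_probs = np.array([0.10, 0.60, 0.20, 0.10])

def borda_points(probs):
    # Rank users by likelihood: the best user gets n-1 points,
    # the next gets n-2, and so on down to 0.
    order = np.argsort(-probs)            # user indices, best first
    points = np.empty_like(order)
    points[order] = np.arange(len(probs) - 1, -1, -1)
    return points

# Fuse the two per-eye rankings by summing their Borda points.
fused = borda_points(left_probs) + borda_points(right_probs)
ranking = np.argsort(-fused)              # fused ranking, best first
print("fused user ranking (best first):", ranking)
```

So the initial per-user rank is simply the position of that user's likelihood in the sorted softmax output for each eye.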