Neural networks are used to generalize and classify.
I have a little experience with classifying digits, following:
Using neural nets to recognize handwritten digits
Now I want to use a network to check plastic parts.
I have a video stream of these plastic parts coming off the production line.
Should I train the network with many videos of correct plastic parts to get a positive output, and random videos to get a negative output?
If you have any books or links, I would be happy to see them.
EDIT
It looks like my question was not clear enough.
During production, defective plastic parts can be created, and these should be recognized by the network. Many different defects can occur during production, so I think it only makes sense to train the network with correct plastic parts.
A convolutional neural network would be my recommendation.
You should show the network individual parts with consistent background and lighting.
The training has to be done on both good and bad parts, with a sufficiently large random sample of each. You should also set aside a test set, not used during training, so you can evaluate the CNN afterwards.
You'll want to generate a confusion matrix from the test data so you'll know the rate of false positives, false negatives, correct, and incorrect classifications.
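Below is a minimal sketch of the kind of small CNN plus confusion matrix described above, written in Python with Keras and scikit-learn. The random arrays stand in for real frames extracted from the video stream, and the layer sizes are placeholders rather than a tuned architecture.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix

# Placeholder data: replace with real frames from the video stream,
# labelled 0 = good part, 1 = defective part.
x = np.random.rand(200, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=200)

# Hold out a test set so the trained CNN can be evaluated on unseen parts.
x_train, x_test = x[:160], x[160:]
y_train, y_test = y[:160], y[160:]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "defective"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# Confusion matrix on the held-out set: rows are true labels, columns are
# predictions, which gives the false positive / false negative rates.
y_pred = (model.predict(x_test) > 0.5).astype(int).ravel()
print(confusion_matrix(y_test, y_pred))
```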
When building a convolutional neural network, how do you determine the number of filters used in each convolutional layer? I know that there is no hard rule about the number of filters, but from your experience, papers you have read, etc., is there an intuition or observation about the number of filters to use?
For instance (I'm just making these up as examples):
Use more/fewer filters as the network gets deeper.
Use larger/smaller kernel sizes.
If the object of interest in the image is large/small, use ...
As you said, there are no hard rules for this.
But you can get inspiration from VGG16, for example.
It doubles the number of filters from one convolutional block to the next.
For the kernel size, I usually stick to 3x3 or 5x5.
But you can also take a look at Inception by Google.
They use varying kernel sizes in parallel and then concatenate the results. Very interesting.
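Here is an illustrative Keras sketch of both ideas; the filter counts and input size are placeholders rather than the exact VGG16 or Inception configurations.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(224, 224, 3))

# VGG-style pattern: the number of filters doubles from block to block
# (64 -> 128 -> 256), each block followed by pooling.
x = inputs
for filters in (64, 128, 256):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)

# Inception-style block: apply several kernel sizes in parallel and
# concatenate the resulting feature maps along the channel axis.
branch1 = layers.Conv2D(64, 1, padding="same", activation="relu")(x)
branch3 = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
branch5 = layers.Conv2D(64, 5, padding="same", activation="relu")(x)
x = layers.Concatenate()([branch1, branch3, branch5])

model = tf.keras.Model(inputs, x)
model.summary()
```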
As far as I know, there is no fixed rule for the depth of the convolutional layers. Just a few suggestions:
In CS231n they mention that using 3x3 or 5x5 filters with a stride of 1 or 2 is a widely used practice.
How many of them: it depends on the dataset. Also, consider using fine-tuning if the data is suitable (see the sketch after this list).
How will the dataset affect the choice? A matter of experiment.
What are the alternatives? Have a look at the Inception and ResNet papers for approaches that are close to the state of the art.
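As a rough illustration of the fine-tuning suggestion, the sketch below loads an ImageNet-pretrained backbone in Keras and attaches a new head; the 10-class output is a placeholder for whatever the actual task is.

```python
import tensorflow as tf

# Load a pretrained backbone without its classification head.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the convolutional layers at first

# Attach a small head for the new task (10 classes is a placeholder).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# After training the head, the top layers of `base` can be unfrozen and the
# whole model retrained with a lower learning rate.
```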
Hi, I am working on a project based on sentiment analysis for product ratings.
I have a data set of positive words and negative words. When a user comments on the website about a product, the comment should automatically be rated out of 10.
I am confused about which clustering technique or algorithm would solve this problem. Please help.
Thanks in advance.
You are basically asking us what would be best to use as a classifier for your program, while we have no idea how your data is stored.
However, it seems you only have two classes, positive and negative, and you want to classify new data based on a word-level analysis of that data.
I have worked on such a problem before; I used Rocchio's TF-IDF algorithm for the classification. You give it a set of training data (negative and positive words) and it classifies whatever later comes into the system.
It is based on vector classification and the cosine similarity distance measure.
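As a rough sketch of the idea (not the exact implementation from that project), the snippet below builds TF-IDF vectors with scikit-learn, forms one Rocchio centroid per class, and classifies a new comment by cosine similarity; the training phrases are toy placeholders for the real positive/negative word lists. The similarity to the positive centroid could also be rescaled to the 0 to 10 rating mentioned in the question.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy training data: placeholders for the real positive/negative word lists.
docs = ["great product love it", "excellent quality",
        "terrible waste of money", "broke after one day"]
labels = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Rocchio: one centroid per class, the mean of that class's TF-IDF vectors.
centroids = np.vstack([np.asarray(X[labels == c].mean(axis=0)) for c in (0, 1)])

def classify(comment):
    v = vectorizer.transform([comment])
    sims = cosine_similarity(v, centroids).ravel()
    return "positive" if sims[1] > sims[0] else "negative"

print(classify("love the quality"))  # expected: positive
print(classify("total waste"))       # expected: negative
```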
For more information you can read this paper.
You can find an example of how the method works (on very small data) here.
Note: the provided example is a section of a project I worked on.
What should I use for stock market prediction, and why? A comparison would be appreciated, if possible.
Updated: I want to use it to predict stock market movement (up or down) one day ahead. Also, thank you for your answer, it helped.
It's not easy to say which ML algorithm will give you the best performance, especially without knowing which market you want to predict. I recommend implementing different algorithms and training them, because in my experience changing the layers gave different results, and SVMs were sometimes flexible enough as well. Also check how your model performs on both the training data and held-out data in order to get reliable results, and analyze how the model behaves on more predictable sequences (e.g. sine, cosine, polynomials, random walks).
An additional field of investigation is adding technical-analysis features: moving averages, stochastics, candlestick chart patterns, Fibonacci levels.
And finally, in order to make money, don't rely only on a neural network or an SVM; use them in conjunction with a trading strategy. For example, you can take a trading strategy with 30% performance and use ML to raise that performance to 60%.
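For example, here is a minimal Python sketch of that kind of setup: moving-average features feeding an SVM that predicts next-day direction. The price series is a synthetic random walk (one of the sanity-check sequences mentioned above), so the expected test accuracy is around 50%; real market data and features would replace it.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic random-walk "prices"; replace with real daily closing prices.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Features for each day: price relative to its 5-day and 20-day moving averages.
ma5 = moving_average(prices, 5)[15:]   # aligned with the 20-day series
ma20 = moving_average(prices, 20)
p = prices[19:]
features = np.column_stack([p / ma5, p / ma20])[:-1]

# Target: 1 if the next day's close is higher, 0 otherwise.
target = (np.diff(p) > 0).astype(int)

# Keep the chronological order when splitting (no shuffling of time series).
X_train, X_test, y_train, y_test = train_test_split(features, target, shuffle=False)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # ~0.5 on a random walk
```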
I have a project in which I need to build a neural network for face recognition.
The inputs of the network should be features of the face that needs to be recognized.
I searched a lot and found the SURF detector from Matlab's Computer Vision Toolbox, which should help me extract the features of a face. But the SURF detector extracts keypoints of the face and, for each of them, a vector with 64 or 128 values. The problem is that the number of keypoints varies, and I need it to be the same for every face in order to feed the inputs of the neural network.
So I thought of extracting only some features that can be represented as a single number, like the proportions of the nose, mouth, and eyes relative to the face, or the distance between the eyes, etc.
How can I get these features, and will they be good enough to serve as inputs to a neural network that needs to recognize faces? The output layer of the network will have the same number of neurons as there are people in the database, and in the training phase I will feed the network with face features extracted from a photo; if it is a photo of, say, the third of five people in the database, the output layer should look like [0,0,1,0,0].
Is this a good approach, and can you give me some code that extracts these face features from a face in Matlab?
The proportions of the nose, mouth, and eyes relative to the face, and the distance between the eyes, will give you very bad results. Those measures are not accurate or distinctive enough.
If you're looking for features for face recognition, you should consider LBP:
http://www.scholarpedia.org/article/Local_Binary_Patterns#Face_description_using_LBP
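For illustration, here is a small Python sketch of the usual LBP face descriptor: uniform LBP codes are histogrammed per grid cell and the histograms are concatenated, which gives a fixed-length vector regardless of the face, exactly what the neural network input requires. The grid size and parameters are placeholder choices; the Scholarpedia article above describes the method in detail.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_face_descriptor(gray_face, grid=(7, 7), points=8, radius=1):
    """Fixed-length LBP descriptor: one LBP histogram per grid cell, concatenated."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # number of distinct "uniform" patterns
    h, w = lbp.shape
    cells_y, cells_x = grid
    hists = []
    for i in range(cells_y):
        for j in range(cells_x):
            cell = lbp[i * h // cells_y:(i + 1) * h // cells_y,
                       j * w // cells_x:(j + 1) * w // cells_x]
            counts, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            hists.append(counts / counts.sum())
    return np.concatenate(hists)  # length is always grid[0] * grid[1] * n_bins

# Placeholder image; a real cropped grayscale face would go here.
face = (np.random.rand(70, 70) * 255).astype(np.uint8)
print(lbp_face_descriptor(face).shape)  # (490,) = 7 * 7 * 10
```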
I have a CSV data set with one movie's details per line.
These are: name, budget, revenue, popularity, runtime, rating, votes, date released.
I'm wondering how to split the data set into training, validation and testing sets?
Then of course, how to get some results?
It would be nice to get a brief step-by-step intro on where/how I should begin.
You should use the nntool. In your case I guess curve fitting is appropriate, so use the nftool.
Define your input and output in nftool; you can then randomly divide your data into training, validation, and testing sets from within the tool. In the nftool GUI you can choose how to divide your data (80-10-10 or any other split). Then you follow the interface and set the specifics of the network (e.g. the number of hidden neurons) and train it. After training you can plot the training performance, and depending on that performance you can retrain or change the number of hidden neurons, the percentage of training data, and so on.
You can also check this :
http://www.mathworks.com/help/toolbox/nnet/gs/f9-35958.html
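The nftool GUI handles the split for you, but as a language-agnostic illustration of the same 80-10-10 idea, here is a short Python/pandas sketch; the file name and column names are placeholders taken from the fields listed in the question.

```python
import pandas as pd

# "movies.csv" is a placeholder name for the data set described in the question.
df = pd.read_csv("movies.csv")

# Shuffle the rows, then take 80 % for training, 10 % for validation, 10 % for testing.
df = df.sample(frac=1, random_state=42).reset_index(drop=True)
n = len(df)
train = df[:int(0.8 * n)]
val = df[int(0.8 * n):int(0.9 * n)]
test = df[int(0.9 * n):]

# Example feature/target choice: predict revenue from the numeric columns.
features = ["budget", "popularity", "runtime", "votes"]
X_train, y_train = train[features], train["revenue"]
print(len(train), len(val), len(test))
```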