I use this command to train in Colab:
!./darknet detector train data/obj.data cfg/yolov3_custom.cfg darknet53.conv.74 -dont_show -map
When I click on chart.png, all I see is this; it doesn't show the mAP.
After 1000 iterations it does show the mAP.
In AlexeyAB/darknet, the mAP is only calculated every 4 epochs (with a minimum of 100 iterations; see the code). You have to change the code if you want the mAP updated at a different interval.
I have a matrix called all_output, which holds the outputs of training and testing a neural network on 36 users. This matrix contains 36 cells, and each cell has 504 values (as shown in the attached image).
The content of each cell of all_output is shown in the attached image.
**Update**
I will explain how all_output has been constructed.
After the neural network was trained, I used this code to test it:
%%% Test the Network %%%
for i = 1:36
    outputs = net(Testing_Gen{i});
    all_output{1,i} = outputs;
end
Testing_Gen is a cell array of size 1x36 (as shown in the attached image).
To understand the content of the Testing_Gen matrix: for each user I have 14 test samples (examples), and for each sample 143 features have been extracted and stored in a column.
Each cell in the Testing_Gen matrix contains the user's test samples and the imposters' test samples (as shown in the attached image).
As you can see, each cell is 143 rows x 504 columns; the first 14 columns in each cell are the genuine user's samples, and the remaining columns are the imposters' samples (490 samples = 14 x 35).
For example, I have extracted 14 samples for User1 to be used for testing; therefore, the first cell contains the test samples of User1 (14 columns) as well as the imposters' samples (490 columns = 14 x 35), in order to calculate the FAR and FRR.
I'd like to calculate the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER) for this matrix.
The False Acceptance Rate is the percentage of cases in which the system incorrectly accepts an imposter as the legitimate user.
For example, to calculate the FAR for User1, all the imposters' samples (which are already stored in the all_output matrix) need to be tested against User1, and this procedure is repeated 36 times.
The False Rejection Rate is the percentage of cases in which the authorised user is wrongly rejected by the system.
For example, to calculate the FRR for User1, all of his testing samples (which are already stored in the all_output matrix) need to be tested against User1, and this procedure is repeated for each genuine user (36 times).
The EER can simply be calculated using the equation (FAR+FRR)/2.
While calculating the EER, the results should show the necessity of a balance between FRR and FAR: the values of FAR and FRR should be as close to each other as possible, since my system aims to balance accepting authorised users against rejecting imposters.
This is the code I have written so far to calculate the FRR:
%%% Performance: calculate FAR, FRR, EER
%% FRR
i = 36;                                       % number of users
for n = 1:i
    counter1 = 1;
    for t = 0:0.01:1                          % threshold value
        genuine = all_output{1,n}(1:14);      % first 14 values: the genuine user's samples
        FRRsingletemp = sum(genuine < t) / 14;  % fraction of genuine samples rejected
        FRRsingle(counter1) = FRRsingletemp;
        counter1 = counter1 + 1;
    end
    FRR(n,:) = FRRsingle;
end
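By the same logic, here is a minimal sketch of the analogous FAR loop. This is my assumption, not code from the question: it takes values 15:504 of each cell as the imposter scores (per the layout described above) and counts how many are accepted at each threshold:
% FAR sketch: fraction of imposter samples wrongly accepted at each threshold
for n = 1:36
    counter1 = 1;
    for t = 0:0.01:1
        imposter = all_output{1,n}(15:504);             % remaining 490 values: imposters' samples
        FARsingle(counter1) = sum(imposter >= t) / 490; % imposters accepted at this threshold
        counter1 = counter1 + 1;
    end
    FAR(n,:) = FARsingle;
end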
I am not sure what your question is, but I cannot agree with your claim:
The EER can simply be calculated using the equation (FAR+FRR)/2.
FAR (and FRR) is not a single value; it is a function of the threshold. The EER is the value at which the FAR curve and the FRR curve intersect, as can be seen here.
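As a minimal sketch (assuming FAR and FRR are matrices computed row-by-row over the same threshold grid, as in the question's loops), you can approximate the EER for a single user n at the threshold where the two curves are closest:
t = 0:0.01:1;                               % the same threshold grid used in the loops
[~, idx] = min(abs(FAR(n,:) - FRR(n,:)));   % index where the two curves are closest for user n
EER = (FAR(n,idx) + FRR(n,idx)) / 2;        % at the crossing point both values (nearly) coincide
fprintf('EER for user %d = %.4f at threshold %.2f\n', n, EER, t(idx));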
I created a GoogLeNet model via NVIDIA DIGITS with two classes (called positive and negative).
If I classify an image with DIGITS, it shows me a nice result like positive: 85.56% and negative: 14.44%.
If I pass that model into pycaffe's classify.py with the same image, I get a result like array([[ 0.38978559, -0.06033826]], dtype=float32).
So, how do I read/interpret this result? How do I calculate the confidence levels (not sure if that's the right term) shown by DIGITS from the results given by classify.py?
This issue led me to the solution.
As the log shows, the network produces three outputs, and Classifier#classify only returns the first one. So, for example, by changing predictions = out[self.outputs[0]] to predictions = out[self.outputs[2]], I get the desired values.
Hi, my question is a bit long; please bear with me and read it to the end.
I am working on a project with 30 participants. We have two data sets (the first has 30 rows and 160 columns; the second has the same 30 rows and 200 columns as outputs y, and these outputs are independent). What I want to do is use the first data set to predict the outputs of the second data set. As the first data set was rectangular and high-dimensional, I used factor analysis and now have 19 factors that cover up to 98% of the variance. Now I want to use these 19 factors to predict the outputs of the second data set.
I am using neuralnet with backpropagation, and everything goes well; my results are really close to the outputs.
My questions:
1. As my inputs are the factors (they are between -1 and 1) and my outputs are integers on a scale from 4 to 10000, should I still scale them before running the neural network?
2. I scaled the data (both inputs and outputs) and then predicted with neuralnet. When I checked the MSE it was very high, around 6000, even though my predictions and the real outputs are very close to each other. But if I rescale the predictions and outputs and then check the MSE, it is near zero. Is it unbiased to rescale and then check the MSE?
3. I read that it is better not to scale the outputs from the beginning, but if I scale only the inputs, all my predictions are 1. Is it correct not to scale the outputs?
4. If I want to plot the ROC curve, how can I do it, given that my results are never exactly equal to the real outputs?
Thank you for reading my question.
[Edit #1]: There is a publication on how to produce ROC curves from neural network results:
http://www.lcc.uma.es/~jja/recidiva/048.pdf
1) You can scale your values (using min-max scaling, for example), but only fit the scaling on your training data set. Save the parameters used in the scaling process (for min-max these are the min and max values by which the data is scaled). Only then can you scale your test data set WITH the min and max values you got from the training data set. Remember, with the test data set you are trying to mimic the process of classifying unseen data, and unseen data must be scaled with the scaling parameters from the training data set.
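A minimal min-max sketch in MATLAB (the variable names Xtrain and Xtest are hypothetical; the same idea carries over to any language):
trainMin = min(Xtrain, [], 1);   % scaling parameters come from the TRAINING set only
trainMax = max(Xtrain, [], 1);
XtrainScaled = (Xtrain - trainMin) ./ (trainMax - trainMin);
XtestScaled  = (Xtest  - trainMin) ./ (trainMax - trainMin);  % reuse the training parameters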
2) When talking about errors, mention which data set the error was computed on. You can compute an error function (there are different error functions; one of them is the mean squared error, or MSE) on the training data set, and another on your test data set.
4) Think about this: let's say you train a network with the training data set, and it has only 1 neuron in the output layer. Then you present it with the test data set. Depending on which transfer function (activation function) you use in the output layer, you will get a value for each exemplar. Let's assume you use a sigmoid transfer function, whose max and min values are 1 and 0. That means the predictions will be limited to values between 0 and 1.
Let's also say that your target labels ("truth") contain only the discrete values 0 and 1 (indicating which class each exemplar belongs to).
targetLabels=[0 1 0 0 0 1 0 ];
NNprediction=[0.2 0.8 0.1 0.3 0.4 0.7 0.2];
How do you interpret this?
You can apply a hard-limiting function so that the NNprediction vector contains only the discrete values 0 and 1. Let's say you use a threshold of 0.5:
NNprediction_thresh05 = [0 1 0 0 0 1 0];
vs.
targetLabels =[0 1 0 0 0 1 0];
With this information you can compute your false positives (FP), false negatives (FN), true positives (TP), and true negatives (TN), along with a number of derived metrics such as the True Positive Rate = TP/(TP+FN).
If you had a ROC curve showing the False Positive Rate vs. the True Positive Rate, this would be a single point on the plot. However, if you vary the threshold in the hard-limit function, you can get all the values you need for a complete curve, as in the sketch below.
Does that make sense? Do you see how each step depends on the previous one?
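A minimal MATLAB sketch of that threshold sweep, using the toy vectors from above:
targetLabels = [0 1 0 0 0 1 0];
NNprediction = [0.2 0.8 0.1 0.3 0.4 0.7 0.2];
thresholds = 0:0.05:1;
TPR = zeros(size(thresholds));
FPR = zeros(size(thresholds));
for k = 1:numel(thresholds)
    pred = NNprediction >= thresholds(k);       % hard-limit at this threshold
    TP = sum(pred == 1 & targetLabels == 1);
    FN = sum(pred == 0 & targetLabels == 1);
    FP = sum(pred == 1 & targetLabels == 0);
    TN = sum(pred == 0 & targetLabels == 0);
    TPR(k) = TP / (TP + FN);
    FPR(k) = FP / (FP + TN);
end
plot(FPR, TPR, '-o'); xlabel('False Positive Rate'); ylabel('True Positive Rate');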
I'm trying to train a cascade classifier with the built-in MATLAB function trainCascadeObjectDetector, but it always shows the following error message when I call it:
trainCascadeObjectDetector('MCsDetector.xml',positiveInstances(1:5000,:),'./negativeSubFolder/',...
'FalseAlarmRate',0.01,'NumCascadeStages',5, 'FeatureType', 'LBP');
Automatically setting ObjectTrainingSize to [ 32, 32 ]
Using at most 980 of 1000 positive samples per stage
Using at most 1960 negative samples per stage
Training stage 1 of 5
[....................................................Time to train stage 1: 12 seconds
Error using ocvTrainCascade
Error in generating samples for training. No samples could be generated for training the first cascade stage.
Error in trainCascadeObjectDetector (line 265)
ocvTrainCascade(filenameParams, trainerParams, cascadeParams, boostParams, ...
I have 5000 positive images and 11000 negative images. The MATLAB version is R2014a, running on Ubuntu 12.04.
I am not sure whether I need more training data, because the error message is:
Error in generating samples for training. No samples could be generated for training the first cascade stage.
Could you please have a look at this? Thanks!
First of all, what is the data type of positiveInstances? It should be a 1-D array of structs with two fields: imageFileName and objectBoundingBoxes. positiveInstances(1:5000,:) looks a bit suspicious, because you are indexing it as a 2-D matrix.
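A minimal sketch of the expected layout (field names as described above; the file names and bounding boxes here are hypothetical placeholders):
positiveInstances(1).imageFileName = 'positive_001.png';
positiveInstances(1).objectBoundingBoxes = [35 47 64 64];   % [x y width height]
positiveInstances(2).imageFileName = 'positive_002.png';
positiveInstances(2).objectBoundingBoxes = [10 20 64 64];
% A 1-D struct array is then indexed with a single subscript:
trainCascadeObjectDetector('MCsDetector.xml', positiveInstances(1:5000), ...
    './negativeSubFolder/', 'FalseAlarmRate', 0.01, ...
    'NumCascadeStages', 5, 'FeatureType', 'LBP');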
The second thing to check is the negativeSubFolder. It should contain a lot of images without the objects of interest to be able to generate 1960 negative samples per stage.
For future reference, there is a tutorial in the MATLAB documentation.
Hi, could someone please help? I am using MATLAB to generate a disparity map. I have performed multi-wavelet transforms on two rectified stereo pairs and used a stereo matching algorithm to combine the corresponding basebands from each image, producing four initial disparity maps. However, I am now stuck on how to use a median operator to combine the values of these four disparity maps into one. Could someone please help me?
All four of my images are equal in size.
The previous code is irrelevant since it is in a different file (I saved the output from the previous file, and now I am trying to write this step in another file).
My thoughts were to:
1. Read the value of pixel p from each of the four basebands
2. Sort the values into ascending order
3. Calculate the median value of the pixel
4. Write the pixel value to a new image
5. Move to pixel p+1 and repeat steps 1-4 until the last pixel is reached
Thank you
First, stack the images into an MxNx4 array:
bbstack = cat(3,bb1,bb2,bb3,bb4); % use bb{:} if they are in a single cell array
Then apply the median operator along the third dimension:
medbb = median(bbstack,3);
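One detail worth noting: with an even number of maps, median averages the two middle values, so the result may be non-integer even when the four input maps are integer-valued. If your pipeline needs integer disparities, apply round(medbb) afterwards.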