I am new to the field of deep learning. I need to test a trained model with the darknet framework, using YOLOv4.
During training I split the dataset into an 80% training set and a 20% test set.
I would like to know if there is a way to use the k-fold cross-validation methodology with YOLO.
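In case a concrete starting point helps: darknet trains from plain .txt lists of image paths, so one possible approach (a MATLAB sketch only, with assumed file names such as all_images.txt) is to split a single list into K train/valid pairs and run one training per fold.

    % Sketch: split one list of image paths into K train/valid list pairs.
    % 'all_images.txt' (one image path per line) and the fold file names are
    % assumed names, not darknet conventions.
    K     = 5;
    paths = strsplit(strtrim(fileread('all_images.txt')), '\n');
    paths = paths(randperm(numel(paths)));        % shuffle once
    folds = mod(0:numel(paths)-1, K) + 1;         % fold id assigned to each image
    for k = 1:K
        fid = fopen(sprintf('train_fold%d.txt', k), 'w');
        fprintf(fid, '%s\n', paths{folds ~= k});  % all images outside fold k
        fclose(fid);
        fid = fopen(sprintf('valid_fold%d.txt', k), 'w');
        fprintf(fid, '%s\n', paths{folds == k});  % images held out in fold k
        fclose(fid);
    end

Each fold then gets its own .data file whose train= and valid= entries point at train_foldk.txt and valid_foldk.txt, and the per-fold metrics (e.g. mAP) are averaged at the end.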
Related
I have built an LSTM network in MATLAB. I tune the hyperparameters using Bayesian optimization and rolling cross-validation. When I train the network on the optimized hyperparameters, I get a different R2 (coefficient of determination) every time, because the weights are initialized randomly. I am wondering how I should report my results. Should I fix a random seed during hyperparameter tuning and use this same seed when I train on the full training set? Thank you.
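One common option (a sketch only; trainAndEvaluate is a hypothetical stand-in for your own training/evaluation routine, and XTrain/YTrain/XTest/YTest/bestHyperparams are placeholder names) is to report the spread of R2 over several seeds rather than a single run, fixing the seed with rng so each run is reproducible:

    % Sketch: quantify the variability caused by random weight initialization.
    seeds = 1:10;
    r2    = zeros(size(seeds));
    for i = 1:numel(seeds)
        rng(seeds(i));                 % controls the random initial weights
        r2(i) = trainAndEvaluate(XTrain, YTrain, XTest, YTest, bestHyperparams);
    end
    fprintf('R^2 = %.3f +/- %.3f over %d seeds\n', mean(r2), std(r2), numel(seeds));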
I trained a neural network with a GA and with backpropagation. The GA finds suitable weights for the training data but performs poorly on the test data. If I train the NN with backpropagation, it performs much better on the test data, even though the training error isn't much smaller than for the GA-trained version. Even when I use the weights obtained by the GA as initial weights for backpropagation, the NN performs worse on the test data than when using only backpropagation for training. Can anyone tell me where I could have made a mistake?
I suggest you read something about overfitting. In short, you will be excellent on the training set but poor on the test set (because the NN follows the anomalies and noise in the data). The task of a NN is to generalize, but the GA only minimizes the error on the training set as well as it can (to be fair, that is the GA's task).
There are some methods for dealing with overfitting. I suggest you use a validation set. The first step is to divide your data into three sets: training, testing, and validation. The method is simple: you train your NN with the GA to minimize the error on the training set, but you also run your NN on the validation set (only run it, don't train on it). The error of the network decreases on the training set, and it should also decrease on the validation set. So if the error keeps decreasing on the training set but starts increasing on the validation set, you must stop the learning (but please don't stop in the first iterations).
Hope it will be helpful.
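A rough sketch of the stopping rule described above (gaStep and evaluateError are hypothetical stand-ins for your own GA update and error functions; the patience counter is one way to avoid stopping in the first iterations):

    % Sketch: early stopping based on a validation set that is never trained on.
    bestValErr = inf; patience = 20; sinceBest = 0;
    for iter = 1:maxIter
        weights = gaStep(weights, trainX, trainY);     % one GA generation on the training set
        valErr  = evaluateError(weights, valX, valY);  % validation set: evaluated only
        if valErr < bestValErr
            bestValErr  = valErr;
            bestWeights = weights;                     % keep the best-so-far weights
            sinceBest   = 0;
        else
            sinceBest = sinceBest + 1;
            if sinceBest > patience, break; end        % validation error keeps rising: stop
        end
    end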
I have encountered a similar problem, and the choice of the initial values of the neural network does not seem to affect the final classification accuracy. I used the feedforwardnet() function in MATLAB to compare the two cases. In one, I trained directly, and the program assigned random initial weight and bias values. In the other, I found suitable initial weight and bias values through the GA algorithm, assigned them to the neural network, and then started training. However, the latter approach did not improve the classification accuracy of the neural network.
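For reference, a minimal sketch of how the second case can be wired up in MATLAB (w0_input, w0_hidden, b0_hidden, b0_output stand for the GA result, reshaped to the layer sizes; X and T are your inputs and targets):

    % Sketch: seed a feedforwardnet with externally found weights, then train.
    net = feedforwardnet(10);
    net = configure(net, X, T);    % fix input/output sizes so weights can be assigned
    net.IW{1,1} = w0_input;        % GA result: input-to-hidden weights
    net.LW{2,1} = w0_hidden;       % GA result: hidden-to-output weights
    net.b{1}    = b0_hidden;       % GA result: hidden-layer biases
    net.b{2}    = b0_output;       % GA result: output-layer bias
    net = train(net, X, T);        % backpropagation continues from the GA solution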
I have a train dataset and a test dataset, and I train an SVM with fitcsvm in MATLAB. Then I test the trained model with predict. I'm always using the same datasets, but I keep getting different AUCs for the same model, which makes me wonder where in the process there is a random component. Note that I'm aware that, formally, there is no such thing as an ROC curve or AUC here, and that I'm not asking for the statistical background of the SVM problem; the question is about the MATLAB implementation of the training/test algorithm. I expected to get the same results because the training algorithm is, as far as I know, a deterministic process.
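One thing worth trying (a sketch only; Xtrain, ytrain, Xtest, ytest are placeholder names): fix MATLAB's global random seed before training and compute the AUC from the prediction scores, to check whether the variation disappears.

    % Sketch: pin the RNG, train, and compute the AUC from the scores.
    rng(42);                                           % fix the global random seed
    mdl = fitcsvm(Xtrain, ytrain);
    [~, score] = predict(mdl, Xtest);                  % per-class scores
    [~, ~, ~, auc] = perfcurve(ytest, score(:, 2), 1); % AUC, assuming the positive class is labeled 1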
I have read this line about neural networks:
"Although the perceptron rule finds a successful weight vector when the training examples are linearly separable, it can fail to converge if the examples are not linearly separable."
My data distribution is like this: the features are production of rubber, consumption of rubber, production of synthetic rubber, and exchange rate; all values are scaled.
My question is that the data is not linearly separable, so should I apply an ANN to it or not? Is there a rule that it should be applied only to linearly separable data? I am getting good results using it (0.09% MAPE error). I have also applied SVM regression (the fitrsvm function in MATLAB), so I have to ask: can SVM be used in forecasting/prediction, or is it used only for classification? I haven't read anywhere about using SVM to forecast, and the results for SVM are also not good. What could be the possible reason?
Neural networks are not perceptrons. The perceptron is one of the oldest ideas and is at most a single building block of neural networks. The perceptron is designed for binary, linear classification, and your problem is neither binary classification nor linearly separable. You are looking at regression here, where neural networks are a good fit.
"Can SVM be used in forecasting/prediction, or is it used only for classification? I haven't read anywhere about using SVM to forecast, and the results for SVM are also not good. What could be the possible reason?"
SVM has a regression "clone" called SVR, which can be used for any task an NN (as a regressor) can be used for. There are of course some typical characteristics of both (like SVR being a non-parametric estimator, etc.). For the task at hand, both approaches (as well as any other regressor, and there are dozens of them!) are fine.
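If it helps, a minimal sketch of the regression variant in MATLAB, reusing fitrsvm from the question (X, y, Xnew, ynew are placeholder names for the scaled features and targets):

    % Sketch: SVM regression (SVR) used as a forecaster, plus the MAPE metric.
    mdl  = fitrsvm(X, y, 'KernelFunction', 'gaussian', 'Standardize', true);
    yhat = predict(mdl, Xnew);                      % predictions for new feature rows
    mape = mean(abs((ynew - yhat) ./ ynew)) * 100;  % MAPE, as reported in the question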
I am working on a voice morphing system. I have a source speech signal (divided into test, train, and validation data) and a target speech signal (divided into test, train, and validation data). Now I'm designing a radial basis neural network with 3-fold cross-validation to find the morphed speech wavelet coefficients. I need to initialize the net with the source and target training data and perform 3-fold cross-validation using the training and validation samples.
I think that, as per cross-validation, I need to divide my data set into 3 parts and then use 2 of them for training and the other for testing (repeating the process for all the folds). The problem is that I want to know whether I need to divide my source training data into 3 parts or the target training data.
Thus I need to know how to apply the cross-validation. Can anyone please elaborate the process for me?
You should randomly divide your entire data (tuples of input ["source"] and output ["target"/"morphed"] observations) into 3 sets: training, cross-validation, and test.
The training set will be used to train each neural network you try. The cross-validation set will be used after each network is trained, in order to select the best parameters (number of hidden nodes, etc.). The test set is used at the end to validate the overall performance (i.e. accuracy, generalization, etc.) of the final model.
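If you specifically want the 3-fold version asked about, one way (a sketch; sourceFeatures and targetFeatures are placeholder names, one row per aligned source/target pair) is to partition the pair indices, so each fold keeps the corresponding source and target rows together:

    % Sketch: 3-fold cross-validation over aligned source/target pairs.
    nPairs = size(sourceFeatures, 1);
    cv = cvpartition(nPairs, 'KFold', 3);           % partition the pair indices
    for k = 1:cv.NumTestSets
        trIdx = training(cv, k);                    % training pairs for this fold
        vaIdx = test(cv, k);                        % held-out pairs for this fold
        net  = newrb(sourceFeatures(trIdx, :)', targetFeatures(trIdx, :)');  % RBF net (columns = samples)
        pred = sim(net, sourceFeatures(vaIdx, :)'); % evaluate on the held-out fold
    end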