"Translating" the parameters of a saved FANN network - matlab

I am training a neural network using the FANN library and I find the library pretty impressive. The problem is that when I tried to "export" (manually) the weights and the structure of the network so I could simulate it in Matlab, something went wrong: while FANN reports an MSE of 4-5%, when I simulate the network in Matlab the error is around 80%!
I believe I'm missing something when translating/mapping the attributes of the network from the saved file. Could somebody have a look and help me?
The saved .net file which FANN produces, the .xls files into which I put the weights, and the Matlab scripts (in case you want to test it) are all here: http://users.isc.tuc.gr/~spapagrigoriou/network/ .
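For what it's worth, a common cause of this kind of discrepancy is a mismatch between the activation function (and its steepness) that FANN trained with and the one used in the simulation, or a mix-up in the bias/weight ordering. The following is a minimal Python/NumPy sketch (an illustration, not the asker's Matlab script) of a feed-forward pass the way FANN computes it, assuming the default symmetric sigmoid activation with steepness 0.5; the weight matrices below are made-up placeholders standing in for values read from the .net file:

```python
import numpy as np

def fann_sigmoid_symmetric(x, steepness=0.5):
    # FANN's SIGMOID_SYMMETRIC: output in (-1, 1), equivalent to tanh(steepness * x).
    return np.tanh(steepness * x)

def forward(inputs, weights, steepness=0.5):
    """Feed-forward pass over a fully connected network.

    weights: list of (n_out, n_in + 1) matrices, one per layer, where the
    extra column holds the bias weights (FANN appends a constant bias
    neuron with output 1 to every layer).
    """
    a = np.asarray(inputs, dtype=float)
    for W in weights:
        a = np.append(a, 1.0)                       # bias neuron
        a = fann_sigmoid_symmetric(W @ a, steepness)
    return a

# Hypothetical 2-3-1 network with made-up weights, just to show the shapes.
W_hidden = np.array([[0.5, -0.2, 0.1],
                     [0.3,  0.8, -0.4],
                     [-0.6, 0.1, 0.2]])    # (3 hidden, 2 inputs + bias)
W_out = np.array([[0.7, -0.5, 0.2, 0.1]])  # (1 output, 3 hidden + bias)
out = forward([0.25, -0.75], [W_hidden, W_out])
```

If the network was trained with a different activation (e.g. the plain asymmetric sigmoid, with outputs in (0, 1)) or a non-default steepness, the Matlab simulation must match both exactly; using tanh where FANN used the asymmetric sigmoid can easily produce errors of the magnitude described.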

Related

Why doesn't a dataset created in MATLAB's Simulink offer me the get.Element function?

I'm doing a tutorial to learn MATLAB and Simulink, and it came to a part where you run the .slx model from a script and extract data from it. The issue is that my MATLAB doesn't offer the same options for a dataset as the tutorial does, specifically the get.Element option.

Neural Networks (SRCNN)

I have a problem applying Super-Resolution Convolutional Neural Networks (SRCNN). In the SRCNN code, the 'Readme.txt' file says:
1) Place the "SRCNN" folder into "($Caffe_Dir)/examples/"
2) Open MATLAB, change directory to ($Caffe_Dir)/examples/SRCNN, and run "generate_train.m" and "generate_test.m" to generate the training and test data.
3) To train our SRCNN, run ./build/tools/caffe train --solver examples/SRCNN/SRCNN_solver.prototxt
4) After training, you can extract parameters from the caffe model and save them in the format that can be used in our test package (SRCNN_v1). To do this, you need to install mat-caffe first, then open MATLAB, change directory to ($Caffe_Dir), and run "saveFilters.m". The file "($Caffe_Dir)/examples/SRCNN/x3.mat" will be there for you.
Regarding step 3, I don't know how to run a prototxt file in MATLAB. Is there anyone who can help me run a prototxt file from MATLAB? I would like some example code. Thank you for reading.
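One point worth noting about step 3: the .prototxt is not a script that runs inside MATLAB; it is a configuration file consumed by the caffe binary, so the training command is launched from a shell. A hedged Python sketch of preparing and launching that exact command (a dry run by default, since the caffe binary only exists inside a built Caffe tree):

```python
import shlex
import subprocess

# The exact training command from the SRCNN readme, run from ($Caffe_Dir).
CAFFE_TRAIN_CMD = "./build/tools/caffe train --solver examples/SRCNN/SRCNN_solver.prototxt"

# Split the shell command into an argv list for subprocess.
argv = shlex.split(CAFFE_TRAIN_CMD)

def launch_training(run=False):
    """Launch caffe training; dry run by default, because the caffe
    binary will not exist outside a built Caffe tree."""
    if run:
        subprocess.run(argv, check=True)
    return argv
```

From MATLAB itself the rough equivalent would be system('./build/tools/caffe train --solver examples/SRCNN/SRCNN_solver.prototxt'), executed with the current directory set to ($Caffe_Dir).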

Training a model for Latent-SVM

GOOD MORNING COLLEAGUES!
I am very keen to train a new model from my own data set of faces!
I have found no information about this topic, so I hope my notes can help other people and that I can get some answers as well.
I will try to explain the steps I needed to take to train my own model, and then ask some questions...
I downloaded the Latent code from: http://cs.brown.edu/~pff/latent-release4/
I downloaded the PASCAL VOC 2008 code (devkit) from: http://host.robots.ox.ac.uk/pascal/VOC/voc2008/index.html
I emulated the file/folder structure of PASCAL VOC with my own data set:
Annotations, where I created an .xml for each image defining one object, "face" (each image contains only one face). I didn't define difficulties or poses...
JPEGImages where I have stored all the images
ImageSets, where I defined three files:
test.txt, where I wrote the file names of my positive samples
train.txt, where I wrote the file names of my negative samples
trainval.txt, where I wrote the file names of my positive samples (exactly the same file as test.txt).
I changed some things in globals.m and VOCinit.m (to tell the algorithm the paths and locations of some files...).
Then I ran the training with the command: pascal('face', 1);
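For reference, the annotation and ImageSets files described above can be generated programmatically. Here is a hedged Python sketch (all file names, image sizes, and box coordinates are made up for illustration) that builds a minimal VOC-style annotation with a single "face" object and the three ImageSets lists:

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, box):
    """Build a minimal PASCAL VOC-style annotation with one 'face' object.
    box is (xmin, ymin, xmax, ymax) in pixels."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = "face"
    ET.SubElement(obj, "difficult").text = "0"
    bnd = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(ann, encoding="unicode")

xml_text = make_voc_annotation("img_0001.jpg", 640, 480, (120, 80, 320, 300))

# ImageSets lists: one image id per line (made-up ids for illustration).
positives = ["img_0001", "img_0002"]
negatives = ["bg_0001", "bg_0002"]
test_txt = "\n".join(positives)      # test.txt
train_txt = "\n".join(negatives)     # train.txt
trainval_txt = "\n".join(positives)  # trainval.txt (same content as test.txt here)
```

In a real setup each string would be written to the corresponding file under Annotations/ or ImageSets/Main/ in the devkit's directory layout.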
Following these steps, the training runs to completion without failing and I get my own model, BUT I have some doubts...
Can you see anything weird in my explanation? Could it work?
Must the files test.txt and trainval.txt be identical? Why? What does that mean?
Do I have to choose the number of parts I want in the model INSIDE the function?
Now imagine I have two kinds of samples (frontal faces and side faces) and I want to detect both... How can I address this? I thought I would have to train a model with two components... but how can I tell the training code which samples are frontal and which are side views? With the pose label in the annotations? (I don't think so...) Is there another way to handle this?
Thank you for your time!!
I hope you can solve my doubts :)
I think test.txt should contain samples (images) that will be used to estimate how good the system is after learning the faces. However, trainval.txt is used during the learning stage (training) to fine-tune the parameters of the model; it is an essential part of supervised learning.
Also, it is very hard to have one single SVM to classify faces that are both frontal and sideways. Here is my suggestion:
Train one SVM to detect if the input image is a frontal face or a sideways face. Call this something like SVM-0.
Train another SVM for frontal faces. This SVM will classify all your individuals. Note, however, that an SVM is usually a binary classifier, so make sure you choose one that has a multiclass architecture. Call this SVM-F.
Train a final SVM for sideways faces. Again, use a multiclass SVM. Call it SVM-S.
Present the input image to SVM-0 and if it detects it is a frontal face, present the input again to SVM-F; otherwise, give the input to SVM-S.
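The routing logic above can be sketched as follows. This is an illustrative Python example using scikit-learn with synthetic feature vectors standing in for real face descriptors, not working face-recognition code:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for face feature vectors (e.g. HOG descriptors).
n, d = 200, 16
X = rng.normal(size=(n, d))
pose = (X[:, 0] > 0).astype(int)          # 0 = frontal, 1 = sideways
identity = (X[:, 1] > 0).astype(int) + 1  # person 1 or person 2

# SVM-0: decides frontal vs sideways.
svm0 = SVC(kernel="linear").fit(X, pose)
# SVM-F / SVM-S: (multiclass-capable) identity classifiers per pose.
svmF = SVC(kernel="linear").fit(X[pose == 0], identity[pose == 0])
svmS = SVC(kernel="linear").fit(X[pose == 1], identity[pose == 1])

def classify(x):
    """Route through SVM-0 first, then the pose-specific classifier."""
    x = x.reshape(1, -1)
    if svm0.predict(x)[0] == 0:
        return svmF.predict(x)[0]
    return svmS.predict(x)[0]

pred = classify(X[0])
```

scikit-learn's SVC handles the multiclass case internally (one-vs-one), which is why SVM-F and SVM-S can classify more than two individuals directly.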
In my experience, you should expect very low performance from SVM-S; it is a hard problem to solve. Frontal faces, however, are not a big deal, unless you are working with faces that vary in pose, illumination, and expression (PIE). Face recognition is greatly affected by PIE variations in the images.
I recommend this website; it contains very good information and tutorials for beginners, with or without experience.

Open-cv haar trainer app

I am trying to train a classifier for an object. I have trained it in MATLAB and I am getting good results, but the generated .xml output file can't be used in OpenCV. Can anyone tell me how I can use the MATLAB-generated .xml file in OpenCV, or give me a link to an app where I can directly put in my positive and negative images and it will do all the training and give me an OpenCV .xml file?
Thanks in advance
Here you can find several .exe files that will help you train your OpenCV classifier.
And here you can find a tutorial. Hope it helps!

Multilayer backpropagation with NETLAB toolbox

I am trying to use the NETLAB toolbox to train a 3-layer (input, hidden, output) feed-forward backpropagation neural network. Unfortunately I do not have much freedom in terms of the network architecture I can work with.
I notice NETLAB has the following functions that I need: mlp, mlpbkp, mlpfwd, mlpgrad. I am not sure in what order I need to call them to train the network. The help manual is not of much help either.
If any of you have used the NETLAB toolbox, kindly let me know.
Also, if you know of other free toolboxes I can use in lieu of NETLAB, kindly, let me know.
Thanks!
You can find some basic examples on usage of NETLAB online here, the following is just the header:
A Simple Program
The "Hello world" equivalent in Netlab is a programme that generates some data, trains an MLP, and plots its predictions.
The online demo is a brief version of a longer demo available with the program, and uses functions mlp and mlpfwd.
In the downloads page you'll find that you can download help files, too.
If you get stuck you may (perhaps as a last resort) want to contact the authors.
edit
I understand that pointing to help files might not be what you were looking for. As you rightly point out, there is little documentation (perhaps more importantly no demos that I could find) on performing backpropagation, and definitely not with 3 layers. The available function mlpbkp backpropagates for a 2-layer network.
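To make the call order concrete: in NETLAB you typically create the network with mlp, compute outputs with mlpfwd, and obtain error gradients with mlpgrad (which uses mlpbkp internally), then update the weights yourself or via an optimiser. The following Python/NumPy sketch mirrors that forward/backward cycle for a network with one hidden layer (what NETLAB calls 2-layer); it is an illustration of the algorithm, not NETLAB code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shapes: nin inputs -> nhidden tanh units -> nout linear outputs,
# mirroring NETLAB's mlp(nin, nhidden, nout, 'linear').
nin, nhidden, nout = 2, 8, 1
W1 = rng.normal(scale=0.5, size=(nin, nhidden)); b1 = np.zeros(nhidden)
W2 = rng.normal(scale=0.5, size=(nhidden, nout)); b2 = np.zeros(nout)

X = rng.uniform(-1, 1, size=(100, nin))
t = X[:, :1] * X[:, 1:2]  # toy regression target

# Error before training, for comparison.
y0 = np.tanh(X @ W1 + b1) @ W2 + b2
mse0 = float(np.mean((y0 - t) ** 2))

lr = 0.05
for _ in range(500):
    # Forward pass (the role of mlpfwd).
    z = np.tanh(X @ W1 + b1)
    y = z @ W2 + b2
    # Backward pass (the role of mlpbkp/mlpgrad):
    # gradients of the sum-of-squares error.
    delta_out = y - t
    delta_hid = (delta_out @ W2.T) * (1 - z ** 2)
    # Gradient-descent weight updates (NETLAB would delegate this
    # to an optimiser such as scg or graddesc).
    W2 -= lr * z.T @ delta_out / len(X); b2 -= lr * delta_out.mean(0)
    W1 -= lr * X.T @ delta_hid / len(X); b1 -= lr * delta_hid.mean(0)

mse = float(np.mean((y - t) ** 2))
```

The loop is the part NETLAB hides from you: each iteration is one forward pass plus one backpropagated gradient step, which is exactly the cycle mlpfwd and mlpgrad implement for the 2-layer case.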