Training a new model using PASCAL kit - Matlab

need some help on this.
Currently I am working on a computer vision project that requires me to train a new model to detect a certain object.
In this case, I am using the system provided by P. Felzenszwalb, D. McAllester, D. Ramanan and their team: discriminatively trained deformable part models, which is implemented in Matlab.
Project webpage: http://www.cs.uchicago.edu/~pff/latent/.
However, I have no idea how to direct the system to use my dataset (a collection of images and annotations), which is different from the PASCAL datasets, so as to train a new model.
By directing, I mean a line of code that allows me to change the dataset the system reads from when training a model.
E.g.
% directory for caching models, intermediate data, and results
cachedir = ['/var/tmp/rbg/YOURPATH/' VOCyear '/'];
I tried looking at their Readme and documentation guides, but they make no mention of this. Do correct me if I am wrong.
Let me know if I have not made my problem clear enough.
I tried looking at some files such as global.m, but no luck.
Your help is much appreciated and thanks in advance!

You can try reading pascal.m in the DPM package (voc-release5); it contains similar code that works on the VOC2007/2010 datasets.

There are plenty of parts that need to be adapted to achieve this. For example, voc_config has to be adapted in order to read from your files.
The same goes for the pascal_train.m function. Depending on your images and the way you parse them, adapting this function may take quite some time. One way to keep these changes small is sketched after the list below.
Other functions to consider:
imreadx
pascal_test
pascal_eval
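A common shortcut that limits how much of this code you have to touch: arrange your own images and annotations in the same directory layout as PASCAL VOC (JPEGImages, Annotations, ImageSets/Main), so that only the dataset paths in the configuration need to change. A minimal sketch of generating the image-set list that way; the paths are placeholders and the folders are assumed to already exist:

% Sketch: mirror the PASCAL VOC layout for a custom dataset so the
% existing VOC reading code only needs a path change.
imgdir = '/path/to/MYDATA/JPEGImages';        % placeholder path
setdir = '/path/to/MYDATA/ImageSets/Main';    % placeholder path
imgs = dir(fullfile(imgdir, '*.jpg'));
fid = fopen(fullfile(setdir, 'trainval.txt'), 'w');
for i = 1:numel(imgs)
  [~, name] = fileparts(imgs(i).name);
  fprintf(fid, '%s\n', name);   % one image ID per line, as the VOC lists use
end
fclose(fid);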

Flutter - Google ML kit - Text Recognition - Unable to read MRZ correctly

I am working on a Flutter project in which I need to read the MRZ code from passports or ID cards.
I am using Google ML Kit's text recognition package (google_mlkit_text_recognition) to do this job, and I am able to read the MRZ code.
The trouble is, ML Kit seems to gobble up a lot of the '<' characters in the MRZ code, and it also (only sometimes) converts the dates from 'YYMMDD', as they appear in the passport MRZ, to 'DD/MM/YYYY'.
Due to this inconsistency, I am unable to accurately extract the required elements from the MRZ code.
Is there a way to make ML Kit simply read the code and spit it out as it is, in its raw form? Or is there some other way to do this - maybe using another plugin?
In case someone asks for the code, it's boilerplate; see below:
// create a text recognizer and run it on the input image
final textDetector = TextRecognizer();
RecognizedText recognisedText = await textDetector.processImage(inputImage);
It would be helpful if you posted an image and the corresponding model output, pointing out what the model is failing at. In any case, it seems weird that the model does anything more to the output than giving you what it reads block by block; see the sketch after the list below. Having said this, the problem might be that the model is not suited to your specific task, in which case I would proceed as follows:
Switch from your current model to the other available OCR model on ML Kit (e.g. from the V2 beta to V1, or vice versa);
Try pre-trained models from TensorFlow Hub;
Fine-tune a pre-trained model on your specific task;
Train a model from scratch on your specific task;
Look for a cloud-based service that offers a model suited to your task.
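As a sanity check before any of that, you can print exactly what the recognizer returns, block by block and line by line, with no post-processing. A minimal sketch using the google_mlkit_text_recognition API, reusing the names from the question's snippet:

// Sketch: dump ML Kit's raw recognition output, line by line.
final textDetector = TextRecognizer();
final RecognizedText recognisedText = await textDetector.processImage(inputImage);
for (final block in recognisedText.blocks) {
  for (final line in block.lines) {
    print(line.text); // raw recognized line, including any '<' characters it kept
  }
}
await textDetector.close();

If the '<' characters are already missing at this level, the model itself is dropping them, and the options above are the way to go.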
This is everything I can come up with given the limited context of your question. If you are willing to expand on your specific problem, I might be able to give you more precise info.

Matlab converting library to model

I'm working on a script to convert a Simulink library to a plain model, meaning it can be simulated and does not auto-lock, etc.
Is there a way to do this with code, aside from basically copy-pasting every single block into a new model? And if there isn't, what is the most efficient way to do the "copy-paste"?
I was not able to find any clues on how to approach this problem here, on Google, in the official documentation, or on the MathWorks forum, so I'm at a loss on how to proceed.
Thank you in advance!
I don't think it's possible to convert a library to a model directly, but you can programmatically add library blocks to a model like so:
sys = 'testModel';
new_system(sys);     % create an empty model
open_system(sys);    % display it
% copy a block from the standard Simulink library into the new model
add_block('Simulink/Sources/Sine Wave', [sys, '/MySineWave']);
save_system(sys);
close_system(sys);
sim(sys);            % simulate the new model
You could even use the find_system command to list all the blocks in a library and then loop through them, adding each one to a new model with the code above.
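A minimal sketch of that approach, assuming a library named 'myLibrary' (a placeholder) whose top-level blocks should all end up in one new model:

lib = 'myLibrary';                % placeholder library name
load_system(lib);
blocks = find_system(lib, 'SearchDepth', 1);
blocks = setdiff(blocks, {lib});  % drop the library root itself
sys = 'convertedModel';
new_system(sys);
open_system(sys);
for k = 1:numel(blocks)
  name = get_param(blocks{k}, 'Name');
  add_block(blocks{k}, [sys, '/', name]);   % copy the block, keeping its name
end
save_system(sys);

Note that this copies blocks only; any signal lines between them would have to be recreated, for example with add_line.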

Running Memnet model using caffe

I am trying to use the MNIST dataset, from an example provided on GitHub, to run the Memnet model from the command line.
The model is downloaded from here.
I modified its deploy.prototxt accordingly, but I have no idea what to do next. Can someone help me with this?
It keeps telling me something is going wrong, as the pic shows:
The command-line interface expects a solver.prototxt file as its -solver argument (a file which has train_val.prototxt as one of its parameters); you cannot supply train_val.prototxt directly to caffe train.
You can look at the examples subfolder of caffe and find some examples of solver.prototxt. A simple one can be found in examples/mnist; check out lenet_solver.prototxt.
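For orientation, a minimal solver file along those lines might look like the following; the net path and the hyperparameters are placeholders, not values tuned for Memnet:

# solver.prototxt (sketch)
net: "path/to/train_val.prototxt"   # points at your net definition
base_lr: 0.01
lr_policy: "fixed"
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "snapshots/memnet"
solver_mode: GPU

Training would then be launched with something like: caffe train -solver solver.prototxt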

Is it possible to update and use updated .ini and .ned files when Omnet++ simulation is running?

I am trying to run OMNeT++ and Matlab in parallel and want them to communicate. While OMNeT++ is running, I want to update the position of a node, and for that I want to keep editing the .ned and .ini files with Matlab's results. During the simulation I want the result files to be generated from the updated files. I only want to update positions, not add or delete any nodes. Please suggest a way to proceed. In pseudocode, the idea is:
matlab_loop {
    matlab_writes_position_in_ned_file;
    delay(100ms);
}
omnet_loop {
    omnet_loads_ned_and_simulates;
    // the .sca and .vec result files should update
    delay(100ms);
}
Thank you.
NED and ini files are read only during initialization of the model; you can't "read" them again after the simulation has started. On the other hand, you are free to modify your parameters and create/delete modules using OMNeT++'s C++ API. What you want to achieve is basically: set your node position based on some calculations carried out by Matlab code. The proper way to do it:
Generate C code from your Matlab code (see the sketch after this list).
Link that code to your OMNeT++ model.
Create a new mobility model (assuming you are using INET) that uses the generated code.
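For the first step, one option is MATLAB Coder, assuming you have that toolbox. A minimal sketch, where nextPosition is a placeholder for your own position-update function:

% Sketch: generate linkable C code from a Matlab function with MATLAB Coder.
% 'nextPosition' is a placeholder, e.g. function p = nextPosition(t).
cfg = coder.config('lib');                       % generate a static library
codegen -config cfg nextPosition -args {0.0} -report

The generated sources can then be compiled and linked into your OMNeT++ model like any other C code.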
What you are looking for seems to be more of a project than a question/problem that can be solved on a Q&A site like Stack Overflow.
Unfortunately, I have too little understanding of Matlab and V-REP to provide you a satisfactory answer. However, it seems that you will need to play around with APIs at lower levels.
As an example of coupling different simulation tools to form a simulation framework, consider reading this paper and this.
Also note the answer given by @Rudi. He seems to know what he is talking about.

ESS workflow for R project/package development

Can anyone share their experience of a workflow for R project development under ESS? I have tried several times to learn Emacs but have not got the hang of it yet. I can understand ESS as an editor, but is there a project view in ESS? What are efficient ways to set up and view an R project directory, to code, and to test, and how does ESS have an edge in facilitating the whole process?
Do you use ESS only as a good R editor, or do you tend to emulate an R IDE environment within ESS?
Thanks for any advice.
It sounds like you're asking two separate questions.
One question concerns workflow and the other concerns using ESS.
As I use StatET and Eclipse, I'll just share my experience regarding the workflow aspect of your question.
Like Vincent, I also follow something like the workflow set out by Josh Reich here (also see Hadley's useful comments):
Workflow for statistical analysis and report writing
Although it can vary between projects, I tend to have a couple of main R files:
import.R: this imports data files and does any necessary cleaning and manipulation
analyse.R: this generates the output that I need for any final report
main.R: this calls import.R and analyse.R
The aim is for import.R and analyse.R to represent the complete and final workflow for producing the final results of any analyses.
In terms of a directory structure for an analysis project, I'll often also have the following folders
data: for storing any raw data files
meta: for storing meta data, such as variable labels, scoring systems for tests, recoding information, etc.
output: for storing any graphics, tables, or text generated by my analyses that I might want to incorporate into an external program
temp: When exploring the data and brainstorming analyses, I like to type code into files instead of using the console. I tend to label these temp1.R, temp2.R, temp3.R and store them in a temp folder. That way I have a permanent record that's easily accessible. If the analyses become final, they get incorporated into one of the main R files (i.e., import.R or analyse.R)
functions: If I think that a function will be needed across a couple of projects, I place it, one function per file or as a set of related functions in a file, in a folder called functions. This makes it relatively easy to reuse functions across projects when the formal requirements of package development are more than is needed.
library: If I want to create some functions that I think will be project-specific, I'll place them in this folder
save: A folder to store any saved R objects
StatET and Eclipse make it easy to interact with such a file system.
Of course, given all the R gurus that use ESS and Emacs, I'm sure it also handles interactions with the file system well.
I'm not exactly sure what you expect as an answer on this one. I, for one, have stolen (and adapted) a system that was suggested here a little while ago (by Josh Reich):
Create a folder for every project, and split up your work in a bunch of different .R files:
Load.R for getting your raw data into R;
Prep.R for cleaning the data, recoding variables, etc.;
Func.R for coding any custom functions you will need for evaluation; and
Eval.R for running your final stuff.
If that doesn't fit your style, just change it.
Then, you can either have a master file to call each of the parts one after the other (good for reproducibility), or save at different stages and have the individual scripts load the appropriate data (good if some of the prep work is very computationally/time-intensive). A sketch of such a master file follows.
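A minimal master file in that style, assuming the file names suggested above:

# main.R -- run the whole analysis from a clean session (sketch)
source("Load.R")   # get the raw data into R
source("Prep.R")   # clean the data, recode variables
source("Func.R")   # define the custom helper functions
source("Eval.R")   # run the final analyses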
On a different note, the trick posted at the following link really helped me get into ESS. It turns Shift-Enter into a one-stop ESS shop: http://www.kieranhealy.org/blog/archives/2009/10/12/make-shift-enter-do-a-lot-in-ess/
Others have given you some good ideas about how to set up your directory/file structure for a project.
You also asked about "project views," in which case you might want to look into the Emacs Code Browser (ECB).
You can find some screenshots of it in action on its site, here:
http://ecb.sourceforge.net/screenshots/index.html