Training tesseract to use with iPhone

I am trying to use tesseract-2.04 in my iPhone application, and I just want to detect numbers. What I am doing is first cross-compiling tesseract to generate the library using this post http://robertcarlsen.net/2009/07/15/cross-compiling-for-iphone-dev-884 and then using the demo application at http://robertcarlsen.net/2010/01/12/ocr-for-iphone-source-1080 , but the results are far from realistic.
I am not able to resolve the issue, and I don't know how to train tesseract so that it comes close to practical usage.
Please help.
Thanks,
Madhup

I get quite good results setting
TessBaseAPI::SetVariable("tessedit_char_whitelist", "0123456789");
while gently urging the user to let the numbers fit in a certain box. This makes locating the numbers easier for me, and ensures the user keeps the image steady and at a reasonable distance, leading to a sharper image.
I have thought about altering valid_word() in tesseract-2.04/dict/permute.cpp, but there seems to be no need for that.
The next step will be to hardcode a minimum/maximum char size so recognition time can drop well below the 500 ms it takes now. Then the next step will be to add some code that keeps track of results over time, so that reading 5 90% of the time and 8 only 10% of the time will lead the code to remember the 5 (a sketch of that follows below).
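A minimal sketch of that voting idea, written in Java for brevity (the class and method names are mine, not from the demo app; the same idea ports directly to Objective-C):

import java.util.HashMap;
import java.util.Map;

class RecognitionVote {
    private final Map<String, Integer> counts = new HashMap<>();

    // Record one OCR reading for the current frame.
    void record(String reading) {
        counts.merge(reading, 1, Integer::sum);
    }

    // Return the reading seen most often so far, e.g. "5" even if
    // the occasional frame was misread as "8".
    String best() {
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}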
It all depends on the use case you have. I'm lucky in the sense that I'm allowed to just show a 200x50 box which will contain the number.

Related

Why does my Google Optimize experiment show no clear winner

I ran a very simple test: the footer contact form on the left of the website versus on the right. The results showed "no clear winner", but the data below shows that one variant has 5 conversions vs 1, which I would consider significant (albeit low numbers). It also says there is a 95% probability that this variant will be better.
What am I not understanding about this data? Are the numbers too low in volume to give a reading, is it a bug, or is there something I've missed?
It's probably because your A/B test did not have a lot of traffic in each variant. With so little volume, 5 conversions vs 1 is not really a big difference between the two.
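You can check this yourself with a two-proportion z-test. A minimal sketch (the visitor counts below are hypothetical; substitute your real traffic):

class TwoProportionTest {
    // Two-sided z-test for the difference between two conversion rates.
    static double zScore(int conv1, int n1, int conv2, int n2) {
        double p1 = (double) conv1 / n1;
        double p2 = (double) conv2 / n2;
        double pooled = (double) (conv1 + conv2) / (n1 + n2);
        double se = Math.sqrt(pooled * (1 - pooled) * (1.0 / n1 + 1.0 / n2));
        return (p1 - p2) / se;
    }

    public static void main(String[] args) {
        // Hypothetical: 5 conversions from 500 visitors vs 1 from 500.
        double z = zScore(5, 500, 1, 500);
        System.out.println("z = " + z); // about 1.64; |z| < 1.96, so not significant at 95%
    }
}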

Loss of TransE plateaus at the margin value with BigDL library

I have been trying to use optimizers (SGD, Adagrad) from the BigDL library on TransE with Scala. My current implementation works with mini batches in a sequential way. I followed this example to optimize the embeddings (as Tensors) without creating a layered model; my code is somewhat similar to that example. My current problem is that, with some parameters, my loss plateaus at a certain point (the value of the margin) no matter how many epochs I run. With this, my hit@10 in testing is not that good. Can anyone give any idea why the loss plateaus like this, and whether it causes the bad testing results?
P.S. I have checked my loss calculation and it is fine. The only place I have control over my implementation is the optimizer.
Thanks in advance.
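For reference, a minimal sketch of the standard TransE margin ranking loss (in Java, not the asker's BigDL/Scala code), which shows one common reason for a plateau at exactly the margin: if positive and corrupted triples end up with equal distances — the classic degenerate solution when entity embeddings are not re-normalized after each update — then every example contributes max(0, margin + d - d) = margin, so the loss flatlines at the margin value:

class TransELoss {
    // L2 norm of h + r - t, the TransE triple score.
    static double distance(double[] h, double[] r, double[] t) {
        double s = 0;
        for (int i = 0; i < h.length; i++) {
            double d = h[i] + r[i] - t[i];
            s += d * d;
        }
        return Math.sqrt(s);
    }

    // Hinge loss over one positive triple and one corrupted triple.
    static double marginLoss(double[] h, double[] r, double[] t,
                             double[] hNeg, double[] tNeg, double margin) {
        double pos = distance(h, r, t);       // score of the true triple
        double neg = distance(hNeg, r, tNeg); // score of the corrupted triple
        // If the embeddings collapse so that pos == neg for most pairs,
        // the loss is stuck at exactly the margin.
        return Math.max(0.0, margin + pos - neg);
    }
}

If that is what is happening, it would also explain the poor hit@10: collapsed embeddings rank all candidate entities roughly equally.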

Determine sample size for A/B testing, more than 2 variants

What R function should we use if we want to decide the sample size for such a test:
10 ads, and we want to use a test to decide which ad has the best click-through rate. We are able to count the traffic and the click-throughs.
I don't think the number of variant experiences makes a difference. In each, you compare a metric to the same metric in the control, so each comparison has its own significant sample size: the smaller the difference with the control, the larger the sample size required.
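In R this per-pair calculation is what power.prop.test does. For illustration, here is the underlying normal-approximation formula sketched in Java (the baseline rate and minimum detectable effect below are assumptions; plug in your own):

class SampleSize {
    // n per group to detect a change from p1 to p2 with
    // two-sided alpha = 0.05 and power = 0.80.
    static long perGroup(double p1, double p2) {
        double zAlpha = 1.96; // z for alpha/2 = 0.025
        double zBeta = 0.84;  // z for power = 0.80
        double variance = p1 * (1 - p1) + p2 * (1 - p2);
        double delta = p1 - p2;
        return (long) Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (delta * delta));
    }

    public static void main(String[] args) {
        // Hypothetical: baseline CTR 2%, smallest lift worth detecting is 2.5%.
        System.out.println(perGroup(0.02, 0.025)); // about 13,800 impressions per ad
    }
}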
The point of active debate in recent years is something related: how, at run time, to optimize the traffic split between the experiences so that by the time the winner is called, most of the traffic has gone through your winning experience. Google (Experiments) have devised something they call the multi-armed bandit algorithm for that, but as far as I know it hasn't been published in a peer-reviewed journal, and probably for a reason.
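The simplest member of that family is epsilon-greedy, sketched below (an illustration of the bandit idea, not Google's actual algorithm):

import java.util.Random;

class EpsilonGreedy {
    private final long[] clicks;  // clicks per ad
    private final long[] shows;   // impressions per ad
    private final double epsilon; // fraction of traffic spent exploring
    private final Random rng = new Random();

    EpsilonGreedy(int arms, double epsilon) {
        this.clicks = new long[arms];
        this.shows = new long[arms];
        this.epsilon = epsilon;
    }

    // Pick an ad: usually the best so far, occasionally a random one.
    int choose() {
        if (rng.nextDouble() < epsilon) return rng.nextInt(shows.length);
        int best = 0;
        for (int i = 1; i < shows.length; i++) {
            if (rate(i) > rate(best)) best = i;
        }
        return best;
    }

    private double rate(int arm) {
        return shows[arm] > 0 ? (double) clicks[arm] / shows[arm] : 0.0;
    }

    // Report the outcome of one impression.
    void update(int arm, boolean clicked) {
        shows[arm]++;
        if (clicked) clicks[arm]++;
    }
}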
Good Luck!

Encog Neural Network error not changing

I am creating a neural network that works on colored images, but when I train it, the error never changes after some point, even after a thousand iterations. What causes this, and what should I do? Here is the structure:
BasicNetwork network = new BasicNetwork();
network.addLayer(new BasicLayer(null, true, 16875));                 // input layer
network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 50)); // hidden layer
network.addLayer(new BasicLayer(new ActivationSigmoid(), true, setUniqueNumbers.size())); // output layer
network.getStructure().finalizeStructure();
network.reset();
The input layer is actually 75 * 75 (75x75 pixels) * 3 (red, green, blue), which is how I came up with 16875.
When the error stops changing, you have hit a minimum, probably a local minimum.
This means that it has found what it thinks is the best solution so far, and moving away from that point will cause more error (even if it has to go uphill a bit before it can go downhill to an even lower error rate). This happens early when there isn't a strong pattern/correlation in the data. It can also happen when the structure is out of whack.
That looks like it may well be one of your problems: ~17,000 input neurons is a ton, and 50 hidden neurons doesn't match up well with so many inputs. Instead of feeding in so much data, find a way to extract features to reduce the input size and make it more meaningful to the network.
Examples that may help it run better:
Downsample the image from 75x75 to, say, 10x10, depending on what the picture shows (see the sketch after this list).
Convert from RGB to grayscale.
Learn about feature extraction - if you can extract features from the image, such as lines, edges, etc., the net will have a lot more to learn from. Seriously, feature extraction is your golden ticket to making ANNs work.
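A minimal sketch of the first two suggestions using only the standard library (this shrinks the 16875 inputs down to 100):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class Preprocess {
    // Downsample to 10x10 grayscale and flatten to a double[100]
    // in [0,1], suitable as the network's input vector.
    static double[] toInput(BufferedImage src) {
        BufferedImage small = new BufferedImage(10, 10, BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = small.createGraphics();
        g.drawImage(src, 0, 0, 10, 10, null); // scales and converts in one step
        g.dispose();
        double[] input = new double[100];
        for (int y = 0; y < 10; y++)
            for (int x = 0; x < 10; x++)
                input[y * 10 + x] = small.getRaster().getSample(x, y, 0) / 255.0;
        return input;
    }
}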
Good luck!

Problem with block matching in MATLAB

I have written MATLAB code for two different block matching algorithms, exhaustive search and three-step search, but I am not sure how I can check whether I am getting the correct results. Is there any standard way to check these, or any standard code which I can run and compare my results with? I read somewhere that the JM software can be used, but I didn't find any way to use it.
You can always use the results produced by your algorithms to create the next frame of video and then analyze its quality, either by visually inspecting it (which is rather subjective, and we like to deal in numbers) or by calculating the mean square error between the produced image and the one you're trying to estimate. The mean square error of the exhaustive search should be lower than the one the three-step search gives you.
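A minimal sketch of that check, with frames as 2D grayscale arrays (in MATLAB the one-liner mean((A(:)-B(:)).^2) does the same thing):

class FrameError {
    // Mean square error between two equally-sized grayscale frames.
    static double mse(double[][] a, double[][] b) {
        double sum = 0;
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[i].length; j++) {
                double d = a[i][j] - b[i][j];
                sum += d * d;
                count++;
            }
        }
        return sum / count;
    }
}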
Well, did you try to plot it? I mean, after the block matching you have a new image, right?
One way to check whether your result is plausible is to compare the sums of absolute differences between frames:
A - pre_frame
B - post_frame
C - compensated frame
If sum(abs(B - C)) is lower than sum(abs(B - A)), the compensated frame predicts the next frame better than simply repeating the previous one, which suggests your result is correct.
Next time, try to specify your algorithm, and post your code here so people can help you more.