I need to know where the barcode is positioned on the screen. I need it for UPC barcodes or, more generally, for 1D barcodes.
So far the ZXResult resultPoints array seems to provide only a few (clue) points on the same scanning line, not the coordinates of the barcode rectangle.
Thanks
That's right. It does not operate by detecting the whole rectangle, since that would be unnecessary and slow. It scans line by line. When a barcode is detected, it can tell you two points on that line: the middle of the guard pattern at the start and at the end.
If you want to find the whole rectangle, you need to do it yourself, though these points are a clue about where to search. Look at MonochromeRectangleDetector or WhiteRectangleDetector in the project, which can find a barcode-like rectangle.
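If you don't want to pull in those detectors, here is a rough do-it-yourself sketch (TypeScript-flavored pseudocode, not ZXing's actual API; the isBlack sampler and the transition threshold are assumptions for illustration): start from the two result points on the scan line and walk up and down row by row, keeping rows that still show a barcode-like number of black/white transitions between those x coordinates.

```typescript
// Sketch: estimate a barcode's bounding box from the two result points.
// isBlack(x, y) samples your binarized image; minTransitions is a guess.
interface Point { x: number; y: number; }
interface Rect { top: number; bottom: number; left: number; right: number; }

function estimateBarcodeRect(
  start: Point,
  end: Point,
  imageHeight: number,
  isBlack: (x: number, y: number) => boolean,
  minTransitions = 20
): Rect {
  const left = Math.floor(Math.min(start.x, end.x));
  const right = Math.ceil(Math.max(start.x, end.x));
  const centerY = Math.round((start.y + end.y) / 2);

  // Count black/white transitions along one row between the two guard points.
  const transitions = (y: number): number => {
    let count = 0;
    let prev = isBlack(left, y);
    for (let x = left + 1; x <= right; x++) {
      const cur = isBlack(x, y);
      if (cur !== prev) count++;
      prev = cur;
    }
    return count;
  };

  // Walk up and down from the scan line while rows still look like bars.
  let top = centerY;
  while (top - 1 >= 0 && transitions(top - 1) >= minTransitions) top--;
  let bottom = centerY;
  while (bottom + 1 < imageHeight && transitions(bottom + 1) >= minTransitions) bottom++;

  return { top, bottom, left, right };
}
```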
Seems like a simple question, but I have been tearing my hair out for hours now.
I have a series of files, i.e.:
kml_image_L1_0_0.jpg
kml_image_L2_0_0.jpg
kml_image_L2_0_1.jpg
kml_image_L2_1_0.jpg
kml_image_L2_1_1.jpg
etc. However, just plotting them on the Leaflet map surface understandably puts the images at 0,0 on the earth's surface, and the zoom level 0 inferred from the files should really be about 15 or so.
So I want to specify the latitude and longitude where the images should originate, and what zoom level they should start at. I have tried bounds (which doesn't display anything) and I have tried playing with offsetting the zoom level.
I need this because a user needs to click on an offline map to specify where they are and I need the GPS coordinates.
I also have a KML file but it seems to be of more help for plotting vector data on the map.
Any help is much appreciated, cheers.
If I understand correctly, the "kml_image_Lz_x_y.jpg" images that you have are actually tiles, with zoom, horizontal and vertical indices in their file name?
And your issue is that they use (z,x,y) numbers as if they started from the top-most level (zoom 0, single tile for entire world), but in fact they are just a small portion of the pyramid of tiles?
And you cannot use them as is because you still want to get actual geographic coordinates (latitude, longitude), which would be totally wrong if you used the tiles as if they were showing the entire world?
In that case, you have several options as workarounds:
The simplest and most reliable would probably be to write a small script to rename all your tiles to their true (z,x,y) numbers.
Another option would be to modify the (z,x,y) numbers before they are written in the tile src attribute, and apply the appropriate offset (constant for z, scaled by z for x and y). That should probably happen in L.TileLayer.getTileUrl() method.
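Here is a minimal sketch of that second option, assuming Leaflet 1.x (where getTileUrl receives the tile coords). The zOffset, xOrigin and yOrigin values are placeholders you would derive from your own tile set, and the L-number in the file name is assumed to start at 1 for the base zoom:

```typescript
import * as L from "leaflet";

// Sketch: remap the (z, x, y) a tile layer asks for onto the (L, x, y) numbers
// baked into the file names. All three offsets below are assumptions;
// derive the real values from where your tile set actually sits in the world.
const zOffset = 15;     // map zoom at which the files' "L1" level applies (assumption)
const xOrigin = 17000;  // tile column of the region's top-left corner at zOffset (placeholder)
const yOrigin = 11000;  // tile row of the region's top-left corner at zOffset (placeholder)

const OffsetTileLayer = L.TileLayer.extend({
  getTileUrl: function (coords: L.Coords): string {
    // Difference between the requested zoom and the base zoom of the tile set.
    const levelsUp = coords.z - zOffset;
    const scale = Math.pow(2, levelsUp);
    // Constant offset for z, offset scaled by 2^levelsUp for x and y.
    const fileX = coords.x - xOrigin * scale;
    const fileY = coords.y - yOrigin * scale;
    return `tiles/kml_image_L${levelsUp + 1}_${fileX}_${fileY}.jpg`;
  },
});

// Usage sketch: new OffsetTileLayer("", { minZoom: 15, maxZoom: 16 }).addTo(map);
```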
Good luck! :-)
My desired output is a lot of dots moving around to visualize some words.
The effect is similar to this video http://www.youtube.com/watch?v=Le13by2WM70 .
I think this problem could be split into two sub-problems.
The first is how to extract the path from a vector font.
The second is how to move dots to visualize that polygon.
There are some tools that could solve the first part, but I have no idea about the second part.
Has anyone done this?
You could probably do pretty well by just sampling points on a regular grid, with a little jitter added in to avoid looking too computery. All you need to do is check if you are "inside" or "outside" of the path. For inside, place a fish (or dot); for outside, no fish.
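Here's a minimal sketch of that, assuming you already have the glyph outline as a single polygon (no holes) from your font-extraction tool; insidePolygon is a standard even-odd ray-casting test, and the spacing/jitter values are arbitrary:

```typescript
interface Point { x: number; y: number; }

// Standard even-odd (ray casting) point-in-polygon test.
function insidePolygon(p: Point, poly: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    const crosses =
      (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Sample a jittered grid over the glyph's bounding box and keep only the
// points that fall inside the outline; each kept point becomes a dot/fish.
function dotsForGlyph(outline: Point[], spacing = 8, jitter = 2): Point[] {
  const xs = outline.map(p => p.x), ys = outline.map(p => p.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  const dots: Point[] = [];
  for (let y = minY; y <= maxY; y += spacing) {
    for (let x = minX; x <= maxX; x += spacing) {
      const p = {
        x: x + (Math.random() - 0.5) * jitter,
        y: y + (Math.random() - 0.5) * jitter,
      };
      if (insidePolygon(p, outline)) dots.push(p);
    }
  }
  return dots;
}
```

Letters with counters (like "A" or "O") would need the even-odd test run across all of the glyph's contours, not just the outer one.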
I want to check that a drawn shape matches a letter from the alphabet. It's a kids' app for learning.
When someone draws a shape, how can I detect whether it's the correct letter?
I don't have any sample code for you but this is how I'd do this, I think.
You need to define or get a bezier path describing the shape of each letter - this would be an outline of a solid letter, not just a line drawing the letter shape. There may be a way to get this from the API by obtaining a bezier path from a glyph, or you may have to design them yourself.
You then need to scale the bezier path so it is roughly the same size as the drawing on the screen.
Then, check how many of the points in the drawn path fall within your standard font glyph. If it's over a certain threshold, you can count that as a successful draw.
This is assuming you've asked the user to draw an A and you are checking against that one path. If you're trying to find out what they've drawn without anything to go on, you need a handwriting recognition library; try searching for one of those.
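A rough sketch of that scale-and-count check, with the containment test left as a callback (on iOS you would back it with UIBezierPath's containsPoint:); the 0.8 threshold is just an assumption to tune:

```typescript
interface Point { x: number; y: number; }

// Sketch: scale the user's drawing into the glyph's bounding box, then count
// how many drawn points land inside the glyph. containsPoint stands in for
// whatever containment test you have available.
function matchesGlyph(
  drawnPoints: Point[],
  glyphBounds: { x: number; y: number; width: number; height: number },
  containsPoint: (p: Point) => boolean,
  threshold = 0.8 // fraction of points that must fall inside (assumption)
): boolean {
  if (drawnPoints.length === 0) return false;

  // Bounding box of the drawing.
  const xs = drawnPoints.map(p => p.x), ys = drawnPoints.map(p => p.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  const w = Math.max(maxX - minX, 1), h = Math.max(maxY - minY, 1);

  // Map each drawn point into the glyph's coordinate space and count hits.
  let hits = 0;
  for (const p of drawnPoints) {
    const mapped = {
      x: glyphBounds.x + ((p.x - minX) / w) * glyphBounds.width,
      y: glyphBounds.y + ((p.y - minY) / h) * glyphBounds.height,
    };
    if (containsPoint(mapped)) hits++;
  }
  return hits / drawnPoints.length >= threshold;
}
```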
Maybe I'm asking this too soon in my research, but I'd better know if this is possible sooner than later.
Imagine I have the following square printed on a paper on top of a table:
The table is brown, so it does not match any of the colors in the square. Is there a way for me, from a common iPhone camera (non-stereo view), to figure out the distance and angle from which I'm looking at the square on the table?
In the end what I'm looking for is being able to draw a 3D square on top of this one using the camera image, but I'm not sure if I am going to be able to figure out the distance and position of the object in space using only a 2D image. Any hints are well appreciated.
Short answer: http://weblog.bocoup.com/javascript-augmented-reality
Big answer:
First posterize, then vectorize. With the vectors in hand, you may need to do some math tricks to work out, based on the vectors' positions, the perspective and then the camera position.
Maybe these will help:
www.pixastic.com/lib/docs/actions/posterize/
github.com/selead/cl-vectorizer
vectormagic.com/home
autotrace.sourceforge.net
www.scipy.org/PyLab
raphaeljs.com/
technabob.com/blog/2007/12/29/video-games-get-vectorized/
superuser.com/questions/88415/is-there-an-open-source-alternative-to-vector-magic
Oughta be possible. Scan the image for the red/blue/yellow pattern, then do edge detection to figure out how warped the squares are (they'll be parallelograms in anything but straight-on view). Distance would depend on the camera's zoom setting and scan resolution. But basically you'd count how many pixels are visible in each of the squares, run that past the camera's specs and you should be able to determine a rough distance.
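For the distance part, a pinhole-camera approximation works as a first cut; the real side length of the printed square and the focal length in pixels are assumptions here, things you would have to measure or calibrate yourself:

```typescript
// Pinhole-camera approximation: distance ≈ realSize * focalLengthPx / apparentSizePx.
// realSizeMeters and focalLengthPx must be measured/calibrated for your setup.
function estimateDistanceMeters(
  apparentSizePx: number,   // measured side length of the square in the image
  realSizeMeters: number,   // known printed side length of the square
  focalLengthPx: number     // camera focal length expressed in pixels
): number {
  return (realSizeMeters * focalLengthPx) / apparentSizePx;
}

// Example: a 10 cm square appearing 200 px wide through a ~1500 px focal length
// camera is roughly 0.75 m away:
// estimateDistanceMeters(200, 0.10, 1500) === 0.75
```

The viewing angle then comes from how the detected square is warped into a parallelogram, as described above.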
I have a NSArray of points that make up a path. I can detect when it self-intersects. When this happens, I try to fill the path.
First I used CoreGraphics; now I'm using OpenGL to draw a triangle array. It doesn't work well, as you can see in the image.
How do I fill only the circular area while leaving the "tail" alone? I was thinking of a reverse flood fill but don't think CG has any API functions for this...
Maybe instead of actually drawing the path you can just approximate the diameter of the path and draw a circle with your approximation.
Here is some code to detect a circle gesture on the iPhone:
http://www.mobileorchard.com/iphone-circle-gesture-detection/
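A small sketch of that approximation: take the centroid of the recorded points as the circle's center and the mean distance to it as the radius (drawing the circle itself is left to CoreGraphics/OpenGL):

```typescript
interface Point { x: number; y: number; }

// Approximate the drawn loop with a circle: centroid as center,
// mean distance from the centroid as radius.
function fitCircle(points: Point[]): { center: Point; radius: number } {
  const n = points.length;
  if (n === 0) return { center: { x: 0, y: 0 }, radius: 0 };
  const center = {
    x: points.reduce((s, p) => s + p.x, 0) / n,
    y: points.reduce((s, p) => s + p.y, 0) / n,
  };
  const radius =
    points.reduce((s, p) => s + Math.hypot(p.x - center.x, p.y - center.y), 0) / n;
  return { center, radius };
}
```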
Record all of the points in a doubly-linked list. When it comes time to fill, walk the list from the start and find the point that's closest to the end. Then, lineto that point, then lineto each point in reverse order, stopping with the second point in the list. The fill will implicitly close the path, which will jump from where you left off (the second point) back to the start (first) point.
This is just off the top of my head; you can play with a couple of variations on this to see what works best. You might record the closest previous node in each node, but this could get expensive for many nodes.
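A simpler variation on the same idea, sketched with a plain array instead of a doubly-linked list: walk the stroke from the start, find the point closest to the stroke's end, and fill only the sub-path from that point onward, so the "tail" before the loop never becomes part of the filled shape.

```typescript
interface Point { x: number; y: number; }

// Walk the recorded stroke from the start and find the point closest to the
// final point; everything from there to the end approximates the closed loop.
function loopPolygon(stroke: Point[]): Point[] {
  if (stroke.length < 3) return stroke;
  const end = stroke[stroke.length - 1];

  let bestIndex = 0;
  let bestDist = Number.POSITIVE_INFINITY;
  // Skip the last two points so the closest point isn't trivially the endpoint itself.
  for (let i = 0; i < stroke.length - 2; i++) {
    const dx = stroke[i].x - end.x;
    const dy = stroke[i].y - end.y;
    const d = dx * dx + dy * dy;
    if (d < bestDist) {
      bestDist = d;
      bestIndex = i;
    }
  }

  // The loop to fill runs from the closest point to the end; filling this
  // polygon (the fill implicitly closes it) leaves the leading tail alone.
  return stroke.slice(bestIndex);
}
```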