I have dimensions in millimeters (mostly rectangles and squares) and I'm trying to draw them to their size.
Something like this: 6.70 x 4.98 x 3.33 mm.
I really won't be using the depth in the object but just threw it in.
New to drawing shapes with my hands ;)
Screens are typically measured in pixels (Android) or points (iOS). Both go back to the old standard of 72 pts/in, though now we have devices with different pixel ratios. To figure out an exact physical size you would need to determine the current device's screen size and its pixel ratio. Both can be done with WidgetsBinding.instance.window... Then you just do the math from there to convert those measurements to mm.
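For what it's worth, the conversion itself is just a couple of multiplications. A rough sketch of the math (plain Python, since it's only arithmetic; it assumes the 72 pts/in figure above, and is not Flutter code):

    MM_PER_INCH = 25.4
    POINTS_PER_INCH = 72.0

    def mm_to_pixels(mm, device_pixel_ratio):
        points = mm / MM_PER_INCH * POINTS_PER_INCH
        return points * device_pixel_ratio  # physical pixels

    print(mm_to_pixels(6.70, 2.0))  # ~38 px for a 6.70 mm edge on a 2x device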
However, this seems like an odd requirement, so you may just be asking how to draw a square of an exact size. You may want to look into the Canvas/Paint API, which can be used in conjunction with a CustomPainter. Another option is a Stack with some Positioned.fromRect or Positioned.fromRelativeRect children, drawing them with that setup.
I am building a model with pymunk and I need to use real dimensions (physical size of model is approximately 1 meter). Is there a way to scale the graphics in pygame_util so that 1 meter corresponds to 800 pixels?
Pymunk itself is unitless, as described here: http://www.pymunk.org/en/latest/overview.html#mass-weight-and-units
Pymunk 6.1 (and later)
With Pymunk 6.1 it's now possible to set a Transform on the SpaceDebugDrawOptions object (or on one of the library-specific implementations like pymunk.pygame_util.DrawOptions), as documented here: http://www.pymunk.org/en/latest/pymunk.html#pymunk.SpaceDebugDrawOptions.transform
With this new feature it should be possible to set a scaling Transform to achieve what you are asking about.
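A minimal sketch of what that could look like (assuming Pymunk 6.1+, where Transform.scaling is the uniform-scaling helper):

    import pygame
    import pymunk
    import pymunk.pygame_util

    pygame.init()
    screen = pygame.display.set_mode((800, 800))
    space = pymunk.Space()

    # draw 1 Pymunk unit (here: 1 meter) as 800 pixels
    draw_options = pymunk.pygame_util.DrawOptions(screen)
    draw_options.transform = pymunk.Transform.scaling(800)

    space.debug_draw(draw_options)  # shapes defined in meters now render at 800 px/m
    pygame.display.flip()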
Pymunk 6.0 (and earlier)
When used with pygame_util, distances are measured in pixels, e.g. a 10x20 box shape (create_box(size=(10,20))) will be drawn as a 10x20 pixel rectangle. This means that the easiest way to achieve what you ask about is to just define that one Pymunk length unit is 0.125 cm, which makes the box shape above 1.25 cm x 2.5 cm.
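In code that just means one conversion helper; a sketch (the helper name is mine, not part of Pymunk):

    import pymunk

    PIXELS_PER_METER = 800  # define: 1 px = 0.125 cm, so 800 px = 1 m

    def m(meters):
        # convert real-world meters to pymunk/pixel units
        return meters * PIXELS_PER_METER

    body = pymunk.Body(mass=1, moment=10)
    shape = pymunk.Poly.create_box(body, size=(m(0.0125), m(0.025)))  # 1.25 cm x 2.5 cm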
An alternative would be to scale the surface once complete. So instead of passing the screen surface to pymunk.pygame_util.DrawOptions(), you use a custom surface that you scale once the space has been drawn, and then blit the result to the screen. I don't think this option is as good as the first one since there might be scaling artifacts, but depending on your exact use case maybe it works.
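A rough sketch of that variant (the surface sizes here are arbitrary):

    import pygame
    import pymunk
    import pymunk.pygame_util

    pygame.init()
    screen = pygame.display.set_mode((800, 800))
    space = pymunk.Space()

    # draw the space on a smaller off-screen surface...
    sim_surface = pygame.Surface((200, 200))
    draw_options = pymunk.pygame_util.DrawOptions(sim_surface)

    sim_surface.fill(pygame.Color("white"))
    space.debug_draw(draw_options)

    # ...then scale the result up (4x here) and blit it to the screen
    scaled = pygame.transform.smoothscale(sim_surface, screen.get_size())
    screen.blit(scaled, (0, 0))
    pygame.display.flip()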
Technically the x, y, width, and height represent a set of dimensions that relate to pixels. I can't have 200.23422 pixels, so why do they use floats instead of ints?
The reason for the floats is that modern CPUs and GPUs are optimized to work with many floating point numbers in parallel. This is true for iOS as well as Mac.
With Quartz you don't address individual pixels; everything you draw is antialiased. When you draw at coordinate (1.0, 1.0), color is actually added to the 2x2 block of pixels around that point, because integer coordinates fall on the boundaries between pixels, not at their centers.
This is why you might get blurry lines if you draw at integer coordinates. On non-Retina displays you have to draw offset by 0.5. Technically you would need to offset by 0.25 to hit exact pixels on Retina displays, though there it does not really matter much because you can't see it any more at that pixel size.
Long story short: you don't address pixels directly; the graphics engine maps between floating point coordinates and pixels for you.
Resolution independence.
You want to keep your mathematical representation of your UI as accurate as practicable, only translating to pixel int values when you actually need to draw to the output device (and even then, not really). That's so that you can apply any number of transformations to your views and still get an accurate result.
Moreover it is possible to render lines, for example, at half-pixel widths and even less with a visible result - the system uses intelligent antialiasing to display a fine line.
It's the same principle that vector drawing has been using for decades (Adobe's PostScript, SVG, etc.). In fact Quartz is based on PDF, which is the modern version of PostScript. NeXT used Display PostScript in its day, and back then it was considered pretty revolutionary.
The dimensions are actually points, which on non-Retina screens have a 1:1 relation to pixels; on Retina screens 1 point = 2 pixels. So on a Retina screen you can actually increment by half a point.
I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or entire image and align everything horizontally before I do my other processing. Ideally this would be done in Matlab / imageJ / ImageMagick. I'm currently trying to work out a method using first Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the image processing toolbox you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
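If you'd rather not depend on the toolbox, the same idea is available in Python via scikit-image; a rough sketch (the filename and the dark-bar-on-light-background threshold are assumptions):

    import numpy as np
    from scipy import ndimage
    from skimage import io, measure

    img = io.imread("slice.png", as_gray=True)      # hypothetical filename
    binary = img < 0.5                              # assumes a dark bar on a light background
    regions = measure.regionprops(measure.label(binary))
    bar = max(regions, key=lambda r: r.area)        # assume the bar is the largest blob
    angle = np.degrees(bar.orientation)             # major-axis angle of the region
    fixed = ndimage.rotate(img, -angle, reshape=False)  # sign/offset depends on the
                                                        # orientation convention; verify
                                                        # on one slice first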
The problem you are solving is known as image registration or image alignment.
-First you need to threshold the image so you end up with a black-and-white image. This will simplify the process.
-Then you need to calculate the mass center of the images and translate them to match each other's centers.
-Then you need to rotate the images to match each other. This can be done using the principal axis measure. The principal axis gives you the two axes that explain most of the variance in the pixel distribution, which will basically give you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars to the same direction.
-After the principal axis transformation you can try rotating the pictures a little bit more in each direction to try and optimise the rotation.
All the way through your translation and rotation you need a measure showing how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough. Otherwise you can use measures like mutual information.
...you can also look at Procrustes analysis; see this link for a MATLAB function: http://www.google.dk/search?q=gpa+image+analysis&oq=gpa+image+analysis&sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8#hl=da&tbo=d&sclient=psy-ab&q=matlab+procrustes+analysis&oq=matlab+proanalysis&gs_l=serp.3.1.0i7i30l4.5399.5883.2.9481.3.3.0.0.0.0.105.253.2j1.3.0...0.0...1c.1.5UpjL3-8aC0&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.1355534169,d.Yms&fp=afcd637d8ae07bde&bpcl=40096503&biw=1600&bih=767
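A small sketch of the mass-center + principal-axis steps above in Python/NumPy (assuming a thresholded black-and-white image as input):

    import numpy as np

    def principal_angle(binary):
        # angle of the principal axis of the foreground pixels, via PCA
        ys, xs = np.nonzero(binary)
        coords = np.column_stack([xs, ys]).astype(float)
        coords -= coords.mean(axis=0)           # translate mass center to the origin
        eigvals, eigvecs = np.linalg.eigh(np.cov(coords.T))
        major = eigvecs[:, np.argmax(eigvals)]  # direction of largest variance
        return np.arctan2(major[1], major[0])   # radians; rotate by the negative of
                                                # this to align the bar horizontally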
You might want to look into the SIFT transform.
You could take, as your reference image, a rectangle that represents a worst-case guess for your bar and determine the rotation matrix from that.
See http://www.vlfeat.org/overview/sift.html
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I have misread your question. That is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
threshold the image (seems easy, your background is always white)
take a long horizontal line as a structuring element and dilate the image with it
rotate the structuring element and keep dilating the image, measuring the size of the dilation each time
the angle that maximizes it is the rotation angle you'll need to fix your image (see the sketch below)
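A sketch of that loop with OpenCV; note I've used a morphological opening rather than a plain dilation, since the amount of foreground that survives an opening with a line element peaks when the line matches the bar's angle (filename, kernel size, and angle step are all placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)     # hypothetical filename
    _, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)  # bar as white FG

    best_angle, best_response = 0.0, -1
    for angle in np.arange(-45.0, 45.5, 0.5):
        kernel = np.zeros((41, 41), np.uint8)
        kernel[20, :] = 1                                   # long horizontal line element
        rot = cv2.getRotationMatrix2D((20, 20), angle, 1.0)
        line = cv2.warpAffine(kernel, rot, (41, 41), flags=cv2.INTER_NEAREST)
        opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, line)
        response = int(opened.sum())                        # foreground that fits the line
        if response > best_response:
            best_angle, best_response = angle, response
    # best_angle is the rotation you need to undo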
There are several approaches to this problem, as suggested by other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform. The Hough transform is good at detecting line orientations. Combining it with morphological processing and an image rotation after detecting the angle, you can create a system that corrects for angular variations. The basic steps would be:
Use morphological operations to make the bar a single-line blob.
Use the Hough transform on this image.
Find the maximum in the transform output and use that to find the orientation angle.
Use the angle to fix the original image.
A full example of this method comes with the Computer Vision System Toolbox. See
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
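If you end up doing it outside MATLAB, the same steps are available in OpenCV; a rough sketch (the filename and thresholds are placeholders, and the angle sign may need flipping for your data):

    import cv2
    import numpy as np

    img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)     # may be None if nothing found
    theta = lines[0][0][1]                    # angle of the strongest line (radians)
    angle = np.degrees(theta) - 90            # deviation from horizontal
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    fixed = cv2.warpAffine(img, M, (w, h), borderValue=255)  # keep background white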
You can try a Givens or Householder transform; I prefer Givens.
It requires an angle, using cos(angle) and sin(angle) to build the Givens rotation matrix.
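For reference, a tiny sketch of the Givens rotation matrix built from that angle:

    import numpy as np

    def givens(theta):
        # 2-D Givens rotation matrix for angle theta (radians)
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s],
                         [s,  c]])

    # rotating a point: new_xy = givens(theta) @ old_xy
    print(givens(np.pi / 4) @ np.array([1.0, 0.0]))  # -> [0.7071..., 0.7071...]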
I am having a problem transferring the position of some objects in a still image (RGB image) into a 2D plan view of the room where the image was taken. I have the coordinates of about 3 objects in the image (I mean x, y coordinates) as well as the distances between them, and I want to transfer the positions of these 3 objects into the plan view.
Any help is much appreciated
You will probably need to clarify your question, but if I'm reading it the right way, it could be as simple as taking the ratio from one object to another.
For example, if your sensor is 640px wide, and that covers a horizontal length of 10 meters, then you know that every 64 pixels represents one meter in the real world.
Bear in mind that this assumes the objects in the real world are in the same plane, orthogonal to the lens vector. If the objects are at different depths, then you have a bigger problem on your hands.
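In code, that ratio idea is a one-liner (using the 640 px / 10 m numbers from above; the helper name is mine):

    SENSOR_WIDTH_PX = 640        # image width in pixels
    REAL_WIDTH_M = 10.0          # real-world span covered by the image, in meters
    PX_PER_M = SENSOR_WIDTH_PX / REAL_WIDTH_M   # 64 px per meter

    def image_to_plan(x_px, y_px):
        # map image coordinates to plan-view coordinates in meters
        return x_px / PX_PER_M, y_px / PX_PER_M

    print(image_to_plan(320, 240))  # -> (5.0, 3.75)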
I have an image which looks like this:
I have a task in which I should circle all the bottles around their openings. I created a simple algorithm and started working on it. My algorithm follows:
Threshold the original image
Do some morphological opening in it
Fill the empty holes
Separate portions of the image using regionprops such that only the area equivalent to the mouths of the bottles is selected.
Find the centroid of each and draw a circle around each bottle.
I followed the algorithm above, but there are some extra portions of the image around which I draw a circle. This is because the area of the remaining noise is almost the same as the area of the mouths of the bottles, so my selection picks it up too. I ended up with a figure like this.
The processing applied to the image looks like this:
And my final image after plotting the circle over the original image is like this:
I think I can deal with the extra circle, which is there because some white portion of the image remained, as shown in figure 2 below. This can be filtered out using regionprops with eccentricity. Is that a good idea, or are there other approaches? And how would I deal with the bottles behind the glass and select them?
Nice example images you provide for your question!
One thing you can use to detect the remaining bottles (if there are any) is the well-defined structure of the bottle placement.
The 4 by 5 grid of bottles should be relatively easy to locate, and when the grid is located you can test whether a bottle is detected at each expected bottle location.
With respect to the extra detected bottle, you can use shape features like
eccentricity,
the first Hu moment
the ratio of the perimeter length squared to the area (which is minimized for a circle); details here
If you are able to detect the grid, it should be easy to locate the extra detection as an outlier (far from any expected bottle location) and discard it accordingly.
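A sketch of that grid check in NumPy (the grid origin, spacing, detected centroids, and tolerance below are all made-up numbers for illustration):

    import numpy as np

    # predict expected mouth centers from the 4 x 5 layout
    x0, y0, dx, dy = 60.0, 50.0, 100.0, 90.0       # grid origin and spacing, in pixels
    expected = np.array([[x0 + i * dx, y0 + j * dy]
                         for j in range(4) for i in range(5)])

    detected = np.array([[62.0, 48.0], [158.0, 52.0]])  # centroids from regionprops etc.

    for ex in expected:
        dists = np.linalg.norm(detected - ex, axis=1)
        if dists.min() > 30:                       # hypothetical tolerance in pixels
            print("no bottle detected near", ex)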
Good luck with your project!
I've used the same approach as midtiby's third suggestion, the ratio between area and perimeter known as the shape factor:

4π · Area / Perimeter²

to detect circles from a contour-traced image (derived from the thresholded image), to great success;
http://www.empix.com/NE%20HELP/functions/glossary/morphometric_param.htm
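A sketch of that test with OpenCV contours (the filename, threshold, and cut-off value are placeholders; the two-value findContours return is OpenCV 4.x):

    import cv2
    import numpy as np

    img = cv2.imread("bottles.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    circles = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        shape_factor = 4 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
        if shape_factor > 0.8:                             # hypothetical cut-off
            circles.append(c)                              # keep the circular contours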
Regarding the 4 unfound bottles, this is rather tricky without some a priori knowledge of what it is you're looking at (as discussed, using the 4 x 5 grid and then looking from the centre of each cell). I did think that, from the list of contours, most would be of the bottle tops (which you can test using the shape factor stuff), but one would be of the large rectangle. If you could find the extremities of the rectangle (from the largest contour in terms of area) and then remove it from the third image, you'd be left with partial circles. If you then contour-traced those partial circles and used a mixture of shape factor / curve detection etc., that may help. And yes, good luck again!