Maintain position in any rotation - iPhone

I have done some drawing on one layer, and now I want to draw the same thing on another layer. So I have stored all the points of the drawing that the user made on the first layer, and I convert each stored point into the other layer's coordinates using the convertPoint:toLayer: method. This works, but it creates a problem with orientation: if I drew in portrait, it only works in portrait; in landscape the positions change. Please suggest a way around this.
Thanks

Normalise your stored points, such that the x, y positions are relative to a surface of size 0..1, 0..1 (divide the x, y by the width and height of the current surface). Then, whenever you want to change the size of the underlying surface, multiply each point by this new surface's width and height. All points will now appear in the same relative positions regardless of the surface dimensions.
Note the above will scale (going from portrait to landscape, the Y will be compressed and the X expanded). If you don't want to do this, you will need to take physical dimensions of the surface on-screen into account too. That is, normalise your points to some physical dimension instead.
Note: I have no idea what system, language, package, library, etc. you are using as you don't state in your question!
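If you happen to be on iOS (the question's title mentions iPhone), a minimal sketch of the idea in Swift might look like the following; the function names are mine, not from any framework:
import UIKit

// Store points normalised to 0..1 relative to the current bounds.
func normalise(_ p: CGPoint, in bounds: CGRect) -> CGPoint {
    return CGPoint(x: p.x / bounds.width, y: p.y / bounds.height)
}

// Map a normalised point back into whatever bounds apply after rotation.
func denormalise(_ p: CGPoint, in bounds: CGRect) -> CGPoint {
    return CGPoint(x: p.x * bounds.width, y: p.y * bounds.height)
}
Store normalise(point, in: layer.bounds) at draw time, then redraw with denormalise(_:in:) against the new bounds after the orientation change.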

Related

Find actual height of image from screen height with camera angle distortion

If my phone camera is at a known height on my phone and it takes a picture from a known distance, how can I find out the actual height of an object at the bottom of the picture, considering that the camera is taking the photo from a top angle?
This question talks of a similar problem, but the answers haven't taken camera angle and height into account.
Here's a diagram of the setup -
h is the actual height of the yellow box in front of the blue screen.
This is the image captured by the camera -
How can I find h, given h' on the image? Assume the focal length of the camera is known.
Assuming you know the calibration matrix K, here is a solution that I find simpler than calculating angles. Choose the points p1=(x,y) and p2=(r,s) as indicated in the figure above, treated as homogeneous 3-vectors (x,y,1) and (r,s,1) for the multiplications below. Since you say that you know the distance from the camera to the object, you know the depth d of these points in camera coordinates, and
Q1=inverse(K)*p1*d
Q2=inverse(K)*p2*d
give you the corresponding points on the cube in camera coordinates. Now the height you seek is simply the distance between them,
norm(Q1-Q2)
Hope that helps.
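For concreteness, here is a small sketch of the computation above in Swift using simd; the focal length, principal point, depth, and image points are made-up placeholders:
import simd

let f: Float = 1000      // focal length in pixels (placeholder)
let a: Float = 320       // principal point x (placeholder)
let b: Float = 240       // principal point y (placeholder)
let d: Float = 2000      // known depth of the object (placeholder)

// K in column-major order; the columns are (f,0,0), (0,f,0), (a,b,1).
let K = simd_float3x3(columns: (SIMD3<Float>(f, 0, 0),
                                SIMD3<Float>(0, f, 0),
                                SIMD3<Float>(a, b, 1)))

// Image points in homogeneous pixel coordinates (x, y, 1).
let p1 = SIMD3<Float>(400, 100, 1)   // top of the object in the image
let p2 = SIMD3<Float>(400, 300, 1)   // bottom of the object in the image

let Q1 = K.inverse * p1 * d          // back-projected point for p1
let Q2 = K.inverse * p2 * d          // back-projected point for p2
let height = simd_distance(Q1, Q2)   // metric height of the object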
Edit: Here's a quick explanation of the calibration matrix. In the pinhole camera model, a 3D point P is projected onto the image plane via the multiplication KP, where K is (assuming square pixels) the matrix
f 0 a
0 f b
0 0 1
where f is the focal length expressed in pixel units, and [-a,-b]^t is the center of the image coordinate system (expressed in pixels). For more info, you can just google "intrinsic camera parameters", or for a quick and dirty explanation look here or here. And maybe my other answer can help?
Note: In your case since you only care about depth, you do not need a and b, you can set them to 0 and just set f.
PS: If you don't know f, you should look into camera calibration algorithms (there are auto-calibrating methods but as far as I know they require many frames and fall into the domain of SLAM/SFM). However, I think that you can find pre-computed intrinsic parameters in Blender for a few known smartphone models, but they are not expressed in the exact manner presented above, and you'll need to convert them. I'd calibrate.
I must be missing something, but I think this is quite easy, based on your assumptions (which include doing some type of image processing to detect the front and bottom edges of your object in order to get h'). Keep in mind that you are also assuming the distance from the top of the object to your camera is the same as from the bottom of the object to your camera. (At greater distances this becomes moot, but at close range the skew can actually be quite significant.)
The standard equation for distance:
dist = (focalDist(mm) * objectRealHeight(mm) * imageHeight(pix) ) / ( objectHeight(pix) * sensorHeight(mm) )
You can re-arrange this equation to solve for objectRealHeight since you know everything else...
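Rearranged for the unknown, with the same symbols and units:
objectRealHeight(mm) = ( dist(mm) * objectHeight(pix) * sensorHeight(mm) ) / ( focalDist(mm) * imageHeight(pix) )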

Rounded corner rectangle coordinate representation

Simple rounded-corner rectangle code in MATLAB can be written as follows.
rectangle('Position',[0,-1.37/2,3.75,1.37],...
'Curvature',[1],...
'LineWidth',1,'LineStyle','-')
daspect([1,1,1])
How can I get arrays of the x and y coordinates of this figure?
To get the axis limits in data units, do:
axisUnits = axis(axesHandle) % axesHandle could be gca
axisUnits will be a four-element array with the syntax [xlowlim xhighlim ylowlim yhighlim]; for 3-D plots it will also contain zlow and zhigh.
But I think that is not what you need to know. Checking the MATLAB documentation for the rectangle properties, we find:
Position four-element vector [x,y,width,height]
Location and size of rectangle. Specifies the location and size of the
rectangle in the data units of the axes. The point defined by x, y
specifies one corner of the rectangle, and width and height define the
size in units along the x- and y-axes respectively.
It is also stated in the rectangle documentation:
rectangle('Position',[x,y,w,h]) draws the rectangle from the point x,y
and having a width of w and a height of h. Specify values in axes data
units.
See if this illustrates what you want. You have an x-axis that goes from −100 to 100 and a y-axis that goes from 5 to 15. Suppose you want to put a rectangle from −30 to −20 in x and from 8 to 10 in y.
rectangle('Position',[-30,8,10,2]);
As explained in the comments, there appears to be no direct way to query the figure created by rectangle and extract x/y coordinates. On the other hand, I can think of two simple strategies to arrive at coordinates that will closely reproduce the curve generated with rectangle:
(1) Save the figure as an image (say, .png) and process the image to extract points corresponding to the curve. Some degree of massaging is necessary, but this is relatively straightforward, if blunt, and I expect the code to be somewhat slow to execute compared to getting data from an axes object.
(2) Write your own code to draw a rectangle with curved edges. While recreating precisely what MATLAB draws may not be so simple, you may be satisfied with your own version.
Which of these approaches you choose boils down to (a) what speed of execution you consider acceptable, (b) how closely you need to replicate what rectangle draws on screen, and (c) whether you have image-processing routines, say for reading an image file.
Edit
If you have the Image Processing Toolbox, you can arrive at a set of points representing the rectangle as follows:
h=rectangle('Position',[0,-1.37/2,3.75,1.37],...
'Curvature',[1],...
'LineWidth',1,'LineStyle','-')
daspect([1,1,1])
axis off
saveas(gca,'test.png');
im = imread('test.png');
im = rgb2gray(im);
figure, imshow(im)
Note that you will still need to apply a threshold to pick the relevant points from the image, and then transform the coordinate system and rearrange the points so that they display properly as a connected set. You'll probably also want to tinker with the resolution of the initial image file, or apply image-processing functions to get a smooth curve.

Derive a rotation/transformation matrix given an image and a rotated image in Java?

I need some advice; please point me in the right direction.
My object detection system reads in this image (see below) and returns coordinates for bounding boxes of detection results (in this case, a hammer):
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/z3IJx-1.png
However, I wish to examine the accuracy of the detection results for the same image by feeding the system rotated versions of the original image and allowing it to detect and return coordinates for detection results, if any.
For example:
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/myJQA-1.jpg
Let's say the coordinates of the yellow point (in the image above) are found, but they are with respect to the rotated frame of reference. How do I transform/rotate these coordinates to find out where they actually lie in the original image, with respect to the original frame of reference?
Someone has pointed out to me that I should use an affine transformation, but I'm not sure how to go about it; honestly, this is the first time I have heard of affine transformations, and I'm still trying to brute-force my learning of them now.
Further research indicates that I need both the original set of coordinates in the original image and the same set of coordinates in the rotated image to come up with a transformation matrix, but I only have the detected set of coordinates in the rotated image.
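One simplification worth noting: if you rotated the image yourself, you already know the transformation, so there is nothing to estimate; you only need to invert it. A rough sketch of the inverse mapping (shown in Swift for brevity, though the question is about Java; the math carries over directly), assuming you rotated by a known angle theta about the image centre and the rotated canvas was enlarged to fit:
import Foundation
import CoreGraphics

// Map a point detected in the rotated image back into the original frame.
// theta is the angle you rotated by, in radians.
func mapToOriginal(_ q: CGPoint, theta: CGFloat,
                   originalSize: CGSize, rotatedSize: CGSize) -> CGPoint {
    let cRot = CGPoint(x: rotatedSize.width / 2, y: rotatedSize.height / 2)
    let cOrig = CGPoint(x: originalSize.width / 2, y: originalSize.height / 2)
    let dx = q.x - cRot.x
    let dy = q.y - cRot.y
    // Apply R(-theta) to undo the rotation about the centre. Note that
    // image coordinates have y pointing down, which flips the apparent
    // rotation direction; adjust the sign of theta if needed.
    let x = dx * cos(theta) + dy * sin(theta)
    let y = -dx * sin(theta) + dy * cos(theta)
    return CGPoint(x: x + cOrig.x, y: y + cOrig.y)
}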

Calculate distance between points drawn at different zoom scales

I'm developing an iPhone app where the user chooses an image and then is allowed to draw dots on it, which may be stored at different zoom scales (the user is allowed to zoom in and out).
I store the location of every point drawn in an array, but when I calculate the distance I realize the result isn't correct if the points were stored at different zoom scales. Would someone kindly help me with this?
You should probably store the points in normalized units.
Assuming that you are using UIScrollView for zooming, divide both x and y by the current scrollView.zoomScale before storing. When calculating the distance, multiply the distance back by scrollView.zoomScale.
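A minimal Swift sketch of that idea, assuming the points come from touches on the zoomed content and scrollView is your UIScrollView:
import UIKit

// Divide by the current zoom scale before storing, so stored points are
// in zoom-independent content units.
func storablePoint(_ p: CGPoint, in scrollView: UIScrollView) -> CGPoint {
    let s = scrollView.zoomScale
    return CGPoint(x: p.x / s, y: p.y / s)
}

// Distance between two stored points, converted back to the current
// on-screen scale.
func onScreenDistance(_ a: CGPoint, _ b: CGPoint,
                      in scrollView: UIScrollView) -> CGFloat {
    let d = hypot(a.x - b.x, a.y - b.y)   // distance in content units
    return d * scrollView.zoomScale
}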

How to move an image using UIAccelerometer?

How can I move an image when I move the iPhone?
This question really needs to be improved. Your best bet would be to look at the UIAccelerometer documentation and the UIImage documentation. If you provide more details of what you want to do, I can provide a more detailed response.
First of all, as zPesk said, read the docs. But, as a first approximation:
Start the accelerometer, setting your class as the sharedAccelerometer delegate. Then implement accelerometer:didAccelerate: in your class and check the X and Y axes (if you want to move the image in 2D).
If the X axis value is negative, move your image to the left; if positive, to the right. If the Y axis value is negative, move it toward the bottom; if positive, toward the top.
If you want the image's movement to accelerate with the accelerometer reading, multiply some pixel constant by the axis measurement and add it to the x and y of the image's frame. The more you tilt the device, the faster the image moves.
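UIAccelerometer has since been deprecated; here is a rough sketch of the same idea in Swift using CoreMotion's CMMotionManager (the modern replacement), with the tuning constant chosen arbitrarily:
import UIKit
import CoreMotion

class TiltImageViewController: UIViewController {
    let motion = CMMotionManager()
    var imageView: UIImageView!        // assumed to be set up elsewhere
    let speed: CGFloat = 10            // pixels per g of tilt; tune to taste

    override func viewDidLoad() {
        super.viewDidLoad()
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 60.0
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }
            // Negative x means tilted left, positive x tilted right.
            // UIKit's y axis points down, hence the minus sign on y.
            self.imageView.center.x += CGFloat(a.x) * self.speed
            self.imageView.center.y -= CGFloat(a.y) * self.speed
        }
    }
}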