How do I move an image when I move the iPhone?
This question really needs to be improved. Your best bet would be to look at the UIAccelerometer and UIImage documentation. If you provide more details of what you want to do, I can give a more detailed response.
First of all, as zPesk said, read the docs. But here is an approximation.
Start the accelerometer by setting your class as the sharedAccelerometer delegate. Then implement accelerometer:didAccelerate: in your class and check the X and Y axes (if you want to move the image in 2D).
If the X axis is negative, move your image to the left; if positive, to the right. If the Y axis is negative, move it toward the bottom; if positive, toward the top.
If you want the speed of the image to depend on the accelerometer measurement, multiply a pixel constant by the axis measurement and add the result to the X and Y of the image's frame. The more you tilt the device, the faster the image moves.
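A minimal sketch of that update rule, in plain Python standing in for the accelerometer:didAccelerate: callback (the gain K and the clamping bounds are made up for illustration):

K = 20.0  # pixels per unit of acceleration; made-up gain, tune to taste

def move_image(x, y, accel_x, accel_y, width, height):
    # Positive X acceleration moves the image right, negative moves it left.
    x += K * accel_x
    # Positive Y should move the image toward the top; screen Y grows
    # downward, hence the minus sign.
    y -= K * accel_y
    # Clamp so the image stays on screen.
    x = min(max(x, 0.0), width)
    y = min(max(y, 0.0), height)
    return x, y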
How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be to:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners relative to that origin.
4. Relate the coordinates of step 3 to the camera coordinate system.
5. Use the parameters calculated at time X to calculate the image points of the points from step 4.
6. Calculate the distances between the points from step 2 and the points from step 5.
Is that a clever way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I'd know how to do steps 1-2 and step 6. Maybe someone can give a rough implementation for steps 2-5. In particular, I'm unsure how to relate the "chessboard world coordinate system" to the "camera world coordinate system", which I believe I would have to do.
Thanks!
If you have a single camera, you can follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
For step 2, you can use MATLAB's detectCheckerboardPoints function:
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) are the points from the first set of images and imagePoints(:,:,:,2) are the points from the second set. The output contains M [x y] coordinates; each coordinate represents a point where square corners are detected on the checkerboard. The number of points the function returns depends on the value of boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
As you can see in the following image, the points are estimated relative to the first point, which covers your third step.
[The image is from this page at MathWorks.]
You can consider point 1 as the origin of your coordinate system (0,0). The directions of the axes are shown in the image, and you know the distance between each pair of points (in world coordinates), so it is just a matter of depth estimation.
To find a transformation matrix between the points in the world coordinate system and the points in the camera coordinate system, you should collect a set of corresponding points and perform an SVD to estimate the transformation.
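For illustration, a minimal Python/NumPy sketch of that SVD step (the classic Kabsch method; the function name, and the assumption that the correspondence is a rigid rotation plus translation, are mine rather than from the article):

import numpy as np

def rigid_transform(world_pts, cam_pts):
    # Estimate R, t with cam_pts ~= R @ world_pts + t, given N x 3 arrays
    # of corresponding points (Kabsch method via SVD).
    cw = world_pts.mean(axis=0)
    cc = cam_pts.mean(axis=0)
    H = (world_pts - cw).T @ (cam_pts - cc)   # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cw
    return R, t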
But personally, I would re-estimate the parameters of the camera and compare them with the initial parameters from time X. This is easier if you have saved the images that were used to calibrate the camera at time X. By repeating the calibration process with those images, you should get very similar results if the calibration is still valid.
Edit: why do you need the set of images used in the calibration process at time X?
You have a set of images used to do the calibration the first time, right? To recalibrate the camera you need a new set of images, but to check the previous calibration you can reuse the previous ones. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used for evaluating the validity of the calibration, not for recalibrating the camera.
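Since the question mentions OpenCV as a later target, here is a rough Python/OpenCV sketch of the reprojection-error check covering steps 2-6 (the function name and arguments are illustrative; K and dist stand for the intrinsics saved at time X):

import cv2
import numpy as np

def reprojection_error(img, board_size, square_size, K, dist):
    # board_size is (columns, rows) of inner corners; square_size in mm.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        raise RuntimeError("chessboard not detected")
    # World coordinates: the first corner is the origin, Z = 0 on the board.
    obj = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj *= square_size
    # Pose of the board at time Y, using the *old* intrinsics K and dist.
    _, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    reproj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
    # Mean pixel distance between detected and reprojected corners; a large
    # value suggests the calibration from time X is no longer valid.
    return np.linalg.norm(corners - reproj, axis=2).mean()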
I want to use the iPhone's accelerometer to detect motion while driving. I'm a bit confused about what the accelerometer actually measures, especially when driving through a curve.
As you can see in the picture, a car driving through a curve involves two forces: the centripetal force and the velocity. Imagine the iPhone is placed on the dashboard with the +y-axis pointing to the front, the +x-axis to the right, and the +z-axis to the top.
My question now is what acceleration will be measured when the car drives through this curve. Will it measure g-force on the -x-axis, or will the g-force appear on the +y-axis?
Thanks for helping!
UPDATE!
For those interested: as one of the answers suggested, it measures both. The accelerometer is affected by the centrifugal force and by changes in velocity, resulting in an acceleration vector that is a combination of the two.
I think it will measure both. But don't forget that the sensor will measure gravity as well, so even when your car is not moving you will still get accelerometer readings. A nice talk on sensors in smartphones: http://www.youtube.com/watch?v=C7JQ7Rpwn2k&feature=results_main&playnext=1&list=PL29AD66D8C4372129 (it's about Android, but the same type of sensors is used in the iPhone).
An accelerometer measures the acceleration of the resultant force applied to it (velocity is not a force, by the way). In this case the force is F = g + w + c, i.e. the vector sum of gravity, the centrifugal force (the reaction to the steering's centripetal force, pointing away from the center of the turn), and the car's acceleration force (the force changing the absolute value of the instantaneous velocity, pointing along the velocity vector). Provided the Z axis of the accelerometer always points along the gravity vector (which is rarely the case in an actual car), the g, w and c accelerations appear on the Z, X and Y axes respectively.
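A rough numeric sketch of that vector sum in Python, with made-up example values (the speed, radius and signs are illustrative only; the actual signs depend on the device's axis convention):

import numpy as np

G = 9.81   # gravity, m/s^2
v = 15.0   # speed, m/s (54 km/h), example value
r = 50.0   # turn radius, m, example value

w = v**2 / r   # centrifugal magnitude: 4.5 m/s^2
c = 1.0        # tangential acceleration while speeding up, m/s^2

# With the idealized mounting above (+X right, +Y forward, +Z up), an ideal
# sensor would report roughly this combined vector:
reading = np.array([w, c, G])
print(reading, np.linalg.norm(reading))  # components and total magnitude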
Unless you are in free fall, the g-force (gravity) is always measured. If I understand your setup correctly, the g-force will appear on the z axis, the axis that is vertical in the Earth frame of reference. I cannot tell whether it will be +z or -z; it is partly convention, so you will have to check it for yourself.
UPDATE: If the car is also going up/downhill then you have to take the rotation into account. In other words, there are two frames of reference: the iPhone's frame of reference and the Earth frame of reference. If you would like to deal with this situation, then please ask a new question.
I noticed some really puzzling behavior on the iPhone:
If I hold the phone vertically and tilt it, the compass heading changes.
I already figured out that it changes by the same amount it would change for the same tilt applied while horizontal (i.e., call the vector coming out of the screen Y; turning around Y results in a compass change no matter the attitude of the iPhone).
I want to compensate for that. My app was not made to be held horizontally (although I also plan to allow some tilting around what I'll call the X axis, from about 10 degrees to 135).
But I really could not figure out how the iPhone calculates the heading, and thus where the heading vector actually points...
After some scientific-style experiments, I found the following:
The iPhone's magnetometer has 3 axes: X, which goes from left to right across the screen; Y, which goes from bottom to top; and Z, which comes from behind the phone out through the front.
The Earth's magnetic field, as the laws of physics predict, is not parallel to the surface everywhere; in my location (Brazil) it is slanted about 30 degrees (meaning that I have to hold the phone at a 30-degree angle to zero out two of the axes).
One possible technique to calculate north is to take the cross product of a vector tangential to the magnetic field (i.e., the vector the magnetometer reports to you) and gravity. The result is a vector that points east. If you wish, you can take another cross product between east and gravity, resulting in a vector that points north.
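A minimal sketch of that cross-product technique in Python/NumPy (the function name is mine; depending on which way your gravity vector points, either product may come out negated, so swap the operand order if east comes out pointing west or north pointing south):

import numpy as np

def east_north(mag, gravity):
    # mag and gravity are 3-vectors in the device coordinate system above.
    east = np.cross(mag, gravity)
    east /= np.linalg.norm(east)
    north = np.cross(east, gravity)
    north /= np.linalg.norm(north)
    return east, north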
Know that the iPhone's sensors are quite sensitive, and every minor fluctuation and vibration is picked up, so it is a good idea to use a low-pass filter to remove the noise from the signal.
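For example, the classic one-pole low-pass filter (alpha near 0 smooths heavily, near 1 follows the raw signal closely; 0.1 is just a starting point):

def lowpass(prev, raw, alpha=0.1):
    # Blend each new raw sample into a running smoothed value.
    return prev + alpha * (raw - prev)

smoothed = 0.0
for raw in (0.0, 1.0, 1.0, 1.0):     # a step in the signal
    smoothed = lowpass(smoothed, raw)  # rises gradually toward 1.0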
The iPhone itself has a complex routine to determine the "true heading". I haven't figured it out completely, but it uses the accelerometer in some way to compensate for tilt. You can use the accelerometer to compensate back if you wish; for example, if the phone is tilted 70 degrees, you can shift the true heading by 70 degrees too, and the result is that the phone ignores tilting.
The true-heading routine also checks whether the iPhone is upside down. If we call holding it horizontally in front of you 0 degrees, then at more or less 135 degrees it decides that it is upside down and flips the results.
Note that the same coordinate system also applies to the accelerometer, allowing vector operations between accelerometer and magnetometer data without much fiddling.
I have done some drawing on one layer, and now I want to draw the same thing on another layer. So I stored all the points of the drawing that the user made on the first layer, and then converted the stored points into the other layer's coordinates using the convertPoint:toLayer: method. It works, but it creates a problem with orientation: if I drew in portrait, it only works in portrait; in landscape the positions change. Please suggest a way out of this.
Thanks
Normalise your stored points such that the x, y positions are relative to a surface of size 0..1 x 0..1 (divide the x and y by the width and height of the current surface). Then, whenever you want to change the size of the underlying surface, multiply each point by the new surface's width and height. All points will then appear in the same relative positions regardless of the surface dimensions.
Note that the above will scale (going from portrait to landscape, the Y will be compressed and the X expanded). If you don't want that, you will need to take the physical on-screen dimensions of the surface into account too; that is, normalise your points to some physical dimension instead.
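A minimal sketch of the normalisation idea in Python (the question doesn't state a language, so treat this as pseudocode; the same arithmetic applies anywhere):

def normalize(points, width, height):
    # Map pixel coordinates into the 0..1 x 0..1 unit square.
    return [(x / width, y / height) for (x, y) in points]

def denormalize(norm_points, width, height):
    # Map unit-square coordinates back onto a concrete surface.
    return [(u * width, v * height) for (u, v) in norm_points]

# A point stored on a 320x480 portrait surface...
pts = normalize([(160, 240)], 320, 480)    # -> [(0.5, 0.5)]
# ...reappears at the same relative position on a 480x320 landscape surface.
print(denormalize(pts, 480, 320))          # -> [(240.0, 160.0)]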
Note: I have no idea what system, language, package, library, etc. you are using, as you don't state it in your question!
I have an image that was rotated by an unknown angle, and I don't have the original image. How do I determine the angle of rotation with MATLAB commands?
I need to rotate the image back by this angle to recover the original image.
As @High Performance Mark mentions in his comment, it is difficult to give an answer when it is unclear how you can recognize that the image is rotated, or what would make you decide that the rotation has been properly corrected.
In other words, you will first have to find a way to determine the rotation angle by analyzing the image with respect to specific features that inform you about a potential rotation. For example, if your image contains a face, you'd do face detection (for which there is plenty of code on the File Exchange) and then rotate so that the eyes are up and the mouth is down. If your image contains lines that should be vertical and/or horizontal in an un-rotated image, you can apply a Hough transform to the image and find the most likely angle of rotation using houghpeaks.
Finally, to rotate your image, you can use imrotate.
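Since the question also mentions doing this programmatically, here is a rough sketch of the Hough idea in Python/OpenCV (the MATLAB equivalents are hough/houghpeaks and imrotate; the thresholds and the sign of the correction are assumptions to tune):

import cv2
import numpy as np

def estimate_rotation(img):
    # Works when the scene contains lines that should be vertical/horizontal.
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
    if lines is None:
        raise RuntimeError("no dominant lines found")
    # theta is the angle of each line's normal; fold it into [-45, 45) so
    # vertical and horizontal lines vote for the same rotation angle.
    angles = [(np.degrees(theta) + 45) % 90 - 45 for rho, theta in lines[:, 0]]
    return float(np.median(angles))

def derotate(img):
    angle = estimate_rotation(img)    # flip the sign if the result
    h, w = img.shape[:2]              # rotates the wrong way for you
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, M, (w, h))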
Without examples or a more detailed description, it's hard to give good advice. But generally, this can be done for some types of images.
For example, suppose the image shows buildings, poles, furniture, or something else that should have vertical edges. Run an edge detector, then take a Fourier transform. For an unrotated image there should be peaks, or some visible pattern in the power spectrum, along the horizontal (X) frequency axis, since vertical edges vary in the x direction. The power spectrum rotates the same way as the image, so if you can devise an algorithm to find the spectral features that indicate vertical edges, you can measure their angle with respect to the origin (zero frequency). That is the angle of image rotation.
But you will have to distinguish that particular feature from all the other image features that show up in the power spectrum. Have fun with that; this is the kind of detail that will take most of your creativity and time.
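A bare-bones NumPy sketch of the spectrum measurement (it dodges that hard feature-finding step by just taking the strongest non-DC peak, which will often not be the vertical-edge feature; the sign convention also depends on how you index the image):

import numpy as np

def spectrum_angle(edge_img):
    # Power spectrum of an edge map, with the zero frequency at the center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(edge_img))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    spec[cy - 1:cy + 2, cx - 1:cx + 2] = 0   # suppress the DC neighborhood
    y, x = np.unravel_index(np.argmax(spec), spec.shape)
    # Angle of the strongest component, measured from the horizontal axis,
    # folded into (-90, 90] because the spectrum is point-symmetric.
    ang = np.degrees(np.arctan2(cy - y, x - cx))
    return (ang + 90) % 180 - 90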