Measuring distance with iPhone camera

How can I measure distances in real time (using the video camera?) on the iPhone, like this app that uses a card of known size as a reference for the actual distance?
Are there any other ways to measure distances? Or how should I go about implementing the card method? What framework should I use?

Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, I can't say it seems very user friendly.
So you either need a reference object of some known size, or you need to deduce the size from the image. One idea I just had that might help is using the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some cases).
Here's what I think.
When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images, one with flash on, one with flash off. Then you can analyze the lighting differences between the images and the flash reflection to determine the scale of the image. This will only work for close, not-too-shiny objects, I guess.
But that's about the only other way I can think of to deduce scale from an image without any fixed objects.
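If you want to experiment with this, here is a minimal sketch of how the two-exposure capture could be wired up with modern AVFoundation. This is an assumption about the setup, not the app's actual method: `photoOutput` is assumed to be a configured AVCapturePhotoOutput on a running session, `delegate` an AVCapturePhotoCaptureDelegate, and the analysis step is entirely left out.

```swift
import AVFoundation

// Assumes `photoOutput` is already attached to a running AVCaptureSession
// on a device whose supportedFlashModes include .on.
func captureFlashPair(photoOutput: AVCapturePhotoOutput,
                      delegate: AVCapturePhotoCaptureDelegate) {
    let flashOn = AVCapturePhotoSettings()
    flashOn.flashMode = .on
    photoOutput.capturePhoto(with: flashOn, delegate: delegate)

    let flashOff = AVCapturePhotoSettings()
    flashOff.flashMode = .off
    photoOutput.capturePhoto(with: flashOff, delegate: delegate)
}

// The idea would then be to compare per-pixel brightness between the two
// frames: flash illumination falls off roughly with 1/d^2, so nearer
// surfaces brighten much more than distant ones.
```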

I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have an active (infrared) range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen, and the phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
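For what it's worth, newer versions of AVFoundation do expose a lensPosition value on AVCaptureDevice that you can observe while the camera focuses. It's a unitless value in [0, 1] (0.0 = nearest focus, 1.0 = farthest) rather than a distance, so at best it's a rough proxy that would need per-device calibration. A sketch:

```swift
import AVFoundation

var observation: NSKeyValueObservation?

// Watch the lens position as the camera refocuses. lensPosition is
// key-value observable; mapping it to metres is left as a calibration
// exercise (it is NOT a distance).
func watchFocus(of device: AVCaptureDevice) {
    observation = device.observe(\.lensPosition, options: [.new]) { _, change in
        if let position = change.newValue {
            print("lens position:", position)  // rough proxy for subject distance
        }
    }
}
```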

Another solution may be to use two laser pointers.
Basically you would shine two laser pointers at, say, a wall, keeping the beams parallel. The further back you go, the closer together the two dots will look in the video, even though they remain the same physical distance apart: under a pinhole-camera model, their separation in pixels is inversely proportional to the distance. From that you can easily come up with a formula to measure the distance based on how far apart the dots are in the photo.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
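The formula itself falls out of the pinhole-camera model: pixel separation = focal length (in pixels) × real separation / distance, so solve for distance. A sketch, where the focal length in pixels is something you would calibrate once per device (an assumption here):

```swift
import Foundation

// Two parallel beams `realSeparation` metres apart appear
// `pixelSeparation` pixels apart in the image; invert the pinhole model.
func distanceToWall(realSeparation: Double,
                    pixelSeparation: Double,
                    focalLengthPixels: Double) -> Double {
    return focalLengthPixels * realSeparation / pixelSeparation
}

// e.g. beams 10 cm apart, 58 px apart in the image, f ≈ 2900 px:
// distanceToWall(realSeparation: 0.10, pixelSeparation: 58,
//                focalLengthPixels: 2900) ≈ 5 m
```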

Related

Using tiles as texture when zooming in on Object in Unity3d

I have a problem with a smartphone app (Android so far) that I am programming in Unity3D.
I got the following set up:
I have a sphere on which, so far, I have a texture of the world. Its resolution is already set to the maximum, but when I "zoom in" on it, it just looks disgusting. I'd like to reload texture tiles like Google Maps does, to get a higher resolution without too big an impact on memory. Is that approach a good one at all? And if so, how do I do it?
I haven't found the right resources so far, or I've misunderstood them.
EDIT: Since some people may have misunderstood me, I'll try again with some more details.
What I am trying to set up is the Earth as a sphere with a map. That map looks okay as long as you don't zoom in too far, which is expected given the texture's finite resolution. Now, to keep good graphic quality when zoomed in far enough to see streets in a city, I want to load additional "satellite pictures" the way Google Maps does (you know: you zoom in and Google Maps reloads the images in that specific area to keep the graphic quality high). How do I achieve that specific behaviour? Providing all the tiles I need is no problem; I have a vector graphic from which I could export all the needed tiles, but I don't know how to reload those tiles when zooming in on a specific area (to reduce memory consumption, draw calls, etc.).
Any help is really appreciated, as I am a beginner in programming with Unity. Thank you very much in advance!
Dustin
Are you actually zooming in, or moving the camera physically closer? Unity has internal filtering (mipmapping) that serves lower-detail versions of a texture at longer distances. When zooming, the camera isn't actually moving closer, meaning the more detailed version of the texture may not be selected.
Also check your Max Texture Size setting for the texture you are using. Maybe it is turned down too low. Try turning it up to a higher resolution.
If it still looks blurry up close with Max Texture Size turned higher, your texture may not be high enough resolution to begin with. Make it a high enough resolution that it looks good up close. Then begin lowering the Max Texture Size resolution until up close it is "good enough".
Next, typically, it will start looking blurry in the distance or show distinct linear transitions from sharp to blurry. You can fix this by turning up the Aniso Level. Use this with caution, because it increases the rendering load. Only use what you need, especially for mobile games.

Overlay "Structured Glas" Effect on iPhone Camera Feed - General Directions

I'm currently trying to write an app that can show the effects of glass, as seen through the iPhone camera.
I'm not talking about simple, uniform glass, but glass like this:
Now I already broke this into two problems:
1) Apply an image filter to the 2D frames coming from the iPhone camera. This has been done and seems possible, e.g. in the app: faceman
2) I need to get the individual lighting properties of a sheet of glass that my client supplies me with. Basically, there must be a way to capture how the glass distorts and skews the image. I think it might be possible to take a high-res picture of the glass plate laid on a checkerboard image and somehow analyze the result.
Now, I'm mostly searching for literature and weblinks on how you think I could start on 2). It doesn't need to be exact; in the end I just need something that looks approximately like the sheet of glass I want to show. I don't even know where to search: physics, image filtering, or computational photography books.
EDIT: I'm currently thinking, that one easy solution could be bump-mapping the texture on top of the camera-feed, I asked another question on this here.
You need to start with OpenGL. You want to effectively have a texture - similar to the one you've got above - displace the texture below it (the live camera view) to give the impression of depth and distortion. This is a 'non-trivial' problem, in that whilst it's a fairly standard problem in its field, if you're coming from a background with no graphics or OpenGL experience you can expect a very steep learning curve.
So in short, the only way you can achieve this realistically on iOS is to use OpenGL, and that should be your starting point. Apple have a few guides on the matter, but you'll be better off looking elsewhere. There are some useful books such as the OpenGL ES 2.0 Programming Guide that can get you off on the right track, but where you start would depend on how comfortable you are with 3D graphics and C.
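To make the displacement idea concrete, here is a hedged sketch of what an OpenGL ES 2.0 fragment shader for it could look like. On iOS the shader source usually lives as a string in the host code; all texture and uniform names here are illustrative, not from any particular SDK or sample.

```swift
// Sample a "glass" normal map and use its red/green channels to offset
// the lookup into the live camera texture, faking refraction.
let glassFragmentShader = """
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_cameraFeed;   // live camera frame
uniform sampler2D u_glassNormals; // normal/bump map derived from the glass sheet
uniform float u_strength;         // how strongly the glass bends the image

void main() {
    // Decode the normal from [0,1] into [-1,1] and use x/y as an offset.
    vec2 n = texture2D(u_glassNormals, v_texCoord).rg * 2.0 - 1.0;
    vec2 distorted = v_texCoord + n * u_strength;
    gl_FragColor = texture2D(u_cameraFeed, distorted);
}
"""
```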
Just wanted to add that I solved this using the refraction example in the Khronos OpenGL ES SDK.
I wrote a blog entry with pictures about it:
simulating windows with refraction

Measuring a room with an iPhone

I have a need to measure a room (if possible) from within an iPhone application, and I'm looking for some ideas on how I can achieve this. Extreme accuracy is not important, but accuracy down to say 1 foot would be good. Some ideas I've had so far are:
Walk around the room and measure using GPS. Unlikely to be anywhere near accurate enough, particularly for iPod touch users
Emit sounds from the microphone and measure how long they take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Anyone have any other ideas?
You could stand in one corner and throw the phone against the far corner. The phone could begin measurement at a certain point of acceleration and end measurement at deceleration
1) Set iPhone down on the floor starting at one wall with base against the wall.
2) Mark line where iPhone ends at top.
3) Pick iPhone up and move base to where the line is you just drew.
4) Repeat steps 1->3 until you reach the other wall.
5) Multiply number of lines it took to reach other wall by length of iPhone to reach final measurement.
=)
I remember seeing programs for realtors that involved holding a reference object up in a picture. The program would identify the reference object and other flat surfaces in the image and calculate dimensions from that. It was intended for measuring the exterior of houses. It could follow connected walls that it could assume were at right angles.
Instead of shipping with a reference object, as those programs did, you might be able to use a few common household objects like a piece of printer paper. Let the user pick from a list of common objects what flat item they are holding up to the wall.
Detecting the edges of walls, and of the reference object, is some tricky pattern recognition, followed by some tricky math to convert the found edges to planes. Still better than throwing your phone at the far wall, though.
Emit sounds from the microphone and measure how long they take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Au contraire, mon frère.
This is the most user friendly, not to mention accurate, way of measuring the dimensions of a room.
PocketMeter measures the distance to one wall with an accuracy of half an inch.
If you use the same formulas to measure distance, but have the person stand near a corner of the room (so that the distances to the walls, floor, and ceiling are all different), you should be able to calculate all three measurements (length, width, and height) with one sonar pulse.
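The arithmetic behind a single pulse is plain time-of-flight: the pulse travels to the surface and back, so the one-way distance is half the round trip. A sketch:

```swift
import Foundation

// Speed of sound in air at ~20 °C; it drifts with temperature,
// which is one source of error in practice.
let speedOfSound = 343.0  // metres per second

// `echoDelay` is the measured time between emitting the pulse and
// hearing its reflection, in seconds.
func distance(echoDelay: TimeInterval) -> Double {
    return speedOfSound * echoDelay / 2.0  // halve it: there and back
}

// e.g. a 29 ms round trip ≈ 4.97 m to the wall
```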
Edited, because of the comment, to add:
In an ideal world, you would get 6 pulses, one from each of the surfaces. However, we don't live in an ideal world. Here are some things you'll have to take into account:
The sound pulse causes the iPhone to vibrate. The iPhone microphone picks up this vibration.
The type of floor (carpet, wood, tile) will affect the time that the sound travels to the floor and back to the device.
The sound reflects off more than one surface (wall) and returns to the iPhone.
If I had to guess, because I've done something similar in the past, you're going to have to emit a multi-frequency tone, made up of a low frequency, a medium frequency, and a high frequency. You'll have to perform a fast Fourier Transform on the sound wave you receive to pick out the frequencies that you transmitted.
Now, I don't want to discourage you. The calculations can be done. However, it's going to take some work. After all PocketMeter has been at it for a while, and they only measure the distance to one wall.
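As a concrete starting point for picking your transmitted tones out of the echo: you don't necessarily need a full FFT, since the Goertzel algorithm gives you the energy at one known frequency for much less work. A sketch (the sample rate and frequencies are placeholders):

```swift
import Foundation

// Goertzel algorithm: energy of `samples` at a single target frequency.
// Cheaper than a full FFT when you only care about the few tones you emitted.
func goertzelPower(samples: [Float], sampleRate: Float, frequency: Float) -> Float {
    let omega = 2.0 * Float.pi * frequency / sampleRate
    let coeff = 2.0 * cos(omega)
    var s1: Float = 0, s2: Float = 0
    for x in samples {
        let s0 = x + coeff * s1 - s2
        s2 = s1
        s1 = s0
    }
    // Magnitude squared of the DFT bin at the target frequency.
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
}

// Check each tone you transmitted, e.g. at a 44.1 kHz sample rate:
// let lowEnergy  = goertzelPower(samples: buffer, sampleRate: 44100, frequency: 1000)
// let highEnergy = goertzelPower(samples: buffer, sampleRate: 44100, frequency: 8000)
```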
I think an easier way to do this would be to use the Pythagorean theorem. Most rooms are 8 or 10 feet tall and if the user can guess accurately, you can use the camera to do some analysis and crunch the numbers. (You might have to have some clever way to detect the angle)
How to do it
I expect 5 points off of your bottom line for this ;)
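For what the trigonometry might look like: if the user sights the line where the far wall meets the ceiling, the guessed ceiling height and the phone's tilt (from the accelerometer) form a right triangle. A sketch, with all the inputs assumed to come from the user or the sensors:

```swift
import Foundation

// `ceilingHeight` is the user's guess (8 or 10 ft, say), `eyeHeight` is
// how high the phone is held, and `tiltAngle` (radians above horizontal)
// comes from the accelerometer while sighting the wall/ceiling join.
func distanceToWall(ceilingHeight: Double, eyeHeight: Double, tiltAngle: Double) -> Double {
    return (ceilingHeight - eyeHeight) / tan(tiltAngle)
}

// e.g. 8 ft ceiling, phone at 5 ft, tilted up 15°:
// distanceToWall(ceilingHeight: 8, eyeHeight: 5,
//                tiltAngle: 15 * .pi / 180) ≈ 11.2 ft
```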
Let me see if this helps. Take an object of known length, place it against the wall, and take a picture of the wall along with the object using the iPhone. Then get the ratio of wall width to object width from the image. Since you know the width of the object, you can easily calculate the width of the wall. Repeat for each wall and you will have the room's measurements.
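A sketch of that ratio calculation (it assumes the photo is taken roughly head-on, so both widths are foreshortened equally):

```swift
import Foundation

// Both the object and the wall lie in the same plane, so their real
// widths are in the same ratio as their pixel widths in the photo.
func wallWidth(objectRealWidth: Double,
               objectPixelWidth: Double,
               wallPixelWidth: Double) -> Double {
    return objectRealWidth * wallPixelWidth / objectPixelWidth
}

// e.g. an A4 sheet (0.297 m) spanning 120 px on a wall spanning 1600 px:
// wallWidth(objectRealWidth: 0.297, objectPixelWidth: 120,
//           wallPixelWidth: 1600) ≈ 3.96 m
```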
Your users could measure a known distance by pacing it off, and thereby calibrate the length of their pace. Then they could enter the distance of each wall in paces, and the phone would convert it to feet. This would probably be very convenient, and would probably be accurate to within 10%.
If they may need more accurate readings, then give them the option of entering in a measurement from a tape measure.
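The calibration itself is just a ratio; a trivial sketch (names are illustrative):

```swift
// Calibrate the user's pace against a known distance once,
// then convert pace counts to metres.
func paceLength(calibrationDistance: Double, pacesWalked: Int) -> Double {
    return calibrationDistance / Double(pacesWalked)
}

func wallLength(paces: Int, paceLength: Double) -> Double {
    return Double(paces) * paceLength
}

// e.g. 10 m covered in 13 paces -> ~0.77 m per pace; a 9-pace wall ≈ 6.9 m
```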
This answer is somewhat similar to Jitendra's answer, but the method he suggests will only work where you can fit the whole wall in a single shot.
Get an object of known size and photograph it held against the wall with the iPhone held against the opposite wall (two people, or Blu-Tack, needed). Then you can calculate the distance between the walls from the size of the object (in pixels) in the photo. You could use a PDF to make a printed document the object of known size, and use a 2D barcode so the iPhone can pick it up automatically.

How can I capture the amount of surface area that the user is making contact with the iPhone screen?

Is there a way to capture the amount of screen area that is making contact with the user's finger? I assume there is, since this finger-painting app shows the iPad responding only to the pixels that the user touches.
Thanks so much in advance for your help!
The size of the touch is abstracted away by the framework, and UITouches only contain calculated (“best estimated”) points instead of the raw, actual areas that were touched. I would guess that the “pressure” was calculated from the duration and the direction of the touch.
In a nutshell, there is no public API to get the contact area.
I don't think Apple provides APIs for the size of the touch or, as @nickthedude said (I think), any kind of way to measure pressure. Basically, you need to implement your own algorithm/policy for determining line thickness/opacity/other effects. I believe a common way to do this is to measure the time spent on the stroke and work from there: for instance, if the user moved more quickly, you might want a thinner line segment. Apple really should just provide a canvas view of some kind. Best of luck!
To get the exact area you may have to roll your own, but you can get UIEvents pretty easily and then do some magic from there. Basically, implement/override touchesBegan, touchesEnded, and touchesMoved on the UIView in question and put your custom code there (see the sketch below).
Looking at the video, maybe the number of touches in the UIEvent set corresponds to the "pressure" of the touch; then again, maybe not.
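A hedged sketch of what that could look like in modern Swift, combining the touch-method overrides with the stroke-speed idea from the previous answer (the width mapping and the drawing routine are placeholders, not a known-good recipe):

```swift
import UIKit

// Derive a line width from how fast the finger moves:
// faster stroke -> thinner line.
class DrawingView: UIView {
    private var lastPoint: CGPoint = .zero
    private var lastTime: TimeInterval = 0

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        lastPoint = touch.location(in: self)
        lastTime = touch.timestamp
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let point = touch.location(in: self)
        let dt = touch.timestamp - lastTime
        let dist = Double(hypot(point.x - lastPoint.x, point.y - lastPoint.y))
        let speed = dt > 0 ? dist / dt : 0   // points per second

        // Arbitrary mapping: clamp width between 2 and 12 points.
        let width = max(2.0, 12.0 - speed / 100.0)
        drawSegment(from: lastPoint, to: point, width: CGFloat(width))

        lastPoint = point
        lastTime = touch.timestamp
    }

    private func drawSegment(from: CGPoint, to: CGPoint, width: CGFloat) {
        // Actual rendering (e.g. into a bitmap context or CAShapeLayer) omitted.
    }
}
```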
What if you laid down a series of successively smaller square UIViews wherever the user touched? If the touches "spilled" into the larger UIViews behind the smaller front ones, then you could conjecture that the touch pressure was harder. Something to try, I guess. Good luck.
Why not just describe what you want to do and focus on asking about that instead? It may not have anything at all to do with the example that has you so enthralled. I could use a camera to monitor your hand from across the table and paint pixels on the screen via Bluetooth, completely ignoring any contact between your fingers and the screen.

Custom touch tracking in iphone

I don't want to use any of the normal touch events in the iPhone SDK.
When a user touches the screen, I want to find out where he touched and all of the pixels he touched. Is there a way to do this on the iPhone, maybe using a low-level SDK?
I want this for something like a finger-drawing app on the iPhone.
There is no low-level API. IIRC, the data in a touch object is actually returned by the hardware. In other words, that is all the data that software can get.
Having done some touch UI experiments in the distant past, I can tell you that processing real-world touches in software is a lot more complicated than you would expect at first glance. It's not like tracking a mouse. A finger is actually a very blunt and imprecise pointing instrument on the scale of a mobile screen. There is a great deal of variation in finger size, contact pressure, contact area, angle of contact, and consistency of contact. It takes a lot of processing to turn that blunt imprecision into a single point, or collection of points, that an API can easily use.
I wouldn't try to reinvent the wheel even if you find a way to extract more data from the hardware. If nothing else (puts on interface-nazi hat), if your touch interface behaves differently from other apps, users will be confused when they have to switch back and forth.
A touch is a rather imprecise gesture, so getting all the pixels that one encompasses is not really possible. However, you can get the rough 'center' of a touch and extrapolate an area around that for a group of 'touched pixels'. No need to use a low-level SDK; just override -touchesBegan:withEvent: on UIView.
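A sketch of that extrapolation in modern Swift; the fingertip radius here is a pure guess (on newer iOS versions, UITouch's majorRadius gives a per-touch size estimate you could use instead):

```swift
import UIKit

// Take the single point UIKit reports and treat every pixel within a
// fixed fingertip radius as "touched".
class TouchAreaView: UIView {
    let fingertipRadius: CGFloat = 20  // points; an assumed value, tune to taste

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let center = touch.location(in: self)
        let touched = pixels(around: center, radius: fingertipRadius)
        print("approximated \(touched.count) touched pixels")
    }

    private func pixels(around center: CGPoint, radius: CGFloat) -> [CGPoint] {
        var result: [CGPoint] = []
        let r = Int(radius)
        for dx in -r...r {
            for dy in -r...r where CGFloat(dx * dx + dy * dy) <= radius * radius {
                result.append(CGPoint(x: center.x + CGFloat(dx),
                                      y: center.y + CGFloat(dy)))
            }
        }
        return result
    }
}
```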