I've been looking for a camera property that tells the distance between the camera and the object in focus, but without success.
Considering the phone has autofocus, one should be able to tell the distance simply by checking the focal length. Also, the captured images have the following properties, both of which seem to be what I'm looking for:
kCGImagePropertyExifFocalLength
kCGImagePropertyExifSubjectDistance
Is there a way to see these properties already in the camera without taking the picture?
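For context, here is a minimal sketch of reading those two EXIF keys from a photo that has already been saved to disk (the file path is a placeholder, and the subject-distance tag may not be present in every photo):

import Foundation
import ImageIO

// Hypothetical path to a photo that has already been saved to disk.
let url = URL(fileURLWithPath: "/path/to/photo.jpg") as CFURL

if let source = CGImageSourceCreateWithURL(url, nil),
   let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
   let exif = properties[kCGImagePropertyExifDictionary] as? [CFString: Any] {

    // Focal length of the lens, in millimetres.
    let focalLength = exif[kCGImagePropertyExifFocalLength] as? Double

    // Distance to the subject, in metres; not every photo includes this tag.
    let subjectDistance = exif[kCGImagePropertyExifSubjectDistance] as? Double

    print("Focal length: \(focalLength ?? 0) mm, subject distance: \(subjectDistance ?? 0) m")
}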
Currently, in my SceneKit scene for an iOS game written in Swift, the render distance is very limited: there is a noticeable cutoff in the terrain from the player's perspective. I can't find a "max render distance" setting anywhere, and the only option I've seen so far is to cover the cutoff with fog. I'm clearly missing something, since I've seen plenty of games with larger render distances, but after searching Google, the documentation, and Stack Overflow I can't seem to get an answer. Can anyone help?
Camera Far Clipping Plane
To adjust the maximum distance between the camera and a visible surface, use the zFar instance property. If a 3D object's surface is farther from the camera than this distance, the surface is clipped and does not appear. The default value in SceneKit is 100.0 meters.
arscnView.pointOfView?.camera?.zFar = 500.0
I'm a dingdong and figured out what I was missing.
What I was looking for was a setting on the camera that your scene uses as the point of view. There's a setting called "Z clipping" which clips out anything closer than the "near" value or farther than the "far" value, and by default "far" is set to 100 units. Just adjust that setting, either in code or in Xcode, and set it to a higher value to view the entire scene.
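If you'd rather do it in code than in the Xcode scene editor, here's a minimal sketch; the sceneView variable is a stand-in for your game's actual SCNView, and the 2000-unit value is just an example:

import SceneKit

let sceneView = SCNView()   // stand-in for your game's SCNView

// Raise the clipping limits on whatever camera the scene uses as its point of view.
if let camera = sceneView.pointOfView?.camera {
    camera.zNear = 0.1       // anything closer than this is clipped
    camera.zFar = 2000.0     // raise the 100-unit default so distant terrain is drawn
}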
I'd like to develop an iPhone app that does the following:
1. Starts the device camera.
2. Places a layer on the screen containing a stretchable frame for the user to fit to a desired object.
3. Measures the object's width & height.
You may look at this app which does practically what I need and more:
http://itunes.apple.com/us/app/easymeasure-measure-your-camera!/id349530105?mt=8
Note that it doesn't need to be super accurate and can definitely tolerate some error.
Any clue how to do it?
10x
The clue: Geometry and Trigonometry.
By knowing the camera's field-of-view angles, entering the height of the camera above the ground, and assuming a planar, i.e. flat, ground, you can use basic geometry and trigonometry to work everything out.
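A minimal sketch of that geometry; the camera height, the angles, and the flat-ground assumption below are all example inputs you would derive from the device's FOV and the on-screen frame:

import Foundation

// Assumed inputs -- in a real app these come from the device and the stretchable frame.
let cameraHeight = 1.5                       // metres above the ground (entered by the user)
let angleToObjectBase = 10.0 * .pi / 180     // radians below the horizon to the object's base
let horizontalExtent = 5.0 * .pi / 180       // angular width of the object, from the frame and the FOV

// Flat-ground assumption: distance from the camera to the object's base.
let distance = cameraHeight / tan(angleToObjectBase)

// Width subtended by the object at that distance.
let width = 2 * distance * tan(horizontalExtent / 2)

print("Distance \(distance) m, width \(width) m")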
I want to count the number of shakes, both horizontal and vertical. I have referred to UIAcceleration
I have also referred to Motion Events
But I couldn't come up with a better approach.
Any kind of help is highly appreciated: code, references, or anything else.
I just want to count the number of shakes the user makes by shaking the iPhone. A shake can be vertical or horizontal, holding the iPhone the normal way (home button at the bottom).
Try DiceShaker. You'll need the code for "Isolating Instantaneous Motion from Acceleration Data" given in Listing 4-6 of the Motion Events documentation (a high-pass filter computation) to detect the acceleration provided by the user.
EDIT: The accelerometer constantly includes the gravity component in its readings, because it works with a bunch of springs that measure the force component (along each spring's length) from the increase/decrease in the spring's length. So just remove the constant gravity component (the force that's ALWAYS acting) to detect the change provided by the user (hence the name high-pass). Luckily, we don't need to figure out how to do this ourselves, because Apple has done the hard work and given the equations in their documentation!
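A rough Swift sketch of that idea using CMMotionManager; the filtering factor, threshold, and debounce interval are assumptions you'd tune on a real device:

import CoreMotion
import Foundation

final class ShakeCounter {
    private let motion = CMMotionManager()
    private var lowPass = (x: 0.0, y: 0.0)
    private var lastShake = Date.distantPast
    private(set) var shakeCount = 0

    func start() {
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 60.0
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }

            // Low-pass filter isolates gravity; subtracting it gives the user-generated
            // part of the signal (the same idea as Apple's high-pass listing).
            let factor = 0.1                      // assumed filtering factor
            self.lowPass.x = a.x * factor + self.lowPass.x * (1 - factor)
            self.lowPass.y = a.y * factor + self.lowPass.y * (1 - factor)
            let userX = a.x - self.lowPass.x      // horizontal (device held upright)
            let userY = a.y - self.lowPass.y      // vertical (home button at the bottom)

            // Count a shake when either axis spikes past a threshold, with a short debounce.
            let threshold = 1.0                   // in g, assumed; tune on a device
            if (abs(userX) > threshold || abs(userY) > threshold),
               Date().timeIntervalSince(self.lastShake) > 0.3 {
                self.lastShake = Date()
                self.shakeCount += 1
            }
        }
    }

    func stop() { motion.stopAccelerometerUpdates() }
}

Usage would be as simple as creating a ShakeCounter, calling start(), and reading shakeCount when you need it.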
How can I measure distances in real time (with the video camera?) on the iPhone, like this app that uses a card of known size as a reference to work out the actual distance?
Are there any other ways to measure distances? Or how to go about doing this using the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching a video of the app, I can't say it seems too user friendly.
So you either need a reference object of some known size, or you need to deduce the size from the image. One idea I just had that might help you is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some stuff).
Here's what I think.
When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images, one with the flash on and one with the flash off. Then you can analyze the lighting differences and the flash reflection between the two images to determine the scale of the image. This will only work for close and not too shiny objects, I guess.
But that's about the only other way I can think of to deduce scale from an image without any reference objects.
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. The iPhone doesn't have this, and the f-stop is fixed. However, users can change the focus by tapping the camera screen. The phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
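Newer versions of AVFoundation do expose something along these lines: AVCaptureDevice has a lensPosition property, a unitless value from 0.0 (nearest focus) to 1.0 (farthest) rather than a physical distance, and it is key-value observable. Mapping it to metres would need per-device calibration. A minimal sketch, assuming a capture session is already running with the back camera:

import AVFoundation

// Observe the lens position of the back camera while a capture session is running.
let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

// Keep `observation` alive for as long as you want updates.
let observation = device?.observe(\.lensPosition, options: [.new]) { _, change in
    // 0.0 = closest focus, 1.0 = farthest; the scale is not linear in metres.
    if let position = change.newValue {
        print("Lens position: \(position)")
    }
}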
Another solution may be to use two laser pointers.
Basically you would shine two laser pointers at, say, a wall, keeping the beams parallel. The farther back you go, the closer together the dots will look in the video, even though they stay the same physical distance apart. From that you can come up with a simple formula to estimate the distance based on how far apart the dots appear in the image.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
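A rough sketch of that formula; the beam separation, field of view, and image width below are assumptions, and it uses a small-angle approximation:

import Foundation

/// Estimates distance to a surface from the apparent separation of two parallel laser dots.
/// - Parameters:
///   - beamSeparation: physical distance between the two beams, in metres (assumed known).
///   - pixelSeparation: measured distance between the dots in the image, in pixels.
///   - imageWidth: image width in pixels.
///   - horizontalFOV: camera's horizontal field of view, in radians.
func estimateDistance(beamSeparation: Double,
                      pixelSeparation: Double,
                      imageWidth: Double,
                      horizontalFOV: Double) -> Double {
    // Angle subtended by the two dots, assuming a roughly uniform angle per pixel.
    let angle = pixelSeparation * (horizontalFOV / imageWidth)
    // Small-angle approximation: separation ≈ distance * angle.
    return beamSeparation / angle
}

// Example: dots 5 cm apart, 40 px apart in a 1920 px image with a ~60° FOV.
let d = estimateDistance(beamSeparation: 0.05,
                         pixelSeparation: 40,
                         imageWidth: 1920,
                         horizontalFOV: 60.0 * .pi / 180)
print("Estimated distance: \(d) m")   // ≈ 2.3 m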
In an AR app where you annotate objects or buildings in a camera view, I want to understand the role that the different hardware components on the phone (iPhone/Android) play in achieving the AR effect. Please elaborate on the following:
Camera: provides the 2D view of reality.
GPS: provides the longitude,latitude of the device.
Compass: direction with respect to magnetic north.
Accelerometer: (does it have a role?)
Altimeter: (does it have a role?)
An example: if the camera view is showing the New York skyline, how does the information from the hardware listed above help me to annotate the view ? Assuming I have the longitude & latitude for the Chrysler building and it is visible in my camera view, how does one calculate with accuracy where to annotate the name on the 2D picture ? I know that given 2 pairs of (longitude,latitude), you can calculate the distance between the points.
Use the camera to get the field of view.
Use the compass to determine the direction the device is pointing. The direction determines the set of objects that fall into the field of view and need to be shown with AR adorners.
Use the GPS to determine the distance between your location and each object. The distance is usually reflected in the size of the AR adorner you show for that object, or in the number of details you show.
Use the accelerometer to determine the horizon of the view (a 3-axis accelerometer sensitive enough to measure the force of gravity). The horizon can be combined with the object's altitude to position the AR adorners properly vertically.
Use the altimeter for added precision in vertical positioning.
If you have detailed terrain/building information, you can also use the altimeter to determine the line of sight to the various objects and fully or partially clip the AR adorners of obscured or invisible objects.
If the AR device is moving, use the accelerometer to estimate the speed and either throttle the number of objects downloaded per view or smartly pre-fetch the objects that will come into view, to keep up with the speed of view changes.
I will leave the details of calculating all this data from the devices as an exercise to you. :-)
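As a starting point, though, here is a rough sketch of one core step: mapping an object's GPS coordinate to a horizontal screen position using the compass heading and the camera's horizontal field of view. The FOV value is something you'd look up or calibrate for the device, and the vertical placement from the accelerometer/altimeter is left out:

import CoreLocation
import Foundation

/// Returns the x coordinate (in points) where an annotation should be drawn,
/// or nil if the object is outside the camera's horizontal field of view.
func annotationX(user: CLLocationCoordinate2D,
                 target: CLLocationCoordinate2D,
                 deviceHeading: Double,      // degrees from true north (compass)
                 horizontalFOV: Double,      // degrees, assumed known for the camera
                 screenWidth: Double) -> Double? {
    // Initial bearing from the user to the target (standard spherical formula, in degrees).
    let lat1 = user.latitude * .pi / 180
    let lat2 = target.latitude * .pi / 180
    let dLon = (target.longitude - user.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let bearing = (atan2(y, x) * 180 / .pi + 360).truncatingRemainder(dividingBy: 360)

    // Angular offset of the target from the direction the camera is pointing.
    var delta = bearing - deviceHeading
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    guard abs(delta) <= horizontalFOV / 2 else { return nil }   // not on screen

    // Linear mapping of the angular offset onto the screen width.
    return screenWidth / 2 + (delta / (horizontalFOV / 2)) * (screenWidth / 2)
}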