delegate method for CMMotionManager - iPhone

I just want to know: when we call startGyroUpdates on CMMotionManager and set some updateInterval, say 1.0/60.0, is there a delegate method we have to implement in order to receive the gyro updates? If not, where/how can we get the gyro updates?
Also, is there a useful code snippet for detecting a change in the device's position, i.e. whether the device has moved up or down from some reference point?

Documentation says:
startGyroUpdates
Starts gyroscope updates without a handler.
- (void)startGyroUpdates
Discussion
You can get the latest gyroscope data through the gyroData property. You must call stopGyroUpdates when you no longer want your application to process gyroscope updates.
Availability
Available in iOS 4.0 and later.
See Also
– startGyroUpdatesToQueue:withHandler:
Declared In
CMMotionManager.h
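There is no delegate involved: after calling startGyroUpdates you poll the gyroData property yourself (e.g. from a timer or CADisplayLink), or you use the block-based startGyroUpdatesToQueue:withHandler: variant instead. A minimal sketch of the block-based approach (keep the CMMotionManager in a strong property in a real app; the local variable here is just for illustration):
#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.gyroUpdateInterval = 1.0 / 60.0;   // 60 Hz

if (motionManager.gyroAvailable) {
    [motionManager startGyroUpdatesToQueue:[NSOperationQueue mainQueue]
                               withHandler:^(CMGyroData *gyroData, NSError *error) {
        if (error != nil) return;
        CMRotationRate rate = gyroData.rotationRate;   // rad/s around x, y, z
        NSLog(@"rotation rate: %f %f %f", rate.x, rate.y, rate.z);
    }];
}

// Later, when you no longer need updates:
// [motionManager stopGyroUpdates];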

Adding to xs2bush's correct answer: See the documentation links in Simple iPhone motion detect for more information.
Regarding the second point, moved up or down from some reference point: definitely not. At the moment there is no way at all to determine displacement with acceptable precision. There are several questions and discussions about this, like
Getting displacement from accelerometer data with Core Motion or
Measuring time the vehicle takes to accelerate in iPhone (I don't believe the 3% ;-)

Related

how to use userAcceleration on the iPhone

Hi, I want to get the acceleration of my iPhone, but I don't want the acceleration values to change when the iPhone is tilted.
I think the answer is userAcceleration, but I don't know how to get the userAcceleration values.
I know that I have to use Core Motion and CMDeviceMotion, but I don't know how to initialize and set it up.
I know this question was asked awhile ago, but I'm hoping I can provide some interesting perspective if you're still interested.
userAcceleration will provide watered-down (not raw) data derived from a combination of accelerometer and gyroscope information. You can get the raw acceleration data from CMMotionManager through its accelerometerData property.
Unfortunately, the purpose of the accelerometer on an iOS device is to determine movement and orientation along three axes: X, Y and Z. The raw accelerometer readings don't differentiate between "tilting" and "movement" - they're one and the same. I don't know what purpose you have for separating the two, but that's what's laid out in the Core Motion framework for us.
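That said, to answer the "how do I set it up" part: a minimal sketch of reading userAcceleration through CMDeviceMotion looks like this (again, keep the motion manager in a strong property; the names here are just illustrative):
#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;

if (motionManager.deviceMotionAvailable) {
    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
        if (error != nil) return;
        // userAcceleration has gravity already removed, so merely tilting
        // an otherwise still device reads close to zero on all axes.
        CMAcceleration a = motion.userAcceleration;
        NSLog(@"user acceleration: %f %f %f", a.x, a.y, a.z);
    }];
}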

GPS signal lost warning notification

Can anyone tell me how to show an error message when the GPS signal is not available on the iPhone?
Also, the lat/lon labels keep showing the previous reading from the point where the signal was lost. Please can anyone guide me?
Thanks in advance.
Implement the CLLocationManagerDelegate protocol method locationManager:didFailWithError:.
Here's a quote from the documentation:
Discussion
Implementation of this method is optional. You should implement this method, however. If the location service is unable to retrieve a location fix right away, it reports a kCLErrorLocationUnknown error and keeps trying. In such a situation, you can simply ignore the error and wait for a new event.
To keep the values you should buffer them in a property and only update them when locationManager:didUpdateToLocation:fromLocation: fires.
One thing that could happen, when you have and then lose "true" GPS, is that Core Location will fall back to cell tower triangulation or Wi-Fi positioning.
The only way you'd know that has happened is a sudden increase in the horizontalAccuracy value of the CLLocation objects you get in your didUpdateToLocation: method.
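Putting both pieces together, a rough sketch of the two delegate methods (lastKnownLocation is a made-up property name for the buffered value, and the 100 m accuracy threshold is only an illustrative cut-off):
#import <CoreLocation/CoreLocation.h>

- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation
{
    // Buffer the reading so the labels keep the last good value if the signal drops.
    self.lastKnownLocation = newLocation;

    // A sudden jump in horizontalAccuracy hints that Core Location fell back
    // from GPS to cell tower / Wi-Fi positioning.
    if (newLocation.horizontalAccuracy > 100.0) {
        // e.g. show a "GPS signal weak" hint in the UI
    }
}

- (void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error
{
    if (error.code == kCLErrorLocationUnknown) {
        // Temporary: Core Location keeps trying, so just wait for the next update.
        return;
    }
    // Anything else (e.g. kCLErrorDenied) is worth surfacing to the user.
}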

Acceleration threshold for shake versus typical hand motion?

What acceleration characterization should be used to differentiate an intentional single shake-to-do-something shake versus other typical random or unintentional device motions?
Shaking can be detected by the OS itself. There's no need to do this yourself.
More info in this StackOverflow question:
How to use Shake API in iPhone SDK 3.0?
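For reference, the OS-level handling boils down to a couple of UIResponder overrides; a minimal sketch (the view or view controller has to be the first responder to receive motion events):
- (BOOL)canBecomeFirstResponder
{
    return YES;   // and call becomeFirstResponder, e.g. in viewDidAppear:
}

- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event
{
    if (motion == UIEventSubtypeMotionShake) {
        NSLog(@"shake detected");
        // trigger the shake-to-do-something action here
    }
}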

iPhone touch shape or raw data

Anyone know if it is yet possible to detect the touch shape? Maybe through getting the raw touchscreen data?
I found this question/answer here: How to get raw touchscreen data?
That mentions GSEvent, but it is quite old.
I'd like to try to get a rough calculation of the pressure of the touch by its shape/area, but of course UITouch only gives a calculated point.
Yes, raw touch data is contained in the GSEventRecord object. In particular, what you are looking for is the pathMajorRadius property on GSPathInfo, which gives the major radius of the tap. This is a rough estimate of the pressure, but take into account that big/small fingers also give different measures.
Watch out for the pathPressure property, also in GSPathInfo: it does NOT contain the pressure. It always contains 1; capacitive screens (like the iPad's or iPhone's) do not measure pressure at all.
If you are planning on submitting your app to the App Store, you won't be able to if you include access to private frameworks (in this case, GSEvent.h in the GraphicsServices framework). What you need to do is catch every UIEvent in the sendEvent: method of your subclassed UIApplication, then use the methods in
https://github.com/kennytm/iphone-private-frameworks/blob/master/GraphicsServices/GSEvent.h
to get the information out of the GSEvent.
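The public-API half of that is only a UIApplication subclass; the actual extraction of the radius from the event relies on the private calls in the header above, so it is left as a comment here (and remember to pass the subclass name as the third argument of UIApplicationMain in main.m):
@interface MyApplication : UIApplication
@end

@implementation MyApplication

- (void)sendEvent:(UIEvent *)event
{
    // Every touch passes through here; this is where you would bridge the
    // UIEvent to its underlying GSEvent using the private GraphicsServices calls.
    [super sendEvent:event];   // always forward, or the UI stops responding
}

@end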

Can the exposure time be manually adjusted for an iOS camera?

I want to adjust the exposure of the iPhone/iPod touch camera with intimate detail. I would prefer to take a series of photos with decreasing exposure times to obtain a sequence of images (for HDR reconstruction). Is this possible?
If not, what's the next best thing? It seems you can set a point of interest in the image for the autoexposure. Perhaps I could search for a dark/light region of the image and then use this exposurePointOfInterest to adjust the exposure, but this seems like a very indirect solution that is also error-prone. If anybody has tried an alternative, such an answer is also desirable.
iOS gives you control over the frame duration via
videoMinFrameDuration
videoMaxFrameDuration
Since exposure times vary with the frame rate and frame duration, setting the min and max frame duration to a particular value locks the frame rate, and that in turn affects your exposure times.
This is a very indirect way of controlling it, but maybe it helps your case.
An example would be something like this:
// conn is assumed to be the AVCaptureConnection of the video output, and
// CAPTURE_FRAMES_PER_SECOND a constant you define yourself (e.g. 30).
if (conn.isVideoMinFrameDurationSupported)
    conn.videoMinFrameDuration = CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND);
if (conn.isVideoMaxFrameDurationSupported)
    conn.videoMaxFrameDuration = CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND);
Since you would have to decrease the shutter speed of the camera, this unfortunately does not appear to be possible, and, more importantly, it is against the HIG:
Changing the behavior of iPhone external hardware is a violation of
the iPhone Developer Program License Agreement. Applications must
adhere to the iPhone Human Interface Guidelines as outlined in the
iPhone Developer Program License Agreement section 3.3.7
Related article Apple Removes Camera+ iPhone App From The App Store After Developer Reveals Hack To Enable Hidden Feature.
If it can be done programmatically, instead of with the hardware, you might have a chance, but then it's just an effect on an image, not a true long-exposure picture.
There are some simulated slow shutter apps that do get approved like Slow Shutter or Magic Shutter.
Related article: New iPhone Camera App “Magic Shutter” Hits The App Store.
This is supported since iOS 8:
http://developer.xamarin.com/guides/ios/platform_features/intro_to_manual_camera_controls/
Have a look at AVCaptureExposureModeCustom and CaptureDevice.LockExposure...
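In Objective-C that corresponds to AVCaptureDevice's setExposureModeCustomWithDuration:ISO:completionHandler:, available on iOS 8 and later. A rough sketch, where device is assumed to be your active AVCaptureDevice and the duration/ISO values are only examples (a real app should clamp them to activeFormat.minExposureDuration/maxExposureDuration and minISO/maxISO):
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    CMTime duration = CMTimeMakeWithSeconds(1.0 / 125.0, 1000000);  // target shutter speed
    float iso = device.activeFormat.minISO;                         // keep ISO fixed and low

    [device setExposureModeCustomWithDuration:duration
                                          ISO:iso
                            completionHandler:^(CMTime syncTime) {
        // The first frame whose timestamp is >= syncTime was fully
        // captured with the new settings - useful for HDR bracketing.
    }];
    [device unlockForConfiguration];
}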
I tried to do this for my motion-activated camera app (Pocket Sentry) and I found that it is not possible to do this AND get approved in the App Store.
I have been trying to do this myself. I think it's possible only by using the exposure point of interest property. I am detecting the dark and bright spots and then adjusting the point accordingly.
Please refer to: Detecting bright/dark points on iPhone screen
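For what it's worth, the point-of-interest part is only a few lines; device is assumed to be the active AVCaptureDevice, and the point (normalized coordinates, (0,0) top-left to (1,1) bottom-right in the sensor's landscape orientation) would come from whatever bright/dark-spot detection you use:
NSError *error = nil;
if (device.exposurePointOfInterestSupported && [device lockForConfiguration:&error]) {
    device.exposurePointOfInterest = CGPointMake(0.5, 0.5);             // e.g. a detected bright spot
    device.exposureMode = AVCaptureExposureModeContinuousAutoExposure;  // re-run AE at that point
    [device unlockForConfiguration];
}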
Does anyone know a better way to do this?
I am not too sure, but you should try using the AVFoundation framework to build the camera app, following Apple's sample code:
AVCam Sample Code
And then try to leverage the exposureMode property of the class:
exposureMode Class Reference
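If you cannot use the iOS 8 custom exposure mode mentioned above, another way to leverage exposureMode is to let the auto-exposure settle on the scene and then freeze it, so subsequent frames keep the same exposure. A sketch, again assuming device is the active AVCaptureDevice:
NSError *error = nil;
if ([device isExposureModeSupported:AVCaptureExposureModeLocked] &&
    [device lockForConfiguration:&error]) {
    device.exposureMode = AVCaptureExposureModeLocked;
    [device unlockForConfiguration];
}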