Anyone know if it is yet possible to detect the touch shape? Maybe through getting the raw touchscreen data?
I found this question/answer here: How to get raw touchscreen data?
That mentions GSEvent, but it is quite old.
I'd like to try to get a rough calculation of the pressure of the touch by its shape/area, but of course UITouch only gives a calculated point.
Yes, raw touch data is contained in the GSEventRecord object. In particular, what you are looking for is the pathMajorRadius property on GSPathInfo, which gives the major radius of the touch. This is a rough estimate of the pressure, but keep in mind that big and small fingers also give different measurements.
Watch out for the pathPressure property, also in GSPathInfo: it does NOT contain the pressure. It always contains 1, because capacitive screens (like the iPad's or iPhone's) do not measure pressure at all.
If you are planning on submitting your app to the App Store, you won't be able to do so if you include access to private frameworks (as in this case, GSEvent.h in the GraphicsServices framework). What you need to do is catch every UIEvent in the sendEvent: method of your subclassed UIApplication, then use the functions declared in
https://github.com/kennytm/iphone-private-frameworks/blob/master/GraphicsServices/GSEvent.h
to get the information out of the GSEvent.
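In case it is useful, here is a minimal sketch of the UIApplication subclass part, which only uses public API; the class name is my own, and the actual GSEvent bridging (which relies on the private header linked above and will not pass App Store review) is only indicated in a comment rather than spelled out:

    #import <UIKit/UIKit.h>

    @interface TouchInspectingApplication : UIApplication
    @end

    @implementation TouchInspectingApplication

    - (void)sendEvent:(UIEvent *)event
    {
        if (event.type == UIEventTypeTouches) {
            // Here you would reach into the underlying GSEventRecord using the
            // declarations from the private GSEvent.h header and read
            // pathMajorRadius for each path. That step is private API, so it is
            // deliberately left out of this sketch.
            NSLog(@"touch event with %lu touch(es)", (unsigned long)[[event allTouches] count]);
        }
        [super sendEvent:event]; // always forward, or the app stops receiving input
    }

    @end

To make UIKit use the subclass, pass its class name as the third argument of UIApplicationMain in main.m.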
Hi, I want to get the acceleration of my iPhone, but I don't want the acceleration values to change when the iPhone is tilted.
I think the answer is userAcceleration,
but I don't know how to get the userAcceleration values.
I know that I have to use Core Motion and CMDeviceMotion, but I don't know how to initialize and set it up.
I know this question was asked a while ago, but I'm hoping I can provide some interesting perspective if you're still interested.
userAcceleration will provide processed (not raw) data, derived from a combination of accelerometer and gyroscope readings. You can get the raw acceleration data from CMMotionManager via its accelerometerData property (after starting accelerometer updates).
Unfortunately, the purpose of the accelerometer on an iOS device is to determine movement and orientation along three axes: X, Y and Z. The iOS system doesn't differentiate between "tilting" and "movement": they're one and the same. I don't know what purpose you have for separating the two, but that's what's laid out in the Core Motion framework for us.
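If it helps, here is a minimal sketch (class and property names are my own, assuming ARC) of reading userAcceleration through device motion updates, with a note on where the raw values live:

    #import <CoreMotion/CoreMotion.h>

    @interface MotionReader : NSObject
    @property (nonatomic, strong) CMMotionManager *motionManager;
    @end

    @implementation MotionReader

    - (void)start
    {
        self.motionManager = [[CMMotionManager alloc] init];
        if (!self.motionManager.deviceMotionAvailable) {
            return; // device motion (and therefore userAcceleration) needs a gyroscope
        }
        self.motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
        [self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                                withHandler:^(CMDeviceMotion *motion, NSError *error) {
            if (error != nil) {
                return;
            }
            // userAcceleration has gravity already subtracted, so merely tilting
            // the device at rest keeps the values near zero.
            CMAcceleration a = motion.userAcceleration;
            NSLog(@"user acceleration: x=%.3f y=%.3f z=%.3f", a.x, a.y, a.z);
        }];
        // For the raw readings instead (gravity included), call
        // startAccelerometerUpdates and read the accelerometerData property.
    }

    @end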
Just wanted to ask if there is any advantage to using either mouse click events or touch tap events when writing apps for mobiles or tablets (for the iPhone especially)?
I know that both of them should work fine, but in terms of performance, is either one better? Are there any things I should be aware of when choosing one or the other?
By the way, I am using ActionScript 3 to implement the app.
This is probably the best documentation on Adobe AIR touch support:
http://help.adobe.com/en_US/as3/dev/WSb2ba3b1aad8a27b0-6ffb37601221e58cc29-8000.html
Midway through that page it states:
Note: Listening for touch and gesture events can consume a significant amount of processing resources (equivalent to rendering several frames per second), depending on the computing device and operating system. It is often better to use mouse events when you do not actually need the extra functionality provided by touch or gestures.
The only benefit of touch, I would think, would be multi-touch. The TouchEvent has a touchPointID which allows you to track the movement of each touch point. If you don't care about multi-touch, it sounds like Mouse Events would be the way to go.
Excellent question! Tap events are "technically" slower, as they monitor multiple input points. If you're only concerned with a single touch input, the standard mouse event system is just fine. For touch events, there are a couple of objects created per listener to assist in handling the multitouch functionality (however, this amounts to a tiny fraction of a millisecond in lost performance).
I think TouchEvent works better than MouseEvent when implementing an app on tablets. I have tried it many times; you can run a test yourself.
I just want to know: when we call startGyroUpdates on a CMMotionManager and set some updateInterval, say 1.0/60.0, is there any delegate method we have to implement in which we receive the gyro updates? If not, where/how can we get the gyro updates?
Also, is there some useful code snippet to detect a change in the device's position, i.e. whether the device has moved up or down from some reference point?
Documentation says:
startGyroUpdates
Starts gyroscope updates without a handler.
- (void)startGyroUpdates
Discussion
You can get the latest gyroscope data through the gyroData property. You must call stopGyroUpdates when you no longer want your application to process gyroscope updates.
Availability
Available in iOS 4.0 and later.
See Also
– startGyroUpdatesToQueue:withHandler:
Declared In
CMMotionManager.h
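For completeness, here is a small sketch (my own example, not part of the quoted documentation) showing both ways to receive the updates: the block-based variant, which needs no delegate at all, and startGyroUpdates with manual polling of gyroData:

    #import <CoreMotion/CoreMotion.h>

    // Keep the manager in a strong reference somewhere long-lived, e.g. a property
    // on your view controller; a file-scope variable is used here only for brevity.
    static CMMotionManager *motionManager;

    void StartGyro(void)
    {
        motionManager = [[CMMotionManager alloc] init];
        if (!motionManager.gyroAvailable) {
            return;
        }
        motionManager.gyroUpdateInterval = 1.0 / 60.0;

        // Option 1: block-based updates; no delegate method is involved.
        [motionManager startGyroUpdatesToQueue:[NSOperationQueue mainQueue]
                                   withHandler:^(CMGyroData *gyroData, NSError *error) {
            if (error != nil) {
                return;
            }
            CMRotationRate r = gyroData.rotationRate;
            NSLog(@"rotation rate: x=%.3f y=%.3f z=%.3f", r.x, r.y, r.z);
        }];

        // Option 2: call [motionManager startGyroUpdates] (no handler) and poll
        // motionManager.gyroData yourself, e.g. from a CADisplayLink or NSTimer.
    }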
Adding to xs2bush's correct answer: See the documentation links in Simple iPhone motion detect for more information.
Regarding the second point, "moved from some reference point": definitely not. At the moment there is no way at all to determine displacement with acceptable precision. There are several questions and discussions about this, like
Getting displacement from accelerometer data with Core Motion or
Measuring time the vehicle takes to accelerate in iPhone (I don't believe the 3% ;-)
I saw at least 6 apps in the App Store that take photos when they detect motion (i.e. a kind of spy stuff). Does anybody know the general way to do such a thing using the iPhone SDK?
I guess these apps take a photo every X seconds and compare the current image with the previous one to determine if there is any difference (read: "motion"). Any better ideas?
Thank you!
You could probably also use the microphone to detect noise. That's actually how many security system motion detectors work - but they listen in on ultrasonic sound waves. The success of this greatly depends on the iPhone's mic sensitivity and what sort of API access you have to the signal. If the mic's not sensitive enough, listening for regular human-hearing-range noise might be good enough for your needs (although this isn't "true" motion-detection).
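If you want to try the microphone idea, a rough sketch follows (my own example, assuming AVFoundation's metering gives you enough access to the signal; the class name and the -30 dB threshold are arbitrary). It records to /dev/null and periodically checks the average power:

    #import <AVFoundation/AVFoundation.h>

    @interface NoiseDetector : NSObject
    @property (nonatomic, strong) AVAudioRecorder *recorder;
    @property (nonatomic, strong) NSTimer *timer;
    @end

    @implementation NoiseDetector

    - (void)start
    {
        // Recording to /dev/null keeps metering alive without writing a real file.
        NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
        NSDictionary *settings = @{ AVFormatIDKey: @(kAudioFormatAppleLossless),
                                    AVSampleRateKey: @44100.0,
                                    AVNumberOfChannelsKey: @1 };
        self.recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:NULL];
        self.recorder.meteringEnabled = YES;
        [self.recorder record];

        self.timer = [NSTimer scheduledTimerWithTimeInterval:0.5
                                                      target:self
                                                    selector:@selector(checkLevel)
                                                    userInfo:nil
                                                     repeats:YES];
    }

    - (void)checkLevel
    {
        [self.recorder updateMeters];
        float power = [self.recorder averagePowerForChannel:0]; // dBFS, 0 is the maximum
        if (power > -30.0f) {
            NSLog(@"Noise detected (%.1f dB) - trigger the camera here", power);
        }
    }

    @end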
As for images - look into using some sort of string-edit-distance algorithm, but for images. Something that takes a picture every X amount of time, and compares it to the previous image taken. If the images are too different (edit distance too big), then the alarm sounds. This will account for slow changes in daylight, and will probably work better than taking a single reference image at the beginning of the surveillance period and then comparing all other images to that.
If you combine these two methods (image and sound), it may get you what you need.
You could have the phone detect light changes using the sensor at the top front of the phone. I just don't know how you would access that part of the phone.
I think you've about got it figured out: the phone probably keeps images where the delta between image B and image A is over some predefined threshold.
You'd have to find an image library written in Objective-C in order to do the analysis.
I have built this kind of application. I wrote a library for Delphi 10 years ago, but the analysis is the same.
The idea is to divide the whole frame into a matrix, e.g. 25x25 cells, and compute an average color for each cell. After that, compare the R, G, B, H, S, V values of the average color from one picture to the next; if the difference is more than a set threshold, you have motion.
In my application I use a fragment shader to show the motion in real time. Any questions, feel free to ask ;)
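Here is a rough sketch of the grid-averaging comparison described above (my own illustration, not the answerer's library). It assumes both frames are equally sized RGBA buffers and only compares the R, G, B averages; the grid size and threshold are arbitrary:

    #import <Foundation/Foundation.h>
    #include <math.h>

    static BOOL FrameHasMotion(const uint8_t *prev, const uint8_t *curr,
                               size_t width, size_t height, size_t bytesPerRow)
    {
        const int grid = 25;          // 25x25 cells, as suggested above
        const double threshold = 20;  // per-channel average difference that counts as motion

        size_t cellW = width / grid, cellH = height / grid;
        if (cellW == 0 || cellH == 0) return NO;

        for (int gy = 0; gy < grid; gy++) {
            for (int gx = 0; gx < grid; gx++) {
                double sumPrev[3] = {0}, sumCurr[3] = {0};
                size_t count = cellW * cellH;

                // Accumulate the R, G, B totals for this cell in both frames.
                for (size_t y = gy * cellH; y < (gy + 1) * cellH; y++) {
                    for (size_t x = gx * cellW; x < (gx + 1) * cellW; x++) {
                        const uint8_t *p = prev + y * bytesPerRow + x * 4;
                        const uint8_t *c = curr + y * bytesPerRow + x * 4;
                        for (int ch = 0; ch < 3; ch++) {
                            sumPrev[ch] += p[ch];
                            sumCurr[ch] += c[ch];
                        }
                    }
                }
                // Compare the average color of this cell across the two frames.
                for (int ch = 0; ch < 3; ch++) {
                    double diff = fabs(sumPrev[ch] / count - sumCurr[ch] / count);
                    if (diff > threshold) return YES; // enough change in one cell = motion
                }
            }
        }
        return NO;
    }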
Okay, I have an app that tells me the color of the pixel I touched by reading the screen (like a screenshot) after each touch. To retrieve the pixels, I use a method similar to the one appearing here. But it seems that after each touch, the image data is still being held on to (not to mention the hundreds of unwanted screenshots being saved to my photo album), and I start getting memory warnings shortly before the app finally crashes. My app starts out at 3.5 MB, but after each touch this figure increases until it is at about 100 MB, after which the app crashes.
QUESTION:
How do I free this data after each touch?
(Here is the link again for Source)
The provided code frees all its buffers. The memory leak must be elsewhere.
If you want to use a more streamlined way of reading one pixel's color, you could consider the approach suggested in this answer. The idea is to use a very small buffer and draw the view with a transform that shifts the pixel into the range covered by the context.
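For reference, a hedged sketch of that "tiny buffer + shift" idea (my own code, not the linked answer's): render the view into a 1x1 bitmap context whose transform has been translated so the pixel of interest lands on the single allocated pixel. Since the buffer lives on the stack, there is nothing to leak:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    static UIColor *ColorOfPixelInView(UIView *view, CGPoint point)
    {
        unsigned char pixel[4] = {0};   // one RGBA pixel, on the stack

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        if (context == NULL) {
            return nil;
        }

        // Shift the context so that `point` maps onto the single pixel we allocated.
        CGContextTranslateCTM(context, -point.x, -point.y);
        [view.layer renderInContext:context];
        CGContextRelease(context);

        // Assumes opaque content; with premultiplied alpha, transparent pixels
        // would need their RGB values divided by the alpha component.
        return [UIColor colorWithRed:pixel[0] / 255.0
                               green:pixel[1] / 255.0
                                blue:pixel[2] / 255.0
                               alpha:pixel[3] / 255.0];
    }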