Just wanted to ask if there is any advantage to using mouse click events or touch tap events when writing apps for mobiles or tablets (the iPhone especially)?
I know that both of them should work fine, but in terms of performance, is either one better? Are there any things I should be aware of when choosing one or the other?
By the way, I am using ActionScript 3 to implement the app.
This is probably the best documentation on Adobe AIR touch support:
http://help.adobe.com/en_US/as3/dev/WSb2ba3b1aad8a27b0-6ffb37601221e58cc29-8000.html
Midway through that page it states:
Note: Listening for touch and gesture events can consume a significant amount of processing resources (equivalent to rendering several frames per second), depending on the computing device and operating system. It is often better to use mouse events when you do not actually need the extra functionality provided by touch or gestures.
The only benefit of touch, I would think, would be multi-touch. The TouchEvent has a touchPointID which allows you to track the movement of each touch point. If you don't care about multi-touch, it sounds like Mouse Events would be the way to go.
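If you do want the multi-touch route, here is a minimal ActionScript 3 sketch of tracking points by touchPointID (canvas is just a placeholder for whatever interactive display object you listen on):

    import flash.events.TouchEvent;
    import flash.geom.Point;
    import flash.ui.Multitouch;
    import flash.ui.MultitouchInputMode;

    // Switch from the default gesture mode to raw touch-point events.
    Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

    // Map of touchPointID -> last known stage position.
    var activeTouches:Object = {};

    canvas.addEventListener(TouchEvent.TOUCH_BEGIN, function(e:TouchEvent):void {
        activeTouches[e.touchPointID] = new Point(e.stageX, e.stageY);
    });
    canvas.addEventListener(TouchEvent.TOUCH_MOVE, function(e:TouchEvent):void {
        activeTouches[e.touchPointID] = new Point(e.stageX, e.stageY);
    });
    canvas.addEventListener(TouchEvent.TOUCH_END, function(e:TouchEvent):void {
        delete activeTouches[e.touchPointID];
    });

Each finger keeps the same touchPointID from TOUCH_BEGIN to TOUCH_END, which is what lets you tell the movements apart.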
Excellent question! Tap events are "technically" slower, as they monitor multiple input points. If you're only concerned with a single touch input, the standard mouse event system is just fine. For touch events, there are a couple of objects created per listener to assist in handling the multitouch functionality (though this amounts to a tiny fraction of a millisecond in lost performance).
I think that TouchEvent works better than MouseEvent when implementing the app on tablets. I have tried it many times; you can test it yourself.
I'm building an OpenVR app for SteamVR to assist with seated play (my room is small so my tracking area isn't ideal). My app pretty much just adjusts the play-area height when I hold the grip button and "scroll" on the touchpad, so that I can reach objects that are too low/high at variable heights. (I tried "OpenVR Advanced Settings", but its options for keybinding are limited to simple button presses, so I decided to make my own version.)
I'd like to prevent touchpad input from being sent to the game while the grip button is held, so that moving on the touchpad doesn't cause movement in-game. Is this possible at all?
I'm assuming it's not possible, but wondering whether anyone has had any experience with this.
After your clarification in the comments, the answer is no: you cannot "eat up" device inputs in an application. I usually work on OpenVR drivers, and there, after you submit a device input and/or any other event, it is available to anything that expects pose update events; event subscribers cannot stop others from receiving those events.
However, there might be a workaround (if it's still an issue). I know of at least one application that can do what you want, and that application is OVR Toolkit: when the overlay is active and you try to click something in the overlay, the game running in parallel will not receive the input. However, that only happens if the OVR Toolkit overlay surface receives the input. It may be a built-in OpenVR overlay feature that you get without doing anything, or it may have to be defined by the developer; I don't really want to test this right now.
Sadly, OVR Toolkit is not open source, but there is an open-source toolkit for Unity for making overlays which might be the solution you're looking for; it can be found here.
I've been searching for a little while to investigate whether or not our Gtk+ 3 application (Gtk#, actually, but bound to Gtk+ 3) will be easily convertible to receive touch events rather than mouse clicks. This is pure investigation at the moment since I don't actually have a touch screen yet (that's also the reason why I can't easily test my theories).
As far as I can tell, Gnome 3 can indeed do this, and the documentation that seems relevant is the gesture stuff, found here.
I've not been able to find any sample Gtk+ code, so I'm hoping it's a simple matter of instantiating one of the gesture classes and setting it up to deliver events to the main event loop.
Is there anyone with experience with Gtk gestures who can guide me through a simple example? Let's say I have a DrawingArea and I want to receive a very simple, single-touch event that gives me the point within the area that was touched.
Once I have that, I should be able to build on it to handle swipes, pinches and so on.
You cannot inject pointer events as touch events: they are fundamentally different, and they interact with the gesture recognition state machine that lives inside GTK.
GTK has the ability (for debugging purposes) to emulate touch events via pointer, though obviously it cannot emulate multi-touch events because a pointing device only has one event sequence. If you have a recent version of GTK 3.x, you can use the GTK_TEST_TOUCHSCREEN environment variable set to a non-zero value.
If you want to know how to use GtkGesture implementations in your own widgets, then I suggest you look at the GtkGesture API reference, and look at the various types of gestures available, like GtkGestureSwipe or GtkGestureZoom. You're supposed to add GtkGesture instances to your widget when constructing it, and then catch the signals on the specific gesture when they are recognised.
Hey, I was wondering if there is an event or something that can check whether my user is holding down a button? (For iPhone/iOS.)
The event class is TouchEvent. You'll need to toggle the touch state in a variable if you want to track that someone is still pressing down after the event has fired.
You can use MouseEvent if you only need a single touch point, though you still need the variable.
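A rough sketch of that pattern (myButton and isHolding are just placeholder names, not anything from your project):

    import flash.events.MouseEvent;

    var isHolding:Boolean = false;

    myButton.addEventListener(MouseEvent.MOUSE_DOWN, function(e:MouseEvent):void {
        isHolding = true;  // finger went down on the button
    });
    myButton.addEventListener(MouseEvent.MOUSE_UP, function(e:MouseEvent):void {
        isHolding = false; // finger released
    });

    // Check isHolding wherever you need it, e.g. in an ENTER_FRAME handler,
    // to keep reacting for as long as the button is held.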
You need to set Multitouch.inputMode to MultitouchInputMode.TOUCH_POINT in order to turn on touch events on the iPhone. Unfortunately, you can only test touch/gesture events on the iPhone itself. (MouseEvent and TouchEvent are virtually identical in use: MOUSE_DOWN = TOUCH_BEGIN, MOUSE_MOVE = TOUCH_MOVE, MOUSE_UP = TOUCH_END. The main difference is that you can only have one "mouse" yet multiple touches.)
Personally, I use MouseEvent to test on the machine, with an if-then setting up TouchEvent listeners if Multitouch.supportsTouchEvents is true (which it only is on the iPhone/Android).
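A sketch of that if-then setup (myButton, onDown and onUp are placeholder names used only for the example):

    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.events.TouchEvent;
    import flash.ui.Multitouch;
    import flash.ui.MultitouchInputMode;

    if (Multitouch.supportsTouchEvents) {
        // On the device: switch to touch-point mode and listen for touch events.
        Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;
        myButton.addEventListener(TouchEvent.TOUCH_BEGIN, onDown);
        myButton.addEventListener(TouchEvent.TOUCH_END, onUp);
    } else {
        // Testing on the desktop: fall back to the equivalent mouse events.
        myButton.addEventListener(MouseEvent.MOUSE_DOWN, onDown);
        myButton.addEventListener(MouseEvent.MOUSE_UP, onUp);
    }

    function onDown(e:Event):void { /* pressed */ }
    function onUp(e:Event):void { /* released */ }

Both handlers take the base Event type so the same functions work for either listener set.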
If the button is a Cocoa Touch UIButton, you can easily connect a number of its touch events from Interface Builder to IBAction methods in your Xcode code (where you write the appropriate handlers).
How your project would connect to flash, I am not sure. Give us more details and perhaps someone can better help. ;)
I have developed a test for iPod/iPhone (with MonoTouch if that is relevant) that measures reaction time. But I need to take into consideration the time between touching the screen and actual triggering of the button event. Is there any documentation of that?
It's already very hard, verging on impossible, to get predictable interrupt latency even on real-time operating systems.
But on the iPhone? Imho, impossible. A capacitive touchscreen is not ideal for getting results that are exactly the same for every body and touch location. And if Mail.app decides to poll for email just at the moment you touch the screen, there will be a bigger delay.
But to make one thing clear, we are speaking about a few microseconds, or even less than that.
If you want accurate results, you shouldn't use an iPhone. But I guess your app will be some kind of game, so nobody cares if your result is 0.01 seconds off. I wouldn't show results as 0.381829191 seconds, though; that fakes an accuracy you'll never get on any smartphone.
What is the lowest reaction time you got in your app?
The time between an actual touch and the system registering it will be negligible.
One key thing: if you are detecting the press using control events like Touch Up Inside, consider using the Touch Down event instead, because Touch Up Inside will not fire until the user's finger leaves the screen.
I have an OpenGL application which is rendering intensive and also fetches stuff over HTTP.
Following Apple's samples for OpenGL, I originally used NSTimer for my main painting loop, before finding out (like everyone else) that it really isn't a good idea (because you sometimes have huge delays on touch events being processed when slow paints cause paint timers to pile up).
So currently I am using the strategy given by user godexsoft at http://www.idevgames.com/forum/showthread.php?t=17058 (search for the post by godexsoft). Specifically, my render run loop is on the main thread and contains this:
    // Pump the run loop so pending events (touches, comms callbacks) are
    // dispatched in between rendered frames.
    while (CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.01f, FALSE) ==
           kCFRunLoopRunHandledSource);
This line allows events like touch events and things related to comms to happen in between rendering of frames. It works, but I'd like to refine it further. Is there any way I can give priority to the touch events over other events (like comms related stuff)? If I could do that, I could reduce the 0.01f number and get a better frame rate. (I'm aware that this might mean comms would take longer to get back, but the touch events not lagging is pretty important).
This doesn't directly answer your question, but have you considered using CADisplayLink for scheduling redraws? It was introduced in iPhone OS 3.1.