Detect touchpad events in C++ Builder - callback

I am writing an API in C++ Builder that collects information about events on the touchpad of a Windows laptop.
This is how I was doing it: I was creating a window, and when the touchpad was touched, I simply painted that information onto the window in its WM_PAINT handler.
But now I don't want to create that window (form); I want to catch all the events even when the user is on the desktop or in another application's window. If an application that uses my API is running in the background, I still want to receive the touch event information in code. How can I do that?
I hope you see my point: I want to do this seamlessly, because otherwise that white form window will irritate the user.
I also want to save these events in a linked list and return that list from the API. Is that possible?
I will be very thankful for your time; I really need to get this working in the next few hours.

The touchpad is just a mouse like any other; it generates standard mouse events. Install a global WH_MOUSE hook via SetWindowsHookEx() to capture mouse events system-wide. To replay them, use mouse_event(). Alternatively, use the WH_JOURNALRECORD and WH_JOURNALPLAYBACK hooks for capture and playback, respectively.
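For the global-capture part of the question, here is a minimal sketch in plain Win32 C++ (nothing C++Builder-specific), assuming the low-level WH_MOUSE_LL variant, which, unlike a system-wide WH_MOUSE hook, does not require a separate hook DLL. It appends every mouse/touchpad event to a std::list that the API could hand back to the caller; the names InstallMouseHook, RemoveMouseHook and g_events are purely illustrative.

    #include <windows.h>
    #include <list>

    struct MouseRecord {
        POINT pt;        // screen coordinates of the event
        UINT  message;   // WM_MOUSEMOVE, WM_LBUTTONDOWN, ...
        DWORD time;      // event timestamp in milliseconds
    };

    static std::list<MouseRecord> g_events;  // collected events, returned by the API
    static HHOOK g_hook = NULL;

    static LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
    {
        if (nCode == HC_ACTION) {
            const MSLLHOOKSTRUCT *info =
                reinterpret_cast<const MSLLHOOKSTRUCT *>(lParam);
            MouseRecord rec = { info->pt, static_cast<UINT>(wParam), info->time };
            g_events.push_back(rec);         // no window or WM_PAINT needed
        }
        // Pass the event on so the rest of the system still sees it.
        return CallNextHookEx(g_hook, nCode, wParam, lParam);
    }

    bool InstallMouseHook()
    {
        g_hook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc,
                                  GetModuleHandle(NULL), 0);
        return g_hook != NULL;
    }

    void RemoveMouseHook()
    {
        if (g_hook) { UnhookWindowsHookEx(g_hook); g_hook = NULL; }
    }

Call InstallMouseHook() once when the API starts; the hooking thread must pump messages (a normal VCL message loop is enough) for the callback to be delivered, and no window has to be shown. The recorded g_events list can then be exposed through whatever exported function the API offers. Because CallNextHookEx() forwards everything, the capture stays invisible to the user.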

Related

Capturing game controller (not keyboard) input in Unity when the application is in the background?

In a nutshell, my goal is to create my own program for visually looking at game controller input.
However, it appears that once the Unity application is in the background (ergo, once I switch to a different window to play a game), the controller input for my Unity program is no longer read.
I've seen that the user Elringus had a project called UnityRawInput to help with this, but it only works for keyboard input. I've also seen a lot of different responses related to this question that mention using native libraries to "hook" into background input, but that apparently only works for keyboards as well.
What information is out there on hooking into game controllers themselves? I can't seem to find anything that pertains to game controllers (and their various axes) rather than keyboards.
Why not a keyboard? Because I am very interested in a game controller's triggers: when pressed slightly they can perform "weaker" actions rather than being pressed all the way, which a keyboard cannot discern as far as I can tell.
Though, wherever the solution might be, even outside of Unity, I'd like to know. Thanks! :D

Intercept Vive controller input?

I'm building an OpenVR app for SteamVR to assist with seated play (my room is small, so my tracking area isn't ideal). My app pretty much just adjusts the play-area height while I hold the grip button and "scroll" on the touchpad, so that I can reach objects that are too low/high at variable heights. (I tried "OpenVR Advanced Settings", but its keybinding options are limited to simple button presses, so I decided to make my own version.)
I'd like to prevent touchpad input from being sent to the game while the grip button is being held, so that moving on the touchpad doesn't cause movement in-game. Is this possible at all?
I'm assuming it's not possible, but wondering whether anyone has had any experience with this.
After your clarification in the comments, the answer is no: you cannot "eat up" device inputs in an application. I usually work on OpenVR drivers, and there, once you submit a device input or any other event, it is available to anything that expects pose-update events; event subscribers cannot stop others from receiving those events.
However, there might be a workaround (if it's still an issue). I know of at least one application that can do what you want: OVR Toolkit. When its overlay is active and you try to click something in the overlay, the game running in parallel does not receive the input, but that only happens while the OVR Toolkit overlay surface is receiving input. It may be a built-in OpenVR overlay feature that requires nothing on your part, or it may have to be implemented by the developer; I don't really want to test this right now.
Sadly, OVR Toolkit is not open source, but there is an open-source toolkit for making Unity overlays that might be the solution you're looking for; it can be found here.
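As a hedged illustration of that overlay route (this is not OVR Toolkit's actual code), the sketch below uses the stock OpenVR C++ API to create an overlay that takes laser-pointer "mouse" input and polls the events it receives. Whether an active overlay also withholds the touchpad input from the game is exactly the part I could not verify, so treat that as the assumption to test; the overlay key/name and the omitted texture/transform setup are placeholders.

    #include <openvr.h>
    #include <cstdio>

    int main()
    {
        vr::EVRInitError initError = vr::VRInitError_None;
        vr::VR_Init(&initError, vr::VRApplication_Overlay);
        if (initError != vr::VRInitError_None)
            return 1;

        vr::VROverlayHandle_t overlay = vr::k_ulOverlayHandleInvalid;
        vr::VROverlay()->CreateOverlay("example.playarea.helper",   // key (made up)
                                       "Play Area Helper", &overlay);
        vr::VROverlay()->SetOverlayWidthInMeters(overlay, 0.3f);
        vr::VROverlay()->SetOverlayInputMethod(overlay, vr::VROverlayInputMethod_Mouse);
        vr::VROverlay()->ShowOverlay(overlay);
        // A real overlay also needs a texture and a transform; omitted here.

        bool keepRunning = true;             // flip this from your own exit logic
        while (keepRunning) {
            vr::VREvent_t ev;
            // Events delivered here go to the overlay, not to the running game.
            while (vr::VROverlay()->PollNextOverlayEvent(overlay, &ev, sizeof(ev))) {
                if (ev.eventType == vr::VREvent_MouseButtonDown)
                    std::printf("overlay click at %.2f, %.2f\n",
                                ev.data.mouse.x, ev.data.mouse.y);
            }
            // ... sleep / render the overlay here ...
        }

        vr::VR_Shutdown();
        return 0;
    }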

How do you develop simple code for Gtk+ 3 touch input?

I've been searching for a little while to investigate whether or not our Gtk+ 3 application (Gtk#, actually, but bound to Gtk+ 3) will be easily convertible to receive touch events rather than mouse clicks. This is pure investigation at the moment since I don't actually have a touch screen yet (that's also the reason why I can't easily test my theories).
As far as I can tell, GNOME 3 can indeed do this, and the documentation that seems relevant is the gesture stuff, found here.
I've not been able to find any sample Gtk+ code so I'm hoping it's a simple matter of instantiating one of the gesture classes and simply setting it up to deliver events to the main event loop.
Is there anyone with experience with Gtk gestures that can guide me in a simple example? Let's say I have a DrawableArea and I want to receive a very simple, single-touch event that gives me an event with the point within the area that was touched.
Once I have that, I should be able to build on it to handle swipes, pinches and so on.
You cannot inject pointer events as touch events: they are fundamentally different, and they interact with the gesture recognition state machine that lives inside GTK.
GTK has the ability (for debugging purposes) to emulate touch events via pointer, though obviously it cannot emulate multi-touch events because a pointing device only has one event sequence. If you have a recent version of GTK 3.x, you can use the GTK_TEST_TOUCHSCREEN environment variable set to a non-zero value.
If you want to know how to use GtkGesture implementations in your own widgets, then I suggest you look at the GtkGesture API reference, and look at the various types of gestures available, like GtkGestureSwipe or GtkGestureZoom. You're supposed to add GtkGesture instances to your widget when constructing it, and then catch the signals on the specific gesture when they are recognised.
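For reference, below is a minimal sketch of that pattern against the GTK+ 3 C API (Gtk# mirrors it closely, and the snippet compiles as C or C++): a GtkGestureMultiPress attached to a GtkDrawingArea that reports the coordinates of a single press. The explicit event mask and the touch-only flag are assumptions on my part; depending on your GTK 3.x version the gesture may already select the events it needs.

    #include <gtk/gtk.h>

    /* "pressed" is emitted with widget-relative coordinates. */
    static void on_pressed(GtkGestureMultiPress *gesture, gint n_press,
                           gdouble x, gdouble y, gpointer user_data)
    {
        g_print("press %d at (%.1f, %.1f)\n", n_press, x, y);
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *area   = gtk_drawing_area_new();
        gtk_container_add(GTK_CONTAINER(window), area);

        /* Make sure the drawing area selects button and touch events. */
        gtk_widget_add_events(area, GDK_BUTTON_PRESS_MASK |
                                    GDK_BUTTON_RELEASE_MASK |
                                    GDK_TOUCH_MASK);

        /* The gesture is created against the widget; connect to the gesture's
         * signals, not the widget's. */
        GtkGesture *press = gtk_gesture_multi_press_new(area);
        gtk_gesture_single_set_touch_only(GTK_GESTURE_SINGLE(press), TRUE);
        g_signal_connect(press, "pressed", G_CALLBACK(on_pressed), NULL);

        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }

With touch-only set to TRUE and no touchscreen attached, you can still exercise the handler by running the program with GTK_TEST_TOUCHSCREEN=1, as described above.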

iPhone HARDCODING a swipe

Basically, we either have remote access to the iPhone or the phone is connected to a network where we can control the phone (send it messages etc.) How can I simulate a swipe without touching the actual phone? I know there are Swipe Recognizers, but I haven't found a way to HARDCODE coordinates to simulate a swipe; for example, without touching the phone, perform the swipe to unlock.
A swipe is input. You'd normally recognize the swipe, either with a gesture recognizer or by handling the touch directly, and then perform some sort of action. If you want to simulate a swipe, just perform the action that would be performed if the user made the equivalent gesture.
For example, if a swipe would normally switch to a different view, simply call the method that switches to that view. If possible, do it with animation so that the user has some visual indication of what's going on.
I'm not entirely sure that what you are trying to do is possible, unless maybe with a script in the Automation instrument. However, if the iPhone is jailbroken, you could install Veency and connect the phone through any VNC client and interact with it that way.

Flash cs5 iOS holding down?

Hey, I was wondering if there is an event or something that can check whether my user is holding down a button (for iPhone/iOS)?
The event class is TouchEvent. You'll need to track the touch state in a variable if you want to know that someone is still pressing down after the event has fired.
You can use MouseEvent if you only need a single touch point, though you still need the variable.
You need to set Multitouch.inputMode to MultitouchInputMode.TOUCH_POINT in order to turn on touch events on the iPhone. Unfortunately, you can only test touch/gesture events on the iPhone itself. (MouseEvent and TouchEvent are virtually identical in use: MOUSE_DOWN = TOUCH_BEGIN, MOUSE_MOVE = TOUCH_MOVE, MOUSE_UP = TOUCH_END. The main difference is that you can only have one "mouse" yet multiple touches.)
Personally, I use MouseEvent to test on the machine, with an if-then that sets up TouchEvent listeners if Multitouch.supportsTouchEvents is true (which it only is on the iPhone/Android).
If the button is a Cocoa Touch UIButton, you can easily connect a number of its gesture events from Interface Builder to IBActions in your Xcode code (where you write the appropriate methods).
How your project would connect to Flash, I am not sure. Give us more details and perhaps someone can better help. ;)