I am coding a game on a touchscreen with many players at the same time. The issue is that when there are 2 or more touches, a little square appears on the screen. It seems to be a Unity built-in feature, as it is still present in an empty project.
Is there a way to prevent this annoying little square from appearing? I have already disabled the magic touch shortcuts in Windows, and the square doesn't appear on the Windows desktop itself.
I am able to listen to the touches, so it seems to be purely a visual thing.
Even when I disable multitouch with Input.multiTouchEnabled = false;, the square still appears.
I also tried removing the 18 default axes in the Input Manager.
My goal is to handle every touch separately, without listening for pinch, long-press, or scroll interactions; each player only has to tap somewhere on the screen.
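For reference, here is roughly how I read the touches; HandleTap is just a placeholder for whatever a player's tap should trigger (a minimal sketch, not my actual code):

using UnityEngine;

public class MultiTouchTaps : MonoBehaviour
{
    void Update()
    {
        // Each finger is reported as its own Touch with a stable fingerId,
        // so every player can be handled independently.
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            if (touch.phase == TouchPhase.Began)
            {
                HandleTap(touch.fingerId, touch.position);
            }
        }
    }

    // Placeholder: react to a single player's tap.
    void HandleTap(int fingerId, Vector2 screenPosition)
    {
        Debug.Log("Player touch " + fingerId + " tapped at " + screenPosition);
    }
}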
Thanks for your time.
Solved it by myself: I completely disabled touch feedback in the Windows settings. I don't think it is the only way to do it, but it works.
Control Panel > Pen and Touch
Uncheck "Show visual feedback when touching the screen"
Related
A bit of background: I recently implemented a drag-and-drop behavior in my app, where I can drag items from e.g. the Finder into my NSTableView. Now I want to write a few UI tests for this new functionality.
The general idea was to move the Finder window to the left side of the screen and my application window to the right side, and then execute the drag and drop. The drag and drop itself is not the problem; the problem is setting up the mentioned window layout. I cannot find a convenient way to resize and move the two windows. Coming from .NET, I expected something like app.window.setSize(..) or app.window.moveTo(...).
What I tried so far:
As I have Magnet installed on my Mac, I tried the easy way out and sent key events (control + option + arrow) to the window. This did not work; sending the keystrokes results in an error beep. Doing the same thing manually during the tests works, so I don't know what exactly stops Magnet from rearranging the windows, but I guess it has something to do with the testing framework. I did not dig deeper into this, as it would have been a cheap solution anyway.
Drag the app window's corners based on the screen dimensions, e.g. for the window on the left I drag the corners to the top left, bottom left, top middle, and bottom middle of the screen. This requires that all four corners are visible on screen, but that's a problem for another day. This approach would normally work, but the y-coordinates I get from the frame of my app window are not what I expected. I read the location of the app window with app.windows.firstMatch.frame.origin; the x-coordinates look alright, but the y-coordinates are totally off (from what I expected).
I can't find many resources regarding the origin or frame members. Any idea how to approach this problem, or where to find documentation about the XCUITest framework and the basic concepts behind it? The official documentation doesn't help in this case. I only found this short explanation in the Apple documentation archive about the coordinate system of macOS (or OS X back then) applications.
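A likely explanation for the "off" y values is a flipped y-axis: AppKit reports window frames in a bottom-left-origin screen space, while XCUITest frames appear to use a top-left origin (an assumption worth verifying). A minimal sketch of the conversion and of logging what the framework reports:

import XCTest
import CoreGraphics

final class WindowFrameSanityTests: XCTestCase {

    // Assumption: XCUIElement.frame reports screen coordinates with a
    // top-left origin, while AppKit uses a bottom-left origin, so a frame
    // coming from AppKit has to be flipped against the screen height.
    private func flipped(_ frame: CGRect, screenHeight: CGFloat) -> CGRect {
        return CGRect(x: frame.origin.x,
                      y: screenHeight - frame.origin.y - frame.height,
                      width: frame.width,
                      height: frame.height)
    }

    func testLogWindowFrame() {
        let app = XCUIApplication()
        app.launch()

        // Log what XCUITest reports so it can be compared with the values
        // the app itself sees via NSWindow/NSScreen.
        let frame = app.windows.firstMatch.frame
        print("reported frame:", frame)
        // 1080 is just a placeholder screen height for the comparison.
        print("flipped:", flipped(frame, screenHeight: 1080))
    }
}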
I'm working with Mapbox GL JS, version 1.7.0. I have tied the showing and hiding of a crosshairs div on my page to the zoomstart and zoomend events.
map.on('zoomstart', function (e) {
    $("#crosshairs-container").show();
    console.log("Zooming started...");
});

map.on('zoomend', function (e) {
    $("#crosshairs-container").hide();
    console.log("Zooming finished...");
    var zoomLevel = map.getZoom();
    renderMap();
});
The problem is: the referenced div doesn't appear and disappear reliably and consistently.
On desktop (Linux/Chrome): the mouse wheel's individual "incremental jumps" (most mice have them, if not all of them nowadays) sometimes trigger the zoomend event and sometimes don't.
This means that sometimes zooming finishes after one "wobble" of the mouse wheel, even though I'm still zooming by continuing to turn the wheel. Other times the zooming continues as I keep turning the wheel, which is the behaviour I'd expect.
On mobile (Android/Chrome): similar behaviour, though here the crosshairs overlay pretty much disappears completely while zooming (in or out).
I have observed that, when zooming in and out several times over the same part of the map (which has already loaded and added its layers) while trying to reproduce this behaviour, the desktop seems to "find its groove", so to speak.
My question is: since it's rather unlikely that I have used the wrong events, I don't think this is a coding issue, so has anyone else seen this behaviour? Is this a hardware issue, in the sense that the pinch zoom on mobile fires things "intermittently", as does an incrementally stepping mouse wheel? So what looks and feels continuous to the user is actually, behind the scenes, many individually triggered events "stitched together"?
Incidentally, the drag events work flawlessly on desktop & mobile.
I solved this issue by switching from the zoomstart and zoomend events to the movestart and moveend events. This seems to avoid all the problems, including having to debounce or throttle the event handlers.
Thanks to @Steve Bennett for the tip.
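For reference, a minimal version of that change, reusing the crosshairs div and the renderMap() helper from the question:

map.on('movestart', function () {
    $("#crosshairs-container").show();
});

map.on('moveend', function () {
    $("#crosshairs-container").hide();
    renderMap(); // re-render once the camera has settled
});

Because moveend fires once the camera stops after any movement (pan, zoom, or rotate), one pair of handlers covers both the wheel zoom on desktop and the pinch zoom on mobile.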
I've recently created a 2D app for the HoloLens. It is a UI panel with several buttons on it. In order to let the user drag the panel and position it wherever they want, I implemented the HandDraggable.cs functionality (from HoloToolkit). However, whenever I try to move the panel, it also rotates.
To change that, I modified the Rotation Mode from "Default" to "Orient Towards User" and "Orient Towards User and Keep Upright". But then it works even worse: in those modes, whenever I select the panel and drag it somewhere, the panel runs out of my field of view and suddenly disappears.
I wanted to ask if somebody has already tried to implement the HandDraggable option in a HoloLens UI app and knows how to fix this rotation issue.
I'm currently working on HoloLens UI for one of my projects, and to manipulate UI I used the TwoHandManipulatable script, which is built into the MixedRealityToolkit. In that script's Manipulation Mode you can set "Move" as the only option, which allows you to move a menu with two hands as well as with one. (I wanted a menu that you can also rotate and scale, which works perfectly with this script; you can lock the axes around which rotation is enabled, to avoid unwanted manipulation.)
For your HandDraggable script, did you try setting RotationMode to Lock Object Rotation? It sounds like this could solve the problem.
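A minimal sketch of that suggestion, assuming the HoloToolkit build in your project exposes HandDraggable.RotationMode with a nested RotationModeEnum.LockObjectRotation value (names can differ slightly between toolkit releases); the same value can also be picked in the Inspector:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class PanelDragSetup : MonoBehaviour
{
    void Awake()
    {
        // Assumption: this HoloToolkit version ships HandDraggable with a
        // RotationMode field and a LockObjectRotation enum value.
        var draggable = GetComponent<HandDraggable>();
        if (draggable != null)
        {
            draggable.RotationMode = HandDraggable.RotationModeEnum.LockObjectRotation;
        }
    }
}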
There is a simple button in our application which, when pressed, works just as expected, but with a light, quick touch it just greys out and does nothing.
Is there a way to capture those light touches too?
EDIT:
I now see exactly when it doesn't work: when I tap, drag, and release (even if I tap, drag, and never leave the button area).
Try setting EventSystem.pixelDragThreshold to a higher value. This threshold makes the difference between a click and a drag, and its default value is quite low for high-resolution touch systems.
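A minimal sketch of that tweak; scaling the threshold with the screen DPI is a common heuristic, and the 0.2-inch figure below is an arbitrary choice rather than a Unity default:

using UnityEngine;
using UnityEngine.EventSystems;

public class TouchDragThreshold : MonoBehaviour
{
    void Start()
    {
        var eventSystem = EventSystem.current;
        if (eventSystem != null && Screen.dpi > 0)
        {
            // Allow roughly 0.2 inches of finger movement before a press
            // is treated as a drag instead of a click.
            eventSystem.pixelDragThreshold = Mathf.Max(
                eventSystem.pixelDragThreshold,
                (int)(Screen.dpi * 0.2f));
        }
    }
}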
I am currently running Linux Mint 17.2 with Cinnamon. I have 2 monitors.
When I set the monitors to be adjacent in the Cinnamon settings, the mouse moves freely through the border shared between the monitors but cannot escape the visible area.
That is, if I set the monitors to share only a corner, the mouse is effectively locked to the current monitor and can escape to the other only through that corner.
However, setting the monitors to be non-adjacent allows the mouse to roam freely all over the virtual framebuffer, including the invisible areas.
I thought that Cinnamon sets some flag that controls this behavior, but changing the monitor positions using xrandr has the same effect.
It is also the same when I start plain Xorg with nothing but xterm, even without a window manager, and configure the monitors using xrandr.
What exactly stops the mouse from leaving the visible area when all monitors are adjacent? Is there a way to override this behavior?
Being able to control this might be useful, e.g. to stop the mouse from leaving the monitor every time you try to click something near the border, without running a busy loop that watches the mouse and moves it back if needed (and without doubling the framebuffer size by making the monitors adjacent only by a corner).
With more control it could be used to, for example, make the mouse "reluctant" to leave the current window, and maybe do other fun stuff. At the very least it would make it possible to reimplement this so that it can actually lock the mouse to a window for apps like the Chrome browser or OpenGL games, and not just xterm and the like.
Now that I think of it, I may even try to implement it myself, if it isn't done yet and if I find the relevant code.
Okay, I have found the relevant code.
This behavior is hardcoded in the Xorg X server, in the RandR extension, including the visible-area continuity check.
There is definitely nothing configurable. Well, unless you agree with the creator of dwm on what the word "configuration" means :)
I do agree. Right now the relevant code locations are randr/rrpointer.c and randr/rrcrtc.c:332,1685.
It would be nice, though, if someone created a proper X server extension for that.
As you already figured out: if your monitor areas are non-contiguous, it seems that xrandr will allow the pointer to use the entire X11 screen. I just purposely moved the position of one monitor by 1 pixel (the --pos option of xrandr) to free the mouse.
Once the mouse can go everywhere, it should be possible to fence it in with pointer barriers:
http://who-t.blogspot.com/2012/12/whats-new-in-xi-23-pointer-barrier.html
That requires the XFixes extension, version 5 or later, and it can be enhanced with XInput as described in the link, with barrier events and temporary barrier lifts... which is probably not required here.
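A minimal sketch in C, assuming the monitor to keep the pointer on is 1920x1080 at the top-left of the X screen; it creates a vertical barrier along that monitor's right edge (build with something like cc barrier.c -lX11 -lXfixes):

#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int ev, err, major = 5, minor = 0;
    if (!XFixesQueryExtension(dpy, &ev, &err) ||
        !XFixesQueryVersion(dpy, &major, &minor) || major < 5) {
        fprintf(stderr, "XFixes 5+ not available\n");
        return 1;
    }

    /* Assumed geometry: the monitor to stay on is 1920x1080 at +0+0.
       A vertical barrier along x = 1920 with directions = 0 blocks the
       pointer from crossing in either direction. */
    Window root = DefaultRootWindow(dpy);
    PointerBarrier barrier =
        XFixesCreatePointerBarrier(dpy, root, 1920, 0, 1920, 1080, 0, 0, NULL);
    XSync(dpy, False);

    printf("barrier %lu active, press Ctrl+C to quit\n", (unsigned long)barrier);
    pause();  /* the barrier is destroyed when this client disconnects */

    XFixesDestroyPointerBarrier(dpy, barrier);
    XCloseDisplay(dpy);
    return 0;
}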