I am trying to get the mouse position using the Raw Input method. In the RAWMOUSE structure I always get MOUSE_MOVE_RELATIVE as usFlags, which means I am getting movement relative to the last mouse position, but I want the absolute position of the mouse. How do I get the absolute mouse position from Raw Input?
RAWMOUSE contains the exact same values that Windows received from the mouse hardware (it could be a HID USB/Bluetooth mouse, an i8042 PS/2 mouse, etc.). Usually mice send relative movement, but some devices send absolute coordinates, for example touch screens and the RDP mouse (yay!).
So if you need the absolute mouse position (for example, for a game menu), you can use the GetCursorPos API to get the coordinates of the Windows-rendered cursor on the screen.
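A minimal sketch of that call:

#include <windows.h>

POINT pt;
if (GetCursorPos(&pt))
{
    // pt.x and pt.y hold the cursor position in screen coordinates (pixels).
}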
But that's not the same thing as what is sent with the MOUSE_MOVE_ABSOLUTE flag in RAWMOUSE.
MOUSE_MOVE_ABSOLUTE reports the mouse position in the 0..65535 range in device space, not screen space. Here is some code showing how to convert it to screen space:
if ((rawMouse.usFlags & MOUSE_MOVE_ABSOLUTE) == MOUSE_MOVE_ABSOLUTE)
{
    // Absolute coordinates are normalized to 0..65535 across the primary
    // screen, or across the whole virtual desktop (e.g. multi-monitor)
    // if the MOUSE_VIRTUAL_DESKTOP flag is also set.
    bool isVirtualDesktop = (rawMouse.usFlags & MOUSE_VIRTUAL_DESKTOP) == MOUSE_VIRTUAL_DESKTOP;

    int width = GetSystemMetrics(isVirtualDesktop ? SM_CXVIRTUALSCREEN : SM_CXSCREEN);
    int height = GetSystemMetrics(isVirtualDesktop ? SM_CYVIRTUALSCREEN : SM_CYSCREEN);

    int absoluteX = int((rawMouse.lLastX / 65535.0f) * width);
    int absoluteY = int((rawMouse.lLastY / 65535.0f) * height);
}
else if (rawMouse.lLastX != 0 || rawMouse.lLastY != 0)
{
    // Relative movement: lLastX/lLastY are deltas since the last event.
    int relativeX = rawMouse.lLastX;
    int relativeY = rawMouse.lLastY;
}
[Weird to see a 9-year-old unanswered question as a top Google search result, so I'll offer an answer... better late than never!]
At the level of Raw Input API, you are getting the raw horizontal/vertical movement of the mouse device -- what the mouse reports at the hardware level.
According to the docs (I haven't verified) these delta X/Y values are not yet processed by the desktop mouse speed/acceleration/slowdown settings. So even if you started tracking the deltas from a known absolute location, you would quickly drift away from where Windows is positioning the mouse cursor, on-screen.
What's not clear from the docs is what units or scale these relative X/Y values are reported in. It would be nice if they were somehow normalized, but I suspect they depend on the DPI resolution of your mouse. (I will find a mouse with adjustable DPI to test, and report back, if no one edits me first.)
Edit/Update: I got my hands on a mouse with adjustable DPI and did some crude testing, enough to confirm that the rough scale of the lLastX/Y values matches the hardware DPI: e.g. with the mouse in 1200 DPI mode, physically moving it 1 inch from left to right generates a net sum of lLastX values of roughly 1200.
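For anyone who wants to repeat that test, here is a rough sketch of the accumulation inside a WM_INPUT handler (g_totalX is a global LONG of mine, zeroed before each run):

case WM_INPUT:
{
    RAWINPUT raw;
    UINT size = sizeof(raw);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                        sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
        raw.header.dwType == RIM_TYPEMOUSE &&
        !(raw.data.mouse.usFlags & MOUSE_MOVE_ABSOLUTE))
    {
        // Sum the relative deltas; moving the mouse exactly one inch
        // should make the total land near the hardware DPI.
        g_totalX += raw.data.mouse.lLastX;
    }
    break;
}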
I'm looking for a way to get the input intensity of a controller joystick when using an Xbox/PS5 controller in Unity with the old Input Manager. Anything would help; I can't find any resources on this subject. Thanks!
Currently, the only results I'm getting are -1, 0 and 1 using the old Input Manager.
I think you already have your desired output. The method Input.GetAxis() returns a value between -1 and 1, representing the intensity of the joystick input: -1 indicates the joystick is pushed fully to the left, 0 indicates it is centered, and 1 indicates it is pushed fully to the right.
To get the input intensity of a specific axis, such as the horizontal axis of the left joystick on an Xbox controller, you can use code like the following (the axis name must match one defined in your Input Manager):
float leftJoystickIntensity = Input.GetAxis("LeftJoystickX");
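If you are only ever seeing exactly -1, 0 and 1, check how the axis is configured under Edit > Project Settings > Input Manager: an axis mapped to buttons (type "Key or Mouse Button") is digital and can only produce those three values, while an axis mapped to a "Joystick Axis" gives the full analog -1..1 range. A minimal sketch reading both axes of a stick and combining them into one intensity value ("Horizontal" and "Vertical" are placeholders for whatever axis names you defined):

using UnityEngine;

public class StickIntensity : MonoBehaviour
{
    void Update()
    {
        // Both axes must be mapped to "Joystick Axis" entries in the
        // Input Manager to get smooth analog values.
        float x = Input.GetAxis("Horizontal");
        float y = Input.GetAxis("Vertical");

        // Overall stick deflection: 0 when centered, ~1 when fully pushed.
        float intensity = Mathf.Clamp01(new Vector2(x, y).magnitude);
        Debug.Log(intensity);
    }
}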
I am using the following code to export SceneCaptureComponent as a PNG image on the device and I am facing a weird issue.
On Windows and iOS everything works as expected.
On Android the image is upside down, like it is rotated by 180 degrees.
Any hints why?
bool UsaveJPG::SaveImage(class USceneCaptureComponent2D* Target, const FString ImagePath, const FLinearColor ClearColour)
{
    if (Target == nullptr || Target->TextureTarget == nullptr)
    {
        return false;
    }
    FRenderTarget* RenderTarget = Target->TextureTarget->GameThread_GetRenderTargetResource();
    if (RenderTarget == nullptr)
    {
        return false;
    }
    // Only PF_B8G8R8A8 render targets are handled here.
    if (Target->TextureTarget->GetFormat() != PF_B8G8R8A8)
    {
        // TRACEWARN("Format not supported - use PF_B8G8R8A8.");
        return false;
    }
    TArray<FColor> RawPixels;
    if (!RenderTarget->ReadPixels(RawPixels))
    {
        return false;
    }
    // The render target is BGRA, so swap red/blue to get RGBA for the wrapper.
    FColor ClearFColour = ClearColour.ToFColor(false); // FIXME - want sRGB or not?
    for (auto& Pixel : RawPixels)
    {
        const uint8 PR = Pixel.R;
        const uint8 PB = Pixel.B;
        Pixel.R = PB;
        Pixel.B = PR;
        // Make pixels matching the clear colour transparent, everything else
        // opaque. (Note: this comparison runs on the already-swapped channels.)
        Pixel.A = ((Pixel.R == ClearFColour.R) && (Pixel.G == ClearFColour.G) && (Pixel.B == ClearFColour.B)) ? 0 : 255;
    }
    // The image wrapper module must be loaded before use.
    IImageWrapperModule& ImageWrapperModule = FModuleManager::LoadModuleChecked<IImageWrapperModule>("ImageWrapper");
    TSharedPtr<IImageWrapper> ImageWrapper = ImageWrapperModule.CreateImageWrapper(EImageFormat::PNG);
    const int32 Width = Target->TextureTarget->SizeX;
    const int32 Height = Target->TextureTarget->SizeY;
    if (ImageWrapper.IsValid() && ImageWrapper->SetRaw(RawPixels.GetData(), RawPixels.Num() * sizeof(FColor), Width, Height, ERGBFormat::RGBA, 8))
    {
        FFileHelper::SaveArrayToFile(ImageWrapper->GetCompressed(), *ImagePath);
        return true;
    }
    return false;
}
It seems as if you aren't the only one having this issue.
Questions with answers (quoting the relevant parts):
"I believe the editor mobile preview provides "correct" results, while the mobile device produces upside down results. I ran into this issue when creating a post process volume to simulate a camera partially submerged in water with mathematically calculated wave displacement. I can convert the screen position to world position using a custom material node with the code: float2 ViewPos = (ScreenPos.xy - View.ScreenPositionScaleBias.wz) / View.ScreenPositionScaleBias.xy * 10; float4 Pos = mul(float4(ViewPos, 10, 1), [Link Removed]); return Pos.xyz / Pos.w; This works as expected in the editor mobile preview, but the results are weirdly flipped around when launching to a mobile device. Manually inverting the Y-coordinate of the UV by subtracting the Y-component from 1 will correct the ViewportUV output, but the world space calculation still doesn't work (probably because [Link Removed] and [Link Removed]need to also be inverted). Strangely, if I feed the inverted UV coordinates into SceneTexture:PostProcessInput0, the entire scene flips upside down when running on a mobile device. This means that the ScreenPosition's ViewportUV is not the same as the SceneTexture UV. As a result, any post process effects that rely on the ViewportUV do not work correctly when running on mobile devices."
https://unreal-engine-issues.herokuapp.com/issue/UE-63442
I'm having trouble getting the tilt function to work with my Android phone. It seems that the tilt values are upside down, because when I hold the phone flat on the table with the screen up, the tilt values snap from values like x=-3, y=0, z=-3 to x=3, y=0, z=3. When I hold the phone up with the screen facing down, the values are much closer to 0 all round.
Response:
Tilt Y is always zero on Android devices, but works fine on iOS.
I think nobody has reported it as a bug yet :) so you should do it.
https://answers.unrealengine.com/questions/467434/tilt-vector-values-on-android-seem-to-be-upside-do.html?sort=oldest
I'm not an Unreal dev, but you might have to put in a condition for Android to invert the image for yourself. I've done that kind of thing before (think cross browser compatibility). It's always a PITA and bloats the code for a stupid "reason", but sometimes it's what you have to do. Maybe Unreal will release a patch for it, but not likely, since any other devs who've already accepted and accounted for this issue will now have to fix their "fixed" code. It may just have to be something you live with, unfortunately.
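If you do end up correcting it yourself, a minimal sketch of the idea (reusing RawPixels, Width and Height from the code above, and assuming the problem really is a pure vertical mirror) would be to reverse the row order before handing the buffer to the image wrapper, guarded by a platform define:

#if PLATFORM_ANDROID
// Swap rows top-to-bottom to undo the vertical flip seen on Android.
for (int32 Row = 0; Row < Height / 2; ++Row)
{
    for (int32 Col = 0; Col < Width; ++Col)
    {
        RawPixels.Swap(Row * Width + Col, (Height - 1 - Row) * Width + Col);
    }
}
#endif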
Reading this, it just sounds like the different OSes decided on different corners for the origin of the "graph" that is the screen. Windows and iOS seem to have chosen the same corner, while Android chose differently. I've done similar things in programming as well as for CNC machines. You may be able to choose your "world origin" and orientation in settings.
Open the Find in Blueprints menu item.
Search for Debug Draw World Origin. In the results, double-click the Debug Draw World Origin function.
https://docs.unrealengine.com/en-US/Platforms/AR/HandheldAR/ARHowToShowWorldOrigin/index.html
There's not much out there that I can decipher as relevant, but searching for "android unreal world origin" gets a few results that might help. And adding "orientation" gets some different, possibly relevant, results.
I'm using iPhone ARToolkit and I'm wondering how it works.
I want to know how, given a destination location, the user's location and a compass reading, this toolkit can know whether the user is looking towards that destination.
How can I learn the maths behind these calculations?
The maths that AR ToolKit uses is basic trigonometry. It doesn't use the technique that Thomas describes, which I think would be a better approach (apart from step 5; see below).
Overview of the steps involved.
The iPhone's GPS supplies the device's location, and you already have the coordinates of the location you want to look at.
First it calculates the difference between the latitude and the longitude values of the two points. These two differences mean you can construct a right-angled triangle and calculate at what angle from your current position another given position lies. This is the relevant code:
- (float)angleFromCoordinate:(CLLocationCoordinate2D)first toCoordinate:(CLLocationCoordinate2D)second {
    float longitudinalDifference = second.longitude - first.longitude;
    float latitudinalDifference = second.latitude - first.latitude;

    // Angle measured clockwise from north (0 = north, pi/2 = east).
    float possibleAzimuth = (M_PI * .5f) - atan(latitudinalDifference / longitudinalDifference);

    if (longitudinalDifference > 0) return possibleAzimuth;             // east of us
    else if (longitudinalDifference < 0) return possibleAzimuth + M_PI; // west of us
    else if (latitudinalDifference < 0) return M_PI;                    // due south
    return 0.0f;                                                        // due north
}
At this point you can read the compass value from the phone and determine what specific compass angle (azimuth) your device is pointing at. The reading from the compass will be the angle directly in the center of the camera's view. The AR ToolKit then calculates the full range of angles currently displayed on screen, since the iPhone's field of view is known.
In particular, it does this by calculating what angle the leftmost part of the view is showing:
double leftAzimuth = centerAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;
if (leftAzimuth < 0.0) {
leftAzimuth = 2 * M_PI + leftAzimuth;
}
And then calculates the rightmost:
double rightAzimuth = centerAzimuth + VIEWPORT_WIDTH_RADIANS / 2.0;
if (rightAzimuth > 2 * M_PI) {
rightAzimuth = rightAzimuth - 2 * M_PI;
}
We now have:
The angle relative to our current position of something we want to display
A range of angles which are currently visible on the screen
This is enough to plot a marker on the screen in the correct position (kind of... see the problems section below).
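Putting those together, here is a sketch of the visibility test and horizontal placement (the function and its names are mine for illustration, not the toolkit's):

// Returns YES if 'azimuth' lies within the visible range, handling the
// wrap at 2*pi, and writes the horizontal screen position to *x.
BOOL azimuthVisible(double azimuth, double leftAzimuth, double rightAzimuth,
                    double viewWidthPixels, double *x) {
    double range = rightAzimuth - leftAzimuth;
    if (range < 0.0) range += 2.0 * M_PI;      // viewport straddles north

    double offset = azimuth - leftAzimuth;
    if (offset < 0.0) offset += 2.0 * M_PI;

    if (offset > range) return NO;             // not currently on screen
    *x = (offset / range) * viewWidthPixels;   // 0 = left edge of the view
    return YES;
}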
It also does similar calculations for the device's inclination, so if you look at the sky you hopefully won't see a city marker up there, and if you point it at your feet you should, in theory, see cities on the opposite side of the planet. There are problems with these calculations in this toolkit, however.
The problems...
Device orientation is not perfect
The calculation just explained assumes you're holding the device in an exact position relative to the earth, i.e. perfectly landscape or portrait. Your user probably won't always be doing that. If you tilt the device slightly, your horizon line will no longer be horizontal on screen.
The earth is actually 3D!
The earth is 3-dimensional, but few of the calculations in the toolkit account for that. The calculations it performs are only really accurate when you're pointing the device towards the horizon.
For example, if you try to plot a point on the opposite side of the globe (directly under your feet), this toolkit behaves very strangely. The approach used to calculate the azimuth range on screen is only valid when looking at the horizon. If you point your camera at the floor you can actually see every single compass point, but the toolkit thinks you're still only looking at compass reading ± (width of view / 2). If you rotate on the spot you'll see your marker move to the edge of the screen, disappear, and then reappear on the other side. What you would expect is for the marker to stay on screen as you rotate.
The solution
I recently implemented an AR app for which I initially hoped AR Toolkit would do the heavy lifting. I came across the problems just described, which aren't acceptable for my app, so I had to roll my own solution.
Thomas' approach is a good method up to point 5, which, as I explained above, only works when pointing towards the horizon. If you need to plot anything outside of that, it breaks down. In my case I have to plot objects that are overhead, so it's completely unsuitable.
I addressed this by using OpenGL ES to plot my markers where they actually are in 3D space, and by moving the OpenGL viewport around according to readings from the gyroscope, while continuously re-calibrating against the compass. The 3D engine handles all the hard work of determining what's on screen.
Hope that's enough to get you started. I wish I could provide more detail, but short of posting a lot of hacky code I can't. This approach did, however, address both problems described above. I hope to open-source that part of my code at some point, but right now it's very rough and coupled to my problem domain.
That is all the information needed: with the iPhone's location and the destination location you can calculate the destination angle (with respect to true north).
The only missing piece is knowing where the iPhone is currently pointing, which is returned by the compass (magnetic north + current location -> true north).
Edit: calculations (this is just an idea; there may be a better solution without a lot of coordinate transformations):
Convert the current and destination locations to ECEF coordinates.
Transform the destination ECEF coordinate to ENU (east, north, up), a local coordinate system with the current location as the reference location. You can also use this.
Ignore the height value and use the ENU coordinate to get the direction: atan2(deast, dnorth).
The compass already returns the angle the iPhone is pointing at.
Display the destination on the screen if dest_angle - 10° <= compass_angle <= dest_angle + 10°,
with respect to the cyclic angle space. The constant of 10° is just a guessed value: you should either try some values to find a useful one, or analyse some properties of the iPhone camera. (A sketch of these steps follows below.)
The coordinate-transformation equations become much simpler if you assume that the earth is a sphere rather than an ellipsoid. Most of the links I have posted assume a WGS-84 ellipsoid, because GPS does too, AFAIK.
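Under the sphere assumption the whole pipeline collapses to a few lines. A sketch (plain C; the function names and the local east/north approximation are mine) of steps 3 to 5:

#include <math.h>

// Approximate east/north offsets of the destination relative to the
// current location on a spherical earth, then take the bearing:
// 0 = north, clockwise positive. Good enough for nearby destinations.
static double bearingTo(double curLat, double curLon,
                        double destLat, double destLon) {
    const double R = 6371000.0; // mean earth radius in metres
    double dEast  = (destLon - curLon) * (M_PI / 180.0) * R * cos(curLat * M_PI / 180.0);
    double dNorth = (destLat - curLat) * (M_PI / 180.0) * R;
    return atan2(dEast, dNorth);
}

// Step 5: show the marker if the bearing is within +/-10 degrees of the
// compass heading, wrapping correctly across the 0/2*pi boundary.
static int shouldDisplay(double destAngle, double compassAngle) {
    double diff = fmod(destAngle - compassAngle + 3.0 * M_PI, 2.0 * M_PI) - M_PI;
    return fabs(diff) <= (10.0 * M_PI / 180.0);
}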
I am trying to drag some shapes in an HTML canvas, but I am encountering a problem with determining the change in mouse coordinates [dx, dy].
First of all, there is no problem with the coordinates themselves, stored in mousePos, as the rollover effects work flawlessly. What I am doing is saving the mouse coordinates upon first entering the shape:
pos = {x : mousePos[0] , y : mousePos[1]};
Then, onMotion updates the coordinates every time the mouse moves, as well as recording the current position:
dx=mousePos[0]-pos.x;
dy=mousePos[1]-pos.y;
pos = {x : mousePos[0] , y : mousePos[1]};
Then I add the dx and dy values to the shape's coordinates (let's take a simple rectangle as an example):
ctx.fillRect(0 + this.dx, 0 + this.dy, 100, 100); // fillRect takes (x, y, width, height), not two corners
As long as the mouse doesn't move too fast, it works relatively well (not perfectly, though). If I move the mouse very quickly, without going out of the window, the rectangle does not catch up with the mouse. I could understand a delay catching up to the mouse, but how can the delta values be off? Clearly we know where we started, and even if dozens or hundreds of pixels are skipped in the process, eventually the mouse stops and the correct delta values should be calculated.
Any help would be greatly appreciated as I have hit a conceptual wall here.
You might try reading e.layerX/e.layerY when onMotion is fired, to get the real position instead of accumulating deltas. That way it can't be "off".
To use this, place your shape into a div with style="padding:0px;margin:0px;", because the position is relative to the parent block.
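Note that layerX/layerY are non-standard. A more portable sketch of the same idea, computing the canvas-relative position from the event's client coordinates (the handler wiring here is mine):

const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');

canvas.addEventListener('mousemove', function (e) {
  // Absolute mouse position relative to the canvas; no delta
  // accumulation, so nothing can drift when the mouse moves fast.
  const rect = canvas.getBoundingClientRect();
  const x = e.clientX - rect.left;
  const y = e.clientY - rect.top;

  // Redraw the shape directly at the absolute position.
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillRect(x, y, 100, 100);
});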
How does one determine the (x,y) coordinates while dragging "dragover" and after the drop ("drop") in HTML5 DnD?
I found a webpage that describes x,y for dragover (it says to look at e.offsetX and e.layerX to see if they are set in various browsers, but for the life of me they ARE NOT set).
What do you use in the drop(e) function to find out WHERE in the current drop target the element was actually dropped?
I'm writing an online circuit design app and my drop target is large.
Do I need to drop down to mouse-level events to find out x,y, or does HTML5 provide a higher-level abstraction?
The properties clientX and clientY should be set (in modern browsers):
function drag_over(event) {
console.log(event.clientX);
console.log(event.clientY);
event.preventDefault();
return false;
}
function drop(event) {
console.log(event.clientX);
console.log(event.clientY);
event.preventDefault();
return false;
}
Here's a (somewhat) practical example that works in Firefox and Chrome.
To build on top of robertc's answer and work around the issue:
The clientX and clientY will be inaccurate by an amount proportional to the distance of the cursor from the upper-left corner of the element being dropped.
You can simply calculate the delta by subtracting the client coordinates recorded at dragstart from the client coordinates recorded at dragend. Bear in mind that the dropped element still has exactly the same position at dragend; you can then update it by the delta to get the new left and top coordinates, as in the sketch below.
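A minimal sketch of that bookkeeping in plain JS (the element must be absolutely positioned; the variable names are mine):

let startX = 0, startY = 0;

el.addEventListener('dragstart', function (e) {
  startX = e.clientX;
  startY = e.clientY;
});

el.addEventListener('dragend', function (e) {
  // Distance the cursor travelled during the drag.
  const dx = e.clientX - startX;
  const dy = e.clientY - startY;

  // Shift the element by the same amount.
  el.style.left = (el.offsetLeft + dx) + 'px';
  el.style.top = (el.offsetTop + dy) + 'px';
});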
Here's a demo in React, although the same can be accomplished in plain JS.