How to detect device motion direction in Flutter?

I am trying to detect the direction in which a device is moved. If a device (mobile or tablet) is picked up and moved to the right or left, I want to detect that using Flutter.
Here is what I've tried:
I have tried using the sensors_plus library, listening to the accelerometer to detect left or right movement; the program is given below.
@override
void initState() {
  userAccelerometerEvents.listen((UserAccelerometerEvent event) {
    var me = event.x;
    var me2 = num.parse(me.toStringAsFixed(2));
    print(me2);
    if (me2.round() < 0) {
      setState(() {
        direction = "Left";
      });
    }
    if (me2 > 0) {
      if (me2 > 0.5) {
        setState(() {
          direction = "Right";
        });
      } else {
        setState(() {
          direction = "Still";
        });
      }
    }
  });
  super.initState();
}
The output I get: the program only shows "Left" or "Right" if I move the device really fast, although I would like to detect the direction at slower speeds too. For this, I have tried using smaller values of "event.x", but in that case the output switches between "Left", "Right" and "Still" really quickly even when the device is still.
I could not find any example that covers this problem online or in the package docs or examples.
How can I achieve this? I would like to detect the left and right movement of the device at slow speeds too.
Should I use another sensor?
I do not fully understand the magnetometer's readings, which I've also tried using; any help in understanding those is appreciated.
I have also tried searching for a package on pub.dev but I can't find any that fit this need.
Any help is appreciated, thanks a lot in advance.

The plugin tells you about acceleration. If you want to detect motion, i.e. speed, you need to apply an integration filter to the data from those events. You may want to use the userAccelerometerEvents stream since those already eliminate gravity.
Also, using events from the gyroscope will make your calculation more robust.
This is an application of sensor fusion by the way.
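As a rough, untested sketch of that idea (not taken from the answer itself): apply a dead band to the x-axis acceleration, integrate it into a velocity estimate, and decay the estimate so sensor noise does not accumulate. The 0.05 dead band, the 0.95 decay factor and the 0.1 m/s direction thresholds are assumptions you would have to tune, and the sign convention (positive x to the device's right) follows the question's code.

import 'dart:async';
import 'package:sensors_plus/sensors_plus.dart';

double _velocityX = 0;   // estimated sideways velocity (m/s)
DateTime? _lastSample;
StreamSubscription? _sub;

void startMotionDetection(void Function(String direction) onDirection) {
  _sub = userAccelerometerEvents.listen((UserAccelerometerEvent event) {
    final now = DateTime.now();
    final dt = _lastSample == null
        ? 0.0
        : now.difference(_lastSample!).inMicroseconds / 1e6;
    _lastSample = now;

    // Ignore tiny accelerations (sensor noise) before integrating.
    final ax = event.x.abs() < 0.05 ? 0.0 : event.x;
    _velocityX += ax * dt;   // integrate acceleration into velocity
    _velocityX *= 0.95;      // decay so drift does not accumulate

    if (_velocityX > 0.1) {
      onDirection('Right');
    } else if (_velocityX < -0.1) {
      onDirection('Left');
    } else {
      onDirection('Still');
    }
  });
}

void stopMotionDetection() => _sub?.cancel();

Because the velocity estimate builds up over several samples, slower movements can cross the threshold too, while the decay keeps the label returning to "Still" when the device stops.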

You can try using the location package to access the device's current location. It has a heading attribute which can be useful for you. You can use a listener to capture device location changes; in the snippet below, the current heading value is printed whenever the location updates.
Location location = Location();
location.onLocationChanged.listen((LocationData currentLocation) {
  // Use current location
  print(currentLocation.heading);
});
Read more here: https://pub.dev/packages/location
Heading is a double value that varies from 0 to 360 degrees, measured clockwise from north.
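As a rough illustration of what the value means (this mapping is my own, for illustration only, not part of the location package):

// Sketch only: map a heading value (0-360, clockwise from north)
// to a coarse compass label.
String compassLabel(double heading) {
  if (heading >= 315 || heading < 45) return 'North';
  if (heading < 135) return 'East';
  if (heading < 225) return 'South';
  return 'West';
}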

Related

Image created upside down in Android

I am using the following code to export a SceneCaptureComponent as a PNG image on the device, and I am facing a weird issue.
On Windows and iOS everything works as expected.
On Android the image is upside down, like it is rotated by 180 degrees.
Any hints why?
bool UsaveJPG::SaveImage(class USceneCaptureComponent2D* Target, const FString ImagePath, const FLinearColor ClearColour)
{
    FRenderTarget* RenderTarget = Target->TextureTarget->GameThread_GetRenderTargetResource();
    if (RenderTarget == nullptr)
    {
        return false;
    }

    TArray<FColor> RawPixels;

    // Format not supported - use PF_B8G8R8A8.
    if (Target->TextureTarget->GetFormat() != PF_B8G8R8A8)
    {
        // TRACEWARN("Format not supported - use PF_B8G8R8A8.");
        return false;
    }

    if (!RenderTarget->ReadPixels(RawPixels))
    {
        return false;
    }

    // Convert to FColor.
    FColor ClearFColour = ClearColour.ToFColor(false); // FIXME - want sRGB or not?

    for (auto& Pixel : RawPixels)
    {
        // Switch Red/Blue channels.
        const uint8 PR = Pixel.R;
        const uint8 PB = Pixel.B;
        Pixel.R = PB;
        Pixel.B = PR;

        // Set alpha based on RGB values of ClearColour.
        Pixel.A = ((Pixel.R == ClearFColour.R) && (Pixel.G == ClearFColour.G) && (Pixel.B == ClearFColour.B)) ? 0 : 255;
    }

    TSharedPtr<IImageWrapper> ImageWrapper = ImageWrapperModule.CreateImageWrapper(EImageFormat::PNG);

    const int32 Width = Target->TextureTarget->SizeX;
    const int32 Height = Target->TextureTarget->SizeY;

    if (ImageWrapper.IsValid() && ImageWrapper->SetRaw(&RawPixels[0], RawPixels.Num() * sizeof(FColor), Width, Height, ERGBFormat::RGBA, 8))
    {
        FFileHelper::SaveArrayToFile(ImageWrapper->GetCompressed(), *ImagePath);
        return true;
    }

    return false;
}
It seems as if you aren't the only one having this issue. Here are related questions with answers (the relevant answer portions are quoted below):
"I believe the editor mobile preview provides "correct" results, while the mobile device produces upside down results. I ran into this issue when creating a post process volume to simulate a camera partially submerged in water with mathematically calculated wave displacement. I can convert the screen position to world position using a custom material node with the code: float2 ViewPos = (ScreenPos.xy - View.ScreenPositionScaleBias.wz) / View.ScreenPositionScaleBias.xy * 10; float4 Pos = mul(float4(ViewPos, 10, 1), [Link Removed]); return Pos.xyz / Pos.w; This works as expected in the editor mobile preview, but the results are weirdly flipped around when launching to a mobile device. Manually inverting the Y-coordinate of the UV by subtracting the Y-component from 1 will correct the ViewportUV output, but the world space calculation still doesn't work (probably because [Link Removed] and [Link Removed]need to also be inverted). Strangely, if I feed the inverted UV coordinates into SceneTexture:PostProcessInput0, the entire scene flips upside down when running on a mobile device. This means that the ScreenPosition's ViewportUV is not the same as the SceneTexture UV. As a result, any post process effects that rely on the ViewportUV do not work correctly when running on mobile devices."
https://unreal-engine-issues.herokuapp.com/issue/UE-63442
I'm having trouble getting the tilt function to work with my Android phone. It seems that the tilt values are upside down, because when I hold the phone flat on the table with the screen up, the tilt values snap from values like x=-3, y=0, z=-3 to x=3, y=0, z=3. When I hold the phone up with the screen facing down, the values are much closer to 0 all round.
Response:
Tilt Y is always zero on Android devices, but works fine on iOS.
I think nobody has reported it as a bug yet :) you should do it.
https://answers.unrealengine.com/questions/467434/tilt-vector-values-on-android-seem-to-be-upside-do.html?sort=oldest
I'm not an Unreal dev, but you might have to put in a condition for Android to invert the image for yourself. I've done that kind of thing before (think cross browser compatibility). It's always a PITA and bloats the code for a stupid "reason", but sometimes it's what you have to do. Maybe Unreal will release a patch for it, but not likely, since any other devs who've already accepted and accounted for this issue will now have to fix their "fixed" code. It may just have to be something you live with, unfortunately.
From reading this, it just sounds like the different OSs decided on different corners for the origin of the "graph" that is the screen. Windows and iOS seem to have chosen the same corner, while Android chose differently. I've done similar things in programming as well as for CNC machines. You may be able to choose your "world origin" and orientation in settings.
Open the Find in Blueprints menu item.
Search for Debug Draw World Origin. In the results, double-click the Debug Draw World Origin function.
https://docs.unrealengine.com/en-US/Platforms/AR/HandheldAR/ARHowToShowWorldOrigin/index.html
There's not much out there that I can decipher as relevant, but searching for "android unreal world origin" gets a few results that might help. And adding "orientation" gets some different, possibly relevant, results.
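As a rough, untested sketch of the "invert the image yourself on Android" idea mentioned above: after ReadPixels and before SetRaw you could mirror the row order on Android only. The names reuse the SaveImage code above; whether this is the right place to flip is an assumption.

#if PLATFORM_ANDROID
    // Hypothetical workaround: flip the pixel buffer vertically on Android only.
    const int32 W = Target->TextureTarget->SizeX;
    const int32 H = Target->TextureTarget->SizeY;
    TArray<FColor> Flipped;
    Flipped.SetNumUninitialized(RawPixels.Num());
    for (int32 Row = 0; Row < H; ++Row)
    {
        // Copy each source row into the mirrored destination row.
        FMemory::Memcpy(Flipped.GetData() + Row * W,
                        RawPixels.GetData() + (H - 1 - Row) * W,
                        W * sizeof(FColor));
    }
    RawPixels = MoveTemp(Flipped);
#endif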

Is it possible to set a "max allowed pitch" for a Mapbox GL map?

I am unable to find a method to prevent the user from setting the pitch angle of the map too far. I am working with high resolution weather data, so I want to prevent them from setting the pitch so extreme that they can see far away from the intended area. That puts me in a position to either extend the data (WAY too much bandwidth would be used) or just not display it there, making it ugly. I would not like to eliminate the pitch ability completely, as it helps with visualization.
I have looked as much as I can in the documentation, but I cannot find the information, and I do not even have a code snippet to show because I have nowhere to start. Is there any way to (for example) let the user pitch only up to a certain degree? I made an example image where the left pitch would be OK but not the right, as at that zoom level it would let them see too far away. If this is not possible, I am open to alternative methods, if any.
Use the render event to check the value of the angle:
map.on('render', (e) => {
  if (e.target.getPitch() > MAX_PITCH) e.target.setPitch(MAX_PITCH)
})
[ https://jsfiddle.net/05o4e7dr/ ]
There's now a maxPitch option that you can pass to the Map constructor. And, as of Mapbox GL JS 2.0.0, you can set it to pretty high values, all the way up to 85°:
let MAX_PITCH = 85;
new mapboxgl.Map({
  container: 'map',
  // ...
  pitch: MAX_PITCH,
  maxPitch: MAX_PITCH,
});

SetTransform causes Unity LeapMotion object to spin too fast

This builds upon a question I recently asked here:
Unity3D Leap Motion - Put hand in static pose (SetTransform causes Hand to spin crazy)
However, the code suggested in the only response given does not work. I have been using the 'SetTransform' method and while it does allow me to move the hand to my desired position, the rotation is crazy. The hand continuously spins and despite spending the best part of four days on it, I can't find the solution.
In short, all I am trying to do is set the hand to a fixed pose (a fist for example) but have it move and rotate with the live hand data. I created a method (detailed in the previous question) that would manually recalculate the positions of the joints to put the hand in the pose, as SetTransform was causing this crazy rotation. However, I still ended up with crazy rotation from having to transform the hand to rotate it, so I have switched back to the SetTransform method for ease.
posedHand = PoseManager.LoadHandPose(mhd.LeftHand, int.Parse("2"), transform);
Hand h = new Hand();
if (posedHand != null)
{
    h.CopyFrom(posedHand);
    h.SetTransform(LiveData.LeftHand.PalmPosition.ToVector3(), LiveData.LeftHand.Rotation.ToQuaternion());
}
All I really want is a method that I can pass two hand objects into (one being the current 'live' hand, the other being the desired pose) and get a hand object returned which I can then render.
Update
As requested here are images of what I am currently getting and what I want to achieve.
Current:
Target:
The target image would display the fixed pose, which in this example is a fist, at the current position and rotation of the live hand. This means I could have my hand 'open' but onscreen I would see a fist moving around. As you can see from the 'current' image, SetTransform is giving me the correct pose and position but the rotation is freaking out.
This question has been answered elsewhere: Unity3D Leap Motion - Put hand in static pose (Pose done just can't rotate)
Essentially, I had to create a function that took in the pose hand as a parameter, used Hands.Get(Chirality) to get the position and rotation from the live data, and then used SetTransform to move the posed hand to the new position and rotation.
public static Hand TransformHandToLivePosition(Chirality handType, Hand poseHand)
{
    Hand sourceHand = Hands.Get(handType);
    Hand mimicHand = null;

    if (poseHand == null && sourceHand != null && sourceHand.Fingers.Count > 0)
    {
        //poseHand = PoseManager.LoadHandPose(sourceHand, 2, this.transform, true);
    }

    if (poseHand != null && sourceHand != null)
    {
        // Copy data from the tracked hand into the mimic hand.
        if (mimicHand == null) { mimicHand = new Hand(); }
        mimicHand.CopyFrom(poseHand); // copy the stored pose into the mimic hand
        mimicHand.Arm.CopyFrom(poseHand.Arm); // copy the stored pose's arm into the mimic hand

        // Use the rotation from the live data.
        var handRotation = sourceHand.Rotation.ToQuaternion();

        // Transform the copied hand so that it's centered on the current hand's position and matches its rotation.
        mimicHand.SetTransform(sourceHand.PalmPosition.ToVector3(), handRotation);
    }

    return mimicHand;
}

Check if camera is facing a specific direction

I'm working to let a character push objects around. The problem is that as soon as he touches an object he starts moving it, even when the touch was accidental and the character isn't facing the direction of the object.
What I want to do is get the direction of the object when a collision happens, and if the camera is really facing that direction, allow the player to move it.
Right now I have only managed to get the direction of the object, but I don't know how to compare that with the direction of the camera.
This is what I'm trying now:
void OnCollisionEnter(Collision col) {
    float maxOffset = 1f;
    if (col.gameObject.name == "Sol") {
        // Calculate object direction
        Vector3 direction = (col.transform.position - transform.position).normalized;
        // Check the offset with the camera rotation (this doesn't work)
        Vector3 offset = direction - Camera.main.transform.rotation.eulerAngles.normalized;
        if (offset.x + offset.y + offset.z < maxOffset) {
            // Move the object
        }
    }
}
You can try to achieve this in several different ways, and it is a bit dependent on how precisely you mean "facing" the box.
You can get events when an object is visible within a certain camera, and when it enters or leaves the view, using the following functions. With these, your collision handling can trigger from the moment the box gets rendered by that camera (so even if just an edge is visible):
OnWillRenderObject, Renderer.isVisible, Renderer.OnBecameVisible, OnBecameInvisible
Or you could calculate whether an object's bounding box falls within the camera's view frustum, for which you could use the following geometry utilities (a minimal sketch follows below):
GeometryUtility.CalculateFrustumPlanes, GeometryUtility.TestPlanesAABB
Or, if you'd rather have a very precise facing check, you could go for a Physics.Raycast, so that you only trigger the event when the ray hits the object.
Hope this helps ya out.
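As a minimal sketch of the frustum-test option above: the class name is a placeholder, and the "Sol" object name is reused from the question's code.

using UnityEngine;

public class PushWhenVisible : MonoBehaviour
{
    void OnCollisionEnter(Collision col)
    {
        if (col.gameObject.name != "Sol") return;

        // Build the main camera's six frustum planes and test the touched
        // object's axis-aligned bounds against them.
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(Camera.main);
        if (GeometryUtility.TestPlanesAABB(planes, col.collider.bounds))
        {
            // The object is at least partially inside the camera's view,
            // so allow the player to push it.
        }
    }
}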
Take a compass object and align it to your device (sorry, object); then when you move it you can always know where it points to.
Theoretically it should work, but maybe your object just moves around because of a bug in your simulator's motion engine.

HTML5: dragover(), drop(): how to get current x,y coordinates?

How does one determine the (x, y) coordinates while dragging ("dragover") and after the drop ("drop") in HTML5 DnD?
I found a webpage that describes the x, y for dragover (it says to look at e.offsetX and e.layerX to see if they are set in various browsers, but for the life of me they ARE NOT set).
What do you use for the drop(e) function to find out WHERE in the current drop target it was actually dropped?
I'm writing an online circuit design app and my drop target is large.
Do I need to drop down to mouse-level to find out x,y or does HTML5 provided a higher level abstraction?
The properties clientX and clientY should be set (in modern browsers):
function drag_over(event) {
  console.log(event.clientX);
  console.log(event.clientY);
  event.preventDefault();
  return false;
}

function drop(event) {
  console.log(event.clientX);
  console.log(event.clientY);
  event.preventDefault();
  return false;
}
Here's a (somewhat) practical example that works in Firefox and Chrome.
To build on top of robertc's answer and work around the issue:
The clientX and clientY values are going to be off by an amount proportional to the distance of the cursor from the upper left corner of the element which was dropped.
You may simply calculate the delta distance by subtracting the client coords recorded at dragstart from the client coords recorded at dragend. Bear in mind that the dropped element still has the exact same position at dragend, which you may then update by the delta distance to get the new left and top positional coordinates.
Here's a demo in React, although the same can be accomplished in plain JS.
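A rough plain-JS sketch of that delta approach (the element ids, the absolute positioning, and using the drop event's coordinates for the end point are all assumptions):

// Record the cursor position at dragstart, then on drop shift the dragged
// element by the distance the cursor travelled.
const box = document.getElementById('box');       // assumed draggable, absolutely positioned element
const target = document.getElementById('target'); // assumed large drop target
let startX = 0;
let startY = 0;

box.addEventListener('dragstart', (e) => {
  startX = e.clientX;
  startY = e.clientY;
});

target.addEventListener('dragover', (e) => e.preventDefault()); // allow dropping

target.addEventListener('drop', (e) => {
  e.preventDefault();
  const dx = e.clientX - startX;
  const dy = e.clientY - startY;
  // The element still sits at its original position, so move it by the delta.
  box.style.left = `${box.offsetLeft + dx}px`;
  box.style.top = `${box.offsetTop + dy}px`;
});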