Image created upside down on Android - unreal-engine4

I am using the following code to export a SceneCaptureComponent as a PNG image on the device, and I am facing a weird issue.
On Windows and iOS everything works as expected.
On Android the image is upside down, like it is rotated by 180 degrees.
Any hints why?
bool UsaveJPG::SaveImage(class USceneCaptureComponent2D* Target, const FString ImagePath, const FLinearColor ClearColour)
{
    FRenderTarget* RenderTarget = Target->TextureTarget->GameThread_GetRenderTargetResource();
    if (RenderTarget == nullptr)
    {
        return false;
    }

    // Only PF_B8G8R8A8 render targets are supported.
    if (Target->TextureTarget->GetFormat() != PF_B8G8R8A8)
    {
        // TRACEWARN("Format not supported - use PF_B8G8R8A8.");
        return false;
    }

    TArray<FColor> RawPixels;
    if (!RenderTarget->ReadPixels(RawPixels))
    {
        return false;
    }

    // Convert to FColor.
    FColor ClearFColour = ClearColour.ToFColor(false); // FIXME - want sRGB or not?
    for (auto& Pixel : RawPixels)
    {
        // Swap the red and blue channels.
        const uint8 PR = Pixel.R;
        const uint8 PB = Pixel.B;
        Pixel.R = PB;
        Pixel.B = PR;

        // Make pixels matching the clear colour fully transparent, all others opaque.
        Pixel.A = ((Pixel.R == ClearFColour.R) && (Pixel.G == ClearFColour.G) && (Pixel.B == ClearFColour.B)) ? 0 : 255;
    }

    TSharedPtr<IImageWrapper> ImageWrapper = ImageWrapperModule.CreateImageWrapper(EImageFormat::PNG);
    const int32 Width = Target->TextureTarget->SizeX;
    const int32 Height = Target->TextureTarget->SizeY;
    if (ImageWrapper.IsValid() && ImageWrapper->SetRaw(&RawPixels[0], RawPixels.Num() * sizeof(FColor), Width, Height, ERGBFormat::RGBA, 8))
    {
        FFileHelper::SaveArrayToFile(ImageWrapper->GetCompressed(), *ImagePath);
        return true;
    }
    return false;
}

It seems you aren't the only one having this issue.
Here are some related questions with answers (quoted below):
"I believe the editor mobile preview provides "correct" results, while the mobile device produces upside down results. I ran into this issue when creating a post process volume to simulate a camera partially submerged in water with mathematically calculated wave displacement. I can convert the screen position to world position using a custom material node with the code: float2 ViewPos = (ScreenPos.xy - View.ScreenPositionScaleBias.wz) / View.ScreenPositionScaleBias.xy * 10; float4 Pos = mul(float4(ViewPos, 10, 1), [Link Removed]); return Pos.xyz / Pos.w; This works as expected in the editor mobile preview, but the results are weirdly flipped around when launching to a mobile device. Manually inverting the Y-coordinate of the UV by subtracting the Y-component from 1 will correct the ViewportUV output, but the world space calculation still doesn't work (probably because [Link Removed] and [Link Removed]need to also be inverted). Strangely, if I feed the inverted UV coordinates into SceneTexture:PostProcessInput0, the entire scene flips upside down when running on a mobile device. This means that the ScreenPosition's ViewportUV is not the same as the SceneTexture UV. As a result, any post process effects that rely on the ViewportUV do not work correctly when running on mobile devices."
https://unreal-engine-issues.herokuapp.com/issue/UE-63442
I'm having trouble getting the tilt function to work with my Android phone. It seems that the tilt values are upside down: when I hold the phone flat on the table with the screen up, the tilt values snap from values like x=-3, y=0, z=-3 to x=3, y=0, z=3. When I hold the phone up with the screen facing down, the values are much closer to 0 all round.
Response:
Tilt Y is always zero on Android devices, but works fine on iOS.
I don't think anybody has reported it as a bug yet :) you should do it!
https://answers.unrealengine.com/questions/467434/tilt-vector-values-on-android-seem-to-be-upside-do.html?sort=oldest
I'm not an Unreal dev, but you might have to put in a condition for Android to invert the image for yourself. I've done that kind of thing before (think cross browser compatibility). It's always a PITA and bloats the code for a stupid "reason", but sometimes it's what you have to do. Maybe Unreal will release a patch for it, but not likely, since any other devs who've already accepted and accounted for this issue will now have to fix their "fixed" code. It may just have to be something you live with, unfortunately.
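If you do end up flipping the image yourself on the Android path, the fix is just reversing the scanline order between ReadPixels and SetRaw. A minimal sketch in plain C++ (std::vector stands in for the engine's TArray<FColor>; the function name and buffer layout are my own assumptions, not engine API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Flip an image buffer vertically in place. Pixels are stored
// row-major, one 32-bit value per pixel, width * height entries.
void FlipImageVertically(std::vector<uint32_t>& pixels, int width, int height)
{
    for (int row = 0; row < height / 2; ++row)
    {
        // Swap this row with its mirror row from the bottom.
        std::swap_ranges(pixels.begin() + row * width,
                         pixels.begin() + (row + 1) * width,
                         pixels.begin() + (height - 1 - row) * width);
    }
}
```

Guarded by something like #if PLATFORM_ANDROID, this reverses only the scanline order without touching pixel values.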
From reading this, it sounds like the different OSs chose different corners for the origin of the "graph" that is the screen. Windows and iOS seem to have chosen the same corner, while Android chose differently. I've done similar things in programming as well as for CNC machines. You may be able to choose your "world origin" and orientation in settings.
Open the Find in Blueprints menu item.
Search for Debug Draw World Origin. In the results, double-click the Debug Draw World Origin function.
https://docs.unrealengine.com/en-US/Platforms/AR/HandheldAR/ARHowToShowWorldOrigin/index.html
There's not much out there that I can decipher being relevant, but searching for "android unreal world origin" gets a few results that might help. And adding "orientation" gets some different, possibly relevant, results.

Related

How to detect device motion direction in Flutter?

I am trying to detect the direction in which a device is moved. If a device (mobile or tablet) is picked up and moved to the right or left, I want to detect that using Flutter.
Here is what I've tried:
I tried using the sensors_plus library, reading the accelerometer to detect left or right movement; the program is given below.
@override
void initState() {
  super.initState();
  userAccelerometerEvents.listen((UserAccelerometerEvent event) {
    var me = event.x;
    var me2 = num.parse(me.toStringAsFixed(2));
    print(me2);
    if (me2.round() < 0) {
      setState(() {
        direction = "Left";
      });
    }
    if (me2 > 0) {
      if (me2 > 0.5) {
        setState(() {
          direction = "Right";
        });
      } else {
        setState(() {
          direction = "Still";
        });
      }
    }
  });
}
The program only shows "Left" or "Right" if I move the device really fast, but I would like to detect the direction at slower speeds too. I tried using smaller values of "event.x" for this, but then the output switches between "Left", "Right", and "Still" really quickly, even when the device is still.
I could not find any example that covers this problem online or in the package docs or examples.
How can I achieve this? I would like to detect left and right movement of the device at slow speeds too.
Should I use another sensor?
I do not fully understand the magnetometer's inputs which I've also tried using. Any help in understanding that is appreciated.
I have also tried searching for a package on pub.dev but I can't find any that fit this need.
Any help is appreciated thanks a lot in advance.
The plugin tells you about acceleration. If you want to detect motion, i.e. speed, you need to apply an integration filter to the data from those events. You may want to use the userAccelerometerEvents stream, since those events already eliminate gravity.
Also, using events from the gyroscope will make your calculation more robust.
This is an application of sensor fusion, by the way.
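Before reaching for full sensor fusion, a simpler first step is smoothing the raw samples and applying a dead zone, so noise while the device is still doesn't flicker between "Left" and "Right". A hedged sketch of the idea in C++ (the smoothing constant and threshold are arbitrary values you would tune, not values from the plugin):

```cpp
#include <string>

// Exponential moving average: alpha near 0 = heavy smoothing,
// alpha near 1 = little smoothing.
struct LowPassFilter {
    double alpha;
    double value = 0.0;
    double update(double sample) {
        value = alpha * sample + (1.0 - alpha) * value;
        return value;
    }
};

// Classify the smoothed x-acceleration with a dead zone so that
// sensor noise while the device is still maps to "Still".
std::string classify(double smoothedX, double deadZone) {
    if (smoothedX > deadZone)  return "Right";
    if (smoothedX < -deadZone) return "Left";
    return "Still";
}
```

A sustained slow movement accumulates in the filter and crosses the dead zone, while short random jitter averages out; that is exactly the behavior the raw per-sample threshold lacks.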
You can try using the location package to access the device's current location. It has a heading attribute, which can be useful for you. You can use a listener to capture device location changes; whenever the device heading changes, the heading value will be printed.
Location location = Location();
location.onLocationChanged.listen((LocationData currentLocation) {
  // Use current location
  print(currentLocation.heading);
});
Read more here: https://pub.dev/packages/location
Heading is a double value that varies from 0 to 360 degrees, measured clockwise from north.
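As an illustration of how you might bucket a heading into coarse directions for motion decisions, here is a small sketch (the bucket names and boundaries are my own choice, not part of the location package):

```cpp
#include <string>

// Map a heading in [0, 360) degrees, measured clockwise from north,
// to one of four coarse compass directions.
std::string headingToCardinal(double heading) {
    if (heading >= 315.0 || heading < 45.0)  return "N";
    if (heading < 135.0)                     return "E";
    if (heading < 225.0)                     return "S";
    return "W";
}
```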

Unity resolution problem, mesh has holes when zooming out

I have a server which renders a mesh using OpenGL with a custom framebuffer. In my fragment shader I write gl_PrimitiveID into an output variable, and afterwards I call glReadPixels to read the IDs back. That tells me which triangles were rendered (and are therefore visible), and I send all those triangles to a client running in Unity. On the client I add the vertex and index data to a GameObject, and it renders without a problem. I get the exact same rendering result in Unity as I got with OpenGL, until I start to zoom out.
Here are pictures of the mesh rendered with Unity:
My first thought was that I had different resolutions, but this is not the case: I have 1920*1080 on both server and client. I use the same view and projection matrices from the client on my server, so that also shouldn't be the problem. What could be the cause of this error?
In case you need to see some of the code I wrote.
Here is my vertex shader code:
#version 330 core
layout (location = 0) in vec3 position;

uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * vec4(position, 1.0f);
}
Here is my fragment shader code:
#version 330 core
layout(location = 0) out int primitiveID;

void main(void)
{
    primitiveID = gl_PrimitiveID + 1; // +1 because the first primitive is 0
}
and here is my getVisibleTriangles method:
std::set<int> getVisibleTriangles() {
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RED_INTEGER, GL_INT, &pixels[0]);
    std::set<int> visibleTriangles;
    for (int i = 0; i < pixelsLength; i += 4) {
        int id = *(int*)&pixels[i];
        if (id != 0) {
            visibleTriangles.insert(id - 1); // id 0 is NO_PRIMITIVE
        }
    }
    return visibleTriangles;
}
Oh my god, I can't believe I made such a stupid mistake.
After all, it WAS a resolution problem.
I didn't call glViewport (only when resizing the window). When creating a window with glfwCreateWindow, GLFW creates the window, but since the parameters are only hints and not hard constraints (as stated in the GLFW reference), it is possible that they are not exactly fulfilled. I passed a desired size of 1920*1080 (which is also my resolution), but apparently the drawing area did not actually get that size, because some space is also needed for the menu etc. So the server rendered at a lower resolution (to be exact, 1920*1061), which results in missing triangles on the client.
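To see why those 19 missing scanlines translate into missing triangles, consider a simplified model of the ID read-back: IDs are only written inside the actual viewport, so rows beyond it stay 0 (NO_PRIMITIVE) and their triangles are reported as invisible. A sketch of the same logic as getVisibleTriangles, reduced to a pure function (the byte-stepping is removed for clarity; this is a model, not the original code):

```cpp
#include <set>
#include <vector>

// Collect the zero-based primitive IDs present in an ID buffer.
// A value of 0 means NO_PRIMITIVE (nothing was rendered there),
// which is why unwritten scanlines silently drop triangles.
std::set<int> visibleIds(const std::vector<int>& idBuffer) {
    std::set<int> visible;
    for (int id : idBuffer) {
        if (id != 0) {
            visible.insert(id - 1); // shader wrote gl_PrimitiveID + 1
        }
    }
    return visible;
}
```

The general fix pattern is to query the real drawable size (e.g. with glfwGetFramebufferSize) and use that same size for both glViewport and glReadPixels, rather than the requested window size.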
Before getting into the shader details for this problem, are you sure that the problem doesn't lie with the way the zoom out functionality has been implemented in Unity? It's just a hunch since I have seen it in older projects, but if the zoom in/out functionality works by actually moving the camera then the movement of the clipping planes will create those "holes" when the mesh surfaces go outside the range. Although the placement of some of those holes in the shared image makes me doubt that this is the case, but you never know.
If this happens to be the way the zoom function works then you can confirm this by looking at the editor mode while zooming out. It will display the position of the clipping planes of the camera in relation to the mesh.

Swift AVFoundation: QR code scanning error in bright ambient light

I'm using the example from here to build a QR code scanning app, and it works perfectly fine on paper or in normal light.
In normal conditions, the QR code looks like below:
(the lines are thicker and the dots are close to each other)
My problem: when the ambient light is bright, and the phone's screen is bright (especially a bright display like the Samsung Edge 7), the scanned QR code becomes like below. It is unable to read the QR code anymore!
(the lines become thinner and the dots become smaller and further apart)
Any suggestion on where/how I can fix this kind of error? ZXing is able to scan even in my 'error' scenario.
Thanks in advance!
After asking and searching around (useful info: https://www.objc.io/issues/21-camera-and-photos/camera-capture-on-ios/), it turns out this issue is related to the camera's exposure, brightness, contrast, and white balance factors.
This is what I added to solve the issue:
// Zoom + set exposure for the bright scenario.
do {
    try currentDevice.lockForConfiguration()
} catch {
    // handle error
    return
}
currentDevice.videoZoomFactor = 1.0 + CGFloat(1)
let exposureBias: Float = -0.5
currentDevice.setExposureTargetBias(exposureBias) { (time: CMTime) -> Void in
}
currentDevice.unlockForConfiguration()
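One detail worth hedging on: AVCaptureDevice only accepts an exposure bias within its reported minExposureTargetBias...maxExposureTargetBias range, so it is safer to clamp the requested value first. The clamp itself is trivial; a sketch in C++ for illustration (the Swift call site would apply the same logic before setExposureTargetBias):

```cpp
#include <algorithm>

// Clamp a requested exposure bias into the device-reported range
// before handing it to the capture device.
float clampExposureBias(float requested, float minBias, float maxBias) {
    return std::max(minBias, std::min(requested, maxBias));
}
```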

Trying to move 2D object with accelerometer

I want to be able to direct my player while he is in the air, so I made a method called CheckTilt(), called from FixedUpdate(), which basically checks whether the tablet has been tilted in the x direction. The logic inside CheckTilt() is gated on a bool called isGrounded, which is false when the player is airborne and true otherwise; that part works fine. However, I cannot get the player to move based on the accelerometer. Here is my method:
public void checkTilt()
{
    if (!isGrounded)
    {
        float tiltSpeed = 10.0f;
        float x = Input.acceleration.x;
        Vector2 tilted = new Vector2(x, 0);
        myRigidBody.AddForce(tilted * tiltSpeed * Time.deltaTime);
    }
}
Other Info: I'm building this for android device, and have tested it with the unity remote.
Unfortunately the answer is simply that nothing like this works with the Unity Remote - you must build to the device to try it.
Re your other question: it's almost impossible to guess the "parity". You have to try it, and if it's not right, just multiply by -1f. (What I do is include a UI button on screen which swaps the parity - but really, just build twice until you get it!)
"I'm still not getting any movement when moved..."
It's an incredible pain in the ass getting the axes right. YES, in fact, it is 100% a fact that you will need to read all 3 axes, even in a 2D game.
Don't forget that
Apple STUPIDLY USE Z-BACKWARDS orientation ...
because they are utter idiots who have never worked in the game industry.
DOCO
But you will almost certainly have to do this:
You are familiar with using the Canvas / UI system, right? If not, tell me.
1) Make a canvas and add three Text fields.
2) Do this:
public Text showX;
public Text showY;
public Text showZ;
and then
showX.text = yourAccelerometer.x.ToString("f2");
... same for the others ...
It seems like a pain but I assure, it's the ONLY WAY to figure out what the hell is going on.

Mouse position from the Raw Input method

I am trying to get the mouse position using the Raw Input method. In the RAWMOUSE structure I always get MOUSE_MOVE_RELATIVE as usFlags, which means I'm getting values relative to the last mouse position, but I want the absolute position of the mouse. How do I get the absolute mouse position from Raw Input?
RAWMOUSE contains the exact same values that Windows received from the mouse hardware (which could be HID USB/Bluetooth, an i8042 PS/2 mouse, etc.). Usually mice send relative movement, but some send absolute - for example touch screens or the RDP mouse (yay!).
So if you need the absolute mouse position (for example for a game menu), you can use the GetCursorPos API to get the coordinates of the Windows-rendered cursor on the screen.
But that's not the same thing as what is sent with the MOUSE_MOVE_ABSOLUTE flag in RAWMOUSE.
MOUSE_MOVE_ABSOLUTE is mouse movement in the 0..65535 range in device space, not screen space. Here is some code showing how to convert it to screen space:
if ((rawMouse.usFlags & MOUSE_MOVE_ABSOLUTE) == MOUSE_MOVE_ABSOLUTE)
{
    bool isVirtualDesktop = (rawMouse.usFlags & MOUSE_VIRTUAL_DESKTOP) == MOUSE_VIRTUAL_DESKTOP;
    int width = GetSystemMetrics(isVirtualDesktop ? SM_CXVIRTUALSCREEN : SM_CXSCREEN);
    int height = GetSystemMetrics(isVirtualDesktop ? SM_CYVIRTUALSCREEN : SM_CYSCREEN);
    int absoluteX = int((rawMouse.lLastX / 65535.0f) * width);
    int absoluteY = int((rawMouse.lLastY / 65535.0f) * height);
}
else if (rawMouse.lLastX != 0 || rawMouse.lLastY != 0)
{
    int relativeX = rawMouse.lLastX;
    int relativeY = rawMouse.lLastY;
}
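The device-space mapping above can be factored into a pure helper so the arithmetic is easy to test in isolation (the function name is mine, not part of the Raw Input API):

```cpp
#include <cstdint>

// Map a MOUSE_MOVE_ABSOLUTE coordinate (0..65535, device space)
// onto a screen axis that is screenSize pixels wide or tall.
int absoluteToScreen(int32_t lLast, int screenSize) {
    return static_cast<int>((lLast / 65535.0f) * screenSize);
}
```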
[Weird to see a 9-year-old unanswered question as a top Google search result, so I'll offer an answer... better late than never!]
At the level of the Raw Input API, you are getting the raw horizontal/vertical movement of the mouse device -- what the mouse reports at the hardware level.
According to the docs (I haven't verified), these delta X/Y values are not yet processed by the desktop mouse speed/acceleration/slowdown settings. So even if you started tracking the deltas from a known absolute location, you would quickly drift away from where Windows is positioning the mouse cursor on-screen.
What's not clear from the docs is what units or scale these relative X/Y values are reported in. It would be nice if they were somehow normalized, but I suspect it depends on the DPI resolution of your mouse. (I will find a mouse with adjustable DPI to test, and report back, if no one edits me first.)
Edit/Update: I got my hands on a mouse with adjustable DPI and did some crude testing, enough to confirm that the rough scale of the lLastX/Y values seems to match the hardware DPI, e.g. with the mouse in 1200 dpi mode, physically moving it 1 inch from left to right generates a net sum of lLastX values ~= 1200.
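Under that assumption (raw counts per inch ≈ hardware DPI), converting an accumulated delta into physical distance is a single division. A tiny sketch, hedged on the DPI value being correct for your mouse:

```cpp
// Convert an accumulated raw-input delta (in mouse counts) into
// physical inches, assuming the device reports `dpi` counts per inch.
double countsToInches(long counts, int dpi) {
    return static_cast<double>(counts) / dpi;
}
```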