I want to simulate, in real time in Unity, an aircraft that is flying in another flight simulator, using the GPS data (latitude/longitude/altitude) that simulator sends. The aircraft in Unity should move exactly like the aircraft in the other simulator.
Unity, of course, uses Cartesian x/y/z coordinates. I have studied many examples of converting between these two kinds of data, but in all of them the coordinate transformation goes wrong and the aircraft moves differently. I still do not understand how to do it.
Is there an easy formula for realizing this transformation?
Here are a few samples of the data I receive from the simulator:
<GPS>
<Lat>21.325352</Lat>
<Long>-157.929607</Long>
<Al>885.512322</Al>
</GPS>
<GPS>
<Lat>21.325356</Lat>
<Long>-157.929555</Long>
<Al>886.829367</Al>
</GPS>
<GPS>
<Lat>21.325357</Lat>
<Long>-157.929540</Long>
<Al>887.487356</Al>
</GPS>
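For short distances, one common approach is a flat-earth (equirectangular) approximation around a fixed reference point. Below is a minimal sketch of that idea in Unity C#; the choice of the first sample as the local origin, the axis convention (x = east, y = up, z = north) and the class/method names are assumptions for illustration, not part of the original question.
```csharp
using System;
using UnityEngine;

// Sketch: convert GPS samples (lat/lon/alt) into local Unity positions using a
// flat-earth approximation around the first received sample. Accurate enough
// for small areas; for long distances a proper geodetic library is needed.
public class GpsToUnityPosition : MonoBehaviour
{
    const double EarthRadius = 6378137.0;   // WGS84 equatorial radius in metres

    double originLat, originLon, originAlt;
    bool hasOrigin;

    public Vector3 ToUnityPosition(double latDeg, double lonDeg, double altMetres)
    {
        if (!hasOrigin)
        {
            // Use the first sample as the local origin of the Unity scene.
            originLat = latDeg; originLon = lonDeg; originAlt = altMetres;
            hasOrigin = true;
        }

        double latRad = latDeg * Math.PI / 180.0;
        double dLat = (latDeg - originLat) * Math.PI / 180.0;
        double dLon = (lonDeg - originLon) * Math.PI / 180.0;

        float north = (float)(dLat * EarthRadius);                    // metres north of origin
        float east  = (float)(dLon * EarthRadius * Math.Cos(latRad)); // metres east of origin
        float up    = (float)(altMetres - originAlt);                 // metres above origin

        // Assumed mapping: Unity x = east, y = up, z = north.
        return new Vector3(east, up, north);
    }
}
```
Feeding the three samples above through this conversion would place the aircraft a few metres apart along the east axis and a metre or two apart vertically, which matches the small changes in longitude and altitude.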
Related
I am building a game in Unity with two Azure Kinects. How do I calibrate them to get the positional data of a body and solve occlusion?
Currently I get two bodies for each person. How can I map the two virtual bodies (one from each camera) to each individual person?
Your idea is a good one: multi-camera setups offer a way to increase the coverage of the captured human body and to minimize occlusions.
Please go through the document Benefits of using multiple Azure Kinect DK devices, in particular the section on Fill in occlusions. Although the Azure Kinect DK data transformations produce a single image, the two cameras (depth and RGB) are actually a small distance apart, and this offset is what makes occlusions possible. Use the Kinect SDK to capture the depth data from both devices and store it in separate matrices, then align the two matrices using a 3D registration algorithm. This lets you map the data from one device into the other's coordinate system, taking the relative position and orientation of each device into account (see the sketch after the quoted excerpt below).
Please refer to this article by Nadav Eichler:
Spatio-Temporal Calibration of Multiple Kinect Cameras Using 3D Human Pose
Quoted:
When using multiple cameras, two main requirements must be fulfilled in order to fuse the data across cameras:
Camera Synchronization (alignment between the cameras' clocks).
Multi-Camera Calibration (calculating the mapping between the cameras' coordinate systems).
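Once that calibration has produced a rigid transform between the two devices, applying it and merging duplicate bodies is straightforward. The sketch below assumes you already have such a transform as a Unity Matrix4x4 (here called deviceBToA) and that each tracked body is available as a list of joint positions; those names and the simple pelvis-distance matching rule are illustrative assumptions, not part of the Azure Kinect SDK.
```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: bring device B's tracked bodies into device A's coordinate space and
// decide which bodies from the two devices belong to the same person.
public static class DualKinectFusion
{
    // Transform every joint of a body tracked by device B into device A's space.
    public static List<Vector3> ToDeviceASpace(List<Vector3> bodyInB, Matrix4x4 deviceBToA)
    {
        var result = new List<Vector3>(bodyInB.Count);
        foreach (var joint in bodyInB)
            result.Add(deviceBToA.MultiplyPoint3x4(joint));
        return result;
    }

    // Two bodies (now in the same space) are treated as the same person when their
    // root joints (assumed to be index 0) are closer than maxDistance metres.
    public static bool SamePerson(List<Vector3> bodyA, List<Vector3> bodyBInASpace, float maxDistance = 0.3f)
    {
        return Vector3.Distance(bodyA[0], bodyBInASpace[0]) < maxDistance;
    }
}
```
Matched pairs can then be fused, for example by averaging joint positions or by preferring whichever device has the higher joint-confidence value for each joint.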
I'm trying to make an augmented reality application about chemistry using Vuforia and Unity3D. I will physically have a big printed periodic table of the elements and some small spherical objects, and I don't know how to determine which element is covered by a sphere when I put it on the periodic table. Does anyone have an idea, or has anyone done this already? Afterwards I will associate that chemical element with the sphere.
I think your best bet would be to track not only the position of the printed periodic table as a Vuforia image target, but also the position of the small spherical objects as Vuforia model targets. Whether that works depends on the exact characteristics of those spherical objects and on how suitable they are for tracking as model targets. Otherwise, consider replacing the spheres with alternative objects, possibly with trackable stickers on them. Once both poses are tracked, finding the covered element is a coordinate lookup (see the sketch below).
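Here is a minimal sketch of that lookup, assuming the printed table is a Vuforia image target lying in its local X-Z plane and that the element cells form a regular grid. The grid dimensions, physical sizes and field names are assumptions you would measure from your own printout.
```csharp
using UnityEngine;

// Sketch: find which periodic-table cell the tracked sphere is sitting on by
// expressing the sphere's position in the image target's local space.
public class ElementUnderSphere : MonoBehaviour
{
    public Transform tableTarget;    // Vuforia image target transform (the printed table)
    public Transform sphereTarget;   // tracked transform of the sphere (model target)
    public int columns = 18;         // periodic table has 18 groups
    public int rows = 9;             // 7 periods + lanthanides/actinides rows, adjust to your layout
    public float tableWidth = 1.0f;  // physical width of the printout, in metres
    public float tableHeight = 0.5f; // physical height of the printout, in metres

    // Returns the (column, row) cell currently covered by the sphere.
    public Vector2Int GetCoveredCell()
    {
        // Sphere position in the table's local space: x runs across the printout,
        // z runs down it, with (0, 0) at the centre of the image target.
        Vector3 local = tableTarget.InverseTransformPoint(sphereTarget.position);

        float u = local.x / tableWidth + 0.5f;   // 0..1 from left to right
        float v = local.z / tableHeight + 0.5f;  // 0..1 from one edge to the other

        int col = Mathf.Clamp(Mathf.FloorToInt(u * columns), 0, columns - 1);
        int row = Mathf.Clamp(Mathf.FloorToInt(v * rows), 0, rows - 1);
        return new Vector2Int(col, row);
    }
}
```
A lookup table from (column, row) to element symbol then gives you the element to associate with the sphere.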
We are developing an application that loads models exported from 3ds Max as FBX into Unity (that seems to work), modifies them, and then sends the changes back to 3ds Max for a clean render.
We rotate the model pivot in Max in such a way that the model is shown correctly in Unity after the export.
What we got so far:
Position:
x(max) = x(unity)
y(max) = z(unity)
z(max) = y(unity)
Rotation:
x(max) = x(unity)
y(max) = -y(unity)
z(max) = z(unity)
Simple rotations seem to work, but complex ones do not. I suspect we did not properly account for the mirroring between Max's right-handed and Unity's left-handed coordinate systems, or for the different rotation multiplication order. How is the mapping done correctly?
There is a related question with no answers:
Unity rotation convertion
The issue was the different rotation order of Unity (XYZ) and Max (ZYX). That explains why single rotations work but complex ones do not. If you apply the transformation from the question above and then perform each rotation consecutively, in the same order, in Unity, it works.
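A minimal sketch of that fix in Unity C#, assuming the axis/sign mapping from the question (x -> x, y -> -y, z -> z) and applying the angles as consecutive world-space rotations in Max's order. The exact signs and order may still need adjusting for a specific export setup; treat this as a starting point rather than a reference conversion.
```csharp
using UnityEngine;

// Sketch: rebuild a 3ds Max Euler rotation in Unity by composing the individual
// axis rotations one after another instead of reinterpreting the Euler triple.
public static class MaxRotation
{
    public static Quaternion ToUnity(float xMax, float yMax, float zMax)
    {
        // World-space rotations applied in sequence: Z first, then Y, then X.
        // In Unity, (a * b) applies b first, so the composite is qx * qy * qz.
        Quaternion qz = Quaternion.AngleAxis(zMax, Vector3.forward);
        Quaternion qy = Quaternion.AngleAxis(-yMax, Vector3.up);     // sign flip from the question's mapping
        Quaternion qx = Quaternion.AngleAxis(xMax, Vector3.right);
        return qx * qy * qz;
    }
}
```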
I am working on a simple SLAM simulation for a project. Here's the problem:
For the simulation I will be using a mobile robot moving in a room. The robot has laser distance sensors, so it can measure the distances from itself to the walls within a given field-of-view angle, as shown in the first figure:
The MATLAB code I've implemented for the simulation simply calculates the angle from each wall point to the robot's pose and returns all the points whose angle lies inside a range, for example [-60°, +60°] (a port of this filter is sketched below).
For more complex room configurations this can't be used, though, since walls that shouldn't be detected (walls of other rooms) are detected as well, as seen in the second figure:
I need a better way of implementing this detection inside the simulation so that I can use it for any kind of room like this one, producing results like this:
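For reference, here is a minimal sketch of the angle-based filter described above, ported to C# for consistency with the rest of this page (the original is MATLAB). The function and parameter names are made up for illustration. As noted, it only checks bearing, not line of sight, which is exactly why walls of other rooms leak through.
```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: return every wall point whose bearing from the robot lies inside the
// sensor's field of view. No occlusion test is performed here.
public static class LaserScanSim
{
    public static List<Vector2> PointsInFov(Vector2 robotPos, float robotHeadingDeg,
                                            IEnumerable<Vector2> wallPoints, float halfFovDeg = 60f)
    {
        var visible = new List<Vector2>();
        foreach (var p in wallPoints)
        {
            Vector2 toPoint = p - robotPos;
            float bearing = Mathf.Atan2(toPoint.y, toPoint.x) * Mathf.Rad2Deg;
            float relative = Mathf.DeltaAngle(robotHeadingDeg, bearing); // signed angular offset from heading
            if (Mathf.Abs(relative) <= halfFovDeg)
                visible.Add(p);
        }
        return visible;
    }
}
```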
Games like FroggyJump for iPhone figure out the rotation of the iPhone. I'm getting confused by the acceleration values. How do I calculate the amount of rotation? I suppose I also need to handle the case where the iPhone isn't perfectly upright.
Thank you.
I also want to use the new Core Motion framework with "Device Motion" on the iPhone 4 for extra precision. I guess I'll have to use a low-pass filter for the other devices.
It's the yaw.
Having given Froggy Jump a quick go, I think it's likely using the accelerometer's x value directly as the left/right acceleration on the frog. If the device is stationary, you can think of the accelerometer as giving you the vector that points upward into space, expressed in the device's local axes. For something like a rolling ball, or anything else that accelerates due to tilt, you want to use the values directly.
For anything that involves actually knowing angles, you're probably best picking the axis around which you want to detect rotation then using the C function atan2f on the accelerometer values for the other two axes. With just an accelerometer, there are some scenarios in which you can't detect rotation — for example, if the device is flat on a table then an accelerometer can't detect yaw. The general rule is that rotations around the gravity vector can't be detected with an accelerometer alone.
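Here is a minimal sketch of that atan2 approach, written against Unity's Input.acceleration for consistency with the rest of this page (the original question is about iOS, where you would call atan2f on the raw accelerometer components in the same way). The axis choices assume the device is held roughly upright in portrait, so gravity points along the device's -y axis.
```csharp
using UnityEngine;

// Sketch: estimate tilt angles from the accelerometer alone using atan2 on the
// two axes perpendicular to the rotation axis of interest.
public class TiltFromAccelerometer : MonoBehaviour
{
    void Update()
    {
        Vector3 a = Input.acceleration;

        // Rotation about the axis pointing out of the screen (roll):
        // angle of the gravity vector in the device's x/y plane.
        float rollDeg = Mathf.Atan2(a.x, -a.y) * Mathf.Rad2Deg;

        // Tilt towards/away from the user (pitch), around the device's x axis.
        float pitchDeg = Mathf.Atan2(a.z, -a.y) * Mathf.Rad2Deg;

        Debug.Log($"roll {rollDeg:F1}, pitch {pitchDeg:F1}");
    }
}
```
As the answer notes, this cannot give you yaw: rotation around the gravity vector leaves the accelerometer reading unchanged, so you need a gyroscope (Core Motion's Device Motion) or a compass for that.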