I am trying to persist markers in an augmented reality game. Here is the gist of what I am doing:
I have my users record and save an area to an ADF. They then drop markers into the scene, and I save out each marker's position data, in Unity world coordinates, to a text file. I then restart the app, load and localize to the ADF, and load the markers.
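For reference, the save/load step is essentially this (a simplified sketch with placeholder names, not my exact code):
// Hypothetical sketch: serialize marker positions (Unity world coordinates) to a text file.
using System.Collections.Generic;
using System.IO;
using UnityEngine;

[System.Serializable]
public class MarkerData
{
    public List<Vector3> positions = new List<Vector3>();
}

public static class MarkerStore
{
    private static string FilePath
    {
        get { return Path.Combine(Application.persistentDataPath, "markers.json"); }
    }

    public static void Save(MarkerData data)
    {
        File.WriteAllText(FilePath, JsonUtility.ToJson(data));
    }

    public static MarkerData Load()
    {
        return File.Exists(FilePath)
            ? JsonUtility.FromJson<MarkerData>(File.ReadAllText(FilePath))
            : new MarkerData();
    }
}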
In order to get this working, I've modified the ARPoseController.cs file in the Unity demo package to use the Area Description as its base frame. In the _UpdateTransformation method I've swapped out the frame pair
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
for
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
I've also added some code confirming that I'm successfully localizing to the ADF, but I'm noticing that my markers' positions in Unity world space do not line up properly with the real environment.
I can confirm that my markers save and load properly relative to the START_OF_SERVICE origin, so I assume they are serializing and deserializing correctly. What could be causing this? Am I wrong in assuming this should just work by switching the base frame of the frame pair to AREA_DESCRIPTION instead of START_OF_SERVICE?
I had a similar problem getting AR and ADFs integrated: I had to modify TangoPointCloud to check whether you're using an Area Description in OnTangoDepthAvailable() and adjust the baseFrame as required.
i.e.:
if (m_tangoDeltaPoseController.m_useAreaDescriptionPose)
{
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
else
{
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
That way, the geometry of the point cloud adjusts itself based on the ADF offset instead of from device start.
After that change, when I use the AR sample code to drop markers, it registers the surface properly, so the markers land in the correct spots with the correct orientation. I'm still encountering some flakiness with the markers not adjusting when relocalized, though; I still have to look into how AreaLearningInGameController handles loop-closure events.
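For the relocalization part, the pattern in the Tango examples (as far as I remember, so treat the details as assumptions) is to listen for a valid pose on the AREA_DESCRIPTION -> START_OF_SERVICE frame pair; that's the relocalization signal and the point where saved markers should be re-anchored:
using Tango;
using UnityEngine;

public class RelocalizationWatcher : MonoBehaviour, ITangoPose
{
    // Register this behaviour with the TangoApplication (m_tangoApplication.Register(this))
    // so this callback fires for every new pose.
    public void OnTangoPoseAvailable(TangoPoseData pose)
    {
        bool relocalizationPair =
            pose.framePair.baseFrame == TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION &&
            pose.framePair.targetFrame == TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;

        if (relocalizationPair && pose.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID)
        {
            // Relocalized against the ADF: re-anchor/reload the saved markers here.
        }
    }
}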
Hope that helps!
I am trying to create a multistory maze. I found it fairly easy to create the first level; then I realized I had no idea how to simply raise this first level up in order to create a second level beneath it.
Also, is there a way to 'fuse' all of these wall parts into one object and then raise this object up?
Edit: Much to my embarrassment, there is a way to fuse objects. The 'Union' tool is what I needed but had no idea existed. I 'fused' (unioned) the various parts that made up my walls, joining them together into one big part. After that unioning, moving the entire maze upwards became quite easy.
I don't fully understand your problem, but I think you're making a 3D maze in Roblox and you want the first level to go up and the second level to form below it.
If the maze is NOT procedurally generated and the maps are built by hand, then you can have a script detect when the player wins and raise the first level, using either a tween or a loop (I'd recommend a tween; a loop and a linear tween do the same thing), and then play an effect that shows the next level forming (transparency, parts coming back together, etc.).
I will show you the simplest example, but you can add whatever you want:
local ts = game:GetService("TweenService")
local ti = TweenInfo.new(0.5, Enum.EasingStyle.Linear, Enum.EasingDirection.Out) -- Customize it to your liking
local levels = game.LevelStorageParent.LevelFolderOrModelHere -- Change to wherever you store your levels
local pos = workspace.Level1.Position -- Change (not `levels`, because we are not cloning this one)
local levelYRaise = 10 -- Put any number, or get the bounding box for a full raise

-- Tween the first level upwards
ts:Create(workspace.Level1, ti, {Position = Vector3.new(pos.X, pos.Y + levelYRaise, pos.Z)}):Play()

-- Clone the second level, place it below the first, and fade it in
local newLevel = levels.Level2:Clone()
newLevel.Parent = workspace
newLevel.Position = workspace.Level1.Position - Vector3.new(0, workspace.Level1.Size.Y, 0)
newLevel.Transparency = 1
ts:Create(newLevel, ti, {Transparency = 0}):Play()
Change the code to your liking, along with your hierarchy names and parenting.
For an academy project I have to find parts of an object (location/angle relative to the object). The object is marked with a QR code.
Currently I'm stuck on the basics. I scanned a room and loaded it with the "Spatial Object Mesh Observer".
But this observer gives no relevant information:
The observer does not attempt to find 3D model LODs when sending the meshes to the application.
Does someone have a hint where I could start?
Scanned room with a box (the object to find):
var observer = CoreServices.GetSpatialAwarenessSystemDataProvider<IMixedRealitySpatialAwarenessMeshObserver>();
// Loop through all known meshes
foreach (SpatialAwarenessMeshObject meshObject in observer.Meshes.Values)
{
    Mesh mesh = meshObject.Filter.mesh;
    var vertices = mesh.vertices;
    // mesh.vertexCount -> 15978
    // mesh.vertices -> empty
    // mesh.triangles -> empty
    // Do something with the Mesh object
}
Edit: 29.04.2021
Unity: Unity 2019.4.21f1
MRTK: 2.6.1
Everything seems to be loaded correctly and the triangles are recognized in an external tool.
However, this information is not available in Unity.
From our understanding, the issue is that you can't obtain the vertices and triangles properties of the Spatial Mesh at runtime. So we made some modifications based on the SpatialAwarenessMeshDemo scene (Assets/MRTK/Examples/Demos/SpatialAwareness/Scenes) to reproduce this issue. We added the following code to the ToggleObservers() method and had it invoked when the sphere is clicked.
var observer = CoreServices.GetSpatialAwarenessSystemDataProvider<IMixedRealitySpatialAwarenessMeshObserver>();
foreach (SpatialAwarenessMeshObject meshObject in observer.Meshes.Values)
{
    Mesh mesh = meshObject.Filter.mesh;
    Debug.Log(mesh.vertexCount);
    Debug.Log(mesh.vertices);
    Debug.Log(mesh.triangles);
}
In our test, everything works well both on the HoloLens 2 device and with Unity Holographic Remoting; it always outputs the expected values. So we recommend that you check for updates in Settings to see if there is a system update available for your HoloLens 2, and then follow our steps to make a simple test and see if that works. This may help you locate the problem in your project.
I moved the code to void OnBecameVisible(), and now the information is available.
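In case it helps others, the working version is essentially this (a minimal sketch; it assumes the script sits on an object with a Renderer, since OnBecameVisible() only fires for visible renderers):
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;
using UnityEngine;

public class SpatialMeshReader : MonoBehaviour
{
    // Fires once the attached renderer becomes visible to a camera.
    private void OnBecameVisible()
    {
        var observer = CoreServices.GetSpatialAwarenessSystemDataProvider<IMixedRealitySpatialAwarenessMeshObserver>();
        if (observer == null)
        {
            return;
        }

        foreach (SpatialAwarenessMeshObject meshObject in observer.Meshes.Values)
        {
            Mesh mesh = meshObject.Filter.mesh;
            Debug.Log(mesh.vertexCount + " vertices, " + (mesh.triangles.Length / 3) + " triangles");
        }
    }
}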
I am trying to load an animation from an fbx file and have it play on a GameObject:
TestObject.AddComponent<Animation>();
animation_handler = TestObject.GetComponent<Animation>();
walking_anim = Resources.Load("fbx_anims/walking_anim_test", typeof(AnimationClip)) as AnimationClip;
if (walking_anim == null)
{
    Debug.Log("walking anim not found");
}
walking_anim.legacy = true;
animation_handler.AddClip(walking_anim, "walking");
animation_handler.wrapMode = WrapMode.Loop;
In the game loop, I tried using this:
if (Input.GetKeyDown(KeyCode.W))
{
    if (!animation_handler.IsPlaying("walking"))
    {
        animation_handler.clip = walking_anim;
        animation_handler.Play("walking");
    }
}
It doesn't give any errors, yet it doesn't work either. Is there anything I'm missing?
EDIT: For clarification: the model stays in the default T-pose after pressing 'W'. After inserting Debug.Logs at various points, I can confirm that the Play function is called only once, after which IsPlaying always returns true. Yet the "playing" animation causes no visual change in the model (and yes, the bone names are the same).
You don't want to use the Animation component; it is an old legacy component that has been replaced by the much-improved Animator component. There are a lot of good posts on the Internet on how to use it, so no need to repeat them here. The important steps are:
1. Add the Animator (not Animation) component to the model.
2. Create an "Animator Controller" in your project and add the clips (like the "walking_anim"). Here you can have a lot of different clips and tell Unity how to interpolate between them by using different parameters.
3. Add the "Animator Controller" to your "Animator" component.
4. Add an "avatar" of your model (usually created when the model is imported).
5. In code, alter the parameters of the Animator Controller to tell it which animation clips to play.
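For the last step, a minimal sketch (assuming the controller defines a bool parameter named "IsWalking" that gates the transition into the walking clip):
using UnityEngine;

public class PlayerAnimation : MonoBehaviour
{
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Drive the controller; the transition to/from the walking state
        // is set up in the Animator window, not in code.
        animator.SetBool("IsWalking", Input.GetKey(KeyCode.W));
    }
}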
It may look like a lot of steps, but it is not so hard and you will quickly have a walking, running, jumping creature on your screen. Good luck!
Hi, how do we change the motion in an AnimatorController using a script in the Unity Editor?
The red highlight is the one I'd like to change, but using a script.
My use case is to copy animations from one object, process each animation (e.g. adding an offset rotation), then add it to another object. Since my object has many child objects, I need to create a script to automate this.
Changing the AnimatorController via editor scripts is always quite tricky!
First of all, you need to get your AnimationClip from somewhere:
// somewhere get the AnimationClip from
var clip = new AnimationClip();
Then you will have to cast the runtimeAnimatorController to an AnimatorController, which is only available in the Editor (put using UnityEditor; at the top of the script!):
var controller = (AnimatorController)animator.runtimeAnimatorController;
Now you can get all its information. In your case you can probably use the default layer (layers[0]) and its stateMachine and, according to your image, retrieve the defaultState:
var state = controller.layers[0].stateMachine.defaultState;
or find it using Linq FirstOrDefault (put using System.Linq; at the top of the script), e.g.
var state = controller.layers[0].stateMachine.states.FirstOrDefault(s => s.state.name.Equals("SwimmingAnim")).state;
if (state == null)
{
    Debug.LogError("Couldn't get the state!");
    return;
}
Finally, assign the AnimationClip to this state using SetStateEffectiveMotion:
controller.SetStateEffectiveMotion(state, clip);
Note, however, that even though you can write individual animation curves for an animation clip using SetCurve, it is unfortunately not possible to read them back properly, so it will be very difficult to do what you want:
copy animations from one object, process the animation e.g. adding offset rotation, then add to another object.
You will have to go through AnimationUtility.GetCurveBindings, which gets quite complex ;)
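As a starting point, here is a rough sketch of that approach (editor-only; the rotation property path is an assumption and depends on how the clip was authored):
using UnityEditor;
using UnityEngine;

public static class ClipCopyUtility
{
    // Copy a clip, offsetting one (assumed) rotation curve along the way.
    public static AnimationClip CopyWithOffset(AnimationClip source, float offset)
    {
        var copy = new AnimationClip();
        foreach (EditorCurveBinding binding in AnimationUtility.GetCurveBindings(source))
        {
            AnimationCurve curve = AnimationUtility.GetEditorCurve(source, binding);
            if (binding.propertyName == "localEulerAnglesRaw.z") // hypothetical target property
            {
                for (int i = 0; i < curve.keys.Length; i++)
                {
                    Keyframe key = curve.keys[i];
                    key.value += offset;
                    curve.MoveKey(i, key);
                }
            }
            AnimationUtility.SetEditorCurve(copy, binding, curve);
        }
        return copy;
    }
}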
Good Luck!
We are developing a game about driving awareness.
The problem is that we need to show videos to the user if he makes any mistakes, after he has completed the drive. For example, if he makes two mistakes, we need to show two videos at the end of the game.
Can you help with this? I don't have any idea how to approach it.
#solus already gave you an answer regarding "how to play a (pre-recorded) video from your application". However, from what I've understood, you are asking about saving (and visualizing) a kind of replay of the "wrong" actions performed by the player. This is not an easy task, and I don't think you can receive an exhaustive answer here, only some advice. I will try to give you my own.
First of all, you should "capture" the position of the player's car at various points in time.
As an example, you could read the player's car position every 0.2 seconds and save it into a data structure (for example, a List).
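A minimal sketch of that recording idea (names like "car" are placeholders):
using System.Collections.Generic;
using UnityEngine;

public class ReplayRecorder : MonoBehaviour
{
    public Transform car; // the player's car
    public List<Vector3> positions = new List<Vector3>();
    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= 0.2f) // sample every 0.2 seconds
        {
            timer -= 0.2f;
            positions.Add(car.position); // one waypoint for the future replay
        }
    }
}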
Then, you would implement some logic to detect the "wrong" actions (crashes, speeding... they obviously depend on your game) and save a reference to the pair ["mistake", "relevant portion of the list containing the car's positions for that event"].
Now you have everything you need to recreate a replay of the action: that is, making the car "drive alone" by reading the previously saved positions, which will act as waypoints for generating the route.
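A companion sketch for the playback (assumes at least two recorded waypoints and the same 0.2-second sampling interval):
using System.Collections.Generic;
using UnityEngine;

public class ReplayPlayer : MonoBehaviour
{
    public Transform ghostCar; // the car shown during the replay
    public List<Vector3> waypoints; // positions captured by ReplayRecorder
    private float t;

    void Update()
    {
        t += Time.deltaTime / 0.2f; // advance one waypoint per 0.2 s
        int i = Mathf.Min((int)t, waypoints.Count - 2);
        // Interpolate between consecutive waypoints; Lerp clamps at the final one.
        ghostCar.position = Vector3.Lerp(waypoints[i], waypoints[i + 1], t - i);
    }
}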
Obviously, you also have to deal with the camera's position and rotation: either leave it attached to the car (as in the normal "in-game" action), or move it over time to catch the most interesting angles, as AAA racing games do (this will make the overall task more difficult, of course).
Unity will import a video as a MovieTexture. It will be converted to the native Theora/Vorbis (Ogg) format. (Use ffmpeg2theora if import fails.)
Simply apply it as you would any texture. You could use a plane or a flat cube. You should adjust its localScale to the aspect ratio of your video (movie.width/(float)movie.height).
Put the attached audio clip in an AudioSource, then call movie.Play() and audio.Play().
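Putting those steps together, roughly (a sketch assuming the MovieTexture was imported into a Resources folder as "myMovie", and the object has both a Renderer and an AudioSource):
using UnityEngine;

public class MoviePlayer : MonoBehaviour
{
    void Start()
    {
        MovieTexture movie = Resources.Load<MovieTexture>("myMovie"); // placeholder path
        GetComponent<Renderer>().material.mainTexture = movie;

        AudioSource audioSource = GetComponent<AudioSource>();
        audioSource.clip = movie.audioClip;

        // Match the plane's proportions to the video's aspect ratio.
        transform.localScale = new Vector3(movie.width / (float)movie.height, 1f, 1f);

        movie.Play();
        audioSource.Play();
    }
}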
You could also load the video from a local file path or the web (in the correct format).
var movie = new WWW(@"file://C:\videos\myvideo.ogv").movie;
...
if (movie.isReadyToPlay)
{
    renderer.material.mainTexture = movie;
    audio.clip = movie.audioClip;
    movie.Play();
    audio.Play();
}
Use MovieTexture, but do not forget to install QuickTime; you need it to import movie clips (a .mov file, for example).