Path figure drawn over the agent figure in the 2D visualization (AnyLogic simulation)

I've just built a model of an airport. I've created an agent called 'airplane', and when it moves along the path, the 2D visualization shows the path figure over the airplane figure (the 3D view is fine). Can anyone help me?

The Z-order in the 2D visualization is unrelated to the Z-values set for the 3D animation. If the airplane agent population lives in the Main agent, you can see the agent presentation at the origin. I assume the paths are simply created in Main. Drag the airplane presentation onto a path and use the 'Bring forward' or 'Bring to front' actions until the airplane appears above the path, then move it back to the origin.
I'm not sure how/if this could be solved without having a custom population in Main.

Related

How to make Holograms move relative to user's head in Hololens 2?

I am creating a simple app on HoloLens 2 using Unity. I create two game objects and want them to move based on the user's head movement, i.e. they do not stay still at a place in space but move relative to the user's head. However, I am not sure how to enable this setting. Can someone please help?
The Orbital solver provided in MRTK can implement this idea without even writing any code. It can lock the object to a specified position and offset it from the player. It is recommended to refer to SolverExamples.unity, located at /MRTK/Examples/Demos/Solvers/Scenes, to get started with the Solver components.
If I understand you correctly, you want an object to stay at a specific distance from the HoloLens 2 at any given time,
so that (for example) a cube is always in the upper right corner of the user's view.
If that is the case, you can position the desired object as a child of the Main Camera (located under the MixedRealityPlayspace) in the Hierarchy view.
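If you prefer to set up the Orbital approach from code rather than in the inspector, a minimal sketch could look like the following. This assumes MRTK is imported; the class name `FollowHead` and the offset values are illustrative, not part of any API.

```csharp
using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

// Attach this to the hologram that should follow the user's head.
public class FollowHead : MonoBehaviour
{
    void Start()
    {
        // The SolverHandler tells solvers which transform to track -- here, the head/camera.
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.Head;

        // Orbital keeps the object at a fixed offset from the tracked target.
        var orbital = gameObject.AddComponent<Orbital>();
        orbital.LocalOffset = new Vector3(0.2f, 0.2f, 1.0f); // upper-right, ~1 m ahead
    }
}
```

The solver smoothly updates the object's pose every frame, which usually looks better than rigidly parenting the object under the camera.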

Agent animation on top of another resource agent's animation (3D)

How can I place an agent's animation over another resource agent's animation in a 2D simulation?
I already moved the order in the Palette, but the seized resource still appears on top of the attached agent.
In the image below, the agent is the white triangle and the seized resource is the blue truck; I need to see the triangle on top of the blue truck. The truck doesn't have a presentation item in the Palette (I don't know why), yet it still appears in the simulation.
Palette vs Animation
In the 3D animation everything looks correct because I have the Z coordinate set correctly; it is the 2D animation that has the issue.
You must ensure that your truck agent animation is part of Main as well.
Likely, you will need to have them in a custom agent population and your ResourcePool needs to add new units to it (you can set this up under the "Advanced" properties of the ResourcePool).
Then you can "layer" it below your white triangle using API calls as described in the help here.

How to add 3D elements into the Hololens 2 field of view

I'm trying to build a remote-assistance solution using the HoloLens 2 for university; I already set up the MRTK WebRTC example with Unity. Now I want to add the functionality of the desktop counterpart being able to add annotations in the field of view of the HoloLens to support the remote guidance, but I have no idea how to achieve that. I was considering Azure Spatial Anchors, but I haven't found a good example of adding 3D elements to the remote field of view of the HoloLens from a 2D desktop environment. Also, I'm not sure whether Spatial Anchors is the right framework, as it is mostly for persistent markers in the AR environment, whereas I'm looking for a temporary visual indicator.
Has anyone already worked on such a solution and can give me a few frameworks/hints on where to start?
To find the actual world location of a point from a 2D image, you can refer to this answer: https://stackoverflow.com/a/63225342/11502506
In short, the cameraToWorldMatrix and projectionMatrix transforms define, for each pixel, a ray in 3D space representing the path taken by the photons that produced that pixel. But anything along that ray will show up on the same pixel, so to find the actual world location of a point you'll need to use the Physics.Raycast method to calculate the impact point in world space where the ray hits the spatial mapping mesh.
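The un-project-then-raycast step described above can be sketched roughly as follows. This assumes you already obtained the locatable camera's cameraToWorldMatrix and projectionMatrix and a pixel in normalized image coordinates; the layer name "Spatial Mapping", the 10 m ray length, and the sign conventions are assumptions that may need adjusting for your setup.

```csharp
using UnityEngine;

public static class PixelToWorld
{
    // Cast a ray through a pixel (normalized to -1..1 on both axes)
    // against the spatial mapping mesh and return the hit point.
    public static bool TryGetWorldPoint(
        Matrix4x4 cameraToWorldMatrix,
        Matrix4x4 projectionMatrix,
        Vector2 pixelNormalized,
        out Vector3 worldPoint)
    {
        // Un-project the pixel into a view-space direction.
        Vector3 dirView = projectionMatrix.inverse.MultiplyPoint(
            new Vector3(pixelNormalized.x, pixelNormalized.y, 1f));

        // Transform the camera origin and the direction into world space.
        Vector3 origin = cameraToWorldMatrix.MultiplyPoint(Vector3.zero);
        Vector3 dirWorld = cameraToWorldMatrix.MultiplyVector(dirView).normalized;

        int mask = LayerMask.GetMask("Spatial Mapping");
        if (Physics.Raycast(origin, dirWorld, out RaycastHit hit, 10f, mask))
        {
            worldPoint = hit.point;
            return true;
        }
        worldPoint = Vector3.zero;
        return false;
    }
}
```

For the annotation scenario, the desktop side would send the clicked pixel over your WebRTC data channel, and the HoloLens would run this raycast to place the indicator.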

Unity NavMeshSurface Loading Incorrectly

I am having trouble with the NavMeshSurface build process at runtime. I followed Unity's tutorial for using the NavMeshSurface features...
https://unity3d.com/learn/tutorials/topics/navigation/making-it-dynamic?playlist=17105
...and integrated it with my project successfully. However, when the level builds the NavMesh, it builds it 90 degrees perpendicular to my level.
Visual of the NavMesh being built at runtime, 90 degrees perpendicular to the level. NOTE: the HeightMesh is being built just fine.
The tutorial didn't show any signs of this being a problem. Currently my level is built on the XZ plane because NavMesh surfaces won't generate on the XY plane. I have tried rotating the level 90 degrees, but then nothing is created. I have also taken a screenshot of my current NavMesh settings in case that helps.
Snapshot of current NavmeshSurface settings in case I have something set incorrectly.
I'm no stranger to coding, I just haven't worked with dynamic navmesh before. So if there is something else you need feel free to ask and I can post it.
Found a solution: generate the NavMesh so it is correct, then rotate your level to how you need it. Create the NavMeshSurface on its own GameObject as a child of your level, then attach your NavMesh data to the NavMeshSurface. You can then rotate the child object to match your level again.
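The bake-then-rotate workaround described in the answer could be sketched like this. This is a rough illustration of the idea, not a definitive fix; `BakeThenRotate` and the field names are placeholders for your own hierarchy, and it assumes the NavMeshComponents package (which provides `NavMeshSurface`) is in the project.

```csharp
using UnityEngine;
using UnityEngine.AI; // NavMeshSurface comes from the NavMeshComponents package

// Bake the surface while the level lies in the orientation NavMesh
// generation expects (the XZ plane), then rotate the level into place.
public class BakeThenRotate : MonoBehaviour
{
    public NavMeshSurface surface; // on its own child GameObject of the level
    public Transform level;

    void Start()
    {
        // Bake at runtime while the geometry is on the XZ plane.
        surface.BuildNavMesh();

        // Now rotate the level (with the surface as a child) to the
        // orientation the game actually uses.
        level.rotation = Quaternion.Euler(90f, 0f, 0f);
    }
}
```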

How to set dynamic hotspots for a 360° image with Unity 3D

I am trying to build a visitors' tour with Unity 3D. I have panoramic pictures of bedrooms in a hotel, and I would like to add points (hotspots) to my pictures that lead to another picture.
The problem is that I want to add these points dynamically via a backend, and I can't find a way to achieve that in Unity.
I will try to answer this question.
Unity has an XYZ coordinate system that can be mapped to the real world. I would measure the real distances to these points (from the center where you took your picture) in your location/room and send these coordinates via the backend to the Unity3D client.
In Unity you can create Vector3 positions or directions based on the coordinates you sent. Use these positions/directions to instantiate 'hotspot' prefabs at the right positions and orientations. It might be necessary to adjust the scale/units to get the right result.
Once you have your 'hotspot' objects in place, add a script to them that loads a new scene (on click) with another location/image, and repeat the process.
This is a very brief suggestion on how to do it. The code would be quite simple.
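A minimal sketch of the instantiate-and-load-on-click idea could look like this. All the names here (`HotspotData`, `HotspotSpawner`, `hotspotPrefab`, `targetScene`) are illustrative assumptions, and the backend payload format is up to you; it also assumes the hotspot prefab has a Collider so clicks register.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// One hotspot as delivered by the backend (e.g. parsed with JsonUtility.FromJson).
[System.Serializable]
public class HotspotData
{
    public float x, y, z;      // position relative to the panorama center
    public string targetScene; // scene (next panorama) to load when clicked
}

public class HotspotSpawner : MonoBehaviour
{
    public GameObject hotspotPrefab;

    // Call this with the data fetched from your backend.
    public void Spawn(HotspotData[] hotspots)
    {
        foreach (var h in hotspots)
        {
            var go = Instantiate(hotspotPrefab,
                                 new Vector3(h.x, h.y, h.z),
                                 Quaternion.identity);
            go.AddComponent<HotspotClick>().targetScene = h.targetScene;
        }
    }
}

public class HotspotClick : MonoBehaviour
{
    public string targetScene;

    // Requires a Collider on the hotspot; fires when the user clicks/taps it.
    void OnMouseDown()
    {
        SceneManager.LoadScene(targetScene);
    }
}
```

Each target scene would contain the next panorama and its own spawner, so navigating the tour is just repeated scene loads.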