Collision detection only occurs within a single source. Is there a workaround so my JS-generated top layer doesn't look like this? What I mean is that, in this example, the labels Dordrecht and 's-Gravendeel would need to be hidden when a collision with my generated source is detected.
This feature, termed "cross-source placement," is a high priority at Mapbox and is under active development. You can track the feature via this GitHub issue.
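Until cross-source placement ships, one workaround is to filter the colliding base-map labels out of the style yourself with `map.setFilter`. A minimal sketch, assuming your style has a `place-label` layer (the layer id and the name list below depend on your style and data, and would normally be derived from whatever your JS layer generates):

```javascript
// Names of the labels our custom source already renders; "Dordrecht" and
// "'s-Gravendeel" are the examples from the question above.
const customLabelNames = ['Dordrecht', "'s-Gravendeel"];

// Mapbox GL expression: keep a base-map label only if its `name`
// property is NOT among the names we already draw ourselves.
const hideCollidingLabels = [
  '!',
  ['in', ['get', 'name'], ['literal', customLabelNames]],
];

// Applied against the base style's label layer (id varies per style):
// map.setFilter('place-label', hideCollidingLabels);
```

This doesn't give you true collision detection across sources; it simply suppresses known duplicates by name, so it only works when you can enumerate the names your generated layer draws.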
I'm trying to develop an app for the HoloLens 1 using Unity. What I want to achieve is providing a pre-designed experience to users for a specific room (like a specific room in a museum).
My idea is that I scan the room with the HoloLens, use the scanned mesh in Unity to place the virtual content at the correct positions in the room, and then build the app and deploy it to the device. The goal is that I can hand a museum visitor the HoloLens, they can go to this room, start the app anywhere in the room, and see the virtual objects in the right places (for example on a specific exhibit, at the door to the next room, in the middle of the room, or ...). I don't want the visitor to place objects themselves, and I don't want the staff to do this in advance (before handing out the headset). I want to design the complete experience in Unity for one specific room.
Every time I search for use cases like this, I can't really find a starting point. Somehow the app has to recognize the position of the headset in the room (or find pre-set anchors or something like this).
I really thought this would be a very basic use case for the HoloLens.
Is there a way to achieve this goal? Later I want to design multiple experiences for all the rooms of the museum (maybe a separate app for every room).
I think I have to find pre-set anchors in this room and then place the content relative to them. But how can I define such an anchor and ensure that every visitor's device finds it, so that the virtual content appears on the corresponding real-world object?
You should start with spatial anchor technology. A spatial anchor lets you lock a GameObject to a location in the real world based on the system's understanding of the environment. Please refer to this link for more information: Spatial anchors. Then you need to persist the local spatial anchor across sessions; this documentation shows how to persist the location of WorldAnchors across sessions with the WorldAnchorStore class: Persistence in Unity. If you also want to share the experience, with multiple visitors collectively viewing or interacting with the same hologram positioned at a fixed point in space, you need to export an anchor from one device and import it on a second HoloLens; please follow this guide: Local anchor transfers in Unity.
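As a rough sketch of the persistence step, using the legacy UnityEngine.XR.WSA APIs that HoloLens 1 projects used (the anchor ID "exhibit-anchor" is just an example name, not anything prescribed by the API):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Persistence;

// Attach to the GameObject that should stay fixed in the room.
public class AnchorPersistence : MonoBehaviour
{
    // Hypothetical ID chosen for this example; any unique string works.
    private const string AnchorId = "exhibit-anchor";

    void Start()
    {
        // The store loads asynchronously on startup.
        WorldAnchorStore.GetAsync(OnStoreLoaded);
    }

    private void OnStoreLoaded(WorldAnchorStore store)
    {
        // Try to restore a previously saved anchor onto this object.
        WorldAnchor anchor = store.Load(AnchorId, gameObject);
        if (anchor == null)
        {
            // First run: anchor the object at its current pose and persist it.
            anchor = gameObject.AddComponent<WorldAnchor>();
            store.Save(AnchorId, anchor);
        }
    }
}
```

Note that anchors persisted this way are local to one device; for the museum scenario, where every headset must find the same anchor, you still need the anchor transfer or Azure Spatial Anchors approaches described here.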
Besides, in situations where you can use Azure Spatial Anchors, we strongly recommend using it. Azure Spatial Anchors makes it convenient to share experiences across sessions and devices; you can quick-start with this: How to create and locate anchors using Azure Spatial Anchors in Unity
I am trying to develop a customer support page using ARCore and ARKit with Flutter, and there are two ARCore and ARKit plugins on pub.dartlang.org.
First I need to establish how to create a customer support for:
nearest branch on map
nearest ATM on map
new credit offers and how they would affect me if I apply, etc.
Secondly, I need to use the ARCore and ARKit plugins in my Flutter app.
But I am not sure if the plugins will allow me to develop a customer support for my app.
So question is: is it possible to build Customer Support with AR and how?
I know that if I was a customer in need of support, an augmented-reality customer support experience would not be my first choice. I would much prefer a web form where I could describe my problem in a text box. In general, customer support is going to require text entry, and handling that through augmented reality is probably a bad idea.
That being said, there are ways that you could use augmented reality to improve customer support, especially if the customer's problem has to do with the physical arrangement of objects. For example, a customer support application for IKEA might use augmented reality to display a 3D view of the customer's furniture assembling itself, to help customers who have trouble reading the 2D instructions. Or, you could have the user "paint" in AR over a 3D scene of something to indicate their problem, and then send the resulting 3D scene in to the company for the support staff to look at, which could be a higher-bandwidth form of communication than the customer trying to describe the problem by typing.
But you can't just throw ARKit and ARCore at the problem of "customer support"; you have to think through what the customers actually need support to do, and whether and how you actually want to use AR to improve on what you get from a simple text form. A problem like "my package didn't arrive" can't really be solved with AR. I doubt anyone else has used these technologies for customer support; you won't find a ready-made design here.
So that's the first step: make a list of some example customer problems for the business whose customers you are supporting, and for each of them think through whether and how AR would be useful for solving it. Once you have an idea of what AR functionality you actually want, you can come back with a more specific question about how to achieve it using the tools at hand (ARKit and ARCore).
We have around four developers sharing a cubicle area. The base stations extend higher than the cubicle walls (for better tracking), but whenever two or more Vive setups are on at the same time, they interfere with each other and tracking becomes problematic.
How do professional companies that work on big Virtual Reality projects solve this problem?
The next generation of base stations and tracked devices won't have this problem, because the base station ID will be encoded in the laser, and a headset can ignore base stations it doesn't recognize. Unfortunately this will require both new base stations and new headsets / controllers, and I've seen no estimate on when the next generation will be released.
In the meantime, the only solution is to find a way to partition your environment so that any given region is covered by only two base stations. It's a pain, but you can cover a large area with a single pair. If you only have four people in a confined space, you might be able to set things up with just one pair of stations. If that's not practical, you might look into mounting barn doors around the stations to restrict their field of view.
Actually you can share a pair of base stations across many developers - that's how professional companies do it.
I have developed an app which is beyond the scale of spatial mapping, and therefore I have not included any spatial mapping in my project, but the HoloLens' built-in spatial mapping is causing issues, especially in darkened areas. Is there any way to disable the onboard mapping?
Thanks
That capability is not currently available in any of the exposed APIs. I would suggest asking the same question on forums.hololens.com to see if you can get the attention of someone from Microsoft, and requesting it as a feature.
Yes, it can be disabled.
There used to be a function called SetMappingEnabled, but Microsoft removed it in an update, so you can no longer simply call SpatialMappingManager.Instance.SetMappingEnabled(false).
Setting SpatialMappingManager.Instance.DrawVisualMeshes = false; hides the spatial mesh, and I would disable spatial shadows too. Use the function below to stop the surface observer and turn both off.
void DisableSpatialMapping()
{
    // Stop scanning the environment for new surface data.
    SpatialMappingManager.Instance.StopObserver();
    // Stop the spatial mesh from casting shadows.
    SpatialMappingManager.Instance.CastShadows = false;
    // Hide the spatial mesh itself.
    SpatialMappingManager.Instance.DrawVisualMeshes = false;
}
I received this from DavidKlineMS on forums.hololens.com:
The HoloLens is constantly scanning the environment. In situations like the one you describe (darkened areas), tracking can be impacted. If you are looking to disable the UI that indicates tracking loss, you will need to handle it manually.
The documentation pages on handling tracking loss (Unity and DirectX) may help you here.
I have to recognize a very large object in Vuforia, with the ability to get close without losing recognition.
The object is static and can be tracked from afar.
Is there any system that can position it with a static camera, and then let me navigate close to the object with a phone camera?
Sorry if I have not explained well.
Extended Tracking gives tracking a degree of persistence once a target has been detected. As the target goes out of view, Vuforia uses other information from the environment to infer the target's position by visually tracking the environment. Vuforia builds a map around the target specifically for this purpose and assumes that both the environment and the target are largely static.
Extended Tracking can significantly enhance two kinds of user experience:
Game or game-like experiences with a large amount of dynamic content that require the user to point the device away from the target while following the content
Visualizations of large objects like furniture, appliances, large home furnishings and even architectural models at the right scale and perspective
Learn more at: https://developer.vuforia.com/library/articles/Training/Extended-Tracking
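As an illustration only: in older Vuforia Unity SDKs (before Extended Tracking was folded into device tracking), enabling it per target looked roughly like the sketch below. The exact class and method names depend on your SDK version, so treat this as an assumption to verify against the Vuforia docs, not a definitive API:

```csharp
using Vuforia;

public static class ExtendedTrackingHelper
{
    // Enables Extended Tracking on an already-created image target.
    public static void Enable(ImageTargetBehaviour targetBehaviour)
    {
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();

        // Keep the environment map Vuforia builds around the target
        // alive across tracker restarts.
        tracker.PersistExtendedTracking(true);

        // Start extended tracking for this specific target so a pose
        // estimate survives as the camera moves in close or away.
        targetBehaviour.ImageTarget.StartExtendedTracking();
    }
}
```

In newer SDKs the same effect is achieved by enabling the positional device tracker instead, so check which mechanism your Vuforia version provides.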