Large object recognition in Vuforia - unity3d

I need to recognize a very large object in Vuforia, with the ability to get close to it without losing recognition.
The object is static and can be tracked from afar.
Is there any system that can register the object from a fixed camera position and then let me move close to it with the phone's camera?
Sorry if I have not explained this well.

Extended Tracking gives tracking a degree of persistence once a target has been detected. As the target goes out of view, Vuforia uses other information from the environment to infer the target's position by visually tracking the environment. Vuforia builds a map around the target specifically for this purpose, and assumes that both the environment and the target are largely static.
Extended Tracking can significantly enhance two kinds of user experience:
Game or game-like experiences with a large amount of dynamic content that requires the user to point the device away from the target as the user follows the content
Visualizations of large objects like furniture, appliances, large home furnishings and even architectural models at the right scale and perspective
Learn more at: https://developer.vuforia.com/library/articles/Training/Extended-Tracking

Related

Pre-Designed Hololens Application for a specific room using Unity

I'm trying to develop an app for the HoloLens 1 using Unity. What I want to achieve is a pre-designed experience for a specific room (like a specific room in a museum).
My idea is that I scan the room with the HoloLens, use the scanned mesh in Unity to place the virtual content at the correct positions in the room, and then build the app and deploy it to the device. The goal is that I can hand a museum visitor the HoloLens, they can go to the room, start the app anywhere in it, and see the virtual objects in the right places (for example on a specific exhibit, at the door to the next room, in the middle of the room, and so on). I don't want the visitor to place objects themselves, and I don't want the staff to do this in advance (before handing out the headset). I want to design the complete experience in Unity for one specific room.
Every time I search for use cases like this, I fail to find a starting point. Somehow the app has to recognize the position of the headset in the room (or find pre-set anchors or something similar).
I really thought this would be a very basic use case for the HoloLens.
Is there a way to achieve this goal? Later I want to design multiple experiences for all the rooms of the museum (maybe a separate app for every room).
I think I have to find pre-set anchors in the room and then place the content relative to them. But how can I define such an anchor and ensure that every visitor's device finds it, so that the virtual content appears on the corresponding real-world object?
You should start with spatial anchor technology. A spatial anchor lets you lock a GameObject to a location in the real world based on the system's understanding of its surroundings. Please refer to this link for more information: Spatial anchors. Next, you need to persist the local spatial anchor across sessions; this documentation shows how to persist the location of WorldAnchors with the WorldAnchorStore class: Persistence in Unity. If you also want multiple visitors to collectively view or interact with the same hologram positioned at a fixed point in space, you need to export an anchor from one device and import it on a second HoloLens; please follow this guide: Local anchor transfers in Unity.
Besides, in situations where you can use Azure Spatial Anchors, we strongly recommend it. Azure Spatial Anchors makes it convenient to share experiences across sessions and devices; you can get started with this: How to create and locate anchors using Azure Spatial Anchors in Unity.

Is it possible to build Customer Support in Flutter with AR, and how?

I am trying to develop a customer support page using ARCore and ARKit with Flutter, and there are ARCore and ARKit plugins on pub.dartlang.org.
First I need to establish how to create a customer support for:
nearest branch on map
nearest ATM on map
new credit offers and how they would affect me if I apply, etc.
Secondly, I need to use the ARCore and ARKit plugins in my Flutter app.
But I am not sure whether the plugins will let me build customer support features for my app.
So the question is: is it possible to build customer support with AR, and how?
I know that if I was a customer in need of support, an augmented-reality customer support experience would not be my first choice. I would much prefer a web form where I could describe my problem in a text box. In general, customer support is going to require text entry, and handling that through augmented reality is probably a bad idea.
That being said, there are ways that you could use augmented reality to improve customer support, especially if the customer's problem has to do with the physical arrangement of objects. For example, a customer support application for IKEA might use augmented reality to display a 3D view of the customer's furniture assembling itself, to help customers who have trouble reading the 2D instructions. Or, you could have the user "paint" in AR over a 3D scene of something to indicate their problem, and then send the resulting 3D scene in to the company for the support staff to look at, which could be a higher-bandwidth form of communication than the customer trying to describe the problem by typing.
But you can't just throw ARKit and ARCore at the problem of "customer support"; you have to think through what the customers actually need to be supported to do, and whether and how you actually want to use AR to improve that over what you get from a simple text form. A problem like "my package didn't arrive" can't really be solved with AR. I doubt anyone else has used these technologies for customer support; you won't find a ready-made design here.
So that's the first step: make a list of example customer problems for the business whose customers you are supporting, and for each of them think through whether and how AR would be useful for solving it. Once you have an idea of what AR features you actually want, then you can come back with a more specific question about how to achieve them using the tools at hand (ARKit and ARCore).

How can HTC VIVE developers share same space and test using different Vive sets?

We have around 4 developers and we share a cubicle area. The base stations extend higher than the cubicle walls (for better tracking), but whenever two or more Vive stations are on at the same time, they interfere with each other and tracking becomes problematic.
How do professional companies that work on big Virtual Reality projects solve this problem?
The next generation of base stations and tracked devices won't have this problem, because the base station ID will be encoded in the laser, and a headset can ignore base stations it doesn't recognize. Unfortunately this will require both new base stations and new headsets / controllers, and I've seen no estimate on when the next generation will be released.
In the meantime, the only solution is to find a way to partition your environment so that any given region is only covered by two base stations. It's a pain, but you can cover a large area with a single pair. If you only have 4 people in a confined space you might be able to set up so that you only have one pair of stations. If that's not practical, you might look into mounting barn doors around the stations so that you can restrict their field of view.
Actually you can share a pair of base stations across many developers - that's how professional companies do it.

How to load a Leaflet tile layer while offline

I'm using leaflet map in my ionic application.
While offline, the default OpenStreetMap layer loads, but when I change the zoom level, the map disappears.
How can I get the map to show when zooming while offline?
You need to download (pre-cache) the tile images to work with offline mode.
Download the tiles at the zoom levels you want. Map data is quite large, so you may want to limit yourself to a small subset of zoom levels or a geographic boundary (a city, a country, etc.).
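To see why limiting zoom levels matters: the tile count roughly quadruples with each zoom level. A minimal sketch for budgeting a download, using the standard OSM "slippy map" tile formulas (the function names here are my own):

```javascript
// Standard OSM "slippy map" tile scheme: convert lon/lat to tile indices.
function lon2tile(lon, zoom) {
  return Math.floor(((lon + 180) / 360) * Math.pow(2, zoom));
}

function lat2tile(lat, zoom) {
  const rad = (lat * Math.PI) / 180;
  return Math.floor(
    ((1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2) *
      Math.pow(2, zoom)
  );
}

// Number of tiles needed to cover a bounding box at one zoom level.
// Note: tile y grows southward, so the northern edge gives the smaller y.
function tileCount(bbox, zoom) {
  const cols = lon2tile(bbox.east, zoom) - lon2tile(bbox.west, zoom) + 1;
  const rows = lat2tile(bbox.south, zoom) - lat2tile(bbox.north, zoom) + 1;
  return cols * rows;
}
```

Summing tileCount over the zoom levels you plan to cache gives a realistic download budget for a given area; the 4x-per-level growth is exactly why the usage policy below restricts bulk downloads at high zooms.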
OSM's specific tile usage policy is here: http://wiki.openstreetmap.org/wiki/Tile_usage_policy But the part that applies to your app is this:
In particular, downloading significant areas of tiles at zoom levels 17 and higher for offline or later usage is forbidden without prior consultation with a System Administrator.
That page also lists numerous "free" tile providers. Because I expect the list to change significantly over time, I won't copy it here.
Unless you enter into some kind of contract/agreement with the tile provider, you may need to render your own or find a different tile service that will allow you to pre-cache tiles.
The details for downloading map data that can be rendered as tiles can be found on the OSM website: http://wiki.openstreetmap.org/wiki/Downloading_data
There are multiple ways to get the data from OSM, but these services and options will undoubtedly change over time so I think it safest to just refer to their website (google "download open street map data" if above link is broken).
I have used a .NET library called BruTile to manage the tile cache. But basically it is just a bunch of image files organized in a zoom/grid structure.
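As a sketch of that zoom/grid layout (the helper names are hypothetical, but the z/x/y layout is the standard slippy-map convention), the {z}/{x}/{y} placeholders in a Leaflet-style URL template map directly onto a directory structure on disk:

```javascript
// Fill a Leaflet-style tile URL template, e.g. for downloading.
function tileUrl(template, z, x, y, s = "a") {
  return template
    .replace("{s}", s)
    .replace("{z}", String(z))
    .replace("{x}", String(x))
    .replace("{y}", String(y));
}

// The matching local cache layout: <root>/<z>/<x>/<y>.png
function tilePath(root, z, x, y) {
  return `${root}/${z}/${x}/${y}.png`;
}
```

A cache manager like BruTile is essentially bookkeeping around this mapping plus eviction and expiry.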
It would also be good for you to google search topics such as "ionic leaflet tile cache" and "pre cache map tiles ionic" and such. There isn't a lot out there yet, but this is a growing area of development.
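As a partial stopgap inside Leaflet itself, the tile layer's maxNativeZoom option tells Leaflet to upscale the deepest tiles it has instead of requesting zoom levels that were never cached (a configuration sketch; the URL is a placeholder for your own tile source):

```javascript
// Sketch: allow zooming past the deepest cached level by upscaling
// existing tiles rather than fetching tiles that were never downloaded.
const layer = L.tileLayer("https://tile.example.com/{z}/{x}/{y}.png", {
  maxZoom: 18,       // the map itself can still zoom to 18...
  maxNativeZoom: 12, // ...but tiles are only requested up to z12, then scaled up
});
```

Upscaled tiles look blurry, but the map no longer disappears at zoom levels you didn't pre-cache.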
Thoughts regarding mobile apps
If you are deploying to mobile devices, it probably doesn't make sense to bundle the pre-cached tiles, because (a) they will become out of date and require constant upkeep, and (b) the large file size results in slow downloads. It is better to download tiles while online after the app is installed.
Windows and Android phones both allow download of offline map data in their maps apps. It may be possible to leverage that data. Otherwise you would make your app work similarly: prompt user to download maps for their region, and then find a way to reasonably specify the region (geographic area) for which they need maps. It is also a good idea to let the user know how much data will be downloaded for metered data plans and device storage.
It also would be better for you to use a server other than the OSM servers, such as a paid Microsoft, MapQuest, or Bing account. The OSM servers aren't capable of handling production load from every SPA that wants maps to work offline. Better to rely on the device's capabilities and built-in maps app if possible. Amazon and Azure services (for example) may be feasible for this. If you wrote your own program to run as a service on your own server, it could pre-cache tiles from the tile service (thus reducing your usage fees and server load), and the apps would then get map data from your server. This also gives you the opportunity to get creative with your own custom tiles.

Collision detection on multiple sources

Collision detection only occurs within a single source. Is there a workaround so my JS-generated top layer doesn't look like this? Basically, in this example the labels Dordrecht and 's-Gravendeel would need to be hidden when a collision with my generated source is detected.
This feature, termed "cross-source placement", is a high priority at Mapbox and is under active development. You can track it via this GitHub issue.