How can HTC Vive developers share the same space and test using different Vive sets? - unity3d

We have around 4 developers sharing a cubicle area. The base stations extend higher than the cubicle walls (for better tracking), but whenever two or more Vive setups are on at the same time, the base stations interfere with each other and tracking becomes problematic.
How do professional companies that work on big Virtual Reality projects solve this problem?

The next generation of base stations and tracked devices won't have this problem, because the base station ID will be encoded in the laser, and a headset can ignore base stations it doesn't recognize. Unfortunately this will require both new base stations and new headsets / controllers, and I've seen no estimate on when the next generation will be released.
In the meantime, the only solution is to find a way to partition your environment so that any given region is covered by only one pair of base stations. It's a pain, but a single pair can cover a large area. If you only have four people in a confined space, you might be able to set things up so that you need only one pair of stations. If that's not practical, you might look into mounting barn doors around the stations so that you can restrict their field of view.

Actually, you can share a single pair of base stations across many developers - that's how professional companies do it.

Related

Pre-Designed Hololens Application for a specific room using Unity

I'm trying to develop an app for the HoloLens 1 using Unity. What I want to achieve is to provide a pre-designed experience to users for a specific room (like a particular room in a museum).
My idea is that I scan the room with the HoloLens, use the scanned mesh in Unity to place the virtual content at the correct positions in the room, and then build the app and deploy it to the device. The goal is that I can hand a museum visitor the HoloLens, they can go to this room, start the app anywhere in the room, and see the virtual objects in the right places (for example on a specific exhibit, at the door to the next room, in the middle of the room, and so on). I don't want the visitor to place objects themselves, and I don't want the staff to do this in advance (before handing out the headset). I want to design the complete experience in Unity for one specific room.
Every time I search for use cases like this, I can't really find a starting point. Somehow the app has to recognize the position of the headset in the room (or find pre-set anchors or something like that).
I really thought this would be a very basic use case for the HoloLens.
Is there a way to achieve this goal? Later I want to design multiple experiences for all the rooms of the museum (maybe a separate app for every room).
I think I have to find pre-set anchors in this room and then place the content relative to them. But how can I define such an anchor and ensure that every visitor's device finds it, so that the virtual content appears on the corresponding real-world objects?
You should start with spatial anchor technology. A spatial anchor lets you lock a GameObject to a location in the real world based on the system's understanding of the environment. Please refer to this link for more information: Spatial anchors. Then you need to persist the local spatial anchor in the real world; this documentation shows how to persist the location of WorldAnchors across sessions with the WorldAnchorStore class: Persistence in Unity. If you also want to share the experience with multiple customers so they can collectively view or interact with the same hologram positioned at a fixed point in space, you need to export an anchor from one device and import it on a second HoloLens device; please follow this guide: Local anchor transfers in Unity
Besides, in situations where you can use Azure Spatial Anchors, we strongly recommend using it. Azure Spatial Anchors makes it convenient to share experiences across sessions and devices; you can quick-start with this: How to create and locate anchors using Azure Spatial Anchors in Unity

Recreation District Trails to public Google Maps

I have created a KML file of over 30 trails with trailheads that my team and I GPS'd over the past couple of years for our local recreation district ([Brooktrails Redwood Forest][1]). We have a trail map that has been printed and is being used by our county residents and visitors; however, when they look for recreation trails (foot and bike), none of the district trails are visible in Google Maps. Is there a way to import all of these rather than having to draw the trails within Google Maps? They are also accessible via Google My Maps, but that's not particularly useful for people searching from outside the circle with whom I've shared the link. I've also contacted MapMy...everything, and in order to create routes using trails, the trails have to be on Google Maps. I suspect this is the case for all run/ride tracking apps. So, again: is there a way to import a KML of the 30+ trails and trailheads into Google Maps? Drawing each one is a daunting task, as they are all over a mile long through varying terrain, and the accuracy decreases significantly. In case the link in BRF doesn't work, here is the link to Google My Maps: https://www.google.com/maps/d/u/0/edit?mid=1GcKhd2rPOCBtSqoH8KyzdBVI_V7zzhxB&ll=39.43926886661395%2C-123.40566300000002&z=13
I appreciate the help. If we can get the trails into Google Maps, then other opportunities open up for us in terms of trail maintenance/care, grants, and usability for locals and visitors alike.
Crystal

Is it possible to build Customer Support in Flutter with AR, and how?

I am trying to develop a customer support page using ARCore and ARKit with Flutter. There are two plugins for ARCore and ARKit on pub.dartlang.org.
First I need to establish how to create customer support for:
nearest branch on map
nearest ATM on map
a new credit offer and how it would affect me if I apply, etc.
Secondly, I need to add the ARCore and ARKit plugins to my Flutter app.
But I am not sure whether the plugins will let me build these customer support features in my app.
So the question is: is it possible to build customer support with AR, and how?
I know that if I was a customer in need of support, an augmented-reality customer support experience would not be my first choice. I would much prefer a web form where I could describe my problem in a text box. In general, customer support is going to require text entry, and handling that through augmented reality is probably a bad idea.
That being said, there are ways that you could use augmented reality to improve customer support, especially if the customer's problem has to do with the physical arrangement of objects. For example, a customer support application for IKEA might use augmented reality to display a 3D view of the customer's furniture assembling itself, to help customers who have trouble reading the 2D instructions. Or, you could have the user "paint" in AR over a 3D scene of something to indicate their problem, and then send the resulting 3D scene in to the company for the support staff to look at, which could be a higher-bandwidth form of communication than the customer trying to describe the problem by typing.
But you can't just throw arkit and arcore at the problem of "customer support"; you have to think through what the customers actually need to be supported to do, and whether and how you actually want to use AR to improve that over what you get from a simple text form. A problem like "my package didn't arrive" can't really be solved with AR. I doubt anyone else has ever used these technologies for customer support; you won't find a ready-made design here.
So that's the first step: make a list of some example customer problems for the business whose customers you are supporting, and for each of them think through whether and how AR would be useful for solving them. Once you have an idea of what AR stuff you actually want, then you can come back with a more specific question about how to achieve that using the tools at hand (arkit and arcore).
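If you do decide to try the "mark the problem in AR" idea, keep in mind that the Flutter plugins are thin wrappers over the native frameworks, so it helps to understand the underlying call pattern. Below is a minimal native ARKit/SceneKit sketch (Swift rather than Flutter; the class name is purely illustrative and not from any of the plugins) that lets a user tap to drop markers on detected real-world surfaces:

```swift
import UIKit
import ARKit
import SceneKit

// Minimal sketch of the "annotate the problem spot in AR" idea above.
// Native ARKit/SceneKit (iOS 13+); not one of the Flutter plugins.
final class AnnotationViewController: UIViewController {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(config)
    }

    // Drop a small red marker wherever the customer taps on detected geometry.
    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any),
              let hit = sceneView.session.raycast(query).first else { return }

        let marker = SCNNode(geometry: SCNSphere(radius: 0.01))
        marker.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        marker.simdTransform = hit.worldTransform
        sceneView.scene.rootNode.addChildNode(marker)
        // The marker transforms (plus the session's ARWorldMap) could later be
        // serialized and uploaded so support staff can see what the customer marked.
    }
}
```

But again, decide first whether this actually helps the customer more than a text box would; the AR part is only worth it when the problem is inherently spatial.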

Large object recognition in Vuforia

I need to recognize a very large object in Vuforia, and be able to get close to it without losing recognition.
The object is static and can be tracked from afar.
Is there any system that can detect it from a static camera position and then let me move close to the object with the phone camera?
Sorry if I have not explained this well.
Extended Tracking gives tracking a degree of persistence once a target has been detected. As the target goes out of view, Vuforia uses other information from the environment to infer the target's position by visually tracking the environment. Vuforia builds a map around the target specifically for this purpose and assumes that both the environment and the target are largely static.
Extended Tracking can significantly enhance two kinds of user experience:
Game or game-like experiences with a large amount of dynamic content that requires the user to point the device away from the target as the user follows the content
Visualizations of large objects like furniture, appliances, large home furnishings and even architectural models at the right scale and perspective
Learn more at: https://developer.vuforia.com/library/articles/Training/Extended-Tracking

How to create an online database for a Swift game to store player scores?

I am looking to create something along these lines.
http://www.iostutorial.org/2011/06/17/add-high-scores-to-your-ios-game/
However, I want it to be online so that it stores players' high scores and displays the top 10.
It would be of great help if someone could point me in the right direction. Any books or articles would be great.
Game Center is designed specifically for what you are trying to do. It has the added advantage that it will provide exposure for your game, potentially increasing sales.
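For reference, here is a minimal sketch of the Game Center route using the current GameKit API (iOS 14+). The leaderboard ID below is a placeholder you would configure in App Store Connect, and error handling is kept to a bare minimum:

```swift
import UIKit
import GameKit

// Minimal sketch of the Game Center approach suggested above (iOS 14+ GameKit API).
// "com.example.highscores" is a placeholder leaderboard ID you would create in
// App Store Connect.
enum Leaderboard {
    static let id = "com.example.highscores"

    // Authenticate the local player; GameKit supplies its own sign-in UI if needed.
    static func authenticate(from viewController: UIViewController) {
        GKLocalPlayer.local.authenticateHandler = { signInViewController, error in
            if let signInViewController = signInViewController {
                viewController.present(signInViewController, animated: true)
            } else if let error = error {
                print("Game Center authentication failed: \(error.localizedDescription)")
            }
        }
    }

    // Report a new score to the leaderboard.
    static func submit(score: Int) {
        GKLeaderboard.submitScore(score,
                                  context: 0,
                                  player: GKLocalPlayer.local,
                                  leaderboardIDs: [id]) { error in
            if let error = error {
                print("Score submission failed: \(error.localizedDescription)")
            }
        }
    }

    // Fetch the top 10 global scores (entries include the player and formatted score).
    static func loadTopTen(completion: @escaping ([GKLeaderboard.Entry]) -> Void) {
        GKLeaderboard.loadLeaderboards(IDs: [id]) { leaderboards, _ in
            leaderboards?.first?.loadEntries(for: .global,
                                             timeScope: .allTime,
                                             range: NSRange(location: 1, length: 10)) { _, entries, _, _ in
                completion(entries ?? [])
            }
        }
    }
}
```

Game Center handles the accounts, the server-side storage, and a standard leaderboard UI (GKGameCenterViewController) for you, so you don't need to build or host your own high-score database.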