Need help regarding 3D floor plan for an app

I am working on an indoor positioning app based on beacons. I want to create this app for a specific floor of our building. Is there an API or any other way to create a 3D floor plan?

Full disclosure: I work at Archilogic.
You could try Archilogic: if you already have a 2D floor plan, it usually takes less than 24 hours to get a 3D model back. If you don't have a floor plan, you can also use the web editor to create the 3D model yourself.
The model can then be exported to OBJ or FBX, but that requires a paid plan.

Related

Is it possible to use Reality Composer for detecting 3D assets in the real world?

I'm trying to find a way to create an .arobject for detecting 3D assets in the real world. The only solution I've found is Apple's scanning app, but I wonder whether Reality Composer can be used to achieve this instead. Since Reality Composer can detect images and anchors, maybe it is possible.
You can indeed use the iOS/iPadOS version of Reality Composer to create an .arobject and then recognize the real-world object based on that data via AnchorEntity(.object).
Keep in mind that you can't scan cylindrical or moving real-world objects.
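For reference, here is a minimal RealityKit sketch of the recognition side, assuming you have already scanned the object into an .arobject and added it to an AR Resource Group; the group name "Objects" and the object name "sculpture" are placeholders for your own resources:

```swift
import UIKit
import RealityKit

class ObjectDetectionViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Anchor that activates once the scanned object is recognized.
        // "Objects"/"sculpture" are placeholder names for your .arobject
        // inside an AR Resource Group in Assets.xcassets.
        let objectAnchor = AnchorEntity(.object(group: "Objects", name: "sculpture"))

        // Content to display over the recognized object.
        let marker = ModelEntity(
            mesh: .generateSphere(radius: 0.03),
            materials: [SimpleMaterial(color: .red, isMetallic: false)]
        )
        objectAnchor.addChild(marker)

        arView.scene.addAnchor(objectAnchor)
    }
}
```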

Using image markers to differentiate assets in unity

I am very new to augmented reality software and want to design a simple app. As part of this app, there will be a series of uniquely designed tags placed on some assets. In the application, I want to store metadata for each asset; imagine a DB table with fields like (assetId, name, var1, var2, ...) holding the asset metadata.
So, when the augmented reality app detects a unique image, it will show that asset's metadata over the marker. It is that simple. In summary, I want to know how I can use image markers to differentiate assets. Sorry if I am asking a very basic question.
Regards,
Ferda
First of all, your question is too broad. How are you planning to implement this application? You first have to decide whether you will use an augmented reality SDK or your own computer vision techniques.
My suggestion would be to choose one SDK from ARCore, Vuforia, or ARKit, based on the devices and platforms you want to target. I am not familiar with ARKit, but in both ARCore and Vuforia, augmented images or image targets are held in an image database, so you can get the ID or name of any target your device detects. You can then visualize specific assets for specific images.
In an ARCore Augmented Image database, every image has a name. In your code you can differentiate images using image.Name and then visualize the corresponding metadata over the marker.
In both SDKs you can also define your own database, but your images should not have repetitive features and should contain high-contrast sections.
Vuforia has a similar concept as well. The choice between ARCore and Vuforia depends on which devices you target and on the quality of image tracking; in my opinion Vuforia detects images better and is really fast at it.
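The answer above is ARCore/Unity-centric, but the same name-based lookup pattern works in ARKit as well. A hedged Swift sketch, where the resource group "AssetTags" and the metadata dictionary are placeholders standing in for the question's DB table:

```swift
import UIKit
import ARKit

class TagViewController: UIViewController, ARSessionDelegate {
    let arView = ARSCNView()

    // Placeholder metadata table keyed by reference-image name,
    // standing in for the (assetId, name, var1, ...) table.
    let assetMetadata: [String: String] = [
        "pump-01": "Pump #1 - last serviced 2023-04-01",
        "valve-07": "Valve #7 - pressure rating 16 bar"
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)
        arView.session.delegate = self

        let config = ARWorldTrackingConfiguration()
        // "AssetTags" is a placeholder AR Resource Group in Assets.xcassets
        // containing one reference image per physical tag.
        guard let images = ARReferenceImage.referenceImages(
            inGroupNamed: "AssetTags", bundle: nil) else { return }
        config.detectionImages = images
        arView.session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // The image's name is what differentiates the assets.
            if let name = imageAnchor.referenceImage.name,
               let info = assetMetadata[name] {
                print("Detected tag \(name): \(info)")
                // Render `info` on a node at imageAnchor's transform here.
            }
        }
    }
}
```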

Is there a way to perform multiple 3d object recognition in unity using vuforia?

I'm using the Vuforia scanner to detect and recognize a 3D object. It works well with one object, but now I want to recognize multiple objects at once. I have gone through many links, but they only speak of multiple image targets, not 3D objects. If not Vuforia, is there any other SDK that can do this?
I experimented with object recognition once, and I'm fairly sure the databases are basically the "same" as 2D image-target databases: you can tell Vuforia to load more than one of them, and they'll run simultaneously. I don't have Vuforia installed at the moment, but I know the setting is in the main script attached to the camera (you have to fiddle with it when creating your project in the first place to get it to use something other than the sample targets).
There is, however, a limit on how many different targets Vuforia will recognize at once (IIRC something really small, like 2 or 3), so be aware of this when planning your project.

Can a Project Tango device 3D-map an entire building?

I would like to know if it is possible to make a 3D model of an entire building on my college campus. If I can make a 3D model of each room and then somehow combine all the rooms into a full 3D building, it would be a great project for my senior internship. Please direct me to the correct information, or give me instructions on how to use the Project Tango device to create a full 3D building. Ultimately, I want to use the Project Tango device to conduct indoor mapping using augmented reality.
Project Tango can export your scanned meshes to .obj files, and programs like 3ds Max allow you to import several .obj files.
To create a mesh of a room with Project Tango and export the files, you can use the Constructor app.
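If you'd rather combine the exported rooms in code than in 3ds Max, here is a minimal Model I/O sketch in Swift (the room file names are placeholders for your Constructor exports):

```swift
import Foundation
import ModelIO

// Merge several per-room .obj exports into one building-wide .obj.
// File names are placeholders for your Tango/Constructor exports.
let roomFiles = ["room1.obj", "room2.obj", "room3.obj"]
let merged = MDLAsset()

for file in roomFiles {
    let roomAsset = MDLAsset(url: URL(fileURLWithPath: file))
    // Copy every top-level object (meshes, groups) into the merged asset.
    for i in 0..<roomAsset.count {
        merged.add(roomAsset.object(at: i))
    }
}

// Model I/O infers the output format from the file extension.
try merged.export(to: URL(fileURLWithPath: "building.obj"))
```

Note that this only works if the room scans already share a common coordinate origin; otherwise you need to apply a per-room transform before exporting, which is exactly the alignment step 3ds Max makes easy.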

iOS 3D indoor navigation application

What are the steps needed to create an indoor 3D navigation application? I have some AutoCAD files for a building, and it would not be a problem to create a 3D model using 3ds Max. Inertial sensors will be used for localization, but after getting the model, how can I integrate it into iOS and create the visualization?
Depending on your complete requirements, it sounds like you need OpenGL programming in order to create that 3D environment. For navigation, I would suggest using GPS to determine your location rather than inertial sensors alone, or perhaps a mix of both to reduce your errors. I am guessing you want to be able to locate yourself in a building where GPS, Wi-Fi, or 3G signals are not available; relying only on inertial sensors will definitely be error-prone.
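For the visualization half specifically, a hedged SceneKit sketch (Swift) as an alternative to raw OpenGL: it loads the 3ds Max export (assumed to be converted to a SceneKit-readable format such as Collada .dae or .scn; "Building.scn" is a placeholder) and moves a marker node whenever your localization pipeline produces a new position estimate:

```swift
import UIKit
import SceneKit

class FloorPlanViewController: UIViewController {
    let sceneView = SCNView()
    let userMarker = SCNNode(geometry: SCNSphere(radius: 0.2))

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // "Building.scn" is a placeholder: export from 3ds Max (e.g. as
        // Collada .dae) and let Xcode convert it into a SceneKit scene.
        guard let scene = SCNScene(named: "Building.scn") else { return }
        scene.rootNode.addChildNode(userMarker)
        sceneView.scene = scene
        sceneView.allowsCameraControl = true  // pinch/drag to inspect the model
    }

    /// Call this from your inertial/beacon localization code.
    func updateUserPosition(x: Float, y: Float, z: Float) {
        userMarker.position = SCNVector3(x, y, z)
    }
}
```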