I would like to know if it is possible to make a 3D model of an entire building on my college campus. If I am able to make a 3D model of each room and then somehow combine all the rooms into a full 3D building, it would be a great project for my senior internship. Please direct me to the correct information, or give me instructions on how to use the Project Tango device to create a full 3D building. Ultimately, I want to use the Project Tango device to conduct indoor mapping using augmented reality.
Project Tango can export your scanned meshes to .obj files, and programs like 3ds Max allow you to import several .obj files and combine them into one scene.
To create a mesh of a room with Project Tango and export the files, you can use the Constructor app.
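If you would rather combine the room scans in code instead of a 3D modeling tool, here is a minimal sketch of merging several .obj files into one. It assumes plain "v" and "f" lines with simple vertex indices; real exports often also contain vt/vn/usemtl lines, which would need the same index-offset treatment.

```csharp
using System.IO;
using System.Text;

public static class ObjMerger
{
    // Concatenates several .obj files into one, offsetting face indices so
    // each room's faces still reference its own vertices.
    // Handles only "v" and "f" lines; normals, UVs and materials are ignored.
    public static void Merge(string[] inputPaths, string outputPath)
    {
        var sb = new StringBuilder();
        int vertexOffset = 0;

        foreach (string path in inputPaths)
        {
            int vertexCount = 0;
            foreach (string line in File.ReadLines(path))
            {
                if (line.StartsWith("v "))
                {
                    sb.AppendLine(line);
                    vertexCount++;
                }
                else if (line.StartsWith("f "))
                {
                    string[] parts = line.Split(' ');
                    sb.Append("f");
                    for (int i = 1; i < parts.Length; i++)
                    {
                        if (string.IsNullOrWhiteSpace(parts[i])) continue;
                        // .obj indices are 1-based; shift by the vertices already written.
                        int index = int.Parse(parts[i]) + vertexOffset;
                        sb.Append(' ').Append(index);
                    }
                    sb.AppendLine();
                }
            }
            vertexOffset += vertexCount;
        }

        File.WriteAllText(outputPath, sb.ToString());
    }
}
```

Each room keeps its own coordinates, so you would still position or transform the scans relative to each other (in the scanning app or in 3ds Max) before or after merging.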
I have managed to develop an application that uses both an external hardware scanner and the built-in LiDAR scanner of an Apple device to produce a .obj file. The external scanner comes with an SDK, which has worked well.
Where I am struggling is with the built-in scanner on the most recent iPad. I have managed to get a mesh generated (which looks terrible, by the way; any hints on making the mesh higher quality are welcome), but I cannot seem to generate a real-world texture and map it to the .obj file. Does anyone know how this can be achieved? Nobody online currently seems to be able to do this, and it's very frustrating.
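For reference, if the LiDAR scan is driven through Unity's AR Foundation meshing rather than native ARKit, the geometry half of the .obj export is straightforward. A minimal sketch (positions and triangles only; producing and mapping a real-world texture is exactly the open question above):

```csharp
using System.Text;
using UnityEngine;

public static class MeshObjExporter
{
    // Writes a Unity Mesh as Wavefront .obj text (positions and triangles only,
    // no UVs, normals or textures). Unity is left-handed and .obj is right-handed,
    // so x is negated and the triangle winding is reversed.
    public static string ToObj(Mesh mesh)
    {
        var sb = new StringBuilder();

        foreach (Vector3 v in mesh.vertices)
            sb.AppendLine($"v {-v.x} {v.y} {v.z}");

        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            // .obj indices are 1-based; swap two indices to flip the winding.
            sb.AppendLine($"f {tris[i] + 1} {tris[i + 2] + 1} {tris[i + 1] + 1}");
        }

        return sb.ToString();
    }
}
```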
I am very new to augmented reality software. I want to design a simple app. As part of this app, there will be a series of uniquely designed tags. These tags will be placed on some assets. In the application, I want to store some metadata for each asset. Imagine a DB table with fields like (asset_id, name, var1, var2, ...) holding the asset metadata.
So, when the augmented reality app detects a unique image, it will show that asset's metadata over the marker. It is that simple. In summary, I want to know how I can use image markers to differentiate assets. Sorry if I am asking a very basic question.
Regards,
Ferda
First of all, your question is too broad. How are you planning to implement this application? First, you have to decide whether you will use an augmented reality SDK or computer vision techniques.
My suggestion would be to choose one SDK from ARCore, Vuforia, or ARKit, based on the devices and platforms you want this application to run on. I am not familiar with ARKit, but in ARCore and Vuforia, augmented images or image targets are held in an image database, so you can get the id or name of any target you detect with your device. In conclusion, you can visualize specific assets for specific images.
In an ARCore Augmented Image database, every image has a name. In your code you can differentiate images using image.Name and then visualize the corresponding metadata over the marker, as in the sketch at the end of this answer.
Also, in both SDKs you can define your own database, but your images should not have repetitive features and should have high-contrast sections.
Vuforia has a similar concept as well. The choice between ARCore and Vuforia depends on which devices you target and the quality of image tracking you need. In my opinion, Vuforia detects images better, and it is really fast at detecting them.
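A minimal sketch of that lookup, assuming the standalone ARCore SDK for Unity (GoogleARCore namespace) and a hypothetical in-memory metadata table keyed by image name (in a real app this would come from your DB):

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class AssetMarkerVisualizer : MonoBehaviour
{
    // Hypothetical lookup: image name in the Augmented Image database -> metadata to display.
    private readonly Dictionary<string, string> _assetMetadata = new Dictionary<string, string>
    {
        { "pump-01", "Pump 01: installed 2019, last serviced 2021" },
        { "valve-07", "Valve 07: pressure rating 16 bar" },
    };

    private readonly List<AugmentedImage> _trackedImages = new List<AugmentedImage>();

    void Update()
    {
        // Fetch the augmented images the session updated this frame.
        Session.GetTrackables<AugmentedImage>(_trackedImages, TrackableQueryFilter.Updated);

        foreach (var image in _trackedImages)
        {
            if (image.TrackingState != TrackingState.Tracking)
                continue;

            // Differentiate assets by the name defined in the image database,
            // then look up and show the corresponding metadata.
            if (_assetMetadata.TryGetValue(image.Name, out string metadata))
            {
                Debug.Log($"Detected {image.Name}: {metadata}");
                // In a real app you would anchor a label/canvas at image.CenterPose instead of logging.
            }
        }
    }
}
```

With AR Foundation the idea is the same, except the name comes from ARTrackedImage.referenceImage.name.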
I'm using the Vuforia object scanner to detect and recognize a 3D object. It works well with one object, but now I want to recognize multiple objects at once. I have gone through many links, but they only speak of multiple image targets, not 3D objects. If not with Vuforia, is there any other SDK that can do this?
I messed with object recognition once, but I'm pretty sure the databases are basically the "same" as 2D image-target databases. That is, you can tell Vuforia to load more than one of them and they'll run simultaneously. I don't have Vuforia installed at the moment, but I know the setting is in the main script attached to the camera (you have to fiddle with it when creating your project in the first place to get it to use something other than the sample targets).
There is, however, a limit on how many different targets Vuforia will recognize at once (IIRC it is something really small, like 2 or 3), so be aware of this when planning your project.
I have to build an app like this:
https://www.youtube.com/watch?v=vetDCkbQGM4
It should simply detect the cockpit of a car and show information, for example "this is the air conditioning", "this is the switch for the radio". The targets will be predefined. Basically, the app should detect each control and show the corresponding information.
Can I realize this with Vuforia? Which framework is suitable for this task?
I hope you guys can help me.
Cheers!
Since your targets are predefined, the simplest solution would be to use ArUco markers to get 3D world positions/rotations through your user's camera feed.
See the AR Marker Detector in the Unity Asset Store for an example. Vuforia uses 'VuMarks', which are more intricate versions of this.
If you can't add computer-readable labels to the real world for your project, then you are talking about real-time object recognition. That is a much harder problem and not yet easily solvable in Unity as far as I know. It would require something like Google's Cloud Vision API. There is a Unity Cloud Vision project on GitHub, but I have no idea how well it works or what its capabilities are.
Yes, it is possible; a first round of googling would have turned this up. There are different SDKs/frameworks and Unity Asset Store packages available.
You can use the free Vuforia AR Starter Kit from the Asset Store to get your logic up and running, or you can use the free AR Toolkit. There are various tutorials available that show how to implement these packages; a sketch of a typical Vuforia target handler follows below.
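For illustration, a minimal sketch in the style of Vuforia's classic DefaultTrackableEventHandler, assuming the older Vuforia Unity API (ITrackableEventHandler/TrackableBehaviour); the target names such as "radio_switch" are hypothetical and would come from your target database:

```csharp
using UnityEngine;
using Vuforia;

// Attach to an ImageTarget/ObjectTarget GameObject. When the target is detected,
// it logs (or could display) the information associated with that target's name.
public class CockpitInfoHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool isTracked = newStatus == TrackableBehaviour.Status.DETECTED ||
                         newStatus == TrackableBehaviour.Status.TRACKED ||
                         newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (!isTracked)
            return;

        // Differentiate controls by the target name defined in the database.
        switch (mTrackableBehaviour.TrackableName)
        {
            case "air_conditioning":   // hypothetical target name
                Debug.Log("This is the air conditioning");
                break;
            case "radio_switch":       // hypothetical target name
                Debug.Log("This is the switch for the radio");
                break;
        }
    }
}
```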
I want to build a cross-platform mobile app that can identify QR codes and render a 3D model on them using AR.
I found that Unity in combination with Vuforia will do the trick for the AR part, but is it possible to download and use 3D models dynamically?
Thanks
I guess what you're looking for is called an AssetBundle. Be aware that downloading a large model (plus texture) at run time can be heavy and will depend highly on the internet connection of the device.
Hope this helps.
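A minimal sketch of downloading an AssetBundle and instantiating a model from it at run time; the URL and asset name are placeholders, and the bundle itself has to be built beforehand with Unity's BuildPipeline:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ModelDownloader : MonoBehaviour
{
    // Placeholder URL and asset name; replace with your own bundle and prefab.
    [SerializeField] private string bundleUrl = "https://example.com/bundles/car.assetbundle";
    [SerializeField] private string assetName = "CarModel";

    public IEnumerator LoadAndPlaceModel(Transform anchor)
    {
        using (UnityWebRequest request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Bundle download failed: " + request.error);
                yield break;
            }

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);
            GameObject prefab = bundle.LoadAsset<GameObject>(assetName);

            // Parent the model to the tracked target (e.g. the QR code anchor).
            Instantiate(prefab, anchor.position, anchor.rotation, anchor);

            bundle.Unload(false); // keep the instantiated objects, free the bundle data
        }
    }
}
```

You would call StartCoroutine(LoadAndPlaceModel(targetTransform)) from whatever fires when the QR code is recognized, and consider the cached GetAssetBundle overload (with a version/CRC) so the bundle is not re-downloaded on every scan.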