Is there a way to perform multiple 3D object recognition in Unity using Vuforia?

I'm using the Vuforia scanner to detect and recognize a 3D object. It works well with one object, but now I want to recognize multiple objects at once. I have gone through many links, but they only cover multiple image targets, not 3D objects. If not Vuforia, is there any other SDK that can do this?

I experimented with object recognition once, and I'm fairly sure the databases are basically the "same" as 2D image target databases. That is, you can tell Vuforia to load more than one of them and they'll run simultaneously. I don't have Vuforia installed at the moment, but I know the setting is in the main script attached to the camera (you have to fiddle with it when first creating your project to get it to use something other than the sample targets).
There is, however, a limit on how many different targets Vuforia will recognize at once (IIRC it's something really small, like 2 or 3). So be aware of this when planning your project.

Related

Is it possible to use Reality Composer for detecting 3D assets in the real world?

I'm trying to find a way to create an .arobject for detecting 3D assets in the real world. The only solution I've found is Apple's scanning application, but I wonder whether there is a way to use Reality Composer to achieve this. Since Reality Composer can detect images and anchors, maybe this is possible.
You can indeed use the iOS/iPadOS version of Reality Composer to create an .arobject and then recognize the real-world object based on that data via AnchorEntity(.object).
Take into consideration that you can't scan cylindrical or moving real-world objects!

Loading a Unity WebGL game from a separate immersive game

I have two separate WebGL games built with Unity, which can be uploaded to a game portal website. One of the two is a 3D game in which the player can walk around and interact with some objects.
When the player interacts with one of those objects, I want to load the other game on the same page, and I want this to work in the other direction as well.
I tried adding the two games to the same project, but ran into several problems:
The Lightweight Render Pipeline settings collide.
Since the project is WebGL it should be small, but it isn't, and I want to scale up (20-30 games).
Because of the size, it doesn't work on the mobile web platform.
Can anyone give me a solution for this? Any comments would be highly appreciated.
If you take a look at your built WebGL game you'll see an index.html file. You will need to put both of your built games into one folder and create a new index.html (based on the ones your WebGL builds contain) that properly loads two Unity instances on the same page.
To accomplish this, you will need to change some of your IDs so that the two scripts don't interact with the same DOM elements, and you will need to change the file paths of each built game so that it can still be loaded from its subfolder.
It's not an easy challenge, and unfortunately it might require some trickery to automate. You could also try loading the other project's WebGL JavaScript on demand, or embedding it in an iframe.
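As a rough sketch of the path-rewriting step: the folder names, build names, and canvas ids below are placeholders I made up, and the commented-out loader call follows the Unity 2020+ `createUnityInstance` API (older builds used `UnityLoader.instantiate` instead), so adapt it to whatever your generated index.html actually contains.

```javascript
// Hypothetical helper: derive a Unity WebGL loader config for a build that
// has been moved into its own subfolder. The folder layout and file naming
// mirror a default Unity WebGL build, but your build's names may differ.
function makeUnityConfig(folder, buildName) {
  const base = `${folder}/Build/${buildName}`;
  return {
    dataUrl: `${base}.data`,
    frameworkUrl: `${base}.framework.js`,
    codeUrl: `${base}.wasm`,
    streamingAssetsUrl: `${folder}/StreamingAssets`,
  };
}

// Each game gets its own config pointing into its own subfolder...
const configA = makeUnityConfig("game-a", "GameA");
const configB = makeUnityConfig("game-b", "GameB");

// ...and its own <canvas> with a unique id, so the two instances never
// touch the same DOM element. In the merged index.html you would then call:
//   createUnityInstance(document.querySelector("#canvas-a"), configA);
//   createUnityInstance(document.querySelector("#canvas-b"), configB);
```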
There is a minimum size for a Unity WebGL project (around 10 MB), so this is not going to scale to 20-30 games on a single page. For comparison, a typical webpage is ~2 MB nowadays. You are going to have to load them one at a time.

Using image markers to differentiate assets in Unity

I am a complete newbie to augmented reality software. I want to design a simple app. As part of this app, there will be a series of uniquely designed tags, and these tags will be placed on some assets. In the application, I want to store some metadata for each asset. Imagine a DB table with fields like (assetId, name, var1, var2, ...) holding the asset metadata.
So, when the augmented reality app detects a unique image, it will show that asset's metadata over the marker. It is that simple. In summary, I want to know how I can use image markers to differentiate assets. Sorry if I am asking a very basic question.
Regards,
Ferda
First of all, your question is too broad. How are you planning to implement this application? First you have to decide whether you will use an augmented reality SDK or computer vision techniques.
My suggestion would be to choose one SDK from ARCore, Vuforia, or ARKit, based on the devices and platforms you want to target. I am not familiar with ARKit, but in both ARCore and Vuforia, augmented images or image targets are held in an image database, so you can get the id or name of any target your device detects. You can then visualize a specific asset for each specific image.
Below you can see an ARCore Augmented Image database. As you can see, every image has a name. In your code you can differentiate images using image.Name, then visualize the corresponding metadata over the marker.
In both SDKs you can define your own database, but your images should not have repetitive features and should have high-contrast sections.
Vuforia has a similar concept as well. The choice between ARCore and Vuforia depends on which devices you target and on the quality of image tracking. In my opinion Vuforia detects images better, and it is really fast at detecting them.
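The lookup itself is SDK-agnostic: once the SDK hands you a detected image's name, mapping it to metadata is a plain dictionary lookup. Here is a sketch in JavaScript with made-up marker names and asset fields; in a Unity project you would do the same thing with a C# `Dictionary` keyed by the detected image's `Name`.

```javascript
// Hypothetical metadata table keyed by marker image name, mirroring the
// (assetId, name, var1, var2, ...) table from the question. All entries
// here are invented examples.
const assetDb = {
  "marker_pump_01":  { assetId: 1, name: "Pump A",  location: "Hall 3" },
  "marker_valve_02": { assetId: 2, name: "Valve B", location: "Hall 1" },
};

// Called whenever the SDK reports a tracked image; returns the metadata
// record to overlay on the marker, or null for unknown markers.
function metadataFor(imageName) {
  return assetDb[imageName] ?? null;
}
```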

Fixing an object in the real world when the camera opens (Unity AR)

I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon Go: when the camera opens, the object will be fixed somewhere in the real world and you will have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to a few types of phones, and I tried the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas or someone to tell me where to start.
Thanks in advance.
The reason why plane detection is limited to only some phones at this time is partially because older/less powerful phones cannot handle the required computing power.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
Let me know if you have other questions :)
You can use GPS data from the phone: when the user arrives at a specific place, you can show the object. You can search for "GPS-based augmented reality" on Google. You can check this video: https://www.youtube.com/watch?v=X6djed8e4n0
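The core of a GPS-based approach is just an arrival check: compare the player's fix against the object's coordinates and reveal the object once the distance drops below a threshold. A haversine sketch follows; the 25 m radius is an arbitrary choice, and in practice you would pick it based on your phone's GPS accuracy.

```javascript
// Haversine great-circle distance in meters between two GPS fixes.
function distanceMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Reveal the hidden object once the player is within an (arbitrary)
// 25 m radius of the target coordinates.
function hasArrived(playerLat, playerLon, targetLat, targetLon, radius = 25) {
  return distanceMeters(playerLat, playerLon, targetLat, targetLon) <= radius;
}
```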

Augmented Reality Application in iOS

I am trying to create an iOS application with which we can capture a real-life object, e.g. a sofa or a table, as a 3D object using the iPhone's camera. This 3D object's info can be saved in a database and displayed as an augmented reality object when the iPhone camera is pointed at some other part of the room.
I have searched the internet but couldn't find any info on where to get started with converting real-life objects into 3D objects for viewing as augmented reality objects.
Check the link below, where you will find an SDK and also sample code for implementing AR:
http://quickblox.com/developers/IOS
I think whichever way you go with this, it's going to be a huge task. However, I've had good results with similar goals using OpenCV.
It has an iOS SDK, but it is written in C++. Unfortunately, I don't think there's anything available that will allow you to achieve this using pure Obj-C or Swift.
You can go through the following links:
https://www.qualcomm.com/products/vuforia
http://www.t-immersion.com/ar-key-words/augmented-reality-sdk#
http://dev.metaio.com/sdk/
https://www.layar.com