I have two separate WebGL games built with Unity, which can be uploaded to a game portal website. One of the two games is a 3D game in which the player can walk around and interact with some objects.
When the player interacts with one of those objects, I want to load the other game inside the same page, and I want the reverse to work as well.
I tried adding the two games into the same project, but I ran into a lot of problems:
The Lightweight Render Pipeline settings collide.
Since this is a WebGL project it should stay small, but it doesn't once I try to scale up (20-30 games).
Because of the size, it doesn't work on the mobile web platform.
Can anyone give me a solution for this? Any comments would be highly appreciated.
If you take a look at your built WebGL game you'll see an index.html file. You will need to take both of your built games, put them in a folder, and create a new index.html (based on the ones your WebGL builds contain) that properly loads two unityInstances on the same page.
To accomplish this, you will need to change some of the IDs so that the two scripts don't interact with the same DOM elements, and you will need to change the file paths of your built games so each one can still be loaded now that it lives in a subfolder.
It's not an easy challenge, and unfortunately it might require some trickery to automate. You could also try loading the WebGL project's JavaScript dynamically, or using an iFrame for each game.
There is a minimum size that a Unity WebGL project can be (around 10 MB), so this is not going to scale up to 20-30 games on a single page. For comparison, a typical webpage is around 2 MB nowadays. You are going to have to load them one at a time.
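If you go the iFrame route and only keep one game alive at a time, the running game still needs a way to tell the host page to swap in the other game. Here is a minimal sketch of the Unity side only, assuming a LoadOtherGame() browser function (a hypothetical name) is exposed through a .jslib plugin or the host page:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

public class GameSwitchTrigger : MonoBehaviour
{
#if UNITY_WEBGL && !UNITY_EDITOR
    // Assumed to be defined in a .jslib plugin included in the build, e.g. a function
    // that tells the host page to hide this game and load/show the other one.
    [DllImport("__Internal")]
    private static extern void LoadOtherGame();
#else
    private static void LoadOtherGame() { Debug.Log("LoadOtherGame() would be called here"); }
#endif

    // Call this from the interactable object, e.g. from OnMouseDown or a trigger collider.
    public void OnPlayerInteract()
    {
        LoadOtherGame();
    }
}
```

On the page side, LoadOtherGame would then hide this game's container (or iframe) and load or show the other one; the exact wiring depends on how your portal page is structured.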
Related
I'm working on an app with Unity in which I want to use ARCore's Augmented Images to track images outdoors, specifically pictures of street art. I took the pictures myself, and when testing them at home on my monitor they are found easily, but when I try it in the real world it does not work; somehow the pictures are not recognized. I've tried about 20 different pictures with different designs and none of them worked outside. I also ran all of them through the arcoreimg tool and they all scored 85-100, which is considered good.
I'm stumped on how that could be and would appreciate any ideas on how to fix it.
I would like to make an AR iPhone app in Unity that places an object in the real world which you can then interact with on your iPhone. For example, you have a bar at the bottom of your screen and you can drag objects from it into the AR world and interact with them using hand tracking. This would work kind of like the Meta 2 interface (https://www.youtube.com/watch?v=m7ZDaiDwnxY), where you can grab things and drag them; it uses hand tracking to do this.
I have done some research on this, but I need some help because I don't know where to start or how to accomplish what I am trying to do.
I don't have any code.
You can email me at jaredmiller219#gmail.com with any comments and questions, or to help me with this. Thanks so much for your support!
To get started in mobile AR in Unity, I would recommend starting with Unity's resources:
https://unity.com/solutions/mobile-ar
Here's a tutorial resource for learning ARKit:
https://unity3d.com/learn/learn-arkit
As for hand tracking: the Meta 2 obviously has specialized hardware to execute its features, so you shouldn't necessarily expect to achieve the same feature set with only a phone driving your experience. Leap Motion is the most common hand tracker I've seen integrated into VR and AR setups, and it works well. If you really need hand tracking with just a phone, you could check out ManoMotion, which aims to bring hand tracking and gesture recognition to ARKit, although I haven't personally worked with it.
I'm using the Vuforia scanner to detect and recognize a 3D object. It works well with one object, but now I want to recognize multiple objects at once. I have gone through so many links, but they only talk about multiple image targets, not 3D objects. If not Vuforia, is there any other SDK that can do this?
I messed with object recognition once, and I'm pretty sure the databases are basically the "same" as 2D image target databases. That is, you can tell Vuforia to load more than one of them and they'll run simultaneously. I don't have Vuforia installed at the moment, but I know the setting is in the main script attached to the camera (you have to fiddle with it when creating your project in the first place to get it to use something other than the sample targets).
There is, however, a limit on how many different targets Vuforia will recognize at once (IIRC it is something really small, like 2 or 3), so be aware of this when planning your project.
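As a rough illustration only (based on the older Vuforia Unity API; class and method names may differ in newer SDK versions, and the database names here are hypothetical), activating more than one database at runtime looked roughly like this:

```csharp
using UnityEngine;
using Vuforia;

public class MultiDataSetLoader : MonoBehaviour
{
    // Hypothetical dataset names, as exported from the Vuforia Target Manager.
    private readonly string[] dataSetNames = { "ObjectsA", "ObjectsB" };

    void Start()
    {
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        tracker.Stop();

        foreach (string name in dataSetNames)
        {
            DataSet dataSet = tracker.CreateDataSet();
            if (dataSet.Load(name))
                tracker.ActivateDataSet(dataSet); // multiple datasets can be active at once
            else
                Debug.LogError("Failed to load dataset: " + name);
        }

        tracker.Start();
    }
}
```

The simultaneous-tracking limit mentioned above is a separate setting in the Vuforia configuration on the camera, so you would still need to raise it if you want more than the default number of objects tracked at the same time.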
I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokemon Go: when the camera opens, the object is fixed somewhere in the real world and you have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to only a few types of phone, and I tried the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas, or someone to tell me where to start.
Thanks in advance.
The reason why plane detection is limited to only some phones at this time is partially because older/less powerful phones cannot handle the required computing power.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
Let me know if you have other questions :)
You can use GPS data from the phone: when the user arrives at a specific place, you show the object. You can search for "GPS based Augmented Reality" on Google. You can also check this video: https://www.youtube.com/watch?v=X6djed8e4n0
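As a minimal sketch of that idea (the coordinates, radius, and polling interval are hypothetical placeholders), you can poll Unity's LocationService and reveal the object once the user is close enough to the target position:

```csharp
using System.Collections;
using UnityEngine;

public class GpsPlaceTrigger : MonoBehaviour
{
    // Hypothetical target coordinates and radius - replace with the real place.
    public double targetLatitude = 40.7128;
    public double targetLongitude = -74.0060;
    public float triggerRadiusMeters = 30f;
    public GameObject arObject; // the object to reveal when the user arrives

    IEnumerator Start()
    {
        arObject.SetActive(false);

        if (!Input.location.isEnabledByUser)
            yield break; // location permission not granted

        Input.location.Start();
        while (Input.location.status == LocationServiceStatus.Initializing)
            yield return new WaitForSeconds(1f);

        while (Input.location.status == LocationServiceStatus.Running)
        {
            var data = Input.location.lastData;
            if (DistanceMeters(data.latitude, data.longitude, targetLatitude, targetLongitude) < triggerRadiusMeters)
                arObject.SetActive(true);
            yield return new WaitForSeconds(2f);
        }
    }

    // Equirectangular approximation - fine for short distances.
    static float DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000.0;
        double dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        double dLon = (lon2 - lon1) * Mathf.Deg2Rad * System.Math.Cos(lat1 * Mathf.Deg2Rad);
        return (float)(R * System.Math.Sqrt(dLat * dLat + dLon * dLon));
    }
}
```

From there you would still need to anchor the object with your AR solution of choice; the GPS check only decides when to show it.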
I'm developing a game for kids on the Unity3D platform. My game has a lot of pictures and images. When I load the game I'm using a HashSet of about 50,000 strings; each string is a word for a picture I want the player to see.
What can I do in order to reduce the loading time of the game to a minimum? Currently it takes about 20 seconds just to open the game, without doing anything.
My project is for Android devices at this moment.
Wouldn't loading the images and data via a web API be a better idea?
Using the WWW object you can then load only the specific files you need at that moment. This way you don't have to ship all the data with the app, which keeps it really small and fast.
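Here is a minimal sketch of that approach, assuming a hypothetical endpoint that serves one image per word:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class RemotePictureLoader : MonoBehaviour
{
    public RawImage target; // UI element that will display the downloaded picture

    // The URL pattern is hypothetical - point it at wherever your API serves the images.
    public IEnumerator LoadPicture(string word)
    {
        string url = "https://example.com/api/pictures/" + word + ".png";
        WWW www = new WWW(url);
        yield return www; // wait for the download to finish

        if (string.IsNullOrEmpty(www.error))
            target.texture = www.texture; // decoded Texture2D from the response
        else
            Debug.LogError("Download failed: " + www.error);
    }
}
```

You would kick it off with StartCoroutine(loader.LoadPicture("apple")) only when a picture is actually needed. Note that on newer Unity versions WWW is deprecated in favor of UnityWebRequestTexture.GetTexture, but the idea is the same.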