Phonegap / Ionic augmented reality - ionic-framework

I've been Googling and Googling for an augmented reality plugin to use for my app, but I keep returning to Wikitude, which costs a fortune (at least at the moment). So I'm reaching out to you.
I'm looking for an AR plugin that can display an image on a marker/placeholder. If it is possible to play a video as well, that would be great, but it's not mandatory.
Have you been at this point before? Which plugin did you choose and why?
Thanks for reading and taking the time to respond.

Related

Face Swap and Face Detection/Recognition in Flutter

How does one go about creating a Face Swap mechanism in Flutter?
Can anyone point me in the right direction?
Thank you
You'll probably need a good plugin to do all the hard work for you. I recommend Google's ML Kit for Flutter, as it is the most popular way to run on-device ML in a Flutter app.
The face detection plugin is what you want. You would basically get the face oval shape with face contour detection and swap those shapes, and this can be done in real time with a given video input.
But keep in mind that the plugin is still at v0.0.1. If you're aiming for production, you'd better do it with Swift or Kotlin.
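For example, with the native ML Kit SDK in Swift, extracting the face-oval contour points looks roughly like the sketch below. The function name is made up for illustration, the module names assume the standalone Google ML Kit pods, and the actual warping/blending of one oval onto the other is left out:

    import MLKitFaceDetection
    import MLKitVision
    import UIKit

    // Sketch: collect the face-oval contour points for each face in an image.
    // A face swap would then warp/blend the pixels inside one oval onto the other.
    func detectFaceOvals(in image: UIImage,
                         completion: @escaping ([[VisionPoint]]) -> Void) {
        let options = FaceDetectorOptions()
        options.contourMode = .all          // enable face contour detection
        options.performanceMode = .fast     // favour speed for live video frames

        let detector = FaceDetector.faceDetector(options: options)
        let visionImage = VisionImage(image: image)
        visionImage.orientation = image.imageOrientation

        detector.process(visionImage) { faces, error in
            guard error == nil, let faces = faces else {
                completion([])
                return
            }
            // The .face contour is the oval outline around each detected face.
            completion(faces.compactMap { $0.contour(ofType: .face)?.points })
        }
    }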
There are multiple ways to achieve this in Flutter, either in real time or with a delay of a few seconds.
You can use one of these packages.
OpenCV
TensorFlow
Google's ML Kit
You may not get good support for OpenCV and TensorFlow directly in Flutter, but you can integrate the OpenCV/TensorFlow native libraries or SDKs for both Android and iOS and invoke them through platform channels.
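For illustration, the iOS side of such a platform channel could look roughly like this in Swift; the channel name, the method name, and the NativeFaceSwapper helper are placeholders for whatever native OpenCV/TensorFlow code you wrap:

    import UIKit
    import Flutter

    @UIApplicationMain
    class AppDelegate: FlutterAppDelegate {
        override func application(
            _ application: UIApplication,
            didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
        ) -> Bool {
            let controller = window?.rootViewController as! FlutterViewController

            // The channel name is arbitrary; the Dart side must use the same string.
            let channel = FlutterMethodChannel(name: "app/faceswap",
                                               binaryMessenger: controller.binaryMessenger)
            channel.setMethodCallHandler { call, result in
                switch call.method {
                case "swapFaces":
                    // Hand the image bytes from Dart to your native OpenCV /
                    // TensorFlow Lite code (NativeFaceSwapper is hypothetical).
                    let bytes = call.arguments as? FlutterStandardTypedData
                    result(NativeFaceSwapper.swap(imageData: bytes?.data))
                default:
                    result(FlutterMethodNotImplemented)
                }
            }

            GeneratedPluginRegistrant.register(with: self)
            return super.application(application, didFinishLaunchingWithOptions: launchOptions)
        }
    }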
There is also one more possible solution, but it will definitely have some delay. For this kind of ML project, Python has great support in terms of libraries and existing projects.
You can set up a Python service that is responsible for the face swapping: it takes input from the Flutter app (via a REST API or a socket) and returns the output image after the swap.
Some great face-swap projects are available on GitHub that you can look into.

make an app that displays desktop on iphone in vr on unity

I would like to make an AR iPhone app in Unity that places an object in the real world which you can then interact with on your iPhone. For example, you have a bar at the bottom of your screen and you can drag objects from it into the AR world and interact with them using hand tracking. This would work somewhat like the Meta 2 interface (https://www.youtube.com/watch?v=m7ZDaiDwnxY), where you can grab things and drag them; it uses hand tracking to do this.
I have done some research on this, but I need some help because I don't know where to start or how to accomplish what I am trying to do.
I don't have any code.
You can email me at jaredmiller219#gmail.com with any comments or questions, or to help me with this. Thanks so much for your support!
To get started in mobile AR in Unity, I would recommend starting with Unity's resources:
https://unity.com/solutions/mobile-ar
Here's a tutorial resource for learning ARKit:
https://unity3d.com/learn/learn-arkit
As for hand tracking, obviously the Meta 2 has specialized hardware to execute its features... you shouldn't necessarily expect to achieve the same feature set with only a phone driving your experience. Leap Motion is the most common hand tracker I've seen integrated into VR and AR setups, and it works well; but if you really need hand tracking with just a phone, you could check out ManoMotion, which aims to bring hand tracking and gesture recognition to ARKit, although I haven't personally worked with it.

Fixing object when camera open Unity AR

I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon GO: when the camera opens, the object will be fixed somewhere in the real world and you will have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to only a few types of phones, and I tried the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas, or someone to tell me where to start.
Thanks in advance.
Plane detection is limited to only some phones at this time partly because older/less powerful phones cannot handle the required computation.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
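Whether you use ARCore/AR Foundation in Unity or native ARKit, the pattern is the same: detect a plane, then attach the object to the resulting anchor so it stays fixed in the world while the user moves the camera around to find it. For reference, here is what that pattern looks like as a minimal native ARKit (Swift) sketch; the class name and box geometry are just placeholders:

    import UIKit
    import ARKit
    import SceneKit

    // Sketch: detect a horizontal plane, then attach a small box to it so it
    // stays put in the real world while the user searches for it with the camera.
    final class FixedObjectViewController: UIViewController, ARSCNViewDelegate {
        private let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            sceneView.delegate = self
            view.addSubview(sceneView)
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            let config = ARWorldTrackingConfiguration()
            config.planeDetection = [.horizontal]   // floors, tables, ...
            sceneView.session.run(config)
        }

        // Called once ARKit has found a plane anchor.
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard anchor is ARPlaneAnchor else { return }
            let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                               length: 0.1, chamferRadius: 0))
            box.position = SCNVector3(0, 0.05, 0)   // sit on top of the plane
            node.addChildNode(box)
        }
    }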
Let me know if you have other questions :)
You can use GPS data from the phone to display the object: when the user arrives at a specific place, you show the object. You can search for GPS-based augmented reality on Google, and you can check this video: https://www.youtube.com/watch?v=X6djed8e4n0
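The video above is Unity-based, but the underlying trigger is just a distance check against the phone's GPS fix. Here is a minimal sketch of that idea in native Swift with CoreLocation; the target coordinate and the 20 m threshold are placeholders:

    import CoreLocation

    // Sketch: fire a callback once the user is close to a chosen coordinate,
    // at which point the app can reveal the AR object.
    final class PlacementTrigger: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()
        private let target = CLLocation(latitude: 40.7484, longitude: -73.9857)  // placeholder
        var onArrival: (() -> Void)?

        func start() {
            manager.delegate = self
            manager.requestWhenInUseAuthorization()
            manager.startUpdatingLocation()
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateLocations locations: [CLLocation]) {
            guard let current = locations.last else { return }
            if current.distance(from: target) < 20 {    // metres
                manager.stopUpdatingLocation()
                onArrival?()
            }
        }
    }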

Does the Unity WebGL exporter work?

Has the Unity WebGL exporter improved in the last year? I'm thinking about using it for a project, but the last time I checked it wasn't fit for purpose: huge file sizes and bugs in different browsers...
I just finished porting my mobile game to WebGL using Unity 2017.1. It has stabilized quite a bit and I found no major issues using it. The biggest annoyance was the long compilation times.
I've found it to be quite good now in 2017.1. It compresses things pretty nicely and load times are not too bad. If you want to see some examples of different games exported to WebGL, I have a site up called SIMMER.io to host these games. Most uploads were created in Unity 2017.1.
There's also a WebGL compressor asset available here, but I have not tried it: https://www.assetstore.unity3d.com/en/#!/content/30335

OpenGL ES and Default iPhone UI components

So I am rather experienced with OpenGL on the desktop platform and am trying to integrate it with my iOS development experience. I have created several large-scale iOS applications, so I have a good understanding of that process as well. I was wondering if anyone knows of any useful techniques to integrate iOS UI components with an OpenGL scene, or if that is even possible. I apologize if this is too general; I can refine it if necessary.
For example, say you have an iPad application that has a table and whatnot on the left, and you want to add a little 3D OpenGL window on the right. (Perhaps a chart or something that the user can interact with?) This would not be for a game or anything, but more for my understanding of how to smoothly integrate the different platforms. Any advice or links that the community could provide would be greatly appreciated. Thanks in advance!
GL views do not have to cover the entire screen. A great and very easy-to-understand example is the sound-recorder SpeakHere iPhone sample app within the iOS SDK.
This example uses a small GL view to display a peak level meter for the audio signal; see the GLLevelMeter class.
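To illustrate the same point in code, here is a small hypothetical Swift/GLKit sketch (the SpeakHere sample itself is Objective-C): a GL view that occupies only part of the screen, sitting next to ordinary UIKit views such as your table on the left:

    import UIKit
    import GLKit
    import OpenGLES

    // Sketch: the right half of the screen is an OpenGL ES surface, while the
    // left half stays free for a table view or other standard UIKit controls.
    final class ChartViewController: UIViewController, GLKViewDelegate {
        private var context: EAGLContext!

        override func viewDidLoad() {
            super.viewDidLoad()
            context = EAGLContext(api: .openGLES2)

            let glFrame = CGRect(x: view.bounds.midX, y: 0,
                                 width: view.bounds.width / 2,
                                 height: view.bounds.height)
            let glView = GLKView(frame: glFrame, context: context)
            glView.delegate = self
            view.addSubview(glView)
        }

        // Ordinary OpenGL ES drawing, confined to this view's framebuffer.
        func glkView(_ view: GLKView, drawIn rect: CGRect) {
            glClearColor(0.1, 0.1, 0.2, 1.0)
            glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        }
    }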
Hope this helps...