I'm prototyping a mobile ML application with the Unity engine.
I have a trained TensorFlow graph (.pb) and I want to run the model in Unity on mobile (both Android and iOS).
With the OpenCVForUnity plugin's dnn module, I can run the TensorFlow graph on mobile, but it runs on the CPU.
I need a GPU-based solution, and it seems that OpenCVForUnity isn't the right approach for that.
So, any ideas for running the graph on the GPU in a Unity mobile environment?
You might want to use Barracuda, which lets you convert a TensorFlow model and use it in cross-platform Unity applications. Unity ML-Agents uses Barracuda, so you could use their code as a reference for how to run your neural network.
How does one go about creating a Face Swap mechanism in Flutter?
Can anyone point me in the right direction?
Thank you
You’ll probably need a good plugin to do all the hard work for you. I recommend Google’s ML Kit on Flutter, as it is the most popular way to run on-device ML with Flutter.
The face detection plugin is what you want. You would essentially get the face oval shape with face contour detection and swap those shapes, and this can be done in real time with a given video input. A rough sketch follows below.
But keep in mind that the plugin is at v0.0.1. If you’re aiming for production, you’d be better off doing this in Swift or Kotlin.
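For reference, here is roughly what the contour detection looks like on the Dart side, using the current google_mlkit_face_detection package (the API has moved around between versions, so treat the exact names as indicative rather than authoritative):

```dart
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

Future<void> detectFaceOval(String imagePath) async {
  final detector = FaceDetector(
    options: FaceDetectorOptions(
      enableContours: true, // contours are needed for the face oval shape
      performanceMode: FaceDetectorMode.accurate,
    ),
  );

  final inputImage = InputImage.fromFilePath(imagePath);
  final faces = await detector.processImage(inputImage);

  for (final face in faces) {
    // FaceContourType.face is the oval outline you would swap.
    final oval = face.contours[FaceContourType.face];
    if (oval != null) {
      print('Face oval with ${oval.points.length} points');
    }
  }

  await detector.close();
}
```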
There are multiple ways to achieve this in Flutter, either in real time or with a delay of a few seconds.
You can use one of these packages:
OpenCV
TensorFlow
Google's ML Kit
You may not get good support from OpenCV and TensorFlow in Flutter, but you can integrate the OpenCV/TensorFlow native libs or SDKs for both Android and iOS and invoke them through platform channels, as sketched below.
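To make the platform-channel route concrete, the Dart side could look something like this (the channel and method names here are made up for illustration; the actual face-swap work would happen in the Kotlin/Swift code registered under the same channel):

```dart
import 'dart:typed_data';
import 'package:flutter/services.dart';

// Hypothetical channel name; must match the native-side registration.
const MethodChannel _channel = MethodChannel('example.app/face_swap');

Future<Uint8List?> swapFaces(Uint8List source, Uint8List target) {
  // Delegates to native OpenCV/TensorFlow code on Android and iOS.
  return _channel.invokeMethod<Uint8List>('swapFaces', {
    'source': source,
    'target': target,
  });
}
```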
There is also one more possible solution, but it will definitely have some delay. For this kind of ML project, Python has great library and project support.
You can set up a Python service that is responsible for the face swapping: it takes input from the Flutter app (over a REST API or a socket) and returns the output image after the face swap.
Some great face-swap projects are available on GitHub; you can look into them.
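On the Flutter side, that client-server approach boils down to a file upload. A minimal sketch with the http package, assuming a hypothetical /api/face-swap endpoint on your Python service:

```dart
import 'dart:io';
import 'dart:typed_data';
import 'package:http/http.dart' as http;

// The endpoint URL is a placeholder for your own Python service.
Future<Uint8List> requestFaceSwap(File source, File target) async {
  final request = http.MultipartRequest(
    'POST',
    Uri.parse('https://example.com/api/face-swap'),
  );
  request.files.add(await http.MultipartFile.fromPath('source', source.path));
  request.files.add(await http.MultipartFile.fromPath('target', target.path));

  final response = await http.Response.fromStream(await request.send());
  return response.bodyBytes; // the swapped image returned by the server
}
```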
I'm in the process of writing a photo editing app, and I want to know if there is a more efficient way to solve my task.
Task: GPU-accelerated 2D image processing of float-type images, using OpenGL/Vulkan on Android and Metal on iOS.
The current pipeline is as follows: a UI made with Flutter controls a C++ backend via dart:ffi, which uses Halide generators to efficiently offload computation to OpenGL or Metal.
I am worried about complexity. Halide has its own caveats, dart:ffi has its own, and so does the C glue between C++ and Dart.
Q: Is there any way to efficiently compute image pixel values with Flutter? Is any SkSL API exposure on the roadmap?
The Flutter SDK exposes APIs from the Skia engine through Canvas. One way of accessing these APIs is via packages like graphx. Since early 2020, Flutter renders with Metal on iOS and with OpenGL on Android by default.
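For per-pixel color math specifically, you can stay inside Canvas and let Skia run the transform on the GPU via a ColorFilter. A sketch (the grayscale matrix is just a stand-in for a real filter; arbitrary float-image kernels would still need the native route you describe):

```dart
import 'dart:ui' as ui;
import 'package:flutter/material.dart';

/// Applies a per-pixel color transform on the GPU through Skia's
/// ColorFilter instead of touching pixel buffers in Dart. The 4x5
/// matrix below is a plain luma-weighted grayscale conversion.
class GrayscalePainter extends CustomPainter {
  GrayscalePainter(this.image);
  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint()
      ..colorFilter = const ColorFilter.matrix(<double>[
        0.2126, 0.7152, 0.0722, 0, 0,
        0.2126, 0.7152, 0.0722, 0, 0,
        0.2126, 0.7152, 0.0722, 0, 0,
        0, 0, 0, 1, 0,
      ]);
    canvas.drawImage(image, Offset.zero, paint);
  }

  @override
  bool shouldRepaint(GrayscalePainter oldDelegate) =>
      oldDelegate.image != image;
}
```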
What would the basic steps be to deploy a Keras model on mobile devices using Flutter? What should I consider here? A quick guideline would be very much appreciated. Thanks.
You have to convert your model to TensorFlow Lite (provided all the operations in your model are supported in TFLite).
The link below gives a complete demo of how an object detection model can be ported to a mobile device via Flutter. In place of the object detection model, you can use your custom model.
Be wary of the input type and the output type when you convert to TF Lite.
https://blog.francium.tech/real-time-object-detection-on-mobile-with-flutter-tensorflow-lite-and-yolo-android-part-a0042c9b62c6
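Once the model is converted (with tf.lite.TFLiteConverter on the Python side), the Flutter side can run it with the community tflite_flutter package. A minimal sketch, with the asset name and tensor shapes as placeholders you'd replace with your model's actual ones:

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> runModel() async {
  // 'model.tflite' must be declared under assets: in pubspec.yaml.
  final interpreter = await Interpreter.fromAsset('model.tflite');

  // Placeholder shapes: one 224x224 RGB image in, 1000 scores out.
  final input = List.generate(1,
      (_) => List.generate(224,
          (_) => List.generate(224, (_) => List.filled(3, 0.0))));
  final output = List.generate(1, (_) => List.filled(1000, 0.0));

  interpreter.run(input, output);
  print(output[0]);

  interpreter.close();
}
```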
The way I understand it, there are several environments that support ARCore, and Unity and the Sceneform SDK are some of the options.
I was wondering how they differ from each other, besides one being in Java and the other in C#. Why would someone choose one over the other, aside from language preference?
Thank you
Sceneform empowers Android developers to work with ARCore without learning 3D graphics and OpenGL. It includes a high-level scene graph API, a realistic physically based renderer, an Android Studio plugin for importing, viewing, and building 3D assets, and easy integration with ARCore that makes it straightforward to build AR apps. See the Sceneform video from Google I/O '18.
Whereas ARCore in Unity uses three key capabilities to integrate virtual content with the real world as seen through your phone's camera:
Motion tracking allows the phone to understand and track its position relative to the world.
Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled surfaces like the ground, a coffee table, or walls.
Light estimation allows the phone to estimate the environment's current lighting conditions.
ARCore is Google’s platform for building augmented reality experiences. Using different APIs, ARCore enables your phone to sense its environment, understand the world and interact with information. Some of the APIs are available across Android and iOS to enable shared AR experiences.
I want to build a cross-platform mobile app that can identify QR codes and render a 3D model on them using AR.
I found that Unity in combination with Vuforia will do the trick for the AR part, but is it possible to download and use 3D models dynamically?
Thanks
I guess what you're looking for is called an AssetBundle. Be aware that downloading a large model (plus textures) at run time can be heavy and will depend highly on the device's internet connection.
Hope this helps.