I'm in the process of writing a photo editing app and I want to know if there is a more efficient way to solve my task.
Task: GPU-accelerated 2D processing of floating-point images, using OpenGL/Vulkan on Android and Metal on iOS.
The current pipeline is as follows: a UI built with Flutter controls a C++ backend via dart:ffi, and the backend uses Halide generators to offload computation efficiently to OpenGL or Metal.
I am worried about complexity: Halide has its own caveats, dart:ffi has its own, and so does the C glue layer between C++ and Dart.
Q: Is there any way to compute image pixel values efficiently with Flutter? Is any SkSL API exposure on the roadmap?
The Flutter SDK exposes Skia engine APIs through Canvas. One way of accessing these APIs is via packages like graphx. Since early 2020, Flutter has rendered with Metal on iOS and OpenGL on Android by default.
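For simple per-pixel work you can already stay on that Canvas/Skia path. Below is a minimal, illustrative sketch; it assumes you have a decoded dart:ui Image and only covers transforms expressible as a color matrix, not the arbitrary float-image kernels a Halide pipeline handles.

```dart
import 'dart:ui' as ui;
import 'package:flutter/material.dart';

/// Minimal sketch: applies a 5x4 color matrix (here a simple grayscale
/// transform) to an image via Skia's Canvas, so the per-pixel work stays
/// on the rendering backend instead of a Dart pixel loop.
class GrayscalePainter extends CustomPainter {
  GrayscalePainter(this.image);
  final ui.Image image; // assumed to be decoded elsewhere

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint()
      ..colorFilter = const ColorFilter.matrix(<double>[
        0.2126, 0.7152, 0.0722, 0, 0, // R
        0.2126, 0.7152, 0.0722, 0, 0, // G
        0.2126, 0.7152, 0.0722, 0, 0, // B
        0, 0, 0, 1, 0, // A
      ]);
    canvas.drawImage(image, Offset.zero, paint);
  }

  @override
  bool shouldRepaint(covariant GrayscalePainter oldDelegate) =>
      oldDelegate.image != image;
}
```

Anything beyond what ColorFilter and ImageFilter can express still needs the native route for now.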
How does one go about creating a Face Swap mechanism in Flutter?
Can anyone point me in the right direction?
Thank you
You’ll probably need a good plugin to do all the hard work for you. I recommend Google’s ML Kit on Flutter, as it is the most popular way to run on-device ML with Flutter.
The face detection plugin is what you want. You would basically get the face oval shape from face contour detection and swap those shapes. This can be done in real time with a given video input.
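As a rough sketch of that flow, using the google_mlkit_face_detection package; the API names here may differ from the early plugin version, so treat them as an approximation and check the plugin docs:

```dart
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

/// Rough sketch: detect faces and pull out the face-oval contour points
/// that you would later use to cut out and swap the face regions.
Future<void> detectFaceOvals(String imagePath) async {
  final detector = FaceDetector(
    options: FaceDetectorOptions(enableContours: true),
  );
  final inputImage = InputImage.fromFilePath(imagePath);
  final faces = await detector.processImage(inputImage);

  for (final face in faces) {
    final oval = face.contours[FaceContourType.face];
    if (oval != null) {
      // oval.points is the polygon outlining the face; swapping two of
      // these regions between images is the core of the face swap.
      print('Face with ${oval.points.length} contour points '
          'in ${face.boundingBox}');
    }
  }
  await detector.close();
}
```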
But you should keep in mind that the plugin is on v0.0.1. If you’re aiming for production, you’d better do that with Swift or Kotlin.
There are multiple ways to achieve this in Flutter, either in real time or with a delay of a few seconds.
You can use one of these packages.
OpenCV
TensorFlow
Google's ML Kit
You may not get good support for OpenCV and TensorFlow in Flutter directly, but you can integrate the OpenCV/TensorFlow native libraries or SDKs for both Android and iOS and invoke them through platform channels.
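A sketch of the platform-channel route on the Dart side; the channel and method names here are hypothetical, and the native side (Kotlin/Swift) would be the part that actually calls into the OpenCV or TensorFlow SDK:

```dart
import 'package:flutter/services.dart';

/// Hypothetical channel name; must match what the Android/iOS code registers.
const _channel = MethodChannel('app.example/face_swap');

/// Asks native code to run the face swap and return, e.g., a file path
/// to the swapped image it wrote to disk.
Future<String?> swapFacesNative(String sourcePath, String targetPath) {
  return _channel.invokeMethod<String>('swapFaces', <String, String>{
    'source': sourcePath,
    'target': targetPath,
  });
}
```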
There is one more possible solution, but it will definitely have a delay. For this kind of ML project, Python has great library and project support.
You can set up a Python service that is responsible for the face swapping: it takes an input image from the Flutter app (over a REST API or a socket) and returns the output image after swapping. A rough client-side sketch is shown below.
Some great face-swap projects are available on GitHub that you can look into.
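On the Flutter side it could look something like this, assuming a hypothetical /swap endpoint on your Python service; the field names and response format are placeholders:

```dart
import 'dart:typed_data';
import 'package:http/http.dart' as http;

/// Illustrative sketch only: uploads two images to a hypothetical
/// face-swap endpoint exposed by your Python service and returns the
/// swapped result as raw image bytes.
Future<Uint8List> requestFaceSwap(
    Uri endpoint, String sourcePath, String targetPath) async {
  final request = http.MultipartRequest('POST', endpoint)
    ..files.add(await http.MultipartFile.fromPath('source', sourcePath))
    ..files.add(await http.MultipartFile.fromPath('target', targetPath));

  final response = await request.send();
  if (response.statusCode != 200) {
    throw Exception('Face swap failed: ${response.statusCode}');
  }
  return response.stream.toBytes();
}
```

On the Python side, any small web framework wrapping one of those GitHub face-swap projects would do.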
On the web, when you need access to a low-level graphics API for performance or other reasons, you can reach for WebGL; similarly, there's OpenGL for Android/Windows and Metal for iOS.
Is there something similar in Flutter? Searching, all I could find was the CustomPainter API, which is not exactly what I was looking for.
The Flutter team has stated they're working on custom shaders support, but that hasn't been released yet.
I'm a bit lost looking through all the various Agora.io modules (and not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; app only cares about the most recent update). Updates basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview doesn't seem real clear about how it's different from regular Agora video. I suspect it's overkill but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, macOS, and Windows (non-UWP).
Regarding your data streaming needs, Agora's RTM SDK is in the process of being ported to work within Unity. At the moment the best way to send data using the Agora SDK is to use CreateDataStream to leverage Agora's ability to open a data stream that is sent along with the frames. Data stream messages are limited to 1 KB per frame and 30 KB/s, so I would be cautious about sending one every frame if you are running at a frame rate above 30 fps.
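The pose payload itself is tiny, so the limits above are mostly about message frequency rather than size. Purely to illustrate the packing arithmetic (sketched in Dart; the Unity client would do the equivalent in C# before calling the send counterpart of CreateDataStream, SendStreamMessage if I recall the API name correctly):

```dart
import 'dart:typed_data';

/// Illustrative only: pack a position (x, y, z) and an orientation
/// quaternion (qx, qy, qz, qw) into 28 bytes. Even at 30 updates per
/// second that is under 1 KB/s, far below the limits quoted above.
Uint8List packPose(List<double> pose) {
  assert(pose.length == 7);
  final data = ByteData(7 * 4);
  for (var i = 0; i < pose.length; i++) {
    data.setFloat32(i * 4, pose[i], Endian.little);
  }
  return data.buffer.asUint8List();
}
```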
I'm prototyping a mobile ML application within the Unity engine.
I have a trained TensorFlow graph (.pb) and I want to run the model in Unity on mobile (both Android and iOS).
With the OpenCVForUnity plugin and its dnn module, I can run the TensorFlow graph on mobile, but the problem is that it runs on the CPU.
I need a GPU-based solution, and it seems that OpenCVForUnity isn't the right approach for that.
So, any ideas for running the graph on the GPU in a Unity mobile environment?
You might want to use Barracuda, which will allow you to convert a TensorFlow model and use it in cross-platform Unity applications. Unity ML-Agents uses Barracuda, so you could use their code as a reference for how to utilize your neural network.
I need a framework for my iPhone app, which uses maps. Currently these maps are raster images, and I'd like to optimize my app by switching to vector maps instead. I know that my colleagues in Android development used the Mapsforge framework for this purpose. Is there any analog of this library for the iPhone? I need a framework that can quickly render vector maps with hardware acceleration, cache maps, render offline, and (optionally) be cross-platform. Any suggestions? Thanks!
OK, I've overcome my laziness and decided to move my forgotten, almost year-old work to GitHub. This is Mapsforge for iOS: dirty code, but it should work without any additional setup. It can read .map files and asynchronously render tiles with vector objects to a map view. You can find it here: https://github.com/medvedNick/Mapsforge_iOS
Have a look at the following post: https://groups.google.com/forum/?fromgroups=#!topic/route-me-map/wbBa4h0R_iw
There is … libosmscout, which does vector map drawing (Unix, Windows, Qt, iOS, Android, …), routing, and searching: https://sourceforge.net/projects/libosmscout/
I did the iOS/OSX drawing code; it's not finished but works quite well.