Offline Face Recognition On Flutter

I am trying to build a Flutter app which will have two functionalities:
1. Save face data
2. Recognize a face based on the saved face data.
I want to do this offline. The possible solution I found on Google is using Firebase ML Kit, but that requires a network connection.
Is there any way to do real-time face recognition without requiring a network connection?
Thanks in advance.

Firebase ML Kit's face detection API is an on-device API and works even when there's no network connection. Note that the functionality offered is face detection, not face recognition.
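For illustration, a minimal offline face-detection sketch in Flutter. This assumes the google_mlkit_face_detection plugin (the community packaging of ML Kit for Flutter) rather than the original Firebase ML Kit bindings; all processing happens on-device:

```dart
// Minimal sketch: on-device (offline) face detection in Flutter.
// Assumes the google_mlkit_face_detection plugin; the option names
// below are that package's, not Firebase ML Kit's original bindings.
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

Future<void> detectFaces(String imagePath) async {
  final detector = FaceDetector(
    options: FaceDetectorOptions(
      performanceMode: FaceDetectorMode.accurate,
      enableLandmarks: true,
    ),
  );
  final inputImage = InputImage.fromFilePath(imagePath);
  // Runs entirely on-device; no network connection is needed.
  final List<Face> faces = await detector.processImage(inputImage);
  for (final face in faces) {
    print('Face found at ${face.boundingBox}');
  }
  await detector.close(); // release native resources
}
```

Detection gives you bounding boxes and landmarks; the recognition step (matching a face against saved face data) is not part of this API and would need a separate embedding model.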

Related

Face Swap and Face Detection/Recognition in Flutter

How does one go about creating a Face Swap mechanism in Flutter?
Can anyone point me in the right direction?
Thank you
You’ll probably need a good plugin to do all the hard work for you. I recommend Google’s ML Kit on Flutter, as it is the most popular way to run on-device ML with Flutter.
The face detection plugin is what you want. You would basically get the face oval shape with face contour detection and swap those shapes, and this can be done in real time with a given video input; a contour-extraction sketch follows below.
But keep in mind that the plugin is at v0.0.1. If you're aiming for production, you'd be better off doing it natively in Swift or Kotlin.
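A minimal sketch of the contour step, assuming the google_mlkit_face_detection plugin; the warping/blending that completes an actual swap is not shown:

```dart
// Sketch: extract the face-oval contour points that a swap would warp.
// Assumes google_mlkit_face_detection with contour detection enabled.
import 'dart:math';

import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

Future<List<Point<int>>> faceOvalPoints(InputImage image) async {
  final detector = FaceDetector(
    options: FaceDetectorOptions(enableContours: true),
  );
  final faces = await detector.processImage(image);
  await detector.close();
  if (faces.isEmpty) return [];
  // The face contour outlines the oval of the face; a swap maps these
  // points between two detected faces and blends the pixels inside.
  final contour = faces.first.contours[FaceContourType.face];
  return contour?.points ?? [];
}
```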
There are multiple ways to achieve this in Flutter, either in real time or with a delay of a few seconds.
You can use one of these packages.
OpenCV
TensorFlow
Google's ML Kit
You may not get good support for OpenCV and TensorFlow in Flutter directly, but you can integrate the OpenCV/TensorFlow native libraries or SDKs for both Android and iOS and invoke them through platform channels, as sketched below.
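On the Dart side, the platform-channel call is straightforward. The channel name, method name, and argument keys below are hypothetical, and the matching native handler (Kotlin/Swift wrapping OpenCV or the TensorFlow SDK) is not shown:

```dart
// Sketch: invoking a native OpenCV/TensorFlow routine via a platform
// channel. Channel and method names here are made up; a handler with
// the same names must be registered natively on Android and iOS.
import 'dart:typed_data';

import 'package:flutter/services.dart';

const _channel = MethodChannel('app.example/face'); // hypothetical name

Future<Uint8List?> swapFacesNatively(
    Uint8List imageA, Uint8List imageB) async {
  try {
    // The native side receives the two images, runs the swap, and
    // returns the result as encoded image bytes.
    return await _channel.invokeMethod<Uint8List>('swapFaces', {
      'imageA': imageA,
      'imageB': imageB,
    });
  } on PlatformException catch (e) {
    print('Native call failed: ${e.message}');
    return null;
  }
}
```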
There is also one more possible solution, but it will definitely have some delay. For this kind of ML project, Python has great library and project support.
You can set up a Python service that is responsible for the face swap: it takes input from the Flutter app (over a REST API or a socket) and returns the output image after the swap. The Flutter side of that setup is sketched below.
Some great face-swap projects are available on GitHub that you can look into.
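A hedged sketch of the Flutter side of that setup, using the http package; the endpoint URL and field names are made up for illustration, and the Python service itself is not shown:

```dart
// Sketch: sending two images to a hypothetical Python face-swap
// service and receiving the swapped result. Endpoint and field names
// are placeholders; adapt them to the actual server.
import 'dart:typed_data';

import 'package:http/http.dart' as http;

Future<Uint8List> requestFaceSwap(
    Uint8List sourceImage, Uint8List targetImage) async {
  final request = http.MultipartRequest(
    'POST',
    Uri.parse('http://example.com/swap'), // hypothetical endpoint
  )
    ..files.add(http.MultipartFile.fromBytes('source', sourceImage,
        filename: 'source.jpg'))
    ..files.add(http.MultipartFile.fromBytes('target', targetImage,
        filename: 'target.jpg'));

  final response = await http.Response.fromStream(await request.send());
  if (response.statusCode != 200) {
    throw Exception('Face swap failed: ${response.statusCode}');
  }
  return response.bodyBytes; // the swapped image
}
```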

I am trying to develop a pitch detection feature in Flutter and need some suggestions regarding what packages and libraries to use

I want a pitch detection feature where a user records their voice and the app processes the audio and reports the frequency of the voice, which a musician can then use to tune their voice. I found a couple of good resources like ML5 and TensorFlow Lite, and I like that I don't have to build my own dataset or models; many available models are basic but get the job done. But I don't know which one will be best for Flutter integration, or whether they are even compatible. Any relevant suggestions will be appreciated.
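For context on the TensorFlow Lite option mentioned above, a minimal sketch using the tflite_flutter plugin. The model asset name and its input/output shapes are assumptions; they depend entirely on the pitch model you pick:

```dart
// Sketch: running a pitch-estimation TFLite model in Flutter with the
// tflite_flutter plugin. The model file name and the input/output
// shapes are hypothetical and depend on the chosen model.
import 'package:tflite_flutter/tflite_flutter.dart';

Future<double> estimatePitch(List<double> audioSamples) async {
  // 'pitch_model.tflite' is a placeholder asset name.
  final interpreter = await Interpreter.fromAsset('pitch_model.tflite');

  // Assumed shapes: input [1, N] window of audio samples,
  // output [1, 1] estimated fundamental frequency in Hz.
  final input = [audioSamples];
  final output = [List<double>.filled(1, 0.0)];
  interpreter.run(input, output);
  interpreter.close();

  return output[0][0];
}
```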

Does Agora.io for Unity provide these features?

I'm a bit lost looking through all the various Agora.io modules (and not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; app only cares about the most recent update). Updates basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview doesn't seem real clear about how it's different from regular Agora video. I suspect it's overkill but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, macOS, and Windows (non-UWP).
Regarding your data-streaming needs, Agora's RTM SDK is in the process of being ported to work within Unity. At the moment, the best way to send data using the Agora SDK is to use CreateDataStream, which leverages Agora's ability to open a data stream that is sent along with the frames. Data stream messages are limited to 1 KB per frame and 30 KB/s, so I would be cautious about sending on every frame if you are running at a frame rate above 30 fps. A rough packing sketch for your 7 floats follows.
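For scale: 7 floats is only 28 bytes per update, so even at 60 updates per second you would send about 1.7 KB/s, far below that cap. A packing sketch, written in Dart purely for illustration; a Unity client would do the equivalent with C#'s BinaryWriter before passing the bytes to the data stream:

```dart
// Sketch: packing a position (3 floats) + orientation quaternion
// (4 floats) into 28 bytes for a data-stream message.
import 'dart:typed_data';

Uint8List packPose(List<double> position, List<double> orientation) {
  assert(position.length == 3 && orientation.length == 4);
  final bytes = ByteData(28); // 7 floats * 4 bytes, far below the 1 KB cap
  var offset = 0;
  for (final v in [...position, ...orientation]) {
    bytes.setFloat32(offset, v, Endian.little);
    offset += 4;
  }
  return bytes.buffer.asUint8List();
}
// At 60 updates/s this is 28 * 60 = 1680 bytes/s, well under 30 KB/s.
```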

How to implement a CMSampleBuffer for ML Kit facial detection?

Basically, I'm trying to create a simple real-time facial recognition iOS app that streams the user's face and tells them whether their eyes are closed. I'm following the Google tutorial here - https://firebase.google.com/docs/ml-kit/ios/detect-faces.
I'm on step 2 (Run the Face Detector) and I'm trying to create a VisionImage using the CMSampleBufferRef. I'm basically just copying the code, and when I do, there is no reference to "sampleBuffer" as shown in the tutorial. I don't know what to do, as I really don't understand the CMSampleBuffer stuff.
ML Kit has a Quickstart app showing how to do that. Here is the code:
https://github.com/firebase/quickstart-ios/tree/master/mlvision

How to recognize the human voice in code on iPhone?

I want to integrate voice detection in my iPhone app. The iPhone app allows the user to search for a word by using their voice. But I don't know anything about voice recognition on iPhone. Can you please suggest any ideas, tutorials, or sample code for this?
You can also use the Google Chrome API to integrate voice recognition into your application, but there is a big problem: the API works only with FLAC-encoded files, and that encoding isn't supported natively on iOS... :/
See these two links for more information:
http://www.albertopasca.it/whiletrue/2011/09/objective-c-use-google-speech-iphone/
http://8byte8.com/blog/2012/07/voice-recognition-ios/
EDIT:
I built an application that includes voice recognition using the Nuance SDK, but it's not free to use. You can register for free and get a developer key that lets you test your application for 90 days. An example application is included; you can look at the code, and it's very easy to implement.
Good luck :)
The best approach will probably be to:
Record the voice on the phone
Send the recording to a server that runs the speech recognition software
Then return something to the phone to indicate what it should do
This approach is favorable because there are a number of open-source speech-to-text packages out there, and you are not limited by the phone's computing power, since the heavy lifting happens on the backend.
Having said that, iOS has OpenEars, which is based on Pocket Sphinx. It looks promising...
Well, voice recognition is not built into the iPhone. All you can do is record the voice on the iPhone. Once that's done, you can either code your own voice recognition module or find a third-party API and reuse it.
You can do a Google search for that.