Face Detection in Preview Camera Feed on Flutter

How do we "draw a square" around detected faces in the camera preview feed in Flutter? Is there a cross-platform solution for this?
Flutter provides a camera plugin, but is there a way for us to draw a box around detected faces on the preview feed? Any thoughts on this, please?
(Example image: a camera preview feed with boxes drawn around detected faces.)

Firstly, get the image data. This can be done either by using the camera plugin's output or by communicating directly with the SurfaceView/TextureView.
Secondly, run a face detection algorithm. If you do not need cross-platform support, ML Kit works well (see https://medium.flutterdevs.com/face-detection-in-flutter-2af14455b90d?gi=f5ead7c6d7c9). If you do need cross-platform support, you can use a Rust library like https://github.com/atomashpolskiy/rustface and bind the Rust code to Flutter via https://github.com/fzyzcjy/flutter_rust_bridge. Alternatively, use a C++ face detection algorithm and bind it to Flutter (though the setup may be a bit harder).
Lastly, once you know where the face is, draw a box around it, for example with a Container widget.
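The box-drawing step boils down to scaling the detector's bounding box from camera-frame pixels into preview-widget coordinates before positioning the Container. Here is a minimal sketch of that mapping in plain Python (illustrative only; `map_face_box` is a made-up helper name, and a real Flutter app would feed these values to a positioned widget or a CustomPainter):

```python
def map_face_box(face, image_size, preview_size):
    """Scale a face bounding box from camera-frame pixels to
    preview-widget coordinates.

    face: (left, top, width, height) in camera-frame pixels
    image_size: (width, height) of the camera frame
    preview_size: (width, height) of the on-screen preview

    Assumes the preview shows the whole frame stretched to fit; a real
    app would also account for letterboxing/cropping and rotation.
    """
    sx = preview_size[0] / image_size[0]
    sy = preview_size[1] / image_size[1]
    left, top, w, h = face
    return (left * sx, top * sy, w * sx, h * sy)

# A 160x160 face at (320, 240) in a 1280x720 frame, shown in a
# 640x360 preview, maps to an 80x80 box at (160, 120).
print(map_face_box((320, 240, 160, 160), (1280, 720), (640, 360)))
```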

Related

Is it possible to change the focus of the Camera Module V2?

I am using the camera to read some text, and currently my images look quite blurry.
Is it possible to change the focus of the camera?
I am using
https://www.raspberrypi.org/products/camera-module-v2/
Yes, it's definitely possible; I have done it many times. Sometimes a specific tool to rotate the lens is even included in the camera box (check whether you have it; in my experience it is not always present). If you don't have the tool, take thin pliers and rotate the lens yourself; you can look here.

Apply custom camera filters on live camera preview - Swift

I'm looking to make a native iPhone iOS application in Swift 3/4 that uses the live preview of the back-facing camera and allows users to apply filters, as in the built-in Camera app. The idea is to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a hue slider that lets users filter for specific colours in the live preview.
All of the answers I came across for a similar problem were posted > 2 years ago and I'm not even sure if they provide me with the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app: applying the filter live as you watch the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL, or whatever you prefer), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass.
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.
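To make the hue-slider idea concrete: the per-pixel work such a filter does, whether written as a Metal shader or a Core Image kernel, is just a hue rotation in HSV space. Here is a language-agnostic sketch in plain Python using only the standard library (`shift_hue` is an illustrative name, not an API from AVFoundation or Core Image):

```python
import colorsys

def shift_hue(rgb, degrees):
    """Rotate the hue of one (r, g, b) pixel (components in 0..1)
    by `degrees`, leaving saturation and value untouched."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + degrees / 360.0) % 1.0
    return colorsys.hsv_to_rgb(h, s, v)

# Pure red rotated by 120 degrees becomes pure green.
print(shift_hue((1.0, 0.0, 0.0), 120))
```

A GPU shader applies this same function to every pixel of every frame, which is why the Metal/Core Image route stays real-time while a per-pixel CPU loop would not.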

How to show ImageTarget's feature points on AR camera

I want to show the feature points of image targets on the AR camera while scanning the image in Unity. Is there any script or option to achieve that?
I have attached a screenshot showing the kind of image feature points I want to display on the AR camera.
Basically, there is no such option. Vuforia does not expose this information, and it is not supported in any API.
I did see, however, someone mention that he was able to do it by parsing the binary '.dat' file of the dataset, but I do not think it is something you can rely on (and Vuforia does not encourage such usage either). You can take a look here: Feature Points

How to detect color live using camera on android in unity

I want to make a live color detection app using the camera on Android in Unity, something like "Color Grab" on the Play Store.
Can anyone explain how it works, or how to make it in Unity?
Well, SO isn't a script-providing service: always describe what you have already tried before asking a question. If you don't have any script, at least explain how you want to do it and the steps you think are needed.
Anyway, I'd advise you to take a look at Unity's Texture2D.ReadPixels() method:
display what you need on screen
when the user touches a place, call ReadPixels()
then retrieve the color of the desired location on the texture using Texture2D.GetPixel()
If you want to sample a larger area (not a single pixel), you can read all the pixels around the wanted location and take the average color.
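The area-averaging step above can be sketched like this (in plain Python rather than C#; `average_color` is an illustrative name, and the flat row-major pixel list mimics the array that Texture2D.GetPixels hands back):

```python
def average_color(pixels, width, cx, cy, radius):
    """Average the RGB colors in the (2*radius+1)^2 square centered
    on (cx, cy), clamped to the image bounds.

    pixels: flat list of (r, g, b) tuples, row-major, `width` per row
    """
    height = len(pixels) // width
    total = [0.0, 0.0, 0.0]
    count = 0
    for y in range(max(0, cy - radius), min(height, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(width, cx + radius + 1)):
            r, g, b = pixels[y * width + x]
            total[0] += r
            total[1] += g
            total[2] += b
            count += 1
    return tuple(c / count for c in total)

# A 2x2 image that is half red, half blue averages to purple.
pixels = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0),
          (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(average_color(pixels, 2, 0, 0, 1))
```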
Hope this helps,

iOS: Decompose UIImageView image into shapes and change their colors

I have been making an iPhone app in which I need to identify and decompose different shapes (e.g. wall, chair, book, etc.) in a UIImageView's image and change their color. So far I have implemented code that lets the user select a color and apply it to a selected area (pixel-based) using a gesture recognizer, but what I am looking for goes far beyond what I have done!
Is it possible to detect the different shapes present in a given image and change their color?
Thanks.
Whatever algorithm you use, you should build it on top of one of the best computer vision frameworks: OpenCV for iOS.
Then you might check out projects in other languages that do this kind of image segmentation with OpenCV and, armed with the theory, maybe roll your own solution ;)
Good luck.
Object recognition and detection is a very wide topic in computer science and, as far as I know, is not supported by UIImage's public methods. I think you have a long way to go to achieve your goal. Try looking up open-source iOS projects that handle object detection, or look into non-native libraries with iOS wrappers, such as OpenCV. Good luck, don't give up.
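As a toy illustration of what "decomposing into shapes and recoloring" means at the pixel level: segment a connected region of one color, then repaint it. The sketch below does a simple flood fill in plain Python; real segmentation of walls, chairs, and books needs OpenCV-grade algorithms as the answers say, and this only shows the recoloring mechanics once a region is known.

```python
from collections import deque

def recolor_region(grid, start, new_color):
    """Flood-fill the 4-connected region containing `start` with
    `new_color`. `grid` is a 2D list of color values, a toy stand-in
    for per-pixel shape segmentation. Modifies `grid` in place."""
    rows, cols = len(grid), len(grid[0])
    sr, sc = start
    old = grid[sr][sc]
    if old == new_color:
        return grid
    queue = deque([start])
    grid[sr][sc] = new_color
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_color
                queue.append((nr, nc))
    return grid

# Recolor the connected patch of 1s in the top-left corner to 9.
grid = [[1, 1, 2],
        [1, 2, 2],
        [2, 2, 2]]
print(recolor_region(grid, (0, 0), 9))
```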