Convert a camera-captured image into a CamScanner-style image in Flutter

I have implemented an app that captures an image from the camera, converts it to a PDF, and shares it. But I want to display the captured image the way CamScanner does, using image processing. Can anyone suggest a Dart library for this task?

I have found a suitable answer for my requirement: the "edge_detection 1.0.5" package, which detects the edges of the captured image so the final output looks like CamScanner's output.
https://pub.dev/packages/edge_detection
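For reference, a minimal sketch of calling the plugin, based on its README at the time (the API has changed across releases, so check the version you install):

import 'package:edge_detection/edge_detection.dart';

// Launches the native scanner UI, lets the user adjust the detected
// edges, and returns the path of the cropped image. Assumption: the
// getter-style API of the 1.x releases; later versions take a save
// path and return a bool instead.
Future<String> scanDocument() async {
  final String imagePath = await EdgeDetection.detectEdge;
  return imagePath; // feed this into the existing PDF/share flow
}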

Related

Flutter: multiple-image-to-video slideshow with audio using FFmpegKit

I want to render a video file from a set of images with some animation.
I have tried several solutions but none of them worked, and I don't know how to write the FFmpegKit command. I also tried combining a single image with an MP3 first, but that did not work either.
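A hedged sketch of one way to do this with the ffmpeg_kit_flutter package: a single FFmpeg command turns a numbered image sequence plus an MP3 into a video. The file names and the three-seconds-per-image rate are assumptions, and -c:v libx264 requires one of the package's GPL variants:

import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';

// Assumed layout: $dir/img001.jpg, $dir/img002.jpg, ... and $dir/audio.mp3.
Future<void> makeSlideshow(String dir) async {
  final cmd = '-framerate 1/3 -i $dir/img%03d.jpg -i $dir/audio.mp3 '
      '-c:v libx264 -r 30 -pix_fmt yuv420p -shortest $dir/slideshow.mp4';
  final session = await FFmpegKit.execute(cmd);
  if (ReturnCode.isSuccess(await session.getReturnCode())) {
    // slideshow.mp4 was written; hand it to a player or share sheet here.
  }
}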

How to recognize an object in a photo with Flutter?

I have a book with pictures. Each picture is attached to a video, and when the camera hovers over a picture, the application should open another screen and play the video associated with that picture. I tried to use Teachable Machine, but it can't cope when there are too many pictures. Any ideas are highly appreciated. Thanks
You could use Firebase's Object Detection and Tracking together with the camera plugin's image-stream feature.
Basically, you process each frame the camera plugin gives you with Firebase's ML feature, and once you detect an object you can perform any action on it.
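A rough sketch of that loop with the camera plugin; detectObjects is a stand-in for whatever ML backend you wire in (Firebase ML Kit, a TFLite model, ...), not a real API:

import 'package:camera/camera.dart';

bool _busy = false;

// Stub standing in for the actual detector call; replace with your ML code.
Future<List<String>> detectObjects(CameraImage frame) async => [];

void startDetection(CameraController controller) {
  controller.startImageStream((CameraImage frame) async {
    if (_busy) return; // drop frames while the previous one is processed
    _busy = true;
    final labels = await detectObjects(frame);
    if (labels.isNotEmpty) {
      // e.g. navigate to the video screen for the recognized picture
    }
    _busy = false;
  });
}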
You can use TensorFlow Lite: https://www.tensorflow.org/lite
There are Flutter packages for it, for example: https://pub.dev/packages/tflite
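A minimal sketch with that plugin; the asset names are placeholders, and you would first train and convert a model (Teachable Machine can export to TFLite):

import 'package:tflite/tflite.dart';

// Classify a still photo against a custom model bundled as assets.
Future<List?> classifyPicture(String imagePath) async {
  await Tflite.loadModel(
    model: 'assets/pictures.tflite',      // hypothetical asset
    labels: 'assets/pictures_labels.txt', // hypothetical asset
  );
  return Tflite.runModelOnImage(path: imagePath);
}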

Overlay text and images onto a YouTube live stream from a Flutter app

I am looking into creating a Flutter mobile app that live streams to YouTube using the YouTube Live Streaming API. I have checked the API and found that it does not offer a way to overlay text and images onto the live stream. How would I achieve this in Flutter?
I imagine this involves using the Stack widget to overlay content on top of the user's video feed. However, this would somehow need to be encoded into the video stream sent to YouTube.
This type of work is usually done with FFmpeg.
See this discussion for more info: https://video.stackexchange.com/questions/12105/add-an-image-overlay-in-front-of-video-using-ffmpeg
FFmpeg for mobile devices is made available by this project:
https://github.com/tanersener/mobile-ffmpeg
And then, as always, there is a Flutter package, flutter_ffmpeg, that exposes these features to Flutter:
https://pub.dev/packages/flutter_ffmpeg
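As an illustration of the overlay filter driven from Flutter (file names are made up; a live stream would take the camera feed as input and an RTMP ingest URL as output instead of local files):

import 'package:flutter_ffmpeg/flutter_ffmpeg.dart';

final FlutterFFmpeg _ffmpeg = FlutterFFmpeg();

// Burn overlay.png into input.mp4 at position (10,10), keeping the audio.
Future<void> burnOverlay() async {
  final int rc = await _ffmpeg.execute(
      '-i input.mp4 -i overlay.png '
      '-filter_complex "overlay=10:10" -codec:a copy output.mp4');
  print('FFmpeg exited with rc $rc');
}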
TLDR: You can use CameraController (camera package) and Canvas in Flutter for drawing the text. Unfortunately, CameraController.startImageStream is not documented in the API docs and has been the subject of a GitHub issue for over a year.
Every time the camera plugin gives you a video frame (controller.startImageStream((CameraImage img) { /* your code */ })), you can draw the image onto the canvas, draw the text, capture the result, and call the YouTube API. You can see an example of using the video buffer in the TensorFlow Lite package here, or read more at this issue.
On this same canvas, you can draw whatever you want, like drawArc, drawParagraph, drawPoints. It gives you ultimate flexibility.
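For instance, a caption could be drawn with drawParagraph like this (a sketch; it assumes a Canvas like the one created in the snippet below):

import 'dart:ui' as ui;

// Lay out a short text and paint it near the top-left of the canvas.
void drawCaption(ui.Canvas canvas) {
  final builder = ui.ParagraphBuilder(ui.ParagraphStyle(fontSize: 24))
    ..pushStyle(ui.TextStyle(color: const ui.Color(0xFFFFFFFF)))
    ..addText('LIVE from Flutter');
  final paragraph = builder.build()
    ..layout(const ui.ParagraphConstraints(width: 300));
  canvas.drawParagraph(paragraph, const ui.Offset(16, 16));
}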
A simple example of capturing the canvas contents is below, where I have previously saved the strokes in state. (You should use the details of your text instead, and just pull the latest frame from the camera.):
import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/material.dart';
import 'package:image/image.dart' as img;

Future<img.Image> getDrawnImage() async {
  // Record the draw calls into a picture instead of painting on screen.
  ui.PictureRecorder recorder = ui.PictureRecorder();
  Canvas canvas = Canvas(recorder);
  canvas.drawColor(Colors.white, BlendMode.src);

  // StrokesPainter and InheritedStrokesHistory are app-specific classes
  // holding the strokes saved in state; deviceData holds MediaQuery data.
  StrokesPainter painter = StrokesPainter(
      strokes: InheritedStrokesHistory.of(context).strokes);
  painter.paint(canvas, deviceData.size);

  // Rasterize the recording, then convert it to raw RGBA bytes for the
  // image package.
  ui.Image screenImage = await recorder.endRecording().toImage(
      deviceData.size.width.floor(), deviceData.size.height.floor());
  ByteData imgBytes =
      await screenImage.toByteData(format: ui.ImageByteFormat.rawRgba);
  return img.Image.fromBytes(deviceData.size.width.floor(),
      deviceData.size.height.floor(), imgBytes.buffer.asUint8List());
}
I was going to add a link to an app I made which lets you draw and save a screenshot of the drawing to your phone gallery (it also uses TensorFlow Lite), but the code is a little complicated. It's probably best to clone it and see what it does if you are struggling with capturing the canvas.
I initially could not find the documentation on startImageStream, and having forgotten that I had used it for TensorFlow Lite, I suggested using MethodChannel.invokeMethod and writing iOS/Android-specific code. Keep that in mind if you run into any limitations in Flutter, although I don't think Flutter will limit you in this problem.

Getting UIImage size when an image is picked from the Photo Library

In my application I am uploading an image picked from the photo library. Before uploading, I need to show the size of the image so the user knows how much data will be transferred over the network.
To achieve this I need to get the image size. I tried UIImagePNGRepresentation, which gives a size of some MBs, but when I dump the image with the data received from UIImagePNGRepresentation, the size is shown in some KBs. Why is this happening?
Does iOS internally compress the image data? And how do I get the image path when picking an image from the photo library?
Thanks,
Sagar

Creating a video from the visible window or view

I have a requirement to record video from the visible view, i.e. not from the camera, like in the Talking Tom application. Can anyone suggest a solution?
You can take screenshots (using UIGetScreenImage) and store the images in an array.
Then convert them into a video using an MPEG encoder.