Flutter: Access CMSampleBufferRef data from Flutter Camera

I'm working on on-device ML in a Flutter app that needs a UIImage to feed into the model. The requirement is to use a live camera stream to detect objects in near real-time.
I use the Flutter camera plugin with the startImageStream function and get a CameraImage from the stream. On iOS I ask the camera to return ImageFormatGroup.bgra8888; Android needs no special handling since it already works fine.
In a spawned isolate I convert the BGRA8888 frame to a JPEG using the image package, send the image bytes to Swift via a Flutter method channel, rebuild the bytes into a UIImage, and feed that into the model. I feed an image every 0.5 seconds (I don't feed every frame from the camera stream, since that would be too much data for the model).
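For context, the receiving side looks roughly like this (a minimal sketch, not my exact code; the "detector" channel and "detect" method names are placeholders):

import Flutter
import UIKit

// Minimal sketch of the iOS side of the method channel.
// "detector" and "detect" are placeholder names, not the real ones.
func registerDetectorChannel(controller: FlutterViewController) {
    let channel = FlutterMethodChannel(name: "detector",
                                       binaryMessenger: controller.binaryMessenger)
    channel.setMethodCallHandler { call, result in
        guard call.method == "detect",
              let bytes = call.arguments as? FlutterStandardTypedData,
              // Rebuild the JPEG bytes sent from Dart into a UIImage.
              let image = UIImage(data: bytes.data) else {
            result(FlutterError(code: "bad_args", message: nil, details: nil))
            return
        }
        // feed `image` into the ML model here
        result(nil)
    }
}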
Everything seemed to work fine until I tested on older devices: the iPhone 6s and iPhone 7 Plus. The iPhone X works fine; the model responds in around 0.3 seconds, which is faster than I feed it,
while the iPhone 6s and iPhone 7 Plus take around 1.5-2 seconds.
I tested the model natively by creating a camera view and feeding the UIImage from didOutputSampleBuffer (similar to the sample code below). On the iPhone 6s and iPhone 7 Plus it responds in around 0.5-0.6 seconds, which is a lot faster.
After some research I found that the Flutter camera plugin
https://github.com/flutter/plugins/blob/main/packages/camera/camera_avfoundation/ios/Classes/FLTCam.m
uses the same iOS camera stream, converts it to RGBA, and sends it to Flutter:
- (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
  if (output == _captureVideoOutput) {
    CVPixelBufferRef newBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFRetain(newBuffer);
    // ...
If I could get the CMSampleBufferRef, create the UIImage, and feed it to the model directly, without sending data back and forth between Flutter and iOS and without the slow image conversion in Flutter, I believe the iPhone 6s and iPhone 7 Plus would run faster.
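The native conversion I have in mind is roughly the following (a minimal sketch; in production you would reuse a single CIContext instead of creating one per frame):

import AVFoundation
import CoreImage
import UIKit

// Minimal sketch: convert a CMSampleBuffer from the capture callback into a UIImage.
func uiImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext() // reuse this in real code
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}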
The question is: is it possible to get the CMSampleBufferRef directly, without going through Flutter? I've found that FLTCam.h has a function called copyPixelBuffer. I debugged it via Xcode and it returns the image I want, but I can't find a way to call it from my code.
I also found that the FlutterTexture protocol
https://api.flutter.dev/objcdoc/Protocols/FlutterTexture.html
mentions that a texture can be shared with Flutter, but I have no idea how to get that shared texture.
Does anyone have an idea how I can access the image before the Flutter camera sends it to Flutter?
I have another possible solution: clone their camera plugin and expose copyPixelBuffer publicly so I can access it. I haven't tried it yet, but I want it to be a last resort, since other developers would then have to maintain two versions of the camera plugin in our app.
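If I did fork the plugin, the idea would be roughly this (a hypothetical sketch: exposedCam stands for whatever handle the fork would make public; nothing like it exists in the current plugin):

import Flutter
import CoreVideo

// Hypothetical: assumes a forked camera plugin that makes its FLTCam instance
// reachable. FLTCam adopts FlutterTexture, which declares copyPixelBuffer.
func grabCurrentFrame(from exposedCam: FlutterTexture) -> CVPixelBuffer? {
    // The buffer comes back retained (Cocoa "copy" naming rule), hence takeRetainedValue().
    return exposedCam.copyPixelBuffer()?.takeRetainedValue()
}

The returned CVPixelBuffer could then go through the same CIImage-to-UIImage conversion sketched above.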

Related

Overlay text and images to YouTube live stream from Flutter app

I am looking into creating a Flutter mobile app that live streams to YouTube using the YouTube Live Streaming API. I have checked the API and found that it does not offer a way to overlay text and images onto the livestream. How would I achieve this using Flutter?
I imagine this involves using the Stack widget to overlay content on top of the user's video feed. However, this would somehow need to be encoded into the video stream that is sent to YouTube.
This type of work is usually done with FFmpeg.
See this discussion for more info: https://video.stackexchange.com/questions/12105/add-an-image-overlay-in-front-of-video-using-ffmpeg
FFmpeg for mobile devices is made available by this project:
https://github.com/tanersener/mobile-ffmpeg
And there is a Flutter package called flutter_ffmpeg that exposes these features in Flutter:
https://pub.dev/packages/flutter_ffmpeg
TLDR: You can use CameraController (camera package) and Canvas in Flutter for drawing the text. Unfortunately, CameraController.startImageStream is not documented in the API docs and has been an open GitHub issue for over a year.
Every time the camera plugin gives you a video frame via controller.startImageStream((CameraImage img) { /* your code */ }), you can draw the image onto a canvas, draw the text, capture the result, and call the YouTube API. You can see an example of using the video buffer in the TensorFlow Lite package here, or read more info at this issue.
On this same canvas, you can draw whatever you want, like drawArc, drawParagraph, drawPoints. It gives you ultimate flexibility.
A simple example of capturing the canvas contents is below, where I had previously saved the strokes in state. (You should use details about the text instead, and just pull the latest frame from the camera.)
Future<img.Image> getDrawnImage() async {
  // Record drawing commands into a Picture.
  ui.PictureRecorder recorder = ui.PictureRecorder();
  Canvas canvas = Canvas(recorder);
  canvas.drawColor(Colors.white, BlendMode.src);
  // Replay the saved strokes onto the canvas (draw your text here instead).
  StrokesPainter painter = StrokesPainter(
      strokes: InheritedStrokesHistory.of(context).strokes);
  painter.paint(canvas, deviceData.size);
  // Rasterize the recording and pull out the raw RGBA bytes.
  ui.Image screenImage = await (recorder.endRecording().toImage(
      deviceData.size.width.floor(), deviceData.size.height.floor()));
  ByteData imgBytes =
      await screenImage.toByteData(format: ui.ImageByteFormat.rawRgba);
  return img.Image.fromBytes(deviceData.size.width.floor(),
      deviceData.size.height.floor(), imgBytes.buffer.asUint8List());
}
I was going to add a link to an app I made which allows you to draw and save a screenshot of the drawing to your phone gallery (and also uses TensorFlow Lite), but the code is a little complicated. It's probably best to clone it and see what it does if you are struggling with capturing the canvas.
I initially could not find the documentation on startImageStream (and had forgotten I'd used it for TensorFlow Lite), so I originally suggested using MethodChannel.invokeMethod and writing iOS/Android-specific code. Keep that in mind if you find any limitations in Flutter, although I don't think Flutter will limit you in this problem.

How to make video captured by the front camera not be mirrored on Android?

I'm recording video using MediaRecorder. With the back camera it works fine, but with the front camera the captured video is flipped/mirrored: an item on the right appears on the left. The camera preview looks fine; only the final captured video is flipped.
Here is what the camera preview looks like (screenshot omitted).
But the final video appears like this: everything on the left-hand side appears on the right-hand side (screenshot omitted).
What I tried so far:
I tried to apply a matrix when preparing the recorder, but it doesn't seem to change anything.
private boolean prepareRecorder(int cameraId) {
    // Create a new instance of MediaRecorder
    mRecorder = new MediaRecorder();
    setCameraDisplayOrientation(this, cameraId, mCamera);
    int angle = getVideoOrientationAngle(this, cameraId);
    mRecorder.setOrientationHint(angle);
    if (cameraId == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        // Note: this Matrix is created but never applied to the recorder or
        // the frames, which is why it has no visible effect.
        Matrix matrix = new Matrix();
        matrix.preScale(1.0f, -1.0f);
    }
    // all other code to prepare the recorder here
}
I have already read all the questions below, but none of them solved my problem. For information, I'm using SurfaceView for the camera preview, so this question doesn't help.
1) Android flip front camera mirror flipped video
2) How to keep android from inverting the image from the front facing camera?
3) Prevent flipping of the front facing camera
So my questions are:
1) How can I capture a video with the front camera that is not mirrored (exactly the same as the camera preview)?
2) How can I achieve this when the camera preview uses SurfaceView rather than TextureView? (All the questions mentioned above are about TextureView.)
All possible solutions are most welcome. Thanks!
EDIT
I made two short video clips to clarify the problem; please download and take a look:
1) The camera preview during recording
2) The final recorded video
First, if the system camera app produces video similar to your app's, then you didn't do anything wrong. Now it's time to understand what happens with front-facing camera video recording.
The front-facing camera is no different from the rear-facing camera in the way it captures still pictures or video. The difference is in how the phone displays the camera preview on the screen. To make it look more natural to the user, Android (and all other systems) mirrors the preview, so that you can see yourself as if in a mirror.
It is important to understand that this only applies to the way the preview is presented to you. If you pick up any video conferencing app, connect two devices that you hold in two hands, and look at yourself, you will see to your surprise that the two instances of yourself are flipped.
This is not a bug, this is the natural way to present the video to the other party.
See the sketch (images omitted): the first picture shows how you see the scene; the second shows how your peer sees the same scene.
Normally, recording of a video is done from the point of view of your peer, as in the second picture. This is the natural setup for, e.g., video conferencing.
But Snapchat and some other social apps choose to store the front-facing video clip as if you record it from the mirror (as if the recorder is in your hand on the first picture). Some people like this feature, others hate it (see https://forums.androidcentral.com/general-help-how/664539-front-camera-pics-mirrored-reversed-only-snapchat.html and https://www.reddit.com/r/nexus6/comments/3846ay/has_anyone_found_a_fix_for_snapchat_flipping)
You cannot use MediaRecorder for that, but you can use the lower-level MediaCodec API to record processed frames. You need to flip each frame 'manually', and this may be a significant performance hit, because normally MediaRecorder 'connects' the camera to the hardware encoder in a very efficient way, without even needing to copy the pixels to user memory. This answer shows how you can manipulate the way the camera is rendered to a texture.
You can achieve this by recording video manually from the surface view. In that case the preview and the recording will match exactly.
I've been using this library for this purpose:
https://github.com/spaceLenny/recordablesurfaceview
Here is a guide on how to use it (not with the camera, but with OpenGL drawing): https://withintent.uncorkedstudios.com/recording-screen-video-on-android-with-recordablesurfaceview-451c9daa213e

Is there any standard file-open dialog in iPhone programming?

I want to create an application that will process images (mostly photographs from the phone's camera).
So first I need to provide a way to open an image file from the phone's memory.
How can I do that?
Sorry for such a basic question, but I am a newbie in iPhone programming (yet :)).
P.S. I use Xcode and Cocoa.
You can use UIImagePickerController to get a standard interface for selecting a photo from the user's library or camera roll. Depending on the configuration, it can also be used to capture a new image with the camera. Note, however, that you get a UIImage back, not a file; if you want to upload it somewhere, you will first have to create a file or NSData object from the image, using either UIImagePNGRepresentation() or UIImageJPEGRepresentation().
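In modern Swift, that flow looks roughly like this (a minimal sketch; the class and method names around the picker are illustrative):

import UIKit

// Minimal sketch: present the standard photo picker and receive a UIImage.
class PhotoPickerExample: UIViewController, UIImagePickerControllerDelegate,
                          UINavigationControllerDelegate {
    func pickPhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary // or .camera to capture a new image
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        // You get a UIImage, not a file; serialize it yourself before uploading.
        let jpegData = image.jpegData(compressionQuality: 0.9)
        _ = jpegData // upload or save these bytes
    }
}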

Replacing FaceTime video calls (iOS 4, iPhone)

I would like to change what the camera 'captures' to something else during a video call.
Let's say I have an image that I want the other side to see instead of the video sent from the camera.
I want to 'hack' the camera on the iPhone and get control over the data being sent.
Is this feasible?
Not without jailbreaking and a lot of work inside MobilePhone.app
Start by running class-dump-z on /Applications/MobilePhone.app/MobilePhone

How do I test a camera in the iPhone simulator?

Is there any way to test the iPhone camera in the simulator without having to deploy on a device? This seems awfully tedious.
There are a number of device-specific features that you have to test on the device, but it's no harder than using the simulator. Just build a debug target for the device and leave it attached to the computer.
List of actions that require an actual device:
the actual phone
the camera
the accelerometer
real GPS data
the compass
vibration
push notifications...
I needed to test some custom overlays for photos. The overlays needed to be adjusted based on the size/resolution of the image.
I approached this in a way similar to Stefan's suggestion: I decided to code up a "dummy" camera response.
When the simulator is running, I execute this dummy code instead of the standard captureStillImageAsynchronouslyFromConnection.
In this dummy code, I build a "black photo" of the necessary resolution and then send it through the pipeline to be treated like a normal photo, essentially providing the feel of a very fast camera.
// Build a black image at the device's photo resolution (portrait or landscape).
CGSize sz = UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation])
    ? CGSizeMake(2448, 3264) : CGSizeMake(3264, 2448);
UIGraphicsBeginImageContextWithOptions(sz, YES, 1);
[[UIColor blackColor] setFill];
UIRectFill(CGRectMake(0, 0, sz.width, sz.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// JPEG-encode it, just like a real capture would produce.
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
The image above is equivalent to the 8 MP photos that most current devices produce. Obviously, to test other resolutions you would change the size.
I never tried it, but you can give it a try!
iCimulator
Nope (unless they've added a way to do it in 3.2, haven't checked yet).
I wrote a replacement view to use in debug mode. It implements the same API and makes the same delegate callbacks. In my case I made it return a random image from my test set. Pretty trivial to write.
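A sketch of that idea in Swift (the names are assumptions, not the original author's code): a stub that returns a random bundled test image when running in the simulator.

import UIKit

// Hypothetical debug stand-in for the camera: in the simulator it returns a
// random image from a bundled test set via the same callback shape the real
// capture path would use.
func captureStillImage(completion: @escaping (UIImage?) -> Void) {
    #if targetEnvironment(simulator)
    let testImages = ["test1.jpg", "test2.jpg", "test3.jpg"] // bundled fixtures
    completion(UIImage(named: testImages.randomElement()!))
    #else
    // Real capture path (e.g. AVCapturePhotoOutput) runs on a device.
    completion(nil)
    #endif
}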
A common reason for needing the camera is to take screenshots for the App Store.
Since the camera is not available in the simulator, a good trick (the only one I know) is to resize your view to the size you need, just long enough to take the screenshots. You will crop them later.
Of course, you need to have a device with a bigger screen available.
The iPad is perfect for testing layouts and taking snapshots for all devices.
Screenshots for the iPhone 6+ will have to be stretched a little (scaled by 1.078125, not a big deal…).
Good quick reference for iOS device resolutions: http://www.iosres.com/
Edit: In a recent project that uses a custom camera view controller, I replaced the AVPreview with a UIImageView in a target that I only use to run in the simulator. This way I can automate screenshots for iTunes Connect upload. Note that the camera control buttons are not in an overlay, but in a view over the camera preview.
The answer from @Craig below describes another method that I found quite smart; it also works with a camera overlay, contrary to mine.
I just found a repo on GitHub that helps simulate camera functions on the iOS Simulator with images, videos, or your MacBook camera:
Repo