In an OpenGL ES app I'm working on, I noticed that the glReadPixels() function doesn't behave consistently across devices/simulators. To test this, I created a bare-bones sample OpenGL app. I set the background color on an EAGLContext and tried to read the pixels back using glReadPixels() as follows:
int bytesPerPixel = 4; // GL_RGBA
int bufferSize = _backingWidth * _backingHeight * bytesPerPixel;
void* pixelBuffer = malloc(bufferSize);
glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
// pixelBuffer should now hold meaningful color/pixel data,
// but it comes back empty (all zeros) on iOS 7 devices
free(pixelBuffer);
This works as expected on the simulator for iOS 6 and 7 and on a physical iOS 6 device, but it fails on a physical iOS 7 device. The scenarios tested (YES = works, NO = doesn't):
iOS 6 simulator: YES
iOS 7 simulator: YES
iOS 6 device: YES
iOS 7 device: NO
I'm using OpenGL ES 1.1 (though a quick test shows v2 fails in the same way).
Has anyone encountered this problem? Am I missing something? The strangest part of this is that it only fails on iOS 7 physical devices.
Here is a gist with all the relevant code and the bare-bones GitHub project for reference. I've made it very easy to build and demonstrate the issue.
UPDATE:
Here is the updated gist, and the GitHub project has been updated too. I've updated the sample project so that you can easily view the memory output from glReadPixels.
Also, I have a new observation: when the EAGLContext is layer-backed ([self.context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer]), glReadPixels can successfully read data on all devices/simulators (iOS 6 and 7). However, when you toggle the flag in GLView.m so that the context is not layer-backed ([self.context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:nil]), glReadPixels exhibits the behavior described above: it works on the iOS 6 simulator and device and on the iOS 7 simulator, but fails on an iOS 7 device.
As posted in the comments, I managed to run your code and it worked, but only with your BACKING_TYPE_LAYERBACKED defined, which generates the render buffer from the view.
The other pipeline, which creates the FBO, did not work. The issue in your FBO pipeline is the call to [self.context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:nil]. Remove this line and you should be fine.
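If you do need a data-backed renderbuffer, its storage should come from glRenderbufferStorageOES rather than from a nil drawable. A minimal sketch, assuming the gist's _framebuffer/_renderbuffer ivars and backing dimensions:
// Allocate offscreen storage directly instead of calling
// -renderbufferStorage:fromDrawable: with a nil drawable.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _framebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, _backingWidth, _backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, _renderbuffer);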
To continue from Matic Oblak's answer: for those who hit this issue when using a second back buffer (data-backed, i.e. storage not from a layer) for tasks like object picking, on a device you will need to rebind the framebuffer and renderbuffer and then re-attach the renderbuffer to the framebuffer. For example, the bindBuffers function in the gist would look like this:
- (void)bindBuffers
{
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _framebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _renderbuffer);
    // Re-attach the renderbuffer; on iOS 7 devices rebinding alone is not enough.
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, _renderbuffer);
}
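A short usage sketch, with the buffer names taken from the question's gist: re-bind and re-attach right before the read-back, and glReadPixels then behaves on iOS 7 devices as well.
[self bindBuffers]; // rebind and re-attach before reading
glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);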
Related
I'm working with on-device ML in Flutter, which requires a UIImage to feed into the model. The requirement is to use the live camera stream to detect objects in near real time.
I use the Flutter camera plugin's startImageStream function and get a CameraImage from the stream. I ask the camera to return ImageFormatGroup.bgra8888 on iOS; nothing is needed for Android, since it already works fine there.
I convert the bgra8888 data to a JPEG in a spawned isolate using the image library, send the image bytes to Swift via a Flutter method channel, rebuild them into a UIImage, and feed that into the model. I feed an image every 0.5 seconds (not every frame from the camera stream, since that would be too much data for the model).
Everything seemed to work fine until I tested on older devices: the iPhone 6s and iPhone 7 Plus. The iPhone X works fine; the model responds in around 0.3 seconds, which is faster than I feed it,
while the iPhone 6s and iPhone 7 Plus take around 1.5-2 seconds.
I also tested the model natively, creating a camera view and feeding it UIImages from didOutputSampleBuffer, as in the sample code below. There my iPhone 6s and iPhone 7 Plus respond in around 0.5-0.6 seconds, which is a lot faster.
After doing some research, I found the following:
https://github.com/flutter/plugins/blob/main/packages/camera/camera_avfoundation/ios/Classes/FLTCam.m
Flutter's camera plugin actually has the same camera stream as native iOS; it creates RGBA data and sends it to Flutter:
- (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
  if (output == _captureVideoOutput) {
    CVPixelBufferRef newBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFRetain(newBuffer);
If I could get the CMSampleBufferRef and create the UIImage to feed the model directly, without sending data back and forth between Flutter and iOS and without converting the image in Flutter (which is slower), I believe the iPhone 6s and iPhone 7 Plus would run faster.
The question is: is it possible to get the CMSampleBufferRef directly, without the round trip through Flutter? I've found that FLTCam.h has a function called copyPixelBuffer. I debugged it via Xcode and it returns the image that I want, but I can't find a way to use it.
I also found that the FlutterTexture protocol mentions that a texture can be shared with Flutter:
https://api.flutter.dev/objcdoc/Protocols/FlutterTexture.html
but I have no idea how to get that shared texture.
Does anyone have an idea how I can access the image before the Flutter camera sends it to Flutter?
Another solution might be to clone their camera plugin and expose copyPixelBuffer publicly so I can access it. I haven't tried it yet, but I want it to be a last resort, since other developers would then have to maintain two versions of the camera plugin in the app.
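For what it's worth, here is a minimal Objective-C sketch of the idea, assuming you can somehow obtain the object Flutter registers as its camera texture (FLTCam conforms to FlutterTexture, which declares copyPixelBuffer). The cameraTexture reference below is exactly the part the question leaves unsolved:
#import <Flutter/Flutter.h>
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// 'cameraTexture' is hypothetical: any object conforming to FlutterTexture,
// e.g. the plugin's FLTCam instance, if you can reach it.
id<FlutterTexture> cameraTexture = /* obtained somehow */ nil;
CVPixelBufferRef pixelBuffer = [cameraTexture copyPixelBuffer]; // caller owns the buffer
if (pixelBuffer) {
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *frame = [UIImage imageWithCGImage:cgImage]; // feed this to the model
    CGImageRelease(cgImage);
    CVPixelBufferRelease(pixelBuffer);
}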
My app is currently using AVFoundation to take the raw camera data from the rear camera of an iPhone and display it on an AVCaptureVideoPreviewLayer in real time.
My goal is to conditionally apply simple image filters to the preview layer. The images aren't saved, so I do not need to capture the output. For example, I would like to toggle a setting that converts the video coming in on the preview layer to black and white.
I found a question here that seems to accomplish something similar by capturing the individual video frames in a buffer, applying the desired transformations, then displaying each frame as a UIImage. For several reasons, this seems like overkill for my project, and I'd like to avoid any performance issues it might cause.
Is this the only way to accomplish my goal?
As I mentioned, I am not looking to capture any of the AVCaptureSession's video, merely preview it.
Probably the most performant way of handling this would be to use OpenGL ES for filtering and display of these video frames. You won't be able to do much with an AVCaptureVideoPreviewLayer directly, aside from adjusting its opacity when overlaid with another view or layer.
I have a sample application here where I grab frames from the camera and apply OpenGL ES 2.0 shaders to process the video in realtime for display. In this application (explained in detail here), I was using color-based filtering to track objects in the camera view, but others have modified this code to do some neat video processing effects. All GPU-based filters in this application that display to the screen run at 60 FPS on my iPhone 4.
The only iOS device out there that supports video, yet doesn't have an OpenGL ES 2.0 capable GPU, is the iPhone 3G. If you need to target that device as well, you might be able to take the base code for video capture and generation of OpenGL ES textures, and then use the filter code from Apple's GLImageProcessing sample application. That application is built around OpenGL ES 1.1, support for which is present on all iOS devices.
However, I highly encourage looking at the use of OpenGL ES 2.0 for this, because you can pull off many more kinds of effect using shaders than you can with the fixed function OpenGL ES 1.1 pipeline.
(Edit: 2/13/2012) As an update on the above, I've now created an open source framework called GPUImage that encapsulates this kind of custom image filtering. It also handles capturing video and displaying it to the screen after being filtered, requiring as few as six lines of code to set all of this up. For more on the framework, you can read my more detailed announcement.
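For the black-and-white toggle asked about above, a GPUImage setup might look roughly like this (a sketch using GPUImage's public classes; the view placement assumes you're inside a view controller):
#import "GPUImage.h"

// Camera -> grayscale filter -> on-screen view; nothing is captured to disk.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageGrayscaleFilter *filter = [[GPUImageGrayscaleFilter alloc] init];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

[videoCamera addTarget:filter];
[filter addTarget:filteredView];
[videoCamera startCameraCapture];
Toggling back to color is then just a matter of re-routing the chain, e.g. [videoCamera removeTarget:filter] and adding the view as a direct target.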
I would recommend looking at the Rosy Writer example from the iOS Developer Library. Brad Larson's GPUImage library is pretty awesome, but it seems a little overkill for this question.
If you are just interested in adding OpenGL shaders (a.k.a. filters) to the camera preview, the workflow is to send the output of the capture session to an OpenGL view for rendering instead of an AVCaptureVideoPreviewLayer:
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(_renderer.inputPixelFormat) };
[videoOut setSampleBufferDelegate:self queue:_videoDataOutputQueue];
Then, in the captureOutput: delegate method, send the sample buffer to the OpenGL renderer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
}
In the OpenGL renderer, attach sourcePixelBuffer to a texture and you can filter it in your OpenGL shaders. A shader is a program that runs on a per-pixel basis. The Rosy Writer example also shows filtering techniques other than OpenGL.
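As a sketch of that "attach the pixel buffer to a texture" step, this is roughly the Core Video texture-cache route Rosy Writer takes (eaglContext is assumed to be your rendering EAGLContext; error handling elided):
#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Created once, with the EAGLContext used for rendering.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the camera pixel buffer in a GL texture (BGRA input assumed).
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             textureCache,
                                             sourcePixelBuffer,
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(sourcePixelBuffer),
                                             (GLsizei)CVPixelBufferGetHeight(sourcePixelBuffer),
                                             GL_BGRA,
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// ... draw a full-screen quad with your filter shader, then:
CFRelease(texture);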
Apple's AVCamFilter sample does it all.
Now that Apple is officially allowing UIGetScreenImage() to be used in iPhone apps, I've seen a number of blogs saying that this "opens the floodgates" for video capture on iPhones, including older models. But I've also seen blogs that say the fastest frame rate they can get with UIGetScreenImage() is like 6 FPS.
Can anyone share specific frame-rate results you've gotten with UIGetScreenImage() (or other approved APIs)? Does restricting the area of the screen captured improve frame rate significantly?
Also, for the wishful thinking segment of today's program, does anyone have pointers to code/library that uses UIGetScreenImage() to capture video? For instance, I'd like an API something like Capture( int fps, Rect bounds, int durationMs ) that would turn on the camera and for the given duration record a sequence of .png files at the given frame rate, copying from the given screen rect.
There is no specific frame rate. UIGetScreenImage() is not a movie recorder; it just tries to return as soon as it can, which is unfortunately still very slow.
Restricting the area of the screen captured doesn't help: UIGetScreenImage() doesn't take any input parameters, and cropping the output image could make the frame rate even worse due to the extra work.
UIGetScreenImage() returns an image of the current screen display. It's said to be slow, but whether it's fast enough depends on the use case. The video recording app iCamcorder uses this function.
According to their blog:
iCamcorder records at a remarkable average minimum of 10 frames per second and a maximum of 15 frames per second. The UIGetScreenImage method Apple recently allowed developers to use captures the current screen contents. Unfortunately it is really slow; about 15% of the App's processing time goes into calling this method.
http://www.drahtwerk.biz/EN/Blog.aspx/iCamcorder-v19-and-Giveaway/?newsID=27
So the raw performance of UIGetScreenImage() itself should be much higher than 15 fps.
To crop the returned image, you can try:
extern CGImageRef UIGetScreenImage(void);
...
// 'rect' is the region to keep, in screen pixels.
CGImageRef cgoriginal = UIGetScreenImage();
CGImageRef cgimg = CGImageCreateWithImageInRect(cgoriginal, rect);
UIImage *viewImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgoriginal);
CGImageRelease(cgimg);
Is there any way to test the iPhone camera in the simulator without having to deploy on a device? This seems awfully tedious.
There are a number of device specific features that you have to test on the device, but it's no harder than using the simulator. Just build a debug target for the device and leave it attached to the computer.
List of actions that require an actual device:
the actual phone
the camera
the accelerometer
real GPS data
the compass
vibration
push notifications...
I needed to test some custom overlays for photos. The overlays needed to be adjusted based on the size/resolution of the image.
I approached this in a way similar to Stefan's suggestion: I decided to code up a "dummy" camera response.
When the simulator is running, I execute this dummy code instead of the standard captureStillImageAsynchronouslyFromConnection:.
In this dummy code, I build up a "black photo" of the necessary resolution and then send it through the pipeline to be treated like a normal photo, essentially providing the feel of a very fast camera.
// Build a black image matching the device's 8 MP photo resolution for the current orientation.
CGSize sz = UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation]) ? CGSizeMake(2448, 3264) : CGSizeMake(3264, 2448);
UIGraphicsBeginImageContextWithOptions(sz, YES, 1);
[[UIColor blackColor] setFill];
UIRectFill(CGRectMake(0, 0, sz.width, sz.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
The image above is equivalent to the 8 MP photos that most current devices produce. Obviously, to test other resolutions you would change the size.
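The swap between the dummy path and the real capture can be made at compile time. A minimal sketch, assuming a _stillImageOutput ivar, a connection variable, and hypothetical dummyPhotoData / processPhotoData: helpers wrapping the code above and the shared downstream handling:
#if TARGET_IPHONE_SIMULATOR
    // Simulator: fabricate the black photo and push it down the normal pipeline.
    NSData *imageData = [self dummyPhotoData]; // hypothetical helper
    [self processPhotoData:imageData];         // hypothetical downstream handler
#else
    // Device: real capture path.
    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                   completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
        [self processPhotoData:imageData];
    }];
#endif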
I never tried it myself, but you can give it a try:
iCimulator
Nope (unless they've added a way to do it in 3.2, haven't checked yet).
I wrote a replacement view to use in debug mode. It implements the same API and makes the same delegate callbacks. In my case I made it return a random image from my test set. Pretty trivial to write.
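Something along these lines, with all names hypothetical; the point is just to mirror the real view's API and delegate protocol so the rest of the app doesn't notice the difference:
#import <UIKit/UIKit.h>

@class DebugCameraView;

// Stand-in for whatever delegate protocol the real camera view uses.
@protocol CameraViewDelegate <NSObject>
- (void)cameraView:(DebugCameraView *)view didCaptureImage:(UIImage *)image;
@end

// Drop-in replacement used only in debug builds.
@interface DebugCameraView : UIView
@property (nonatomic, weak) id<CameraViewDelegate> delegate;
- (void)startCapture;
@end

@implementation DebugCameraView
- (void)startCapture {
    // Return a random image from a bundled test set instead of live camera data.
    NSUInteger idx = arc4random_uniform(5);
    NSString *name = [NSString stringWithFormat:@"test_%lu", (unsigned long)idx];
    [self.delegate cameraView:self didCaptureImage:[UIImage imageNamed:name]];
}
@end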
A common reason for needing access to the camera is to make screenshots for the App Store.
Since the camera is not available in the simulator, a good trick (the only one I know) is to resize your view to the size you need, just for the time it takes to grab the screenshots. You will crop them later.
Of course, you need to have the device with the bigger screen available.
The iPad is perfect for testing layouts and making snapshots for all devices.
Screenshots for the iPhone 6+ will have to be stretched a little (scaled by 1.078125, the 2208 px target height over the iPad's 2048 - not a big deal…)
Good link to an iOS device resolutions quick reference: http://www.iosres.com/
Edit: in a recent project, where a custom camera view controller is used, I replaced the AV preview with a UIImageView in a target that I only use to run in the simulator. This way I can automate screenshots for iTunes Connect upload. Note that the camera control buttons are not in an overlay, but in a view over the camera preview.
The @Craig answer below describes another method that I found quite smart - it also works with a camera overlay, contrary to mine.
Just found a repo on GitHub that helps simulate camera functions on the iOS Simulator with images, videos, or your MacBook camera.
Repo
I'm trying to build a video recorder without jailbreaking my iPhone (I have a developer license).
I began using the PhotoLibrary private framework, but I can only reach 2 fps (too slow).
The Cycoder app gets 15 fps; I think it uses a different approach.
I tried to create a bitmap from the previewView of the CameraController, but it always returns a black bitmap.
I wonder if there's a way to directly access the video buffer, maybe with the IOKit framework.
Thanks
Marco
Here is the code:
// _createCGImageRefRepresentationInFrame: is an undocumented, private UIWindow method.
image = [window _createCGImageRefRepresentationInFrame:rectToCapture];
Marco
That is the big problem. So far I've solved it by using some fixed-size temporary buffers and detaching a thread for each buffer when it fills up. The thread saves the buffer's contents to flash memory. Launching several heavy threads (heavy because each thread accesses the flash) slows down the device and the refresh of the camera view.
Buffers cannot be big, or you will get memory warnings, and cannot be small, or you will freeze the device with too many threads and flash-memory accesses at a time.
The solution lies in balancing buffer size against the number of threads, as sketched below.
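A rough sketch of that scheme (all names are mine, not from the original code): when a capture buffer fills, hand it to a detached thread that writes it out while capture continues into the next buffer.
// Hand a full frame buffer to a background thread for writing,
// then keep capturing into a fresh buffer. All names are illustrative.
- (void)bufferDidFill:(NSData *)fullBuffer {
    [NSThread detachNewThreadSelector:@selector(flushBufferToDisk:)
                             toTarget:self
                           withObject:fullBuffer];
}

- (void)flushBufferToDisk:(NSData *)buffer {
    @autoreleasepool {
        NSString *path = [NSTemporaryDirectory()
            stringByAppendingPathComponent:[NSString stringWithFormat:@"chunk-%f.dat",
                                            [NSDate timeIntervalSinceReferenceDate]]];
        [buffer writeToFile:path atomically:NO]; // flash I/O is the bottleneck
    }
}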
I haven't yet tried using an sqlite3 database to store the images' binary data, and I don't know whether it would be a better solution.
PS: to speed up repeated method calls, avoid the usual [object method] dispatch (because of how message sending works) and instead obtain and cache the method's function pointer, as below.
From Apple's Objective-C documentation:
"The example below shows how the procedure that implements the setFilled: method might be called:
void (*setter)(id, SEL, BOOL);
int i;
setter = (void (*)(id, SEL, BOOL))[target methodForSelector:@selector(setFilled:)];
for ( i = 0; i < 1000; i++ )
    setter(targetList[i], @selector(setFilled:), YES);"
Marco
If you're intending to ever release your app on the App Store, using a private framework will ensure that it will be rejected. Video, using the SDK, simply isn't supported.
Capturing the video you see when the camera is active requires fairly sophisticated techniques, not exposed by any framework or library out of the box.
I used an undocumented UIWindow method to get the currently displayed frame as a CGImageRef.
Now it works successfully!
If you would like, and if I'm allowed, I can post the code that does the trick.
Marco