I would like to render multiple H.264 mp4 videos in multiple views at the same time. The goal is to load about 8 short videos, each 100x100 pixels, and display their content at multiple positions on the screen simultaneously.
Imagine 24 squares on the screen, each showing one video out of a pool of 8 videos.
MoviePlayer doesn't work, since it only shows a single fullscreen video. An AVPlayer with multiple AVPlayerLayers is limited, because only the most recently created layer will show its content on screen (according to the documentation and my testing).
So I wrote a short video class and created an instance for every .mp4 file in my package, using AVAssetReader to read its content. On update, every video frame is retrieved, converted to a UIImage and displayed, according to the video's frame rate. Furthermore, these images are cached for fast access when looping.
- (id) initWithAsset:(AVURLAsset*)asset withTrack:(AVAssetTrack*)track
{
    self = [super init];
    if (self)
    {
        // Ask the reader for decoded BGRA frames so they can be converted to UIImages later.
        NSDictionary* settings = [NSDictionary dictionaryWithObjectsAndKeys:
                                  [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
                                  (NSString*)kCVPixelBufferPixelFormatTypeKey, nil];

        mOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:settings];
        mReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
        [mReader addOutput:mOutput];

        // This is the call that returns NO for videos 5-8 (see below).
        BOOL status = [mReader startReading];
    }
    return self;
}
- (void) update:(double)elapsed
{
    CMSampleBufferRef buffer = [mOutput copyNextSampleBuffer];
    if (buffer)
    {
        // Convert the decoded frame into a UIImage for display (and caching).
        UIImage* image = [self imageFromSampleBuffer:buffer];

        // copyNextSampleBuffer follows the Copy rule, so release the buffer here.
        CFRelease(buffer);
    }
    [...]
}
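For reference, imageFromSampleBuffer: can be the usual CVPixelBuffer-to-CGBitmapContext conversion; a minimal sketch for the 32BGRA output configured above (not necessarily the exact code used here):
- (UIImage*) imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    void*  baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width       = CVPixelBufferGetWidth(pixelBuffer);
    size_t height      = CVPixelBufferGetHeight(pixelBuffer);

    // BGRA layout matches the kCVPixelFormatType_32BGRA output settings above.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage* image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return image;
}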
Actually this works pretty well, but only for 4 videos. The fifth one never shows up. First I thought of memory issues, but I tested it on the following devices:
iPhone 3GS
iPhone 4
iPad
iPad 2
I had the same behaviour on each device: 4 videos playing in a smooth loop, no differences.
If it had been a memory issue, I would have expected either the iPad 2 to show 5 or 6 videos (due to its better hardware), or the 3GS to show only 1, or a crash somewhere.
The simulator shows all videos, though.
Debugging on the device shows, that
BOOL status = [mReader startReading];
returns NO for videos 5, 6, 7 and 8.
So, is there some kind of hardware setting (or restriction) that doesn't allow more than 4 simultaneous AVAssetReaders? I can't really explain this behaviour otherwise; I don't think all devices have the exact same amount of video memory.
Yes, iOS has an upper limit on the number of videos that can be decoded at one time. While your approach is good, I don't know of any way to work around this limit as far as having that many H.264 decoders active at once goes. If you are interested, have a look at my solution to this problem, an Xcode project called Fireworks. Basically, the demo decodes a bunch of alpha-channel videos to disk, then plays each one by mapping a portion of the video files into memory. This approach makes it possible to decode more than 4 movies at the same time without using up all the system memory and without running into the hard limit on the number of H.264 decoder objects.
Have you tried creating separate AVPlayerItems based on the same AVAsset for each AVPlayerLayer?
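Something along these lines is what I mean; an untested sketch, where videoURL and tileViews stand in for your own asset URL and the 24 square views:
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];

for (UIView* tile in tileViews)
{
    // One player item and one player per layer, all backed by the same asset.
    AVPlayerItem* item = [AVPlayerItem playerItemWithAsset:asset];
    AVPlayer* player = [AVPlayer playerWithPlayerItem:item];

    AVPlayerLayer* layer = [AVPlayerLayer playerLayerWithPlayer:player];
    layer.frame = tile.bounds;
    [tile.layer addSublayer:layer];

    [player play];
}
Whether this sidesteps the 4-decoder ceiling you are hitting, I can't say for sure.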
Here's my latest iteration of a perfectly smooth-scrolling collection view with real-time video previews (up to 16 at a time):
https://youtu.be/7QlaO7WxjGg
It even uses a cover flow custom layout and "reflection" view that mirrors the video preview perfectly. The source code is here:
http://www.mediafire.com/download/ivecygnlhqxwynr/VideoWallCollectionView.zip
Related
How can I delay the stream to a UIImageView using AVCaptureVideoPreviewLayer from the camera?
See below how I bind them; I just can't figure out how to delay it (I don't want it in real time).
AVCaptureVideoPreviewLayer* captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.imageView.bounds;
[self.imageView.layer addSublayer:captureVideoPreviewLayer];
First, you're going to want to remove the preview layer you have right now, as there is no way to delay those preview frames out of the box.
You're going to want to create a buffer. If we're talking about a few frames, you could have an NSMutableArray that you fill up on one end with UIImages while feeding your image view from the other end.
Your UIImage would come from the didOutputSampleBuffer method; use something like this: UIImage created from CMSampleBufferRef not displayed in UIImageView?
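Roughly like this; a sketch only, where frameBuffer is an NSMutableArray property you add yourself and imageFromSampleBuffer: is the conversion from the linked question:
- (void)captureOutput:(AVCaptureOutput*)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection*)connection
{
    // Convert the sample buffer to a UIImage and push it onto one end of the buffer.
    UIImage* frame = [self imageFromSampleBuffer:sampleBuffer];
    [self.frameBuffer addObject:frame];

    // Once enough frames have accumulated for the desired delay,
    // pop the oldest frame off the other end and display it.
    NSUInteger delayInFrames = 30; // e.g. about 1 second at 30 fps
    if (self.frameBuffer.count > delayInFrames)
    {
        UIImage* delayed = [self.frameBuffer objectAtIndex:0];
        [self.frameBuffer removeObjectAtIndex:0];

        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = delayed;
        });
    }
}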
Now, a few challenges you will have to deal with:
You're talking about a delay of multiple seconds; 5 seconds would be about 150 frames. Storing 150 UIImages in memory isn't going to happen, unless they're very tiny and you're on the latest devices.
You could solve that by saving the images to disk and having your array store only the paths to those images instead of the images themselves. But then you're probably going to run into performance issues, as you're doing read/write operations in real time, and your frame rate is going to suffer.
Because of that lower frame rate, you're going to have to make sure you don't lose synchronization between the recorded feed and the live feed, otherwise you'll start with a 5-second delay and end up with a much longer one.
Good luck with that; it can be done with some trade-offs (a lower frame rate...), but it can be done. (I have done something very similar myself multiple times; I can't share the code for IP reasons.)
I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames with the sound, and how do I use ffmpeg to accomplish this? If anyone has a tutorial about ffmpeg or any example, please share it here.
The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow:
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];
// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];
// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];
// go!
[captureSession startRunning];
Then the output device's delegate (here, self) has to implement the callback:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
    // also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
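If you want the raw bytes out of that buffer (to hand off to whatever networking code you write), the usual pattern looks something like this; sendFrameToServer: is a placeholder for your own method:
// inside captureOutput:didOutputSampleBuffer:fromConnection:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

CVPixelBufferLockBaseAddress(imageBuffer, 0);
void*  baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t height      = CVPixelBufferGetHeight(imageBuffer);

// Copy the raw pixel bytes out of the buffer; this is the data you would
// compress or encode before shipping it anywhere (see the caveats in the next answer).
NSData* frameData = [NSData dataWithBytes:baseAddress length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

[self sendFrameToServer:frameData]; // placeholder for your own networking code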
Sending raw frames or individual images will never work well enough for you (because of the amount of data and the number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone 3GS and later. The problem is that it is not stream-oriented: it outputs the metadata required to parse the video last. This leaves you with a few options.
1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
2) Write your own parser for the H.264/AAC output (very hard).
3) Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
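For option 3, the chunked approach looks roughly like this with AVCaptureMovieFileOutput; a sketch only, where movieOutput, chunkIndex and uploadChunkAtURL: are placeholders for your own properties and upload code:
- (void)startNextChunk
{
    NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:
                      [NSString stringWithFormat:@"chunk-%d.mov", self.chunkIndex]];
    self.chunkIndex = self.chunkIndex + 1;

    [self.movieOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:path]
                                  recordingDelegate:self];

    // Stop after a fixed chunk length; restarting drops roughly 1/4 second of video.
    [self performSelector:@selector(stopChunk) withObject:nil afterDelay:5.0];
}

- (void)stopChunk
{
    [self.movieOutput stopRecording];
}

- (void)captureOutput:(AVCaptureFileOutput*)output
didFinishRecordingToOutputFileAtURL:(NSURL*)fileURL
      fromConnections:(NSArray*)connections
                error:(NSError*)error
{
    // Process/upload the finished chunk, then immediately begin the next one.
    [self uploadChunkAtURL:fileURL];
    [self startNextChunk];
}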
There is a long and a short story to it.
This is the short one:
go look at https://github.com/OpenWatch/H264-RTSP-Server-iOS
This is a starting point.
You can get it and see how he extracts the frames. It's a small and simple project.
Then you can look at Kickflip, which has a specific callback, "encodedFrame"; it is called back once an encoded frame arrives, and from that point you can do what you want with it, e.g. send it via a websocket. There is a bunch of very hard code available for reading MPEG atoms.
Look here, and here
Try capturing video using the AVFoundation framework. Upload it to your server with HTTP streaming.
Also check out another Stack Overflow post below:
(The post below was found at this link here)
You most likely already know....
1) How to get compressed frames and audio from iPhone's camera?
You cannot do this. The AVFoundation API has prevented this from every angle. I even tried named pipes and some other sneaky Unix foo. No such luck. You have no choice but to write it to a file. In your linked post, a user suggests setting up the callback to deliver encoded frames. As far as I am aware, this is not possible for H.264 streams. The capture delegate will deliver images encoded in a specific pixel format. It is the movie writers and AVAssetWriter that do the encoding.
2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?
Yes, it is. However, you will have to use libx264, which gets you into GPL territory. That is not exactly compatible with the App Store.
I would suggest using AVFoundation and AVAssetWriter for efficiency reasons.
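A minimal AVAssetWriter setup along those lines (a sketch; outputURL and the dimensions are placeholders, and the sample buffers come from your capture callback):
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL   // outputURL: file URL for the .mp4 you are writing
                                                 fileType:AVFileTypeMPEG4
                                                    error:&error];

// Let the hardware encoder produce H.264; width/height are placeholders.
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:1280], AVVideoWidthKey,
                               [NSNumber numberWithInt:720],  AVVideoHeightKey,
                               nil];

AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                    outputSettings:videoSettings];
videoInput.expectsMediaDataInRealTime = YES;
[writer addInput:videoInput];

[writer startWriting];
// When capturing live, start the session at the first sample buffer's timestamp instead of zero.
[writer startSessionAtSourceTime:kCMTimeZero];

// ...then, in the capture callback:
// if (videoInput.readyForMoreMediaData)
//     [videoInput appendSampleBuffer:sampleBuffer];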
The AVFoundation Programming Guide states that the preferred pixel formats when capturing video are:
for iPhone 4: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA
for iPhone 3G: kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA
(There are no recommendations [yet] for the iPhone 5 or for iPad devices with cameras.)
There is, however, no help provided on how I should determine which device the app is currently running on, or what to do if the preferred pixel format is different on a future (and therefore, to my app, unknown) device.
What is the correct, and future proof, way to determine the preferred YpCbCr pixel format for any device?
I believe you can just set the video settings to nil and AVFoundation will use the most efficient format. For instance, instead of doing
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
videoOutput.videoSettings = videoSettings;
Do this instead
videoOutput.videoSettings = nil;
You may also try not setting it at all. I know in the past I would just set this to nil unless I needed to capture images in a specific format.
Edit: to get the format AVFoundation chose to use:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CGColorSpaceRef cref = CVImageBufferGetColorSpace(imageBuffer);
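Note that this returns the color space; if what you want is the pixel format itself, CVPixelBufferGetPixelFormatType on the same image buffer gives you the FourCC directly, for example:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);

if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
    NSLog(@"AVFoundation chose 420v (bi-planar, video range)");
} else if (pixelFormat == kCVPixelFormatType_32BGRA) {
    NSLog(@"AVFoundation chose 32BGRA");
}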
Is there any way to test the iPhone camera in the simulator without having to deploy on a device? This seems awfully tedious.
There are a number of device specific features that you have to test on the device, but it's no harder than using the simulator. Just build a debug target for the device and leave it attached to the computer.
List of actions that require an actual device:
the actual phone
the camera
the accelerometer
real GPS data
the compass
vibration
push notifications...
I needed to test some custom overlays for photos. The overlays needed to be adjusted based on the size/resolution of the image.
I approached this in a way similar to the suggestion from Stefan: I decided to code up a "dummy" camera response.
When the simulator is running, I execute this dummy code instead of the standard captureStillImageAsynchronouslyFromConnection:.
In this dummy code, I build up a "black photo" of the necessary resolution and then send it through the pipeline to be treated like a normal photo, essentially providing the feel of a very fast camera.
CGSize sz = UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation]) ? CGSizeMake(2448, 3264) : CGSizeMake(3264, 2448);
UIGraphicsBeginImageContextWithOptions(sz, YES, 1);
[[UIColor blackColor] setFill];
UIRectFill(CGRectMake(0, 0, sz.width, sz.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
The image above is equivalent to the 8 MP photos that most current devices produce. Obviously, to test other resolutions you would change the size.
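The switch between the dummy path and the real capture can be a simple compile-time check; a sketch, where dummyBlackPhotoData, processPhotoData:, stillImageOutput and connection are placeholders for your own pipeline:
#if TARGET_IPHONE_SIMULATOR
    // Simulator: fabricate the black photo and feed it into the normal pipeline.
    NSData *imageData = [self dummyBlackPhotoData];
    [self processPhotoData:imageData];
#else
    // Device: real capture.
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            [self processPhotoData:imageData];
        }];
#endif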
I never tried it, but you can give it a try!
iCimulator
Nope (unless they've added a way to do it in 3.2, haven't checked yet).
I wrote a replacement view to use in debug mode. It implements the same API and makes the same delegate callbacks. In my case I made it return a random image from my test set. Pretty trivial to write.
A common reason for the need of accessing the camera is to make screenshots for the AppStore.
Since the camera is not available in the simulator, a good trick (the only one I know) is to resize your view to the size you need, just for the time it takes to grab the screenshots. You will crop them later.
Sure, you need to have the device with the bigger screen available.
The iPad is perfect to test layouts and make snapshots for all devices.
Screenshots for the iPhone 6+ will have to be stretched a little (scaled by 1.078125, not a big deal…).
Good quick reference for iOS device resolutions: http://www.iosres.com/
Edit: in a recent project that uses a custom camera view controller, I replaced the AV preview with a UIImageView in a target that I only use for running in the simulator. This way I can automate screenshots for iTunes Connect uploads. Note that the camera control buttons are not in an overlay, but in a view over the camera preview.
The @Craig answer below describes another method that I found quite smart; unlike mine, it also works with a camera overlay.
Just found a repo on GitHub that helps simulate camera functions in the iOS Simulator with images, videos, or your MacBook camera.
Repo
I'm building an app which includes a number of image sequences (5 sequences with about 80 images each). It runs nicely in the iPhone simulator, but causes my iPhone to reboot when I test it. By the way, each PNG image is about 8 KB in size.
Has anyone successfully built a similar app?
Am I using too many resources for the iPhone to handle?
Anyone?
UPDATE:
Thanks to all for you answers! I've modified my code to use [UIImage imageWithContentsOfFile:] instead of [UIImage imageNamed:]
However I'm still unable to prevent the app from crashing my iPhone.
(please note that my PNGs are not that big: about 400x400 px / 8 KB)
Does anyone have any suggestions?
Here's my code:
// code snippet:
myFrames = [[NSMutableArray alloc] initWithCapacity:maxFrames];
NSMutableString *curFrame;
num = 0;
// loop (maxframes = 80)
for(int f = 1; f < maxFrames+1; f++)
{
    curFrame = [NSMutableString stringWithString:tName];
    if(f < 10) [curFrame appendString:[NSString stringWithFormat:@"00%i",f]];
    else if(f>9 && f<100) [curFrame appendString:[NSString stringWithFormat:@"0%i",f]];
    else [curFrame appendString:[NSString stringWithFormat:@"%i",f]];
    UIImage *img = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:curFrame ofType:@"png"]];
    if(img) [myFrames addObject:img];
    [img release];
}
// animate the images!
self.animationImages = myFrames;
self.animationDuration = (maxFrames * .05); // Seconds
[self startAnimating];
The best way to find out is to run the application under Instruments using Leaks or Object Alloc. If you see an upward trend that keeps rising, you might have a leak.
If you're using [UIImage imageNamed:], you should be aware that it pre-caches an optimized version which takes up more memory when compared with [UIImage imageWithContentsOfFile:]. Additionally, until iPhone 3.0, the cache created by [UIImage imageNamed:] doesn't get released when there's a memory warning.
The current-gen iPhone only has 128 MB of RAM, some of which is used by the OS itself. A 320x480 image, fully uncompressed with an alpha channel, can take 614 KB. If you have 400 unique full-screen images, that's well over 128 MB of RAM, assuming they are all loaded and cached uncompressed.
The number one reason why an app would not crash in the simulator but does on the phone is memory.
In the iPhone simulator, AFAIK, memory is not limited to 128 MB, while on the iPhone, once usage reaches 128 MB, it restarts. So check your memory usage in the simulator. You have to change the way you are loading the images and/or check for leaks. Also check whether you're getting low memory warnings by implementing the memory warning callbacks (didReceiveMemoryWarning on your view controllers, applicationDidReceiveMemoryWarning: on the app delegate).
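A minimal sketch of such a hook, in the view controller that owns the animating image view (animatedView here is a placeholder for the UIImageView from the question's code):
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];

    // Drop anything that can be rebuilt later, e.g. the cached animation frames.
    [animatedView stopAnimating];
    animatedView.animationImages = nil;
    NSLog(@"Low memory warning received");
}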
I've seen apps run in the simulator and not on the phone because of improper PNG formatting (even a single improperly formatted image can cause this crash). Check to make sure that the format of your images matches those of PNG files provided by apple in their example apps.
That being said, 400 full-screen images would easily cause it to run out of memory, since in memory they will occupy far more than 8 KB each. Not sure how big those images are, but if they're all in memory they will need to be very, very small on the iPhone.
The first answer to your question states that while your PNGs may take up only 8 KB on disk, that is the compressed on-disk form. When loaded into memory, an image is decompressed and is much larger than 8 KB; at 32 bits per pixel, a 400x400 image will be 640 KB.
Even without the alpha channel, you're looking at 480 KB. At 480 KB x 80 frames, that is 38.4 MB per sequence, which is definitely creeping past the memory the iPhone has available to give your app at once. Here is an article about some of the troubles with obtaining a substantial amount of memory from iPhone OS.