Transparent video on iPhone

I have a video which was recorded on blue screen.
Is there a way to make it transparent on iOS devices? I know I can do it with pixel shaders and OpenGL, but I'm afraid that the process of decoding a video frame, uploading it as an OpenGL texture, and eliminating fragments with a pixel shader will be too slow.
Any suggestions?

It sounds like you want to do some sort of chroma keying with your video. I just added the capability to do this to my GPUImage framework, which as the name indicates uses GPU-based processing to perform these operations many times faster than CPU-bound filters could.
The SimpleVideoFileFilter example in the framework shows how to load a movie, filter it, and encode it back to disk. Modifying this to perform chroma keying gives the following:
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"sample" withExtension:@"m4v"];
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
filter = [[GPUImageChromaKeyBlendFilter alloc] init];
[filter setColorToReplaceRed:0.0 green:0.0 blue:1.0];
[filter setThresholdSensitivity:0.4];
[movieFile addTarget:filter];
UIImage *inputImage = [UIImage imageNamed:@"background.jpg"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture addTarget:filter];
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[filter addTarget:movieWriter];
[movieWriter startRecording];
[movieFile startProcessing];
[movieWriter setCompletionBlock:^{
    [filter removeTarget:movieWriter];
    [movieWriter finishRecording];
}];
The above code will load a movie from the application's resources called sample.m4v, feed it into a chroma key filter that is set to key off of pure blue with a sensitivity of 0.4, attach a background image to use for the chroma keying, and then send all that to a movie encoder which writes Movie.m4v in the application's /Documents directory.
You can adjust the threshold and specific blue tint to match your needs, as well as replace the input image with another movie or other source as needed. This process can also be applied to live video from the iOS device's camera, and you can display the results to the screen if you'd like.
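For example, to key over a second movie instead of a still image, the background input could be swapped out roughly like this (a sketch; "background.m4v" and the backgroundMovie variable are hypothetical, not part of the original example):
NSURL *backgroundURL = [[NSBundle mainBundle] URLForResource:@"background" withExtension:@"m4v"];
backgroundMovie = [[GPUImageMovie alloc] initWithURL:backgroundURL];
[backgroundMovie addTarget:filter];   // second input to the chroma key blend filter
[backgroundMovie startProcessing];    // start it alongside [movieFile startProcessing]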
On an iPhone 4 running iOS 5.0, chroma keying takes 1.8 ms (500+ FPS) for 640x480 frames of video, 65 ms for 720p frames (15 FPS). The newer A5-based devices are 6-10X faster than that for these operations, so they can handle 1080p video without breaking a sweat. I use iOS 5.0's fast texture caches for both frame uploads and retrievals, which accelerates the processing on that OS version over 4.x.
The one caution I have about this is that I haven't quite gotten the audio to record right in my movie encoding, but I'm working on that right now.

If you mean you want to render the video but make the blue pixels transparent, then the only efficient way to do this is with OpenGL. This should be quite feasible on iOS devices: video decoding is handled in hardware, and I have several projects where I transfer video frames to OpenGL textures using glTexSubImage2D, which works fine.
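As a rough illustration of the shader side of that approach (not code from the answer above; the uniform names and the 0.4 tolerance are illustrative), a fragment shader that discards near-blue pixels could look like this, here embedded as an Objective-C string constant:
static NSString *const kBlueKeyFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D videoFrame;\n"
    @"void main()\n"
    @"{\n"
    @"    lowp vec4 color = texture2D(videoFrame, textureCoordinate);\n"
    @"    // drop fragments close to pure blue; tune the 0.4 tolerance to taste\n"
    @"    if (distance(color.rgb, vec3(0.0, 0.0, 1.0)) < 0.4) {\n"
    @"        discard;\n"
    @"    }\n"
    @"    gl_FragColor = color;\n"
    @"}\n";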

Related

Can I use AVAssetWriter in place of AVAssetExportSession?

I want to crop a video given a length and width and x and y coordinates, and it seems that isn't possible with AVMutableComposition, so I am planning to use AVAssetWriter to crop the video using aspect-fill scaling in its video settings.
But my question is: can we use AVAssetWriter as a replacement for AVAssetExportSession?
If yes, how do I initialise the AVAssetWriterInput with an AVAsset object, the way we do with AVAssetExportSession, like this:
[[AVAssetExportSession alloc] initWithAsset:videoAsset presetName:AVAssetExportPresetHighestQuality];
It's possible that what you are looking for is AVMutableComposition's naturalSize property, which, from what the documentation says, allows scaling (not cropping) of the video to the desired size.
https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVMutableComposition_Class/Reference/Reference.html#//apple_ref/occ/instp/AVMutableComposition/naturalSize
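For the actual AVAssetWriter question: you don't initialise an AVAssetWriterInput with an AVAsset. The usual pattern for replacing AVAssetExportSession is to pull samples out of the asset with an AVAssetReader and append them to the writer. Here is a minimal sketch, assuming a videoAsset and a writable outputURL exist and ignoring the audio track; the 640x640 size and aspect-fill mode echo the cropping idea in the question and are not tested values:
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAssetTrack *videoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

// reader: decode the source asset to BGRA pixel buffers
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:videoAsset error:&error];
AVAssetReaderTrackOutput *readerOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
        outputSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
[reader addOutput:readerOutput];

// writer: re-encode to H.264, letting the scaling mode do the aspect-fill "crop"
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
AVAssetWriterInput *writerInput = [AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
                   outputSettings:@{ AVVideoCodecKey       : AVVideoCodecH264,
                                     AVVideoWidthKey       : @640,
                                     AVVideoHeightKey      : @640,
                                     AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill }];
[writer addInput:writerInput];

[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
[reader startReading];

// pump samples from the reader into the writer
dispatch_queue_t transcodeQueue = dispatch_queue_create("transcode", NULL);
[writerInput requestMediaDataWhenReadyOnQueue:transcodeQueue usingBlock:^{
    while ([writerInput isReadyForMoreMediaData]) {
        CMSampleBufferRef buffer = [readerOutput copyNextSampleBuffer];
        if (buffer) {
            [writerInput appendSampleBuffer:buffer];
            CFRelease(buffer);
        } else {
            [writerInput markAsFinished];
            [writer finishWriting];   // use finishWritingWithCompletionHandler: on iOS 6+
            break;
        }
    }
}];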

iPhone 4S is unable to handle full resolution from AVCaptureSessionPresetPhoto

I've been developing my project on the iPhone 4S and the iPhone 5. In my project, after the picture is taken, I crop it and then resize the image to apply photo filters. The iPhone 5 seems to handle this very well; however, on the iPhone 4S, it seems to crash at different points during the picture-taking process. I checked to see if there were any memory leaks that I may have missed. Here is the code below:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
[captureSession stopRunning];
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *newImage = [[UIImage alloc] initWithData:imageData];
After this, I automatically detect the orientation of the picture taken and rotate the picture so that it is always upright.
Then I crop the image into a square using NYXImagesKit:
float filteringSquareRatio = 307.0/320.0;
UIImage *cropped = [newImage cropToSize:CGSizeMake(filteringSquareRatio * newImage.size.width, filteringSquareRatio * newImage.size.width) usingMode:NYXCropModeCenter];
Lastly, I resize the image using MGImageUtilities:
UIImage *resized = [cropped imageScaledToFitSize:CGSizeMake(320, 320)];
Is there a better way to do this? I'm currently using AVCaptureSessionPresetPhoto because I would like to save the original high-resolution photo on the device and send the cropped and resized version to the server. I don't want to use any of the video presets because the camera is zoomed in with those. What could be causing the crashes?
If you weren't able to find a solution to this (I see this problem too), you could perhaps detect the 4S and use the AVCaptureSessionPresetHigh preset instead, which has a lower resolution.
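A rough sketch of that fallback, plus an autorelease pool around the post-capture work so intermediate images are freed early. The 512 MB threshold is a heuristic standing in for "4S-class hardware", not a precise device check, and captureSession is assumed to be the existing session:
// fall back to a lower-resolution preset on low-memory hardware (the 4S has 512 MB of RAM)
if ([[NSProcessInfo processInfo] physicalMemory] <= 512ull * 1024 * 1024 &&
    [captureSession canSetSessionPreset:AVCaptureSessionPresetHigh]) {
    captureSession.sessionPreset = AVCaptureSessionPresetHigh;   // lower resolution than the Photo preset
}

// later, inside the still-image completion handler:
@autoreleasepool {
    UIImage *cropped = [newImage cropToSize:CGSizeMake(filteringSquareRatio * newImage.size.width,
                                                       filteringSquareRatio * newImage.size.width)
                                  usingMode:NYXCropModeCenter];
    UIImage *resized = [cropped imageScaledToFitSize:CGSizeMake(320, 320)];
    // hand `resized` off (e.g. queue the upload) before the pool drains
}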

YouTube Video Feed Thumbnails (without black bars)?

I have managed, using the YouTube API, to fetch thumbnails for my list of videos; however, they have black bars at the top and bottom of the UIImage I get. How can I fetch a thumbnail without these bars, and better still, a higher-quality thumbnail?
Here is the code I use currently:
GDataEntryBase *entry = [[feed entries] objectAtIndex:i];
NSArray *thumbnails = [[(GDataEntryYouTubeVideo *)entry mediaGroup] mediaThumbnails];
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:[[thumbnails objectAtIndex:0] URLString]]];
UIImage *thumbnail = [UIImage imageWithData:data];
It's also worth noting that all thumbnails are currently in a 4:3 aspect ratio, for historical reasons. If the underlying video is 16:9 and you plan on using a 16:9 player, then it makes sense to position the thumbnail so that the top and bottom black bars are hidden. That's independent of whether you use a lower-resolution or higher-resolution thumbnail.
Well, this answer gave me the hint, so I went ahead and took a guess:
How do I get a YouTube video thumbnail from the YouTube API?
It turns out that if I just change the index of my thumbnail object from 0 to 1, I get a higher-quality thumbnail. Magic, easy.
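Putting both points together, a small sketch, under the assumption that later entries in mediaThumbnails are the larger ones (as the answer above suggests) and that the image is still 4:3 so the centre 16:9 band needs cropping out:
NSArray *thumbnails = [[(GDataEntryYouTubeVideo *)entry mediaGroup] mediaThumbnails];
GDataMediaThumbnail *best = [thumbnails lastObject];   // assumed to be the highest-resolution entry
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:[best URLString]]];
UIImage *thumbnail = [UIImage imageWithData:data];

// crop away the letterbox bars by keeping only the centre 16:9 band of the 4:3 image
CGFloat width  = thumbnail.size.width;
CGFloat height = width * 9.0f / 16.0f;
CGRect cropRect = CGRectMake(0.0f, (thumbnail.size.height - height) / 2.0f, width, height);
CGImageRef croppedRef = CGImageCreateWithImageInRect(thumbnail.CGImage, cropRect);
UIImage *widescreenThumbnail = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);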

GPUImage Video with transparency over UIView

I am working on an iOS project that uses AV-Out to show contents in a 1280x720 window on a second screen.
I have a MPMoviePlayerController's view as background and on top of that different other elements like UIImages and UILabels.
The background movie plays in a loop.
Now I want to overlay the whole view including all visible elements with another fullscreen animation that has transparency so that only parts of the underlying view are visible.
I first tried a PNG animation with a UIImageView.
I was surprised to find that this actually works on the iPhone 5, but of course the PNGs are so big that it uses way too much RAM and crashes on everything below the iPhone 4S.
So I need another way.
I figured out how to play a second movie at the same time using AVFoundation.
So far, so good. Now I can play the overlay video, but of course it is not transparent yet.
I also learned that with the GPUImage library I can use GPUImageChromaKeyBlendFilter to filter a color out of a video to make it transparent and then combine it with another video.
What I don't understand yet is the best way to implement this in my case to get the result that I want.
Can I use the whole view hierarchy below the top video as the first input for the GPUImageChromaKeyBlendFilter and a greenscreen-style video as the second input, and show the result live in 720p? How would I do that?
Or would it be better to use GPUImageChromaKeyFilter and just filter the greenscreen-style video, and play it in a view above all other views? Would the background of this video be transparent then?
Thanks for your help!
You'll need to build a custom player using AVFoundation.framework and then use a video with an alpha channel. The AVFoundation framework allows much more robust handling of video without many of the limitations of the MPMedia framework. Building a custom player isn't as hard as people make it out to be. I've written a tutorial on it here: http://www.sdkboy.com/?p=66
Another way:
The SimpleVideoFileFilter example in the framework shows how to load a movie, filter it, and encode it back to disk. Modifying this to perform chroma keying gives the following:
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"sample" withExtension:@"m4v"];
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
filter = [[GPUImageChromaKeyBlendFilter alloc] init];
[filter setColorToReplaceRed:0.0 green:0.0 blue:1.0];
[filter setThresholdSensitivity:0.4];
[movieFile addTarget:filter];
UIImage *inputImage = [UIImage imageNamed:@"background.jpg"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture addTarget:filter];
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[filter addTarget:movieWriter];
[movieWriter startRecording];
[movieFile startProcessing];
[movieWriter setCompletionBlock:^{
    [filter removeTarget:movieWriter];
    [movieWriter finishRecording];
}];
The above code will load a movie from the application's resources called sample.m4v, feed it into a chroma key filter that is set to key off of pure blue with a sensitivity of 0.4, attach a background image to use for the chroma keying, and then send all that to a movie encoder which writes Movie.m4v in the application's /Documents directory.
You can adjust the threshold and specific blue tint to match your needs, as well as replace the input image with another movie or other source as needed. This process can also be applied to live video from the iOS device's camera, and you can display the results to the screen if you'd like.
On an iPhone 4 running iOS 5.0, chroma keying takes 1.8 ms (500+ FPS) for 640x480 frames of video, 65 ms for 720p frames (15 FPS). The newer A5-based devices are 6-10X faster than that for these operations, so they can handle 1080p video without breaking a sweat. I use iOS 5.0's fast texture caches for both frame uploads and retrievals, which accelerates the processing on that OS version over 4.x.
The one caution I have about this is that I haven't quite gotten the audio to record right in my movie encoding, but I'm working on that right now.
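For the live, on-screen variant asked about above (view hierarchy as one input, greenscreen video as the other, displayed at 720p), a rough sketch using GPUImage's GPUImageUIElement and GPUImageView might look like the following. Treat it as an outline: GPUImageUIElement re-renders the UIKit content on every frame, which has a cost worth profiling, and it generally won't capture the frames of an MPMoviePlayerController layer, so the background movie may need to come in as a GPUImageMovie instead.
overlayMovie = [[GPUImageMovie alloc] initWithURL:overlayURL];          // the greenscreen-style video
uiElement    = [[GPUImageUIElement alloc] initWithView:backgroundView]; // the view hierarchy to show through

chromaFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[chromaFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];             // key off green here
[chromaFilter setThresholdSensitivity:0.4];

[overlayMovie addTarget:chromaFilter];   // first input: the video to be keyed
[uiElement addTarget:chromaFilter];      // second input: what shows through the keyed areas

GPUImageView *outputView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 1280.0, 720.0)];
[chromaFilter addTarget:outputView];     // add outputView to the external screen's window

// re-render the UIKit content each time a new video frame passes through the filter
[chromaFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *filterOutput, CMTime frameTime) {
    [uiElement update];
}];

[overlayMovie startProcessing];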

How to get real time video stream from iphone camera and send it to server?

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames and sound, and how do I use FFmpeg to complete this task? If anyone has a tutorial about FFmpeg or any example, please share it here.
The way I'm doing it is to set up an AVCaptureSession with a video data output whose delegate callback is run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow:
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];
// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];
// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];
// go!
[captureSession startRunning];
Then the output device's delegate (here, self) has to implement the callback:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
// also in the 'mediaSpecific' dict of the sampleBuffer
NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
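The answer above doesn't show how the bytes actually come out of the sample buffer. Here is a minimal sketch, assuming the AVCaptureVideoDataOutput was configured with a packed pixel format such as kCVPixelFormatType_32BGRA (planar formats need CVPixelBufferGetBaseAddressOfPlane instead):
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress  = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);
NSData *frameData  = [NSData dataWithBytes:baseAddress length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
// hand frameData to your network layer or encoder here; the next answer explains why
// pushing raw frames like this over the network rarely keeps up, and why encoding is needed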
Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.
1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
2) Write your own parser for the H.264/AAC output (very hard).
3) Record and process in chunks (this will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions); a rough sketch of this option follows below.
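A rough illustration of option 3, using AVCaptureMovieFileOutput to record short chunks and upload each finished file. The two-second chunk length and the uploadFile: call are placeholders, and as noted above there will be a small gap between chunks:
#import <AVFoundation/AVFoundation.h>

@interface ChunkRecorder : NSObject <AVCaptureFileOutputRecordingDelegate>
@property (nonatomic, retain) AVCaptureMovieFileOutput *movieOutput; // already added to your capture session
- (void)startNextChunk;
@end

@implementation ChunkRecorder
@synthesize movieOutput;

- (void)startNextChunk {
    NSString *name = [NSString stringWithFormat:@"chunk-%.0f.mov", [NSDate timeIntervalSinceReferenceDate]];
    NSURL *chunkURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:name]];
    [self.movieOutput startRecordingToOutputFileURL:chunkURL recordingDelegate:self];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self.movieOutput stopRecording];   // triggers the delegate callback below
    });
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    // [self uploadFile:outputFileURL];     // hypothetical uploader: POST the finished chunk to your server
    [self startNextChunk];                  // immediately begin recording the next chunk
}
@end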
There is a long and a short story to it.
This is the short one:
Go look at https://github.com/OpenWatch/H264-RTSP-Server-iOS
This is a starting point.
You can get it and see how it extracts the frames. It's a small and simple project.
Then you can look at Kickflip, which has a specific callback, "encodedFrame"; it is called once an encoded frame arrives, and from that point you can do what you want with it, for example send it via a WebSocket. There is also quite a lot of fairly difficult code out there for reading MPEG atoms.
Try capturing video using the AVFoundation framework and upload it to your server with HTTP streaming.
Also check out another Stack Overflow post on this; the relevant answer is quoted below.
You most likely already know....
1) How to get compressed frames and audio from iPhone's camera?
You cannot do this. The AVFoundation API has prevented this from every angle. I even tried named pipes and some other sneaky Unix foo. No such luck. You have no choice but to write it to a file. In your linked post a user suggests setting up the callback to deliver encoded frames. As far as I am aware, this is not possible for H.264 streams. The capture delegate will deliver images encoded in a specific pixel format. It is the movie writers and AVAssetWriter that do the encoding.
2) Is encoding uncompressed frames with FFmpeg's API fast enough for real-time streaming?
Yes, it is. However, you will have to use libx264, which gets you into GPL territory. That is not exactly compatible with the App Store.
I would suggest using AVFoundation and AVAssetWriter for efficiency reasons.
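To make that recommendation concrete, here is a minimal sketch of the AVAssetWriter route: configure a real-time H.264 writer input and append the sample buffers that arrive in captureOutput:didOutputSampleBuffer:fromConnection:. The outputURL, the 640x480 size, and the sessionStarted flag are assumptions; audio, error handling, and file rotation are omitted.
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:nil];
AVAssetWriterInput *videoInput = [AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
                   outputSettings:@{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @640,
                                     AVVideoHeightKey : @480 }];
videoInput.expectsMediaDataInRealTime = YES;   // the key setting for live capture
[writer addInput:videoInput];
[writer startWriting];

// then, inside captureOutput:didOutputSampleBuffer:fromConnection: (writer, videoInput and
// sessionStarted would need to be instance variables):
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if (!sessionStarted) {
    [writer startSessionAtSourceTime:timestamp];
    sessionStarted = YES;
}
if (videoInput.readyForMoreMediaData) {
    [videoInput appendSampleBuffer:sampleBuffer];   // the writer input performs the H.264 encoding
}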