Upload live streaming video from iPhone like Ustream or Qik - iphone

How can I live stream video from an iPhone to a server, the way Ustream or Qik do? I know there's something called HTTP Live Streaming from Apple, but most resources I've found only talk about streaming video from a server to the iPhone.
Is Apple's HTTP Live Streaming something I should use, or something else? Thanks.

There isn't a built-in way to do this, as far as I know. As you say, HTTP Live Streaming is for downloads to the iPhone.
The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];
// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];
// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];
// go!
[captureSession startRunning];
Then the output device's delegate (here, self) has to implement the callback:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
// also in the 'mediaSpecific' dict of the sampleBuffer
NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
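The snippet above only logs the frame size; what "sends each frame over the network" looks like depends entirely on your server. Purely as an illustration (the endpoint URL is a placeholder, and, as the update below explains, uploading individual frames like this is too heavy in practice), you could turn a frame into JPEG data and POST it:
-(void) sendFrameFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Illustrative only: convert one captured frame to JPEG and POST it.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    NSData *jpegData = UIImageJPEGRepresentation([UIImage imageWithCGImage:cgImage], 0.5);
    CGImageRelease(cgImage);

    // http://example.com/frame is a placeholder for your own endpoint.
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"http://example.com/frame"]];
    request.HTTPMethod = @"POST";
    [[[NSURLSession sharedSession] uploadTaskWithRequest:request fromData:jpegData] resume];
}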
EDIT/UPDATE
Several people have asked how to do this without sending the frames to the server one by one. The answer is complex...
Basically, in the didOutputSampleBuffer callback above, you append the sample buffers to an AVAssetWriter. I actually had three asset writers active at a time -- past, present, and future -- managed on different threads.
The past writer is in the process of closing the movie file and uploading it. The current writer is receiving the sample buffers from the camera. The future writer is in the process of opening a new movie file and preparing it for data. Every 5 seconds, I set past=current; current=future and restart the sequence.
This uploads the video to the server in 5-second chunks. You can stitch the chunks together with ffmpeg if you want, or repackage them into MPEG-2 transport streams for HTTP Live Streaming. The video data itself is already H.264-encoded by the asset writer, so this only changes the container format, not the encoding.
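To make that concrete, here is a rough sketch of the rotation, not the exact original code: it is simplified to one active writer plus the one being finalized, the file naming, the 640x480 settings, and the uploadFileAtURL: method are placeholders, and error handling is omitted.
#import <AVFoundation/AVFoundation.h>

// Rough sketch of the 5-second chunk rotation described above.
// Assumes -appendSampleBuffer: and -rotate are called on the same queue.
@interface ChunkedUploader : NSObject
@property (nonatomic, strong) AVAssetWriter *currentWriter;
@property (nonatomic, strong) AVAssetWriterInput *currentInput;
@end

@implementation ChunkedUploader

- (void)startNewChunk
{
    NSString *name = [NSString stringWithFormat:@"chunk-%.0f.mp4", CFAbsoluteTimeGetCurrent()];
    NSURL *url = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:name]];

    NSError *error = nil;
    self.currentWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&error];

    // The asset writer does the H.264 encoding; 640x480 is just an example size.
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @640,
                                AVVideoHeightKey : @480 };
    self.currentInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                           outputSettings:settings];
    self.currentInput.expectsMediaDataInRealTime = YES;
    [self.currentWriter addInput:self.currentInput];
}

// Called from captureOutput:didOutputSampleBuffer:fromConnection:
- (void)appendSampleBuffer:(CMSampleBufferRef)buffer
{
    if (self.currentWriter.status == AVAssetWriterStatusUnknown) {
        [self.currentWriter startWriting];
        [self.currentWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(buffer)];
    }
    if (self.currentInput.isReadyForMoreMediaData) {
        [self.currentInput appendSampleBuffer:buffer];
    }
}

// Called by a repeating 5-second timer: finish the old chunk, upload it, start a new one.
- (void)rotate
{
    AVAssetWriter *finishedWriter = self.currentWriter;
    AVAssetWriterInput *finishedInput = self.currentInput;

    [self startNewChunk];

    [finishedInput markAsFinished];
    [finishedWriter finishWritingWithCompletionHandler:^{
        [self uploadFileAtURL:finishedWriter.outputURL];   // placeholder upload step
    }];
}

- (void)uploadFileAtURL:(NSURL *)url
{
    // POST the finished chunk to the server however your server expects it.
}

@end
Each finished chunk is a self-contained .mp4, so the server can concatenate or remux them as described above.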

I have found one library that will help you with this:
HaishinKit Streaming Library
The library gives you the option of streaming via RTMP or HLS.
Just follow the steps given for the library and read all of the instructions carefully. Don't run the example code from the library directly, as it has some errors; instead, pull the required classes and the pod into your own demo app.
I have just done this myself; with it you can capture the screen, camera, and audio.

I'm not sure you can do that with HTTP Live Streaming. HTTP Live Streaming segments the video into chunks of roughly 10 seconds and creates a playlist of those segments.
So if you want the iPhone to be the server side of an HTTP Live Stream, you will have to figure out a way to segment the video file and create the playlist.
How to do it is beyond my knowledge. Sorry.
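For reference, the playlist itself is simple; it is producing the segments on the phone that is the hard part. A minimal, purely illustrative media playlist (not the asker's) looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
For a live stream you would omit #EXT-X-ENDLIST and keep appending new segments to the playlist as they are produced.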

Related

How to get a real-time video stream from the iPhone camera and send it to a server?

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to the server, multiplexing the frames with the sound, and how do I use ffmpeg to accomplish this? If anyone has a tutorial about ffmpeg or an example, please share it here.
The way I'm doing it is to implement an AVCaptureSession with a sample-buffer delegate whose callback runs on every frame and sends it over the network to a server with a custom setup to receive it; see the capture-flow link and the code in my answer to the first question above.
Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.
1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
2) Write your own parser for the H.264/AAC output (very hard).
3) Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
There is a long and a short story to it.
This is the short one:
Go look at https://github.com/OpenWatch/H264-RTSP-Server-iOS
It is a starting point: you can get it and see how it extracts the frames. It is a small and simple project.
Then you can look at Kickflip, which has a specific callback, "encodedFrame", that is called once an encoded frame arrives; from that point you can do what you want with it, for example send it via a websocket. There is also a bunch of very hard-to-read code available for parsing MPEG atoms.
Try capturing video using the AVFoundation framework and upload it to your server with HTTP streaming.
Also check out another Stack Overflow post, quoted below:
You most likely already know....
1) How to get compressed frames and audio from iPhone's camera?
You can not do this. The AVFoundation API has prevented this from every angle. I even tried named pipes, and some other sneaky unix foo. No such luck. You have no choice but to write it to file. In your linked post a user suggests setting up the callback to deliver encoded frames. As far as I am aware this is not possible for H.264 streams. The capture delegate will deliver images encoded in a specific pixel format. It is the Movie Writers and AVAssetWriter that do the encoding.
2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?
Yes it is. However, you will have to use libx264, which gets you into GPL territory. That is not exactly compatible with the App Store.
I would suggest using AVFoundation and AVAssetWriter for efficiency reasons.
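To illustrate the point about pixel formats (this is just an example configuration, not something from the quoted post): the video data output hands the delegate uncompressed frames in whatever pixel format you ask for, and the compression happens later in AVAssetWriter.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// You get raw frames in the requested pixel format (32BGRA here), not H.264.
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey :
                              @(kCVPixelFormatType_32BGRA) };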

Using different resolution presets with AVFoundation

I'm trying to use AVFoundation to have three recording modes: Audio, Video and Photo. Audio and Video work just fine, but the problem is, if I set the session preset to AVCaptureSessionPreset352x288, the still pictures are also saved at that resolution. If I change my session preset to AVCaptureSessionPresetPhoto, then the photos look great but the video stops working because that isn't a supported preset for video. I've tried creating multiple sessions, reassigning the session preset, etc. but nothing seems to work. Anyone have a way to make this work with the video at a low resolution and still images at full resolution?
Before taking the picture, set the session preset to a new value:
// captureSession is your capture session object
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
[captureSession commitConfiguration];
then call your capture image handler
captureStillImageAsynchronouslyFromConnection: completionHandler:
then change back to low res (= prevPreset)
[captureSession beginConfiguration];
captureSession.sessionPreset = prevPreset;
[captureSession commitConfiguration];
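Pulled together into one method, the sequence might look like this. This is only a sketch: it assumes self.captureSession and self.stillImageOutput already exist, and it uses AVCaptureSessionPresetPhoto for the still (since the question wants full-resolution photos) where the steps above used AVCaptureSessionPresetHigh.
- (void)captureFullResolutionPhoto
{
    NSString *prevPreset = self.captureSession.sessionPreset;

    // Switch to the photo preset just for the still image.
    [self.captureSession beginConfiguration];
    self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
    [self.captureSession commitConfiguration];

    AVCaptureConnection *connection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
    {
        NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        // ... save or process the JPEG data ...

        // Restore the lower-resolution preset so video keeps working.
        [self.captureSession beginConfiguration];
        self.captureSession.sessionPreset = prevPreset;
        [self.captureSession commitConfiguration];
    }];
}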

During http live streaming audio continues but video cuts out after 30 seconds

I have edited my videos in Final Cut Pro and used its export to HTTP Live Streaming, which produces audio, cellular low and high, Wi-Fi low and high video variants, plus the .m3u8 and index files. I have put all the files onto my web server and am using this to call the videos:
-(IBAction)introVideo:(id)sender
{
NSLog(@"intro button pressed");
NSString *url = @"http://www.andalee.com/iPhoneVideos/intro/Intro.m3u8";
MPMoviePlayerViewController* moviePlayer = [[MPMoviePlayerViewController alloc] initWithContentURL:[NSURL URLWithString:url]];
[self presentMoviePlayerViewControllerAnimated:moviePlayer];
}
(Side note: how should this be released?)
Here is the Index.m3u8
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=225416,CODECS="mp4a.40.2, avc1.42e015"
Intro%20-%20Cellular%20Low.segments/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=480386,CODECS="mp4a.40.2, avc1.42e015"
Intro%20-%20Cellular%20High.segments/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=751434,CODECS="mp4a.40.2, avc1.42e01e"
Intro%20-%20Wi-Fi%20Low.segments/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1250210,CODECS="mp4a.40.2, avc1.4d401e"
Intro%20-%20Wi-Fi%20High.segments/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2545049,CODECS="mp4a.40.2, avc1.4d401e"
Intro%20-%20Broadband%20Low.segments/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5056100,CODECS="mp4a.40.2, avc1.4d401f"
Intro%20-%20Broadband%20High.segments/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=33290,CODECS="mp4a.40.2"
Intro%20-%20Audio%20for%20HTTP%20Live%20Streaming.segments/prog_index.m3u8
When I test my app I initially get video and sound, but after 30 seconds I lose video while the audio continues to play. Any ideas what would be causing this?
This could simply be caused by a low-bandwidth condition, which will trigger a bitrate change (in this case to the audio-only version). If you try it in the Simulator with a local server it might work correctly.
Most likely the file used next after the intro has the wrong codec, or its path is not right. Make sure all of the paths in Intro.m3u8 are correct and reachable from outside.

iOS Video: More than 4 simultaneous AVAssetReaders possible?

I would like to render multiple H.264 mp4 videos on multiple views at the same time. The goal is to read about 8 short videos, each at a size of 100x100 pixels, and display their content at multiple positions on the screen simultaneously.
Imagine 24 squares on the screen, each showing one video out of pool of 8 videos.
MoviePlayer doesn't work, since it only shows one fullscreen video. An AVPlayer with multiple AVPlayerLayers is limited, because only the most recently created layer will show its content on screen (according to the documentation and my testing).
So, I wrote a short video class and created an instance for every .mp4 file in my package, using AVAssetReader to read its content. On update, each video frame is retrieved, converted to a UIImage, and displayed according to the video's framerate. Furthermore, these images are cached for fast access when looping.
- (id) initWithAsset:(AVURLAsset*)asset withTrack:(AVAssetTrack*)track
{
self = [super init];
if (self)
{
NSDictionary* settings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (NSString*)kCVPixelBufferPixelFormatTypeKey, nil];
mOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:settings];
mReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
[mReader addOutput:mOutput];
BOOL status = [mReader startReading];
}
return self;
}
- (void) update:(double)elapsed
{
CMSampleBufferRef buffer = [mOutput copyNextSampleBuffer];
if (buffer)
{
UIImage* image = [self imageFromSampleBuffer:buffer];
CFRelease(buffer); // copyNextSampleBuffer follows the Create Rule, so release the buffer
}
[...]
}
Actually this works pretty well, but only for 4 videos. The fifth one never shows up. First I thought of memory issues, but I tested it on the following devices:
iPhone 3GS
iPhone 4
iPad
iPad 2
I had the same behaviour on each device: 4 videos playing in a smooth loop, no differences.
If it had been a memory issue, I would have expected at least the iPad 2 to show 5 or 6 videos (due to its better hardware), or the 3GS to show only 1, or a crash somewhere.
The simulator shows all videos, though.
Debugging on the device shows that
BOOL status = [mReader startReading];
returns NO for videos 5, 6, 7 and 8.
So, is there some kind of hardware setting (or restriction) that doesn't allow more than 4 simultaneous AVAssetReaders? I can't really explain this behaviour, and I don't think all of these devices have the exact same amount of video memory.
Yes, iOS has an upper limit on the number of videos that can be decoded at one time. While your approach is good, I don't know of any way to work around this limit on how many H.264 decoders can be active at once. If you are interested, have a look at my solution to this problem: an Xcode project called Fireworks. Basically, the demo decodes a bunch of alpha-channel videos to disk, and then each one is played by mapping a portion of the video file into memory. This approach makes it possible to decode more than 4 movies at the same time without using up all the system memory and without running into the hard limit on the number of H.264 decoder objects.
Have you tried creating separate AVPlayerItems based on the same AVAsset for each AVPlayerLayer?
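For what it's worth, here is a sketch of that suggestion; movieURL and tileViews are placeholders for your own asset URL and 100x100 views, and I can't say whether it gets around the decoder limit, but it does avoid the restriction that only one AVPlayerLayer per AVPlayer shows content.
// movieURL and tileViews are placeholders for your own asset URL and views.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
NSMutableArray *players = [NSMutableArray array];   // keep strong references so the players stay alive

for (UIView *tile in tileViews) {
    // One AVPlayerItem + AVPlayer + AVPlayerLayer per square, all sharing the same asset.
    AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
    AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:player];
    layer.frame = tile.bounds;
    [tile.layer addSublayer:layer];
    [players addObject:player];
    [player play];
}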
Here's my latest iteration of a perfectly smooth-scrolling collection view with real-time video previews (up to 16 at a time):
https://youtu.be/7QlaO7WxjGg
It even uses a cover flow custom layout and "reflection" view that mirrors the video preview perfectly. The source code is here:
http://www.mediafire.com/download/ivecygnlhqxwynr/VideoWallCollectionView.zip

Streaming with an AVAudioplayer

In order to load a sound file, I have the following code in my application :
- (id) init:(NSString*)a_filename ext:(NSString*)a_ext
{
...
NSString *t_soundFilePath = [CFileLoader getPathForResource:filename WithExtension:ext];
NSURL *t_fileURL = [[[NSURL alloc] initFileURLWithPath: t_soundFilePath] autorelease];
player = [[AVAudioPlayer alloc] initWithContentsOfURL: t_fileURL error: nil];
[player prepareToPlay];
...
}
All the sounds that I load are in my bundle, so I would like to know whether initWithContentsOfURL: streams the sound file or whether the whole file is cached in memory.
I have a lot of sprites in my app, so I want to minimize the memory used for sounds.
Thanks for your help.
I used AVPlayer (not AVAudioPlayer) to stream background audio.
Sounds played using AVAudioPlayer are streamed in real time from storage. The drawback is that you can occasionally notice a small lag when a sound is due to start. If you use OpenAL, the sounds are loaded entirely into memory.
There's a good discussion of this sort of thing in the ObjectAL documentation - a library that is meant to simplify the playing of sounds and effects on the iPhone.
If you want to stream audio in your app, you can use the following player instead of AVAudioPlayer:
https://github.com/mattgallagher/AudioStreamer
With this player you don't need to put sound files in your bundle; you can put them on a server and use the URL to stream the audio.
Hope this will help you.
" The AVAudioPlayer class does not provide support for streaming audio
based on HTTP URLs. The URL used with initWithContentsOfURL: must be a file URL (file://). That is, a local path."
https://developer.apple.com/library/ios/qa/qa1634/_index.html
But you can work around this: for example, write the data asynchronously to a local file and init AVAudioPlayer with that file URL; it will work.
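A sketch of that workaround in its simplest form (download fully, then play): the URL is a placeholder, and you need to keep a strong reference to the player, for example in a property, or playback will stop when the player is deallocated.
NSURL *remoteURL = [NSURL URLWithString:@"http://example.com/sound.mp3"];

NSURLSessionDownloadTask *task = [[NSURLSession sharedSession]
    downloadTaskWithURL:remoteURL
      completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error)
{
    if (error) return;

    // Move the temporary download somewhere stable before the handler returns.
    NSURL *localURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"sound.mp3"]];
    [[NSFileManager defaultManager] removeItemAtURL:localURL error:NULL];
    [[NSFileManager defaultManager] moveItemAtURL:location toURL:localURL error:NULL];

    // AVAudioPlayer is happy with a file:// URL.
    NSError *playerError = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:localURL error:&playerError];
    [player prepareToPlay];
    [player play];   // assign `player` to a property to keep it alive
}];
[task resume];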