How to fetch h.264 bytes from iPhone Camera? [duplicate] - iphone

This question already has answers here:
How can we get H.264 encoded video stream from iPhone Camera?
I need to publish a live video stream from the iPhone camera to an RTMP server (Wowza server). The video stream must be in H.264 format. I know AVFoundation stores video to a file with H.264 compression, but I don't need to store the video to a file. I just want to capture it and send it to the server. I am using the following delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (connection == videoConnection) {
        // I want something like this
        NSData *h264VideoData = [self h264Data:sampleBuffer];
    }
}
I don't need to send audio to the server, I just want to send video.

Update for 2017:
You can now stream video and audio by using the VideoToolbox API.
Read the documentation here: VTCompressionSession
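As a minimal sketch (illustrative only, with error handling omitted, and assuming _compressionSession is a VTCompressionSessionRef instance variable): create a compression session, feed it the pixel buffers from your capture callback, and receive H.264-encoded sample buffers in the output callback.

#import <VideoToolbox/VideoToolbox.h>

// Output callback: called with an H.264-encoded CMSampleBuffer for each frame.
static void compressionOutputCallback(void *refcon, void *sourceFrameRefCon,
                                      OSStatus status, VTEncodeInfoFlags infoFlags,
                                      CMSampleBufferRef sampleBuffer) {
    if (status != noErr || sampleBuffer == NULL) return;
    // Extract the parameter sets (SPS/PPS) and NAL units from sampleBuffer here and send them.
}

- (void)setupEncoderWithWidth:(int32_t)width height:(int32_t)height {
    VTCompressionSessionCreate(kCFAllocatorDefault, width, height,
                               kCMVideoCodecType_H264, NULL, NULL, NULL,
                               compressionOutputCallback, (__bridge void *)self,
                               &_compressionSession);
    VTSessionSetProperty(_compressionSession, kVTCompressionPropertyKey_RealTime,
                         kCFBooleanTrue);
    VTCompressionSessionPrepareToEncodeFrames(_compressionSession);
}

// Call this from captureOutput:didOutputSampleBuffer:fromConnection:
- (void)encodeSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    VTCompressionSessionEncodeFrame(_compressionSession, pixelBuffer, pts,
                                    kCMTimeInvalid, NULL, NULL, NULL);
}

Each encoded CMSampleBuffer carries the parameter sets in its format description and the NAL units in its data buffer, which you can then package for RTMP or whatever transport you use.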
Original answer (from 2013):
That is currently not possible; you'll have to write to a file or compress the video stream with a software encoder (you won't get HD, and battery life will be very poor). Every way of getting at the hardware encoder involves writing to disk. I think this is because of memory constraints on the devices.
Methods to get hardware accelerated h264 compression:
AVAssetWriter
AVCaptureMovieFileOutput
As you can see, both write to a file. Writing to a pipe does not work, because the encoder updates header information after a frame or GOP has been fully written. So you'd better not touch the file while the encoder is writing to it, since it rewrites the header information at arbitrary times. Without this header information the video file will not be playable (it updates the size field, so the first header written says the file is 0 bytes).
You can, however, record 5 seconds and then switch the output file, transmit the now "old" 5-second snippet and delete it afterwards. You'll have to demux the *.mov or *.mp4 container, though, to get at the H.264 video data to send.
If you need audio: when you switch files you'll lose some audio samples, so you'll have to roll your own buffer management for that (or just record the audio separately).
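A rough sketch of the chunk-recording approach with AVAssetWriter (names are illustrative, error handling is omitted, and _writer/_writerInput are assumed instance variables): configure a writer for H.264 into a temporary .mp4, append the sample buffers from the capture callback, and finish the file every few seconds before demuxing/uploading it and starting the next chunk.

#import <AVFoundation/AVFoundation.h>

- (void)startNewChunkAtURL:(NSURL *)chunkURL width:(int)width height:(int)height {
    _writer = [[AVAssetWriter alloc] initWithURL:chunkURL
                                        fileType:AVFileTypeMPEG4
                                           error:nil];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(width),
                                AVVideoHeightKey : @(height) };
    _writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                       outputSettings:settings];
    _writerInput.expectsMediaDataInRealTime = YES;
    [_writer addInput:_writerInput];
}

// Called from captureOutput:didOutputSampleBuffer:fromConnection:
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (_writer.status == AVAssetWriterStatusUnknown) {
        [_writer startWriting];
        [_writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }
    if (_writer.status == AVAssetWriterStatusWriting && _writerInput.isReadyForMoreMediaData) {
        [_writerInput appendSampleBuffer:sampleBuffer];
    }
}

// Every ~5 seconds: finish this chunk, start a new one, then upload/demux the finished file.
- (void)finishChunkWithCompletion:(void (^)(void))completion {
    [_writerInput markAsFinished];
    [_writer finishWritingWithCompletionHandler:completion];
}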

Related

Stream images with h.264 encoding [closed]

I am extracting frames from the camera and doing processing on each extracted frame. Once the processing is done, I want to stream these frames with H.264 encoding to another system. How can I do that?
You will generally want to put the H.264 into a video container like MP4 or AVI.
For example, the wrapping from raw frames to a streaming protocol for online video might be:
raw pixels bitmap
raw pixels encoded (e.g. h.264 encoded)
encoded video stream packaged into container with audio streams, subtitles etc (e.g. mp4 container)
container broken into 'chunks' or segments for streaming (on iOS using HLS streaming format).
Another common approach is for a camera to stream content to a dedicated streaming server, and for the server to then provide streams to end devices using a streaming protocol like HLS or MPEG-DASH. An example (which, at the time of writing, appears to be kept updated) showing a stream from a camera to a server using RTSP, and then HLS or MPEG-DASH from the server, is here:
https://www.wowza.com/docs/how-to-re-stream-video-from-an-ip-camera-rtsp-rtp-re-streaming
If your use case is simple, you will possibly not want to use a segmented ABR streaming protocol like HLS or MPEG-DASH, so you could just stream the MP4 file from a regular HTTP server.
One way to approach this that will allow you to build on others' examples is to use OpenCV in Python - you can see an example in this question and its answers of writing video frames to an AVI or MP4 container: Writing an mp4 video using python opencv
Once you have your MP4 file created, you can place it in a folder and use a regular HTTP server to make it available for users to download or stream.
Note that if you want to stream the frames as a live stream, i.e. as you are creating them one by one, then this is trickier, as you won't simply have a complete MP4 file to stream. If you do want to do this, then leveraging an existing implementation would be a good place to start - this one is an example of a point-to-point WebSocket-based live stream, and is open source and Python based:
https://github.com/rena2damas/remote-opencv-streaming-live-video
If you want to stream data over a UDP socket, use the RTP protocol for streaming.
Please go through the RFC specification, RFC 6184.
Media pipeline for processing the camera data:
Camera raw data (RGB/YUV/NV12) -> H.264 encoder -> NAL units -> RTP packetization -> socket communication.
You can use the ffmpeg Python interface to achieve this goal.
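To make the packetization step concrete, here is a minimal sketch of the single NAL unit mode from RFC 6184 (the payload type, SSRC, and helper name are illustrative; FU-A fragmentation of large NAL units is not shown): a 12-byte RTP header from RFC 3550 followed by the NAL unit with its start code removed.

#import <Foundation/Foundation.h>

// Build one RTP packet carrying a single NAL unit (start code already stripped).
// 'marker' should be set on the last packet of an access unit, per RFC 6184.
static NSData *RTPPacketForNALUnit(NSData *nalUnit, uint16_t sequenceNumber,
                                   uint32_t timestamp90kHz, uint32_t ssrc, BOOL marker) {
    uint8_t header[12];
    header[0] = 0x80;                                  // V=2, no padding, no extension, CC=0
    header[1] = (marker ? 0x80 : 0x00) | 96;           // M bit + dynamic payload type 96
    header[2] = sequenceNumber >> 8;                   // sequence number (big-endian)
    header[3] = sequenceNumber & 0xFF;
    header[4] = (timestamp90kHz >> 24) & 0xFF;         // 90 kHz timestamp (big-endian)
    header[5] = (timestamp90kHz >> 16) & 0xFF;
    header[6] = (timestamp90kHz >> 8) & 0xFF;
    header[7] = timestamp90kHz & 0xFF;
    header[8]  = (ssrc >> 24) & 0xFF;                  // SSRC (big-endian)
    header[9]  = (ssrc >> 16) & 0xFF;
    header[10] = (ssrc >> 8) & 0xFF;
    header[11] = ssrc & 0xFF;

    NSMutableData *packet = [NSMutableData dataWithBytes:header length:sizeof(header)];
    [packet appendData:nalUnit];                       // payload is the raw NAL unit
    return packet;                                     // send this over the UDP socket
}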

How to play video while it is downloading using AVPro video in unity3D?

I want to play the video simultaneously while it is downloading via UnityWebRequest. Will AVPro Video support this? If so, please provide me some guidance, as I am new to Unity and AVPro Video. I am able to play a fully downloaded video through the FullscreenVideo.prefab in the AVPro demo. Any help will be much appreciated.
There are two main options you could use for displaying the video while it is still downloading.
Through livestream
You can stream a video to AVPro Video using the "absolute path or URL" option on the media player component, then linking this to a stream in RTSP, MPEG-DASH, HLS, or HTTP progressive streaming format. Depending on which platforms you are targeting, some of these options will work better than others.
A table of which format supports which platform can be found in the AVPro Video user manual that is included with AVPro Video, from page 12 onwards.
If you want to use streaming you also need to set the "internet access" option to "required" in the player settings, as a video cannot stream without internet access.
A video that is being streamed will automatically start/resume playing when enough video is buffered.
This does however require a constant internet connection which may not be ideal if you're targeting mobile devices, or unnecessary if you're planning to play videos in a loop.
HLS m3u8
HTTP Live Streaming (HLS) works by cutting the overall stream into shorter, manageable chunks of data. These chunks then get downloaded in sequence, regardless of how long the stream is. m3u8 is a playlist file format that keeps information on the location of multiple media files instead of an entire video; this can then be fed into an HLS player that plays the small media files in sequence, as dictated by the m3u8 file.
Using this method is useful if you're planning to play smaller videos on repeat, as the user will only have to download each chunk of the video once, and you can then store the chunks for later use.
You can also make these chunks of video as long or short as you want, and set a buffer of how many chunks you want to have pre-loaded. If, for example, you set the chunk size to 5 seconds with a buffer of 5 chunks, the only loading time you'll have is for the first 25 seconds of the video. Once these first 5 chunks are loaded, it will start playing the video and load the rest of the chunks in the background, without interrupting the video (given that your internet speed can handle it).
A downside is that you have to convert all your videos to m3u8 yourself; a tool such as FFmpeg can help with this, though.
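As a rough illustration of the format (the filenames and durations here are made up, not taken from a real project), a small VOD-style playlist produced by a tool like FFmpeg looks roughly like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:5.000,
segment0.ts
#EXTINF:5.000,
segment1.ts
#EXTINF:5.000,
segment2.ts
#EXT-X-ENDLIST

The player reads the playlist, then fetches and plays segment0.ts, segment1.ts, and so on in order.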
References
HLS
m3u8
AVPro documentation

how to stream high quality video and audio from iPhone to remote server through Internet

I am seeking the following three items, which I cannot find on Stack Overflow or anywhere:
sample code for AVFoundation capturing to file chunks (~10 seconds) that are ready for compression?
sample code for compressing the video and audio for transmission across the Internet?
ffmpeg?
sample code for HTTP Live Streaming sending files from iPhone to Internet server?
My goal is to use the iPhone as a high quality AV camcorder that streams to a remote server.
If the intervening data rate bogs down, files should buffer at the iPhone.
thanks.
You can use AVAssetWriter to encode an MP4 file of your desired length. The AV media will be encoded into the container as H.264/AAC. You could then simply upload this to a remote server. If you wanted, you could segment the video for HLS streaming, but keep in mind that HLS is designed as a server-to-client streaming protocol. There is no notion of push as far as I know. You would have to create a custom server to accept pushing of segmented video streams (which does not really make a lot of sense given the way HLS is designed; see the RFC draft). A better approach might be to simply upload the MP4(s) via a TCP socket and have your server segment the video for streaming to client viewers. This could easily be done with FFmpeg, either on the command line or via a custom program.
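For example, on the server side a single FFmpeg invocation along these lines (filenames here are illustrative) will segment an uploaded MP4 into HLS chunks plus a playlist that client viewers can stream:

ffmpeg -i upload.mp4 -codec copy -f hls -hls_time 6 -hls_playlist_type vod stream.m3u8

With -codec copy the existing H.264/AAC streams are repackaged without re-encoding, so the segmentation itself is cheap.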
I also wanted to add that if you try to stream 720p video over a cellular connection, your app will more than likely get rejected for excessive data use.
Capture video and audio using AVFoundation. You can specify the audio and video codecs (kCMVideoCodecType_H264 and kAudioFormatMPEG4AAC), frame sizes, and frame rates in the capture format description. That will give you compressed H.264 video and AAC audio.
Encapsulate this and transmit it to the server using an RTP library such as Live555 Media.

How to effectively transfer real-time video between two iOS devices (like FaceTime, Skype, Fring, Tango)

I know how to get frames from the iOS SDK.
(How to capture video frames from the camera as images using AV Foundation: http://developer.apple.com/library/ios/#qa/qa1702/_index.html)
It's pixel data, and I can convert it to JPEG.
The way I want to transfer the video is like this:
On iOS device A:
Get the pixels or JPEG from the callback
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Encode to H.264 using existing technology - ffmpeg
Encapsulate the video in a TS stream
Run an HTTP server, and wait for requests
On the other iOS device B:
Make an HTTP request to A (using HTTP simply, instead of RTP/RTSP)
So my question is: do I need to use ffmpeg to get the H.264 stream, or can I get it from an iOS API?
If I use ffmpeg to encode to H.264 (libx264), how do I do that? Is there any sample code or guideline?
I've read the post What's the best way of live streaming iphone camera to a media server?
It's a pretty good discussion, but I want to know the details.
The license for ffmpeg is incompatible with iOS applications distributed through the App Store.
If you want to transfer realtime video and have any kind of a usable frame rate, you won't want to use HTTP or TCP.
Although this doesn't directly answer your question about which video format to use, I would suggest looking into some third-party frameworks like TokBox or QuickBlox. There is a fantastic tutorial using Parse and OpenTok here:
http://www.iphonegamezone.net/ios-tutorial-create-iphone-video-chat-app-using-parse-and-opentok-tokbox/

Is there a "simple" way to play linear PCM audio on the iPhone?

I'm making a media playback app which gets given uncompressed linear PCM (über raw) audio from a third-party decoder, but I'm going crazy when it comes to just playing back the simplest audio format I can imagine.
The final app will get the PCM data progressively as the source file streams from a server, so I looked at Audio Queues initially (since I can seemingly just give it bytes on the go), but that turned out to be a mindfudge - especially since Apple's own Audio Queue sample code seems to go off on a magical field trip of OpenGL and needless encapsulation.
I ended up using an AVAudioPlayer during (non-streaming) prototyping; I currently create a WAVE header and add it to the beginning of the PCM data to get AVAudioPlayer to accept it (since it can't take raw PCM).
Obviously that's no use for streaming (WAVE sets the entire file length in the header, so data can't be added on the go).
Essentially: is there a way to just give iPhone OS some PCM bytes and have it play them?
You should revisit the AudioQueue code. Once you strip away all the guff you should have about 2 pages of code plus a callback in which you can supply the raw PCM. This callback is also synchronous with your main loop, so you don't even have to worry about locking.
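For what it's worth, a stripped-down sketch of that setup might look roughly like this (the PCM format, buffer size, and the way you pull bytes out of your own buffer are placeholders you'd adapt): describe the PCM format, create an output queue whose callback you feed raw bytes, prime a few buffers, and start the queue.

#import <AudioToolbox/AudioToolbox.h>

// Fill callback: copy the next raw PCM bytes into the queue's buffer and re-enqueue it.
static void AQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // Pull up to inBuffer->mAudioDataBytesCapacity bytes of PCM from your own buffer here,
    // then set mAudioDataByteSize to the number of bytes actually copied.
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity; // placeholder
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

- (void)startPCMPlayback {
    // 16-bit signed, stereo, 44.1 kHz interleaved linear PCM (adjust to match your decoder).
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    format.mChannelsPerFrame = 2;
    format.mBitsPerChannel   = 16;
    format.mBytesPerFrame    = format.mChannelsPerFrame * (format.mBitsPerChannel / 8);
    format.mFramesPerPacket  = 1;
    format.mBytesPerPacket   = format.mBytesPerFrame;

    AudioQueueRef queue;
    // Deliver callbacks on the current run loop, as suggested above.
    AudioQueueNewOutput(&format, AQOutputCallback, (__bridge void *)self,
                        CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &queue);

    // Prime a few buffers so the callback has something to keep refilling.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 16384, &buffer);
        AQOutputCallback((__bridge void *)self, queue, buffer);
    }
    AudioQueueStart(queue, NULL);
}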