Displaying a YUV-formatted video stream in iOS

I am writing an iOS application that gets live video streams from analog security cameras. I can get the video stream from our server application and decode it from its proprietary format on the phone. The decoder leaves me with raw YUV (Y'CrCb, technically) data. I'm not really sure of the best way (or even how) to display this data.
I've read that I should manually convert to RGB and display it in something like a UIImageView, but that seems fairly clunky when the stream could run at upwards of 30 fps.
I've also read about using OpenGL to upload the YUV data as a 2D texture and display that. Unfortunately, I have no idea where to even begin with this, and I'm not sure it's the direction I want to pursue.
So my question to all of you is: what's the best way to display this data on an iOS device? Secondly, if it requires something like OpenGL, could anyone suggest a good tutorial, book, code sample, or other learning resource so I can learn more about it?
Thanks in advance.

The best way really is to let the GPU do the job. I do know it's possible with a shader program, but frankly I don't speak OpenGL. This question might be of help, though.
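For what it's worth, the GPU work usually boils down to a small fragment shader: upload the Y, Cb, and Cr planes as three textures and apply the BT.601 matrix per pixel. A minimal sketch, with the shader kept as a string the way you'd embed it in an app; the texture names and the video-range coefficients are assumptions on my part, not from the question:

```swift
// Fragment shader for planar Y'CbCr (I420) -> RGB, assuming the three planes
// are uploaded as separate single-channel (GL_LUMINANCE) textures.
let yuvToRGBFragmentShader = """
precision mediump float;
varying highp vec2 vTexCoord;
uniform sampler2D yTexture;  // full-resolution luma plane
uniform sampler2D uTexture;  // quarter-resolution Cb plane
uniform sampler2D vTexture;  // quarter-resolution Cr plane

void main() {
    // BT.601 video-range conversion
    float y = texture2D(yTexture, vTexCoord).r - 0.0625; // remove the 16/255 offset
    float u = texture2D(uTexture, vTexCoord).r - 0.5;
    float v = texture2D(vTexture, vTexCoord).r - 0.5;

    gl_FragColor = vec4(1.164 * y + 1.596 * v,
                        1.164 * y - 0.392 * u - 0.813 * v,
                        1.164 * y + 2.017 * u,
                        1.0);
}
"""
```

With this approach the CPU only uploads the planes each frame; the per-pixel math runs on the GPU, which is why it keeps up at 30 fps where a CPU-side RGB conversion tends to struggle.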

Related

How to offline debug augmented reality in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore. I'm not sure if ARKit has a similar implementation, but I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and use ImageConversion.EncodeToPNG to create PNG files named with the timestamp. You can pull your sensor data in parallel and, depending on what you want, write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
After that, you can use FFmpeg to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough. Otherwise, you can use a command like the one shown here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
You should be able to pass these images and the corresponding sensor data to your algorithm to check.

How can I record and save an RTCVideoTrack locally in appRTC for iOS?

I am using ReplayKit to record the screen. What I want to achieve is recording or capturing the screen, with audio, while I am on a call using the WebRTC SDK. I have used appRTC from GitHub.
I think I can achieve this with AVCaptureSession, as I want to exclude ReplayKit.
There is no relevant code to provide.
This is challenging, but it can be done. I can't provide detailed answers because this is pretty core to our app and what we're building, and it's A LOT of code, but hopefully it helps to know it can be done.
A couple of pointers for you:
Take a look at the RTCCameraVideoCapturer reference: http://cocoadocs.org/docsets/GoogleWebRTC/1.1.20266/Classes/RTCCameraVideoCapturer.html This will let you access the AVCaptureSession that WebRTC is using; you can successfully hook up your AVAssetWriter to it.
Look into the RTCVideoRenderer protocol reference: http://cocoadocs.org/docsets/Quickblox-WebRTC/2.2/Protocols/RTCVideoRenderer.html It will enable you to take the frames as WebRTC renders them and process them before passing them back to WebRTC. You'll need to convert the RTCI420Frame you receive to a CVPixelBufferRef (which is a YUV420 to RGB conversion).
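For the first pointer, here is a rough sketch of the hookup, assuming the GoogleWebRTC pod (whose RTCCameraVideoCapturer exposes its AVCaptureSession); the output settings, file type, and the absence of audio handling are all simplifications on my part:

```swift
import AVFoundation
import WebRTC  // GoogleWebRTC pod

// Sketch: add an extra video data output to the capture session WebRTC is
// driving, and append its sample buffers to an AVAssetWriter.
final class CallRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private var sessionStarted = false
    private let queue = DispatchQueue(label: "call.recorder")

    init(capturer: RTCCameraVideoCapturer, outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,  // assumed settings
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ])
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        super.init()

        // Tap the same AVCaptureSession that RTCCameraVideoCapturer owns.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if capturer.captureSession.canAddOutput(output) {
            capturer.captureSession.addOutput(output)
        }
        writer.startWriting()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if !sessionStarted {
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
```

The second pointer (the RTCVideoRenderer route) is the one you need if you also want the remote stream, since that video never passes through the local AVCaptureSession.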

Stream screen from Mac to iPhone

I want to create a program that streams the screen of my Mac to my iPhone, kind of like it is done in Liveview. I'm still relatively new to Objective-C, so I don't know where to start to make such an application.
It seems you have to have something installed on both your Mac and your iPhone, but how would you actually stream the screen of your Mac to your iPhone?
Hope someone can point me in the right direction.
Update of question
Thanks for the answers. It still seems a bit vague to me, and I'm not sure I really need full video streaming. Implementing it also seems to be a pain, since there aren't any really good resources for it.
Taking a screenshot every second or so and sending it to my iPhone as an image would actually be OK. I've figured out how to stream an image with Bonjour from my Mac to my iPhone.
The screenshot I need to send to my iPhone is of the design I'm currently working on in Photoshop. I've figured out how to take a screenshot and how to get a list of all open windows, but I don't know how to take a snapshot of an open PSD file.
Any suggestions on that?
It's a very big subject, so it's not really something that can be tackled with a simple response. However, I would suggest that one approach would be to write a VNC client for the iPhone. Indeed, this open source project exists that's probably worth a look:
http://code.google.com/p/vnsea/
Tim
I would go with the frequent screenshot approach. You would prepare a screenshot of the item you want to transmit and then use some easy library like my DTBonjour to transmit these objects via WiFi to iOS clients.
https://www.cocoanetics.com/2012/11/and-bonjour-to-you-too/
If you're using layer-backing, then you could also use the renderLayer... methods, which would also include sub-layers.
You'd get the most fidelity by encoding the individual screenshots into a streaming video format, though that is way more work.
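On snapshotting the open PSD specifically: you don't snapshot the file, you snapshot Photoshop's document window. One way is to look the window up with CGWindowListCopyWindowInfo and grab it with CGWindowListCreateImage. A rough sketch; matching on the owner name "Adobe Photoshop" is an assumption, so match on whatever window title or owner you actually need:

```swift
import Cocoa

// Sketch: capture a single on-screen window belonging to a given application.
func snapshotWindow(ownedBy ownerName: String) -> NSImage? {
    guard let windows = CGWindowListCopyWindowInfo([.optionOnScreenOnly, .excludeDesktopElements],
                                                   kCGNullWindowID) as? [[String: Any]],
          let info = windows.first(where: { ($0[kCGWindowOwnerName as String] as? String) == ownerName }),
          let windowNumber = info[kCGWindowNumber as String] as? Int,
          let cgImage = CGWindowListCreateImage(.null,
                                                .optionIncludingWindow,
                                                CGWindowID(windowNumber),
                                                [.boundsIgnoreFraming, .bestResolution])
    else { return nil }

    return NSImage(cgImage: cgImage, size: .zero)
}

// e.g. take one of these every second and push its PNG data over DTBonjour:
// let shot = snapshotWindow(ownedBy: "Adobe Photoshop")
```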
This is called RFB (or RDP), and most remote-screen applications use the RFB/RDP protocols and the libraries that implement them.

How to record user generated sound output on iPhone

I have a series of sounds that a user will play, rearrange, and edit etc. while using my app. When the user is finished, I want them to be able to save their work and record it to an mp3.
I don't want to play it through speakers and record it with the mic since that will result in low sound quality and interference. I cannot think of any ways of doing this that doesn't require extra hardware and/or a computer.
How can I do this using just their device?
Well, I would say it can't be done with AVFoundation.
My suggestion is to use Audio Units and turn all your interactions into an audio graph. At some point you set a render notify on the RemoteIO unit, so every time it renders sound to the speakers you get a callback where you can write those frames/packets of data to a file.
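A sketch of what that render notify can look like; the assumption here is that you have already built the graph and created an ExtAudioFileRef with ExtAudioFileCreateWithURL (an AAC destination format, for example):

```swift
import AudioToolbox

// Render-notify callback: after the RemoteIO unit renders a buffer to the
// speaker, append that same buffer to the file passed in via the refCon.
private let renderNotify: AURenderCallback = { inRefCon, ioActionFlags, _, _, inNumberFrames, ioData in
    guard ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
          let ioData = ioData else { return noErr }
    let file = ExtAudioFileRef(inRefCon)  // the refCon is our ExtAudioFileRef
    return ExtAudioFileWriteAsync(file, inNumberFrames, ioData)
}

func startCapturing(output remoteIOUnit: AudioUnit, into file: ExtAudioFileRef) -> OSStatus {
    // Prime the async writer once from a non-render thread, then hook up the notify.
    ExtAudioFileWriteAsync(file, 0, nil)
    return AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, UnsafeMutableRawPointer(file))
}
```

When the user stops, remove the notify with AudioUnitRemoveRenderNotify and close the file with ExtAudioFileDispose; if the ExtAudioFile's client format is LPCM and its file format is AAC, the built-in converter handles the encoding for you.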
I would suggest using AAC (M4A) over MP3. I am not very fond of MP3, and to be honest, as far as I know the SDK does not provide MP3 encoding, probably due to licensing issues. I could be wrong, though. Check the sample code below; it's probably the best sample code on Audio Units you will ever find on the web.
AudioGraph by Tom Zic

What's the best way of live streaming the iPhone camera to a media server?

According to this question, What Techniques Are Best To Live Stream iPhone Video Camera Data To a Computer?, it is possible to get compressed data from the iPhone camera, but as I've been reading in the AVFoundation reference, you only get uncompressed data.
So the questions are:
1) How to get compressed frames and audio from iPhone's camera?
2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?
Any help will be really appreciated.
Thanks.
You most likely already know....
1) How to get compressed frames and audio from iPhone's camera?
You cannot do this. The AVFoundation API has prevented this from every angle. I even tried named pipes and some other sneaky Unix foo. No such luck. You have no choice but to write it to a file. In your linked post, a user suggests setting up a callback to deliver encoded frames. As far as I am aware, this is not possible for H.264 streams. The capture delegate will deliver images encoded in a specific pixel format. It is the movie writers and AVAssetWriter that do the encoding.
2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?
Yes, it is. However, you will have to use libx264, which gets you into GPL territory. That is not exactly compatible with the App Store.
I would suggest using AVFoundation and AVAssetWriter for efficiency reasons.
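A minimal sketch of that AVAssetWriter route; the H.264 settings, pixel format, and bitrate are placeholders, and you'd append the CVPixelBuffers your capture delegate hands you:

```swift
import AVFoundation

// Sketch: set up an AVAssetWriter that encodes raw camera frames to H.264.
func makeFrameWriter(outputURL: URL, width: Int, height: Int) throws
    -> (writer: AVAssetWriter, adaptor: AVAssetWriterInputPixelBufferAdaptor) {

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height,
        AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: 1_000_000]  // placeholder bitrate
    ])
    input.expectsMediaDataInRealTime = true

    // The adaptor takes the CVPixelBuffers delivered by the capture delegate.
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])

    writer.add(input)
    writer.startWriting()
    // Call writer.startSession(atSourceTime:) with the first frame's presentation
    // timestamp, then for each frame:
    //   adaptor.append(pixelBuffer, withPresentationTime: pts)
    return (writer, adaptor)
}
```

The catch is that the encoded movie only ends up in a file, which is exactly the limitation the next answer describes: to stream it you have to read the file back or segment it.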
I agree with Steve. I'd add that when trying this with Apple's APIs, you're going to have to do some seriously nasty hacking. AVAssetWriter by default spends a second buffering before spilling to the file, and I haven't found a way to change that with settings. The way around it seems to be to force small file writes and file closes by using multiple AVAssetWriters, but that introduces a lot of overhead. It's not pretty.
Definitely file a new feature request with Apple (if you're an iOS developer). The more of us who do, the more likely it is they'll add some sort of writer that can write to a buffer and/or a stream.
One addition I'd make to what Steve said on the x264 GPL issue: I think you can get a commercial license for it, which is better than GPL but of course costs money. That would mean you could still use it, get pretty OK results, and not have to open up your own app's source. Not as good as an augmented Apple API using their hardware codecs, but not bad.