How do you retrieve the video stream from Unity MRTK-WebRTC?

Right now, I have successfully established the WebRTC connection between the HoloLens and Unity. Now I would like to retrieve the video stream from the HoloLens and run some OpenCV algorithms on it, but I am unsure how to go about extracting the video stream.
Would greatly appreciate some help with this; I'm really a newbie to all of this.
Thanks a lot.

The VideoReceiver component provides a VideoTrack property that receives video frames from the remote peer. You can therefore register a callback on the I420AVideoFrameReady or Argb32VideoFrameReady event to handle every new video frame as it is received.
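As a rough illustration, here is a minimal C# sketch of that registration, assuming the MixedReality-WebRTC Unity integration and the VideoTrack property named above; exact property and event names can differ between library versions, and the buffer handling is only one way of getting the pixels across to OpenCV.

    using Microsoft.MixedReality.WebRTC;
    using Microsoft.MixedReality.WebRTC.Unity;
    using UnityEngine;

    public class RemoteFrameGrabber : MonoBehaviour
    {
        // Assumption: a VideoReceiver from the MixedReality-WebRTC Unity samples,
        // assigned in the Inspector and paired with the remote video track.
        public VideoReceiver videoReceiver;

        private bool _registered;
        private byte[] _frameBuffer;

        void Update()
        {
            // The remote track only exists once the remote stream has started.
            var track = videoReceiver != null ? videoReceiver.VideoTrack : null;
            if (track != null && !_registered)
            {
                track.I420AVideoFrameReady += OnI420AFrameReady;
                _registered = true;
            }
        }

        private void OnI420AFrameReady(I420AVideoFrame frame)
        {
            // Generous allocation: Y + U + V (+ optional alpha) always fits in w*h*4 bytes.
            int required = (int)(frame.width * frame.height) * 4;
            if (_frameBuffer == null || _frameBuffer.Length < required)
                _frameBuffer = new byte[required];

            // Pack the planes into one contiguous managed buffer.
            frame.CopyTo(_frameBuffer);

            // _frameBuffer now holds the frame in I420 layout; hand it to your OpenCV
            // code here (e.g. a YUV I420 -> BGR conversion). Note this callback fires
            // on a library worker thread, not the Unity main thread.
        }
    }

If you need the result back in a Unity texture or UI element, marshal it to the main thread first; the callback itself should stay lightweight so it doesn't stall the incoming video.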

Related

Is there a way to stream Unity3D camera view out as a real camera output?

I am thinking of streaming out a Unity3D camera view as if it were a real camera (same output, streams and options). I would need to do the following:
Encode the frames as MJPEG, MxPEG, MPEG-4, H.264, H.265, H.264+, or H.265+.
Send metadata: string input/output.
I have not seen anything about streaming out Unity camera views, except one question (Streaming camera view to website in unity?).
Would anyone know if this is possible, and if so, what the basic outline to follow would be?
Thank you for the feedback.
I would probably start with keijiro's FFmpegOut plugin. I have a strong feeling FFmpeg allows streaming the video via the command line, which is exactly what keijiro is doing in his plugin, so it should be relatively easy to modify it to stream instead of recording to disk: https://github.com/keijiro/FFmpegOut
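For a rough idea of what "stream instead of record" could look like even without the plugin, here is a hedged C# sketch that reads a camera's RenderTexture back each frame and pipes raw RGBA frames into an ffmpeg process over stdin. The ffmpeg arguments (raw RGBA input, H.264 encode, MPEG-TS over UDP) are just one possible choice, and the resolution, framerate and output URL are assumptions you would tune; CPU readback via ReadPixels is the simple but slow path.

    using System.Collections;
    using System.Diagnostics;
    using UnityEngine;

    public class CameraToFfmpeg : MonoBehaviour
    {
        public Camera sourceCamera;          // camera to stream (assumed assigned in the Inspector)
        public int width = 1280, height = 720, frameRate = 30;

        private RenderTexture _rt;
        private Texture2D _readback;
        private Process _ffmpeg;

        void Start()
        {
            _rt = new RenderTexture(width, height, 24);
            _readback = new Texture2D(width, height, TextureFormat.RGBA32, false);
            sourceCamera.targetTexture = _rt;

            // Example invocation: raw RGBA in on stdin, H.264 out as MPEG-TS over UDP.
            // "-vf vflip" compensates for Unity's bottom-up readback; adjust the output
            // URL/container for whatever receiver you actually use.
            _ffmpeg = new Process();
            _ffmpeg.StartInfo.FileName = "ffmpeg";
            _ffmpeg.StartInfo.Arguments =
                $"-f rawvideo -pix_fmt rgba -s {width}x{height} -r {frameRate} -i - " +
                "-vf vflip -c:v libx264 -preset ultrafast -tune zerolatency " +
                "-f mpegts udp://127.0.0.1:12345";
            _ffmpeg.StartInfo.UseShellExecute = false;
            _ffmpeg.StartInfo.RedirectStandardInput = true;
            _ffmpeg.Start();

            StartCoroutine(CaptureLoop());
        }

        private IEnumerator CaptureLoop()
        {
            var wait = new WaitForEndOfFrame();
            while (true)
            {
                yield return wait;

                // Read the camera's RenderTexture back to the CPU.
                RenderTexture.active = _rt;
                _readback.ReadPixels(new Rect(0, 0, width, height), 0, 0);
                RenderTexture.active = null;

                // Push the raw RGBA bytes into ffmpeg's stdin.
                byte[] raw = _readback.GetRawTextureData();
                var stdin = _ffmpeg.StandardInput.BaseStream;
                stdin.Write(raw, 0, raw.Length);
                stdin.Flush();
            }
        }

        void OnDestroy()
        {
            if (_ffmpeg != null && !_ffmpeg.HasExited)
            {
                _ffmpeg.StandardInput.Close();
                _ffmpeg.WaitForExit(2000);
            }
            if (sourceCamera != null) sourceCamera.targetTexture = null;
        }
    }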
You can also do it via ROS by creating a publisher and publishing the camera stream from Unity to a ROS topic :)

How can I record and save an RTCVideoTrack locally in AppRTC iOS?

I am using ReplayKit to record the screen. What I want to achieve is recording or capturing the screen with audio while I am on a call using the WebRTC SDK. I have used AppRTC from GitHub.
I think I can achieve this with AVCaptureSession, as I want to exclude ReplayKit.
There is no relevant code to provide.
This is challenging, but it can be done. I can't provide a detailed answer because it's pretty core to our app and what we're building, and it's a lot of code, but hopefully it helps to know it can be done.
A couple of pointers for you:
Take a look at http://cocoadocs.org/docsets/GoogleWebRTC/1.1.20266/Classes/RTCCameraVideoCapturer.html This will let you access the AVCaptureSession that WebRTC is using; you can successfully hook your AVAssetWriter up to it.
Look into the RTCVideoRenderer protocol reference. http://cocoadocs.org/docsets/Quickblox-WebRTC/2.2/Protocols/RTCVideoRenderer.html It will enable you to take the frames as WebRTC renders them and process them before passing them back to WebRTC. You'll need to convert the RTCI420Frame you receive to a CVPixelBufferRef (which is a YUV420 to RGB conversion).

Recording a live stream

Currently I'm playing a live radio stream using MPMoviePlayerController. I want to record and save the radio programs in my application.
Can someone please help me on this?
Thanks in advance
You would be able to do so using AVFoundation:
Initialize an instance of AVURLAsset using your stream URL.
Use AVAssetReader to read the stream bytes.
Use AVAssetWriter to write the bytes to a new file.
Hope I could help.
If you're looking into the audio portion, I made a sample app that can record, say, Pandora while it's running. Hope this helps! https://github.com/casspangell/AudioMic

How to record an audio stream from an internet radio resource?

In my application I play an internet audio/radio stream with AVPlayer. I want to save the audio and play it back. Could you point me to the right way to solve this problem (with an example, please)?
Check this link; hope it will help: http://blog.evandavey.com/2010/04/how-to-iphone-sdk-play-and-record-audio-concurrently.html

iPhone: Video API: modifying a live video stream

I have a question about video stream processing. Is it possible to access and modify a real-time video stream during recording (e.g. I want to add some text to the video)? I can do this in the preview by getting separate frames, but I'm looking for a tool that will allow me to store the video with my text in the video frames.
There are probably already some libraries/tools available (but I haven't found any yet).
Try the GPUImage library. It can help you.
You should check the AVCam sample code by Apple. That might be a starting point.