Is WebRTC a good solution for implementing file playback with an OSD display?
I.e. the sending device streams the *.webm files sequentially (either as a MediaStream or over a DataChannel) peer-to-peer,
and the receiving web browser displays the video and lets
the user toggle the OSD display on and off.
(The OSD layer can use a DataChannel on the same PeerConnection
that carries the video's MediaStream/DataChannel.)
So, does this use case align well with what WebRTC is intended for?
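To make the described setup concrete, here is a minimal browser-side sketch of it (file names, element IDs, and the signaling layer are placeholders, not from the question): one RTCPeerConnection per pair of peers carries both the video and a DataChannel for OSD commands.

```typescript
// A minimal sketch, assuming one RTCPeerConnection per pair of peers.
// Signaling (offer/answer/ICE exchange) and error handling are omitted;
// file names and element IDs are placeholders.

// --- Sending peer: play a .webm file and stream its tracks ---
async function startSending(pc: RTCPeerConnection): Promise<RTCDataChannel> {
  const source = document.createElement('video');
  source.src = 'clip1.webm';                      // hypothetical file
  await source.play();
  // captureStream() turns the playing element into a MediaStream
  // (still prefixed as mozCaptureStream() in Firefox).
  const stream: MediaStream = (source as any).captureStream();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // The OSD layer shares the same PeerConnection via a DataChannel.
  return pc.createDataChannel('osd');
}

// --- Receiving peer: render the video and toggle the OSD overlay ---
function startReceiving(pc: RTCPeerConnection): void {
  pc.ontrack = (e) => {
    (document.getElementById('player') as HTMLVideoElement).srcObject = e.streams[0];
  };
  pc.ondatachannel = (e) => {
    e.channel.onmessage = (msg) => {
      const cmd = JSON.parse(msg.data);           // e.g. { visible: true }
      (document.getElementById('osd') as HTMLElement).style.display =
        cmd.visible ? 'block' : 'none';
    };
  };
}
```

Moving to the next *.webm in the sequence could reuse the same sender by calling RTCRtpSender.replaceTrack() with the new file's tracks, so the connection would not need to be renegotiated between files.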
I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer, which processes the audio for 3D playback. I now need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices are on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this setup seems to be optimized for microphone use, as it applies options like noise reduction and a smaller bandwidth. I can't find a way to set the WebRTC options so that it streams what I need (music) at better quality.
Does anyone know how to change that in MRTK WebRTC, or does anyone have a better solution for streaming audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need a different kind of solution.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.
I am using flutter mediasoup client to stream video+audio from the server.
This works well in general.
However, I now want to measure the audio level (i.e., loud/soft) of the incoming audio stream so that I can display an audio-level indicator widget.
The underlying stream is this webrtc class, but there doesn't seem to be any API to directly extract the audio level.
I found this thread in the flutter-webrtc repo, but it led to no concrete solution.
So, I wonder if anyone has had any success in extracting the audio level from a WebRTC media stream.
Thank you.
I also posted this question in the flutter-webrtc GitHub repo and got a response that eventually led to a solution in my case:
https://github.com/flutter-webrtc/flutter-webrtc/issues/833
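For comparison, this is roughly how the same measurement looks in browser WebRTC (the flutter-webrtc API differs, and this sketch is not the exact fix from the linked issue): the receiver's synchronization sources and inbound-rtp stats both expose an audioLevel value that can be polled to drive a level meter.

```typescript
// Hedged sketch (browser WebRTC, not the flutter-webrtc API): poll the
// receiver for an audio level in the 0.0-1.0 range and feed it to a meter.
async function currentAudioLevel(receiver: RTCRtpReceiver): Promise<number | null> {
  // getSynchronizationSources() reports a recent audioLevel per SSRC.
  const [source] = receiver.getSynchronizationSources();
  if (source && source.audioLevel !== undefined) {
    return source.audioLevel;
  }
  // Fallback: some browsers expose audioLevel on the inbound-rtp stats entry.
  let level: number | null = null;
  const stats = await receiver.getStats();
  stats.forEach((report: any) => {
    if (report.type === 'inbound-rtp' && report.kind === 'audio' &&
        report.audioLevel !== undefined) {
      level = report.audioLevel;
    }
  });
  return level;
}

// Example usage: update a meter widget about 10 times per second.
// setInterval(async () => {
//   const level = await currentAudioLevel(audioReceiver); // from pc.getReceivers()
//   if (level !== null) meter.style.width = `${Math.round(level * 100)}%`;
// }, 100);
```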
I am making a video-chat web application using C# socket programming to transfer data. I want to use the Web Audio API to capture audio and video in my view page, but I don't know how to transfer the audio using sockets (which are defined in the controller class). Can the API be used with socket programming if I can capture the raw bits from it?
(I've also tried using WebRTC, but I am unable to create multiple peer connections. As my application involves multiple peers, I prefer plain socket programming.)
If you mean, can you just get access to the raw audio/video bits from getUserMedia - yes, you can. (For audio, check out any of the input demos on webaudiodemos.appspot.com, particularly AudioRecorder, which shows how to get the bits from a ScriptProcessor node.) But I would caution that streaming audio and video over the net is not a trivial task. You can't really just push the bits over the wire with no thought to buffering (or adaptive capabilities, unless you can guarantee a high-speed local network only).
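A minimal sketch of that raw-audio capture follows (ScriptProcessorNode is deprecated in favor of AudioWorklet, but it matches the demo mentioned above; the onChunk callback is a placeholder for whatever transport you use):

```typescript
// A minimal sketch of capturing raw PCM from the microphone, assuming the
// onChunk callback is where the samples get handed to your own transport.
async function captureRawAudio(onChunk: (samples: Float32Array) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  // 4096-sample buffer, mono in, mono out.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    // Copy the data out: the underlying buffer is reused between callbacks.
    onChunk(new Float32Array(event.inputBuffer.getChannelData(0)));
  };
  source.connect(processor);
  processor.connect(ctx.destination); // some browsers only run the node when connected
}
```

The Float32Array chunks would still need to be buffered and encoded (for example to 16-bit PCM or a compressed codec) before being pushed over a WebSocket to the C# side; pushing them uncompressed is exactly the problem this answer warns about.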
I am trying to create a streaming video DVR like functionality in an app I am developing. I have an HTTP Live Stream that I have successfully gotten to play on the iPad. I want the user to be able to push the "Record" button, and begin recording the video that is currently playing from that point. This video file will be accessible from the app or from the camera roll. Currently, I am using the MPMoviePlayerController object to play the video stream. I do not see any methods of accessing the data from the object in Apple's documentation. Here are some thoughts I had on ways of going about this.
1) Somehow access the video data from MPMoviePlayerController, and write this to a file. Or use another type of player object that will allow me to play the video and access the currently playing data.
2) Implement some sort of screen capture recording that gets a video capture of the iPad's screen. This would allow me to record the video in a "screenshot" sort of way.
3) Locate the HTTP Live Streaming video segments where they are stored by MPMoviePlayerController. Presumably they need to be stored somewhere on the iPad for playback. Is there a way of accessing these files?
4) Manually download the stream's video segments over HTTP while streaming the file. This seems like it's not ideal, since the stream would have to be downloaded twice.
5) This could work. Periodically download the video segments to the iPhone, set up a local HTTP server on the iPhone, and serve the videos to the MPMoviePlayerController. This way the video segments could be marked for recording and assembled into a video.
6) I do have control of the streaming server. I could write some server side code to record the video on the server end, then send the video to the iPad after the fact. I would rather not do this.
Has anyone done any of these things? Ideally the iPhone would just be able to access the video data somehow and easily record it. I would rather not get into options 4, 5, or 6 (above) if I don't have to.
Thanks in advance.
DVR on the device is generally discouraged, due to the limited space available and other factors like battery life, processing power, and cleanup after the user stops the DVR.
If you want to achieve DVR playback on iOS devices (or other devices using HLS), I suggest you keep the video server side. The live stream is already captured and segmented server side; all you have to do is keep the segments a bit longer instead of deleting them. By using the EXT-X-PLAYLIST-TYPE and EXT-X-MEDIA-SEQUENCE tags, you can tell the player that it is opening a live stream which has DVR (earlier) video available.
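For illustration (segment names and durations are made up), an EVENT-type playlist only ever appends segments, so a player can seek back through everything published since the stream started while it is still live:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:EVENT
#EXTINF:10.0,
segment00000.ts
#EXTINF:10.0,
segment00001.ts
#EXTINF:10.0,
segment00002.ts
```

The absence of an EXT-X-ENDLIST tag keeps the player treating the stream as live; appending that tag later turns the same playlist into a plain VOD recording.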
Alternatively, you can use a server that does that out of the box, for example Wowza. Here's an article on how to achieve this with Wowza
I'd like to get real-time video from the iPhone to another device (either desktop browser or another iPhone, e.g. point-to-point).
NOTE: It's not one-to-many, just one-to-one at the moment. Audio can be part of the stream or go over a telephone call on the iPhone.
There are four ways I can think of...
1) Capture frames on the iPhone, send the frames to a media server, and have the media server publish real-time video using a host web server.
2) Capture frames on the iPhone, convert them to images, send them to an HTTP server, and have JavaScript/AJAX in the browser reload the images from the server as fast as possible.
3) Run an HTTP server on the iPhone, capture 1-second-duration movies on the iPhone, create M3U8 files on the iPhone, and have the other user connect directly to the HTTP server on the iPhone for live streaming.
4) Capture 1-second-duration movies on the iPhone, create M3U8 files on the iPhone, send them to an HTTP server, and have the other user connect to the HTTP server for live streaming. This is a good answer; has anyone gotten it to work?
Is there a better, more efficient option?
What's the fastest way to get data off the iPhone? Is it ASIHTTPRequest?
Thanks, everyone.
Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.
Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
Write your own parser for the H.264/AAC output (very hard)
Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
"Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions)."
I have just written such code, and it is quite possible to eliminate that gap by overlapping two AVAssetWriters. Since this approach uses the hardware encoder, I strongly recommend it.
We have similar needs; to be more specific, we want to implement streaming video & audio between an iOS device and a web UI. The goal is to enable high-quality video discussions between participants using these platforms. We did some research on how to implement this:
We decided to use OpenTok and managed to pretty quickly implement a proof-of-concept style video chat between an iPad and a website using the OpenTok getting started guide. There's also a PhoneGap plugin for OpenTok, which is handy for us as we are not doing native iOS.
Liblinphone also seemed to be a potential solution, but we didn't investigate further.
iDoubs also came up, but again, we felt OpenTok was the most promising one for our needs and thus didn't look at iDoubs in more detail.