I'm not able to switch off video on my local system. I can signal it through the socket signalling server, but that doesn't save bandwidth. I want to actually turn the audio and video off so the bandwidth is saved, and the same should happen on the peer's side.
To switch off video and audio on your local system and have the same effect on the peer's side, you need to change the stream and renegotiate with the peer using new SDP constraints and session constraints.
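For example, with the Google WebRTC framework on iOS (a rough, untested sketch; `sendOffer` is a placeholder for whatever your socket signalling code does):

```swift
import WebRTC

// Actually stop sending media by removing the senders, then renegotiate so
// the peer sees the change. sendOffer(_:) is a hypothetical helper that
// forwards SDP through your socket signalling server.
func turnOffMedia(_ pc: RTCPeerConnection,
                  sendOffer: @escaping (RTCSessionDescription) -> Void) {
    // Removing a sender stops its RTP packets, which is what saves bandwidth;
    // merely disabling a track may keep sending silence/black frames.
    for sender in pc.senders where sender.track != nil {
        pc.removeTrack(sender)
    }
    // Renegotiate: create a new offer that reflects the changed media sections.
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: nil)
    pc.offer(for: constraints) { sdp, error in
        guard let sdp = sdp, error == nil else { return }
        pc.setLocalDescription(sdp) { _ in
            sendOffer(sdp) // the peer answers and applies the new description
        }
    }
}
```

To turn media back on, add the tracks again and renegotiate the same way.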
Problem:
I'm streaming live audio via an Icecast mountpoint. On the server side, when the live show stops, the server reverts to playing a music playlist (the actual mountpoint stays /live). However, when the live stream stops, the audio player stops too. Dev tools say the request has been cancelled. The player must be HTML5, so no Flash.
Mountpoint: http://198.154.112.233:8716/
Stream: http://198.154.112.233:8716/live
I've tried:
Listening for the stream to end and telling the player to reconnect. However, none of the events in the jPlayer and MediaElement.js APIs fire when the stream is interrupted.
Contacting the server host to ask for advice on dealing with their behind-the-scenes playlist switcher.
I'd like to find a client-side solution to this. Could WebSockets / WebRTC solve this problem by keeping a connection open?
Your problem isn't client-side, but in how you are handling your encoding. No client-side changes can properly fix it.
Your stream is configured so that the encoder uses files on disk as a backup stream. Unfortunately, it sounds like instead of re-encoding and splicing (and matching the sample rate and channel count where needed), it is just sending the raw file data.
This works some of the time, as MPEG decoders are often tolerant of corrupt streams, and will re-sync. However, sometimes the stream is too broken, and the decoder gives up. The decoder will also often stop if there is a change in sample rate or channel count. (Bitrate changes are generally not a large problem.)
To fix your problem, you must contact your host.
Yes, this is unfortunately a problem if the playlist and live stream are not in the same codec. Additional tools such as Liquidsoap have solved the problem for me, as well as providing many more features:
savonet.sourceforge.net
I am currently working on a small project, and one of the components is to capture the webcam stream and send it from one side to the other (client --> server). Right now I have the stream from the server as bytes, and as far as I know I should transfer these bytes using UDP. My question is how to do that:
Should it be enclosed in a file and then transferred?
Should I transfer the raw bytes?
Should I create a buffer at the client side and display it on screen when it gets full?
In short, I would like to know how to implement the transfer of the stream from the server to the client (I need just one direction).
You can stream a webcam to a client in multiple ways.
Use Windows Media Server / Flash Media Server. Push your webcam to the server with Windows Media Encoder or Flash Media Encoder, and use the server's live link for playback on the client (Windows/web).
Use Windows Media Encoder to stream your webcam to anyone without a server involved. When your encoder starts, you will get a URL to view your stream, which you can use for playback on the client (Windows/web).
Use third-party streaming services, where they give you a publishing point to publish your webcam stream, and use the link provided by them for playback on the client (Windows/web). (Check with Brightcove or Mogulus by Livestream.)
Hope this helps.
I am seeking the following three items, which I cannot find on Stack Overflow or anywhere else:
Sample code for AVFoundation capturing to file chunks (~10 seconds) that are ready for compression?
Sample code for compressing the video and audio for transmission across the Internet?
ffmpeg?
Sample code for HTTP Live Streaming sending files from the iPhone to an Internet server?
My goal is to use the iPhone as a high quality AV camcorder that streams to a remote server.
If the intervening data rate bogs down, files should buffer at the iPhone.
Thanks.
You can use AVAssetWriter to encode an MP4 file of your desired length. The AV media will be encoded into the container as H.264/AAC. You could then simply upload this to a remote server. If you wanted, you could segment the video for HLS streaming, but keep in mind that HLS is designed as a server->client streaming protocol; there is no notion of push as far as I know. You would have to create a custom server to accept pushing of segmented video streams (which does not really make a lot of sense given the way HLS is designed; see the RFC draft). A better approach might be to simply upload the MP4(s) via a TCP socket and have your server segment the video for streaming to client viewers. This can easily be done with FFmpeg, either on the command line or via a custom program.
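For illustration only (the dimensions and sample rate here are mine, not from the answer above), an AVAssetWriter configured for H.264/AAC might look roughly like this:

```swift
import AVFoundation

// Rough sketch: an AVAssetWriter that produces an H.264/AAC MP4.
func makeWriter(outputURL: URL) throws -> AVAssetWriter {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,  // H.264 into the container
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720,
    ])
    videoInput.expectsMediaDataInRealTime = true

    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,     // AAC audio
        AVSampleRateKey: 44_100.0,
        AVNumberOfChannelsKey: 2,
    ])
    audioInput.expectsMediaDataInRealTime = true

    if writer.canAdd(videoInput) { writer.add(videoInput) }
    if writer.canAdd(audioInput) { writer.add(audioInput) }
    return writer
}
```

You would then append sample buffers from your capture outputs, finish writing every ~10 seconds, and upload each finished file.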
I also wanted to add that if you try to stream 720p video over a cellular connection, your app will more than likely get rejected for excessive data use.
Capture video and audio using AVFoundation. You can specify the audio and video codecs (kCMVideoCodecType_H264 and kAudioFormatMPEG4AAC), frame sizes, and frame rates via the capture format description. That will give you compressed H.264 video and AAC audio.
Encapsulate this and transmit it to the server using an RTP server such as Live555 Streaming Media.
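As a rough sketch of the capture side (the device lookup and preset here are illustrative, not from the answer above):

```swift
import AVFoundation

// Wire the camera and microphone into a capture session; compression
// settings are applied later, at the output/writer stage.
func makeCaptureSession() -> AVCaptureSession? {
    let session = AVCaptureSession()
    session.sessionPreset = .hd1280x720

    guard
        let camera = AVCaptureDevice.default(for: .video),
        let mic = AVCaptureDevice.default(for: .audio),
        let videoIn = try? AVCaptureDeviceInput(device: camera),
        let audioIn = try? AVCaptureDeviceInput(device: mic),
        session.canAddInput(videoIn), session.canAddInput(audioIn)
    else { return nil }

    session.addInput(videoIn)
    session.addInput(audioIn)

    // Frames and samples are delivered here; hand them to an encoder/writer.
    let videoOut = AVCaptureVideoDataOutput()
    if session.canAddOutput(videoOut) { session.addOutput(videoOut) }

    let audioOut = AVCaptureAudioDataOutput()
    if session.canAddOutput(audioOut) { session.addOutput(audioOut) }

    return session
}
```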
I'm working on a personal project where the iPhone connects to a server-type application running on a Mac. The iPhone sends and receives textual/ASCII data via standard sockets. I now need to stream the microphone from the Mac to the iPhone. I've done some work with AudioServices before, but wanted to check my thoughts here before getting too deep.
I'm thinking I can:
1. Create an Audio Queue in the standard Cocoa application on the Mac.
2. In my Audio Queue callback function, rather than writing the captured audio to a file, write it to another socket I open for audio streaming.
3. On the iPhone, receive the raw sampled/encoded audio data from the TCP stream and feed it into a playback audio queue that outputs to the headphones/speaker.
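Something like this is what I have in mind for step 2 (an untested sketch; `sendToSocket` is a placeholder for my socket write):

```swift
import AudioToolbox
import Foundation

// Placeholder for writing to the already-open TCP audio socket.
func sendToSocket(_ data: Data) { /* write(fd, ...) */ }

// 16-bit mono PCM at 44.1 kHz; pick whatever format you actually need.
var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
    mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

// The input callback runs on the queue's own thread: copy the samples out,
// push them to the socket, and recycle the buffer.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    let data = Data(bytes: buffer.pointee.mAudioData,
                    count: Int(buffer.pointee.mAudioDataByteSize))
    sendToSocket(data)
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)
// Still needed: allocate and enqueue a few initial buffers with
// AudioQueueAllocateBuffer/AudioQueueEnqueueBuffer, then AudioQueueStart.
```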
I know this is no small task and I've greatly simplified what I need to do but could it be as easy as that?
Thanks for any help you can provide,
Stateful
This looks broadly sensible, but you'll almost certainly need to do a few more things:
Buffering. On the "recording" end, you probably don't want to block the audio queue if the buffer is full. On the "playback" end, I don't think you can just pass buffers into the queue (IIRC you'll need to buffer the data until you get a callback); see the FIFO sketch after this list.
Concurrency. I'm pretty sure AQ callbacks happen on their own thread, so you'll need some sort of locking/barriers around your buffer accesses.
Buffer pools, if memory allocation ends up being a big overhead.
Compression. AQ might be able to give you "IMA4" frames (IMA ADPCM 4:1, or so); I'm not sure if it does hardware MP3 decompression on the iPhone.
Packetization, if e.g. you need to interleave voice chat with text chat.
EDIT: Playback sync (or whatever you're supposed to call it). You need to be able to handle different effective audio clock rates, whether it's due to a change in latency or something else. Skype does it by changing playback speed (with pitch-correction).
EDIT: Packet loss. You might be able to get away with using TCP over a short link, but that depends a lot on the quality of your wireless network. UDP is a minor pain to get right (especially if you have to detect an MTU hole).
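To illustrate the buffering/locking points above, a minimal thread-safe FIFO between the audio-queue thread and the network thread might look like this (an untested sketch; the class and its names are mine):

```swift
import Foundation

// A tiny bounded FIFO shared by the AQ callback and the socket code.
final class AudioFIFO {
    private var data = Data()
    private let lock = NSLock()
    private let capacity: Int

    init(capacity: Int = 256 * 1024) { self.capacity = capacity }

    // Called from the audio-queue callback; drops data instead of blocking
    // when the buffer is full (see the "recording" note above).
    func push(_ chunk: Data) {
        lock.lock(); defer { lock.unlock() }
        guard data.count + chunk.count <= capacity else { return }
        data.append(chunk)
    }

    // Called from the network/playback side.
    func pop(maxBytes: Int) -> Data {
        lock.lock(); defer { lock.unlock() }
        let n = min(maxBytes, data.count)
        let chunk = Data(data.prefix(n))
        data.removeFirst(n)
        return chunk
    }
}
```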
Depending on your data rates, it might be worthwhile going for the lower-level (BSD) socket API and potentially even using readv()/writev().
If all you want is an "online radio" service and you don't care about the protocol used, it might be easier to use AVPlayer/MPMoviePlayer to play audio from a URL instead. This involves implementing a server which speaks Apple's HTTP streaming protocol; I believe Apple has some sample code that does this.
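For that route, playback itself is only a couple of lines (the URL below is a placeholder, not a real server):

```swift
import AVFoundation

// Point AVPlayer at an HTTP Live Streaming playlist and let the system
// handle buffering and reconnection.
let player = AVPlayer(url: URL(string: "https://example.com/radio/playlist.m3u8")!)
player.play()
```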
My iPhone application plays a WMA audio stream over the mms:// protocol.
When the Wi-Fi connection drops, it won't switch to 3G to continue streaming. I have enough buffer to play for another 15 seconds (I tried to increase the buffer size, but it stops anyway).
What sort of mechanism can I implement so that playback won't stop and the iPhone switches from Wi-Fi to 3G?
In other posts I saw that it should happen automatically, but in my case it doesn't, because it's WMA over the MMS protocol.
Thanks!
I may be talking out my butt here, but you could try detecting the error from the connection when it fails, and then restarting the stream from your buffered location so you can resume the download. If Wi-Fi is down, the iPhone should reconnect over 3G to accommodate your new request for the MMS feed.