Can someone tell me how it is possible to rewind a live stream via the Red5 server?
Is it possible or not?
A code snippet would help. Please reply soon.
Also, I know pausing is handled by the Flash player, but I want to know from which position the stream starts playing again (at runtime, does it resume from where it was stopped?).
Awaiting a quick response.
B/R
I think you cannot rewind a live stream. As I understand it, a live stream is distributed directly to all connected clients. The frames are not saved on the server, so the server isn't able to "go back".
You need to record the stream if you want to be able to rewind.
If you pause the stream, the last frame is frozen on your screen. The server continues the broadcast, and you miss the frames that are broadcast during that time. If you resume playing, the next frame is the LIVE frame being broadcast at that moment, so you miss some frames.
That's the nature of a live stream: it is "live"! If you can pause or rewind, it isn't live any more; that's a recorded stream.
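If you go the recording route, a rough sketch of server-side recording in a Red5 application adapter could look like this (a sketch only, assuming Red5's ClientBroadcastStream.saveAs(name, append) API; adjust package names and error handling to your Red5 version):
import org.red5.server.adapter.ApplicationAdapter;
import org.red5.server.api.stream.IBroadcastStream;
import org.red5.server.stream.ClientBroadcastStream;

public class Application extends ApplicationAdapter {

    /** Called by Red5 when a client starts publishing a live stream. */
    @Override
    public void streamBroadcastStart(IBroadcastStream stream) {
        super.streamBroadcastStart(stream);
        try {
            // Save the live stream to streams/<name>.flv so it can be
            // played back later (a recorded stream can be seeked/rewound).
            ((ClientBroadcastStream) stream).saveAs(stream.getPublishedName(), false);
        } catch (Exception e) {
            // handle/log the recording error
        }
    }
}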
Create a custom application or modify an existing one (for example, oflaDemo).
Create a server stream in your application class, in appStart():
private IServerStream serverStream;
...
public boolean appStart(IScope app) {
    serverStream = StreamUtils.createServerStream(app, "MyOwnTVChannel");
    // the addItem()/start() calls shown below also go here, inside appStart()
Add .flv files from the /streams/ directory (in the oflaDemo example) to the playlist:
serverStream.addItem( SimplePlayItem.build( "prometheus" , 0 , 20000 ) );
serverStream.addItem( SimplePlayItem.build( "someOthefFLVMovie" , 0 , 20000 ) );
20000 means 20 seconds of playing; you can call serverStream.setRepeat(true) after starting if you want the playlist to loop.
Start your stream:
serverStream.start();
Now Flash clients can watch your own TV channel with NetStream.play("MyOwnTVChannel");
Remember that if you do not set repeating, your channel will end after 40 seconds in this example.
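On the client side, a minimal ActionScript 3 sketch of playing that channel might look like this (the "oflaDemo" application name and "localhost" host are assumptions; use your own connection URI):
var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://localhost/oflaDemo");

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        var video:Video = new Video();
        video.attachNetStream(ns);
        addChild(video);
        ns.play("MyOwnTVChannel"); // the server stream created above
    }
}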
navigator.mediaDevices.getUserMedia(constraints).then(stream => {
    const recorder = new MediaRecorder(stream);
    recorder.start();
    recorder.pause();
    // get a new stream: navigator.mediaDevices.getUserMedia(newConstraints)
    // how to update the recorder's stream here?
    recorder.resume();
});
Is it possible? I've tried to create a MediaStream and use the addTrack and removeTrack methods to change the stream's tracks, but with no success (the recorder stops when I try to resume it with the updated stream).
Any ideas?
The short answer is no, it's not possible. The MediaStream recording spec explicitly describes this behavior: https://w3c.github.io/mediacapture-record/#dom-mediarecorder-start. It's bullet point 15.3 of that algorithm which says "If at any point, a track is added to or removed from stream’s track set, the UA MUST immediately stop gathering data ...".
But in case you only want to record audio you can probably use an AudioContext to proxy your streams. Create a MediaStreamAudioDestinationNode and use the stream that it provides for recording. Then you can feed your streams with MediaStreamAudioSourceNodes and/or MediaStreamTrackAudioSourceNodes into the audio graph and mix them in any way you desire.
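For the audio-only workaround, a minimal sketch might look like this (firstStream/newStream are placeholders for streams obtained from getUserMedia; untested):
const audioContext = new AudioContext();

// The destination node exposes a MediaStream whose track set never changes,
// so the MediaRecorder keeps running while we swap inputs behind it.
const destination = audioContext.createMediaStreamDestination();
const recorder = new MediaRecorder(destination.stream);
recorder.start();

// Feed the first stream into the graph.
let source = audioContext.createMediaStreamSource(firstStream);
source.connect(destination);

// Later, switch to another stream without touching the recorder's stream.
function switchTo(newStream) {
    source.disconnect();
    source = audioContext.createMediaStreamSource(newStream);
    source.connect(destination);
}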
Last but not least there are currently plans to add the functionality you are looking for to the spec. Maybe you just have to wait a bit. Or maybe a bit longer depending on the browser you are using. :-)
https://github.com/w3c/mediacapture-record/issues/167
https://github.com/w3c/mediacapture-record/pull/186
I'm using libspotify SDK, C library for win32.
I think my setup is right: every session callback is registered. I don't understand why I never receive the end_of_track call, while music_delivery keeps being called with zero-padded frames, 22050 at a time.
To start playback I first load the track with sp_session_load; as long as it returns SP_ERROR_IS_LOADING I post a message to my message queue (the synchronization method I've used, via the Win32 PostMessage API) in order to retry the same sp_session_load call. As soon as it returns SP_ERROR_OK I call sp_session_play, and music_delivery starts immediately with correct frames.
I don't know why, at the end of the track, the libspotify runtime then starts sending zero-padded frames instead of calling the end_of_track callback.
Under other conditions it works perfectly: when I use an sp_track obtained from an album browse, the track is fully loaded by the time I load it into the current session for playing, and end_of_track is called correctly. In the failing case, I search for the track by its Spotify URI and get the results; here the track metadata is not yet ready at the moment of the play attempt, so I used that kind of "polling" on sp_session_load with PostMessage.
Can anybody help me?
I ran into the same problem, and I think the issue was that I was consuming the data too fast without giving other threads time to do any work, since I was spending all of my time in the music_delivery callback. I found that if I add some throttling and notify the main thread that it can wake up to do some processing, the extra zeros at the end of the track are reduced to one delivery of 22,050 frames (or 500 ms at 44.1 kHz).
Here is an example of what I added to my callback, heavily borrowed from the jukebox.c example provided with the SDK:
/* Count the frames delivered so far; num_frames is the music_delivery argument */
g_throttle += num_frames;

/* Buffer 1 second of data, then notify the main thread to do some processing */
if (g_throttle > format->sample_rate) {
    pthread_mutex_lock(&g_notify_mutex);
    g_notify_do = 1;
    pthread_cond_signal(&g_notify_cond);
    pthread_mutex_unlock(&g_notify_mutex);

    /* Reset the throttle counter */
    g_throttle = 0;

    /* Returning 0 tells libspotify that no frames were consumed,
       so it will redeliver them on the next callback */
    return 0;
}
As I said, there were still 22,050 frames of zeros delivered before the track stopped, but I believe libspotify may do this on purpose to ensure that the duration calculated from the number of frames received (song_duration_ms = total_frames_delivered / sample_rate * 1000) is greater than or equal to the duration reported by sp_track_duration. In my case, the track I was trying to stream was 172,000 ms long; without the extra padding the calculated duration is 171,796 ms, but with the padding it was 172,296 ms.
Hope this helps.
I need to quickly seek through an H.264-encoded video stream in an MP4 container. I am using libav to decode frames, so I stumbled upon the avformat_seek_file() function.
My problem is that, assuming the H.264 stream begins with a keyframe, when I seek to timestamp 0 (regardless of time_base), I should be at the beginning of the stream. But I'm not. I usually end up a few seconds into the video. Also, if I seek to, for example, 10 seconds, I usually get around 12 or so. Is it possible for keyframes to be so "rare"? AVSEEK_FLAG_ANY seems to have no impact on the seek result. Tested on multiple Full HD H.264 MP4 videos.
Code:
unsigned long seekTo = 0;
// Doesn't actually matter for 0, since the rescaled value will also be 0
seekTo = av_rescale_q(seekTo, AVRational{1, AV_TIME_BASE}, pFormatCtx->streams[videoStream]->time_base);
int result = avformat_seek_file(pFormatCtx, videoStream, INT_FAST64_MIN, seekTo, seekTo, AVSEEK_FLAG_ANY);
avcodec_flush_buffers(pCodecCtx);
Try using av_seek_frame instead. Read here for some gotchas about using that and seeking around.
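In case it helps, here is a rough sketch of the usual pattern with av_seek_frame: seek backward to the nearest keyframe at or before the target, flush the decoder, then decode and discard frames until the requested timestamp (variable names follow the question; seek_seconds is a placeholder):
int64_t seek_seconds = 10; /* placeholder: target position in seconds */
int64_t target = av_rescale_q(seek_seconds * AV_TIME_BASE, AV_TIME_BASE_Q,
                              pFormatCtx->streams[videoStream]->time_base);

/* AVSEEK_FLAG_BACKWARD lands on the keyframe at or before 'target' */
if (av_seek_frame(pFormatCtx, videoStream, target, AVSEEK_FLAG_BACKWARD) >= 0) {
    avcodec_flush_buffers(pCodecCtx);
    /* Keep decoding and drop frames whose timestamp is still below 'target';
       the first frame you keep is your accurate seek point. */
}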
My problem is that, assuming the H.264 stream begins with a keyframe, when I seek to timestamp 0 (regardless of time_base), I should be at the beginning of the stream
Note that some files can have their first keyframe at a negative DTS, e.g. you need to seek to timestamp -1 or something like this.
You can set the AVFMT_SEEK_TO_PTS flag in AVInputFormat::flags before opening the AVFormatContext to make seeking use PTS, which will be 0-based.
I'm developing an RTSP server that should emulate a live source, while streaming the data from a file.
What I currently have is mostly based on gst-rtsp-server example test-readme.c, only with the following pipeline:
gst_rtsp_media_factory_set_launch(factory, "( "
"filesrc location=stream.mkv ! matroskademux name=demuxer "
"demuxer. ! queue ! rtph264pay name=pay0 pt=96 "
"demuxer. ! queue ! rtpmp4gpay name=pay1 pt=97 "
")");
This works very well, except for one problem: when the RTSP client (which uses RTSP/TCP interleave transport) is not able to receive data, the whole pipeline locks up until the client is ready again, and then resumes at the original position without any jump.
Since I want to emulate live source which cannot buffer its video indefinitely, the desired behavior in this case is to continue playing the file, so when the client blocks for 5 seconds, it will lose 5 seconds of recording.
I've attempted to achieve this by limiting queue sizes and setting them as leaky (by setting them as queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream, which should provide buffer to ~1 second of video, but no more). This did not work entirely as I hoped: the source and demuxer filled the queue and then completely emptied themselves in 0.1 sec.
I figured I need some way to throttle pipeline throughput before the queue, either by limiting the demuxer to real-time demuxing, or finding/making a gstreamer filter that will let through 1 second of data per 1 second of real time.
Do you have any hints on how to do this?
So it seems that while a leaky queue and a limiter can be set up, they don't help much here, because the GStreamer RTSP implementation has its own queue for outgoing TCP data. What does appear to work is keeping the pipeline unchanged and patching the gst-rtsp-server module to limit its queue length (to 1 MB in this case; recent versions also limit the message count to 100):
--- gst-rtsp-server-1.4.5/gst/rtsp-server/rtsp-client.c 2014-11-06 11:20:28.000000000 +0100
+++ gst-rtsp-server-1.4.5-r1/gst/rtsp-server/rtsp-client.c 2015-04-28 14:25:14.207888281 +0200
@@ -3435,11 +3435,11 @@
gst_rtsp_client_set_send_func (client, do_send_message, priv->watch,
(GDestroyNotify) gst_rtsp_watch_unref);
/* FIXME make this configurable. We don't want to do this yet because it will
* be superceeded by a cache object later */
- gst_rtsp_watch_set_send_backlog (priv->watch, 0, 100);
+ gst_rtsp_watch_set_send_backlog (priv->watch, 1000000, 100);
GST_INFO ("client %p: attaching to context %p", client, context);
res = gst_rtsp_watch_attach (priv->watch, context);
return res;
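For the real-time throttling idea mentioned in the question, one commonly used trick is to insert identity sync=true after the demuxer so buffers are released against the pipeline clock, making the file-backed source pace itself like a live source. A sketch only, untested in this exact setup, and as noted above it does not by itself solve the TCP backlog problem:
gst_rtsp_media_factory_set_launch(factory, "( "
    "filesrc location=stream.mkv ! matroskademux name=demuxer "
    "demuxer. ! identity sync=true ! queue ! rtph264pay name=pay0 pt=96 "
    "demuxer. ! identity sync=true ! queue ! rtpmp4gpay name=pay1 pt=97 "
    ")");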
I want to record a stream which is published with Flash Live Encoder to FMS 3.5, but split the recording into files of a predefined length. For example, if a stream 'webcam' is published, I want to record it in chunks of 10 minutes: 'webcam1.flv', 'webcam2.flv', ...
From what I can tell there's no facility to work with timers. The only solution I could think of was using stream.record() with a time-limit parameter, but that seems like a hack because it triggers NetStream.Record.DiskQuotaExceeded on the stream when the recording should stop, at which point I would start recording another chunk.
Has anyone done something similar?
On the server side, why not just republish and record the stream under a timestamped name? Then run a timer that fires every ten minutes (or whatever), stops the recording of that stream, and creates a new server-side stream playing the client stream.
Something along the lines of:
setInterval("setNewStream", 600000);
function setNewStream() {
var now = new Date();
serverStream.record(false);
var filename = "recording-"+ now.getHours() + "-" + now.getMinutes();
serverStream = Stream.get(filename);
serverStream.play("clientStream");
serverStream.record();
}
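The snippet above assumes serverStream already exists and is recording when the interval first fires. A hedged sketch of that initial setup in main.asc (the "clientStream" name follows the answer and is an assumption; adjust to your published stream name):
// Start the first chunk as soon as the live stream is published
application.onPublish = function(client, stream) {
    if (stream.name == "clientStream") {
        var now = new Date();
        var filename = "recording-" + now.getHours() + "-" + now.getMinutes();
        serverStream = Stream.get(filename); // server-side stream backed by filename.flv
        serverStream.play("clientStream");   // re-publish the incoming live stream
        serverStream.record();               // begin recording the first chunk
        // the setInterval above then rotates the chunks every 10 minutes
    }
};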