Case
Reading from a file continuously and feeding the data to an appsrc element.
Source - appsrc
I have a GStreamer pipeline in the PLAYING state. When I press a button I want the pipeline to flush/clean, meaning the appsrc queue should be cleared. Playback should then start from whatever buffers are added after the flush.
Issue
The APIs I used returned false, so I am not able to flush.
fprintf(stderr, "The flush event start was <%d>",gst_element_send_event(GST_ELEMENT (pipe), gst_event_new_flush_start());
fprintf(stderr, "The flush event stop was <%d>",gst_element_send_event(GST_ELEMENT (pipe), gst_event_new_flush_stop()));
Both of the above returned 0, i.e. false.
What is the reason for this failure?
How can I flush the data in a pipeline with some API? Or is there another API for skipping playback?
Tried
Sending gst_event_new_flush_start() and gst_event_new_flush_stop() to the pipeline, with and without a gap of a few milliseconds between them
gst_event_new_seek (1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH, GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_SET, 0);
Setting the pipeline to NULL and then back to PLAYING
None of these worked.
It was not working for me because I was using GStreamer 0.10. If you are not constrained to the 0.10 version of GStreamer, please use version 1.2.
However, in 0.10 a flushing seek generally works as a substitute for flushing the pipeline.
With 1.2, the two APIs gst_event_new_flush_start() and gst_event_new_flush_stop() work, so you can use the flush directly.
Either way you should be able to flush. Make an internal API that performs a flushing seek and use it as your flush API.
There is a post on the gstreamer-devel mailing list which discusses the behavior of flush in the two versions.
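For example, a minimal sketch of such a helper (my_flush is just an illustrative name; it simply wraps a flushing seek on the pipeline):
static gboolean
my_flush (GstElement *pipeline)
{
  /* A flushing seek propagates flush-start/flush-stop through the pipeline,
   * dropping queued data, and restarts playback from the requested position. */
  return gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, 0,
      GST_SEEK_TYPE_NONE, -1);
}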
Related
I need to log trace events during boot, so I configure an AutoLogger with all the required providers. But when my service/process starts I want to switch to real-time mode so that the file doesn't explode in size.
I'm using TraceEvent and I can't figure out how to make this switch correctly and atomically.
The first thing I tried:
const int timeToWait = 5000;
using (var tes = new TraceEventSession("TEMPSESSIONNAME", #"c:\temp\TEMPSESSIONNAME.etl") { StopOnDispose = false })
{
tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
Thread.Sleep(timeToWait);
}
using (var tes = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
Thread.Sleep(timeToWait);
tes.SetFileName(null);
Thread.Sleep(timeToWait);
Console.WriteLine("Done");
}
Here I wanted to make sure that I could transfer the session to real-time mode. But instead, the file I got contained events from a 15 s period instead of just 10 s.
The same happens if I use new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl", TraceEventSessionOptions.Create) instead.
It seems that the following will cause the file to stop being written to:
using (var tes = new TraceEventSession("TEMPSESSIONNAME"))
{
tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
Thread.Sleep(timeToWait);
}
But here I must re-enable all the providers, and according to the documentation, "if the session already existed it is closed and reopened (thus orphans are cleaned up on next use)". I don't understand the last part about orphans. Obviously some events might occur in the time between closing, opening and subscribing to the events. Does this mean I will lose these events, or will I get them later?
I also found the following in the documentation of the library:
In real time mode, events are buffered and there is at least a second or so delay (typically 3 sec) between the firing of the event and the reception by the session (to allow events to be delivered in efficient clumps of many events)
Does this make the above code all right (well, unless the improbable happens and for some reason my thread is delayed for more than a second between creating the real-time session and starting to process the events)?
I could close the session and create a new different one but then I think I'd miss some events. Or I could open a new session and then close the file-based one but then I might get duplicate events.
I couldn't find online any examples of moving from a file-based trace to a real-time trace.
I managed to contact the author of TraceEvent and this is the answer I got:
Re the exception of the 'auto-closing and restarting' feature, it is really questions about the OS (TraceEvent simply calls the underlying OS API). Just FYI, the deal about orphans is that it is EASY for your process to exit but leave a session going. This MAY be what you want, but often it is not, and so to make the common case 'just work' if you do Create (which is the default), it will close a session if it already existed (since you asked for a new one).
Experimentation of course is the touchstone of 'truth', but frankly, expecting unusual combinations to just work is generally NOT realistic.
My recommendation is to keep it simple. You need to open a new session and close the original one. Yes, you will end up with duplicates, but you CAN filter them out (after all they are IDENTICAL timestamps).
The other possibility is use SetFileName in its intended way (from one file to another). This certainly solves your problem of file size growth, and often is a good way to deal with other scenarios (after all you can start up you processing and start deleting files even as new files are being generated).
I'm trying to use delay and amb to execute a sequence of the same task separated by time.
All I want is for a download attempt to execute some time in the future only if the same task failed in the past. Here's how I have things set up, but unlike what I'd expect, all three downloads seem to execute without delay.
Observable.amb([
  Observable.catch(redditPageStream, Observable.empty()).delay(0 * 1000),
  Observable.catch(redditPageStream, Observable.empty()).delay(30 * 1000),
  Observable.catch(redditPageStream, Observable.empty()).delay(90 * 1000),
  # Observable.throw(new Error('Failed to retrieve reddit page content')).delay(10000)
  # Observable.create(
  #   (observer) ->
  #     throw new Error('Failed to retrieve reddit page content')
  # )
]).defaultIfEmpty(Observable.throw(new Error('Failed to retrieve reddit page content')))
The full code can be found here (src).
I was hoping that the first successful observable would cancel out the ones still in delay.
Thanks for any help.
delay doesn't actually stop the execution of whatever you are doing; it just delays when the events are propagated. If you want to delay execution you would need to do something like:
redditPageStream.delaySubscription(1000)
Since your source produces immediately, the above will delay the actual subscription to the underlying stream, effectively delaying when it begins producing.
I would suggest, though, that you use one of the retry operators to handle your retry logic rather than rolling your own through the amb operator.
redditPageStream.delaySubscription(1000).retry(3);
This will give you a constant retry delay; however, if you want to implement a linear backoff approach, you can use the retryWhen() operator instead, which will let you apply whatever logic you want to the backoff.
redditPageStream.retryWhen(errors => {
  return errors
    //Only take 3 errors
    .take(3)
    //Use timer to implement a linear back off and flatten it
    .flatMap((e, i) => Rx.Observable.timer(i * 30 * 1000));
});
Essentially, retryWhen will create an Observable of errors; each event that makes it through is treated as a retry attempt. If you error or complete that stream, then it will stop retrying.
I'm trying to develop my own custom plugin with N request sink pads and M sometimes src pads. The sink pads are added to a GstCollectPads object. I've managed to get the plugin up and running: it receives buffers and processes them the right way within the gst_my_plugin_collected( GstCollectPads *pads ) callback, then I push the buffers to the peer of the selected src pad.
These are the last lines of my *_collected(...) implementation:
GSList *it = pads->data;
for (; it != NULL; it = it->next) {
  cdata = (GstCollectData *) it->data;
  outbuf = gst_collect_pads_peek (pads, cdata);
  /* ... */
  gst_pad_push (elem->srcpads[i++], outbuf);
}
return GST_FLOW_OK;
Sample pipeline:
gst-ndl-launch filesrc location=in.log ! myplugin ! filesink location=out.log
runs in an infinite loop, processing the same data from the in.log file over and over and writing it to out.log, just as if it never notices End-Of-File.
My guess is that I somehow need to tell my plugin that processing should stop, maybe by sending an EOS message in some way, but I have no idea how to do it. Thus my question is:
What should I do within my plugin in order to stop processing when the end of the file is reached?
// UPDATE:
It appears that my pipeline processes only the first buffer, in an infinite loop.
So my previous idea about sending an EOS message was invalid; instead I must somehow remove the processed buffer in order to receive the next one. I still don't know how to do that, so any help will be appreciated.
What should I do after processing a buffer from GstCollectData so that it won't process the same buffer again and again?
OK, maybe a simpler answer:
Use gst_collect_pads_pop() instead of gst_collect_pads_peek().
Make sure to check for NULL.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-libs/html/GstCollectPads.html#gst-collect-pads-pop
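As a sketch, the loop from the question would then look roughly like this (reusing the variable names from the snippet above, which are assumed to be declared earlier in the function):
GSList *it = pads->data;
for (; it != NULL; it = it->next) {
  cdata = (GstCollectData *) it->data;
  /* pop removes the buffer from the collect pad, so the same buffer is
   * not handed back to us on the next _collected() call */
  outbuf = gst_collect_pads_pop (pads, cdata);
  if (outbuf == NULL)
    continue;  /* no data queued on this pad (e.g. it is at EOS) */
  gst_pad_push (elem->srcpads[i++], outbuf);
}
return GST_FLOW_OK;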
Turns out it was just what I suspected at the beginning: not handling the EOS event.
In order to fix my issue I had to implement a gst_collect_pads event function;
it's strange, though, that there is not a single word about this on the GstCollectPads reference page.
(problem solved)
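For reference, a rough sketch of registering such an event function (assuming the GStreamer 1.x GstCollectPads API; gst_my_plugin_sink_event and the collect/myplugin names here are placeholders):
static gboolean
gst_my_plugin_sink_event (GstCollectPads *pads, GstCollectData *cdata,
    GstEvent *event, gpointer user_data)
{
  /* Let the default handler track EOS on the collected pads; once every
   * sink pad is at EOS, _collected() can push EOS downstream and stop. */
  return gst_collect_pads_event_default (pads, cdata, event, FALSE);
}

/* registered in the element's init function, next to gst_collect_pads_set_function(): */
gst_collect_pads_set_event_function (myplugin->collect,
    gst_my_plugin_sink_event, myplugin);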
I'm using the libspotify SDK, the C library for Win32.
I think I have the right setup: every session callback is registered. I don't understand why I can't receive the call to end_of_track, while music_delivery keeps being called with zero-padded deliveries 22050 frames long.
I attempt to start playing by first loading the track with sp_session_load; as long as it returns SP_ERROR_IS_LOADING I post a message to my message queue (the synchronization method I've used, the PostMessage Win32 API) in order to call sp_session_load again. As soon as it returns SP_ERROR_OK I call sp_session_play, and music_delivery starts immediately, with correct frames.
I don't understand why, at the end of the track, the libspotify runtime starts sending zero-padded frames instead of calling the end_of_track callback.
In other conditions it works perfectly: when I use an sp_track obtained from an album browse, the track is fully loaded at the moment I load it into the current session for playing; with such a track, end_of_track is called correctly. In the case with the padding problem, I search for the track using its Spotify URI and take the result; in this case the track metadata are not yet ready at the moment of the play attempt, so I used that kind of "polling" on sp_session_load with PostMessage.
Can anybody help me?
I ran into the same problem, and I think the issue was that I was consuming the data too fast without giving other threads time to do any work, since I was spending all of my time in the music_delivery callback. I found that if I add some throttling and notify the main thread that it can wake up to do some processing, the extra zeros at the end of the track are reduced to one delivery of 22,050 frames (or 500 ms at 44.1 kHz).
Here is an example of what I added to my callback, heavily borrowed from the jukebox.c example provided with the SDK:
/* Buffer 1 second of data, then notify the main thread to do some processing */
if (g_throttle > format->sample_rate) {
    pthread_mutex_lock(&g_notify_mutex);
    g_notify_do = 1;
    pthread_cond_signal(&g_notify_cond);
    pthread_mutex_unlock(&g_notify_mutex);
    // Reset the throttle counter
    g_throttle = 0;
    return 0;
}
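(The snippet assumes that g_throttle is advanced by the number of frames actually consumed elsewhere in the callback, for example:)
/* later in music_delivery(), on the path that actually consumes the audio */
g_throttle += num_frames;
return num_frames;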
As I said, there were still 22,050 frames of zeros delivered before the track stopped, but I believe libspotify may do this on purpose to ensure that the duration calculated from the number of frames received (song_duration_ms = total_frames_delivered / sample_rate * 1000) is greater than or equal to the duration reported by sp_track_duration. In my case, the track I was trying to stream was 172,000 ms long; without the extra padding the calculated duration is 171,796 ms, but with the padding it was 172,296 ms.
Hope this helps.
I'm developing an RTSP server that should emulate a live source while streaming the data from a file.
What I currently have is mostly based on the gst-rtsp-server example test-readme.c, only with the following pipeline:
gst_rtsp_media_factory_set_launch(factory, "( "
    "filesrc location=stream.mkv ! matroskademux name=demuxer "
    "demuxer. ! queue ! rtph264pay name=pay0 pt=96 "
    "demuxer. ! queue ! rtpmp4gpay name=pay1 pt=97 "
    ")");
This works very well, except for one problem: when the RTSP client (which uses RTSP/TCP interleaved transport) is not able to receive data, the whole pipeline locks up until the client is ready again, and then resumes from the original position without any jump.
Since I want to emulate a live source, which cannot buffer its video indefinitely, the desired behavior in this case is to continue playing the file, so if the client blocks for 5 seconds it loses 5 seconds of recording.
I've attempted to achieve this by limiting the queue sizes and making the queues leaky (setting them to queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream, which should buffer about 1 second of video, but no more). This did not work as I hoped: the source and demuxer filled the queues and then completely emptied them in 0.1 s.
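Concretely, the attempt looked roughly like this, i.e. the launch string from above with those queue settings added to both branches:
gst_rtsp_media_factory_set_launch(factory, "( "
    "filesrc location=stream.mkv ! matroskademux name=demuxer "
    "demuxer. ! queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream ! rtph264pay name=pay0 pt=96 "
    "demuxer. ! queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream ! rtpmp4gpay name=pay1 pt=97 "
    ")");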
I figured I need some way to throttle pipeline throughput before the queues, either by limiting the demuxer to real-time demuxing, or by finding/making a GStreamer filter that lets through 1 second of data per 1 second of real time.
Do you have any hints on how to do this?
So it seems that while a leaky queue and a limiter can be done, they don't help much in this regard, as the GStreamer RTSP implementation has its own queue for outgoing TCP data. What appears to work is keeping the pipeline unchanged and patching the gst-rtsp-server module to limit its queue length (to 1 MB in this case; recent versions also limit the message count to 100):
--- gst-rtsp-server-1.4.5/gst/rtsp-server/rtsp-client.c 2014-11-06 11:20:28.000000000 +0100
+++ gst-rtsp-server-1.4.5-r1/gst/rtsp-server/rtsp-client.c 2015-04-28 14:25:14.207888281 +0200
@@ -3435,11 +3435,11 @@
   gst_rtsp_client_set_send_func (client, do_send_message, priv->watch,
       (GDestroyNotify) gst_rtsp_watch_unref);
 
   /* FIXME make this configurable. We don't want to do this yet because it will
    * be superceeded by a cache object later */
-  gst_rtsp_watch_set_send_backlog (priv->watch, 0, 100);
+  gst_rtsp_watch_set_send_backlog (priv->watch, 1000000, 100);
 
   GST_INFO ("client %p: attaching to context %p", client, context);
   res = gst_rtsp_watch_attach (priv->watch, context);
 
   return res;