I'm using the libspotify SDK, the C library for Win32.
I think my setup is correct: every session callback is registered. I don't understand why I never receive the call to end_of_track, while music_delivery keeps being called with zero-padded frames, 22050 samples long.
I start playback by first loading the track with sp_session_load; as long as it returns SP_ERROR_IS_LOADING I post a message to my message queue (the synchronization method I use, via the Win32 PostMessage API) so that I call sp_session_load again. As soon as it returns SP_ERROR_OK I call sp_session_play, and music_delivery starts immediately with correct frames.
I don't know why, at the end of the track, the libspotify runtime then starts sending zero-padded frames instead of calling the end_of_track callback.
In other conditions it works perfectly: when I use an sp_track obtained from an album browse, the track is fully loaded by the moment I load it into the current session for playing, and end_of_track is called correctly. In the failing case I search for the track by its Spotify URI and get the results; there the track metadata is not yet ready at the time of the play attempt, so I use that kind of "polling" on sp_session_load with PostMessage.
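In sketch form, my retry logic looks roughly like the following (simplified; WM_APP_TRY_LOAD, g_session and g_track stand in for my actual names, and the load/play calls are written out as libspotify's sp_session_player_load / sp_session_player_play):

#include <windows.h>
#include <libspotify/api.h>

#define WM_APP_TRY_LOAD (WM_APP + 1)

static sp_session *g_session; /* created elsewhere */
static sp_track *g_track;     /* obtained from the URI search */

static void try_load_and_play(HWND hwnd)
{
    sp_error err = sp_session_player_load(g_session, g_track);
    if (err == SP_ERROR_IS_LOADING) {
        /* Metadata not ready yet: ask the message loop to call us again. */
        PostMessage(hwnd, WM_APP_TRY_LOAD, 0, 0);
        return;
    }
    if (err == SP_ERROR_OK)
        sp_session_player_play(g_session, 1);
}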
Can anybody help me?
I ran into the same problem, and I think the issue was that I was consuming the data too fast without giving other threads time to do any work, since I was spending all of my time in the music_delivery callback. I found that if I add some throttling and notify the main thread that it can wake up to do some processing, the extra zeros at the end of the track are reduced to one delivery of 22,050 frames (or 500 ms at 44.1 kHz).
Here is an example of what I added to my callback, heavily borrowed from the jukebox.c example provided with the SDK:
/* Buffer 1 second of data, then notify the main thread to do some processing.
 * (g_throttle accumulates the number of frames delivered so far, e.g. by adding
 * num_frames to it at the top of music_delivery.) */
if (g_throttle > format->sample_rate) {
    pthread_mutex_lock(&g_notify_mutex);
    g_notify_do = 1;
    pthread_cond_signal(&g_notify_cond);
    pthread_mutex_unlock(&g_notify_mutex);

    // Reset the throttle counter
    g_throttle = 0;
    return 0;
}
As I said, there were still 22,050 frames of zeros delivered before the track stopped, but I believe libspotify may do this on purpose to ensure that the duration calculated from the number of frames received (song_duration_ms = total_frames_delivered / sample_rate * 1000) is greater than or equal to the duration reported by sp_track_duration. In my case, the track I was trying to stream was 172,000 ms in duration; without the extra padding the calculated duration is 171,796 ms, but with the padding it was 172,296 ms.
Hope this helps.
I'm using a rxdart ZipStream within my app to combine two streams of incoming bluetooth data. Those streams are used along with "bufferCount" to collect 500 elements each before emitting. Everything works fine so far, but if the stream subscription gets cancelled at some point, there might be a number of elements in those buffers that are omitted after that. I could wait for a "buffer cycle" to complete before cancelling the stream subscription, but as this might take some time depending on the configured sample rate, I wonder if there is a solution to get those buffers as they are even if the number of elements might be less than 500.
Here is some simplified code for explanation:
subscription = ZipStream.zip2(
  streamA.bufferCount(500),
  streamB.bufferCount(500),
  (streamABuffer, streamBBuffer) {
    return ...;
  },
).listen((data) {
  ...
});
Thanks in advance!
So for anyone wondering: as bufferCount is implemented with BufferCountStreamTransformer, which extends BackpressureStreamTransformer, there is a dispatchOnClose property that defaults to true. That means that if the underlying stream whose emitted elements are being buffered is closed, the remaining elements in the buffer are emitted one final time. This also applies to the example above. My mistake was closing the stream and cancelling the stream subscription immediately. By awaiting the stream's close and only cancelling the stream subscription afterwards, everything works as expected.
When trying to acquire some signals in the frequency domain, I've encountered the issue that snd_pcm_readi() takes a wildly variable amount of time. This causes problems in the logic section of my code, which is time-dependent.
Most of the time, snd_pcm_readi() returns after approximately 0.00003 to 0.00006 seconds. However, every fourth or fifth call to snd_pcm_readi() takes approximately 0.028 seconds. This is a huge difference, and it causes the logic part of my code to fail.
How can I get a consistent time for each call to snd_pcm_readi()?
I've tried to experiment with the period size, but it is unclear to me what exactly it does, even after re-reading the documentation multiple times. I don't use an interrupt-driven design; I simply call snd_pcm_readi() and it blocks until it returns -- with data.
I can only assume that the reason it blocks for a variable amount of time is that snd_pcm_readi() pulls data from the hardware buffer, which sometimes already has data readily available for transfer to the "application buffer" (which I'm maintaining), while at other times there is additional work to do in kernel space or on the hardware side, so the call takes longer to return.
What purpose does the "period size" serve when I'm not using an interrupt-driven design? Can my problem be fixed at all by manipulating the period size, or should I do something else?
I want each call to snd_pcm_readi() to take approximately the same amount of time. I'm not asking for a real-time-compliant API, which I don't imagine ALSA even attempts to be; however, a difference in call time on the order of 500 times (which is what I'm seeing!) is a real problem.
What can be done about it, and what should I do about it?
I would present a minimal reproducible example, but this isn't easy in my case.
Typically, when reading and writing audio, the period size specifies how much data ALSA has reserved in DMA silicon. Normally the period size determines your latency: for example, while you are filling one buffer for writing through DMA to the I2S silicon, another DMA buffer is already being written out.
If your period size is too small, the CPU doesn't have time to write the audio out in the scheduled execution slot it is given. Typically people aim for a minimum of 500 us or 1 ms of latency. If you are doing heavy computation, then you may want to choose 5 ms or 10 ms of latency. You may choose even more latency if you are on a less powerful embedded system.
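As a rough sketch only (assuming a 48 kHz, stereo, S16_LE capture stream on the "default" device; the function name and the 10 ms period are purely illustrative), setting the period size explicitly and then reading one period per call looks something like this:

#include <alsa/asoundlib.h>

/* Returns the negotiated period size in frames, or -1 on failure. */
int open_capture(snd_pcm_t **pcm_out)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;
    snd_pcm_uframes_t period = 480;         /* 10 ms at 48 kHz */
    snd_pcm_uframes_t buffer = period * 4;  /* a few periods of headroom */
    int dir = 0;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return -1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, &dir);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, &dir);
    snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer);
    if (snd_pcm_hw_params(pcm, hw) < 0)
        return -1;

    /* In the capture loop, snd_pcm_readi(pcm, buf, period) then blocks for
     * roughly one period as long as the application keeps up. */
    *pcm_out = pcm;
    return (int)period;
}

With the buffer a few periods deep, a blocking read of exactly one period should return in approximately one period length rather than at erratic intervals.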
If you want to push the limits of the system, then you can request that the priority of the audio processing thread be increased. By increasing the priority of your thread, you ask the scheduler to process your audio thread before all other threads with lower priority.
One method for increasing the priority, taken from the gtkIOStream ALSA C++ OO classes (the changeThreadPriority method), is the following:
/** Set the current thread's priority
\param priority <0 implies maximum priority, otherwise must be between sched_get_priority_max and sched_get_priority_min
\return 0 on success, error code otherwise
*/
static int changeThreadPriority(int priority){
    int ret;
    pthread_t thisThread = pthread_self(); // get the current thread
    struct sched_param origParams, params;
    int origPolicy, policy = SCHED_FIFO, newPolicy = 0;

    if ((ret = pthread_getschedparam(thisThread, &origPolicy, &origParams)) != 0)
        return ALSA::ALSADebug().evaluateError(ret, "when trying to pthread_getschedparam\n");
    printf("ALSA::Stream::changeThreadPriority : Current thread policy %d and priority %d\n", origPolicy, origParams.sched_priority);

    if (priority < 0) // maximum priority
        params.sched_priority = sched_get_priority_max(policy);
    else
        params.sched_priority = priority;

    if (params.sched_priority > sched_get_priority_max(policy))
        return ALSA::ALSADebug().evaluateError(ALSA_SCHED_PRIORITY_ERROR, "requested priority is too high\n");
    if (params.sched_priority < sched_get_priority_min(policy))
        return ALSA::ALSADebug().evaluateError(ALSA_SCHED_PRIORITY_ERROR, "requested priority is too low\n");

    if ((ret = pthread_setschedparam(thisThread, policy, &params)) != 0)
        return ALSA::ALSADebug().evaluateError(ret, "when trying to pthread_setschedparam - are you su or do you have permission to set this priority?\n");
    if ((ret = pthread_getschedparam(thisThread, &newPolicy, &params)) != 0)
        return ALSA::ALSADebug().evaluateError(ret, "when trying to pthread_getschedparam\n");
    if (policy != newPolicy)
        return ALSA::ALSADebug().evaluateError(ALSA_SCHED_POLICY_ERROR, "requested scheduler policy is not correctly set\n");

    printf("ALSA::Stream::changeThreadPriority : New thread priority changed to %d\n", params.sched_priority);
    return 0;
}
I need to detect when the currently playing audio/video is paused. I cannot find anything for GStreamer 1.0. My app is a bit complex, but here is the condensed code:
/* This function is called when the pipeline changes states. We use it to
 * keep track of the current state. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if(GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
    {
        g_print("State set to %s\n", gst_element_state_get_name(new_state));
    }
}
gst_init(&wxTheApp->argc, &argv);

m_playbin = gst_element_factory_make("playbin", "playbin");
if(!m_playbin)
{
    g_printerr("Not all elements could be created.\n");
    exit(1);
}

CustomData* data = new CustomData(xid, m_playbin);
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_set_sync_handler(bus, (GstBusSyncHandler) create_window, data, NULL); // here I do the video overlay stuff
g_signal_connect(G_OBJECT(bus), "message::state-changed", (GCallback)state_changed_cb, &data);
What am I doing wrong? I cannot find a working example of connecting to such events in GStreamer 1.0, and 0.x seems a bit different from 1.0, so the many examples for it don't help.
UPDATE
I have found a way to get the messages. I run a wxWidgets timer with a 500 ms interval, and each time the timer fires I call:
GstMessage* msg = gst_bus_pop(m_bus);
if(msg != NULL)
{
    g_print("New Message -- %s\n", gst_message_type_get_name(msg->type));
}
Now I get a lot of 'state-changed' messages. I still want to know whether a given message corresponds to Pause, Stop, Play, or End of Media (i.e. a way to differentiate which message this is) so that I can notify the UI.
So while I am getting messages now, the basic problem, getting the specific notifications, remains unsolved.
You have to call gst_bus_add_signal_watch() (just like in 0.10) to enable emission of the signals. Without that you can only use the other ways of getting notified about GstMessages on that bus.
Also, just to be sure: you need a running GLib main loop on the default main context for this to work. Otherwise you need to do things a bit differently.
For the updated question:
Check the documentation: gst_message_parse_state_changed() can be used to parse the old, new, and pending state from the message. This is still the same as in 0.10. From the application's point of view, conceptually not much has really changed between 0.10 and 1.0.
Also, you shouldn't do this timeout-based polling, as it will block your wxWidgets main loop. The easiest solution would be to use a sync bus handler (which you already have) and dispatch all messages from there to some callback on the wxWidgets main loop.
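A minimal sketch of the signal-watch approach (assuming a running GLib main loop on the default main context, as noted above; playbin stands in for your m_playbin):

#include <gst/gst.h>

static void on_state_changed(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    GstElement *playbin = GST_ELEMENT(user_data);
    GstState old_state, new_state, pending_state;

    if (GST_MESSAGE_SRC(msg) != GST_OBJECT(playbin))
        return; /* ignore state changes of child elements */

    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (new_state == GST_STATE_PAUSED)
        g_print("Paused\n");
    else if (new_state == GST_STATE_PLAYING)
        g_print("Playing\n");
}

static void on_eos(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    g_print("End of media\n");
}

/* setup, once the pipeline exists: */
GstBus *bus = gst_element_get_bus(playbin);
gst_bus_add_signal_watch(bus); /* enables emission of the "message::*" signals */
g_signal_connect(bus, "message::state-changed", G_CALLBACK(on_state_changed), playbin);
g_signal_connect(bus, "message::eos", G_CALLBACK(on_eos), NULL);
gst_object_unref(bus);

Pause and Play then show up as state changes to GST_STATE_PAUSED and GST_STATE_PLAYING on the pipeline, and End of Media arrives as the separate message::eos signal.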
I have a GPUImageColorDodgeBlend filter with two inputs connected:
A GPUImageVideoCamera which is getting frames from the iPhone video camera.
A GPUImageMovie which is an (MP4) video file that I want to have laid over the live camera feed.
The GPUImageColorDodgeBlend is then connected to two outputs:
A GPUImageView to provide a live preview of the blend in action.
A GPUImageMovieWriter to write the movie to storage once a record button is pressed.
Now, before the video starts recording, everything works OK 100% of the time. The GPUImageMovie is blended over the live camera video fine, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time, the GPUImageMovieWriter works perfectly, there are no errors or warnings and the output video is written correctly.
However, about 10-20% of the time (and from what I can see, this is fairly random), things seem to go wrong during the recording process (although the on-screen preview continues to work fine).
Specifically, I start getting hundreds and hundreds of Program appending pixel buffer at time: errors.
This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageMovieWriter.
This issue is triggered by problems with the frameTime values that are reported to this method.
From what I can see, the problem is caused by the writer sometimes receiving frames numbered by the video camera (which tend to have extremely high time values like 64616612394291 with a timescale of 1000000000). But, then sometimes the writer gets frames numbered by the GPUImageMovie which are numbered much lower (like 200200 with a timescale of 30000).
It seems that GPUImageMovieWriter is happy as long as the frame times are increasing, but once the frame time decreases, it stops writing and just emits Program appending pixel buffer at time: errors.
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from -- why does it seem so arbitrary whether the frameTime is numbered according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two -- shouldn't the frame numbering scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lesser-numbered frames, which works -- most of the time. Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie this tends to become far less effective, as all the frames end up getting skipped after a certain point. (A timescale-aware version of that check is sketched after this list.)
...or perhaps this is a bug, in which case I'll need to report it on GitHub -- but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
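For reference, a minimal sketch of that skip check using CMTimeCompare rather than comparing .value directly (raw .value comparison only makes sense when both times share a timescale, which the two sources above don't); shouldDropFrame and previousFrameTime are placeholder names of mine, not GPUImage API:

#include <CoreMedia/CMTime.h>
#include <stdbool.h>

static CMTime previousFrameTime; // a zero-initialized CMTime is "invalid", so the first frame always passes

static bool shouldDropFrame(CMTime frameTime)
{
    // CMTimeCompare converts between timescales before comparing
    if (CMTIME_IS_VALID(previousFrameTime) && CMTimeCompare(frameTime, previousFrameTime) < 0) {
        return true; // frame is earlier than the last one forwarded
    }
    previousFrameTime = frameTime;
    return false;
}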
I've found a potential solution to this issue, which I've implemented as a hack for now, but could conceivably be extended to a proper solution.
I've traced the source of the timing back to GPUImageTwoInputFilter which essentially multiplexes the two input sources into a single output of frames.
In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureInput == 0) and the second, and then forwards on these frames to its targets.
The problem (the way I see it) is that the method simply uses the frameTime of whichever frame comes in second (excluding the case of still images, for which CMTIME_IS_INDEFINITE(frameTime) == YES, which I'm not considering for now because I don't work with still images), and that may not always come from the same source (for whatever reason).
The relevant code which checks for both frames and sends them on for processing is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    [super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}
What I've done is adjust the above code to [super newFrameReadyAtTime:firstFrameTime atIndex:0] so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far, it's all working fine like this. (I'd still be interested for someone to let me know why this is written this way, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as-is doesn't guarantee.)
Caveat: this will almost certainly break entirely if you work only with still images, in which case you will have CMTIME_IS_INDEFINITE(frameTime) == YES for your first input's frameTime.
I have been using mach_absolute_time() for all my timing functions so far, e.g. calculating how long passes between frames.
I now want to get the exact time a touch input event happens, using event.timestamp in the touch callbacks.
The problem is that these two seem to use completely different timers. Sure, you can get them both in seconds, but their origins are different and seemingly random...
Is there any way to sync the two different timers?
Or is there any way to get access to the same timer that the touch input uses to generate that timestamp property? Otherwise it's next to useless.
Had some trouble with this myself. There isn't a lot of good documentation, so I went with experimentation. Here's what I was able to determine:
mach_absolute_time depends on the processor of the device. It returns ticks since the device was last rebooted (otherwise known as uptime). In order to get it into a human-readable form, you have to scale it by the result of mach_timebase_info (a ratio), which converts the ticks to billionths of a second (i.e. nanoseconds). To make this more usable I use a function like the one below:
#include <mach/mach_time.h>

int getUptimeInMilliseconds()
{
    static const int64_t kOneMillion = 1000 * 1000;
    static mach_timebase_info_data_t s_timebase_info;

    if (s_timebase_info.denom == 0) {
        (void) mach_timebase_info(&s_timebase_info);
    }

    // mach_absolute_time() returns ticks; scaling by numer/denom gives
    // nanoseconds, so divide by one million to get milliseconds
    return (int)((mach_absolute_time() * s_timebase_info.numer) / (kOneMillion * s_timebase_info.denom));
}
Get the initial difference between the two, i.e. what mach_absolute_time() returns when your application starts and event.timestamp at that same moment.
Store the difference: it remains the same throughout the time your application runs, so you can use this time difference to convert one to the other.
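A minimal sketch of that idea (the function and variable names are placeholders of mine; the touch timestamp is passed in as a plain double, which is all an NSTimeInterval is):

#include <mach/mach_time.h>

static double machTimeInSeconds(void)
{
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0)
        (void)mach_timebase_info(&timebase);
    /* ticks -> nanoseconds -> seconds */
    return (double)mach_absolute_time() * timebase.numer / timebase.denom / 1e9;
}

static double g_offset; /* event clock minus mach clock, in seconds */

void calibrateOnce(double firstTouchTimestamp)
{
    /* call this once, e.g. from the first touch callback, passing event.timestamp */
    g_offset = firstTouchTimestamp - machTimeInSeconds();
}

double eventTimestampToMachSeconds(double eventTimestamp)
{
    /* converts an event.timestamp into the same time base as machTimeInSeconds() */
    return eventTimestamp - g_offset;
}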
How about CFAbsoluteTimeGetCurrent?