I'm looking for a best-practice approach to scheduling playlists with Liquidsoap. My current approach introduces significant delays and therefore doesn't meet the requirement of seamless playback.
Requirements:
When a new playlist is due to be scheduled, all previously scheduled playlist items should be removed.
Avoid any delays when clearing previously queued playlist-items.
My current implementation:
1. Schedule a bunch of files (representing a playlist) by pushing them to an equeue.
2. This queue starts playing.
3. When the next timeslot is due, a new playlist cannot simply be queued, because it would only start after all tracks queued by the previous playlist have finished playing. Because of this, I first remove all tracks of the previous playlist using a Liquidsoap server script. This process is time-consuming and delays the timely execution of step 4.
4. Schedule the new files by pushing them to an equeue.
How can I do this more elegantly?
Is it possible to clear an equeue w/o creating delays?
If there are "more correct" Liquidsoap features to achieve this, like a playlist (can I control when it is actually played?) or request.dynamic (which is deprecated) instead of an equeue, please let me know.
Update: I'm currently using two queues, A and B. One minute before queue A should start playing, I populate it with tracks (a playlist). When it should actually be playing, I turn up the volume. Then, one minute before queue B should start playing, I populate that one. When it's actually time for it to play, I transition the volume from queue A to B. In theory this solution would be fine, but the issue is that I don't know of a way to make the queues pause until I turn up the volume. The tracks seem to start playing the moment the queue/playlist is filled.
It's hard to tell without reading the complete script, but I'm sure it's not possible to pause a queue. At best you can remove an item via the server interface: if it's the currently playing item and it's alone in the queue, then it will stop that queue. You might be interested in the Beets examples, which discuss how an external program can populate sources.
To switch from playlist A to B, the Liquidsoap way is to populate B exactly when it's time and let an operator like fallback make the transition. See also fallback.skip.
In my app, I store data on the Firebase database (Firestore AND Storage) in the form of "Files" (what the user sees). When the user goes to their "Files" tab and selects a certain file (example: "Smith vs Wesson"), the app downloads data from the server (from BOTH Firestore and Storage) related to that file. Here's my problem: the app moves forward before the data has even finished being returned and processed (sorted/stored into variables). I don't want the app to progress and take the user to the next screen until this is totally complete. The next three screens show the data retrieved from the server, so if it's moving forward before the data has even been retrieved and sorted... well... you see the problem with that.
I tried using something like DispatchQueue.main.asyncAfter to add a three-second delay, but the problem with this is that if the user's internet connection is poor, it may take longer than three seconds to retrieve the data. Likewise, if their internet connection is booming, it may take only a second to retrieve the data, but they're still stuck waiting on an unnecessary three-second delay. I only want the delay to last as long as it takes for the retrieval/sorting/storing function to complete its tasks... no matter how short or long.
I'm still learning and am mostly self-taught, so forgive my ignorance. From what I understand from the reading I've been doing, tasks run on "threads." The main thread is what the user sees, while other threads can perform tasks in the background to keep the user from experiencing long wait times (such as data retrieval from a server). I know you typically don't want to do heavy work on the main thread, but in this case, where I don't want the user to be able to progress, I need to find a way to pause the main thread until the other thread has completed the data retrieval and sorting/storing process.
I stumbled across something called "CountDownLatch." I read about it and kind of understand the concept of it... but not the code at all, to be honest. I don't know if CountDownLatch is the correct method to use here or not, but if it is, could someone please show me how I could use either CountDownLatch, or some other delay, to pause the progression of the app until the data is retrieved, sorted, and stored into the variables?
My data retrieval/sorting/storing function is called "getAndAppendClaimData". I handle all of those steps in this function, and it works perfectly. Like I said, I just need to provide some delay until it's finished, so that the code below this function isn't executed, which would segue to the next screen.
So something kinda like this:
while getAndAppendClaimData is still processing {
showLoadingAnimation
}
once getAndAppendClaimData has finished **ALL** of its tasks {
performSegue to next screen
}
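Or, written as (hypothetical) Swift, I imagine the call site looking something like this. I'm only assuming getAndAppendClaimData could take some kind of "finished" closure and that the segue identifier is "toNextScreen"; I don't actually have either of those yet:

showLoadingAnimation()

// Assumption: getAndAppendClaimData is given a completion closure that fires
// only after ALL retrieving, sorting, and storing is done.
getAndAppendClaimData {
    DispatchQueue.main.async {
        hideLoadingAnimation()
        performSegue(withIdentifier: "toNextScreen", sender: nil) // hypothetical identifier
    }
}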
NOTE: I DON'T use listeners in my app because I don't need to update the user's screen in realtime... like with a messaging app or something. I just use the .getDocuments and documents.forEach functionalities for my data retrieval.
Please explain your answers or provide links to explanations. You remember how it was when you were still learning...
Also, before some of you call this post a duplicate: the other threads are outdated, and most of them deal with apps that have listeners for realtime updates – which is different from my circumstance. Another thing: I'm doing a lot of research and learning... so please don't drop the whole "go do your research" bomb. Sometimes you need help tailoring things to your specific situation.
Thanks, I really appreciate the help!
Okay, so after more research, I found that one way to keep your app from progressing while server data is being downloaded and sorted is to use DispatchGroup.
First of all, create a DispatchGroup variable:
let dispatchGroup = DispatchGroup()
Then you can "enter" the group at the beginning of the call/function and "leave" the group at the end, once everything has finished processing (such as in the completion handler of a Firebase call). If you're using a loop to sort your data, then make sure to enter dispatchGroup
dispatchGroup.enter()
every time you enter the loop and leave dispatchGroup
dispatchGroup.leave()
at the end of each loop iteration. Once you're finished entering and leaving dispatchGroup for good, then call:
dispatchGroup.notify(queue: .main) {
// Here write whatever you want to do after it's finished retrieving and processing the data...
// Such as performing a segue
}
You would call this .notify outside of your loop, of course.
In my situation, I had two loops: one to gather/store the server data and one to sort it. I didn't want the sorting to start until the gathering/storing had finished, so I executed the second loop inside dispatchGroup.notify, then performed my segue after the second loop finished.
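Putting that together, a minimal sketch might look like this. The collection name, the claims array, and the segue identifier are placeholders I've made up; I'm only assuming a Firestore getDocuments call like the one described in the question:

import FirebaseFirestore

let dispatchGroup = DispatchGroup()
let db = Firestore.firestore()
var claims: [String] = []   // placeholder storage for the gathered data

func getAndAppendClaimData() {
    dispatchGroup.enter()   // enter for the Firebase call itself
    db.collection("claims").getDocuments { snapshot, error in
        snapshot?.documents.forEach { document in
            dispatchGroup.enter()                   // enter at the start of each iteration
            claims.append(document.documentID)      // gather/store (placeholder work)
            dispatchGroup.leave()                   // leave at the end of each iteration
        }
        dispatchGroup.leave()   // leave once the completion handler has finished
    }

    dispatchGroup.notify(queue: .main) {
        // Everything has been retrieved and stored: sort here if needed,
        // then perform the segue, e.g.
        // performSegue(withIdentifier: "toNextScreen", sender: nil)
    }
}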
Watch these three tutorials; they helped me out big time!
DispatchQueues
DispatchGroups
Semaphore vs DispatchGroup
For watchOS 3, Apple suggests updating the complication with WKRefreshBackgroundTask instead of using getNextRequestedUpdateDate.
How can I determine the time between two updates using the new approach?
I could hack my data request (from a URL) into getCurrentTimelineEntry and update the complication there, but I don't think that's really what Apple recommends.
A short code example would be a big help.
I generally covered this in a different answer, but I'll address your specific question here.
You're right that you shouldn't hack the complication controller to do any (asynchronous) fetching. The data source should only be responsible for returning existing data on hand as requested by the complication server. This was true for watchOS 2, and is still true in watchOS 3.
For watchOS 3, each background refresh can schedule the next one.
Overview of the process:
In your particular case, you can wait until your WKURLSessionRefreshBackgroundTask finishes its download. At that point, schedule the next background refresh before completing your existing background task.
At that future time, your extension will be woken up to start the entire background process again:
Request new data from your web service
Handle the reply and update your data store
Tell the complication to update itself (which will use the new data on hand).
Update the dock snapshot
Schedule an upcoming background refresh task
Mark your current task as complete.
You can even chain a series of different background sub-tasks where each sub-task handles a separate aspect of a refresh cycle, and is responsible for scheduling the following sub-task.
Sample code:
If you haven't seen it, Apple provides its WatchBackgroundRefresh sample code to demonstrate part of this. You can use
WKExtension.shared().scheduleBackgroundRefresh(withPreferredDate:userInfo:)
to schedule (either the initial, or) a future task before the present task completes.
Although their example uses a refresh button to schedule the next background refresh, the concept is the same, whether it is a user action or a background task that schedules the next request.
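As a rough watchOS 3 sketch of that cycle (the URL, session identifier, and 30-minute interval are placeholders, and the URL-session handling is simplified; in a real app you would rejoin the background session via the task's sessionIdentifier and handle the downloaded file in a session delegate), it might look something like this:

import WatchKit
import ClockKit

class ExtensionDelegate: NSObject, WKExtensionDelegate {

    // Schedule the next refresh roughly 30 minutes out (placeholder interval).
    func scheduleNextRefresh() {
        let next = Date(timeIntervalSinceNow: 30 * 60)
        WKExtension.shared().scheduleBackgroundRefresh(withPreferredDate: next,
                                                       userInfo: nil) { error in
            if let error = error { print("Scheduling failed: \(error)") }
        }
    }

    func handle(_ backgroundTasks: Set<WKRefreshBackgroundTask>) {
        for task in backgroundTasks {
            switch task {
            case let refreshTask as WKApplicationRefreshBackgroundTask:
                // Start the download using a background URL session configuration.
                let config = URLSessionConfiguration.background(withIdentifier: "com.example.refresh")
                let session = URLSession(configuration: config)
                session.downloadTask(with: URL(string: "https://example.com/data.json")!).resume()
                refreshTask.setTaskCompleted()

            case let urlTask as WKURLSessionRefreshBackgroundTask:
                // Download finished: update your data store, then the complication.
                let server = CLKComplicationServer.sharedInstance()
                server.activeComplications?.forEach { server.reloadTimeline(for: $0) }
                scheduleNextRefresh()   // schedule the next cycle before completing
                urlTask.setTaskCompleted()

            default:
                task.setTaskCompleted()
            }
        }
    }
}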
I am working on a radio application where I need to convert speech to text. For that I am using third-party APIs. To get better results, I want to run two APIs at the same time and compare the output. This should happen when the user taps the record button.
I know we can do this using GCD, but I don't have a clear idea of how to achieve it.
I need a suggestion.
Thank you.
The short answer is that you create two GCD queues, one for each speech-to-text task. Within each block, you call one of the two APIs with the same input data. Then you either wait for the results, or have each block invoke a callback/status method when it completes.
Note that you will need to ensure that the speech engines can safely run on background threads.
This is fairly straightforward if you want to record the audio first, then submit the data to two different engines for processing. But it sounds like you might want to start processing the audio as soon as the user clicks Record? In that case, it very much depends on the APIs as to how you feed them data in real time. You might want to just run them on separate threads explicitly and feed them data as it comes in.
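To make that concrete, here is a minimal sketch assuming you record the audio first and then hand the same Data to both engines; recognizeWithEngineA/B are hypothetical stand-ins for whatever the two third-party APIs actually look like:

import Foundation

// Hypothetical stand-ins for the two third-party speech-to-text APIs.
func recognizeWithEngineA(_ audio: Data, completion: @escaping (String) -> Void) { /* ... */ }
func recognizeWithEngineB(_ audio: Data, completion: @escaping (String) -> Void) { /* ... */ }

func transcribe(_ audio: Data, completion: @escaping (String, String) -> Void) {
    let group = DispatchGroup()
    var resultA = "", resultB = ""

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        recognizeWithEngineA(audio) { text in
            resultA = text
            group.leave()
        }
    }

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        recognizeWithEngineB(audio) { text in
            resultB = text
            group.leave()
        }
    }

    // Compare the two transcripts on the main queue once both have finished.
    group.notify(queue: .main) {
        completion(resultA, resultB)
    }
}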
I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this solution.
That solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone please help, is there any other way to display the waveform quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished (a rough sketch of this is below)
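As a rough illustration of that second point (written in Swift rather than C for readability, with deliberately simple PCM settings and one peak value per sample buffer), it might look something like this:

import AVFoundation

// Reads an audio asset off the main queue and calls back with per-buffer peak
// values suitable for drawing a simple waveform. Settings are illustrative only.
func loadWaveform(from url: URL, completion: @escaping ([Float]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let asset = AVURLAsset(url: url)
        guard let track = asset.tracks(withMediaType: .audio).first,
              let reader = try? AVAssetReader(asset: asset) else { return }

        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        reader.add(output)
        reader.startReading()

        var peaks: [Float] = []
        while let sampleBuffer = output.copyNextSampleBuffer(),
              let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
            let length = CMBlockBufferGetDataLength(blockBuffer)
            var samples = [Int16](repeating: 0, count: length / 2)
            CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0, dataLength: length, destination: &samples)
            let peak = samples.map { abs(Int32($0)) }.max() ?? 0
            peaks.append(Float(peak) / Float(Int16.max))   // one normalized peak per buffer
        }

        DispatchQueue.main.async { completion(peaks) }
    }
}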
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least the last time I tried. (That may have changed with iOS 6.) A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, I read the CMSampleBufferRef objects into a CMBufferQueueRef, which can hold multiple sample buffers. (See copyNextSampleBuffer on AVAssetReader.) You can configure the queue to give you enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful that I do not make the buffer so big that it uses too much memory or causes issues with playback. I just do not know how long it will take to process the waveform, and I will test it on my older iPods and iPhone 4 before I try it on my iPhone 5 to see if they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio
I am new to Objective-C/Cocoa programming. I am making an application which is to constantly sync with a server and keep its view updated.
Now, in a nutshell, here's what I thought of: initiate an NSTimer to trigger every second or two, contact the server, and if there is a change, update the view. Is this a good way of doing it?
I have read elsewhere that you can have a thread running in the background which monitors the changes and updates the view. I have never worked with threads before, and I know they can be quite troublesome and that you need a good amount of experience with memory management to get the most out of them.
I have one month to get this application done. What do you guys recommend? Just use an NSTimer and do it the way I thought of... or learn multithreading and get it done that way (but keep in mind my time frame)?
Thanks!
I think using a separate thread in this case would be too much. You need to use threads when some task runs for a considerable amount of time and could freeze your app.
In your case do this:
Create a timer and call some method (say, update) every N seconds.
In update, send an asynchronous request to the server and check for any changes.
Download the data using the NSURLConnection delegate and parse it. Note: if there is a chance that you will receive a huge amount of data from the server and processing it can take a long time (for example, parsing 2 MB of XML data), then you do need to perform that work on a separate thread.
Update all listeners (appropriate view controllers, for example) with the processed data.
Continue polling using the timer (a rough sketch of this follows below).
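A minimal modern-Swift sketch of that polling loop (using Timer and URLSession instead of NSTimer and NSURLConnection; the URL and parsing are placeholders) could look like this:

import Foundation

// Polls the server every N seconds; the endpoint and parsing are placeholders.
final class SyncController {
    private var timer: Timer?
    private let endpoint = URL(string: "https://example.com/changes")!

    func startPolling(every interval: TimeInterval = 2) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            self?.update()
        }
    }

    private func update() {
        URLSession.shared.dataTask(with: endpoint) { data, _, _ in
            guard let data = data else { return }
            // Parse `data` here; if parsing is heavy, move it to a background queue.
            DispatchQueue.main.async {
                // Notify view controllers / refresh the view with the parsed result.
            }
        }.resume()
    }

    func stopPolling() {
        timer?.invalidate()
        timer = nil
    }
}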
Think about requirements. The most relevant questions, IMO, are:
does your application have to get new data while running in the background?
does your application need to be responsive, that is, not sluggish when it's fetching new data?
I guess the answer to the first question is probably no. If you are updating a view depending on the data, you only need to fetch the data when the view is visible. You cannot guarantee fetching data in the background anyway, because iOS can always just kill your application. Either way, from your application's perspective, multithreading is not relevant to this question: whether you are updating only in the foreground or also in the background, your application needs no more than one thread.
Multithreading is relevant rather to the second question. If your application has to remain responsive while fetching data, then you will have to run your fetching code on a detached thread. What's more important here is that the update to the user interface (views, for example) must happen back on the main thread.
Learning multithreading in general is a real undertaking, but the iOS SDK provides a lot of help. Learning how to use an operation queue (I guess that's the easiest to learn, though not necessarily the easiest to use) wouldn't take many days. In a month, you can definitely finish the job.
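For example, a minimal sketch of that fetch-then-update pattern with an operation queue might look like this (the fetching and view-updating helpers are placeholders standing in for your own code):

import Foundation

let fetchQueue = OperationQueue()   // background queue for network work

func refresh() {
    fetchQueue.addOperation {
        // Fetch and parse new data off the main thread (placeholder helper).
        let newItems = fetchItemsFromServer()

        OperationQueue.main.addOperation {
            // UI updates must happen back on the main thread.
            updateViews(with: newItems)
        }
    }
}

// Hypothetical helpers standing in for your networking and UI code.
func fetchItemsFromServer() -> [String] { return [] }
func updateViews(with items: [String]) { }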
Again, however, think clearly about why you would need multithreading.