I have two timers running simultaneously. The first timer triggers every 1 second and takes 0.2 seconds to run. The second timer triggers every 20 minutes and takes 5 minutes to run. I would like the first timer to continue triggering during the 5 minutes it takes the second timer to execute its callback. In practice, the first timer does not trigger while the second timer's callback is running. Is it possible to configure the timers to execute the way I want?
There is a workaround, depending on how the work in your timer callbacks is structured. If the long timer callback runs a long loop or a sequence of calls to different functions, you can insert drawnow() or pause(0.01) calls to make it yield to Matlab's event dispatch queue, which will handle pending handle graphics and timer events, including your other Timer's trigger.
It's sort of like old-school cooperative multitasking where each thread had to explicitly yield control to other threads, instead of being pre-empted by the system's scheduler. Matlab is single-threaded with respect to M-code execution. When a Matlab function is running, events that get raised are put on an event queue and wait until the function finishes and returns to the command prompt, or drawnow(), pause(), uiwait() or a similar function is called. This is how you keep a Matlab GUI responsive, and is documented under their Handle Graphics stuff. But Matlab timer objects use the same event queue for their callbacks. (At least as of a couple versions ago; this is only semi-documented and might change.) So you can manage their liveness with the same functions. You may also need to tweak BusyMode on your timers.
This is kind of a hack but it should get you basic functionality as long as you don't need precise timing, and don't need the callbacks' code to actually run in parallel. (Whichever timer callback has yielded will wait for the other one to finish before proceeding with its own work.)
If the long callback is really blocked on a long operation that you can't stick drawnow calls into, you're out of luck with basic Matlab and will need to use one of the workarounds the commenters suggest.
I am working on an OpenCL implementation in which a particular host-side function has to be called every time clEnqueueReadBuffer finishes executing.
I am calling the kernels in a loop. With an in-order queue it looks like this:
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) ->
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) .......
I have used clSetEventCallback() to register an event callback on each read command. I have observed that, even though the command queue is an in-order queue, the callback functions do not execute in order.
The OpenCL 1.2 specification also mentions the following:
The order in which the registered user callback functions are called is undefined. There is no guarantee that the callback functions registered for various execution status values for an event will be called in the exact order that the execution status of a command changes.
Can anyone suggest a solution? I want the callback functions to execute in order.
A simple solution could be to register the same callback function for both events. In the callback code you can check the status of each relevant event and perform the appropriate operation accordingly.
Note that on some implementations the driver will batch multiple commands for execution.
The immediate effect is that multiple events will be signaled "at once" even though the associated commands complete at different times.
// event1 & event2 are likely to be signaled at once:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
Whereas:
// event1 is likely to be signaled before event2:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clFlush(queue);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
clFlush(queue);
I would also check which exact thread the callbacks are invoked on.
Is it the same thread each time, or a different one? If the implementation opens a new thread for this task, it might be wiser to open a single thread yourself and wait for the events in the order that you wish.
Is it possible to stop a for-in loop and continue it at the stopped position with a button?
I mean something like this:
for x in 0..<5 {
    if x == 3 {
        // show a button
        // this button has to be pressed,
        // which does some tasks;
        // when all the tasks are finished,
        // continue this loop at the stopped point
    }
}
Is this possible? If yes, how?
Short answer: use a semaphore
Longer answer:
Your situation is an example of the more general problem of how to pause some computation, saving its current state (local variables, call stack, etc.), and later resume it from the same point with all state restored.
Some languages/systems provide coroutines to support this, others the more esoteric call-with-current-continuation; neither is (currently) available to you in Swift...
What you do have is Grand Central Dispatch (GCD), provided in Swift through Dispatch, which provides support for executing concurrent asynchronous tasks and synchronisation between them. Other concurrency mechanisms such as pthread are also available, but GCD tends to be recommended.
Using GCD an outline of a solution is:
Execute your loop as an asynchronous concurrent task. It must not execute on the main thread or you will deadlock...
When you wish to pause:
Create a semaphore
Start another async task to display the button, run the other jobs etc. This new task must signal the semaphore when it is finished. The new task may call the main thread to perform UI operations.
Your loop task waits on the semaphore; this blocks the loop task until the button task signals it.
This may sound complicated, but with Swift block syntax and Dispatch it is quite simple. However, you do need to read up on GCD first!
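Here is a minimal sketch of that outline. showContinueButton(completion:) is a hypothetical UI helper, assumed to display the button and call its completion closure once the button has been pressed and its tasks are done:

import Dispatch

let semaphore = DispatchSemaphore(value: 0)

// Run the loop off the main thread so waiting on the semaphore cannot deadlock the UI.
DispatchQueue.global(qos: .userInitiated).async {
    for x in 0..<5 {
        if x == 3 {
            DispatchQueue.main.async {
                // Hypothetical helper: shows the button and calls the closure
                // once the button's tasks have all finished.
                showContinueButton {
                    semaphore.signal()
                }
            }
            // Blocks only the loop task, not the main thread.
            semaphore.wait()
        }
        print("processing \(x)")
    }
}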
Alternatively you can ask whether you can restructure your solution into multiple parts so saving/restoring the current state is not required. You might find designs such as continuation passing style useful, which again is quite easy using Swift's blocks.
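For comparison, a sketch of that continuation-passing restructuring, using the same hypothetical showContinueButton(completion:) helper: the loop becomes a recursive function that carries "the rest of the work" along as a closure, so nothing has to block at all.

// Processes indices from `index` up to (but not including) `limit`.
func process(from index: Int, upTo limit: Int) {
    guard index < limit else { return }
    if index == 3 {
        showContinueButton {
            // Resume exactly where we stopped once the button's tasks finish.
            process(from: index + 1, upTo: limit)
        }
    } else {
        print("processing \(index)")
        process(from: index + 1, upTo: limit)
    }
}

process(from: 0, upTo: 5)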
HTH
I'm reading the docs on dispatch queues for GCD, and they say that the queues are FIFO, so I am wondering what effect this has on async/sync dispatches.
From my understanding, async executes things in the order it receives them, while sync executes things serially.
But when you write your GCD code you decide the order in which things happen, so as long as you know what's going on in your code you should know the order in which things execute.
My question is: where is the benefit of async here? Am I missing something in my understanding of these two things?
The first answer isn't quite complete, unfortunately. Yes, sync will block and async will not, however there are additional semantics to take into account. Calling dispatch_sync() will also cause your code to wait until each and every pending item on that queue has finished executing, also making it a synchronization point for said work. dispatch_async() will simply submit the work to the queue and return immediately, after which it will be executed "at some point" and you need to track completion of that work in some other way (usually by nesting one dispatch_async inside another dispatch_async - see the man page for example).
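For example, a minimal sketch of that nested-dispatch pattern in Swift; expensiveComputation() is a made-up stand-in for whatever work you submit:

import Dispatch

DispatchQueue.global().async {
    // Submitted and the caller of async returns immediately.
    let result = expensiveComputation()   // hypothetical long-running work
    DispatchQueue.main.async {
        // Nested dispatch: runs only after the work above has completed.
        print("finished with \(result)")
    }
}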
sync means the function WILL BLOCK the current thread until it has completed, async means it will be handled in the background and the function WILL NOT BLOCK the current thread.
If you want serial execution of blocks check out the creation of a serial dispatch queue
From the man page:
FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion.
See dispatch_semaphore_create(3) for more information about dispatch semaphores. The actual implementation of the dispatch_sync() function may be optimized and differ from the above description.
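As a rough Swift sketch of that description (for illustration only; the name mySync is made up, and as the man page notes, the real dispatch_sync is optimized differently):

import Dispatch

func mySync(on queue: DispatchQueue, _ block: @escaping () -> Void) {
    let done = DispatchSemaphore(value: 0)
    queue.async {
        block()          // run the caller's block on the target queue
        done.signal()    // ...and signal its completion
    }
    done.wait()          // the caller blocks here until the block has run
}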
Tasks can be performed synchronously or asynchronously.
A synchronous function returns control to the current queue only after its task is finished. It blocks the queue and waits until the task has finished.
An asynchronous function returns control to the current queue immediately after the task has been sent off to be performed on a different queue. It doesn't wait until the task is finished and it doesn't block the queue.
Only with the asynchronous variant can we add a delay, e.g. asyncAfter(deadline: .now() + 10).
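A small sketch of the difference; the queue label is arbitrary, and the queue is serial, so its blocks run one at a time in FIFO order:

import Dispatch

let queue = DispatchQueue(label: "com.example.serial")

queue.async { print("async: the caller did not wait for this") }
print("may print before or after the async block above")

queue.sync { print("sync: the caller waits until this has run") }
print("always prints after the sync block")

// A delay is only available with the async variant:
queue.asyncAfter(deadline: .now() + 10) { print("runs about 10 seconds later") }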
Earlier this month I asked the question 'What is a runloop?' After reading the answers and doing some experiments I got it working, but I still don't understand it completely. If a run loop is just a loop that is associated with a thread, and it doesn't spawn another thread behind the scenes, how can any of the other code in my thread (the main thread, to keep it simple) execute without getting blocked or never running, given that there is effectively an infinite loop somewhere?
That was question number one. Now on to my second.
If I have understood anything after having worked with this (though not completely understanding it), a run loop is a loop to which you attach 'flags' that notify the run loop that, when it reaches the point where the flag is, it "stops" and executes whatever handler is attached at that point. Afterwards it keeps running on to the next one in the queue.
So in this case no events are placed in the queue in connections, but when it comes to events it takes whatever action is associated with tap 1 and executes it before it runs on to connections again, and so on. Or am I as far as I can be from understanding the concept?
"Sort of."
Have you read this particular documentation?
It goes into considerable depth -- quite thorough depth -- into the architecture and operation of run loops.
A run loop will get blocked if it dispatches a method that takes too long or that loops forever.
That's the reason why an iPhone app will want to do everything that won't fit into one "tick" of the UI run loop (say, at some animation frame rate or UI response rate), and with room to spare for any other event handlers that need to be done in that same "tick", either broken up asynchronously or dispatched to another thread for execution.
Otherwise stuff will get blocked until control is returned to the run loop.
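For instance, a sketch of the "broken up asynchronously" option, with illustrative names (processChunks and its handle closure are not from any framework): each pass handles one chunk on the main queue and re-queues the rest, so control returns to the run loop between chunks.

import Dispatch

// Handle one chunk per trip through the main run loop, then yield.
func processChunks<T>(_ chunks: ArraySlice<[T]>, handle: @escaping ([T]) -> Void) {
    guard let first = chunks.first else { return }
    handle(first)
    DispatchQueue.main.async {
        processChunks(chunks.dropFirst(), handle: handle)
    }
}

// Usage: split the big job into small chunks up front.
let items = Array(1...10_000)
let chunks = stride(from: 0, to: items.count, by: 500).map {
    Array(items[$0..<min($0 + 500, items.count)])
}
processChunks(chunks[...]) { chunk in
    // do the per-chunk work here
}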
Here is the question:
I have 2 threads: the main one and another one.
On the main thread I perform everything GUI related.
On the other thread I perform all my calculations.
For some operations I have to stop the second thread for some seconds, waiting for the first thread to do something...
My question:
Which is the best option, and why?
Being in the second thread:
use sleepUntilDate
use sleep function
any other option?
The pseudocode is this:
on the second thread:
...
do some calculations
send the results to the first thread and wait for the # of seconds to wait (let's say K)
wait K seconds
+[NSThread sleepForTimeInterval:] is more likely to do what you want than +[NSThread sleepUntilDate:] if the system time suddenly changes. Being in the second thread doesn't affect things.
However, I don't see why the second thread should have to sleep at all. If you want to wait for new data, use something like NSCondition or NSConditionLock to signal when data has arrived.
Alternatively, don't use threads directly at all. You can use NSOperation or performSelectorInBackground: or dispatch_*, to name a few.
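For illustration, a sketch of the NSCondition suggestion in Swift; the dataReady flag and function names are made up for the example. The calculation thread waits until the main thread signals that the data has arrived, instead of sleeping for a fixed K seconds:

import Foundation

let condition = NSCondition()
var dataReady = false

// On the calculation thread: block until the data is available.
func waitForData() {
    condition.lock()
    while !dataReady {        // loop guards against spurious wakeups
        condition.wait()
    }
    dataReady = false         // consume the signal
    condition.unlock()
    // ...continue the calculations...
}

// On the main thread, once the data the second thread needs is ready:
func publishData() {
    condition.lock()
    dataReady = true
    condition.signal()
    condition.unlock()
}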
EDIT: You need to be very careful when writing traditional thread-synchronization code, and you need to be just as careful every time you edit it to add a new feature. I have some threaded code which I need to think about for a couple minutes to figure out what's going on. Deadlocks are no fun.