Swift: stop a loop and continue it with a button

Is it possible to stop a for-in loop and continue it at the stopped position with a button?
I mean something like this:
for x in 0..<5 {
    if x == 3 {
        // show a button
        // this button has to be pressed
        // which does some tasks
        // when all tasks are finished ->
        // continue this loop at the stopped point
    }
}
Is this possible? If yes, how?

Short answer: use a semaphore
Longer answer:
Your situation is an example of the more general case of how to pause some computation, saving its current state (local variables, call stack, etc.), and later to resume it from the same point, with all state restored.
Some languages/systems provide coroutines to support this, others the more esoteric call-with-current-continuation; neither is (currently) available to you in Swift...
What you do have is Grand Central Dispatch (GCD), provided in Swift through Dispatch, which provides support for executing concurrent asynchronous tasks and synchronisation between them. Other concurrency mechanisms such as pthread are also available, but GCD tends to be recommended.
Using GCD, an outline of a solution is:
Execute your loop as an asynchronous concurrent task. It must not execute on the main thread or you will deadlock...
When you wish to pause:
Create a semaphore
Start another async task to display the button, run the other jobs etc. This new task must signal the semaphore when it is finished. The new task may call the main thread to perform UI operations.
Your loop task waits on the semaphore; this blocks the loop task until the button task signals it.
This may sound complicated but with Swift block syntax and Dispatch it is quite simple. However you do need to read up on GCD first!
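For illustration, here is a minimal sketch of that outline in Swift; the view controller, the continueButton outlet and the button's tasks are hypothetical stand-ins, not taken from your code:

import UIKit

// A minimal sketch of the outline above (hypothetical names throughout).
class LoopViewController: UIViewController {
    @IBOutlet var continueButton: UIButton!
    private let semaphore = DispatchSemaphore(value: 0)

    func startLoop() {
        // Run the loop off the main thread; calling wait() on the main
        // thread would deadlock the UI.
        DispatchQueue.global(qos: .userInitiated).async {
            for x in 0..<5 {
                if x == 3 {
                    DispatchQueue.main.async {
                        self.continueButton.isHidden = false  // show the button
                    }
                    // Block this background task until the button's
                    // action signals the semaphore.
                    self.semaphore.wait()
                }
                print("iteration \(x)")
            }
        }
    }

    @IBAction func continuePressed(_ sender: UIButton) {
        sender.isHidden = true
        // ... run the button's tasks here; when they have all finished:
        semaphore.signal()  // resume the paused loop
    }
}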
Alternatively, you can ask whether you can restructure your solution into multiple parts so that saving/restoring the current state is not required. You might find designs such as continuation-passing style useful, which again is quite easy using Swift's blocks.
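A sketch of that restructuring in Swift: split the work at the pause point and pass the remainder of the loop as a completion closure. The helper showButtonAndRunTasks(completion:) is hypothetical; in a real app it would unhide the button and call the completion from the button's action once its tasks finish.

// Hypothetical helper: show the button, then call completion() when
// the button has been pressed and its tasks are done.
func showButtonAndRunTasks(completion: @escaping () -> Void) {
    // ... UI code omitted in this sketch ...
    completion()
}

func runFirstPart() {
    for x in 0..<3 {
        print("iteration \(x)")
    }
    showButtonAndRunTasks {
        runSecondPart()   // the "continuation": the rest of the loop
    }
}

func runSecondPart() {
    for x in 3..<5 {
        print("iteration \(x)")
    }
}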
HTH

Related

Why does the order of a dispatched print() change when run in a Playground?

In a Udacity GCD course there was a small quiz:
let q = DispatchQueue.global(qos: .userInteractive)
q.async { () -> Void in
    print("tic")
}
print("tac")
Which will be printed first?
The correct answer is: tac, then tic. Seems logical.
But, why is it so only when I create an Xcode project? In a playground it prints tic then tac. What am I missing?
In GCD,
DispatchQueue.global(qos: .userInteractive).async {}
runs at a lower priority than
DispatchQueue.main.async {}
Even though it has a quality of service (QoS) suited to UI work, that does not mean it is the main thread, so there may be a difference in scheduling between a playground and a UI project.
Please check Apple's documentation as well:
Apple's documentation on GCD
The key to your answer is in the question: what are you really asking the "system" to do? By "system" I mean whatever the code is running on - the playground, your computer/phone, or the emulator. You are executing an asynchronous block of code - print("tic") - with priority .userInteractive. Whether the system handles the asynchronous block first or continues with "normal" execution depends on priority and available resources.
With asynchronous calls there is no real way to guarantee that the block is executed before or after the code that follows; that is the nature of being asynchronous, i.e. execute the block as soon as the system allows, without blocking your current work. So the difference you are seeing between the playground and a project/emulator is that the project/phone/emulator must keep the UI/primary thread responsive, so it continues with print("tac"), while the playground favours the thread executing print("tic").
Bottom line: it comes down to execution priority, available resources, and how dispatch is implemented on the system you're running the code on.

can a timer trigger during another timer's callback?

I have two timers running simultaneously. The first timer triggers every 1 second and takes 0.2 seconds to run. The second timer triggers every 20 minutes and takes 5 minutes to run. I would like the first timer to continue triggering during the 5 minutes it takes the second timer to execute its callback. In practice, during the second timer's callback the first timer does not trigger. Is it possible to configure the timers to execute the way I want?
There is a workaround, depending on how your timer callbacks' work is structured. If the long timer callback is running a long loop or sequence of calls to different functions, you can insert drawnow() or pause(0.01) calls to make it yield to Matlab's event dispatch queue, which will handle pending handle graphics and timer events, including your other Timer's trigger.
It's sort of like old-school cooperative multitasking where each thread had to explicitly yield control to other threads, instead of being pre-empted by the system's scheduler. Matlab is single-threaded with respect to M-code execution. When a Matlab function is running, events that get raised are put on an event queue and wait until the function finishes and returns to the command prompt, or drawnow(), pause(), uiwait() or a similar function is called. This is how you keep a Matlab GUI responsive, and is documented under their Handle Graphics stuff. But Matlab timer objects use the same event queue for their callbacks. (At least as of a couple versions ago; this is only semi-documented and might change.) So you can manage their liveness with the same functions. You may also need to tweak BusyMode on your timers.
This is kind of a hack but it should get you basic functionality as long as you don't need precise timing, and don't need the callbacks' code to actually run in parallel. (Whichever timer callback has yielded will wait for the other one to finish before proceeding with its own work.)
If the long callback is really blocked on a long operation that you can't stick drawnow calls into, you're out of luck with basic Matlab and will need to use one of the workarounds the commenters suggest.

Grand Central Dispatch async vs sync [duplicate]

This question already has answers here:
Difference between DispatchQueue.main.async and DispatchQueue.main.sync
(4 answers)
Closed 3 years ago.
I'm reading the docs on dispatch queues for GCD, and they say that the queues are FIFO, so I am wondering what effect this has on async/sync dispatches?
From my understanding, async executes things in the order it gets them, while sync executes things serially.
But when you write your GCD code you decide the order in which things happen, so as long as you know what's going on in your code you should know the order in which things execute.
My questions are: where's the benefit of async here? Am I missing something in my understanding of these two things?
The first answer isn't quite complete, unfortunately. Yes, sync will block and async will not, however there are additional semantics to take into account. Calling dispatch_sync() will also cause your code to wait until each and every pending item on that queue has finished executing, also making it a synchronization point for said work. dispatch_async() will simply submit the work to the queue and return immediately, after which it will be executed "at some point" and you need to track completion of that work in some other way (usually by nesting one dispatch_async inside another dispatch_async - see the man page for example).
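For example, in Swift syntax, a sketch of that nesting pattern (the queue choices here are just illustrative):

DispatchQueue.global(qos: .utility).async {
    // ... do the submitted work ...
    print("background work finished")
    DispatchQueue.main.async {
        // Reached only after the work above has completed,
        // back on the main queue.
        print("now safe to use the result, e.g. to update the UI")
    }
}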
sync means the function WILL BLOCK the current thread until it has completed, async means it will be handled in the background and the function WILL NOT BLOCK the current thread.
If you want serial execution of blocks check out the creation of a serial dispatch queue
From the man page:
FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion.
See dispatch_semaphore_create(3) for more information about dispatch semaphores. The actual implementation of the dispatch_sync() function may be optimized and differ from the above description.
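To make that description concrete, here is a sketch in Swift (not the actual libdispatch implementation) of a sync-style call built from async plus a semaphore:

import Foundation

func mySync(on queue: DispatchQueue, _ block: @escaping () -> Void) {
    let done = DispatchSemaphore(value: 0)
    queue.async {
        block()
        done.signal()   // signal that the block has completed
    }
    done.wait()         // block the caller until the signal arrives
}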
Tasks can be performed synchronously or asynchronously.
A synchronous function returns control to the current queue only after the task is finished. It blocks the queue and waits until the task is finished.
An asynchronous function returns control to the current queue right after the task has been sent to be performed on a different queue. It doesn't wait until the task is finished, and it doesn't block the queue.
Only with asynchronous dispatch can we add a delay, e.g. asyncAfter(deadline: .now() + 10) { ... }
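A minimal sketch of the difference (the queue label below is hypothetical):

import Foundation

let queue = DispatchQueue(label: "com.example.demo")   // a serial queue

queue.sync {
    print("sync: the caller is blocked until this block finishes")
}
print("reached only after the sync block above has completed")

queue.async {
    print("async: this block runs at some later point")
}
print("reached immediately; may print before or after the async block")

// A delay is only possible with asynchronous dispatch:
queue.asyncAfter(deadline: .now() + 1) {
    print("runs about one second later")
}

// Keep a command-line process alive long enough for the async work.
Thread.sleep(forTimeInterval: 2)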

Does putting a block on a sync GCD queue lock that block and pause the others?

I read that GCD synchronous queues (dispatch_sync) should be used to implement critical sections of code. An example would be a block that subtracts a transaction amount from an account balance. The interesting question about sync calls is: how do they affect the work of other blocks on multiple threads?
Let's imagine a situation where there are 3 threads that use and execute both system and user-defined blocks from the main and custom queues in asynchronous mode. Those blocks are all executed in parallel in some order. Now, if a block is put on a custom queue with a sync call, does that mean that all other blocks (including those on other threads) are suspended until that block has executed successfully? Or does it mean that only some lock is put on that block while the others still execute? However, if other blocks use the same data as the sync block, then it's inevitable that they will wait until that lock is released.
IMHO it doesn't matter whether it's one core or multiple cores; sync mode should freeze the whole app's work. However, these are just my thoughts, so please comment on that and share your insights :)
Synchronous dispatch suspends the execution of your code until the dispatched block has finished. Asynchronous dispatch returns immediately, the block is executed asynchronously with regard to the calling code:
dispatch_sync(somewhere, ^{ something });
// Reached later, when the block is finished.
dispatch_async(somewhere, ^{ something });
// Reached immediately. The block might be waiting
// to be executed, executing or already finished.
And there are two kinds of dispatch queues, serial and concurrent. The serial ones dispatch the blocks strictly one by one in the order they are being added. When one finishes, another one starts. There is only one thread needed for this kind of execution. The concurrent queues dispatch the blocks concurrently, in parallel. There are more threads being used there.
You can mix and match sync/async dispatch and serial/concurrent queues as you see fit. If you want to use GCD to guard access to a critical section, use a single serial queue and dispatch all operations on the shared data on this queue (synchronously or asynchronously, does not matter). That way there will always be just one block operating with the shared data:
- (void) addFoo: (id) foo {
    dispatch_sync(guardingQueue, ^{ [sharedFooArray addObject:foo]; });
}

- (void) removeFoo: (id) foo {
    dispatch_sync(guardingQueue, ^{ [sharedFooArray removeObject:foo]; });
}
Now if guardingQueue is a serial queue, the add/remove operations can never clash even if the addFoo: and removeFoo: methods are called concurrently from different threads.
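For reference, the same guarded-access pattern written in Swift (a sketch; the class, queue label and array are hypothetical):

import Foundation

final class FooStore {
    // A serial queue: only one block touches sharedFoos at a time.
    private let guardingQueue = DispatchQueue(label: "com.example.foo.guard")
    private var sharedFoos: [AnyObject] = []

    func addFoo(_ foo: AnyObject) {
        guardingQueue.sync { self.sharedFoos.append(foo) }
    }

    func removeFoo(_ foo: AnyObject) {
        guardingQueue.sync { self.sharedFoos.removeAll { $0 === foo } }
    }
}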
No it doesn't.
The synchronised part is that the block is put on a queue but control does not pass back to the calling function until the block returns.
Many uses of GCD are asynchronous; you put a block on a queue and, rather than waiting for the block to complete its work, control is passed back to the calling function.
This has no effect on other queues.
If you need to serialize access to a certain resource then there are at least two mechanisms available to you. If you have an account object (that is unique for a given account number), then you can do something like:
@synchronized(accountObject) { ... }
If you don't have an object but are using a C structure for which there is only one such structure for a given account number, then you can do the following:
// Should be added to the account structure.
// 1 => at most 1 object can access accountLock at a time.
dispatch_semaphore_t accountLock = dispatch_semaphore_create(1);

// In your block you do the following:
block = ^(void) {
    dispatch_semaphore_wait(accountLock, DISPATCH_TIME_FOREVER);
    // Do something
    dispatch_semaphore_signal(accountLock);
};

// -- Edited: semaphore was leaking.
// At the appropriate time release the lock.
// If the semaphore was created in the init then
// the semaphore should be released in the release method.
dispatch_release(accountLock);
With this, regardless of the level of concurrency of your queues, you are guaranteed that only one thread will access an account at any given time.
There are many more types of synchronization objects but these two are easy to use and
quite flexible.

What is the difference between GCD Dispatch Sources and select()?

I've been writing some code that replaces some existing:
while(runEventLoop){
    if(select(openSockets, readFDS, writeFDS, errFDS, timeout) > 0){
        // check file descriptors for activity and dispatch events based on same
    }
}
socket reading code. I'd like to change this to use a GCD queue, so that I can pop events on to the queue using dispatch_async instead of maintaining a "must be called on next iteration" array. I also am already using a GCD queue to /contain/ this particular action, hence wanting to devolve it to a more natural GCD dispatch form. ( not a while() loop monopolizing a serial queue )
However, when I tried to refactor this into a form that relied on dispatch sources fired from event handlers tied to DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE on the socket descriptors, the library code that depended on this scheduling stopped working. My first assumption is that I'm misunderstanding the use of DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE - I had assumed that they would yield roughly the same behavior as calling select() with those socket descriptors.
Do I misunderstand GCD dispatch sources? Or, regarding the refactor, am I using it in a situation where it is not best suited?
The short answer to your question is: none. There are no differences; both GCD dispatch sources and select() do the same thing: they notify the user that a specific kernel event happened or that a particular condition holds true.
Note that on a Mac or iOS device you should not use select(), but rather the more advanced kqueue() and kevent() (or kevent64()).
You may certainly convert the code to use GCD dispatch sources, but you need to be careful not to break other code relying on this scheduling. So this requires a complete inspection of all the code handling signals, file descriptors, sockets and the other low-level kernel events.
Maybe a simpler solution would be to keep the original code and simply add GCD code in the part that reacts to events: there, you dispatch events onto different queues depending on the particular type of event.
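If it helps, here is what a read dispatch source looks like in Swift syntax (a sketch; the descriptor and queue are assumed to come from your existing socket code):

import Foundation

func monitorReadableSocket(_ fd: Int32, on queue: DispatchQueue) -> DispatchSourceRead {
    let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)
    source.setEventHandler {
        // `data` is the estimated number of bytes available to read.
        let available = source.data
        print("socket \(fd) is readable, ~\(available) bytes pending")
        // ... read from fd and dispatch your event here ...
    }
    source.setCancelHandler {
        close(fd)   // clean up the descriptor when the source is cancelled
    }
    source.resume()   // sources start suspended; resume() activates them
    return source
}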