Are Swift higher order functions like map, reduce, filter, sort synchronous or asynchronous?
And are higher order functions thread safe or not? If not, how can we make them thread safe? By implementing them inside a serial queue?
Are Swift higher order functions like map, reduce, filter, sort synchronous or asynchronous?
They are synchronous.
Are they thread safe?
Not by themselves; it's your job to avoid touching shared mutable state from several threads at once. Otherwise, wrap the operation in a serial queue (the main queue is serial), as sketched below.
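For example, if several threads need to run a map over shared data, one way to serialize the access (a minimal sketch; the queue label and variable names are made up) is:

import Foundation

let serialQueue = DispatchQueue(label: "com.example.shared-data")  // hypothetical label
var sharedNumbers = [1, 2, 3, 4]

func doubledSnapshot() -> [Int] {
    // Every read or write of sharedNumbers goes through the same serial queue,
    // so the map can never race with a mutation from another thread.
    return serialQueue.sync {
        sharedNumbers.map { $0 * 2 }
    }
}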
When you pass a function or a closure to another function, notice the type of the parameter: it can be @escaping or non-escaping (the latter is the default and is usually omitted from the declaration).
Non-escaping closures will definitely be called on the same thread, synchronously, before the function returns. For example, all of the collection higher order functions (map, filter, etc.) take non-escaping closures. In fact, some of these calls can be inlined and optimized by the compiler so that no actual function call remains.
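You can see the synchronous behaviour directly; in this illustrative snippet the closure runs on the calling thread and the result is available as soon as map returns:

import Foundation

let numbers = [1, 2, 3]
let doubled = numbers.map { value -> Int in
    // Runs synchronously on whatever thread called map; nothing escapes.
    print("mapping \(value), main thread? \(Thread.isMainThread)")
    return value * 2
}
print(doubled) // [2, 4, 6], available immediately after the call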
Escaping closures are a bit different. If you design a higher order function yourself and you happen to store the closure in a variable or property to be called later, the compiler will force you to declare the parameter as @escaping. These closures are allowed to be called much later and from any thread.
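A minimal sketch of such a design (the type and method names here are invented for illustration):

final class Debouncer {
    private var pendingWork: (() -> Void)?   // stored, so the closure outlives the call

    // The compiler requires @escaping because the closure is kept around for later.
    func schedule(_ work: @escaping () -> Void) {
        pendingWork = work
    }

    func fire() {
        pendingWork?()
        pendingWork = nil
    }
}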
When dealing with system or API calls you need to check the documentation. For example, the documentation for AVCaptureDevice.requestAccess(for:completionHandler:) states that the callback can be called on an arbitrary thread, and therefore you are responsible for ensuring your UI code is executed on the main thread. You typically do it this way:
AVCaptureDevice.requestAccess(for: .video, completionHandler: { (granted) in
    DispatchQueue.main.async {
        // Execute UI code here
    }
})
Another example is the URLSessionTask family of classes, which typically delivers the results of a network operation via an async callback on a non-main thread. You can take advantage of the fact that you are on a different thread, or you can "return" to the main one as in the example above.
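A sketch of that pattern with URLSession (the URL here is only a placeholder):

import Foundation

let url = URL(string: "https://example.com/data.json")!  // placeholder URL
let task = URLSession.shared.dataTask(with: url) { data, response, error in
    // By default this completion handler runs on a background queue.
    guard let data = data, error == nil else { return }
    DispatchQueue.main.async {
        // "Return" to the main thread before touching any UI state.
        print("Received \(data.count) bytes")
    }
}
task.resume()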
All in all, it's up to the designer of the higher order function, so a short answer is: if it's an escaping parameter then check the documentation.
I am pretty new to iPhone development. I need some help on how to synchronize a callback method and a for loop.
For example:
I have a for loop, say from 1 to 3.
Within this loop, I first send a message to a receiver. The result from the receiver is obtained in a callback function. With this result I need to perform some parsing. Now how can I continue with the loop?
BR,
Suppi
Edited with Code:
- (void)requestData {
    for (int i = 1; i < 3; i++) {
        completeMessage = [self generateMessage:message];
        [self sendMessageToReceiver:completeMessage];
        // Now it goes to the callback function to read the message from the receiver.
        // How do I return to this point to continue the loop?
        [self dosomething:result];
    }
}
I don't know much about iPhone development, but based on my asynchronous function calling experience you might have to reconsider your approach (assuming this is an asynchronous function call).
When you go through the loop the first time, your code is going to call all the asynchronous functions and move on; it is not going to wait. If you want it to wait for each function call, then you either shouldn't use asynchronous functions or you should make the thread wait or sleep inside the loop. You could also use some kind of thread synchronization and signalling in the loop: for example, you could make the asynchronous call and then have your thread wait until it gets a signal from your callback to continue.
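One way to sketch that wait-for-signal idea is with a semaphore (shown here in Swift just to illustrate; sendMessage and parse are hypothetical stand-ins for the real messaging and parsing code):

import Foundation

// Hypothetical stand-ins for the real messaging and parsing code.
func sendMessage(_ message: Int, completion: @escaping (String) -> Void) {
    DispatchQueue.global().async { completion("reply to \(message)") }
}
func parse(_ result: String) { print("parsed: \(result)") }

let semaphore = DispatchSemaphore(value: 0)

for i in 1...3 {
    sendMessage(i) { result in
        parse(result)        // do the parsing with the callback's result
        semaphore.signal()   // tell the waiting loop it may continue
    }
    semaphore.wait()         // block here until this iteration's callback has fired
}

Note that this blocks the calling thread, so it must not run on the main thread; it only illustrates the wait/signal approach.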
You may want to take your custom end processing out of the loop and do it after all your callbacks are done. Each callback could store its result in a common location, and you then use that state once the callbacks have finished.
Of course, you would need to wait until all the callbacks are done before you can continue.
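A sketch of that "collect, then continue when everything is done" approach, again in Swift with a hypothetical sendMessage helper:

import Foundation

// Hypothetical async API standing in for sendMessageToReceiver:.
func sendMessage(_ message: Int, completion: @escaping (String) -> Void) {
    DispatchQueue.global().async { completion("reply to \(message)") }
}

let group = DispatchGroup()
let resultsQueue = DispatchQueue(label: "results")   // serializes access to results
var results: [String] = []

for i in 1...3 {
    group.enter()
    sendMessage(i) { result in
        resultsQueue.async {
            results.append(result)   // stash state in a common location
            group.leave()
        }
    }
}

group.notify(queue: .main) {
    // All callbacks have finished; do the end processing here.
    print("all done: \(results)")
}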
Hope this helps.
Launch the message in a separate thread:
[receiver performSelectorInBackground:@selector(doSomething) withObject:nil];
Pass a non-nil object as the withObject: argument if you wish to hand a parameter to the selector.
Convert your "for" loop into the equivalent goto statements. Then break the goto basic blocks into methods and method calls without gotos. Then split the method containing the wait into two methods, and use an asynchronous call and callback between them. You may have to save some of the local variables and the for loop's implicit state in instance variables.
Gotos are not always bad. They are just implicit in more readable structured and/or OOP messaging constructs. Sometimes the compiler can't do the conversion for you, so you need to know enough about raw program control sequencing to do it yourself.
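A rough sketch of that transformation (Swift, with a hypothetical sendMessage helper): the loop body becomes a step function that re-invokes itself from the callback, and the loop counter becomes explicit state:

import Foundation

// Hypothetical async API standing in for sendMessageToReceiver:.
func sendMessage(_ message: Int, completion: @escaping (String) -> Void) {
    DispatchQueue.global().async { completion("reply to \(message)") }
}

func requestData(from index: Int = 1, upTo limit: Int = 3) {
    guard index < limit else { return }            // the for loop's exit condition
    sendMessage(index) { result in
        print("do something with \(result)")       // the work that follows the callback
        requestData(from: index + 1, upTo: limit)  // "jump back" to the next iteration
    }
}

requestData()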
I've recently learned Haskell, and am trying to carry the pure functional style over to my other code when possible. An important aspect of this is treating all variables as immutable, i.e. constants. In order to do so, many computations that would be implemented using loops in an imperative style have to be performed using recursion, which typically incurs a memory penalty due to the allocation of a new stack frame for each function call. In the special case of a tail call (where the calling function immediately returns the called function's return value), however, this penalty can be bypassed by a process called tail call optimization (in one method, this can be done by essentially replacing a call with a jmp after setting up the stack properly). Does MATLAB perform TCO by default, or is there a way to tell it to?
If I define a simple tail-recursive function:
function tailtest(n)
if n==0; feature memstats; return; end
tailtest(n-1);
end
and call it so that it will recurse quite deeply:
set(0,'RecursionLimit',10000);
tailtest(1000);
then it doesn't look as if stack frames are eating a lot of memory. However, if I make it recurse much deeper:
set(0,'RecursionLimit',10000);
tailtest(5000);
then (on my machine, today) MATLAB simply crashes: the process unceremoniously dies.
I don't think this is consistent with MATLAB doing any TCO; the case where a function tail-calls itself, only in one place, with no local variables other than a single argument, is just about as simple as anyone could hope for.
So: No, it appears that MATLAB does not do TCO at all, at least by default. I haven't (so far) looked for options that might enable it. I'd be surprised if there were any.
In cases where we don't blow out the stack, how much does recursion cost? See my comment to Bill Cheatham's answer: it looks like the time overhead is nontrivial but not insane.
... Except that Bill Cheatham deleted his answer after I left that comment. OK. So, I took a simple iterative implementation of the Fibonacci function and a simple tail-recursive one, doing essentially the same computation in both, and timed them both on fib(60). The recursive implementation took about 2.5 times longer to run than the iterative one. Of course the relative overhead will be smaller for functions that do more work than one addition and one subtraction per iteration.
(I also agree with delnan's sentiment: highly-recursive code of the sort that feels natural in Haskell is typically likely to be unidiomatic in MATLAB.)
There is a simple way to check this. Create this function tail_recursion_check:
function r = tail_recursion_check(n)
    if n > 1
        r = tail_recursion_check(n - 1);
    else
        error('error');
    end
end
and run tail_recursion_check(10), for example. You are going to see a very long stack trace with 10 entries that say error at line 3. If there were tail call optimization, you would only see one.
I've been writing some code that replaces some existing:
while (runEventLoop) {
    if (select(openSockets, readFDS, writeFDS, errFDS, timeout) > 0) {
        // check file descriptors for activity and dispatch events based on same
    }
}
socket reading code. I'd like to change this to use a GCD queue, so that I can pop events onto the queue using dispatch_async instead of maintaining a "must be called on next iteration" array. I am also already using a GCD queue to /contain/ this particular action, hence wanting to devolve it to a more natural GCD dispatch form (not a while() loop monopolizing a serial queue).
However, when I tried to refactor this into a form that relied on dispatch sources fired from event handlers tied to DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE on the socket descriptors, the library code that depended on this scheduling stopped working. My first assumption is that I'm misunderstanding the use of DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE - I had assumed that they would yield roughly the same behavior as calling select() with those socket descriptors.
Do I misunderstand GCD dispatch sources? Or, regarding the refactor, am I using it in a situation where it is not best suited?
The short answer to your question is: none. There are no differences; both GCD dispatch sources and select() do the same thing: they notify the user that a specific kernel event happened or that a particular condition holds true.
Note that, on a Mac or iOS device, you should not use select(), but rather the more advanced kqueue() and kevent() (or kevent64()).
You may certainly convert the code to use GCD dispatch sources, but you need to be careful not to break other code relying on the current scheduling. This calls for a complete inspection of all the code handling signals, file descriptors, sockets and the other low level kernel events.
Maybe a simpler solution would be to keep the original code and simply add GCD code in the part that reacts to events. There, you dispatch events onto different queues depending on the particular type of event.
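For reference, here is a minimal sketch (in Swift) of attaching a read dispatch source to a socket descriptor, assuming sockfd is an already-connected, non-blocking socket; it plays roughly the role of select() reporting that the descriptor is readable:

import Dispatch
import Foundation

func monitor(sockfd: Int32, on queue: DispatchQueue) -> DispatchSourceRead {
    let source = DispatchSource.makeReadSource(fileDescriptor: sockfd, queue: queue)

    source.setEventHandler {
        // Fires on the given queue when the descriptor has data to read.
        var buffer = [UInt8](repeating: 0, count: 4096)
        let bytesRead = read(sockfd, &buffer, buffer.count)
        if bytesRead > 0 {
            // dispatch an event based on the data just read
        }
    }

    source.setCancelHandler {
        close(sockfd)
    }

    source.resume()   // sources start suspended; nothing fires until resumed
    return source     // keep a strong reference to the returned source
}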
What exactly is a callback? Is it a function?
Is it a function being called from the source?
Or, is it a function being returned from the destination?
Or, is it just executing a function at the destination?
Or, is it a value returned from a function passed to the destination?
A callback is the building block of asynchronous processing.
Think of it this way: when you call someone and they don't answer, you leave a message and your phone number. Later on, the person calls you back based on the phone number you left.
A callback works in a similar manner.
You ask an API for a long running operation and you provide a method from within your code to be called with the result of the operation. The API does its work and when the result is ready, it calls your callback method.
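A tiny sketch of that shape in Swift (the function and parameter names are invented for illustration):

import Foundation

// A long-running API that reports its result through a callback.
func fetchGreeting(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        // ... pretend this takes a while ...
        completion("Hello, callbacks!")   // call back with the result
    }
}

// The caller leaves its "phone number": a closure to run when the work is done.
fetchGreeting { greeting in
    print(greeting)
}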
From the great Wikipedia:
In computer programming, a callback is executable code that is passed as an argument to other code. It allows a lower-level software layer to call a subroutine (or function) defined in a higher-level layer.
Said another way, when you pass a callback to your method, it's as if you are providing additional instructions (e.g., what you should do next). An attempt at making a simple human example follows:
Paint this wall this shade of green (where "paint" is analogous to the method called, while "wall" and "green" are similar to arguments).
When you have finished painting, call me at this number to let me know that you're done and I'll tell you what to do next.
In terms of practical applications, one place where you will sometimes see callbacks is in situations with asynchronous message passing. You might want to register a particular message as an item of interest for class B.
However, without something like a callback, there's no obvious way for class A to know that class B has received the message. With a callback, you can tell class B, here's the message that I want you to listen for and this is the method in class A that I want you to call when you receive it.
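A sketch of that registration idea (Swift; the class and message names are invented):

// Class B receives messages; class A registers a callback so it hears about one of them.
final class ClassB {
    private var listeners: [String: (String) -> Void] = [:]

    func register(message: String, callback: @escaping (String) -> Void) {
        listeners[message] = callback
    }

    func receive(message: String, payload: String) {
        listeners[message]?(payload)   // call back into class A when the message arrives
    }
}

final class ClassA {
    func listen(to b: ClassB) {
        b.register(message: "didFinish") { payload in
            // This is the method in class A that class B calls when it receives the message.
            print("ClassA got: \(payload)")
        }
    }
}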
Here is a Java example of a callback from a related question.