In Requesting a clear, picturesque explanation of Reactive Extensions (RX)? I asked what RX is all about, and thanks to the answers provided, I think I now get the idea.
In the referenced question I quoted a sentence from http://reactive-extensions.github.com/RxJS/ which says:
RxJS is to events as promises are to async.
Although I think I get the idea behind RX, I do not understand this sentence at all. I cannot even say exactly what it is that I don't understand; it's more that I don't see the connection between the first and the second half of the sentence.
To me the sentence sounds important and impressive, but I can hardly tell whether it's true, whether it's a great insight, and so on.
Can anybody explain, in words someone new to all this reactive stuff (like me) can understand, what the sentence means?
Promises are a way to define computations that run once a single asynchronous operation completes. RxJS is a way to define computations that run when one or more events in a stream occur (onNext), when the stream completes (onCompleted), or when it throws an exception (onError). In other words: a promise handles one future value, while an observable handles a whole stream of future values.
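A minimal Scala sketch of the distinction (the analogy holds in any language; Observer here is a hand-rolled stand-in for the Rx interface, not a library type):

import scala.concurrent.Promise
import scala.concurrent.ExecutionContext.Implicits.global

// A promise/future stands for exactly ONE value that arrives later.
val p = Promise[Int]()
p.future.foreach(n => println(s"single async result: $n"))
p.success(42) // completing it a second time would be an error

// A stream delivers MANY values over time, plus one terminal signal.
trait Observer[A] {
  def onNext(a: A): Unit
  def onError(e: Throwable): Unit
  def onCompleted(): Unit
}

val printing = new Observer[Int] {
  def onNext(a: Int): Unit        = println(s"event: $a")
  def onError(e: Throwable): Unit = println(s"stream failed: $e")
  def onCompleted(): Unit         = println("stream done")
}

Seq(1, 2, 3).foreach(printing.onNext) // any number of events...
printing.onCompleted()                // ...then exactly one terminal signal

Thread.sleep(100) // demo only: let the future's callback print before exit

So the analogy reads: a promise generalizes a single async return value; an observable generalizes that to arbitrarily many values (events) over time.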
I saw this article:
https://itnext.io/comparing-darts-loops-which-is-the-fastest-731a03ad42a2
It says that ".map" is slow with benchmark result
But I don't understand why slower than while/for loop
How does it work in low level?
I think it's because .map is called an unnamed method like this (_){ }
Can you explain that in detail?
It's because mapping an array creates a new collection containing a transformed copy of each value, rather than modifying the original array in place.
Since a while/for loop does not copy any values, but simply accesses them by index, it is a lot faster.
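As a rough sketch of that difference (in Scala rather than Dart, since the code elsewhere on this page is Scala; the allocation argument is the same):

val xs = Array(1, 2, 3, 4)

// map allocates a brand-new array and fills it with transformed copies
val doubled = xs.map(_ * 2) // Array(2, 4, 6, 8); xs itself is untouched

// a while loop allocates nothing: it just walks the existing indices
var i = 0
var sum = 0
while (i < xs.length) {
  sum += xs(i) * 2 // same arithmetic, but no new collection is created
  i += 1
}

The transformation itself costs the same either way; what map adds is allocating and populating an entire second collection.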
"Can you explain that in detail?"
It's like saying "I don't understand why hitchhiking on the back of a construction truck is so much slower than taking the high speed train to my destination".
The only important detail is that map is not a loop (even though map() internally probably uses a loop of some kind).
The article's author is misusing a method call that is meant for something else, just because a side effect of that call, when it is combined with a call that materializes the iterable (like toList()), is that it loops through the given iterable. It doesn't even have that side effect on its own.
Stop reading "tutorials" or "tips" from people misusing language features. map() is not a loop. If you need a loop, use a loop. The same goes for the ternary operator: it's not an if; if you need an if, use one.
Use language features for what they are meant for. Don't misuse a feature because its side effect does what you want and then wonder why it doesn't work as well as the feature actually meant for the job.
Sorry if this seems a bit ranty, but I have seen countless examples of this by now. I don't know where it comes from; my personal guess is "internet tutorials", because anybody can write one. Please don't read them. Read a good book: it was written by professionals, proofread, edited, and checked. Internet tutorials are free, written by random people, and worth about as much as they cost.
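To make the "map is not a loop" point concrete, here is a sketch in Scala rather than Dart, where a lazy view plays the role of Dart's lazy Iterable: the mapping function doesn't even run until something materializes the result, which is exactly the side effect being leaned on.

val xs = List(1, 2, 3)

// Misuse: map on a lazy view for its side effects. Nothing prints here,
// because the function only runs once the view is materialized.
val misused = xs.view.map { x => println(x); x * 2 }
misused.toList // NOW it prints 1, 2, 3 - the side effect of materializing

// The intended tool for each job:
val doubled = xs.map(_ * 2) // transformation: build a new List(2, 4, 6)
xs.foreach(println)         // side effects: use a loop construct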
I can't seem to find anywhere whether complete and tryComplete are atomic operations on Promises in Scala. Promises are only supposed to be written to once, but if two tryCompletes happen concurrently in two different callbacks, for example, could something go wrong? Or are we assured that tryComplete is atomic?
First, a quick note: success(...) is equivalent to calling complete(Success(...)), and complete(...) is equivalent to tryComplete(...), except that it throws an IllegalStateException instead of returning false when the promise has already been completed.
In the docs it says
As mentioned before, promises have single-assignment semantics. As such, they can be completed only once. Calling success on a promise that has already been completed (or failed) will throw an IllegalStateException.
A promise can only be completed once. Digging into the source code, DefaultPromise extends AtomicReference (i.e. it is thread-safe), so all writes are atomic. This means that if two threads race to complete a promise, only one of them can ever succeed, and it will be whichever got there first. The other will throw an IllegalStateException (or simply return false, if it used tryComplete).
Here's a small example of what happens when you try to complete a promise twice.
https://scastie.scala-lang.org/hTYBqVywSQCl8bFSgQI0Sg
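A minimal sketch along the same lines, showing both outcomes of a second write:

import scala.concurrent.Promise
import scala.util.Success

val p = Promise[Int]()

println(p.tryComplete(Success(1))) // true: this write won
println(p.tryComplete(Success(2))) // false: already completed, no exception
println(p.future.value)            // Some(Success(1)) - the first write stuck

p.complete(Success(3))             // throws IllegalStateException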
Though apparently one can circumvent the immutability of a Future by doing a bunch of weird casting acrobatics.
https://contributors.scala-lang.org/t/defaultpromise-violates-encapsulation/3440
One should probably avoid that.
If I have a function call that returns true or false as the condition of an if statement, do I have to worry about Swift forking my code for efficiency?
For example,
if resourceIsAvailable() { /* execute code */ }
If checking for the resource is computationally expensive, will the program wait, or attempt to continue on with the code?
Is this worth using a completion handler for?
What if the resource check must make a database call?
Good question.
First off... can a function be used? Absolutely.
Second... should it be used?
A lot of that depends on the implementation of the function. If the function is known (to the person who wrote it) to take a long time to complete then I would expect that person to deal with that accordingly.
Thankfully, on iOS a lot of things like that are taken out of the developer's hands (mostly). Core Data and network requests normally come with a completion handler, so any function that uses them would also need to be async and have a completion handler.
There is no fixed rule for this. My best advice would be...
If you can see the implementation of the function then try to work out what it’s doing.
If you can't, then give it a go. You could even use the Time Profiler instrument in Xcode to determine how long it takes to complete.
The worst that could happen is you find it is slow and then change it for something else.
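The question is about Swift, but the shape of the pattern is language-agnostic. Here is a sketch in Scala (the language used elsewhere on this page), with a hypothetical, deliberately slow resourceIsAvailable() standing in for the expensive check:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// hypothetical expensive check, e.g. one that makes a database call
def resourceIsAvailable(): Boolean = { Thread.sleep(1000); true }

// completion-handler style: run the check off the caller's thread and
// hand the answer to a callback instead of blocking at the if statement
def checkResource(onResult: Boolean => Unit): Unit =
  Future(resourceIsAvailable()).foreach(onResult)

checkResource { available =>
  if (available) println("execute code")
}

Thread.sleep(1500) // demo only: keep the process alive until the callback fires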
I'm new to Scala and I'm trying to make sense of what some code is doing in a codebase I want to make updates to.
Removing some of the specifics, the chunk I don't understand is this:
I've seen some Scala code that does things like:
val someA = something.createSomeA(....)
Future {
someA.doSomething1(....)
someA.doSomething2(.....)
}
// then log some things unrelated to the future
someA
// end of func
I don't really understand what the Future is doing in this case, as it's not assigned to anything. Could someone explain?
I know the details depend on what the doSomethings are actually doing, but could someone explain generally what this would be for? I’m only familiar with the use of Futures when they’re assigned to a variable and then checked for completion in some way at a later point.
Help would be appreciated!! (Sorry for poor formatting, I’m doing this from my phone)
Three words for you: "fire and forget".
If you understand the case when the future is assigned to a variable and then checked/transformed later, then you already know what's happening here: the insides of the Future are being executed asynchronously.
The only difference is that in this case it is never accessed again. Why? Probably because nobody cares. Some operations return a result when they complete that can be used later; others do not.
For example, if I wanted to print out a log message asynchronously, I'd write something like Future { logger.info(mymessage) } without assigning it to anything. Why? Well, I don't really care when (or even if) it completes. There is no return value I could use, and, if it fails ... well, I don't have any meaningful way to handle that, other than ignoring the error.
For an operation like this, I don't need to wait for it to complete, since it doesn't return anything useful back to me anyway. So, I can just start it, and forget. No need to assign it to anything.
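A self-contained sketch of the pattern, with logInfo standing in for whatever logger the real code uses:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def logInfo(msg: String): Unit = println(s"INFO: $msg") // stand-in logger

// fire and forget: start the side effect, never touch the returned Future
Future { logInfo("my message") }

// if a silently swallowed failure would bother you, attach a handler:
Future { throw new RuntimeException("boom") }
  .failed.foreach(e => logInfo(s"async work failed: ${e.getMessage}"))

Thread.sleep(100) // demo only: let the tasks run before the process exits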
While re-reading scala-lang.org's page detailing Futures here, I stumbled upon the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future may or may not execute the callbacks. But because one callback is not playing nicely, the rest should not be penalized, I think.
One possibility I can think of is the way the ExecutionContext is configured. If it is configured with a single thread, then this may happen, but that is a specific behaviour and not the generally expected one.
Am I missing something obvious here?
Callbacks are executed within an ExecutionContext, which has a limited number of threads - if the limit is not imposed by the specific context implementation, then it is by the underlying operating system and/or hardware itself.
Let's say your system's limit is OS_LIMIT threads, and you create OS_LIMIT + 1 callbacks. Of those, OS_LIMIT callbacks immediately get a thread each - and none of them ever terminates.
How can you guarantee that the remaining callback ever gets a thread?
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly, it enables developers to "bake" the necessary information about handler/task characteristics directly into their programs, rather than relying on some obscure piece of language functionality (which may change from version to version).
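For completeness, here is what the blocking construct from the quoted docs looks like in use; the default global ExecutionContext treats it as a hint to grow its thread pool rather than let a blocked callback starve the others:

import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

Future {
  blocking {
    // marks this region as potentially blocking; the global pool may
    // spin up extra threads so other callbacks still get to run
    Thread.sleep(5000)
  }
}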