org.eclipse.core.commands.IHandler.isEnabled()
org.eclipse.core.commands.IHandler.isHandled()
org.eclipse.ui.handlers.IHandlerService.activateHandler(String, IHandler, Expression)
According to the above methods, an IHandler has three independent state flags: enabled or not, handled or not, and activated or not. So many overlapping states are confusing. I have read some of the source code and documentation, but I still don't know enough to distinguish them well.
What are they used for? Is there a correlation?
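For concreteness, here is a minimal handler sketch showing where each of those three notions appears in code (this assumes the Eclipse workbench handler API; the command id "com.example.commands.doSomething" and the class name are made up for illustration):

```java
import org.eclipse.core.commands.AbstractHandler;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.handlers.IHandlerService;

public class ExampleHandler extends AbstractHandler {

    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        // Runs when the command is executed and this handler is the active one.
        return null;
    }

    // Drives the value returned by isEnabled(); AbstractHandler fires the
    // corresponding handler-changed event when the flag changes.
    public void updateEnablement(boolean canRun) {
        setBaseEnabled(canRun);
    }

    // "Activation" registers the handler with the handler service for a
    // command id; the three-argument overload additionally takes an
    // Expression that restricts when the activation applies.
    public static void register() {
        IHandlerService service = (IHandlerService) PlatformUI.getWorkbench()
                .getService(IHandlerService.class);
        service.activateHandler("com.example.commands.doSomething", new ExampleHandler());
    }
}
```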
These terms are difficult to search for online, so I can't find any information about them besides the docs, which to me read almost identically (especially for as and to).
What's the difference between as(), to() and compose() in RxJava2? When should I use any of them?
to and as are practically the same. The difference is that to uses the broader Function interface, while as uses the dedicated XConverter interface; the former can't be implemented for multiple reactive types. Issue, PR.
The difference between to/as and compose is that the former let you turn the sequence into an arbitrary result type at assembly time, whereas the latter can only turn it into the same reactive type, possibly with different type argument(s).
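A small sketch of the distinction (assuming RxJava 2.x with Observable; as() is the converter-based variant added later in the 2.x line, and the values here are arbitrary):

```java
import io.reactivex.Observable;

public class ToAsComposeDemo {
    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(1, 2, 3);

        // compose: stays inside the same reactive type, possibly changing the
        // type argument (Observable<Integer> -> Observable<String>).
        Observable<String> asStrings =
                source.compose(upstream -> upstream.map(String::valueOf));
        asStrings.subscribe(System.out::println);

        // to: turns the whole Observable into an arbitrary result at assembly
        // time, here a plain Long, via the general Function interface.
        Long count = source.to(obs -> obs.count().blockingGet());

        // as: same idea, but via the dedicated ObservableConverter interface.
        Long countAgain = source.as(obs -> obs.count().blockingGet());

        System.out.println(count + " " + countAgain);
    }
}
```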
It seems easy to find general info on a specific 'operator' (method, syntactic sugar), but I can't seem to find anything that lists all, or even just most, of these goodies. That makes it fairly difficult, or at least overly time consuming, to work through learning the language.
I have already looked over this question. While it has great information, and definitely shows you how to find any information you need, I was hoping for something like a 'pocket ref' that just had all the relevant info and was only dedicated to that.
So, my question is this:
Is there such a list?
Am I getting ahead of myself by looking for such a reference early on in learning the language?
Thanks in advance.
Well, a list of all operators makes about as much sense as a list of all methods in the library, regardless of the type they are defined on. It isn't going to be particularly useful except for finding information about a specific operator.
However, if you do want one, at any ScalaDoc site (http://www.scala-lang.org/api/current/ for the standard library) there is an alphabetic index just under the search bar. The first link (#) lists all the non-alphabetic methods (i.e. "operators").
Many of these are rarely used, or only in specific circumstances.
Obviously, any other library can introduce its own operators, and you'll need to check its own documentation.
I've run into a frustrating feature of KVO: all notifications are funneled through a single method (observeValueForKeyPath:....), requiring a chain of if statements when an object observes numerous properties.
The ideal solution would be to pass a method as an argument to the method that establishes the observing in the first place, but it seems this isn't possible. Does a solution to this problem exist? I initially considered using the keyPath argument (addObserver:forKeyPath:options:context:) to call a method via NSSelectorFromString, but then I came across the post KVO Dispatcher pattern with Method as context and the article it links to, which offers a different solution that also passes arguments along (although I haven't gotten that working yet).
I know a lot of people have come up against this issue. Has a standard way of handling it emerged?
OP asks:
Has a standard way of handling it emerged?
No, not really. There are a lot of different approaches out there. Here are some:
https://github.com/sleroux/KVO-Blocks
http://pandamonia.github.io/BlocksKit
http://www.mikeash.com/pyblog/friday-qa-2012-03-02-key-value-observing-done-right-take-2.html
https://github.com/ReactiveCocoa/ReactiveCocoa
http://blog.andymatuschak.org/post/156229939/kvo-blocks-block-callbacks-for-cocoa-observers
Seriously, there are a ton of these... Google "KVO blocks"
I can't say that any of the options I've seen seems prevalent enough to earn the title "standard way". I suspect most folks who feel motivated to conquer this issue just pick one and go with it, or write their own -- it's not as if adapting KVO to use block-based callbacks is rocket science. The Method-based approach you link to doesn't seem like a step forward for simplicity. I get that you're trying to take the uncertainty of the string-based-key-path <-> method conversion out of the equation, but that kind of falls down because not all observable keys/keyPaths are methods. (If nothing else, you can observe arbitrary keys on NSMutableDictionaries and get notifications.)
It sure would be nice if Apple would release a new block-based KVO API, but I'm not holding my breath. In the meantime, like I said, just pick one you like and use it, or write your own and use that.
I'm struggling with the size of output files for large Modelica models. Of course, I can protect some objects in order to remove them completely from the result file. However, that gives rise to two problems:
it's not possible to redeclare protected objects
if I want to test my model in detail (e.g. for a short time period), I need to make those objects public again in order to see their variables
I wonder if there's a trick to set the 'verbosity' of a Modelica model. Maybe what I would like is a third keyword next to public and protected, e.g. transparent. Then, when setting up a simulation, I want to be able to set the verbosity level to 1 or 2, with the following effect:
1 --> treat all transparent elements as protected
2 --> treat all transparent elements as public
This effect would propagate to all models and submodels.
I don't think this already exists. But is there an easy workaround?
Thanks,
Roel
As Michael Tiller wrote above, this is not handled the same way in all Modelica tools and there is no definite answer. To give an OpenModelica-specific answer: it's possible to use simulate(ModelName, outputFilter="regex") to store only the variables that fully match the given regex (the default is .*, matching any variable).
Roel,
I know several people wrestling with this issue. At the moment, all of this depends on the tool being used. I don't know how other tools handle filtering of results, but in Dymola you control it (as you point out) by giving the signals special qualifiers (e.g. protected).
One thing I've done in the past is to extend from a model and then add a bunch of output signals for the things I'm interested in. Then you can select "Outputs" in Dymola to make sure those get into the results file. This is far from perfect because a) listing everything you want can get tedious and b) referencing protected variables is not strictly allowed (although Dymola lets you get away with it, it issues a warning).
At Dassault, we are actively discussing this idea and hope to provide some better functionality along these lines. It isn't clear whether such functionality will be strictly tool specific or whether it will involve the language somehow. But if it is language related, we will (of course) work with the design group to formulate a specification that other tool vendors can support as well.
In SystemModeler, you go to the Settings tab in the Experiment Browser in Simulation Center. Click on Output at the bottom and select which variables to store.
(The options are state variables, derivatives, algebraic variables, parameters, and protected variables. If you also mark the Store simulation log option, you'll get some interesting statistics on events over time and function evaluations, which opens another way to track down the parts of the simulation and model that cause more evaluations.)
I am not sure if this helps you, but in Dymola you can go to Simulation->Setup->Output and mark a checkbox saying "Store Protected variables". That way it is possible to declare most variables as protected: during normal simulation they are not stored, but when debugging your model, you just mark that checkbox and they are stored.
Of course that is not the same as your suggested keyword transparent, but maybe it helps a little...
A bit late, but in Dymola 2013 FD01 and later you can select which variables to store based on names (and model names) using the annotation __Dymola_selections, and you can even filter on user-defined tags, so you could create a tag named "transparent" in the model. See "Matching and variable selections" in the manual.
With the growth of dynamically typed languages, which give us more flexibility, it is very likely that people will write programs that go beyond what the specification allows.
My thinking was influenced by this question, when I read the answer by bobince:
A question about JavaScript's slice and splice methods
The basic thought is that splice, in JavaScript, is specified for use in only certain situations, but it can be used in others, and there is nothing the language can do to stop that, since the language is designed to be extremely flexible.
Unless someone reads through the specification and decides to adhere to it, I am fairly certain that many such violations are occurring.
Is this a problem, or a natural extension of writing such flexible languages? Or should we expect tools like JSLint to act as the specification police?
I liked one answer to this question: that the implementation of Python is the specification. I am curious whether that is actually closer to the truth for these kinds of languages; basically, if the language allows you to do something, then it is in the specification.
Is there a Python language specification?
UPDATE:
After reading a couple of comments, I checked the splice method in the spec, and this is what I found at the bottom of p. 104 of http://www.mozilla.org/js/language/E262-3.pdf. It appears that I can use splice on the array of children without violating the spec. I just don't want people to get bogged down in my example, but rather to consider the question.
The splice function is intentionally generic; it does not require that its this value be an Array object. Therefore it can be transferred to other kinds of objects for use as a method. Whether the splice function can be applied successfully to a host object is implementation-dependent.
UPDATE 2:
I am not interested in this being about JavaScript, but in language flexibility and specs. For example, I expect that the Java spec specifies you can't put code into an interface, but using AspectJ I do that frequently. This is probably a violation, but the writers didn't predict AOP, and the tool was flexible enough to be bent to this use, just as the JVM is also flexible enough to host Scala and Clojure.
Whether a language is statically or dynamically typed is really a tiny part of the issue here: a statically typed one may make it marginally easier for code to enforce its specs, but marginally is the key word.
Only "design by contract" (a language letting you explicitly state preconditions, postconditions and invariants, and enforcing them) can guard you against users of your libraries empirically discovering exactly what the library will let them get away with, and taking advantage of those discoveries to go beyond your design intentions (possibly constraining your future freedom to change the design or its implementation). And design by contract is not supported in mainstream languages; Eiffel is the closest to it, and few would call Eiffel "mainstream" nowadays, presumably because its costs (mostly, inevitably, at runtime) don't appear to be justified by its advantages.
"Argument x must be a prime number", "method A must have been previously called before method B can be called", "method C cannot be called any more once method D has been called", and so on: the typical kinds of constraints you'd like to state (and have enforced implicitly, without having to spend substantial programming time and energy checking for them yourself) just don't lend themselves well to being framed in terms of what little a statically typed language's compiler can enforce.
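To make that concrete, here is a rough Java sketch (purely illustrative; none of these class or method names come from a real library) of what enforcing such constraints by hand looks like when the language has no contract support, i.e. exactly the checking effort the paragraph above says you'd rather not spend:

```java
import java.math.BigInteger;

// Hand-rolled "contracts": each constraint must be checked manually at runtime
// because the type system cannot express it. All names here are hypothetical.
public class Session {
    private boolean opened = false;
    private boolean closed = false;

    // "Argument x must be a prime number"
    public void usePrime(int x) {
        if (x < 2 || !BigInteger.valueOf(x).isProbablePrime(30)) {
            throw new IllegalArgumentException("x must be prime, got " + x);
        }
        // ... actual work ...
    }

    // "Method A must have been previously called before method B can be called"
    public void open() {            // method A
        opened = true;
    }

    public void send(String msg) {  // method B
        if (!opened) {
            throw new IllegalStateException("open() must be called before send()");
        }
        // ... actual work ...
    }

    // "Method C cannot be called any more once method D has been called"
    public void close() {           // method D
        closed = true;
    }

    public void query() {           // method C
        if (closed) {
            throw new IllegalStateException("query() may not be called after close()");
        }
        // ... actual work ...
    }
}
```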
I think that this sort of flexibility is an advantage as long as your methods are designed around well-defined interfaces rather than some artificial external "type" metadata. Most of the array functions only expect an object with a length property. The fact that they can all be applied generically to lots of different kinds of objects is a boon for code reuse.
The goal of any high-level language design should be to reduce the amount of code that needs to be written in order to get stuff done, without harming readability too much. The more code that has to be written, the more bugs get introduced. Restrictive type systems can be (if not well designed) a pervasive lie at worst and a premature optimisation at best. I don't think overly restrictive type systems aid in writing correct programs, because a type is merely an assertion, not necessarily based on evidence.
By contrast, the array methods examine their input values to determine whether they have what they need to perform their function. This is duck typing, and I believe it is more scientific and "correct", and it results in more reusable code, which is what you want. You don't want a method rejecting your inputs because they don't have their papers in order. That's communism.
I do not think your question really has much to do with dynamic vs. static typing. Really, I can see two cases: on one hand, there are things like Duff's device that martin clayton mentioned; that usage is extremely surprising the first time you see it, but it is explicitly allowed by the semantics of the language. If there is a standard, that kind of idiom may appear in later editions of the standard as a specific example. There is nothing wrong with these; in fact, they can (unless overused) be a great productivity boost.
The other case is that of programming to the implementation. Such a case would be an actual abuse, coming from either ignorance of a standard, or lack of a standard, or having a single implementation, or multiple implementations that have varying semantics. The problem is that code written in this way is at best non-portable between implementations and at worst limits the future development of the language, for fear that adding an optimization or feature would break a major application.
It seems to me that the original question is a bit of a non sequitur. If the specification explicitly allows a particular behavior (as MUST, MAY, SHALL or SHOULD), then any compiler/interpreter that allows/implements that behavior is, by definition, compliant with the language. This would seem to be the situation described by the OP in the comments section: the JavaScript specification supposedly* says that the function in question MAY be used in different situations, and thus it is explicitly allowed.
If, on the other hand, a compiler/interpreter implements or allows behavior that is expressly forbidden by a specification, then the compiler/interpreter is, by definition, operating outside the specification.
There is yet a third scenario, and an associated, well-defined term for those situations where the specification does not define a behavior: undefined. If the specification does not actually specify a behavior for a particular situation, then the behavior is undefined and may be handled either intentionally or unintentionally by the compiler/interpreter. It is then the responsibility of the developer to realize that the behavior is not part of the specification and that, should s/he choose to leverage the behavior, the application becomes dependent upon the particular implementation. The interpreter/compiler providing that implementation is under no obligation to maintain the officially undefined behavior beyond backwards compatibility and whatever commitments its producer may make. Furthermore, a later iteration of the language specification may define the previously undefined behavior, forcing the compiler/interpreter either to (a) remain non-compliant with the new iteration, or (b) release a new patch/version to become compliant, thereby breaking older versions.
* "supposedly" because I have not seen the spec, myself. I go by the statements made, above.