When linting Dart files, one of the checks that seems relatively standard (and is part of Google's pedantic style) is unnecessary_this, which favors omitting the explicit this keyword except when it is actually needed, i.e., when the instance variable is shadowed.
I come from more of a Java/Python background, where (in Java) the convention seems to favor explicit use of this., and there is a fairly typical Checkstyle check, RequireThis, to enforce it. So I'm wondering about the rationale behind Dart's preference for the opposite style. Java and Dart seem to have similar semantics for an implicit this, so why are the standard preferences opposed to each other?
In the unnecessary_this docs, it says:
From the style guide:
DON'T use this when not needed to avoid shadowing
However, the linked style guide does not mention it or provide any rationale.
I ask because I'd like a check that is the exact opposite of unnecessary_this, but there doesn't seem to be one, so I'm curious whether there's something about Dart I don't know that explains the preference for an implicit this.
Dart's style guide focuses a lot on "don't write what is not necessary", and unnecessary_this is one expression of that principle.
The overall rationale is that removing the obviously redundant bits decreases visual noise and therefore increases readability (while also being easier to type).
The only reason I can think of for wanting a "require this" lint is to avoid getting confused by variable shadowing. But in that case, you might as well have a lint for "don't shadow variables" instead.
Before you relieve the itching in your fingertips, I already understand:
how and when to use the try keyword
the differences between the try, try?, and try! keywords
What I want to understand is what the use of the unadorned try keyword buys me (and you and all of us) over and above merely quieting a compiler diagnostic. We're already inside the scope of a do, and clearly the compiler knows to demand a try, and I can't (yet) see how there might be some ambiguity about where the try needs to land. So why can't the compiler quietly do the right thing without the explicit appearance of the keyword?
There's been a fair amount of discussion (below) about the possibility that the language is trying to enforce readability for humans. I guess we'd need the input from one of the Swift language designers to determine whether that's true. And even if we had that it would be debatable whether it's wise and/or has been a success. So let's put that aside for the moment. Does the existence of the unadorned try keyword solve some problem other than enforcing readability for humans?
After a long, productive discussion (linked elsewhere on this page)…
In short, the answer is no, there isn't a purpose other than enforcing readability, but it turns out the readability win is more significant than I had realized.
The try keyword should be seen as akin to (though not the equivalent of) a combination of if and goto. Although try doesn't direct the compiler to do anything it could not have inferred it should do, no one would argue that an if or a goto should be invisible. This makes try a little weird for folks coming from other languages — but not unreasonably so.
It may be difficult for Objective-C programmers to grasp this because they are accustomed to assuming that almost anything they do may raise an exception. Of course, Objective-C exceptions are very different from Swift errors, but knowing this consciously is different from metabolizing it and knowing it unconsciously.
Likewise, if your intuition immediately tells you that, as a matter of style, there should usually be only one failable operation inside a do clause, it may be difficult to see what value try adds.
I am writing a macro to get the enclosing val/var definition. I can get the enclosing val/var symbol, but I cannot get the defining tree. One answer here suggested using enclosingClass:
https://stackoverflow.com/a/18451114/11989864
But all of the enclosing-tree-style APIs are deprecated:
https://www.scala-lang.org/api/2.13.0/scala-reflect/scala/reflect/macros/blackbox/Context.html
Is there a way to implement the functionality of enclosingClass? Or to get a tree from a symbol?
The reasons for the deprecation are:
Starting from Scala 2.11.0, the APIs to get the trees enclosing by
the current macro application are deprecated, and the reasons for that
are two-fold. Firstly, we would like to move towards the philosophy of
locally-expanded macros, as it has proven to be important for
understanding of code. Secondly, within the current architecture of
scalac, we are unable to have c.enclosingTree-style APIs working
robustly. Required changes to the typechecker would greatly exceed the
effort that we would like to expend on this feature given the
existence of more pressing concerns at the moment. This is somewhat
aligned with the overall evolution of macros during the 2.11
development cycle, where we played with c.introduceTopLevel and
c.introduceMember, but at the end of the day decided to reject them.
If you're relying on the now deprecated APIs, consider using the new
c.internal.enclosingOwner method that can be used to obtain the names
of enclosing definitions. Alternatively try reformulating your macros
in terms of completely local expansion...
https://www.scala-lang.org/api/2.13.0/scala-reflect/scala/reflect/macros/Enclosures.html
Regarding getting a tree from a symbol:
there's no standard way to go from a symbol to a defining tree
https://stackoverflow.com/a/13768595/5249621
Why do you need a def macro to get the enclosing val/var definition?
Maybe macro annotations would be enough:
https://docs.scala-lang.org/overviews/macros/annotations.html
I have a swiftlint warning that bothers me.
warning: Nesting Violation: Types should be nested at most 1 level deep (nesting)
However, the nesting of structs is an established programming technique, and quite a few people advocate it.
Edit:
Indeed, @vadian points out the Swift language guide's rule: "To nest a type within another type, write its definition within the outer braces of the type it supports. Types can be nested to as many levels as are required."
I am aware that it clashes with the use of generics, and that Xcode may become unbearably slow. That slowness (found by measuring the slowest compilation spots) is actually what got me looking at this nesting rule in the first place.
What is the reason for the lint rule, and what is the good practice in that respect? Please point out the technical reasons, rather than purely opinion-based advice.
Microsoft actually has a page about nested types and when they are appropriate. While it is not targeted at Swift, it does offer some interesting language-agnostic considerations.
After much searching, all I've found is @jpsim's remark that "the idea behind the nesting rule is to avoid complex interfaces".
Therefore, apart from the compiler issues outlined in my question, which will eventually subside, there does not seem to be any technical reason for this rule.
This is really a question of precedence: which guideline is preferred in C++, avoiding pointers or avoiding #includes in header files?
"Don't Use #include in header files."
There seems to be some ambiguity based on my research. In this SO question, the top answer says "...make sure you actually need an include, [don't use one] when a forward declaration or even leaving it out completely will do." (From Header files and include best practice)
And this article explains the negative effect excess header inclusions can have on compile-time: http://blog.knatten.org/2012/11/09/another-reason-to-avoid-includes-in-headers/
There is also this tutorial, which states, "...you should try to put all of your code in the CPP class and only the class declaration in the HPP file.": https://github.com/LaurentGomila/SFML/wiki/Tutorial%3A-Basic-Game-Engine#wiki-declarations
"Don't Use Pointers."
But there is also evidence that pointers should usually be avoided as well:
c++: when to use pointers?
https://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c
Which preference takes precedence?
If my understanding of avoiding #includes in header files is correct, it can easily be done by changing things like class members to pointers so that I can use forward declarations instead. But is that a good idea for class members whose lifetime lasts only as long as the class itself?
It's not really a case of "one or the other". Both statements are true, but you need to understand the reasoning behind them.
tl;dr: Use forward declarations where possible to reduce compile time. Use stack objects or references as much as possible, and pointers only in rare cases.
"Don't Use #include in header files."
This is a rather general statement which, taken as is, would be wrong. The more important point behind it is: "Use forward declarations wherever possible." Includes in header files are not bad per se, but they often aren't needed either.
Forward declarations can be used if the included type/class is only referred to through a pointer (or reference) in the declaration within the given header. A forward declaration simply tells the compiler: "Somewhere along the way you'll find the actual definition of type X." The include can even be removed entirely if the type isn't used in the declaration at all. The reason this works is that the compiler doesn't need to know anything about those types to calculate the required memory layout of the new type; a pointer, for example, always has the same size. Including the file in the header anyway only wastes processing power, since the compiler has to open and parse it, adding expensive seconds to the compile time. So in most cases you'll do yourself a favor by removing unnecessary includes from header files and using forward declarations instead.
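To make that concrete, here is a minimal sketch of the idea. The Sprite and Texture names are hypothetical (not taken from the linked tutorial): the header only refers to Texture through a pointer or reference, so a forward declaration is enough, and the include moves into the implementation file.

    // Sprite.hpp -- hypothetical example: Texture only appears as a
    // pointer/reference here, so a forward declaration is enough and
    // "Texture.hpp" does not have to be included in this header.
    #pragma once

    class Texture;  // forward declaration: "the real definition lives elsewhere"

    class Sprite {
    public:
        explicit Sprite(Texture& texture);
        void setTexture(Texture& texture);
    private:
        Texture* m_texture;  // a pointer always has a known size
    };

    // Sprite.cpp -- the implementation file is where the full definition of
    // Texture would actually be needed, so that is where the include goes.
    #include "Sprite.hpp"
    #include "Texture.hpp"

    Sprite::Sprite(Texture& texture) : m_texture(&texture) {}
    void Sprite::setTexture(Texture& texture) { m_texture = &texture; }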
For the sake of completeness: forward declarations are strictly required if you have circular references (class A depends on class B, which depends on class C, which depends on class A). However, circular references can also reveal bad design and/or outdated coding standards, which leads us to the second topic.
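A minimal sketch of such a cycle, again with made-up class names, where a forward declaration is the only way to break the mutual includes:

    // A.hpp -- A and B refer to each other, so the two headers cannot simply
    // include one another; at least one side has to use a forward declaration
    // and a pointer/reference member.
    #pragma once
    class B;  // forward declaration breaks the cycle

    class A {
        B* peer;   // fine: only the size of a pointer is needed
    };

    // B.hpp -- B stores an A by value, so it needs the complete type.
    #pragma once
    #include "A.hpp"

    class B {
        A owner;   // requires the full definition of A
    };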
"Don't use pointers."
Again, the statement is a tiny bit too general. One might rather say: "Don't use raw pointers."
With C++11, and soon C++1y, the language itself has changed a lot. As many bad C++ books as the world has seen, there are even more outdated ones floating around nowadays (here's a good list, however). While in the past we were mostly stuck with raw pointers, new, and delete for memory management, we've since evolved better, more readable, less risky, and memory-leak-free ways to manage data. One of the magic words is RAII; since you linked something from SFML above, here's a nice demonstration of the power of RAII. I see many people use pointers, new, and delete just because, or maybe because they're thinking in Java or C# terms, where objects are instantiated with the new keyword. In C++, however, objects don't need new to be allocated, and it's usually preferable to create things on the stack instead of the heap. This works for many, many things, especially when using STL containers, which hide the dynamic memory management in the background. Using the heap is in most cases only preferable if the data needs to be dynamic, needs to outlive the local scope, or there's a lot of it. When you do use the heap, make sure to use smart pointers such as std::unique_ptr or std::shared_ptr, depending on the use case, but certainly not raw pointers. In modern C++, raw pointers should never own an object. There are cases where it's okay to return a raw pointer that merely references an object, but there's really no reason in modern C++ to store the result of new in a raw pointer.
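As a rough illustration of the contrast (the Enemy type is made up):

    #include <memory>
    #include <vector>

    struct Enemy { int hp = 100; };  // hypothetical type

    void oldStyle() {
        Enemy* e = new Enemy();  // raw owning pointer: leaks on an early return or exception
        // ...
        delete e;                // manual cleanup required
    }

    void modernStyle() {
        Enemy local;                              // stack object, destroyed automatically
        auto boss = std::make_unique<Enemy>();    // heap object with a single owner (RAII)
        std::vector<Enemy> wave(10);              // containers manage their heap storage for you
        // everything above is released automatically when the scope ends
    }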
Let's get back to the original question, though. "Don't use raw pointers" is essentially a design guideline and quite unrelated to the whole header issue. While there might be some cases where you have to switch to pointers because of circular references, the use of forward declarations is otherwise just about compilation time (and maybe cleaner code); it's not as essential to the programming itself.
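As for the follow-up question about class members whose lifetime lasts only as long as the class itself: holding them by value is the simplest option, and if you also want to keep their header out of yours, the usual compromise is a std::unique_ptr plus a forward declaration (the pimpl-style trade-off). A sketch with a hypothetical Engine class:

    // Game.hpp -- Engine lives exactly as long as Game, but its header stays
    // out of this one thanks to the forward declaration.
    #pragma once
    #include <memory>

    class Engine;  // forward declaration

    class Game {
    public:
        Game();
        ~Game();   // must be defined in Game.cpp, where Engine is a complete type
    private:
        std::unique_ptr<Engine> engine;  // owned for exactly the lifetime of the Game
    };

    // Game.cpp
    #include "Game.hpp"
    #include "Engine.hpp"

    Game::Game() : engine(std::make_unique<Engine>()) {}
    Game::~Game() = default;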
In short: don't switch to raw pointers just to avoid includes in header files; use forward declarations wherever possible and smart pointers as much as possible.
With the growth of dynamically typed languages, which give us more flexibility, it is very likely that people will write programs that go beyond what the specification allows.
My thinking was influenced by this question, when I read the answer by bobince:
A question about JavaScript's slice and splice methods
The basic thought is that splice, in JavaScript, is specified to be used only in certain situations, but it can be used in others, and there is nothing the language can do to stop that, since the language is designed to be extremely flexible.
Unless someone reads through the specification and decides to adhere to it, I am fairly certain that many such violations are occurring.
Is this a problem, or a natural consequence of writing such flexible languages? Or should we expect tools like JSLint to act as the specification police?
I liked one answer to this question, which said that the implementation of Python is the specification. I am curious whether that is actually closer to the truth for these types of languages: basically, if the language allows you to do something, then it is in the specification.
Is there a Python language specification?
UPDATE:
After reading a couple of comments, I thought I would check the splice method in the spec, and this is what I found at the bottom of page 104 of http://www.mozilla.org/js/language/E262-3.pdf, so it appears I can use splice on the array of children without violating the spec. I just don't want people to get bogged down in my example; hopefully they will consider the broader question.
The splice function is intentionally generic; it does not require that its this value be an Array object.
Therefore it can be transferred to other kinds of objects for use as a method. Whether the splice function
can be applied successfully to a host object is implementation-dependent.
UPDATE 2:
I am not interested in this being about JavaScript specifically, but about language flexibility and specs. For example, I expect that the Java spec says you can't put code into an interface, yet using AspectJ I do that frequently. This is probably a violation, but the writers didn't predict AOP, and the tool was flexible enough to be bent to this use, just as the JVM is flexible enough to host Scala and Clojure.
Whether a language is statically or dynamically typed is really a tiny part of the issue here: a statically typed one may make it marginally easier for code to enforce its specs, but marginally is the key word. Only "design by contract" (a language letting you explicitly state preconditions, postconditions, and invariants, and enforcing them) can help guard you against users of your libraries empirically discovering exactly what the library will let them get away with, and taking advantage of those discoveries to go beyond your design intentions (possibly constraining your future freedom to change the design or its implementation). And design by contract is not supported in mainstream languages (Eiffel is the closest to it, and few would call Eiffel "mainstream" nowadays), presumably because its costs (mostly, inevitably, paid at runtime) don't appear to be justified by its advantages. "Argument x must be a prime number", "method A must have been called before method B can be called", "method C cannot be called any more once method D has been called", and so on: the typical kinds of constraints you'd like to state (and have enforced implicitly, without spending substantial programming time and energy checking for them yourself) just don't lend themselves well to being framed in terms of what little a statically typed language's compiler can enforce.
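To make that kind of constraint concrete, here is a tiny hand-rolled sketch (written in C++ purely for illustration; the Session class and its rules are hypothetical) of the sort of checking you end up writing, and paying for at runtime, yourself when the language offers no contract support:

    #include <stdexcept>

    // Hypothetical protocol: open() must precede send(), and nothing may be
    // called after close(). Without language-level contracts, every rule is
    // an explicit runtime check the author has to write and maintain.
    class Session {
    public:
        void open() {
            if (closed) throw std::logic_error("open() may not be called after close()");
            opened = true;
        }
        void send() {
            if (!opened || closed) throw std::logic_error("send() requires open() first and no close() yet");
            // ... transmit ...
        }
        void close() { closed = true; }
    private:
        bool opened = false;
        bool closed = false;
    };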
I think that this sort of flexibility is an advantage as long as your methods are designed around well defined interfaces rather than some artificial external "type" metadata. Most of the array functions only expect an object with a length property. The fact that they can all be applied generically to lots of different kinds of objects is a boon for code reuse.
The goal of any high-level language design should be to reduce the amount of code that needs to be written in order to get stuff done, without harming readability too much. The more code that has to be written, the more bugs get introduced. Restrictive type systems can be (if not well designed) a pervasive lie at worst and a premature optimisation at best. I don't think overly restrictive type systems aid in writing correct programs, because a type is merely an assertion, not necessarily based on evidence.
By contrast, the array methods examine their input values to determine whether they have what they need to perform their function. This is duck typing, and I believe that this is more scientific and "correct", and it results in more reusable code, which is what you want. You don't want a method rejecting your inputs because they don't have their papers in order. That's communism.
I do not think your question really has much to do with dynamic vs. static typing. Really, I can see two cases: on one hand, there are things like Duff's device that martin clayton mentioned; that usage is extremely surprising the first time you see it, but it is explicitly allowed by the semantics of the language. If there is a standard, that kind of idiom may appear in later editions of the standard as a specific example. There is nothing wrong with these; in fact, they can (unless overused) be a great productivity boost.
The other case is that of programming to the implementation. Such a case would be an actual abuse, coming from either ignorance of a standard, or lack of a standard, or having a single implementation, or multiple implementations that have varying semantics. The problem is that code written in this way is at best non-portable between implementations and at worst limits the future development of the language, for fear that adding an optimization or feature would break a major application.
It seems to me that the original question is a bit of a non sequitur. If the specification explicitly allows a particular behavior (as MUST, MAY, SHALL, or SHOULD), then any compiler/interpreter that allows or implements the behavior is, by definition, compliant with the language. This would seem to be the situation proposed by the OP in the comments section: the JavaScript specification supposedly* says that the function in question MAY be used in different situations, and thus it is explicitly allowed.
If, on the other hand, a compiler/interpreter implements or allows behavior that is expressly forbidden by a specification, then the compiler/interpreter is, by definition, operating outside the specification.
There is yet a third scenario, with an associated, well-defined term, for situations where the specification does not define a behavior: undefined. If the specification does not actually specify a behavior in a particular situation, then the behavior is undefined and may be handled either intentionally or unintentionally by the compiler/interpreter. It is then the responsibility of developers to realize that the behavior is not part of the specification, and, should they choose to leverage it, their application thereby becomes dependent upon that particular implementation. The interpreter/compiler providing that implementation is under no obligation to maintain the officially undefined behavior beyond backwards compatibility and whatever commitments its producer may make. Furthermore, a later iteration of the language specification may define the previously undefined behavior, forcing the compiler/interpreter either (a) to become non-compliant with the new iteration, or (b) to come out with a new patch/version to become compliant, thereby breaking older versions.
* "supposedly" because I have not seen the spec, myself. I go by the statements made, above.