Why do numbers outside of variable assignments compile? - swift

Why in Swift can I type numbers without assigning them to var or let, and the code will compile fine? What is this good for?
import Foundation
55
25
23+8
print("Hello, World!")
4
11
-45

What is this good for?
It isn't good for anything, in the sense that it doesn't cause anything to happen with regard to the flow of the program. It's just something you're doing for fun, if you see what I mean. The numeric value is not being captured, so in effect it is immediately thrown away, like a virtual particle that flashes into existence one moment and back out of existence the next.
The reason it is legal is that a Swift statement is an evaluatable expression. A numeric literal is an evaluatable expression, so it's legal — though useless — as an independent statement.
You can see the same thing in many other ways. This is legal:
let n = 23
n
n is an evaluatable expression, so it's legal as a separate statement. But it is useless.
EDIT In answer to your comment below: I see no reason why a useless statement should prevent a program from compiling. But in a case like this, I would agree that it might be helpful if the compiler would warn that you're doing something useless, and in fact, for at least one case of this sort of thing, I have filed a bug report requesting this.
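To make this concrete, here is a minimal sketch (the exact diagnostic wording is an assumption on my part; it varies by compiler version):

let n = 23

n          // legal; the value is evaluated and immediately thrown away
23 + 8     // same, and current compilers tend to emit an "unused expression" style warning

_ = n + 1  // assigning to the wildcard makes the discard explicit
           // and silences the warning

Assigning to _ is the idiomatic way to say "I know this value is being thrown away."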

Swift is a language that has side effects, meaning that some operations can result in a mutation of the global state of the program. The implication is that the compiler cannot simply eliminate a stand-alone statement without making extra sure that doing so will not affect the execution of the program. Due to the complexity of state in a program, this is generally not a trivial problem. Therefore, in order not to penalize users who wish to invoke functions that have side effects, a line must be drawn: Swift has chosen to let users write any kind of stand-alone statement, even ones that are obviously free of side effects (constants or constant expressions), rather than spend a lot of effort trying to differentiate among the various possibilities.
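To see why the line is drawn there, consider a minimal sketch (the counter and bump function are invented for illustration). A side-effecting call and a pure literal are syntactically the same kind of stand-alone expression statement, so the compiler cannot drop one without proving things about the other:

var counter = 0

@discardableResult
func bump() -> Int {
    counter += 1      // side effect: mutates global state
    return counter
}

bump()                // result discarded, but the mutation must still happen
55                    // provably effect-free, yet the same statement form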
There could be a compile-time warning that flags lines of code that have no effect. I don't know Swift well enough to tell you whether one exists, but if there is, you should be able to find it in the documentation.

It has to do with the fact that a number or an expression is a valid statement, so it can exist on its own. Several other languages have the same or a similar feature.

As in any language, you can utter sentences of no interest... 55 is such a Swift sentence. You may think it would be easy to eliminate useless sentences, but in general it is impossible, so why not just eliminate (at compile time) the obvious ones? Because, in general, people don't write them except by accident or as an exercise, and it isn't worth the effort. It would be easy to forbid 55, but what about n = n? Easy? Or n = 2*n - n? That one is much trickier, because you would need to implement the mathematical properties of expressions inside the compiler, plus optimization tricks for them.
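A sketch of that escalation (hypothetical snippets; no claim about what any real compiler diagnoses):

var n = 10

n = n          // useless, and cheap for a compiler to spot
n = 2 * n - n  // equally useless, but proving it requires doing algebra

print(n)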

Related

What is the purpose of the `try` keyword in Swift?

Before you relieve the itching in your fingertips, I already understand:
how and when to use the try keyword
the differences between the try, try?, and try! keywords
What I want to understand is what the use of the unadorned try keyword buys me (and you and all of us) over and above merely quieting a compiler diagnostic. We're already inside the scope of a do, and clearly the compiler knows to demand a try, and I can't (yet) see how there might be some ambiguity about where the try needs to land. So why can't the compiler quietly do the right thing without the explicit appearance of the keyword?
There's been a fair amount of discussion (below) about the possibility that the language is trying to enforce readability for humans. I guess we'd need the input from one of the Swift language designers to determine whether that's true. And even if we had that, it would be debatable whether it's wise and/or has been a success. So let's put that aside for the moment. Does the existence of the unadorned try keyword solve some problem other than enforcing readability for humans?
After a long, productive discussion (linked elsewhere on this page)…
In short, the answer is no, there isn't a purpose other than enforcing readability, but it turns out the readability win is more significant than I had realized.
The try keyword should be seen as akin to (though not the equivalent of) a combination of if and goto. Although try doesn't direct the compiler to do anything it could not have inferred it should do, no one would argue that an if or a goto should be invisible. This makes try a little weird for folks coming from other languages — but not unreasonably so.
It may be difficult for Objective-C programmers to grasp this, because they are accustomed to assuming that almost anything they do may raise an exception. Of course, Objective-C exceptions are very different from Swift errors, but knowing this consciously is different from metabolizing it and knowing it unconsciously.
As well, if your intuition immediately tells you that as a matter of style in most cases there should probably be only one failable operation inside a do clause, it may be difficult to see what value a try adds.
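To illustrate the readability point, here is a minimal sketch (ParseError and parse are invented for the example). The try pinpoints exactly which line can transfer control to the catch, much the way an explicit if or goto announces a jump:

enum ParseError: Error {
    case notANumber(String)
}

func parse(_ s: String) throws -> Int {
    guard let n = Int(s) else { throw ParseError.notANumber(s) }
    return n
}

do {
    let a = try parse("41")   // `try` marks the one call that can throw...
    let b = a + 1             // ...so the reader knows this line cannot
    print(b)
} catch {
    print("parse failed:", error)
}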

What is the advantage of saying your function should never be inlined?

I understand Swift's inlining well. I know the nuances between the four function-inlining attributes. I use @inline(__always) a lot, especially when I'm just making sugary APIs like this:
public extension String {
    @inline(__always)
    var length: Int { count }
}
I do this because there's not really a cost involved in inlining it, but there would be the cost of an extra stack frame if it weren't inlined. For less-obvious sugar, I'll lean toward @inlinable and/or @usableFromInline as needed.
However, one distinction vexes me. The two possible arguments to @inline are never and __always. Despite the lack of actual documentation, this choice of spelling acts as a sort of self-documentation, implying that if you are going to use one of these, you should lean toward never, and that __always is discouraged.
But why is this the direction the Swift language designers encourage? As far as I know, if no attribute is applied at all, then this is the behavior:
If a function (et al) is used within the module in which it's declared, the compiler might choose to inline it or not, depending on which would produce better code (by some measure)
If that function (et al) is used outside the module, its implementation is not exposed in a way that allows it to be inlined, so it is never inlined (a sketch follows this list).
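A sketch of that second point (the module and function names are invented): without @inlinable, a public function's body is not shipped with the module interface, so cross-module inlining is simply not possible:

// Imagine this lives in a module named Sugar.

@inlinable                              // body ships with the module interface,
public func double(_ x: Int) -> Int {   // so callers in other modules can inline it
    x * 2
}

public func triple(_ x: Int) -> Int {   // body stays opaque; callers outside
    x * 3                               // the module can never inline this one
}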
So, it seems most of the time, not-inlining is the default. That's fine and dandy, I have no problem with that on the surface; don't bloat the executable any more than you need to.
But then, I've never had a reason to think @inline(never) is useful. From what I understand, the only reason I would use @inline(never) is if I've noticed that the Swift compiler is choosing to inline a non-annotated function too much, and it's bloating my executable. This seems like a super-niche occurrence:
My software is running fine
The Swift compiler's algorithm for deciding whether to inline something is not making the right choice for my code
I care about the size of the binary so much that I'm inspecting it closely enough to discover that a function is being inlined automatically too much
The problem is only in code that I've written into my own module; not code I'm using from some other module
Or, as Rob said in the comments, if you're going through some disassembly and automatic inlining makes it hard to read.
I can't imagine that these are the use cases which the Swift language designers had in mind when designing this attribute. Especially since Swift is not meant for embedded systems, binary size (and the (dis)assembly in general) isn't really that much of a concern. I've never seen an unreasonably-large Swift binary anyway (>50MB).
So why is never encouraged more than __always? I often run into reasons why I should force a function to be inlined, but I've not yet seen a reason to force a function to be stacked, at least in my own work.
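For what it's worth, one case where @inline(never) does earn its keep (my own guess at a use case, not a documented rationale from the Swift team) is pinning a cold path behind a real call, so the hot path stays small and the cold frame remains visible to profilers and backtraces:

// Hypothetical example: a cold failure path next to a hot arithmetic path.

@inline(never)
func reportDivisionByZero() -> Never {
    // Stays a real call: out of the caller's hot code, visible in stack traces.
    fatalError("division by zero")
}

@inline(__always)
func checkedDivide(_ a: Int, _ b: Int) -> Int {
    if b == 0 { reportDivisionByZero() }   // cold path stays behind a call
    return a / b                           // hot path is free to inline
}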

Progress 4GL - What's the benefit of placing variable declarations at the top of the procedure

I've been doing Progress 4GL for 8 years, though it's not my main responsibility; I do C++ and Java a lot more. When programming in other languages, it's suggested to keep a declaration close to its usage. With 4GL, however, I see people place the declarations at the top of the file. It's even in the coding standard.
I think placing them at the top of the file leads to a 'vertical separation' problem. In most other languages it's even suggested to do the assignment on the same line as the declaration.
The question is: why is it suggested to do so in 4GL? What's the benefit? I know that it's possible to place a declaration anywhere in the file, given that it appears before the variable is used.
I think the answer is to do with scoping, or the lack of it, within Progress 4GL.
If you are used to Java, say, and read a Progress 4GL program that looks like
DO:
    DEFINE VARIABLE x AS INTEGER INITIAL 4.
    DISPLAY x.
END.
then you wouldn't expect to be able to use that value of x anywhere else in the program, and you would expect that any changes made in the block wouldn't affect anything outside the block.
As I understand it, all Progress variables declared within the body of a program are scoped to the whole program, unless they are declared within an internal procedure or function, in which case they are scoped to that procedure or function.
(Incidentally, any default [i.e. undeclared] buffers you use within an internal procedure/function are scoped to the whole program, not just the procedure or function, so you need to be very careful to explicitly declare buffers in functions you intend to use recursively.)
I therefore think the convention of declaring variables at the beginning of a program exists to reflect the fact that Progress will treat them as having been declared there, regardless of where you put the declaration.
There is absolutely no benefit in scoping anything to the program as a whole when it could be scoped smaller.
Smaller scopes are easier to test, give less possibility of namespace conflict, and less opportunity for error.
Tightly scoped named buffers are especially useful when writing to the database because they eliminate the possibility of there ever being some other part of your code that uses the same buffer and causes a share-lock, i.e., this fails to compile:
do for b-customer transaction:
    find b-customer where .... exclusive...
    ...
end.
...
find b-customer...   /* fails: b-customer is scoped to the DO block above */
On the other hand, procedures and functions (and include files...) that share scope with the main body of code are a major source of bugs, because when you pick up your variable or whatever, you can never be entirely certain where it has been...
All of this is just basic Structured Programming, of course. It's true for every language and has been accepted since the 70's.
The "reason" that you usually see variables defined at the top is simple. Habit. That is just how things were done in the bad old days.
A lot of old code, or code written by old fossils, is written that way. No matter the language.
Some languages (COBOL springs to mind) even formalized it.
Is there any advantage to such an approach?
Not especially. I guess you could argue "they are all in one place and easy to find" but that isn't very compelling.
"Habit" is actually more compelling ;) If you are working with a team that expects a certain style or in an application where a particular style is prevalent then you should think twice before unilaterally throwing out a new way of doing things - the confusion could be a bigger problem than the advantages gained.

What did John McCarthy mean by *pornographic programming*?

In the History of Lisp, McCarthy writes:
The unexpected appearance of an interpreter tended to freeze the form of the language, and some of the decisions made rather lightheartedly for the "Recursive functions ..." paper later proved unfortunate. These included the COND notation for conditional expressions which leads to an unnecessary depth of parentheses, and the use of the number zero to denote the empty list NIL and the truth value false. Besides encouraging pornographic programming, giving a special interpretation to the address 0 has caused difficulties in all subsequent implementations.
What's he talking about?
... zero to denote the empty list ...
because 0==() has been the emoticon for pornography since 1958.
Now you know.
He is referring to the fact that too many implementation details were leaking through at a higher level, i.e. showing up too much.
The original Fortran III spec document, a technical paper disseminated in the winter of 1958, describes some very explicit additions to the Fortran II language, including ... inline assembly.
The PDF is here
A tantalizing description of the "additions" follows in that paper, including samples of the "taboo" code mixing Fortran statements with inline assembly.
Mysteriously, Fortran-III was never released to the public (see section 5.), but disseminated in limited fashion before quietly fading away.
I think it is about mixing numerical and logical values, which can still be seen in popular constructs like while (1), probably originating in Fortran. There are a lot of "clever" C algorithms that rely on the fact that 0 is false and every other value isn't.
The same applies at large to API calls, as in POSIX or the Linux kernel, some of which return 0 on failure while others return -1 (there's a rule of thumb for when to apply which, but it is just folklore, so it is often broken). Considering that at McCarthy's time those things hadn't been developed yet, you can see his "prophetic" power even here.
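As an aside, Swift is one of the languages that later walked this conflation back completely (my example, not McCarthy's): integers and booleans are distinct types, so the classic C idioms must be spelled out:

let flags = 1

// if flags { ... }     // does not compile: an Int is not a Bool
if flags != 0 {         // the comparison has to be written explicitly
    print("flag set")
}

while true {            // and `while (1)` becomes `while true`
    break
}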
Perhaps it was his way of talking about null references: the billion-dollar mistake (T. Hoare).

The evilness of 'var' in C#? [duplicate]

Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var. I think it was largely due to the responses to this topic that I did. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. The thing I like most about var is that it REALLY DOES reduce repetition (conforms to DRY) and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on demand, which is generally a necessity anyway, even if you DO know the name of the type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question... more of a poll, really. I am a HUGE fan of C# and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused: the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stack Overflow, I have seen var replace pretty much every variable that has been written! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can harbor many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen:
When accessing an anonymous type
Anonymous types have no compile-time identity, so var is the only option. It's the only reason why var was added...to support anonymous types.
So... what's your opinion? Given the prolific use of var on blogs and forums, and its being suggested or enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity...terseness? (Not sure about the answer this one myself...perhaps with a disclaimer)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinion. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e. brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = new[] {"abc", "123", "yoda"};
It takes me roughly no longer to determine what types those are than it does with the explicitly redundant specification of the type. As a programmer, I have no issue with letting the compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it would be nonsense to define a whole new type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand on this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages, it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile, because the compiler was so incredibly picky about types; but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a 'good thing' - i.e. that anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
I must admit that when I first saw the var keyword pop up, I was very skeptical.
However, it is definitely an easy way to shorten the lines of a new declaration, and I use it all the time for that.
However, when I change the return type of an underlying method and accept the result using var, I do get the occasional run-time error; most problems are still picked up by the compiler.
The second issue I run into is when I am not sure what method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be type FOO when it is actually type BAR, then it takes a while to figure that out.
If I had literally specified the variable type in both cases, it would have saved a bit of frustration.
Overall, though, the benefits exceed the problems.
I have to dissent with the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come out of the IDE, where it can be applied much more liberally with no loss of readability.