Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var, largely due to the responses to this topic. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. The thing I like most about var is that it REALLY DOES reduce repetition (conforms to DRY) and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on demand, which is generally a necessity anyway, even if you DO know the name of a type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stackoverflow, I have seen var replace pretty much every variable declaration! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can harbor many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen:
When accessing an anonymous type
Anonymous types have no compile-time identity, so var is the only option. It's the only reason why var was added...to support anonymous types.
So...what's your opinion? Given the prolific use of var on blogs and forums, and its being suggested/enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity...terseness? (Not sure about the answer to this one myself...perhaps with a disclaimer)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinions. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e., brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = {"abc","123","yoda"};
It takes me roughly no longer to determine what types those are than with the explicit, redundant specification of the type. As a programmer I have no issues with letting a compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it's nonsense to define a new type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand on this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a 'good thing' - i.e. that anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
I must admit when I first saw the var keyword pop up I was very skeptical.
However, it is definitely an easy way to shorten the lines of a new declaration, and I use it all the time for that.
However, when I change the type of an underlying method and accept the return type using var, I do get the occasional run-time error. Most are still picked up by the compiler.
The second issue I run into is when I am not sure what method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be type FOO when it is type BAR, then it takes a while to figure that out.
If I had literally specified the variable type in both cases, it would have saved a bit of frustration.
Overall, the benefits exceed the problems.
I have to dissent with the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come out of the IDE, where it can be applied much more liberally with no loss of readability.
Before you relieve the itching in your fingertips, I already understand:
how and when to use the try keyword
the differences between the try, try?, and try! keywords
What I want to understand is what the use of the unadorned try keyword buys me (and you and all of us) over and above merely quieting a compiler diagnostic. We're already inside the scope of a do, and clearly the compiler knows to demand a try, and I can't (yet) see how there might be some ambiguity about where the try needs to land. So why can't the compiler quietly do the right thing without the explicit appearance of the keyword?
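For concreteness, a minimal sketch of the situation being described (the throwing function is hypothetical):

enum FetchError: Error { case offline }

// A hypothetical throwing function, just to set up the scenario.
func fetchUser() throws -> String {
    throw FetchError.offline
}

do {
    // The compiler insists on `try` here even though the enclosing `do`
    // already signals that error handling is in play.
    let user = try fetchUser()
    print(user)
} catch {
    print("failed: \(error)")
}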
There's been a fair amount of discussion (below) about the possibility that the language is trying to enforce readability for humans. I guess we'd need the input from one of the Swift language designers to determine whether that's true. And even if we had that it would be debatable whether it's wise and/or has been a success. So let's put that aside for the moment. Does the existence of the un-adorned try keyword solve some problem other than enforcing readability for humans?
After a long, productive discussion (linked elsewhere on this page)…
In short, the answer is no, there isn't a purpose other than enforcing readability, but it turns out the readability win is more significant than I had realized.
The try keyword should be seen as akin to (though not the equivalent of) a combination of if and goto. Although try doesn't direct the compiler to do anything it could not have inferred it should do, no one would argue that an if or a goto should be invisible. This makes try a little weird for folks coming from other languages — but not unreasonably so.
It may be difficult for Objective-C programmers to grasp this because they are accustomed to assuming almost anything they do may raise an exception. Of course, Objective-C exceptions are very different from Swift errors, but knowing this consciously is different from metabolizing it and knowing it unconsciously.
As well, if your intuition immediately tells you that as a matter of style in most cases there should probably be only one failable operation inside a do clause, it may be difficult to see what value a try adds.
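To picture that, here is a sketch with more than one failable operation in the same do clause (the helper functions are hypothetical): each try reads as an explicit potential jump to the catch block, much as an if or goto would be visible in the source.

struct SaveError: Error {}

// Hypothetical helpers, defined only so the sketch stands on its own.
func loadData() throws -> String { "42" }
func parse(_ raw: String) -> Int { raw.count }
func save(_ value: Int) throws { if value == 0 { throw SaveError() } }

do {
    let raw = try loadData()   // can jump to the catch clause, and says so
    let value = parse(raw)     // cannot throw, so no marker is required
    try save(value)            // another explicitly marked potential exit
} catch {
    print("failed: \(error)")
}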
In Scala, we can define a variable like:
val a = 10 or var a = 10.
Is there any way we can check if a is a val or a var programmatically?
Scala is an object-oriented language. This means, in particular, that you can only manipulate objects.
Variables are not objects in Scala (like pretty much every programming language), therefore there is no way that you could, for example, call a method on a variable to ask it whether it is a var or a val (because you can only call methods on objects, and variables aren't objects), and there is no way to pass a variable to a method to ask the method whether the variable is a val or a var (because you can only pass objects as arguments, and variables aren't objects).
Again, this is not really specific to Scala, this applies to the overwhelming majority of programming languages. Even in programming languages like Ruby with very powerful dynamic meta programming capabilities, variables aren't objects and cannot be reflected upon.
But wait, you might say, classes aren't objects in Scala either, but I can get a runtime representation of a class using scala.Predef.classOf[T]! Indeed you can, but there is a fundamental difference between classes and variables: classes are runtime entities, so even if they don't exist as objects, they do at least exist at runtime. Variables are a pure compile-time construct; they do not exist at runtime.
So, the only way you can get at a variable at all is using compile-time reflection. Which, very likely, is complete overkill for whatever it is that you want to do.
So, to answer your question:
Is there anyway we can check if a is a val or var programmatically?
Extremely short answer: No.
Very short answer: You shouldn't use var anyway. If you follow that advice, the question becomes moot.
Short answer: If your local variable scopes are so big and convoluted that you cannot even figure out whether a variable is a var or a val you have much bigger problems.
Simple answer: No.
Slightly more elaborate answer: No, not at runtime.
Very complex answer: It is probably possible using Compile-Time Reflection.
I understand Swift's inlining well. I know the nuances between the four function-inlining attributes. I use @inline(__always) a lot, especially when I'm just making sugary APIs like this:
public extension String {
    @inline(__always)
    var length: Int { count }
}
I do this because there's not really a cost involved in inlining it, but there would be the cost of an extra stack frame if it weren't inlined. For less-obvious sugar, I'll lean toward @inlinable and/or @usableFromInline as needed.
However, one distinction vexes me. The two possible arguments to @inline are never and __always. Despite the lack of actual documentation, this choice of spelling acts as a sort of self-documentation, implying that if you are going to use one of these, you should lean toward never, and __always is discouraged.
But why is this the direction the Swift language designers encourage? As far as I know, if no attribute is applied at all, then this is the behavior:
If a function (et al) is used within the module in which it's declared, the compiler might choose to inline it or not, depending on which would produce better code (by some measure)
If that function (et al) is used outside the module, its implementation is not exposed in a way that allows it to be inlined, so it is never inlined.
So, it seems most of the time, not-inlining is the default. That's fine and dandy, I have no problem with that on the surface; don't bloat the executable any more than you need to.
But then, I've never had a reason to think @inline(never) is useful. From what I understand, the only reason I would use @inline(never) is if I've noticed that the Swift compiler is choosing to inline a non-annotated function too much, and it's bloating my executable. This seems like a super-niche occurrence:
My software is running fine
The Swift compiler's algorithm for deciding whether to inline something is not making the right choice for my code
I care about the size of the binary so much that I'm inspecting it closely enough to discover that a function is being inlined automatically too much
The problem is only in code that I've written into my own module; not code I'm using from some other module
Or, as Rob said in the comments, if you're going through some disassembly and automatic inlining makes it hard to read.
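For what it's worth, here is a minimal sketch of that disassembly/profiling scenario (the function and its workload are made up):

// @inline(never) keeps this as a distinct symbol, so it stays visible
// in a profiler or in the disassembly instead of being folded into
// every caller by the optimizer.
@inline(never)
func hotPath(_ values: [Int]) -> Int {
    values.reduce(0, +)
}

print(hotPath(Array(1...1_000)))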
I can't imagine that these are the use cases which the Swift language designers had in mind when designing this attribute. Especially since Swift is not meant for embedded systems, binary size (and the (dis)assembly in general) isn't really that much of a concern. I've never seen an unreasonably-large Swift binary anyway (>50MB).
So why is never encouraged more than __always? I often run into reasons why I should force a function to be inlined, but I've not yet seen a reason to force a function to be stacked, at least in my own work.
Why in Swift can I type numbers without assigning them to var or let, and the code will compile fine? What is this good for?
import Foundation
55
25
23+8
print("Hello, World!")
4
11
-45
What is this good for?
It isn't good for anything, in the sense that it doesn't cause anything to happen with regard to the flow of the program. It's just something you're doing for fun, if you see what I mean. The numeric value is not being captured, so in effect it is immediately thrown away, like a virtual particle that flashes into existence one moment and back out of existence the next.
The reason it is legal is that an evaluatable expression is itself a valid Swift statement. A numeric literal is an evaluatable expression, so it's legal — though useless — as an independent statement.
You can see the same thing in many other ways. This is legal:
let n = 23
n
n is an evaluatable expression, so it's legal as a separate statement. But it is useless.
EDIT In answer to your comment below: I see no reason why a useless statement should prevent a program from compiling. But in a case like this, I would agree that it might be helpful if the compiler would warn that you're doing something useless, and in fact, for at least one case of this sort of thing, I have filed a bug report requesting this.
Swift is a language that has side effects, meaning that some operations can result in a mutation of the global state of the program. This has the implication that the compiler cannot simply eliminate a stand-alone statement without making extra sure that doing so would not affect the execution of the program. Given the complexity of state in a program, this is generally not a trivial problem, so in order not to penalize users who wish to invoke functions that have side effects, a line must be drawn; Swift has chosen to let users write any kind of stand-alone statement, even ones that are obviously free of side effects (constants or constant expressions), rather than spend a lot of effort trying to differentiate among the various possibilities.
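As a small illustration of where that line is drawn (the function name is made up), both statements below are accepted as stand-alone expression statements, but only the first one does anything observable; the compiler may warn that the results are unused, yet both lines compile:

var counter = 0

// A function with a side effect: it mutates global state.
func bump() -> Int {
    counter += 1
    return counter
}

bump()   // useful work happens even though the result is discarded
42       // evaluated and immediately thrown away, as described above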
There could be a compile-time warning that shows lines of code that have no effect. I don't know much about Swift so I can't tell you, but if there is, you should be able to find it in the documentation.
It has to do with the fact that a number or an expression is a valid statement. So it can exist on its own. Some other languages have the same or similar feature.
As in any language, you can utter sentences of no interest... 55 is such a Swift sentence. You may think it would be easy to eliminate useless sentences, but in general it is impossible, so why not just eliminate (at compile time) the obvious ones? Because, in general, people only write them by accident or as an exercise, and it would not be worth the effort. It would be easy to forbid 55, but what about n = n? Easy? Or n = 2*n - n (that one is much more tricky, because you would need to implement mathematical properties of expressions inside the compiler, and optimisation tricks for them).
In comparing these two options for defining an instance property:
var networkManager = NetworkManager.sharedInstance()
lazy var networkManager = NetworkManager.sharedInstance()
Both:
Can evaluate a block to get the value
Can be declared inline (not a block, like above)
Lazy:
Can refer to self
Is not calculated until needed
If you don't use it, it is never calculated
Non-lazy:
No benefits whatsoever
It appears that there is no benefit to ever use a non-lazy variable. So why does the language allow the programmer to make this inferior choice?
(I am NOT asking about the difference between var and let à la Are Swift constants lazy by default?)
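For reference, a minimal sketch of the "can refer to self" point in the list above (the type and property names are hypothetical):

class ViewController {
    let screenTitle = "Home"

    // Lazy: the initial value can mention `self`, because it is computed
    // on first access, after `self` is fully initialized.
    lazy var greeting: String = "Welcome to \(self.screenTitle)"

    // Non-lazy: the same expression is rejected, since `self` is not yet
    // available while stored properties are being initialized.
    // var eagerGreeting: String = "Welcome to \(self.screenTitle)"   // error
}

let vc = ViewController()
print(vc.greeting)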
One reason might be that laziness is not well-suited for situations where you want to control when the evaluation happens. This is relevant in cases where the work being done in the assignment has side effects.
Although it pertains to closures, this blog post by Stuart Sierra explains the idea very well, and I think it applies equally in any language.
As others already said, there are several critical scenarios where you want the initialization of the properties to be deterministic.
This is an example (among many others) related to game development.
Often the instances of classes representing items in a game scene/level are created before the level begins.
Initialisation can be a time-expensive task (loading stuff from persistent storage, allocating memory, preparing the instances...), and doing this work before the player starts playing the level avoids CPU overhead.
This is critical because CPU overhead in the middle of a level could cause a drop in the frame rate, which is a nightmare for the user experience.
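As a rough sketch of that idea (the type and the workload are made up), eager initialisation lets all the expensive work happen while the level is loading rather than mid-frame:

final class Level {
    let enemies: [String]

    init() {
        // Eager: paid once, up front, at a time the game controls
        // (e.g. behind a loading screen).
        enemies = (0..<1_000).map { "enemy-\($0)" }
    }
}

let level = Level()          // all the allocation happens here
print(level.enemies.count)   // no surprise work during the frame loop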
FYI. My feeling is that Swift wants to become more like a functional language and would like lazy instantiation in more places.
My early assessment of Swift has held up pretty well over time (well, the "not functional" part. I didn't anticipate how much Swift would favor methods over functions in later versions). Swift is not a functional language and does not intend to be one. This has come up often in WWDC talks, on the forums, on Twitter, and in conversations with the Swift team. Originally all maps and filters were lazy. Swift removed that because of the problems it caused. Probably the best talk on that subject is "Building Better Apps with Value Types in Swift". As they say:
We like mutation. We think it's valuable. We think it's easy to use when done correctly.
You don't get much more "non-functional" than that. Swift also embraces immutable data. But functional programming is about pure functions over immutable data, and that's not Swift.
(Of course there are plenty of non-lazy functional languages. Lazy and functional are orthogonal concepts. Haskell just happened to embrace both.)
To the question at hand, though:
I've found the lazy attribute rarely useful in real-world Swift (I'm being generous; I have never encountered a case where I kept it in the code). It doesn't offer anything like the laziness you get in Haskell. It isn't thread safe, so that's a nightmare. It forces you into reference types (or forces your structs to be mutable), so that can be annoying. If I heard they were pulling it from the language and we just had to roll our own, that'd be fine with me. (I'm tempted to write a proposal to do just that.) It implements a specific memo pattern that can occasionally be handy, but often isn't the one you want. So it's a very good thing that it isn't the default.
As you likely know, global variables and class variables are lazy by default, and I think that tends to work out pretty well since there are so many fewer of them, there's a much better chance they won't be accessed in practice, and that laziness is thread safe (which has a cost, but since they're so much rarer, the cost is much lower).
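To illustrate the "forces your structs to be mutable" point, a minimal sketch (the type is hypothetical): accessing a lazy stored property goes through a mutating getter, so it is unavailable on a let-bound struct.

struct Config {
    // "Expensive" work, deferred until first access.
    lazy var expensiveTable: [Int] = (0..<1_000).map { $0 * $0 }
}

var mutable = Config()
print(mutable.expensiveTable.count)   // OK: first access runs the mutating getter

let frozen = Config()
// print(frozen.expensiveTable)       // error: cannot use mutating getter on a 'let' constant
_ = frozen                            // keeps this sketch warning-free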
If you have an expensive object (one that takes a long time to create), you want to decide and control when it is created. One could argue that the lazy variable should be the default, though. Maybe it has historical reasons: lazy properties in ObjC resulted in a lot of boilerplate code.