Before you relieve the itching in your fingertips, I already understand:
how and when to use the try keyword
the differences between the try, try?, and try! keywords
What I want to understand is what the use of the unadorned try keyword buys me (and you and all of us) over and above merely quieting a compiler diagnostic. We're already inside the scope of a do, and clearly the compiler knows to demand a try, and I can't (yet) see how there might be some ambiguity about where the try needs to land. So why can't the compiler quietly do the right thing without the explicit appearance of the keyword?
There's been a fair amount of discussion (below) about the possibility that the language is trying to enforce readability for humans. I guess we'd need input from one of the Swift language designers to determine whether that's true, and even if we had it, it would be debatable whether the choice is wise and/or has been a success. So let's put that aside for the moment. Does the existence of the unadorned try keyword solve some problem other than enforcing readability for humans?
After a long, productive discussion (linked elsewhere on this page)…
In short, the answer is no, there isn't a purpose other than enforcing readability, but it turns out the readability win is more significant than I had realized.
The try keyword should be seen as akin to (though not the equivalent of) a combination of if and goto. Although try doesn't direct the compiler to do anything it could not have inferred it should do, no one would argue that an if or a goto should be invisible. This makes try a little weird for folks coming from other languages — but not unreasonably so.
It may be difficult for Objective-C programmers to grasp this because they are accustomed to assuming that almost anything they do may raise an exception. Of course, Objective-C exceptions are very different from Swift errors, but knowing this consciously is different from metabolizing it and knowing it unconsciously.
As well, if your intuition immediately tells you that as a matter of style in most cases there should probably be only one failable operation inside a do clause, it may be difficult to see what value a try adds.
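To make the readability argument concrete, here is a minimal sketch (the error type and both function names are made up purely for illustration) of how the bare try flags exactly which call inside a do block can fail:

enum ConfigError: Error { case missing }

// Hypothetical throwing function, just for illustration.
func loadConfig(named name: String) throws -> String {
    guard name == "app" else { throw ConfigError.missing }
    return "debug=true"
}

func report(_ text: String) { print(text) }

do {
    // `try` marks the one call that can throw; report(config) is visibly not a failure point.
    let config = try loadConfig(named: "app")
    report(config)
} catch {
    print("Failed to load config: \(error)")
}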
I have read various old StackOverflow discussions on this general topic but there is still one part of the puzzle which appears, to me at least, to be missing.
It is simply this: what is the actual mechanism by which the anonymous function is serialized? And, where could we find its source code?
Or is it all just magic?
Other relevant SO articles (the third of these itself points to some useful articles outside StackOverflow):
Serialization of Scala Functions
Why Scala can serialize...
How to serialize functions in Scala
I'm going to answer my own question with what I believe is the correct answer. The reason I'm doing it this way is that this aspect of serialization never seems to be explained, and it does appear to work just by magic. I essentially confirmed (to my satisfaction) the answer as part of the research I was doing to ensure that my question above was indeed appropriate.
But the main reason I'm offering my own answer is that I invite knowledgeable users either to agree with it, to correct it, to expand upon it, or to destroy it. Here goes...
It's all magic. No, I'm just kidding. But essentially, once Scala has taken the step of representing the anonymous function as a class, the mechanism is entirely provided by Java. In addition, we, the programmers, need to ensure that an anonymous function is as much pure code as possible: no references to any objects that might not be serializable. The secret sauce is to be found in the Java class ObjectStreamClass, which, in turn, is invoked by the Java serialization classes ObjectInputStream and ObjectOutputStream.
Essentially, the serialized bytes contain the fully qualified name of the class, its serialVersionUID, and whatever other relevant information is necessary. When deserializing, the system simply looks up the class on the appropriate classpath and returns a reference to it. This obviously assumes that the deserializing system has the class on its classpath. The mechanism for arranging that is a little beyond the scope of my research, but it's clear that in a system like Spark it should be easy to arrange.
No (additional) compilation or decompilation of byte code is necessary, as the class loader has everything it needs. I'm slightly surprised to find ObjectStreamClass in java.io rather than in the reflection package, but I suppose there's an argument for it being there, given the tight coupling with ObjectInputStream and ObjectOutputStream.
One thing to keep in mind is that while we think in terms of serializing/deserializing objects, rather than classes, what we are dealing with here is an object of type Class.
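As a minimal sketch of the round trip I'm describing (assuming the function closes over nothing that isn't serializable; the names here are just for illustration), everything can be done with the standard java.io machinery:

import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

object FunctionSerializationDemo extends App {
  // A "pure" anonymous function: it closes over nothing that isn't serializable.
  val double: Int => Int = x => x * 2

  // Serialize: the stream records the generated class's name, its serialVersionUID, etc.
  val buffer = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buffer)
  out.writeObject(double)
  out.close()

  // Deserialize: the class is looked up on the local classpath and an instance is rebuilt.
  val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
  val restored = in.readObject().asInstanceOf[Int => Int]
  println(restored(21))  // prints 42
}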
One more thing to note is that in Scala 2.12, anonymous functions are implemented differently: as Java 8 lambdas. This has broken the mechanism described above in a rather serious way; so serious that Spark is currently having trouble supporting Scala 2.12. The holdup appears to be this issue: SPARK-14540.
This is really a question of precedence: which guideline wins out in C++, avoiding pointers or avoiding #includes in header files?
"Don't Use #include in header files."
There seems to be some ambiguity based on my research. In this SO question, the top answer says "...make sure you actually need an include, [don't use one] when a forward declaration or even leaving it out completely will do." (From Header files and include best practice)
And this article explains the negative effect excess header inclusions can have on compile-time: http://blog.knatten.org/2012/11/09/another-reason-to-avoid-includes-in-headers/
As well as this tutorial, stating, "...you should try to put all of your code in the CPP class and only the class declaration in the HPP file.": https://github.com/LaurentGomila/SFML/wiki/Tutorial%3A-Basic-Game-Engine#wiki-declarations
"Don't Use Pointers."
But, there is also evidence that pointers should be avoided most often as well:
c++: when to use pointers?
https://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c
Which preference takes precedence?
If my understanding about avoiding #includes in header files is correct, this can easily be done by changing things like class members to pointers so I can use a forward declaration instead, but is this a good idea for class members whose lifetime only lasts as long as the class itself?
It's not really a "one or the other". Both statements are true, but you need to understand the reasoning behind them.
tl;dr: Use forward declaration where possible to reduce compile time. Use stack objects or references as much as possible and pointers only in rare cases.
"Don't Use #include in header files."
This is a rather general statement which, taken as is, would be wrong. The more important point behind it is: "Use forward declarations wherever possible." Includes in header files are not bad per se, but they often aren't needed either.
Forward declarations can be used if the included type/class/etc. is only used as a pointer in the new type's declaration within the given header. A forward declaration just tells the compiler: "Somewhere along the way you'll find the actual declaration of type X." The include can even be removed entirely if the type isn't used in the declaration at all. The reason is that the compiler doesn't need to know anything about these types to calculate the memory layout of the new type; a pointer, for example, always has the same size. Including the file in the header anyway would only waste processing power, since the compiler has to open and parse it, adding expensive seconds to the compile time. So in most cases you'll do yourself a favor by removing unnecessary includes from header files and using forward declarations instead.
For the sake of completeness: forward declarations are strictly needed if you have circular references (class A depends on class B, which depends on class C, which depends on class A). However, this can also reveal bad design and/or outdated coding standards, which leads us to the second topic.
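As a minimal sketch of the forward-declaration pattern (Renderer and Widget are made-up names; the conceptual file boundaries are marked with comments so the snippet compiles as a single unit):

// --- Widget.h (conceptually) ---
// The header only needs to know that Renderer exists.
class Renderer;  // forward declaration instead of #include "Renderer.h"

class Widget {
public:
    explicit Widget(Renderer& renderer);
    void draw() const;
private:
    Renderer* renderer_;  // a pointer's size is known without the full definition
};

// --- Renderer.h (conceptually) ---
#include <iostream>

class Renderer {
public:
    void render() const { std::cout << "rendering\n"; }
};

// --- Widget.cpp (conceptually) ---
// The full Renderer definition is only needed here, where its members are actually used.
Widget::Widget(Renderer& renderer) : renderer_(&renderer) {}
void Widget::draw() const { renderer_->render(); }

// --- main.cpp (conceptually) ---
int main() {
    Renderer renderer;
    Widget widget(renderer);
    widget.draw();
}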
"Don't use pointers."
Again, the statement is a tiny bit too general. One might rather say: "Don't use raw pointers."
With C++11, and soon C++1y, the language itself has changed a lot. As many bad C++ books as the world has seen, even more outdated ones float around nowadays (here's a good list, however). While in the past we were mostly stuck with raw pointers and new and delete for memory management, we've since moved on to better, more readable, less risky and memory-leak-free ways to manage data. One of the magic words is RAII; since you linked something from SFML above, here's a nice demonstration of the power of RAII. I see many people use pointers with new and delete just because, or maybe because they are thinking in Java or C# terms, where objects are instantiated with the new keyword. In C++, however, objects don't need new to be allocated, and it's mostly preferable to put things on the stack instead of the heap. This works for many, many things, especially when using STL containers, which hide the dynamic memory management in the background. Using the heap is generally only preferable when the data needs to be dynamic, needs to outlive the local scope, or is simply large. Even then, make sure to use smart pointers such as std::unique_ptr or std::shared_ptr, depending on the use case, and certainly not raw pointers. In modern C++, raw pointers should never own an object. There are cases where it's okay to return a raw pointer to reference an object, but there's really no reason in modern C++ to call new on a raw pointer.
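A small sketch of what that looks like in practice (the Texture type is made up for illustration; std::make_unique requires C++14, while std::unique_ptr itself is C++11):

#include <memory>
#include <string>
#include <vector>

struct Texture {
    std::string name;
};

int main() {
    // Prefer the stack: background is destroyed automatically at scope exit.
    Texture background{"background.png"};

    // STL containers manage their heap storage for you (RAII).
    std::vector<Texture> frames{{"frame0.png"}, {"frame1.png"}};

    // When heap allocation is genuinely needed, prefer smart pointers over new/delete.
    auto player = std::make_unique<Texture>(Texture{"player.png"});   // sole owner
    auto shared = std::make_shared<Texture>(Texture{"shared.png"});   // shared ownership

    // No delete anywhere: every owner releases its memory when it goes out of scope.
}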
Let's get back to the original question, though. "Don't use raw pointers" is essentially a design guideline and quite unrelated to the whole header issue. While there might be some cases where you'll have to switch to raw pointers due to circular references, the use of forward declarations is otherwise just about compilation time (and maybe cleaner code); it's not as essential to the programming itself.
In short: don't use raw pointers just to avoid includes in header files; do use forward declarations wherever possible, and use smart pointers as much as possible.
Is there a way to create macros in C#?
ex:
string checkString = "'bob' == 'bobthebuilder'"; // this will be dynamic
if (##checkString)
    //.........
else
    //.........
Thanks
No, C# doesn't have macros. You could capture your logic in a delegate and apply that delegate in multiple places, potentially... would that help?
If you could describe the problem you're trying to solve rather than the solution you think you'd like, we may be able to help more.
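As a minimal sketch of the delegate suggestion (the names here are made up for illustration):

using System;

class Program
{
    // Capture the condition once in a delegate instead of a macro.
    static readonly Func<string, string, bool> SameName = (a, b) => a == b;

    static void Main()
    {
        // Reuse it wherever the "macro" would have been expanded.
        if (SameName("bob", "bobthebuilder"))
            Console.WriteLine("same");
        else
            Console.WriteLine("different");
    }
}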
T4 seems to be gaining traction these days for .NET work. It's not quite what you asked for, but it may be extremely beneficial in some cases (or it may just be a hint down the wrong path).
In most cases, esp. with generics, I do not wish for 'templates' or 'macros' in C# (or Scala). In the example above, you could simply use:
bool sameStuff = ("bob" == "bobthebuilder");
...
if (sameStuff) {
...
}
More complex cases can generally be dealt with by refactoring into methods or by using anonymous functions.
Additionally, attributes (while a completely different approach) round out the case for many "traditional" uses of templates.
As mentioned, no, but there are a number of other approaches:
Conditional compilation via #if (see the sketch after this list)
Templating via T4 or something else (we use a port of Ned Batchelder's Cog, mentioned elsewhere)
Aspect-Oriented Programming via something like PostSharp
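A minimal sketch of the first option (in practice the symbol would usually be defined in the project file or on the compiler command line rather than in source):

#define VERBOSE  // must appear before any declarations in the file

using System;

class Program
{
    static void Main()
    {
#if VERBOSE
        Console.WriteLine("Verbose diagnostics enabled.");
#endif
        Console.WriteLine("Done.");
    }
}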
As Jon said, lots of ways; it'd be better to describe exactly what you want to do.
Short answer: No.
Long answer: You can write a wrapper around the C/C++ compiler's preprocessor.
Most of the syntax will be accepted with the notable exception of #region/#endregion. You can just prefix those with #pragma before processing, and remove the #pragma part afterwards.
Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var, largely due to the responses to this topic. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. The thing I like most about var is that it REALLY DOES reduce repetition (it conforms to DRY) and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on demand, which is generally a necessity anyway, even if you DO know the name of the type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stack Overflow, I have seen var replace pretty much every variable declaration! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can harbor many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion, at least). What is that valid use, you ask? It is when you are incapable of knowing the type, and there is only one instance where that can happen:
When accessing an anonymous type
Anonymous types have no name you can write in source code, so var is the only option. It's the only reason var was added: to support anonymous types.
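For example (a minimal sketch):

using System;

class Program
{
    static void Main()
    {
        // The anonymous type has no name you could write on the left-hand side,
        // so var is the only way to declare the variable.
        var person = new { Name = "bob", Age = 42 };
        Console.WriteLine(person.Name + " is " + person.Age);
    }
}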
So... what's your opinion? Given the prolific use of var on blogs and forums, and its being suggested/enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing to do.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity... terseness? (I'm not sure about the answer to this one myself... perhaps with a disclaimer.)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinions. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e. brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = new[]{"abc","123","yoda"};
It takes me roughly no longer to determine what types those are than it does with the explicitly redundant specification of the type. As a programmer I have no issue with letting a compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it's nonsense to define a new named type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand on this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a "good thing", i.e. that anything helping the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
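A small sketch of the distinction, as I understand it:

using System;

class Program
{
    static void Main()
    {
        var s = "hello";      // static type inference: s is a string at compile time
        // s = 42;            // would not compile: cannot convert int to string

        dynamic d = "hello";  // member lookup and conversions are deferred to run time
        d = 42;               // compiles fine; the static type of d is 'dynamic'
        Console.WriteLine(d + 1);  // resolved at run time: prints 43
    }
}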
I must admit that when I first saw the var keyword pop up, I was very skeptical.
However, it is definitely an easy way to shorten a new declaration, and I use it all the time for that.
That said, when I change the return type of an underlying method and accept it using var, I do get the occasional run-time error; most problems are still picked up by the compiler.
The second issue I run into is when I am not sure which method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be type FOO when it is actually type BAR, it takes a while to figure that out.
If I had explicitly specified the variable type in both cases, it would have saved a bit of frustration.
Overall, the benefits exceed the problems.
I have to dissent from the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come from the IDE, where it can be applied much more liberally with no loss of readability.