Why does Objective-C not support overloading? [duplicate] - iphone

Possible Duplicate:
Why Objective-C doesn't support method overloading?
I was going through the linked question, which has the answer to my question, but I was not able to understand what the answerer was trying to say. It would be really great if someone could simplify it and provide an explanation.

In C++ the compiler has type information and can choose between several methods based on the argument types. To do the same in Objective-C, it would have to be done at run time, because the compiler knows little about object types due to the dynamic nature of the language (i.e. all objects are of type id). While this seems possible, it would be very inefficient in practice.

I think it's a historical artifact. Objective-C is derived from C and Smalltalk, and neither of them supports overloading.
If you want overloading, you can use Objective-C++ instead. Just name your sources with the ".mm" extension instead of just ".m".
Just be sure you know what you are doing if you mix C++ and Objective-C idioms. For example, Objective-C exceptions and C++ exceptions are two completely different animals and cannot be used interchangeably.

What he is trying to say is that method overloading is not possible in dynamically typed languages, because the type of each object is not known until run time. In a statically typed language, overloading can be resolved at compile time: you create several methods with the same name, and the compiler has enough type information to resolve the ambiguity between the various calls it sees. In a dynamically typed language, object types are only resolved at run time, so the compiler cannot choose between the candidate methods.
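To make the contrast concrete, here is a minimal sketch in Scala, a statically typed language: the compiler picks between same-named methods at compile time, based on the declared argument types.

// Overload resolution happens entirely at compile time, driven by
// the static types of the arguments.
object OverloadDemo {
  def describe(x: Int): String    = "an Int: " + x
  def describe(x: String): String = "a String: " + x

  def main(args: Array[String]): Unit = {
    println(describe(42))    // the compiler statically picks describe(Int)
    println(describe("hi"))  // the compiler statically picks describe(String)
  }
}

In a dynamically typed language there is no such compile-time information, so the same choice would have to be made on every call at run time.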

Related

What is the primary technical challenge that Scala's implicit solves? [closed]

While learning Scala, I found the concept of implicit difficult to rationalize. It allows one to pass values implicitly, without explicitly mentioning them.
What is its reason for being, and what problem does it seek to solve?
At its heart, implicit is a way of extending the behavior of values of a type in a way that's fully controllable at a local level in your program, and external to the original code that defines those values. It's one approach to solving the expression problem.
It lets you keep your core classes focused on their most fundamental structure and behavior, and factor out higher-level behaviors. It's used to achieve ad hoc polymorphism, where two formally unrelated data types can be seamlessly adapted to the same interface, so that they can be treated as instances of the same type.
For example, rather than your data model classes containing JSON serialization behavior, you can store that behavior elsewhere and implicitly augment an object with the ability to serialize itself. This amounts to defining, in an implicit instance, how your object can be viewed as "JSON serializable" rather than as its original type, and it's done without editing the object's real type.
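A minimal sketch of that idea, with made-up names (JsonSerializable, Json.write; no real JSON library is involved):

// A typeclass describing how a value can be viewed as "JSON serializable".
trait JsonSerializable[A] {
  def toJson(a: A): String
}

// The data model class knows nothing about JSON.
case class User(name: String, age: Int)

object JsonInstances {
  // The serialization behavior lives here, external to User.
  implicit val userJson: JsonSerializable[User] = new JsonSerializable[User] {
    def toJson(u: User): String =
      s"""{"name": "${u.name}", "age": ${u.age}}"""
  }
}

object Json {
  // Works for any A that has an implicit JsonSerializable[A] in scope.
  def write[A](a: A)(implicit js: JsonSerializable[A]): String = js.toJson(a)
}

// import JsonInstances._
// Json.write(User("Ada", 36))   // {"name": "Ada", "age": 36}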
There are several forms of implicit, which are pretty thoroughly covered elsewhere. Use cases include the enhance-my-library pattern, the typeclass pattern, implicit conversions, and dependency injection.
What's really interesting to me, in the context of this question, is how this differs from approaches in other languages.
Enhance-my-library and typeclasses
In many other languages, you accomplish this by monkey patching (typically where there is no type checking) or extension methods. These approaches have the downside of composing unpredictably and applying globally. In statically typed languages without a way of opening classes, you usually have to make explicit adapters. This has the downside of a lot of boilerplate. In both static and dynamic languages, you may also be able to use reflection, but usually with a lot of ceremony and complexity.
In Haskell, typeclasses exist as a first-class concept. They're global, though, so you don't get the local control over what typeclass is applied in a given situation. In Scala, you control what implicits are in scope locally, through the modules you import. And you can always opt out of implicit resolution entirely by passing parameters explicitly.
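A hedged sketch of that local control (Ord, Ascending, and Descending are made-up names for illustration):

trait Ord[A] { def lt(x: A, y: A): Boolean }

object Ascending {
  implicit val intOrd: Ord[Int] = new Ord[Int] { def lt(x: Int, y: Int) = x < y }
}
object Descending {
  implicit val intOrd: Ord[Int] = new Ord[Int] { def lt(x: Int, y: Int) = x > y }
}

object MinDemo {
  def min[A](x: A, y: A)(implicit ord: Ord[A]): A = if (ord.lt(x, y)) x else y

  def demo(): Unit = {
    // Which instance applies is controlled locally by what you import:
    { import Ascending._;  println(min(1, 2)) }  // prints 1
    { import Descending._; println(min(1, 2)) }  // prints 2
    // Or opt out of implicit resolution and pass the instance explicitly:
    println(min(1, 2)(Descending.intOrd))        // prints 2
  }
}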
Whether global or local resolution of typeclasses is preferable is argued both ways, depending on who you ask.
Implicit conversions
A lot of other languages have no way to accomplish this. But implicit conversions have become pretty frowned upon in Scala, so maybe the other languages aren't missing much.
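For reference, a minimal sketch of what an implicit conversion looks like (Seconds, pause, and the conversion are made up for illustration); the wariness comes from the rewrite being invisible at the call site:

case class Seconds(n: Int)

object Conversions {
  import scala.language.implicitConversions
  // Inserted by the compiler wherever an Int appears where Seconds is expected.
  implicit def intToSeconds(n: Int): Seconds = Seconds(n)
}

object PauseDemo {
  def pause(s: Seconds): Unit = println(s"pausing for ${s.n}s")

  def main(args: Array[String]): Unit = {
    import Conversions._
    pause(5)  // silently rewritten to pause(intToSeconds(5))
  }
}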
There's a paper about type classes with older slides and discussion.
Being able implicitly to pass an object that encodes a type class simplifies the boilerplate.
Odersky just responded to a critique of implicits that
Scala would not be Scala if it did not have implicit parameters and
classes.
That suggests they solve a challenge that is central to the design of the language. In other words, supporting type classes is not an ancillary concern.
It's a deep question, really. Implicits are very powerful, and you can use them to write abstract code (e.g. typeclasses). I can recommend some tutorials that you may look into, and then we can have a chat sometime :)
It is all about providing sensible defaults in your code.
Also, the magic of invoking apparently non-existent methods on objects, which just seems to work! All that good stuff is done via implicits.
But for all its power, it may lead people to write some really bad code as well.
Please do watch Nick Partridge's presentation here; I am sure that if you code along with him, you will understand why and how to approach implicits.
Watch it here
Dick Wall's excellent presentation with live coding
Watch both parts.

For Scala, are there any advantages to type erasure?

I've been hearing a lot about different JVM languages, still in vaporware mode, that propose to implement reification somehow. I have this nagging, half-remembered (or wholly imagined, I don't know which) thought that somewhere I read that Scala somehow took advantage of the JVM's type erasure to do things that it wouldn't be able to do with reification. That doesn't really make sense to me, since Scala is implemented on the CLR as well as on the JVM, so if reification caused some kind of limitation it would show up in the CLR implementation (unless Scala on the CLR simply ignores reification).
So, is there a good side to type erasure for Scala, or is reification an unmitigated good thing?
See Ola Bini's blog. As we all know, Java has use-site variance, implemented by having little question marks wherever you think variance is appropriate. Scala has definition-site variance, declared by the class designer. He says:
Generics is a complicated language feature. It becomes even more complicated when added to an existing language that already has subtyping. These two features don’t play very well together in the general case, and great care has to be taken when adding them to a language. Adding them to a virtual machine is simple if that machine only has to serve one language - and that language uses the same generics. But generics isn’t done. It isn’t completely understood how to handle correctly and new breakthroughs are happening (Scala is a good example of this). At this point, generics can’t be considered “done right”. There isn’t only one type of generics - they vary in implementation strategies, feature and corner cases.
...
What this all means is that if you want to add reified generics to the JVM, you should be very certain that that implementation can encompass both all static languages that want to do innovation in their own version of generics, and all dynamic languages that want to create a good implementation and a nice interfacing facility with Java libraries. Because if you add reified generics that doesn’t fulfill these criteria, you will stifle innovation and make it that much harder to use the JVM as a multi language VM.
i.e. If we had reified generics in the JVM, most likely those reified generics wouldn't be suitable for the features we really like about Scala, and we'd be stuck with something suboptimal.
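To make the erasure side of the tradeoff concrete, here is a small sketch of what it costs in Scala (the flip side of the flexibility described above):

object ErasureDemo {
  def main(args: Array[String]): Unit = {
    val xs: Any = List("a", "b")

    // The element type is erased at run time, so this match cannot
    // distinguish List[Int] from List[String]; the compiler emits an
    // "unchecked" warning and the first case wins.
    xs match {
      case _: List[Int]    => println("matched List[Int]?!")
      case _: List[String] => println("matched List[String]")
      case _               => println("something else")
    }

    // At run time, both lists have the same class:
    println(List(1).getClass == List("a").getClass)  // true
  }
}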

Code to interfaces instead of implementations? [duplicate]

This question already has answers here:
Should you always Code To Interfaces In Java [closed]
What does it mean to "program to an interface"?
I believe that it's better to code to interfaces instead of implementations. In Java:
List<User> users = new ArrayList<User>();
There's no need to specify the runtime type of users all over the program if the code only cares that it implements List.
However, I encounter many people who believe that it's totally fine, even when they're not using properties specific to the ArrayList:
ArrayList<User> users = new ArrayList<User>();
I try to explain that it's redundant and makes the program harder to change, but they don't seem to care. Are there other reasons why this is important? Or are my convictions perhaps overstated?
Personally, I think that there are two parts to this argument.
If you're returning an object from a method in a class, it should return the most generic object possible. In this case, if you had a choice between returning ArrayList<User> or List<User>, return List<User> because it makes life easier for the people consuming your class.
If you're coding inside a method and you don't mind hard-coding a concrete type, go for it. It's not what I would do, and it will make your code more fragile in the future, but since there are no outside dependencies on that concrete type (hence the first part), you're not going to break anybody who is consuming your code.
Testability is one reason. If you code against implementations, it is very difficult to mock out the required object and use it in tests. You usually end up extending the implementation and overriding, which is painful.
It is extremely important if you want to use dependency injection. It is also important for Hibernate -- you MUST specify an interface if you have a collection type, because Hibernate provides its own collection implementations.
That said, you don't need to be pedantic about it -- sometimes it doesn't matter. Sometimes you want to specify concrete type to get at methods only available on that type.
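For what it's worth, the same principle reads naturally in Scala too; a hedged sketch, declaring against the general trait rather than the concrete class:

import scala.collection.mutable

object InterfaceDemo {
  // Declared against the general Buffer trait, not ArrayBuffer...
  val users: mutable.Buffer[String] = mutable.ArrayBuffer("ada", "grace")

  // ...so callers depend only on the trait, and the implementation can be
  // swapped (e.g. for mutable.ListBuffer) without touching them.
  def report(all: Seq[String]): String = all.mkString(", ")
}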

Can you suggest any good intro to Scala philosophy and program design?

In Java and C++, designing a program's object hierarchy is fairly straightforward. But starting out with Scala, I find it difficult to decide which classes to define to better employ Scala's syntactic sugar (and I have even less idea how to design for better performance). Any good reading on this question?
I have read 4 books on Scala, but I have not found what you are asking for. I guess you have read "Programming in Scala" by Odersky (Artima) already. If not, this is a link to the on-line version:
http://www.docstoc.com/docs/8692868/Programming-In-Scala
This book gives many examples how to construct object-oriented models in Scala, but all examples are very small in number of classes. I do not know of any book that will teach you how to structure large scale systems using Scala.
Imperative object-orientation has been around since Smalltalk, so we know a lot about this paradigm. Functional object-orientation, on the other hand, is a rather new concept, so in a few years I expect books describing large-scale FOO systems to appear. Anyway, I think that the PiS book gives you a pretty good picture of how you can put together the basic building blocks of a system, like the Factory pattern, how to replace the Strategy pattern with function literals, and so on.
One thing that Viktor Klang once told me (and something I really agree with) is that one difference between C++/Java OO and Scala OO is that you define a lot more (smaller) classes when you use Scala. Why? Because you can! The syntactic sugar of the case class results in a very small penalty for defining a class, both in typing and in readability of the code. And as you know, many small classes usually means better OO (fewer bugs) but worse performance.
One other thing I have noticed is that I use the Factory pattern a lot more when dealing with immutable objects, since every "change" to an instance results in creating a new instance. Thank God for the copy() method on the case class. This method makes the factory methods a lot shorter.
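A minimal sketch of both points (Account and the factory are made-up examples):

// One line buys equals, hashCode, toString, pattern matching and copy().
case class Account(owner: String, balance: BigDecimal)

object AccountFactory {
  // Immutable "changes" create new instances; copy() keeps that short.
  def deposit(a: Account, amount: BigDecimal): Account =
    a.copy(balance = a.balance + amount)
}

// val a  = Account("ada", BigDecimal(100))
// val a2 = AccountFactory.deposit(a, BigDecimal(50))  // Account(ada,150)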
I do not know if this helped you at all, but I think this subject is very interesting myself, and I too await more literature on this subject.
Cheers!
This is still an evolving matter. For instance, the just-released Scala 2.8.0 brought support for type constructor inference, which enabled the type class pattern in Scala. The Scala library itself has just begun using this pattern. Just yesterday I heard of a new Lift module in which they are going to try to avoid inheritance in favor of type classes.
Scala 2.8.0 also introduced lower priority implicits, plus default and named parameters, both of which can be used, separately or together, to produce very different designs than what was possible before.
And if we go back in time, we note that other important features are not that old either:
Extractor methods on case class companion objects were introduced in February 2008 (before that, the only way to do extraction on case classes was through pattern matching).
Lazy values and structural types were introduced in July 2007.
Support for type constructors in abstract types was introduced in May 2007.
Extractors for non-case classes were introduced in January 2007.
It seems that implicit parameters were only introduced in March 2006, when they replaced the way views were implemented.
All that means we are all still learning how to design Scala software. Be sure to rely on tested designs from the functional and object-oriented paradigms, and to see how Scala's newer features are used in other languages, such as type classes in Haskell or default (optional) and named parameters in Python.
Some people dislike this aspect of Scala, others love it. But other languages share it. C# is adding features as fast as Scala. Java is slower, but it goes through changes too. It added generics in 2004, and the next version should bring some changes to better support concurrent and parallel programming.
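As a concrete illustration of one of the items above, extractors for non-case classes come down to a hand-written unapply; a minimal sketch (Email is a made-up example):

// A plain class, not a case class, given pattern-matching support by hand.
class Email(val user: String, val domain: String)

object Email {
  def unapply(e: Email): Option[(String, String)] = Some((e.user, e.domain))
}

// val e = new Email("odersky", "epfl.ch")
// e match { case Email(user, domain) => println(user + " at " + domain) }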
I don't think there are many tutorials for this. I'd suggest staying with the way you do it now, but looking through "idiomatic" Scala code as well and paying special attention to the following cases:
use case classes or case objects instead of enums or "value objects" (see the sketch after this list)
use objects for singletons
if you need behavior "depending on the context" or dependency-injection-like functionality, use implicits
when designing a type hierarchy or if you can factor things out of a concrete class, use traits when possible
Fine-grained inheritance hierarchies are OK. Keep in mind that you have pattern matching
Know the "pimp my library" pattern
And ask as many questions as you feel you need to understand a certain point. The Scala community is very friendly and helpful. I'd suggest the Scala mailing list, Scala IRC or scala-forum.org
I've just accidentally googled to a file called "ScalaStyleGuide.pdf". Going to read...

The evilness of 'var' in C#? [duplicate]

Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var. I think it was largely due to the responses to this topic that I did. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. I think the thing I like most about var is that it REALLY DOES reduce repetition (conforms to DRY), and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on demand, which is generally a necessity anyway, even if you DO know the name of a type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stack Overflow, I have seen var replace pretty much every variable declaration! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can hide many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen:
When accessing an anonymous type
Anonymous types have no compile-time identity, so var is the only option. It's the only reason why var was added...to support anonymous types.
So... what's your opinion? Given the prolific use of var on blogs and forums, and its being suggested/enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity... terseness? (Not sure about the answer to this one myself... perhaps with a disclaimer.)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinions. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e. brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = new[]{"abc","123","yoda"};
It takes me roughly no longer to determine what types those are than with the explicitly redundant specification of the type. As a programmer, I have no issue with letting the compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it's nonsense to define a new type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand on this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile, because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a 'good thing' - i.e. that anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
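For comparison, Scala (covered in other questions on this page) takes a similar middle road: local definitions are inferred much like var, while method signatures stay explicit. A small sketch:

object InferenceDemo {
  val n      = 42                    // inferred: Int
  val names  = List("ada", "grace")  // inferred: List[String]
  val lookup = Map(1 -> "one")       // inferred: Map[Int, String]

  // Parameter types (and usually return types) are still written out:
  def twice(x: Int): Int = x * 2
}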
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
I must admit that when I first saw the var keyword pop up, I was very skeptical.
However, it is definitely an easy way to shorten the lines of a new declaration, and I use it all the time for that.
However, when I change the return type of an underlying method and accept the result using var, I do get the occasional run-time error. Most are still picked up by the compiler.
The second issue I run into is when I am not sure which method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be type FOO when it is type BAR, then it takes a while to figure that out.
If I had explicitly specified the variable type in both cases, it would have saved a bit of frustration.
Overall, the benefits exceed the problems.
I have to dissent from the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come from the IDE, where it can be applied much more liberally with no loss of readability.