Should the 'as' keyword be avoided in Dart? Should 'dynamic' be used over 'Object'? - Flutter

I was reviewing a question recently but could not post a response due to lacking any reputation.
In the question, which regarded a compile-time error coming from using List<Map<String,Object>>, the error arose when trying to pull out the value of the Object, which was known to be either a String or a Widget. My resolution was to use as when reading the values, i.e. 'as String' or 'as Widget' in the appropriate spots.
Another more elegant solution was to replace 'Object' with 'dynamic'.
I remember reading that 'as' was discouraged where possible. I don't know why, and I feel it resolved the issue. Is this simply because the value should be cast to a specific type when created? When trying to recreate this in DartPad I had no issues, so is it potentially just a Flutter issue?
Why does dynamic work, but Object doesn't, in this scenario? I mean, everything is a subtype of Object, right?
Thanks. I can copy and paste code across if required; I felt the context of the attached question was valuable.

I remember reading that 'as' was discouraged where possible. I don't know why, and I feel it resolved the issue. Is this simply because the value should be cast to a specific type when created? When trying to recreate this in DartPad I had no issues, so is it potentially just a Flutter issue?
Explicit type casts are a code smell. They're not necessarily wrong or bad, but they're frequently indicative of APIs that are unnecessarily awkward and perhaps could be designed better to use the correct types in the first place.
Explicit casts are also brittle. They introduce potential runtime failure points. That is, if the actual (runtime) type of the object changed, the cast could fail, and that failure wouldn't be noticed at compilation time.
Why does dynamic work, but Object doesn't, in this scenario? I mean, everything is a subtype of Object, right?
dynamic is a special type that disables static (compile-time) type-checking. Object (like every other non-dynamic type) is statically type-checked, so any methods that you call on it must be statically known to exist on that type.
Note that using dynamic is brittle too, since failures involving dynamic types are inherently runtime failures. Using dynamic can also be less readable in general; except when there's some obvious context, readers won't know what type you expect the object to be and therefore won't know what behavior you expect the called method to have.
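A minimal sketch of the difference, assuming a map like the one in the question but with String and int values in place of Flutter Widgets so it runs in plain Dart:

void main() {
  final items = <Map<String, Object>>[
    {'title': 'hello', 'count': 3},
  ];

  final Object value = items[0]['title']!;
  // print(value.length);            // Compile-time error: Object has no 'length'.
  print((value as String).length);   // Explicit cast, checked at runtime.

  final dynamic d = items[0]['count'];
  print(d.isEven);                   // Compiles, but would throw at runtime
                                     // if the value were not an int.
}

The Object version fails to compile unless you cast; the dynamic version compiles no matter what you call, and simply moves any mistake to runtime.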
Also see the Effective Dart recommendation: AVOID using dynamic unless you want to disable static checking.

Related

Where to define typecast to struct in MATLAB OOP?

In the MATLAB OOP framework, it can be useful to cast an object to a struct, i.e., define a function that takes an object and returns a struct with equivalent fields.
What is the appropriate place to do this? I can think of several options:
Build a separate converter object that takes care of conversions between various classes
Add a function struct to the class that does the conversion to struct, and make the constructor accept structs
Neither option seems very elegant: the first means that logic about the class itself is moved to another class, while the second encourages users to call the struct function on any object, which will in general give a warning (structOnObject).
Are there alternatives?
Personally I'd go with the second option, and not worry about provoking users to call struct on other classes; you can only worry about your own code, not that of a third-party, even if the third party is MathWorks. In any case, if they do start to call struct on an arbitrary class, it's only a warning; nothing actually dangerous is likely to happen, it's just not a good practice.
But if you're concerned about that, you can always call your converter method toStruct rather than struct. Or perhaps the best (although slightly more complex) way might be to overload cast for your class, accepting and handling the option 'struct', and passing any other option through to builtin('cast',....
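As a rough sketch of the toStruct approach (the class name and fields here are hypothetical):

classdef Point
    properties
        x = 0
        y = 0
    end
    methods
        function obj = Point(s)
            % Constructor that optionally accepts a struct, per option 2.
            if nargin > 0 && isstruct(s)
                obj.x = s.x;
                obj.y = s.y;
            end
        end
        function s = toStruct(obj)
            % Named toStruct rather than struct to avoid the
            % structOnObject warning.
            s = struct('x', obj.x, 'y', obj.y);
        end
    end
end

With this, p = Point(struct('x', 1, 'y', 2)); s = toStruct(p); round-trips between object and struct without triggering any warning.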
PS The title of your question refers to typecasting, but what you're after here is casting. In MATLAB, typecasting is a different operation, involving taking the exact bits of one type and reinterpreting them as bits of another type (possibly an array of the output type). See doc cast and doc typecast for more information on the distinction.
The second option sounds much better to me.
A quick and dirty way to get rid of the warning would be disabling it by calling
warning('off', 'MATLAB:structOnObject')
at the start of your program.
The solutions provided in Sam Roberts' answer are however much cleaner. I personally would go for the toStruct() method.

Why NOT use optionals in Swift?

I was reading up on how to program in Swift, and the concept of optionals bugged me a little bit. Not really in terms of why to use optionals, that makes sense, but more in terms of what cases you would not want to use optionals in. From what I understand, an optional just allows an object to be set to nil, so why would you not want that feature? Isn't setting an object to nil the way you tell ARC to release an object? And don't most of the functions in Foundation and Cocoa return optionals? So outside of having to type an extra character each time to refer to an object, is there any good reason NOT to use an optional in place of a regular type?
There are tons of reasons NOT to use optionals. The main reason: you want to express that a value MUST be available. For example, when you open a file, you want the file name to be a string, not an optional string. Using nil as a filename simply makes no sense.
I will consider two main use cases: Function arguments and function return values.
For function arguments, the following holds: if the argument needs to be provided, an optional should not be used. If handing in nothing is okay and a valid (documented!) input, then hand in an optional.
For function return values, returning a non-optional is especially nice: you promise the caller that they will receive an object, not either an object or nothing. When you do not return an optional, the caller knows that they may use the value right away instead of checking for nil first.
For example, consider a factory method. Such a method should always return an object, so why should you use an optional here? There are a lot of examples like this.
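A hedged Swift sketch of that rule (the User type and functions are made up): a factory that always succeeds returns a non-optional, while a lookup that can fail returns an optional.

struct User {
    let name: String
}

// Always succeeds, so the return type is non-optional.
func makeGuestUser() -> User {
    return User(name: "guest")
}

// May find nothing, so the return type is optional.
func findUser(named name: String, in users: [User]) -> User? {
    return users.first { $0.name == name }
}

let guest = makeGuestUser()      // usable immediately, no unwrapping
print(guest.name)

if let alice = findUser(named: "alice", in: [guest]) {
    print(alice.name)            // the compiler forces nil handling here
} else {
    print("no such user")
}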
Actually, most APIs should use non-optionals rather than optionals. Most of the time, simply passing/receiving possibly nothing is just not what you want. There are rather few cases where nothing is an option.
Each case where an optional is used must be thoroughly documented: in which circumstances will a method return nothing? When is it okay to hand nothing to a method, and what will the consequences be? That is a lot of documentation overhead.
Then there is also conciseness: if you use an API that uses optionals all over the place, your code will be cluttered with nil-checks. Of course, if every use of an optional is intentional, then these checks are fine and necessary. However, if the API only uses optionals because its author was lazy, then the checks are unnecessary and pure boilerplate.
But beware!
My answer may sound as if the concept of optionals is quite crappy. The opposite is true! By having a concept like optionals, the programmer is able to declare whether handing in/returning nothing is okay. The caller of a function is always aware of that, and the compiler enforces safety. Compare that to plain old C: you could not declare whether a pointer could be null. You could add documentation comments that state whether it may be null, but such comments were not enforced by the compiler. If the caller forgot to check a return value for null, you received a segfault. With optionals you can be sure that no one dereferences a null pointer anymore.
So in conclusion, a null-safe type system is one of the major advances in modern programming languages.
The original idea of optionals (which existed long before Swift) is to force the programmer to check a value for nil before using it, or to prevent outside code from passing nil where it is not allowed. A huge share of crashes in software, maybe even most of them, happen at address 0x00000000 (or with NullPointerException, or the like) precisely because it is way too easy to forget about the nil-pointer scenario. (In 2009, Tony Hoare apologized for inventing null pointers.)
Not using optionals is as valid and widespread a use case as using them: when the value absolutely cannot be missing, there should be a non-optional type; when it can, there should be an optional.
But currently, the existing frameworks are written in Obj-C without optionals in mind, so automatically generated bridges between Swift and Obj-C just have to take and return optionals, because it is impossible to automatically deeply analyze each method and figure out which arguments and return values should be optionals or not. I'm sure over time Apple will manually fix every case where they got it wrong; right now you should not use those frameworks as an example, because they are definitely not a good one. (For good examples, you could check a popular functional language like Haskell which had optionals since the beginning).

What is the role of the "interface {}" syntax in Go?

I've read through Effective Go and the Go tutorials as well as some source, but the exact mechanism behind the interface{} syntax in Go is somewhat mysterious to me. I first saw it when trying to implement heap.Interface, and it seems to be a container of some kind (it reminds me a little of a monad) from which I can extract values of arbitrary type.
Why is Go written to use this? Is it some kind of workaround for generics? Is there a more elegant way to get values from a heap.Interface than having to dereference them with heap.Pop(&h).(*Foo) (in the case of a heap of pointers to type Foo)?
interface{} is a generic box that can hold anything. Interfaces in Go define a set of methods, and any type which implements these methods conforms to the interface. interface{} defines no methods, so by definition every single type conforms to this interface and can therefore be held in a value of type interface{}.
It's not really like generics at all. Instead, it's a way to relax the type system and say "any value at all can be passed here". The equivalent in C for this functionality is a void * pointer, except in Go you can query the type of the value that's being held.
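A minimal runnable sketch of that idea, including the type assertion syntax the question mentions (the values are arbitrary examples):

package main

import "fmt"

func main() {
    var box interface{} // can hold a value of any type

    box = 42
    if n, ok := box.(int); ok { // type assertion: ask what's inside
        fmt.Println("int:", n)
    }

    box = "hello"
    switch v := box.(type) { // type switch over the dynamic type
    case int:
        fmt.Println("int:", v)
    case string:
        fmt.Println("string:", v)
    }
}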
Here is an excellent blog post that explains what is going on under the hood.

What does "no global type inference" mean regarding Scala?

I have read that Scala's type inference is not global, and that is why people must place type annotations on methods. (Would this be "local" type inference?)
I understand only a little that the reason stems from its object-oriented nature, but clarity eludes me. Is there an explanation of "global type inference", and of why Scala cannot have it, that a beginner might understand?
The problem is that HM type inference is undecidable in general in a language with subtyping, overloading, or similar features [Ref]. This means more and more stuff could be added to the inferencer to make it infer more special cases, but there will always be code where it will fail.
Scala has made the decision to make type annotations on method arguments and in some other places mandatory. This might seem like a hassle at first, but consider that it helps to document the code and provides the compiler with information it can understand in one place. Additionally, languages with HM inference often suffer from the problem that programming errors are sometimes detected in code far away from the original mistake, because the HM algorithm just went along and happened (by chance) to infer other parts of the code with the faulty type it inferred before it failed.
Scala's inference basically works from the outside (method definition) to the inside (code inside the method) and therefore limits the impact of a wrong type annotation.
Languages with HM inference work from the inside to the outside (ignoring the possibility to add type annotations) which means there is a chance that a small code change in one single method can change the meaning of the whole program. This can be good or bad.
Ref: Lower bounds on type inference with subtypes
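A small Scala sketch of this outside-in style, with made-up names: the parameter type is annotated, while the locals inside the method body are inferred.

object InferenceDemo {
  // The parameter type Int is mandatory; the return type could often be
  // inferred, but annotating it documents the contract.
  def describe(n: Int): String = {
    val doubled = n * 2                 // inferred as Int
    val label = s"doubled = $doubled"   // inferred as String
    label
  }

  def main(args: Array[String]): Unit =
    println(describe(21))
}

A wrong type inside describe stays a local error; it cannot silently change the inferred signature that the rest of the program sees.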
The typical example of global type inference is Hindley-Milner: it takes a given program and "calculates" all the necessary types. However, in order to achieve this, the language needs to have certain properties (there are extensions to HM which try to overcome some of these restrictions). Two things HM doesn't like are inheritance and method overloading. As far as I understand, these are the main obstacles to Scala adopting HM or some variant of it. Note that in practice even languages which heavily rely on HM never reach 100% inference; e.g., even in Haskell you need a type annotation from time to time.
So Scala uses a more limited (as you say "local") form of type inference, which is still better than nothing. As far as I can tell the Scala team tries to improve the type inference from release to release when it is possible, but so far I've seen only smaller steps. The gap to a HM style type inferencer is still huge, and can't be closed completely.

The evilness of 'var' in C#? [duplicate]

Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var. I think it was largely due to the responses to this topic that I did. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. I think the thing I like most about var is that it REALLY DOES reduce repetition (conforms to DRY) and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on-demand, which is generally a necessity anyway, even if you DO know the name of a type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stack Overflow, I have seen var replace pretty much every variable being written! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can harbor many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen is:
When accessing an anonymous type
Anonymous types have no compile-time identity, so var is the only option. It's the only reason why var was added...to support anonymous types.
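For instance (a hedged one-liner with made-up values): the anonymous type below has no name you could write out, so var is the only way to hold it in a typed local.

using System;

var person = new { Name = "Ada", Age = 36 };  // anonymous type: var is required
Console.WriteLine(person.Name);               // members are still statically checked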
So...what's your opinion? Given the prolific use of var on blogs, in forums, and as suggested/enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity...terseness? (Not sure about the answer to this one myself...perhaps with a disclaimer)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinions. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e. brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = new[]{"abc","123","yoda"};
It takes me roughly no longer to determine what types those are than with the explicitly redundant specification of the type. As a programmer I have no issues with letting a compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it's nonsense to define a new type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and I heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner would count as a 'good thing': anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
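A quick sketch of that distinction (assuming C# 4.0+ for dynamic; the misspelled member is deliberate):

using System;

var s = "hello";                  // var: compile-time type is string
Console.WriteLine(s.Length);      // checked by the compiler
// Console.WriteLine(s.Lenght);   // typo: compile-time error

dynamic d = "hello";              // dynamic: static checking disabled
Console.WriteLine(d.Length);      // member lookup happens at runtime
// Console.WriteLine(d.Lenght);   // typo: compiles, but throws RuntimeBinderException at runtime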
I must admit that when I first saw the var keyword pop up, I was very skeptical.
However, it is definitely an easy way to shorten a new declaration, and I use it all the time for that.
However, when I change the return type of an underlying method and accept the result using var, I do get the occasional runtime error. Most mistakes are still picked up by the compiler.
The second issue I run into is when I am not sure what method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be type FOO when it is type BAR, then it takes a while to figure that out.
If I had literally specified the variable type in both cases, it would have saved a bit of frustration.
Overall, though, the benefits exceed the problems.
I have to dissent from the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come from the IDE, where it can be applied much more liberally with no loss of readability.