I'm new to Swift programming, and I've come across the var and let keywords. I know that let declares a constant and I know what that means, but I've never used a constant, mainly because I didn't need to. So why should I use let instead of var, and in what situations should I use it?
Rather than constant and variable, the correct terminology in Swift is immutable and mutable.
You use let when you know that once you assign a value to a variable, it won't change - i.e. it is immutable. If you declare the id of a table view cell, most likely it won't change during its lifetime, so by declaring it as immutable there's no risk of mistakenly changing it - the compiler will inform you if you try.
Typical use cases:
A constant (the timeout of a timer, or the width of a fixed sized label, the max number of login attempts, etc.). In this scenario the constant is a replacement for the literal value spread over the code (think of #define)
the return value of a function used as input for another function
the intermediate result of an expression, to be used as input for another expression
a container for an unwrapped value in optional binding
the data returned by a REST API call, deserialized from JSON into a struct, which must be stored in a database
and a lot more. Every time I write var, I ask myself: can this variable change? If the answer is no, I replace var with let. Sometimes I also take a more protective approach: I declare everything as immutable, then the compiler lets me know whenever I try to modify one of them, and for each case I can proceed accordingly. (A small sketch of a few of these use cases follows.)
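A minimal sketch of a few of these use cases in Swift (the names, such as maxLoginAttempts and rawPort, are invented for illustration):

// A named constant instead of a literal value spread over the code
let maxLoginAttempts = 3

// The intermediate result of an expression, used as input for another one
let greeting = "Hello, " + "world"
let shouted = greeting.uppercased()

// A container for an unwrapped value in optional binding
let rawPort = "8080"
if let port = Int(rawPort) {
    print("\(shouted) - connecting on port \(port), max \(maxLoginAttempts) attempts")
}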
Some considerations:
For reference types (classes), immutable means that once you assign an instance to the immutable variable, you cannot assign another instance to the same variable.
For value types (numbers, strings, arrays, dictionaries, structs, enums) immutable means that once you assign a value, you cannot change the value itself. For simple data types (Int, Float, String) it means you cannot assign another value. For composite data types (structs, arrays, dictionaries) it means you cannot assign a new value (such as a new instance of a struct) and you cannot change any of their stored properties.
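A small, hedged sketch of that distinction (the Counter and Point types below are made up for illustration):

final class Counter {            // reference type
    var value = 0
}

struct Point {                   // value type
    var x = 0
    var y = 0
}

let counter = Counter()
counter.value = 42               // allowed: the reference is immutable, the instance is not
// counter = Counter()           // compile error: cannot assign to a 'let' constant

let origin = Point()
// origin.x = 1                  // compile error: cannot mutate a property of a 'let' struct
// origin = Point(x: 1, y: 2)    // compile error: cannot assign a new value either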
Also, an immutable variable has a semantic meaning for the developer and anyone reading the code - it clearly states that the variable won't change.
Last, but maybe less important from a pure development point of view, immutable values can be subject to optimizations by the compiler.
Generally speaking, mutable state should be avoided as much as possible.
Immutable values help in reasoning about code, because you can easily track them down and clearly identify the value from the start to the end.
Mutable variables, on the other hand, make it difficult to follow your data flow, since anyone can modify them at any time. Especially when dealing with concurrent applications, reasoning about mutable state can quickly become an incredibly hard task.
So, as a design principle, try to use let whenever possible and if you need to modify an object, simply produce a new instance.
Whenever you need to use a var, perhaps because using it makes the code clearer, try to limit its scope as much as possible and not to expose any mutable state. As an example, if you declare a var inside a function, it's safe to do so as long as you don't expose that mutability to the caller, i.e. from the caller's point of view it must not matter whether you used a var or a let in the implementation.
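A rough sketch of that idea (the wordCounts function is invented for illustration):

func wordCounts(in words: [String]) -> [String: Int] {
    var counts: [String: Int] = [:]    // mutable, but confined to this function
    for word in words {
        counts[word, default: 0] += 1
    }
    return counts                      // the caller only ever sees the finished value
}

let counts = wordCounts(in: ["a", "b", "a"])   // the caller cannot tell a var was used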
In general, if you know a variable's value is not going to change, declare it as a constant. Immutable variables make your code more readable, as you know for sure that a particular variable is never changed. This might also be better for the compiler, as it can take advantage of the fact that a variable is constant and perform some optimisations.
This doesn't only apply to Swift. Even in C, when the value of a variable is not to be changed after being initialised, it's good practice to make it const.
So the way you think about "I didn't need to" should change. You don't need a constant only for values like TIMEOUT etc. You should declare a constant anywhere you know a variable's value doesn't need to change after initialisation.
Note: This is more of a general "programming as a whole" answer and not specific to Swift. @Antonio's answer has more of a focus on Swift.
I mean, is there some declarative way to prevent an object from changing any of its members?
In the following example
class student(var name:String)
val s = new student("John")
"s" has been declared as a val, so it will always point to the same student.
But is there some way to prevent s.name from being changed, just by declaring it as immutable somehow?
Or is the only solution to declare everything as a val and manually enforce immutability?
No, it's not possible to declare something deeply immutable like that. You have to enforce immutability yourself, by not allowing anyone to change it, that is, by removing all ways of modifying the class.
Someone can still modify it using reflection, but that's another story.
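For comparison with the Swift discussion earlier on this page, the usual way to get that effect by construction is to give the type only immutable members, so the compiler itself rejects every mutation path. A hedged sketch in Swift (the Student type mirrors the one in the question):

struct Student {
    let name: String          // no setter exists, so nothing can change it after initialization
}

let s = Student(name: "John")
// s.name = "Jane"            // compile error: 'name' is a 'let' constant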
Scala doesn't enforce that, so there is no way to know. There is, however, an interesting compiler-plugin project named pusca (I guess it stands for Pure-Scala). Pure is defined there as not mutating a non-local variable and being side-effect free (e.g. not printing to the console) - so that calling a pure method repeatedly will always yield the same result (this is called referential transparency).
I haven't tried that plug-in myself, so I can't say how stable or usable it is yet.
There is no way that Scala could do this generally.
Consider the following hypothetical example:
class Student(var name : String, var course : Course)

def stuff(course : Course) {
    magically_pure_val s = new Student("Fredzilla", course)
    someFunctionOfStudent(s)
    genericHigherOrderFunction(s, someFunctionOfStudent)
    course.someMethod()
}
The pitfalls for any attempt to actually implement that magically_pure_val keyword are:
someFunctionOfStudent takes an arbitrary student, and isn't implemented in this compilation unit. It was written/compiled knowing that Student consists of two mutable fields. How do we know it doesn't actually mutate them?
genericHigherOrderFunction is even worse; it's going to take our Student and a function of Student, but it's written polymorphically. Whether or not it actually mutates s depends on what its other arguments are; determining that at compile time with full generality requires solving the Halting Problem.
Let's assume we could get around that (maybe we could set some secret flags that mean exceptions get raised if the s object is actually mutated, though personally I wouldn't find that good enough). What about that course field? Does course.someMethod() mutate it? That method call isn't invoked from s directly.
Worse than that, we only know that we'll have passed in an instance of Course or some subclass of Course. So even if we are able to analyze a particular implementation of Course and Course.someMethod and conclude that this is safe, someone can always add a new subclass of Course whose implementation of someMethod mutates the Course.
There's simply no way for the compiler to check that a given object cannot be mutated. The pusca plugin mentioned by 0__ appears to detect purity the same way Mercury does; by ensuring that every method is known from its signature to be either pure or impure, and by raising a compiler error if the implementation of anything declared to be pure does anything that could cause impurity (unless the programmer promises that the method is pure anyway).[1]
This is quite different from simply declaring a value to be completely (and deeply) immutable and expecting the compiler to notice if any of the code that could touch it could mutate it. It's also not a perfect inference, just a conservative one.
[1] The pusca README claims that it can infer impurity of methods whose last expression is a call to an impure method. I'm not quite sure how it can do this, as checking whether that last expression is an impure call requires checking whether it's calling a not-declared-impure method that should be declared impure by this rule, and the implementation might not be available to the compiler at that point (and indeed could be changed later even if it is). But all I've done is look at the README and think about it for a few minutes, so I might be missing something.
I have a doubt.
I know that the main difference between a function and a procedure is that
a function compulsorily returns a value, whereas a procedure may or may not return a value.
But when we use a function of type void, it returns nothing.
Can you please clarify my doubt?
Traditionally, a procedure returning a value has been called a function (see below), however, many modern languages dispense with the term procedure altogether, preferring to use the term function for all named code blocks.
Read more at Suite101: Procedure, subroutine or function?: Programming terminology 101 - a look at the differences in approach and definition of procedures, subroutines and functions. http://www.suite101.com/content/procedure--subroutine-or-function--a8208#ixzz1GqkE7HjE
In C and its derivatives, the term "procedure" is rarely used. C has functions, some of which return a value and some of which don't. I think this is an artefact of C's heritage: before the introduction of void in ANSI C, there was no way not to return a value. By default, functions returned an int, which you could ignore (and still can) and which might be some random number if no explicit return value was specified.
In the Pascal language family, the difference is explicit: functions return a value and procedures don't, and a different keyword is used in each case for the definition. Visual Basic also differentiates, between functions and subroutines.
Since we are talking about Objective-C, there are some further issues to confuse you. Functions associated with a class or object are known as "methods" (class methods and instance methods respectively).
Also, if we are being pedantic, you don't call Objective-C methods, you invoke them by sending a message to the object. The distinction is actually quite important because the message name (aka "selector") does not necessarily always refer to the same method, it can be changed at run time. This is fundamentally different to languages like Java and C++ where a particular method name for a particular class is really just a symbolic name for the address of the block of code constituting the body of the method.
Depending on the programming language, the distinction may be not so clear. Let's take a conservative language, Pascal:
procedure indeed has no return value. It is used for operations which do not have a return value, or have multiple return values. In the latter case, multiple arguments (the return-arguments or output-arguments) are passed by reference (using the var keyword) and their values are directly modified from inside the procedure. (Note that this latter case may not be considered good practice, depending on the circumstances).
function has a single return value, and usually we do not expect it to change the value of any of its arguments (which arguments may then be passed by value, or via the const keyword). Multiple return values may be returned by bundling them into a record.
C and Java do not distinguish syntactically, so a function with return type void can be thought of as a procedure. Scala distinguishes between them by the presence of an equals sign between the method head and the method body.
Generally, no matter how an actual language calls its construct, we would ideally expect that
A function takes arguments, doesn't modify any state (like mutating arguments, global variables, or printing info for the user to the console), and returns the result of computation.
A procedure takes arguments, performs operations which can have side-effects (writing to a database, printing to the console, maybe mutating variables), but hopefully doesn't mutate any arguments.
In practice, however, depending on the situation, blends of these expectations can be observed. Sticking to these guidelines helps, I think (see the sketch below).
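A rough sketch of that distinction using Swift syntax (both are declared with func; the difference is only in the contract they follow):

// "Function" in the above sense: takes arguments, mutates nothing, returns a result.
func area(width: Double, height: Double) -> Double {
    return width * height
}

// "Procedure" in the above sense: performs a side effect and returns no value.
func printArea(width: Double, height: Double) {
    print("area = \(area(width: width, height: height))")
}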
What is the difference between a strongly typed language and a statically typed language?
Also, does one imply the other?
A statically typed language has a type system that is checked at compile time by the implementation (a compiler or interpreter). The type check rejects some programs, and programs that pass the check usually come with some guarantees; for example, the compiler guarantees not to use integer arithmetic instructions on floating-point numbers.
There is no real agreement on what "strongly typed" means, although the most widely used definition in the professional literature is that in a "strongly typed" language, it is not possible for the programmer to work around the restrictions imposed by the type system. This term is almost always used to describe statically typed languages.
Static vs dynamic
The opposite of statically typed is "dynamically typed", which means that
Values used at run time are classified into types.
There are restrictions on how such values can be used.
When those restrictions are violated, the violation is reported as a (dynamic) type error.
For example, Lua, a dynamically typed language, has a string type, a number type, and a Boolean type, among others. In Lua every value belongs to exactly one type, but this is not a requirement for all dynamically typed languages. In Lua, it is permissible to concatenate two strings, but it is not permissible to concatenate a string and a Boolean.
Strong vs weak
The opposite of "strongly typed" is "weakly typed", which means you can work around the type system. C is notoriously weakly typed because any pointer type is convertible to any other pointer type simply by casting. Pascal was intended to be strongly typed, but an oversight in the design (untagged variant records) introduced a loophole into the type system, so technically it is weakly typed.
Examples of truly strongly typed languages include CLU, Standard ML, and Haskell. Standard ML has in fact undergone several revisions to remove loopholes in the type system that were discovered after the language was widely deployed.
What's really going on here?
Overall, it turns out to be not that useful to talk about "strong" and "weak". Whether a type system has a loophole is less important than the exact number and nature of the loopholes, how likely they are to come up in practice, and what are the consequences of exploiting a loophole. In practice, it's best to avoid the terms "strong" and "weak" altogether, because
Amateurs often conflate them with "static" and "dynamic".
Apparently "weak typing" is used by some persons to talk about the relative prevalance or absence of implicit conversions.
Professionals can't agree on exactly what the terms mean.
Overall you are unlikely to inform or enlighten your audience.
The sad truth is that when it comes to type systems, "strong" and "weak" don't have a universally agreed on technical meaning. If you want to discuss the relative strength of type systems, it is better to discuss exactly what guarantees are and are not provided.
For example, a good question to ask is this: "is every value of a given type (or class) guaranteed to have been created by calling one of that type's constructors?" In C the answer is no. In CLU, F#, and Haskell it is yes. For C++ I am not sure—I would like to know.
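To make that question concrete, here is a hedged Swift sketch of such a guarantee (the Email type is invented for illustration): code outside the defining module can only obtain an Email value by going through the one validating initializer.

public struct Email {
    public let address: String

    // The only public way to obtain an Email is this failable, validating initializer.
    public init?(_ raw: String) {
        guard raw.contains("@") else { return nil }
        self.address = raw
    }
}

let ok = Email("user@example.com")    // Optional(Email(...))
let bad = Email("not-an-address")     // nil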
By contrast, static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors".
Does one imply the other?
On a pedantic level, no, because the word "strong" doesn't really mean anything. But in practice, people almost always do one of two things:
They (incorrectly) use "strong" and "weak" to mean "static" and "dynamic", in which case they (incorrectly) are using "strongly typed" and "statically typed" interchangeably.
They use "strong" and "weak" to compare properties of static type systems. It is very rare to hear someone talk about a "strong" or "weak" dynamic type system. Except for FORTH, which doesn't really have any sort of a type system, I can't think of a dynamically typed language where the type system can be subverted. Sort of by definition, those checks are bulit into the execution engine, and every operation gets checked for sanity before being executed.
Either way, if a person calls a language "strongly typed", that person is very likely to be talking about a statically typed language.
This is often misunderstood so let me clear it up.
Static/Dynamic Typing
Static typing is where the type is bound to the variable. Types are checked at compile time.
Dynamic typing is where the type is bound to the value. Types are checked at run time.
So in Java for example:
String s = "abcd";
s will "forever" be a String. During its life it may point to different Strings (since s is a reference in Java). It may have a null value but it will never refer to an Integer or a List. That's static typing.
In PHP:
$s = "abcd"; // $s is a string
$s = 123; // $s is now an integer
$s = array(1, 2, 3); // $s is now an array
$s = new DOMDocument; // $s is an instance of the DOMDocument class
That's dynamic typing.
Strong/Weak Typing
(Edit alert!)
Strong typing is a phrase with no widely agreed upon meaning. Most programmers who use this term to mean something other than static typing use it to imply that there is a type discipline that is enforced by the compiler. For example, CLU has a strong type system that does not allow client code to create a value of abstract type except by using the constructors provided by the type. C has a somewhat strong type system, but it can be "subverted" to a degree because a program can always cast a value of one pointer type to a value of another pointer type. So for example, in C you can take a value returned by malloc() and cheerfully cast it to FILE*, and the compiler won't try to stop you—or even warn you that you are doing anything dodgy.
(The original answer said something about a value "not changing type at run time". I have known many language designers and compiler writers and have not known one that talked about values changing type at run time, except possibly some very advanced research in type systems, where this is known as the "strong update problem".)
Weak typing implies that the compiler does not enforce a typing discipline, or perhaps that enforcement can easily be subverted.
The original of this answer conflated weak typing with implicit conversion (sometimes also called "implicit promotion"). For example, in Java:
String s = "abc" + 123; // "abc123";
This code is an example of implicit promotion: 123 is implicitly converted to a string before being concatenated with "abc". It can be argued that the Java compiler rewrites that code as:
String s = "abc" + new Integer(123).toString();
Consider a classic PHP "starts with" problem:
if (strpos('abcdef', 'abc') == false) {
    // not found
}
The error here is that strpos() returns the index of the match, which is 0. 0 is coerced into boolean false and thus the condition is actually true. The solution is to use === instead of == to avoid the implicit conversion.
This example illustrates how a combination of implicit conversion and dynamic typing can lead programmers astray.
Compare that to Ruby:
val = "abc" + 123
which is a runtime error because in Ruby the object 123 is not implicitly converted just because it happens to be passed to a + method. In Ruby the programmer must make the conversion explicit:
val = "abc" + 123.to_s
Comparing PHP and Ruby is a good illustration here. Both are dynamically typed languages but PHP has lots of implicit conversions and Ruby (perhaps surprisingly if you're unfamiliar with it) doesn't.
Static/Dynamic vs Strong/Weak
The point here is that the static/dynamic axis is independent of the strong/weak axis. People confuse them probably in part because strong vs weak typing is not only less clearly defined, but there is also no real consensus on exactly what is meant by strong and weak. For this reason strong/weak typing is far more a shade of grey than black or white.
So to answer your question: another way to look at this that's mostly correct is to say that static typing is compile-time type safety and strong typing is runtime type safety.
The reason for this is that variables in a statically typed language have a type that must be declared and can be checked at compile time. A strongly-typed language has values that have a type at run time, and it's difficult for the programmer to subvert the type system without a dynamic check.
But it's important to understand that a language can be Static/Strong, Static/Weak, Dynamic/Strong or Dynamic/Weak.
Both are poles on two different axes:
strongly typed vs. weakly typed
statically typed vs. dynamically typed
Strongly typed means a variable will not be automatically converted from one type to another. Weakly typed is the opposite: Perl can use a string like "123" in a numeric context by automatically converting it into the int 123. A strongly typed language like Python will not do this.
Statically typed means the compiler figures out the type of each variable at compile time. Dynamically typed languages only figure out the types of variables at runtime.
Strongly typed means that there are restrictions on conversions between types.
Statically typed means that the types are not dynamic - you cannot change the type of a variable once it has been created.
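As a quick, hedged illustration of both axes at once, here is a sketch in Swift, which is statically typed and fairly strict about conversions:

let label: String = "2"
let count: Int = 8

// let sum = label + count        // compile-time error: no implicit conversion between String and Int
let sum = label + String(count)   // explicit conversion required; sum == "28"

// label = "3"                    // and neither the type nor the value of 'label' can change later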
An answer is already given above. Here I'll try to differentiate between the strong vs. weak and static vs. dynamic concepts.
What is Strongly typed VS Weakly typed?
Strongly Typed: values will not be automatically converted from one type to another.
In strongly typed languages like Go or Python, "2" + 8 will raise a type error, because they don't allow "type coercion".
Weakly (loosely) Typed: values will be automatically converted from one type to another.
Weakly typed languages like JavaScript or Perl won't throw an error; in this case JavaScript produces the result '28' and Perl produces 10.
Perl Example:
my $a = "2" + 8;
print $a,"\n";
Save it to main.pl, run perl main.pl, and you will get the output 10.
What is Static VS Dynamic type?
In programming, programmers define static typing and dynamic typing with respect to the point at which variable types are checked. Statically typed languages are those in which type checking is done at compile time, whereas dynamically typed languages are those in which type checking is done at run time.
Static: Types checked before run-time
Dynamic: Types checked on the fly, during execution
What does this mean?
Go checks types before run time (a static check). This means it doesn't just translate and type-check the code it's executing; it scans through all of the code, and a type error is reported before the code is even run. For example,
package main

import "fmt"

func foo(a int) {
    if a > 0 {
        fmt.Println("I am feeling lucky (maybe).")
    } else {
        fmt.Println("2" + 8)
    }
}

func main() {
    foo(2)
}
Save this file as main.go and run it; you will get a compilation failure message:
go run main.go
# command-line-arguments
./main.go:9:25: cannot convert "2" (type untyped string) to type int
./main.go:9:25: invalid operation: "2" + 8 (mismatched types string and int)
But this is not the case for Python. For example, the following block of code will execute for the first foo(2) call and fail for the second foo(0) call. That's because Python is dynamically typed: it only translates and type-checks the code it's executing. The else block never executes for foo(2), so "2" + 8 is never even looked at; for the foo(0) call it tries to execute that block and fails.
def foo(a):
    if a > 0:
        print 'I am feeling lucky.'
    else:
        print "2" + 8
foo(2)
foo(0)
You will see the following output:
python main.py
I am feeling lucky.
Traceback (most recent call last):
File "pyth.py", line 7, in <module>
foo(0)
File "pyth.py", line 5, in foo
print "2" + 8
TypeError: cannot concatenate 'str' and 'int' objects
Data coercion does not necessarily mean weakly typed, because sometimes it's syntactic sugar:
The example above of Java being weakly typed because of
String s = "abc" + 123;
is not an example of weak typing, because it's really doing:
String s = "abc" + new Integer(123).toString()
Data coercion is also not weak typing if you are constructing a new object.
Java is a very bad example of weak typing (and any language that has good reflection will most likely not be weakly typed), because the runtime of the language always knows what the type is (the exception might be native types).
This is unlike C. C is one of the best examples of a weakly typed language. The runtime has no idea whether 4 bytes represent an integer, a struct, a pointer or 4 characters.
The runtime of the language really defines whether or not it's weakly typed; otherwise it's really just opinion.
EDIT:
After further thought, this is not necessarily true, as the runtime does not have to have all the types reified in the runtime system to be a strongly typed system.
Haskell and ML have such complete static analysis that they can potentially omit type information from the runtime.
One does not imply the other. For a language to be statically typed it means that the types of all variables are known or inferred at compile time.
A strongly typed language does not allow you to use one type as another. C is a weakly typed language and is a good example of what strongly typed languages don't allow. In C you can pass a data element of the wrong type and it will not complain. In strongly typed languages you cannot.
Strong typing probably means that variables have a well-defined type and that there are strict rules about combining variables of different types in expressions. For example, if A is an integer and B is a float, then the strict rule about A+B might be that A is cast to a float and the result returned as a float. If A is an integer and B is a string, then the strict rule might be that A+B is not valid.
Static typing probably means that types are assigned at compile time (or its equivalent for non-compiled languages) and cannot change during program execution.
Note that these classifications are not mutually exclusive; indeed, I would expect them to occur together frequently. Many strongly typed languages are also statically typed.
And note that when I use the word 'probably' it is because there are no universally accepted definitions of these terms. As you will already have seen from the answers so far.
IMHO, it is better to avoid these definitions altogether. Not only is there no agreed-upon definition of the terms, but the definitions that do exist tend to focus on technical aspects, for example: are operations on mixed types allowed, and if not, is there a loophole that bypasses the restrictions, such as working around them using pointers?
Instead, and emphasizing again that this is an opinion, one should focus on the question: does the type system make my application more reliable? This is an application-specific question.
For example: if my application has a variable named acceleration, then clearly if the way the variable is declared and used allows the assignment of the value "Monday" to acceleration it is a problem, as clearly an acceleration cannot be a weekday (and a string).
Another example: in Ada one can define subtype Month_Day is Integer range 1..31;. The type Month_Day is weak in the sense that it is not a separate type from Integer (because it is a subtype), but it is restricted to the range 1..31. In contrast, type Month_Day is new Integer; will create a distinct type, which is strong in the sense that it cannot be mixed with integers without explicit casting - but it is not restricted and can receive the senseless value -17. So technically it is stronger, but it is less reliable.
Of course, one can declare type Month_Day is new Integer range 1..31; to create a type which is distinct and restricted.
What is the difference between forward declaration and forward reference?
Forward declaration is, in my head, when you declare a function that isn't yet implemented, but is this incorrect? Or does it depend on the specific situation whether a given case is called a "forward reference" or a "forward declaration"?
A forward declaration is the declaration of a method or variable before you implement and use it. The purpose of forward declarations is to let the compiler resolve references to a symbol before its definition appears, which is what makes single-pass compilation possible.
The forward declaration of a variable tells the compiler the variable's name and type before its definition (which actually sets aside storage) is seen, so you can refer to that variable earlier in the code.
The forward declaration of a function is also called a "function prototype", and is a declaration statement that tells the compiler what a function's return type is, what the name of the function is, and the types of its parameters. Compilers for languages such as C/C++ and Pascal store declared symbols (which include functions) in a lookup table and reference them as they come across them in your code. These compilers read your code sequentially, that is, top to bottom, so if you don't forward declare, the compiler encounters a symbol that it can't find in the lookup table and raises an error because it doesn't know how to handle the call.
The forward declaration is a hint to the compiler that you have defined (filled out the implementation of) the function elsewhere.
For example:
int first(int x); // forward declaration of first
...
int first(int x) {
    if (x == 0) return 1;
    else return 2;
}
But, you ask, why don't we just have the compiler make two passes on every source file: the first one to index all the symbols inside, and the second to parse the references and look them up? According to Dan Story:
When C was created in 1972, computing resources were much more scarce and at a high premium -- the memory required to store a complex program's entire symbolic table at once simply wasn't available in most systems. Fixed storage was also expensive, and extremely slow, so ideas like virtual memory or storing parts of the symbolic table on disk simply wouldn't have allowed compilation in a reasonable timeframe... When you're dealing with magnetic tape where seek times were measured in seconds and read throughput was measured in bytes per second (not kilobytes or megabytes), that was pretty meaningful.

C++, while created almost 17 years later, was defined as a superset of C, and therefore had to use the same mechanism.

By the time Java rolled around in 1995, average computers had enough memory that holding a symbolic table, even for a complex project, was no longer a substantial burden. And Java wasn't designed to be backwards-compatible with C, so it had no need to adopt a legacy mechanism. C# was similarly unencumbered.

As a result, their designers chose to shift the burden of compartmentalizing symbolic declaration back off the programmer and put it on the computer again, since its cost in proportion to the total effort of compilation was minimal.
In Java and C#, identifiers are recognized automatically from source files and read directly from dynamic library symbols. In these languages, header files are not needed for the same reason.
A forward reference is the opposite. It refers to the use of an entity before its declaration. For example:
int first(int x) {
    if (x == 0) return 1;
    return second(x-1); // forward reference to second
}

int second(int x) {
    if (x == 0) return 0;
    return first(x-1);
}
Note that "forward reference" is used sometimes, though less often, as a synonym for "forward declaration."
From Wikipedia:
Forward Declaration
Declaration of a variable or function which is not defined yet; its definition comes later on.
Forward Reference
Similar to a forward declaration, but the variable or function is actually used before its definition appears; the definition comes later in the code.
Forward declarations are used to allow single-pass compilation of a language (C, Pascal).
If forward references are allowed without a forward declaration (Java, C#), a two-pass compiler is required.
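As a small illustration of the second point, here is the same kind of mutual reference in Swift, which (like Java and C#) resolves names across the whole scope and needs no forward declarations; the Parity enum is just a namespace invented for the example:

enum Parity {
    static func isEven(_ n: Int) -> Bool {
        return n == 0 ? true : isOdd(n - 1)    // references isOdd before its definition appears
    }

    static func isOdd(_ n: Int) -> Bool {
        return n == 0 ? false : isEven(n - 1)
    }
}

print(Parity.isEven(10))   // true (assumes n >= 0)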