JsInterop for Java primitive object wrapper classes - gwt

GWT makes collection classes available via JsInterop out of the box, but the same is not true for Integer, Long, Character, and so on (the primitive object wrapper classes).
Is there a reason for that?
In order to make those Java emulated classes available via JsInterop, I needed to copy them into my own source, putting them in the same package as the one in gwt-user, then manually modifying them to use JsType (and fixing all method name clashes, as well as making only one constructor available via JsInterop).
Is there any easier way for achieving that other than doing this?

In order for those "boxed primitives" to behave as correctly as possible while in the Java side of your code, they need to actually be objects, rather than whatever might make sense as a primitive.
Double and Boolean get special treatment (as of GWT 2.6 or so), such that they can pass seamlessly between Java and JS. This makes sense for those types, since they actually are the "same" on both sides in terms of the values that can possibly be assigned (a js boolean is always nullable, so java.lang.Boolean makes sense, and a js number is specified to be a nullable 64-bit IEEE 754 floating point number, so likewise it makes sense to be java.lang.Double), but this comes at a cost: any js number would always pass an instanceof Double check, even if it started its life as a java int.
In contrast, the other Java primitives have no JS counterpart, and so may even behave weirdly as primitives, much less Objects.
char, byte - outside of a string with a single character in it, JS doesn't have a notion of a single character or byte. You can technically use these primitives as long as you accept any precision issues, but exposing their boxed variants doesn't really make sense, since they don't really fit the JS model.
int, short, float - these look like they make sense to pass from Java to JS as a "number", but if they come back to Java as a number there is the possibility that they would be too big or too precise - without an explicit cast you are just trusting that they make sense. Adding two floats may also give you an unexpected result, since GWT doesn't emulate 32-bit floating point values; it just lets JS treat them as 64-bit values. As with char/byte, it doesn't make sense to treat these like Objects, since they really aren't the same at all.
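The float caveat can be seen on a plain JVM, with no GWT involved: 32-bit float arithmetic produces a different value than the 64-bit double arithmetic a JS engine would perform for the "same" expression. A minimal sketch:

```java
public class FloatPrecision {
    public static void main(String[] args) {
        // 32-bit float arithmetic, as real Java performs it
        float f = 0.1f + 0.2f;
        // 64-bit double arithmetic, as a JS engine would perform it
        double d = 0.1 + 0.2;
        // Widening the float result does not recover the double result
        System.out.println((double) f == d); // prints false
    }
}
```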
long/java.lang.Long is an even more special case - it isn't possible in JS (until BigInt arrived, which is still not the same thing) to represent precise integers larger than +/- 2^53, since all numbers in JS are 64-bit floats. Correctly handling Java long values requires emulation, and expensive math, so even primitive longs that pass back and forth to JS risk either losing precision, or ending up as an Object in JS (with fields for the "high", "medium", and "low" bits of the full value).
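A plain JVM snippet (again, no GWT needed) shows why long values can't simply ride along as JS numbers: an integer just above 2^53 silently loses its low bit when stored in a 64-bit float, which is all a plain JS number can offer:

```java
public class LongPrecision {
    public static void main(String[] args) {
        long big = (1L << 53) + 1;  // 9007199254740993, exact as a long
        double asJsNumber = big;    // what a plain JS number would store
        // The odd value does not survive the round trip through a 64-bit float
        System.out.println((long) asJsNumber == big); // prints false
    }
}
```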
Consider some example code where you box some simple primitives, and interact with external JS:
@JsMethod
public void takeNumber(Number foo) {
  if (foo instanceof Integer) {
    //...
  } else if (foo instanceof Double) {
    //...
  }
}
How can that instanceof work, if Integer, Double, etc. are all the equivalent of a JS number? What if this method doesn't exist in JS at all... could you guarantee that it would never be called from any JS code? What if the argument were an Object instead, so that arbitrary JS values could be passed in, and you could test whether it was a String, Double, Integer, etc. and respond accordingly?
What if both Integer and Double were the same when passed in and out of JS for a value like zero - would plain Java zeros be implemented differently in only-Java parts of your program? Would instanceof behave differently in some parts than others, depending on if it were at all possible that JS values could reach them?
--
In order to be the most consistent with how the "outside JS world" behaves, you almost always want to pass Double, Boolean when dealing with these sorts of values - this lets you test for null (JS has no checker to confirm an API isn't surprising you and passing null when it isn't legal), and do any kinds of bounds checking that might be required to see what you should do with the value. For some APIs you can get away with trusting that it will never be null, and likewise you can usually feel safe trusting that it is an int (JsArray.length, for example), but these are typically the exceptions. To retain the ability for your own Java to know the difference between these types, GWT has to let them actually behave like real Java classes, and have a notion of their own type.
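As a sketch of the kind of null and bounds checking that paragraph suggests (plain Java; the helper name is mine, purely illustrative): accept a nullable Double from the JS side and decide whether it is safe to treat as an int:

```java
public class BoundsCheck {
    // Hypothetical helper: validate a value that arrived from JS as a Double
    static int toIntOrThrow(Double value) {
        if (value == null) {
            throw new IllegalArgumentException("unexpected null from JS");
        }
        double d = value;
        // Reject NaN, fractional values, and values outside the int range
        if (d != Math.rint(d) || d < Integer.MIN_VALUE || d > Integer.MAX_VALUE) {
            throw new IllegalArgumentException("not a valid int: " + d);
        }
        return (int) d;
    }

    public static void main(String[] args) {
        System.out.println(toIntOrThrow(42.0)); // prints 42
    }
}
```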
--
Getting distracted here from the main answer, but how does String work? GWT is able to special-case String, but it ends up having to also special-case CharSequence, Comparable, and Serializable, since it is possible that you could pass a String in from JS, assign it to a field of type CharSequence, then do an instanceof check against Comparable. For this reason, each of those types has a special case - if the instance actually implements the interface, the instanceof will pass, or if the instance is a plain JS string, it also will pass. Special-casing is required in Object.equals, hashCode, and getClass() as well to support String, so that two Object fields that happen to both be Strings know how to check their types. Going back now to the question at hand, what if ((Object) zeroFromJS) instanceof Integer and ((Object) zeroFromJS) instanceof Double were both true? What would ((Object) zeroFromJS).getClass() return?

Related

Bug? JsNumber toFixed returns different values in SuperDev and JS

I'm using GWT 2.8.2.
When I run the following code in SuperDev mode, it logs 123.456, which is what I expect.
double d = 123.456789;
JsNumber num = Js.cast(d);
console.log(num.toFixed(3));
When I compile to JavaScript and run, it logs 123 (i.e. it does not show the decimal places).
I have tried running the code on Android Chrome, Windows Chrome and Windows Firefox. They all exhibit the same behavior.
Any idea why there is a difference and is there anything I can do about it?
Update: After a bit more digging, I've found that it's to do with the coercion of the integer parameter.
console.log(num.toFixed(3)); // 123 (wrong)
console.log(num.toFixed(3d)); // 123.456 (correct)
It seems that the JsNumber class in Elemental2 has defined the signature as:
public native String toFixed(Object digits);
I think it should be:
public native String toFixed(int digits);
I'm still not sure why it works during SuperDev mode and not when compiled though.
Nice catch! This appears to be a bug in the jsinterop-generator configuration used when generating Elemental2's sources. Since JS doesn't have a way to say that a number is either an integer or a floating point value, the source material that jsinterop-generator works with can't accurately describe what that argument needs to be.
Usually, the fix is to add an entry to integer_entities.txt (https://github.com/google/elemental2/blob/master/java/elemental2/core/integer_entities.txt), so that the generator knows that this parameter can only be an integer. However, when I made this change, the generator didn't act on the new line, and logged that fact. It turns out that it only makes this change when the parameter is a number of some kind, which Object clearly isn't.
The proper fix also then is probably to fix the externs that are used to describe what "JsNumber.toFixed" is supposed to take as an argument. The spec says that this can actually take some non-number value and after converting to a number, doesn't even need to be an integer (see https://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5 and https://www.ecma-international.org/ecma-262/5.1/#sec-9.3).
So, instead we need to be sure to pass whatever literal value the Java developer provides to the function, so that it is interpreted correctly within JS - this means that the argument needs to be annotated with @DoNotAutobox. Or, we could clarify this to say that the argument can be either Object or Number, and toFixed(Object) will still be emitted, but now there will be a numeric version too.
Alternatively, you can work around this as you have done, or by providing a string value of the number of digits you want:
console.log(num.toFixed("3"));
Filed as https://github.com/google/elemental2/issues/129
The problem is that Java automatically wraps the int as an Integer, and GWT ends up transpiling the boxed Integer as a special object in JS (not a number). But if you use a double, the boxed Double is transpiled as a native number by GWT and the problem disappears.
I'm not absolutely sure why this works in super-devmode, but it should not. I think the difference is that SDM maps the native toString to the Java toString, and (even weirder) the native toFixed calls the toString of its argument. In SDM the boxed Integer's toString returns the string representation of the number, which ends up being coerced back to an int, but in production the boxed Integer's toString returns "[object Object]", which is handled as NaN.
There is a special annotation, @DoNotAutobox, to enable using primitive integers in JS native APIs. It prevents the integer auto-wrap, so the int is transpiled to a native number (see the example usage in the Js#coerceToInt method). Elemental2 might add this annotation or change the type to int as you suggest. Please create an issue in the elemental2 repo to fix this (https://github.com/google/elemental2/issues/new).

Working with opaque types (Char and Long)

I'm trying to export a Scala implementation of an algorithm for use in JavaScript. I'm using @JSExport. The algorithm works with Scala Char and Long values, which are marked as opaque in the interoperability guide.
I'd like to know (a) what this means; and (b) what the recommendation is for dealing with this.
I presume it means I should avoid Char and Long and work with String plus a run-time check on length (or perhaps use a shapeless Sized collection) and Int instead.
But other ideas welcome.
More detail...
The kind of code I'm looking at is:
@JSExport("Foo")
class Foo(val x: Int) {
  @JSExport("add")
  def add(n: Int): Int = x + n
}
...which works just as expected: new Foo(1).add(2) produces 3.
Replacing the types with Long, the same call reports:
java.lang.ClassCastException: 1 is not an instance of scala.scalajs.runtime.RuntimeLong (and something similar with methods that take and return Char).
Being opaque means that
There is no corresponding JavaScript type
There is no way to create a value of that type from JavaScript (except if there is an @JSExported constructor)
There is no way of manipulating a value of that type (other than calling @JSExported methods and fields)
It is still possible to receive a value of that type from Scala.js code, pass it around, and give it back to Scala.js code. It is also always possible to call .toString(), because java.lang.Object.toString() is @JSExported. Besides toString(), neither Char nor Long export anything, so you can't do anything else with them.
Hence, as you have experienced, a JavaScript 1 cannot be used as a Scala.js Long, because it's not of the right type. Neither is 'a' a valid Char (but it's a valid String).
Therefore, as you have inferred yourself, you must indeed avoid opaque types, and use other types instead if you need to create/manipulate them from JavaScript. The Scala.js side can convert back and forth using the standard tools in the language, such as someChar.toInt and someInt.toChar.
The choice of which type is best depends on your application. For Char, it could be Int or String. For Long, it could be String, a pair of Ints, or possibly even Double if the possible values never use more than 52 bits of precision.
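If you take the Double route for a Long, a guard like the following keeps you inside the exactly-representable range (expressed here in plain Java for illustration; the helper name is mine):

```java
public class SafeLong {
    static final long MAX_SAFE = 1L << 53; // doubles are exact up to 2^53

    // Hypothetical helper: only convert when a double can hold the value exactly
    static double toDoubleExact(long v) {
        if (v > MAX_SAFE || v < -MAX_SAFE) {
            throw new ArithmeticException("loses precision as a double: " + v);
        }
        return (double) v;
    }

    public static void main(String[] args) {
        // Round-trips without loss because the value fits in 53 bits
        System.out.println((long) toDoubleExact(1234567890123L)); // prints 1234567890123
    }
}
```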

Everything's an object in Scala

I am new to Scala and heard a lot that everything is an object in Scala. What I don't get is what's the advantage of "everything's an object"? What are things that I cannot do if everything is not an object? Examples are welcome. Thanks
The advantage of having "everything" be an object is that you have far fewer cases where abstraction breaks.
For example, methods are not objects in Java. So if I have two strings, I can write:
String s1 = "one";
String s2 = "two";
static String caps(String s) { return s.toUpperCase(); }
caps(s1); // Works
caps(s2); // Also works
So we have abstracted away string identity in our operation of making something upper case. But what if we want to abstract away the identity of the operation--that is, we do something to a String that gives back another String but we want to abstract away what the details are? Now we're stuck, because methods aren't objects in Java.
In Scala, methods can be converted to functions, which are objects. For instance:
def stringop(s: String, f: String => String) = if (s.length > 0) f(s) else s
stringop(s1, _.toUpperCase)
stringop(s2, _.toLowerCase)
Now we have abstracted the idea of performing some string transformation on nonempty strings.
And we can make lists of the operations and such and pass them around, if that's what we need to do.
There are other less essential cases (object vs. class, primitive vs. not, value classes, etc.), but the big one is collapsing the distinction between method and object so that passing around and abstracting over functionality is just as easy as passing around and abstracting over data.
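For what it's worth, the Java side of this comparison has narrowed since Java 8: methods can now be passed around as java.util.function.Function objects. A rough Java analogue of the Scala stringop example above:

```java
import java.util.function.Function;

public class StringOp {
    // Same idea as the Scala stringop: apply f only to nonempty strings
    static String stringop(String s, Function<String, String> f) {
        return s.length() > 0 ? f.apply(s) : s;
    }

    public static void main(String[] args) {
        // Method references let us abstract over the operation itself
        System.out.println(stringop("one", String::toUpperCase)); // prints ONE
        System.out.println(stringop("TWO", String::toLowerCase)); // prints two
    }
}
```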
The advantage is that you don't have different operators that follow different rules within your language. For example, in Java, operations involving objects use the dot-name technique of calling code (static methods use the dot-name technique too, though sometimes the this object or the enclosing class is inferred), while built-in items (not objects) use a different mechanism: built-in operator manipulation.
Number one = Integer.valueOf(1);
Number two = Integer.valueOf(2);
Number three = one.plus(two); // if only such methods existed.
int one = 1;
int two = 2;
int three = one + two;
The main difference is that the dot-name technique is subject to polymorphism, operator overloading, method hiding, and all the good stuff that you can do with Java objects. The + technique is predefined and completely inflexible.
Scala circumvents the inflexibility of the + approach by basically handling it as a dot-name operator, defining a strong one-to-one mapping of such operators to object methods. Hence, in Scala, everything really is an object, so the operation
5 + 7
results in two objects being created (a 5 object and a 7 object), the + method of the 5 object being called with the parameter 7 (if my Scala memory serves me correctly), and a 12 object being returned as the value of the 5 + 7 operation.
This "everything is an object" has a lot of benefits in a functional programming environment; for example, blocks of code are now objects too, making it possible to pass blocks of code (without names) back and forth as parameters, yet still be bound to strict type checking (the block of code only returns a Long, or a subclass of String, or whatever).
When it boils down to it, it makes some kinds of solutions very easy to implement, and often the inefficiencies are mitigated by the lack of need to handle "move into primitives, manipulate, move out of primitives" marshalling code.
One specific advantage that comes to my mind (since you asked for examples) is what in Java are primitive types (int, boolean ...) , in Scala are objects that you can add functionality to with implicit conversions. For example, if you want to add a toRoman method to ints, you could write an implicit class like:
implicit class RomanInt(i: Int) {
  def toRoman = //some algorithm to convert i to a Roman representation
}
Then, you could call this method on any Int literal, like:
val romanFive = 5.toRoman // V
This way you can 'pimp' basic types to adapt them to your needs
In addition to the points made by others, I always emphasize that the uniform treatment of all values in Scala is in part an illusion. For the most part it is a very welcome illusion. And Scala is very smart to use real JVM primitives as much as possible and to perform automatic transformations (usually referred to as boxing and unboxing) only as much as necessary.
However, if the dynamic pattern of application of automatic boxing and unboxing is very high, there can be undesirable costs (both memory and CPU) associated with it. This can be partially mitigated with the use of specialization, which creates special versions of generic classes when particular type parameters are of (programmer-specified) primitive types. This avoids boxing and unboxing but comes at the cost of more .class files in your running application.
Not everything is an object in Scala, though more things are objects in Scala than their analogues in Java.
The advantage of objects is that they're bags of state which also have some behavior coupled with them. With the addition of polymorphism, objects give you ways of changing the implicit behavior and state. Enough with the poetry, let's go into some examples.
The if statement is not an object, in either Scala or Java. If it were, you would be able to subclass it, inject another dependency in its place, and use it to do stuff like logging to a file any time your code makes use of the if statement. Wouldn't that be magical? In some cases it would help you debug stuff, and in other cases it would make your hair turn white before you found a bug caused by someone overriding the behavior of if.
Visiting an objectless, statementful world: Imagine your favorite OOP programming language. Think of the standard library it provides. There are plenty of classes there, right? They offer ways for customization, right? They take parameters that are other objects; they create other objects. You can customize all of these. You have polymorphism. Now imagine that the whole standard library was simply keywords. You wouldn't be able to customize nearly as much, because you can't override keywords. You'd be stuck with whatever cases the language designers decided to implement, and you'd be helpless in customizing anything there. Such languages exist; you know them well: the SQL-like languages. You can barely create functions there, and in order to customize the behavior of the SELECT statement, new versions of the language had to appear which included the most-desired features. This would be an extreme world, where you'd only be able to program by asking the language designers for new features (which you might not get, because someone else more important would require some feature incompatible with what you want).
In conclusion, NOT everything is an object in scala: Classes, expressions, keywords and packages surely aren't. More things however are, like functions.
What's IMHO a nice rule of thumb is that more objects equals more flexibility
P.S. In Python, for example, even more things are objects (like classes themselves, and the analogous concept for packages, that is, Python modules and packages). You'd see how black magic is easier to do there, and that brings both good and bad consequences.

Why is String different than Int,Boolean,Byte... in scala?

Because I know a little bit of Java, I was trying to use in all my Scala code some Java types like java.lang.Integer, java.lang.Character, java.lang.Boolean ... and so on. Now people told me: "No! For everything in Scala there is its own type; the Java stuff will work, but you should always prefer the Scala types and objects."
Ok, now I see there is everything in Scala that is in Java. I'm not sure why it is better to use, for example, Scala's Boolean instead of Java's Boolean, but fine. If I look at the types I see scala.Boolean, scala.Int, scala.Byte ... and then I look at String, but it's not scala.String (well, it's not even java.lang.String, confusingly); it's just String. But I thought I should use everything that comes directly from scala. Maybe I do not understand Scala correctly; could somebody explain it please?
First, the "well it's not even java.lang.String" statement is not quite correct. The plain String name comes from a type alias defined in the Predef object:
type String = java.lang.String
and the insides of Predef are imported in every Scala source file, hence you can use String instead of the full java.lang.String, but in fact they are the same.
java.lang.String is a very special class, treated by the JVM in a special way. As @pagoda_5b said, it is declared final, so it is not possible to extend it (and this is good, in fact), so the Scala library provides a wrapper (RichString) with additional operations and an implicit conversion String -> RichString available by default.
However, the situation is slightly different with Integer, Character, Boolean, etc. You see, even though String is treated specially by the JVM, it is still a plain class whose instances are plain objects. Semantically it is no different from, say, the List class.
It is another situation with primitive types. Java's int, char, and boolean types are not classes, and values of these types are not objects. But Scala is a fully object-oriented language; there are no primitive types. It would be possible to use java.lang.{Integer,Boolean,...} everywhere you need the corresponding types, but this would be awfully inefficient because of boxing.
Because of this, Scala needed a way to present Java primitive types in an object-oriented setting, and so the scala.{Int,Boolean,...} classes were introduced. These types are treated specially by the Scala compiler - scalac generates code working with primitives when it encounters one of these classes. They also extend the AnyVal class, which prevents you from using null as a value for these types. This approach solves the efficiency problem, leaves the java.lang.{Integer,Boolean,...} classes available where you really need boxing, and also provides an elegant way to use the primitives of another host system (e.g. the .NET runtime).
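The boxing cost mentioned above also brings identity surprises on the Java side, which is part of why Scala hides the wrappers. A small plain-Java demo (nothing Scala-specific, just java.lang.Integer's autoboxing behavior):

```java
public class BoxingDemo {
    public static void main(String[] args) {
        // Small values (-128..127) are cached by Integer.valueOf; larger ones are not
        Integer a = 127, b = 127;
        Integer c = 128, d = 128;
        System.out.println(a == b);      // prints true  (same cached object)
        System.out.println(c == d);      // prints false (two distinct boxes)
        System.out.println(c.equals(d)); // prints true  (value equality still holds)
    }
}
```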
I'm just guessing here
If you look at the docs, you can see that the Scala versions of the primitives give you all the expected operators that work on numeric or boolean types, and sensible conversions, without resorting to boxing/unboxing as with the java.lang wrappers.
I think this choice was made to give uniform and natural access to what was expected of primitive types, while at the same time making them Objects as any other scala type.
I suppose that java.lang.String required a different approach, being an Object already, and final in its implementation. So the "path of least pain" was to create an implicit Rich wrapper around it to get missing operations on String, while leaving the rest untouched.
To see it another way, java.lang.String was already good enough as-is, being immutable and all.
It's worth mentioning that the other "primitive" types in Scala have their own Rich wrappers that provide additional sensible operations.

What is the difference between a strongly typed language and a statically typed language?

Also, does one imply the other?
What is the difference between a strongly typed language and a statically typed language?
A statically typed language has a type system that is checked at compile time by the implementation (a compiler or interpreter). The type check rejects some programs, and programs that pass the check usually come with some guarantees; for example, the compiler guarantees not to use integer arithmetic instructions on floating-point numbers.
There is no real agreement on what "strongly typed" means, although the most widely used definition in the professional literature is that in a "strongly typed" language, it is not possible for the programmer to work around the restrictions imposed by the type system. This term is almost always used to describe statically typed languages.
Static vs dynamic
The opposite of statically typed is "dynamically typed", which means that
Values used at run time are classified into types.
There are restrictions on how such values can be used.
When those restrictions are violated, the violation is reported as a (dynamic) type error.
For example, Lua, a dynamically typed language, has a string type, a number type, and a Boolean type, among others. In Lua every value belongs to exactly one type, but this is not a requirement for all dynamically typed languages. In Lua, it is permissible to concatenate two strings, but it is not permissible to concatenate a string and a Boolean.
Strong vs weak
The opposite of "strongly typed" is "weakly typed", which means you can work around the type system. C is notoriously weakly typed because any pointer type is convertible to any other pointer type simply by casting. Pascal was intended to be strongly typed, but an oversight in the design (untagged variant records) introduced a loophole into the type system, so technically it is weakly typed.
Examples of truly strongly typed languages include CLU, Standard ML, and Haskell. Standard ML has in fact undergone several revisions to remove loopholes in the type system that were discovered after the language was widely deployed.
What's really going on here?
Overall, it turns out to be not that useful to talk about "strong" and "weak". Whether a type system has a loophole is less important than the exact number and nature of the loopholes, how likely they are to come up in practice, and what are the consequences of exploiting a loophole. In practice, it's best to avoid the terms "strong" and "weak" altogether, because
Amateurs often conflate them with "static" and "dynamic".
Apparently "weak typing" is used by some people to talk about the relative prevalence or absence of implicit conversions.
Professionals can't agree on exactly what the terms mean.
Overall you are unlikely to inform or enlighten your audience.
The sad truth is that when it comes to type systems, "strong" and "weak" don't have a universally agreed on technical meaning. If you want to discuss the relative strength of type systems, it is better to discuss exactly what guarantees are and are not provided.
For example, a good question to ask is this: "is every value of a given type (or class) guaranteed to have been created by calling one of that type's constructors?" In C the answer is no. In CLU, F#, and Haskell it is yes. For C++ I am not sure—I would like to know.
By contrast, static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors".
Does one imply the other?
On a pedantic level, no, because the word "strong" doesn't really mean anything. But in practice, people almost always do one of two things:
They (incorrectly) use "strong" and "weak" to mean "static" and "dynamic", in which case they (incorrectly) are using "strongly typed" and "statically typed" interchangeably.
They use "strong" and "weak" to compare properties of static type systems. It is very rare to hear someone talk about a "strong" or "weak" dynamic type system. Except for FORTH, which doesn't really have any sort of a type system, I can't think of a dynamically typed language where the type system can be subverted. Sort of by definition, those checks are built into the execution engine, and every operation gets checked for sanity before being executed.
Either way, if a person calls a language "strongly typed", that person is very likely to be talking about a statically typed language.
This is often misunderstood so let me clear it up.
Static/Dynamic Typing
Static typing is where the type is bound to the variable. Types are checked at compile time.
Dynamic typing is where the type is bound to the value. Types are checked at run time.
So in Java for example:
String s = "abcd";
s will "forever" be a String. During its life it may point to different Strings (since s is a reference in Java). It may have a null value but it will never refer to an Integer or a List. That's static typing.
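A compact way to see this in code, with the rejected assignments kept as comments since they would stop compilation:

```java
public class StaticTypingDemo {
    public static void main(String[] args) {
        String s = "abcd";
        s = "efgh";  // fine: still a String
        s = null;    // fine: null is assignable to any reference type
        // s = 123;                               // compile-time error: int is not a String
        // s = new java.util.ArrayList<Integer>(); // compile-time error: not a String either
        System.out.println(s); // prints null
    }
}
```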
In PHP:
$s = "abcd"; // $s is a string
$s = 123; // $s is now an integer
$s = array(1, 2, 3); // $s is now an array
$s = new DOMDocument; // $s is an instance of the DOMDocument class
That's dynamic typing.
Strong/Weak Typing
(Edit alert!)
Strong typing is a phrase with no widely agreed upon meaning. Most programmers who use this term to mean something other than static typing use it to imply that there is a type discipline that is enforced by the compiler. For example, CLU has a strong type system that does not allow client code to create a value of abstract type except by using the constructors provided by the type. C has a somewhat strong type system, but it can be "subverted" to a degree because a program can always cast a value of one pointer type to a value of another pointer type. So for example, in C you can take a value returned by malloc() and cheerfully cast it to FILE*, and the compiler won't try to stop you—or even warn you that you are doing anything dodgy.
(The original answer said something about a value "not changing type at run time". I have known many language designers and compiler writers and have not known one that talked about values changing type at run time, except possibly some very advanced research in type systems, where this is known as the "strong update problem".)
Weak typing implies that the compiler does not enforce a typing discipline, or perhaps that enforcement can easily be subverted.
The original of this answer conflated weak typing with implicit conversion (sometimes also called "implicit promotion"). For example, in Java:
String s = "abc" + 123; // "abc123";
This code is an example of implicit promotion: 123 is implicitly converted to a string before being concatenated with "abc". It can be argued that the Java compiler rewrites that code as:
String s = "abc" + Integer.toString(123);
Consider a classic PHP "starts with" problem:
if (strpos('abcdef', 'abc') == false) {
    // not found
}
The error here is that strpos() returns the index of the match, which is 0 here. 0 is coerced into the boolean false, and thus the condition is actually true. The solution is to use === instead of == to avoid the implicit conversion.
This example illustrates how a combination of implicit conversion and dynamic typing can lead programmers astray.
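Notably, Java's static typing catches the equivalent mistake at compile time: String.indexOf also returns 0 for a match at the start, but comparing that int to a boolean will not compile, so this particular bug cannot even be written. A sketch:

```java
public class StartsWithDemo {
    public static void main(String[] args) {
        int pos = "abcdef".indexOf("abc");
        System.out.println(pos); // prints 0: found at the start, just like strpos()
        // if (pos == false) { ... }  // compile-time error in Java:
        //                            // "incomparable types: int and boolean"
        System.out.println("abcdef".startsWith("abc")); // prints true
    }
}
```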
Compare that to Ruby:
val = "abc" + 123
which is a runtime error because in Ruby the object 123 is not implicitly converted just because it happens to be passed to a + method. In Ruby the programmer must make the conversion explicit:
val = "abc" + 123.to_s
Comparing PHP and Ruby is a good illustration here. Both are dynamically typed languages but PHP has lots of implicit conversions and Ruby (perhaps surprisingly if you're unfamiliar with it) doesn't.
Static/Dynamic vs Strong/Weak
The point here is that the static/dynamic axis is independent of the strong/weak axis. People confuse them probably in part because strong vs weak typing is not only less clearly defined, there is no real consensus on exactly what is meant by strong and weak. For this reason strong/weak typing is far more of a shade of grey rather than black or white.
So to answer your question: another way to look at this that's mostly correct is to say that static typing is compile-time type safety and strong typing is runtime type safety.
The reason for this is that variables in a statically typed language have a type that must be declared and can be checked at compile time. A strongly-typed language has values that have a type at run time, and it's difficult for the programmer to subvert the type system without a dynamic check.
But it's important to understand that a language can be Static/Strong, Static/Weak, Dynamic/Strong or Dynamic/Weak.
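"Runtime type safety" in a Static/Strong language looks like this in Java: a bad downcast gets past the compiler (the static type is Object), but the runtime check refuses it:

```java
public class RuntimeCheckDemo {
    public static void main(String[] args) {
        Object o = "hello";              // static type Object, runtime type String
        try {
            Integer i = (Integer) o;     // compiles, but the runtime check fails
            System.out.println(i);
        } catch (ClassCastException e) {
            System.out.println("rejected at run time"); // this branch runs
        }
    }
}
```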
Both are poles on two different axis:
strongly typed vs. weakly typed
statically typed vs. dynamically typed
Strongly typed means a variable will not be automatically converted from one type to another. Weakly typed is the opposite: Perl can use a string like "123" in a numeric context by automatically converting it into the int 123. A strongly typed language like Python will not do this.
Statically typed means, the compiler figures out the type of each variable at compile time. Dynamically typed languages only figure out the types of variables at runtime.
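To make the strong-typing half concrete, here is a short Python sketch of the refusal to convert implicitly, and the explicit conversion the programmer must write instead:

```python
x = "123"

# Python refuses the implicit string-to-number conversion Perl performs:
try:
    y = x + 1
except TypeError as e:
    print("rejected:", e)

# The conversion must be explicit:
print(int(x) + 1)  # 124
```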
Strongly typed means that there are restrictions between conversions between types.
Statically typed means that the types are not dynamic: you cannot change the type of a variable once it has been created.
The answer is already given above; this is an attempt to differentiate between the strong vs. weak and static vs. dynamic concepts.
What is Strongly typed VS Weakly typed?
Strongly Typed: Will not be automatically converted from one type to another
In strongly typed languages like Go or Python, "2" + 8 will raise a type error, because they don't allow "type coercion".
Weakly (Loosely) Typed: Will be automatically converted from one type to another:
Weakly typed languages like JavaScript or Perl won't throw an error; in this case JavaScript results in '28' and Perl results in 10.
Perl Example:
my $a = "2" + 8;
print $a,"\n";
Save it as main.pl and run perl main.pl; you will get the output 10.
What is Static VS Dynamic type?
In programming, static typing and dynamic typing are defined by the point at which variable types are checked. Statically typed languages are those in which type checking is done at compile time, whereas dynamically typed languages are those in which type checking is done at run time.
Static: Types checked before run-time
Dynamic: Types checked on the fly, during execution
What does this mean?
Go checks types before run time (a static check). This means it does not only translate and type-check the code it is executing; it scans through all the code, and a type error is thrown before the code is even run. For example,
package main

import "fmt"

func foo(a int) {
	if a > 0 {
		fmt.Println("I am feeling lucky (maybe).")
	} else {
		fmt.Println("2" + 8)
	}
}

func main() {
	foo(2)
}
Save this file as main.go and run it; you will get a compilation failure:
go run main.go
# command-line-arguments
./main.go:9:25: cannot convert "2" (type untyped string) to type int
./main.go:9:25: invalid operation: "2" + 8 (mismatched types string and int)
But this is not the case for Python. For example, the following block of code will execute for the first foo(2) call and fail for the second foo(0) call. This is because Python is dynamically typed: it only translates and type-checks the code it is executing. The else block never executes for foo(2), so "2" + 8 is never even looked at; the foo(0) call tries to execute that block and fails.
def foo(a):
    if a > 0:
        print 'I am feeling lucky.'
    else:
        print "2" + 8
foo(2)
foo(0)
You will see the following output:
python main.py
I am feeling lucky.
Traceback (most recent call last):
  File "main.py", line 7, in <module>
    foo(0)
  File "main.py", line 5, in foo
    print "2" + 8
TypeError: cannot concatenate 'str' and 'int' objects
Data coercion does not necessarily mean weakly typed, because sometimes it's syntactic sugar:
The example above of Java being weakly typed because of
String s = "abc" + 123;
is not an example of weak typing, because it is really doing:
String s = "abc" + new Integer(123).toString();
Data coercion is also not weakly typed if you are constructing a new object.
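A rough Python parallel to the Java case: Python refuses the implicit coercion outright, so the programmer writes out by hand exactly the conversion that Java's compiler inserts behind the scenes. In both cases a new string object is explicitly constructed, which is the point above:

```python
# "abc" + 123 raises TypeError in Python; the conversion Java
# performs implicitly must be spelled out:
s = "abc" + str(123)
print(s)  # abc123
```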
Java is a very bad example of weak typing (and any language that has good reflection will most likely not be weakly typed), because the runtime of the language always knows what the type is (the exception might be native types).
This is unlike C. C is one of the best examples of a weakly typed language: the runtime has no idea whether 4 bytes is an integer, a struct, a pointer or 4 characters.
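A sketch of what that means in practice, emulated in Python with the standard struct module. The struct calls are only an analogy: Python forces the reinterpretation to be explicit and byte-level, whereas in C a pointer cast does the same thing silently, precisely because the runtime attaches no type to the bytes:

```python
import struct

# Pack an int into its 4 raw bytes, then read those same bytes
# back as an IEEE 754 float. In C this is just a cast; the runtime
# cannot tell the difference.
raw = struct.pack("<i", 1078530011)      # the int whose bit pattern is float pi
as_float = struct.unpack("<f", raw)[0]
print(round(as_float, 5))  # 3.14159
```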
The runtime of the language really defines whether or not it's weakly typed; otherwise it's really just opinion.
EDIT:
After further thought, this is not necessarily true, as the runtime does not have to have all the types reified in the runtime system for it to be a strongly typed system.
Haskell and ML have such complete static analysis that they can potentially omit type information from the runtime.
One does not imply the other. For a language to be statically typed it means that the types of all variables are known or inferred at compile time.
A strongly typed language does not allow you to use one type as another. C is a weakly typed language and is a good example of what strongly typed languages don't allow. In C you can pass a data element of the wrong type and it will not complain. In strongly typed languages you cannot.
Strong typing probably means that variables have a well-defined type and that there are strict rules about combining variables of different types in expressions. For example, if A is an integer and B is a float, then the strict rule about A+B might be that A is cast to a float and the result returned as a float. If A is an integer and B is a string, then the strict rule might be that A+B is not valid.
Static typing probably means that types are assigned at compile time (or its equivalent for non-compiled languages) and cannot change during program execution.
Note that these classifications are not mutually exclusive, indeed I would expect them to occur together frequently. Many strongly-typed languages are also statically-typed.
And note that when I use the word 'probably' it is because there are no universally accepted definitions of these terms. As you will already have seen from the answers so far.
Imho, it is better to avoid these definitions altogether. Not only is there no agreed-upon definition of the terms, but the definitions that do exist tend to focus on technical aspects, for example: are operations on mixed types allowed, and if not, is there a loophole that bypasses the restrictions, such as working your way around them using pointers?
Instead, and emphasizing again that it is an opinion, one should focus on the question: Does the type system make my application more reliable? A question which is application specific.
For example: if my application has a variable named acceleration, then clearly if the way the variable is declared and used allows the assignment of the value "Monday" to acceleration, it is a problem, as clearly an acceleration cannot be a weekday (or a string).
Another example: in Ada one can define subtype Month_Day is Integer range 1..31;. The type Month_Day is weak in the sense that it is not a separate type from Integer (because it is a subtype), but it is restricted to the range 1..31. In contrast, type Month_Day is new Integer; will create a distinct type, which is strong in the sense that it cannot be mixed with integers without explicit casting - but it is not restricted and can receive the value -17, which is senseless. So technically it is stronger, but less reliable.
Of course, one can declare type Month_Day is new Integer range 1..31; to create a type which is distinct and restricted.
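Ada isn't available in every shop, but the distinct-and-restricted idea can be sketched in Python with a small validating class. The class name and API here are purely illustrative, not a standard library facility:

```python
class MonthDay:
    """A distinct, range-restricted day-of-month value: a rough
    analogue of Ada's `type Month_Day is new Integer range 1..31;`."""

    def __init__(self, value):
        if not 1 <= value <= 31:
            raise ValueError("%r is not a valid day of month" % (value,))
        self.value = value

d = MonthDay(17)       # fine
print(d.value)         # 17
# MonthDay(-17)        # would raise ValueError: the senseless value is rejected
```

Being a separate class, a MonthDay also cannot be silently mixed with plain ints, which mirrors the "distinct type" half of the Ada declaration; the range check in the constructor mirrors the "restricted" half.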