Working with opaque types (Char and Long) - scala.js

I'm trying to export a Scala implementation of an algorithm for use in JavaScript, using @JSExport. The algorithm works with Scala Char and Long values, which are marked as opaque in the interoperability guide.
I'd like to know (a) what this means; and (b) what the recommendation is for dealing with this.
I presume it means I should avoid Char and Long and work with String plus a run-time check on length (or perhaps use a shapeless Sized collection) and Int instead.
But other ideas welcome.
More detail...
The kind of code I'm looking at is:
@JSExport("Foo")
class Foo(val x: Int) {
  @JSExport("add")
  def add(n: Int): Int = x + n
}
...which works just as expected: new Foo(1).add(2) produces 3.
Replacing the Int types with Long, the same call reports:
java.lang.ClassCastException: 1 is not an instance of scala.scalajs.runtime.RuntimeLong (and something similar with methods that take and return Char).

Being opaque means that
There is no corresponding JavaScript type
There is no way to create a value of that type from JavaScript (except if there is an @JSExported constructor)
There is no way of manipulating a value of that type (other than calling @JSExported methods and fields)
It is still possible to receive a value of that type from Scala.js code, pass it around, and give it back to Scala.js code. It is also always possible to call .toString(), because java.lang.Object.toString() is @JSExported. Besides toString(), neither Char nor Long exports anything, so you can't do anything else with them.
Hence, as you have experienced, a JavaScript 1 cannot be used as a Scala.js Long, because it's not of the right type. Neither is 'a' a valid Char (but it's a valid String).
Therefore, as you have inferred yourself, you must indeed avoid opaque types, and use other types instead if you need to create/manipulate them from JavaScript. The Scala.js side can convert back and forth using the standard tools in the language, such as someChar.toInt and someInt.toChar.
The choice of which type is best depends on your application. For Char, it could be Int or String. For Long, it could be String, a pair of Ints, or possibly even Double if the possible values never use more than 52 bits of precision.
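For example, here is a minimal sketch of the String-based route, using the same @JSExport style as above; the LongBox name and the decision to round-trip everything through String are illustrative, not prescribed:
import scala.scalajs.js.annotation.JSExport

@JSExport("LongBox")
class LongBox(xStr: String) {
  // Parse once on the Scala.js side; JavaScript callers only ever see Strings.
  private val x: Long = xStr.toLong

  @JSExport("add")
  def add(nStr: String): String = (x + nStr.toLong).toString
}
From JavaScript, new LongBox("9007199254740993").add("1") then yields the exact string "9007199254740994", a value a plain JS number cannot represent.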

Related

Do any programming languages provide the ability to name the return value of a function?

Quite commonly while programming I find it necessary to document the value that a function returns. In the Java/Scala world, you often use comments above the function to do this.
However, this can stand out in contrast to the first-class documentation that function parameters get in all languages. For example:
def exponent(base: Int, power: Int): Int
Here we have the signature for a method that raises base to the power power and returns... probably the result of that computation? I know for certain it returns an Int, and it seems quite reasonable to infer that the return value is indeed the result of calculating base ^ power, but in many functions I've written and read it is not possible to infer the return value's semantic meaning quite so easily and you need to study the documentation and/or actually use the method to find out.
Which leads me to wonder, do any languages provide support for optionally declaring a semantic name for the return value?
def exponent(base: Int, power: Int): Int(exitCode)
A hah! Turns out this function actually returns an indication of whether the operation succeeded or failed! Look it is so clear right there in the method signature! My IDE could also intelligently create a variable with the same name when I call this method, a la:
// Typing in IntelliJ
exponent(5, 5)<TAB>
// Autocompletes to:
val exitCode = exponent(5, 5)
I'm not under any illusion that this is some sort of ground-breaking idea, but it seems like it could be generally useful, and I'm struck that I have never seen this concept implemented in any programming language.
Can you name any single programming language that does have this kind of semantic naming of return values?
In APL, for instance, the result of a function is declared as a variable. The function declaration in your example could be written like
exitCode ← base exponent power
in APL. However, a function with no side effects should always be named after the result it returns. If the function can fail, I would use a value that is never returned on success, for instance -1 in this case.

JsInterop for Java primitive object wrapper classes

GWT makes collection classes available via JsInterop out of the box, but the same is not true for Integer, Long, Character and so on (the primitive object wrapper classes).
Is there a reason for that?
In order to make those Java emulated classes available via JsInterop, I needed to copy them into my own source, putting them in the same package as the one in gwt-user, then manually modifying them to use JsType (and fixing all method name clashes, as well as making only one constructor available via JsInterop).
Is there any easier way for achieving that other than doing this?
In order for those "boxed primitives" to behave as correctly as possible while in the Java side of your code, they need to actually be objects, rather than whatever might make sense as a primitive.
Double and Boolean get special treatment (as of GWT 2.6 or so), such that they can pass seamlessly between Java and JS. This makes sense for those types, since they actually are the "same" on both sides in terms of the values that can possibly be assigned: a JS boolean is always nullable, so java.lang.Boolean makes sense, and a JS number is specified to be a nullable 64-bit IEEE 754 floating point number, so it likewise makes sense to be java.lang.Double. But this comes at a cost: any JS number will always pass an instanceof Double check, even if it started its life as a Java int.
In contrast, the other Java primitives have no JS counterpart, and so may even behave weirdly as primitives, much less Objects.
char, byte - outside of a string with a single character in it, JS has no notion of a single character or byte. You can technically use these primitives as long as you take on any precision issues, but supporting their boxed variants doesn't really make sense, since they don't really fit in JS at all.
int, short, float - these look like they make sense to pass from Java to JS as a "number", but if they come back to Java as a number there is the possibility that they would be too big or too precise; without an explicit cast you are just trusting that they make sense. Adding two floats may also give you an unexpected result, since GWT doesn't emulate 32-bit floating point values, but just lets JS treat them as 64-bit values. As with char/byte, it doesn't make sense to treat these like Objects, since they really aren't the same at all.
long/java.lang.Long is an even more special case - it isn't possible in JS (until BigInt arrived, which is still not the same thing) to represent precise integers larger than +/- 2^53, since all numbers in JS are 64-bit floats. Correctly handling Java long values requires emulation, and expensive math, so even primitive longs that pass back and forth to JS risk either losing precision or ending up as an Object in JS (with fields for the "high", "medium", and "low" bits of the full value).
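As a concrete illustration of what that emulation involves, here is a Scala sketch (not GWT's actual implementation, which splits the value into three chunks as the "high"/"medium"/"low" wording above suggests) of representing a 64-bit long as a pair of 32-bit ints:
// Illustrative only: emulate a 64-bit integer as two 32-bit halves.
def split(x: Long): (Int, Int) = ((x >>> 32).toInt, x.toInt)

def join(hi: Int, lo: Int): Long =
  (hi.toLong << 32) | (lo.toLong & 0xFFFFFFFFL)

// join(split(x)._1, split(x)._2) == x for every Long x
Every arithmetic operation on such a representation must be spelled out in terms of the halves, which is exactly the "expensive math" referred to above.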
Consider some example code where you box some simple primitives, and interact with external JS:
@JsMethod
public void takeNumber(Number foo) {
  if (foo instanceof Integer) {
    // ...
  } else if (foo instanceof Double) {
    // ...
  }
}
How can that instanceof work if Integer, Double, etc. are all the equivalent of a JS number? What if this method never exists in JS at all - could you guarantee that it would never be called from any JS method? What if the argument were an Object instead, so that arbitrary JS values could be passed in, and you could test whether it was a String, Double, Integer, etc. and respond accordingly?
What if both Integer and Double were the same when passed in and out of JS for a value like zero - would plain Java zeros be implemented differently in only-Java parts of your program? Would instanceof behave differently in some parts than others, depending on if it were at all possible that JS values could reach them?
--
In order to be the most consistent with how the "outside JS world" behaves, you almost always want to pass Double, Boolean when dealing with these sorts of values - this lets you test for null (JS has no checker to confirm an API isn't surprising you and passing null when it isn't legal), and do any kinds of bounds checking that might be required to see what you should do with the value. For some APIs you can get away with trusting that it will never be null, and likewise you can usually feel safe trusting that it is an int (JsArray.length, for example), but these are typically the exceptions. To retain the ability for your own Java to know the difference between these types, GWT has to let them actually behave like real Java classes, and have a notion of their own type.
--
Getting distracted here from the main answer, but how does String work? GWT is able to special case String, but it ends up having to also special case CharSequence, Comparable, and Serializable, since it is possible that you could pass a String in from JS, assign it to a field of type CharSequence, then do an instanceof check against Comparable. For this reason, each of those types has a special case - if the instance actually implements the interface, the instanceof will pass, or if the instance is a plain JS string, it also will pass. Special casing is required in Object.equals, hashCode, and getClass() as well to support String, so that two Object fields that happen to both be Strings know how to check their types. Going back now to the question at hand, what if ((Object) zeroFromJS) instanceof Integer and ((Object) zeroFromJS) instanceof Double were both true? What would ((Object) zeroFromJS).getClass() return?

Why didn't Scala design around integer overflow?

I am a former Java developer and I have recently watched the insightful and entertaining introduction to Scala for Java developers by professor Venkat Subramaniam (https://www.youtube.com/watch?v=LH75sJAR0hc).
A major point introduced is the elimination of declared types in lieu of "type inference". Presumably, this means the higher-order compiler recognizes the type I intend to use, by the context.
Being an application security expert by trade, the first thing I tried to do is break this type inference... Example:
// declare a function that returns the square of an input Int. The return type is to be inferred.
scala> val square = (x:Int) => x*x
square: Int => Int = <function1>
// I can see the compiler inferred an Int for the output value, which I do not agree with.
scala> square(2147483647)
res1: Int = 1
// integer overflow
My question is why did the compiler not see that "*" is an operator with a threat of overflow, and wrap the inputs in something a little more protective like a BigInteger?
According to the professor, I am supposed to forget about the internal implementation and just get on with my business logic. But after my quick demonstration I'm not so sure that Scala is safe for a programmer who doesn't understand what the compiler is doing with my methods.
I think @rightføld somewhat overstates how often overflows do or don't happen (particularly when considering an attacker who is actively trying to overflow you). But I agree with his basic point. Converting all math to BigInteger would almost certainly have created a massive performance impact over Java. For developers to choose such a language, they'd have to get something visible for that cost.
String objects carry a much smaller performance overhead over cstrings for many operations, and they provide very visible benefits to the developer; that is why people use them, not security per se. There are many common things that string objects make easy compared to cstrings. BigInteger provides none of that: it requires exactly the same code at a fraction of the speed, but just won't overflow (a bug few developers see day to day, even if security folks see it more often).
The equivalent would have been a cstring (with strcmp, strcpy, strcat, etc.) that ran at a fraction of the speed, but just didn't require a null terminator. I don't think many people would have jumped to use that, either, no matter how much that would help security over null-terminated strings. And if the language required it, I don't see a lot of people anxious to use the language.
And as @rightføld suggests in the comments, interoperability with Java would be trashed, since most if not all numbers would wind up being BigInteger. You'd constantly be converting, which raises the same dangers of overflows while adding a lot of code complexity (and more performance impacts).
A from-scratch language might get away with ubiquitous BigInteger (like python) if the language had a lot of other compelling features, but it's a very hard thing to retrofit into a language that wants to be a natural transition from (and with) Java.
In addition to the above answers, I think this question misunderstands the purpose of type inference in a statically typed language. Type inference does not make the choices that you are referring to, such as promoting an Int to a BigInt. It is restricted to simply "inferring" the type of an expression based on the known types of subexpressions at compile time.
The * function in Int returns an Int when supplied with an Int input parameter
def *(x: Int): Int
In this case, since x is declared to be an Int, then x*x must be an Int based on the signature of *.
If we really wanted this behavior, we could define a function that promotes Int to BigInt when multiplying.
implicit class SafeInt(x: Int) {
  def safeMult(a: Int): scala.math.BigInt = scala.math.BigInt(x) * a
}
Then we can define a square with the desired property:
scala> val square = (x: Int) => x safeMult x
square: Int => scala.math.BigInt = <function1>
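With that implicit class in scope, the overflow from the original demonstration disappears (illustrative REPL output; the res numbering will vary):
scala> square(2147483647)
res2: scala.math.BigInt = 4611686014132420609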
The compiler infers based on the methods available. Int has a method *(Int): Int that is, as far as the compiler knows, perfectly well defined; 2147483647*2147483647 is a perfectly good method call with the result 1, it doesn't throw ClassCastException or anything like that.
Why is the Int type written this way? Largely for Java/JVM compatibility; many parts of Scala have design compromises for the sake of Java compatibility. If you don't need that functionality, you might prefer to use Haskell or a similar language. (I suspect that even without the requirement for JVM compatibility, Scala would have wanted to expose the machine-native integer types so that users could make that performance/correctness tradeoff where desired. They might not have been the default though)
If you're doing numeric computation in Scala you probably want to use the Spire library, which makes it easy to abstract over numeric types, and provides several high-performance numeric types with particular properties. In particular it has a SafeLong type that handles arbitrary-precision integers but with much better performance than BigInt for values which fall within the Long range, similar to Python's integer type.
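A minimal sketch of SafeLong in use, assuming Spire is on the classpath (the exact printed form may vary by version):
import spire.math.SafeLong

val big = SafeLong(Long.MaxValue) + SafeLong(1)
// == 9223372036854775808: promoted internally past the Long range, no overflow, no exception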
Because overflow almost never occurs in practice, and BigInteger is slow as a dog compared to Int. It would also be most inconvenient to have all * operations on Ints return BigIntegers.
"Recognizes the type I intend to use" is not an accurate description of what scala tries to do. It infers the most generic type possible given the constraints imposed by the context. Hence if you write List(Nil, "1"), you'll get List[Serializable], because Serializable is an interface that List and String share - disregarding that Serializable was probably not on your mind at all.
The question you're asking could be asked more precisely as "why is Int the type of numeric literals instead of BigInteger?" - inference doesn't have much to do with it.
And we can opine all we want on that topic, but there's one most accurate answer describing why Scala is what it is: "because Java".
If you want the kind of safety that you seem to want, then one approach is to define a function (effectively a partial one) which guards against numeric overflow and returns either an Option[Int] or perhaps an Either[Int, BigInteger].
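One way to sketch that guard (the name safeMult and the choice of Option here are illustrative):
def safeMult(a: Int, b: Int): Option[Int] = {
  val wide = a.toLong * b.toLong   // widen first so the multiply itself cannot overflow
  if (wide.isValidInt) Some(wide.toInt) else None
}

// safeMult(2147483647, 2147483647) == None, rather than a silently wrapped 1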
The type inference for your square function is correct, given that it's inferred from the input types you've specified and the type of the * function. It's not really broken, in my opinion.

Why are primitive types not marked Immutable in Scala?

Can anyone please share insight into the trait "Immutable" in Scala? At first glance I thought this would be a nice control structure to limit a class I'm building, but oddly I noticed that primitive types do not extend it. Is there a reason for this? Is there a way to bind the syntax to Immutable or AnyVal?
class Test {
  def test[T <: Immutable](x: T) = {
    println("passes " + x)
  }
  case class X(s: String) extends Immutable
  test(X("hello")) // passes
  // test("fail") - does not pass the compiler
}
The only direct subtypes of Immutable in the Scala core library are:
collection.immutable.Traversable
collection.parallel.immutable.ParIterable
Nothing else refers to Immutable at all.
Immutable hasn't been changed since it was added in 2009 in Martin Odersky's "massive new collections checkin". I'm searching through that commit, and it looks like Immutable was never even used as a bound when it was first introduced either.
Honestly, I doubt there's much intent behind these traits anymore. Odersky probably planned to use Immutable to bound the type arguments on immutable collections, and then thought better of it. But that's just my speculation.
So-called primitive types (Boolean, Byte, Char, Short, Int, Long, Float, Double) are intrinsically immutable. 5 is 5 is 5. You cannot do anything to 5 to turn it into anything that is not 5.
Otherwise, immutability is a property of how a value is stored. If stored in a var, that var may be replaced freely with a new value (of a compatible type). By extension, constructed types (classes, traits and objects) may be either immutable or mutable depending on whether they allow any of their internal state to be altered following construction.
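A short illustration of that distinction (the names are made up for the example):
var slot = 5   // the binding is mutable: it can be pointed at a new value
slot = 6       // ...but the value 5 itself was never altered

class Counter { private var n = 0; def inc(): Unit = n += 1 }  // mutable internal state
case class Point(x: Int, y: Int)                               // immutable by construction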
Java's String (also used as Scala's String) is immutable.
However, none of this has anything to do with your example, since you did not demonstrate mutability. You simply showed what happens when one applies the + method of one value to another value.
While it is certainly possible that one can implement a + method that mutates its (apparent) left-hand operand, one rarely does that. If there's a need for that kind of mutation, one would conventionally define the += method instead.
+ is somewhat special in that it may be applied to any value if the argument (the right-hand operand) is a String, by virtue of an implicit conversion to a special class that defines +(s: String), so that the string-concatenation interpretation of + can apply. In other words, if you write e1 + "e2" and the type of the expression e1 does not define +, then Scala will convert e1 to String and concatenate it with "e2".
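A small sketch of that conversion in action (Box is a made-up class that defines no + of its own; in Scala 2 the conversion lives in Predef as any2stringadd):
case class Box(n: Int)       // no + method here

val s = Box(1) + " apples"   // compiles anyway: Box(1) is converted to its String form
// s == "Box(1) apples"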

Pass null to a method expects Long

I have a Scala method that takes 2 parameters:
def test(x: Long, y: Int) {}
On some occasions I need to pass null instead of a Long... something like this:
test(null,x)
The result:
scala> test(null, 2)
<console>:7: error: type mismatch;
 found   : Null(null)
 required: Long
       test(null, 2)
Why do I need to pass null?
Actually, for some reason, I can't pass any default values. Thus, I need such a null.
Note: I know that the solution would be making it an Option.
However, let's say I have no control over this method signature; can I do any workaround?
Any ideas?
Thanks.
Null is a subtype of the types which inherit from AnyRef, not of the value types which inherit from AnyVal. This is why you are not able to pass null in. This corresponds to how, in Java, you can't have a null of type long (ignoring the boxed Long type).
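A two-line illustration:
val s: String = null   // compiles: String is an AnyRef, and Null is a subtype of every AnyRef
// val l: Long = null  // does not compile: Long is an AnyVal, outside Null's reach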
However, this is an indication that the signature of the method should be changed to:
def test(x: Option[Long], y: Int)
which indicates that sometimes there is no value for x. Since we have this nice Option class to deal with just this situation, there are few if any valid reasons to use null values, where you are relying on developers remembering to check for null. Instead, with Option, the compiler will force you to handle the fact that the value might not be there.
Since you can't change the signature, consider the mistake of thinking Option[Foo] is the only, or most natural, way to express a missing function argument.
If the param to your function is a lower bound, then Long.MinValue might be a natural default.
If by "for some reason,I can't pass any default values" (whatever that could possibly mean) you mean you can't add defaults to the signature, and you're going the route suggested in another answer of adapting the method, you might as well change f(a,b) to g(b, a=Long.MinValue) or whatever before forwarding.
Instead of making clients of your adaptor method call g(b, None), let them call g(b). You're not passing the Option to the underlying f(a,b) anyway.
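A minimal sketch of that adaptation (f stands in for the untouchable method; the sentinel is illustrative):
def f(a: Long, b: Int): Unit = ()   // the signature we cannot change

def g(b: Int, a: Long = Long.MinValue): Unit = f(a, b)

g(2)            // caller omits a; the sentinel is supplied
g(2, a = 42L)   // caller has a real value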
The way to convert Scala primitives to Java wrapper classes is to use the static valueOf members on the Java primitive wrappers. I had this issue where I needed to convert an Option[Double] to a java.lang.Double or null. This is what I did:
val value: Option[Double] = Some(1.5)   // or None, depending on the data
val orNull: java.lang.Double = value.map(java.lang.Double.valueOf(_)).orNull
Just passing a literal null should work if you are calling a method that accepts java.lang.Long/Double/Integer.
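For instance (javaStyle is a made-up method standing in for such an API):
def javaStyle(x: java.lang.Long, y: Int): Unit = ()

javaStyle(null, 2)   // compiles: java.lang.Long is an AnyRef, so null is a legal argument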