Scodec: Using vectorOfN with a vlong field - scala

I am playing around with the Bitcoin blockchain to learn Scala and some useful libraries.
Currently I am trying to decode and encode Blocks with scodec, and my problem is that the vectorOfN function takes its size as an Int. How can I use a long field for the size while still preserving the full value range?
In other words, is there a vectorOfLongN function?
This is my code which would compile fine if I were using vintL instead of vlongL:
object Block {
  implicit val codec: Codec[Block] = {
    ("header" | Codec[BlockHeader]) ::
      (("numTx" | vlongL) >>:~ { numTx =>
        ("transactions" | vectorOfN(provide(numTx), Codec[Transaction])).hlist
      })
  }.as[Block]
}
You may assume that appropriate codecs for the BlockHeader and the Transactions are implemented. Actually, vlong is used as a simplification for this question, because Bitcoin uses its own codec for variable-sized ints.

I'm not a scodec specialist, but my common sense suggests that this is not possible, because Scala's Vector, being a subtype of GenSeqLike, is limited to a length of type Int and an apply that takes an Int index. AFAIU this limitation comes from the underlying JVM platform, where you can't have an array larger than Integer.MAX_VALUE, i.e. around 2^31 (see also the "Criticism of Java" wiki). And although Vector could theoretically have worked around this limitation, that was not done. So it makes no sense for vectorOfN to support a Long size.
In other words, if you want something like this, you should probably start by creating your own Vector-like class that supports Long indices, working around the JVM limitations.
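A minimal sketch of what that could look like, assuming a chunked layout is acceptable and that every chunk except possibly the last is full (LongSeq is a made-up name, not an existing library class):

// Store the elements in Int-indexed chunks so the sequence as a
// whole can be addressed with a Long index.
final class LongSeq[A](chunks: Vector[Vector[A]]) {
  private val ChunkBits = 20
  private val ChunkSize = 1 << ChunkBits // 2^20 elements per chunk

  def length: Long = chunks.map(_.size.toLong).sum

  def apply(i: Long): A = {
    val chunk  = (i >> ChunkBits).toInt      // which chunk holds index i
    val offset = (i & (ChunkSize - 1)).toInt // position within that chunk
    chunks(chunk)(offset)
  }
}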

You may want to take a look at scodec-stream, which comes in handy when all of your data is not available immediately or does not fit into memory.
Basically, you would use your usual codecs and turn them into a StreamDecoder via scodec.stream.decode.many(normal_codec). This way you can work with the data through scodec without needing to load it into memory entirely.
A StreamDecoder then offers methods like decodeInputStream, alongside scodec's usual decode.
(I used it a while ago in a slightly different context – parsing data sent by a client to a server – but it looks like it would apply here as well).
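For illustration, a rough sketch along those lines, using the names mentioned above (the exact API varies between scodec-stream releases, and the Transaction codec is assumed from the question; blk00000.dat is just a hypothetical input file):

import scodec.Codec
import scodec.stream.decode

// Repeatedly apply the Transaction codec to the incoming bytes.
val txDecoder = decode.many(Codec[Transaction])

// Decode lazily from an InputStream instead of loading everything into memory.
val txs = txDecoder.decodeInputStream(new java.io.FileInputStream("blk00000.dat"))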

Related

Everything's an object in Scala

I am new to Scala and have heard a lot that everything is an object in Scala. What I don't get is the advantage of "everything's an object": what are things that I could not do if everything were not an object? Examples are welcome. Thanks
The advantage of having "everything" be an object is that you have far fewer cases where abstraction breaks.
For example, methods are not objects in Java. So if I have two strings, I can
String s1 = "one";
String s2 = "two";
static String caps(String s) { return s.toUpperCase(); }
caps(s1); // Works
caps(s2); // Also works
So we have abstracted away string identity in our operation of making something upper case. But what if we want to abstract away the identity of the operation--that is, we do something to a String that gives back another String but we want to abstract away what the details are? Now we're stuck, because methods aren't objects in Java.
In Scala, methods can be converted to functions, which are objects. For instance:
def stringop(s: String, f: String => String) = if (s.length > 0) f(s) else s
stringop(s1, _.toUpperCase)
stringop(s2, _.toLowerCase)
Now we have abstracted the idea of performing some string transformation on nonempty strings.
And we can make lists of the operations and such and pass them around, if that's what we need to do.
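For instance, building on stringop above, a collection of anonymous operations is just a collection of values:

val ops: List[String => String] = List(_.toUpperCase, _.toLowerCase, _.reverse)
val results = ops.map(f => stringop("Scala", f)) // List(SCALA, scala, alacS)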
There are other less essential cases (object vs. class, primitive vs. not, value classes, etc.), but the big one is collapsing the distinction between method and object so that passing around and abstracting over functionality is just as easy as passing around and abstracting over data.
The advantage is that you don't have different operators that follow different rules within your language. For example, in Java, operations involving objects use the dot-name technique of calling code (static methods still use the dot-name technique, though sometimes the this object or the class is inferred), while built-in items (non-objects) use a different mechanism: built-in operator manipulation.
Number one = Integer.valueOf(1);
Number two = Integer.valueOf(2);
Number three = one.plus(two); // if only such methods existed.
int one = 1;
int two = 2;
int three = one + two;
The main difference is that the dot-name technique is subject to polymorphism, method overloading, method hiding, and all the good stuff that you can do with Java objects. The + technique is predefined and completely inflexible.
Scala circumvents the inflexibility of the + technique by handling it, in essence, as a dot-name operator, defining a strong one-to-one mapping between such operators and object methods. Hence, in Scala "everything is an object" means that everything really is an object, so the operation
5 + 7
results in two objects being created (a 5 object and a 7 object), the + method of the 5 object being called with the 7 object as its parameter (if my Scala memory serves me correctly), and a 12 object being returned as the value of the 5 + 7 operation.
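In other words, the operator syntax is plain method-call syntax in disguise:

val a = 5 + 7    // 12
val b = (5).+(7) // identical: the + method of the 5 object, applied to 7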
This "everything is an object" property has a lot of benefits in a functional programming environment: for example, blocks of code are now objects too, making it possible to pass blocks of code (without names) back and forth as parameters while still keeping strict type checking (the block of code only returns Long, or a subclass of String, or whatever).
When it boils down to it, it makes some kinds of solutions very easy to implement, and often the inefficiencies are mitigated by the lack of need to handle "move into primitives, manipulate, move out of primitives" marshalling code.
One specific advantage that comes to my mind (since you asked for examples) is that what in Java are primitive types (int, boolean, ...) are in Scala objects that you can add functionality to with implicit conversions. For example, if you want to add a toRoman method to ints, you could write an implicit class like:
implicit class RomanInt(i: Int) {
  def toRoman: String = ??? // some algorithm to convert i to a Roman representation
}
Then, you could call this method on any Int literal, like:
val romanFive = 5.toRoman // V
This way you can 'pimp' basic types to adapt them to your needs
In addition to the points made by others, I always emphasize that the uniform treatment of all values in Scala is partly an illusion. For the most part it is a very welcome illusion, and Scala is very smart to use real JVM primitives as much as possible and to perform the automatic conversions (usually referred to as boxing and unboxing) only as much as necessary.
However, if boxing and unboxing happen very frequently at runtime, there can be undesirable costs (both memory and CPU) associated with them. This can be partially mitigated with specialization, which creates special versions of generic classes when particular type parameters are of (programmer-specified) primitive types. This avoids boxing and unboxing, but comes at the cost of more .class files in your running application.
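For instance, here is a minimal sketch of what specialization looks like at the definition site (Box is a made-up class, not a library type):

// Asks the compiler to also emit Int and Double variants of Box that
// store the value unboxed, at the cost of extra .class files.
class Box[@specialized(Int, Double) T](val value: T)

val b = new Box(42) // resolves to the Int-specialized variant: no boxing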
Not everything is an object in Scala, though more things are objects in Scala than their analogues in Java.
The advantage of objects is that they're bags of state which also have some behavior coupled with them. With the addition of polymorphism, objects give you ways of changing that implicit behavior and state. Enough with the poetry; let's go into some examples.
The if statement is not an object, in either Scala or Java. If it were, you would be able to subclass it, inject another dependency in its place, and use it to do things like logging to a file every time your code used an if statement. Wouldn't that be magical? In some cases it would help you debug; in others it would turn your hair white before you found a bug caused by someone overriding the behavior of if.
Visiting an objectless, statementful world: imagine your favorite OOP language and the standard library it provides. There are plenty of classes there, right? They offer ways for customization, right? They take parameters that are other objects, and they create other objects. You can customize all of these; you have polymorphism. Now imagine that the whole standard library were simply keywords. You wouldn't be able to customize nearly as much, because you can't override keywords. You'd be stuck with whatever cases the language designers decided to implement, and you'd be helpless to customize anything there. Such languages exist, and you know them well: the SQL-like languages. You can barely create functions there, and in order to customize the behavior of the SELECT statement, new versions of the language had to appear that included the most-requested features. This is an extreme world, where you can only program by asking the language designers for new features (which you might not get, because someone more important requires a feature incompatible with what you want).
In conclusion, NOT everything is an object in Scala: classes, expressions, keywords and packages surely aren't. More things are, however, than in Java; functions, for instance.
What's IMHO a nice rule of thumb is that more objects equals more flexibility
P.S. In Python, for example, even more things are objects: the classes themselves, and Python's modules and packages (the analogous concept to packages). You'd see how black magic is easier to do there, and that brings both good and bad consequences.

Scala Buffer: Size or Length?

I am using a mutable Buffer and need to find out how many elements it has.
Both size and length methods are defined, inherited from separate traits.
Is there any actual performance difference, or can they be considered exact synonyms?
They are synonyms, mostly a result of Java's decision to use size for collections and length for Array and String. One will always be defined in terms of the other, and you can easily see which is which by looking at the source code, a link to which is provided in the scaladoc. Just find the defining trait, open the source code, and search for def size or def length.
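For instance, in the 2.x collection library SeqLike defines one in terms of the other (quoted from memory, so treat it as approximate):

override def size = length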
In this case, they can be considered synonyms. You may want to watch out in some other cases, such as Array: whilst length and size will always return the same result, in versions prior to Scala 2.10 there may be a boxing overhead for calling size (which is provided by a Scala wrapper around the Array), whereas length is provided by the underlying Java array.
In Scala 2.10, this overhead has been removed by use of a value class providing the size method, so you should feel free to use whichever method you like.
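A rough sketch of that mechanism, with hypothetical names rather than the actual standard-library code: because the wrapper extends AnyVal, the compiler can apply the extension method without allocating a wrapper object at runtime.

object ArraySyntax {
  implicit class ArraySizeOps[A](val xs: Array[A]) extends AnyVal {
    def sizeOf: Int = xs.length // forwards to the JVM array length field
  }
}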
As of Scala 2.11, these methods may have different performance. For example, consider this code:
val bigArray = Array.fill(1000000)(0)
val beginTime = System.nanoTime()
var i = 0
while (i < 2000000000) {
  i += 1
  bigArray.length
}
val endTime = System.nanoTime()
println(endTime - beginTime)
sys.exit(-1)
Running this on my amd64 computer gives about 2423834 nanos (it varies from run to run).
Now, if I change the length call to size, it becomes about 70764719 nanos.
This is more than 20x slower.
Why does this happen? I haven't dug into it, so I don't know. But there are scenarios where length and size perform drastically differently.
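One plausible mechanism, though this is only my assumption (the answer above leaves the question open): length reads the JVM array's length field directly, while size is only reachable through an implicit conversion to a wrapper, which may allocate on every call:

bigArray.length                    // compiles to the JVM arraylength instruction
Predef.intArrayOps(bigArray).size  // roughly what bigArray.size expands to in 2.11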
They are synonyms, as the scaladoc for Buffer.size states:
The size of this buffer, equivalent to length.
The scaladoc for Buffer.length is explicit too:
The length of the buffer. Note: xs.length and xs.size yield the same result.
Simple advice: refer to the scaladoc before asking a question.
UPDATE: Just saw your edit adding a mention of performance. As Daniel C. Sobral said, one is normally implemented in terms of the other, so they have the same performance.

Is it possible to implement F#'s infrastructure for Units of Measurement in Scala?

F# ships with special support for a unit of measurement system, which provides static type safety while compiling down to the numeric types instead of burdening the runtime with wrapping/unwrapping operations.
Is it possible to use some of Scala's type system magic to implement something comparable to that?
The answer is no.
Now, someone is bound to point me to Scalar, but that gives runtime checking. Perhaps, then, point to the efforts of Jesper Nordenberg's type-safe units or Jim McBeath's take on it, but these are cumbersome and awkward.
I'll point, instead, to the Units compiler plugin. It gave Scala, back in 2008/2009, a pretty good system of units, as can be seen in this post. It did so, however, by extending the compiler, which would not be necessary if the type system were enough. Alas, it has not been maintained and no longer works.
I don't know anything about it, but I just stumbled across this talk at Scala Days: https://wiki.scala-lang.org/display/SW/ScalaDays+2011+Resources#ScalaDays2011Resources-ScalaUImplementingaScalalibraryforUnitsofMeasure
Kind of. You can encode the SI units quite easily using a type-level representation of integers in a tuple of exponents. See http://svn.assembla.com/svn/metascala/src/metascala/Units.scala for an example implementation.
It should also be possible to support an extensible units system if the units are encoded as a TList of pairs of a unit type and an integer (for example, ((M, _1), (S, _2)) where M <: Unit and S <: Unit). Calculating the types for quantity operations becomes a bit more complicated in this encoding.
Regarding performance, there will always be a memory overhead for wrapping the value in a type containing the unit information. However, there is probably no overhead in the actual operations, as all unit checking is done at compile time.
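To give a flavor of the compile-time checking, here is a deliberately tiny phantom-type sketch, far simpler than the exponent encoding above (it checks unit consistency for addition but does no unit arithmetic for multiplication):

sealed trait UnitTag
sealed trait Metre extends UnitTag
sealed trait Second extends UnitTag

// The unit parameter exists only at compile time; at runtime a
// Quantity is just a wrapped Double.
final case class Quantity[U <: UnitTag](value: Double) {
  def +(that: Quantity[U]): Quantity[U] = Quantity(value + that.value)
}

val d = Quantity[Metre](3.0) + Quantity[Metre](4.0) // fine: Quantity[Metre](7.0)
// Quantity[Metre](3.0) + Quantity[Second](5.0)     // does not compile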
Have a look at Units of Measure - A Scala Macro System. It seems to satisfy your requirements.

Implementing a Measured value in Scala

A Measured value consists of a (typically nonnegative) floating-point number and a unit of measure. The point is to represent real-world quantities, and the rules that govern them. Here's an example:
scala> val oneinch = Measure(1.0, INCH)
oneinch : Measure[INCH] = Measure(1.0)
scala> val twoinch = Measure(2.0, INCH)
twoinch : Measure[INCH] = Measure(2.0)
scala> val onecm = Measure(1.0, CM)
onecm : Measure[CM] = Measure(1.0)
scala> oneinch + twoinch
res1: Measure[INCH] = Measure(3.0)
scala> oneinch + onecm
res2: Measure[INCH] = Measure(1.787401575)
scala> onecm * onecm
res3: Measure[CMSQ] = Measure(1.0)
scala> onecm * oneinch
res4: Measure[CMSQ] = Measure(2.54)
scala> onecm * Measure(1.0, LITER)
console>:7: error: conformance mismatch
scala> oneinch * 2 == twoinch
res5: Boolean = true
Before you get too excited: I haven't implemented this, I just dummied up a REPL session. I'm not even sure of the syntax. I just want to be able to handle things like adding Measured quantities (even with mixed units), multiplying Measured quantities, and so on, and ideally I'd like Scala's vaunted type system to guarantee at compile time that expressions make sense.
My questions:
Is there extant terminology for this problem?
Has this already been done in Scala?
If not, how would I represent concepts like "length" and "length measured in meters"?
Has this been done in some other language?
A $330-million Mars probe was lost because the contractor was working in pound-force units while NASA was expecting newtons. A Measure library would have prevented the crash.
F# has support for it, see for example this link for an introduction. There has been some work done in Scala on Units, for example here and here. There is a Scala compiler plugin as well, as described in this blog post. I briefly tried to install it, but using Scala 2.8.1, I got an exception when I started up the REPL, so I'm not sure whether this plugin is actively maintained at the moment.
Well, this functionality exists in Java, meaning you can use it directly in Scala: JSR-275, which was moved to Google Code, and JScience, which implements the spec. Here's a good introduction. If you want a better interface, I'd use this as a base and build a wrapper around it.
Your question is fully answered with one word. You can thank me later.
FRINK. http://futureboy.us/frinkdocs/
FYI, I have developed a Scalar class in Scala to represent physical units. I am currently using it for my R&D work in air traffic control, and it is working well for me. It does not check for unit consistency at compile time, but it checks at run time. I have a unique scheme for easily substituting it with basic numeric types for efficiency after the application is tested. You can find the code and the user guide at
http://russp.us/scalar-scala.htm
Here is the summary from the website:
Summary-- A Scala class was designed to represent physical scalars and to eliminate errors involving implicit physical units (e.g., confusing radians and degrees). The standard arithmetic operators are overloaded to provide syntax identical to that for basic numeric types. The Scalar class itself does not define any units but is part of a package that includes a complete implementation of the standard metric system of units and many common non-metric units. The scalar package also allows the user to define a specialized or reduced set of physical units for any particular application or domain. Once an application has been developed and tested, the Scalar class can be switched off at compile time to achieve the execution efficiency of operations on basic numeric types, which are an order of magnitude faster. The scalar class can also be used for discrete units to enforce type checking of integer counts, thereby enhancing the static type checking of Scala with additional dynamic type checking.
Let me clarify my previous post. I should have said: these kinds of errors ("meter/yard conversion errors") are automatically AVOIDED (not "handled") by simply using my Scalar class. All unit conversions are done automatically. That's the easy part.
The harder part is the checking for unit inconsistencies, such as adding a length to a velocity. This is where the issue of dynamic vs. static type checking comes up. I agree that static checking is generally preferable, but only if it can be done without sacrificing usability and convenience.
I have seen at least two "projects" for static checking of units, but I have never heard of anyone actually using them for real work. If someone knows of a case where they were used, please let me know. Until you use software for real work, you don't know what sorts of issues will come up.
As I wrote above, I am currently using my Scalar class (http://russp.us/scalar-scala.htm) for my R&D work in ATC. I've had to make many tweaks along the way for usability and convenience, but it is working well for me. I would be willing to consider a static units implementation if a proven one comes along, but for now I feel that I have essentially 99% of the value of such a thing. Hey, the vast majority of scientists and engineers just use "Doubles," so cut me some slack!
"Yeah, ATC software with run-time type checking? I can see headlines now: "Flight 34 Brought Down By Meter/Yard Conversion"."
Sorry, but you don't know what you're talking about. ATC software is tested for years before it is deployed. That is enough time to catch unit inconsistency errors.
More importantly, meter/yard conversions are not even an issue here. These kinds of errors are automatically handled simply by using my Scalar class. For those kinds of errors, you need neither static nor dynamic checking. The issue of static vs. dynamic checking comes up only for unit inconsistencies, as in adding length to time. These kinds of errors are less common and are typically caught with dynamic checking on the first test run.
By the way, the interface here is terrible.

Scala linked list stackoverflow

Using Scala, I have added about 100000 nodes to a linked list. When I use the length function, for example mylist.length, I get a java.lang.StackOverflowError. Is my list too big to process? The list contains only String objects.
It appears the library implementation is not tail-recursive:
override def length: Int = if (isEmpty) 0 else next.length + 1
It seems like this is something that could be discussed on the mailing list, to check whether an enhancement ticket should be opened.
You can compute the length like this:
import scala.annotation.tailrec

@tailrec
def length[T](l: LinkedList[T], acc: Int = 0): Int =
  if (l.isEmpty) acc else length(l.tail, acc + 1)
In Scala, computing the length of a List is an O(n) operation, therefore you should try to avoid it. You might consider switching to an Array, for which length is a constant-time operation.
You could try increasing the stack/heap size available to the JVM.
JAVA_OPTS="-Xmx512M -Xms16M -Xss16M" scala MyClass.scala
Where
-Xss<size> maximum native stack size for any thread
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
This question has some more information.
See also this Scala document.
Can you confirm that you truly need to use the length method? It sounds like you might not be using the correct collection type for your use-case (hard to tell without any extra information). Lists are optimised to be mapped over using folds or a tail-recursive function.
Despite saying that, this is absolutely an oversight that can easily be fixed in the standard library with a tail-recursive function. Hopefully we can get it in time for 2.9.0.