Scala.js code produced by 1.0.1 slower than 0.6.32

I've been coding a small project using Scala.js:
https://github.com/ppgllrd/Algorithms.scalaJS.InfectiousDiseaseSimulator
I noticed that the JavaScript produced by the 1.0.1 compiler turns out to be quite a bit slower than that produced by 0.6.32. Both versions can be accessed at:
https://ppgllrd.github.io/Algorithms.scalaJS.InfectiousDiseaseSimulator/0.6.32/
https://ppgllrd.github.io/Algorithms.scalaJS.InfectiousDiseaseSimulator/1.0.1/
My animations run slower with 1.0.1. This is especially noticeable when I use Firefox and set the population size parameter to its highest setting (1500). One can even notice that the initialisation of the algorithm (from the moment you press Start until you see the first frame of the animation) takes much longer with 1.0.1.
I have compiled both versions in exactly the same way (using Scala 2.13.1), the only difference being whether I use addSbtPlugin("org.scala-js" % "sbt-scalajs" % "0.6.32") or addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.0.1") in my plugins.sbt.
Is this behaviour to be expected? Since 1.0.1 claims to provide better run-time performance, is there anything in particular I'm doing in my code that could be responsible for this loss of performance?

If it is especially noticeable on Firefox, it could be the use of ES 2015 by default in 1.x. Firefox's performance for ES 2015 is less than optimal, although using ES 2015 allows significant code size reductions and an improved debugging experience.
You can force Scala.js 1.x to emit ES 5.1 (like Scala.js 0.6.x) with the following sbt setting:
scalaJSLinkerConfig ~= { _.withESFeatures(_.withUseECMAScript2015(false)) }
added to your project's settings. Make sure to reload and clean before testing again.
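
For reference, here is a minimal build.sbt sketch showing where that setting sits; the project name is illustrative and the Scala version is the one from the question:

// build.sbt (Scala.js 1.x)
enablePlugins(ScalaJSPlugin)

name := "infectious-disease-simulator" // illustrative
scalaVersion := "2.13.1"

// Emit ES 5.1 output (as Scala.js 0.6.x did) instead of the ES 2015 default
scalaJSLinkerConfig ~= { _.withESFeatures(_.withUseECMAScript2015(false)) }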

Related

Calling RenderScript from C / JNI

I'm looking to replace the C atan2 function with something more efficient. RenderScript does offer atan2, including versions that take vectors.
The examples I found demonstrate calling RenderScript from Java. Is it possible to call RS from C code? An example would be great.
Thanks
It used to be possible, although RS support in the NDK has been dropped for some time now. It may still work, but even the NDK samples no longer include RS examples. Starting with Android 7 you could try "Single Source RenderScript", described here, which is supposed to be usable from C/C++ code.
The efficiency gains you may see using RS are due to a few possible reasons (which are very platform dependent):
RS will parallelize operations over your data set. In some cases the function you are calling (such as atan2) may parallelize the operation, if possible.
Your RS code may be executed on a co-processor (such as a GPU or DSP).
The RS-provided intrinsics and library functions are highly optimized for the platform. Using atan2 as an example again, the function may be more optimized in the RS core than in the standard C library, because it could be using a co-processor or an architecture-specific optimized implementation (assembly).
All of that being said, your code can take an I/O hit when moving data between RS space (an Allocation) and non-RS code.
I have found two examples; here is the one I got to build and run:
https://github.com/adhere/NDKCallRenderScriptDemo
I've been searching for documentation of the C++ API but haven't found it.

Conditional compilation of code blocks in Scala

I was wondering if there is a way to conditionally exclude a block of code from being compiled in Scala using compile-time flags (i.e. some rough equivalent of the C family's #define). I am aware that there is no direct counterpart, and I don't think Scala's macros are what I need, so I was wondering if there is another way to do this.
In my current case specifically (and I provide this only as an example, because I've had other cases in the past that prompted the same question), I am building a library in ScalaJS. The library is a front-end component, and will primarily be used by my application - which is also using ScalaJS. However, I would like to allow this component to be called by native Javascript in other projects that are not using ScalaJS. As such, I want to have a user-configurable flag that will toggle the exporting of symbols to native Javascript upon request.
It makes no sense for these symbols to be exported by default (in my application), since the only other code calling the component will be other ScalaJS code, and thus the overhead of exported symbols is pointless. Maintaining two separate code branches for something so trivial also seems like a futile effort.
This is basically what I have in mind (pseudo-code, of course):
...
#if JS_EXPORT
@JSExport
#endif
case class componentProps(
  #if JS_EXPORT
  @(JSExport @field)
  #endif
  val propertyOne: Int,
  #if JS_EXPORT
  @(JSExport @field)
  #endif
  val propertyTwo: String
)
...
I am well aware that there is no pre-processor and the above is intended as pseudo-code only. I was just wondering if there is a way of accomplishing something similar, without unnecessary overhead such as using reflection (because I'm sure that would provide a bigger performance hit than just exporting by default).
Also, I was able to find this question: Conditional compilation in Scala. But that is not what I need.
There isn't any way to do this within the source code that isn't a complete hack.
The standard ways to accomplish dual JVM/JS projects are to minimize the number of source files where a difference occurs, and to do it by hand for those (almost all of Li Haoyi's projects are like this--check out the structure of fastparse for example); or to have two git branches which have the two variants and merge all changes from one to the other.
For your specific case, you do not need source-level tricks to do that: Scala.js provides the scalajs-stubs library. This is a JVM-targeted library that contains stub versions of the Scala.js annotations (@JSExport et al.).
You can add it as a "provided" dependency to your JVM project, so it won't be needed at runtime:
libraryDependencies +=
  "org.scala-js" %% "scalajs-stubs" % scalaJSVersion % "provided"
Note that the annotations are not static, i.e. they won't even appear in the .class files.
More details on scala-js.org (bottom of the page).
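
For concreteness, here is a rough sketch (in the Scala.js 0.6.x style) of what a shared source file can look like under this setup; the names are illustrative. On the JS side the annotations come from Scala.js itself, on the JVM side they resolve against scalajs-stubs:

// Shared source, compiled by both the JVM and the Scala.js build.
import scala.scalajs.js.annotation._

@JSExport     // JS side: exports the constructor to native JavaScript
@JSExportAll  // JS side: exports all public members, i.e. the two fields below
case class ComponentProps(propertyOne: Int, propertyTwo: String)

On the JVM the annotations are inert, so there is no runtime dependency and no exporting overhead.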

Simulink backwards compatibility

I implemented a (medium to large) Simulink model in 2012b.
I thought it would also work in 2010b SP2, but it didn't: some mask blocks do not open, and there are other strange errors.
In previous versions of Simulink there was a "Save as Simulink 201x" option to force compatibility, but I couldn't find it anymore in 2012b.
Any clues on how to avoid rework?
Starting with 2012b, and the new interface, they have moved the option to the menu:
File / Export Model to / Previous Version
The feature never seems to fully work, and I often get warnings when first loading a model into an older version, so I would recommend giving your model a thorough check over and test. I always save again from the correct version, to clear the warnings.
Refactoring for a new Simulink version can show up at several points.
Your own libraries: there is a feature called "Forwarding Tables" that allows you to specify where a block is located in the new library version (I suppose you will have to refactor your libraries as well, and probably someone else uses those libraries too).
It sounds like a big hack (and it is), but I found it is sometimes the path of least resistance: just open your model file in the editor of your choice and replace the block paths with the usual find-and-replace refactoring functions. It is terrible, I know, but Simulink really lacks refactoring functions.

intellij idea 11, scala slow execution [duplicate]

I've been programming in Scala for a while and I like it, but one thing I'm annoyed by is the time it takes to compile programs. It seems like a small thing, but with Java I could make small changes to my program, click the run button in NetBeans, and BOOM, it's running; over time, compiling in Scala seems to consume a lot of time. I hear that with many large projects a scripting language becomes very important because of the time compiling takes, a need I didn't see arising when I was using Java.
But I'm coming from Java, which as I understand it compiles faster than any other compiled language, and is fast precisely because of the reasons I switched to Scala (it's a very simple language).
So I wanted to ask: can I make Scala compile faster, and will scalac ever be as fast as javac?
There are two aspects to the (lack of) speed for the Scala compiler.
Greater startup overhead
Scalac itself consists of a LOT of classes which have to be loaded and JIT-compiled.
Scalac has to search the classpath for all root packages and files. Depending on the size of your classpath this can take one to three extra seconds.
Overall, expect a startup overhead of scalac of 4-8 seconds, longer the first time you run it, when the disk caches are not yet filled.
Scala's answer to startup overhead is to either use fsc or to do continuous building with sbt. IntelliJ needs to be configured to use either option, otherwise its overhead even for small files is unreasonably large.
Slower compilation speed. Scalac manages about 500 to 1000 lines/sec; javac manages about 10 times that. There are several reasons for this.
Type inference is costly, in particular if it involves implicit search.
Scalac has to do type checking twice; once according to Scala's rules and a second time after erasure according to Java's rules.
Besides type checking there are about 15 transformation steps to go from Scala to Java, which all take time.
Scala typically generates many more classes per given file size than Java, in particular if functional idioms are heavily used (a small sketch follows below). Bytecode generation and class writing take time.
On the other hand, a 1000 line Scala program might correspond to a 2-3K line Java program, so some of the slower speed when counted in lines per second has to be balanced against more functionality per line.
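
As a rough illustration of the class-generation point (exact class counts depend on the Scala version; compilers before 2.12 emit one anonymous class per lambda):

object Pipeline {
  // With pre-2.12 compilers, each of the three lambdas below becomes its own
  // anonymous-function class file in addition to Pipeline$.class, so even this
  // short snippet produces several .class files that scalac has to write.
  def process(xs: List[Int]): List[Int] =
    xs.filter(_ > 0).map(_ * 2).sortBy(x => -x)
}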
We are working on speed improvements (for instance by generating class files in parallel), but one cannot expect miracles on this front. Scalac will never be as fast as javac.
I believe the solution will lie in compile servers like fsc in conjunction with good dependency analysis so that only the minimal set of files has to be recompiled. We are working on that, too.
The Scala compiler is more sophisticated than Java's, providing type inference, implicit conversion, and a much more powerful type system. These features don't come for free, so I wouldn't expect scalac to ever be as fast as javac. This reflects a trade-off between the programmer doing the work and the compiler doing the work.
That said, compile times have already improved noticeably going from Scala 2.7 to Scala 2.8, and I expect the improvements to continue now that the dust has settled on 2.8. This page documents some of the ongoing efforts and ideas to improve the performance of the Scala compiler.
Martin Odersky provides much more detail in his answer.
You should be aware that Scala code takes at least an order of magnitude longer than Java to compile. The reasons for this are as follows:
Naming conventions (a file XY.scala need not contain a class called XY and may contain multiple top-level classes). The compiler may therefore have to search more source files to find a given class/trait/object identifier.
Implicits - heavy use of implicits means the compiler needs to search all in-scope implicits for a given call and rank them to find the "right" one (i.e. the compiler has a massively increased search domain when locating a method); a toy sketch follows this list.
The type system - the Scala type system is way more complicated than Java's and hence takes more CPU time.
Type inference - type inference is computationally expensive, and a job that javac does not need to do at all.
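As a toy illustration of the implicits point (the types and names here are made up): every call that only type-checks via an implicit forces the compiler to search and rank the candidates in scope.

import scala.language.implicitConversions

case class Meters(value: Double)
case class Feet(value: Double)

object Conversions {
  implicit def metersToFeet(m: Meters): Feet = Feet(m.value * 3.28084)
}

object Demo {
  import Conversions._

  def describe(f: Feet): String = s"${f.value} ft"

  // To accept this call, the compiler must discover metersToFeet among all
  // in-scope implicits and check that no other candidate makes it ambiguous.
  val s: String = describe(Meters(100))
}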
scalac includes an 8-bit simulator of a fully armed and operational battle station, viewable using the magic key combination CTRL-ALT-F12 during the GenICode compilation phase.
The best way to do Scala is with IDEA and SBT. Set up an elementary SBT project (which it will do for you, if you like) and run it in automatic compile mode (the ~compile command); when you save your project, SBT will recompile it.
You can also use the SBT plug-in for IDEA and attach an SBT action to each of your Run Configurations. The SBT plug-in also gives you an interactive SBT console within IDEA.
Either way (SBT running externally or SBT plug-in), SBT stays running and thus all the classes used in building your project get "warmed up" and JIT-ed and the start-up overhead is eliminated. Additionally, SBT compiles only source files that need it. It is by far the most efficient way to build Scala programs.
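As a rough sketch (the project name and Scala version are illustrative), the build definition for such a setup can be as small as this; starting sbt once and leaving it running is what keeps the compiler warm:

// build.sbt of a minimal Scala project
name := "my-project"      // illustrative
scalaVersion := "2.13.1"  // any supported Scala version

// From the project root, run:  sbt ~compile
// sbt stays resident, so scalac's classes stay loaded and JIT-ed, and only
// the sources affected by a change are recompiled on each save.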
The latest revisions of Scala-IDE (Eclipse) are much better at managing incremental compilation.
See "What’s the best Scala build system?" for more.
The other solution is to integrate fsc - the fast offline compiler for the Scala 2 language - as a builder in your IDE (as illustrated in this blog post).
But not directly in Eclipse, though, as Daniel Spiewak mentions in the comments:
You shouldn't be using FSC within Eclipse directly, if only because Eclipse is already using FSC under the surface.
FSC is basically a thin layer on top of the resident compiler which is precisely the mechanism used by Eclipse to compile Scala projects.
Finally, as Jackson Davis reminds me in the comments:
sbt (Simple Build Tool) also includes a kind of "incremental" compilation (through triggered execution), even though it is not perfect, and enhanced incremental compilation is in the works for the upcoming 0.9 sbt version.
Use fsc - it is a fast Scala compiler that sits in the background and does not need to be loaded every time. It can reuse a previous compiler instance.
I'm not sure if the NetBeans Scala plugin supports fsc (the documentation says so), but I couldn't make it work. Try the nightly builds of the plugin.
You can use the JRebel plugin, which is free for Scala. That way you can kind of "develop in the debugger", and JRebel will always reload the changed classes on the spot.
I read a statement somewhere by Martin Odersky himself saying that the searches for implicits (the compiler must make sure there is not more than one implicit for the same conversion, to rule out ambiguity) can keep the compiler busy. So it might be a good idea to handle implicits with care.
If it doesn't have to be 100% Scala, but also something similar, you might give Kotlin a try.
-- Oliver
I'm sure this will be down-voted, but extremely rapid turn-around is not always conducive to quality or productivity.
Take time to think more carefully and execute fewer development micro-cycles. Good Scala code is denser and more essential (i.e., free from incidental details and complexity). It demands more thought and that takes time (at least at first). You can progress well with fewer code / test / debug cycles that are individually a little longer and still improve your productivity and the quality of your work.
In short: Seek an optimum working pattern better suited to Scala.

AOT compilation or native code compilation of Scala?

My Scala application needs to perform simple operations over large arrays of integers and doubles, and performance is a bottleneck. I've struggled to put my finger on exactly when certain optimizations kick in (e.g. escape analysis), although I can observe their results through various benchmarks. I'd love to do some AOT compilation of my Scala application, so I can see or enforce (or implement) certain optimizations ... or compile to native code, if possible, so I can cut corners like bounds checking and observe whether it makes a difference.
My question: what alternative compilation methods work for scala? I'm interested in tools like llvm, vmkit, soot, gcj, etc. Who is using those successfully with scala at this point, or are none of these methods currently compatible or maintained?
GCJ can compile JVM classes to native code. This blog describes tests done with Scala code: http://lampblogs.epfl.ch/b2evolution/blogs/index.php/2006/10/02/scala_goes_native_almost?blog=7
To answer my own question, there is no alternative backend for Scala except for the JVM. The .NET backend has been in development for a long time, but its status is unclear. The LLVM backend is also not yet ready for use, and it's not clear what its future is.