How to build an ANTLR code generation target

Is there an ANTLR4 version of "How to build an ANTLR code generation target"? I know there is an ANTLR3 version, but it seems to be way out of date.

The only real resource at this point is the repository for the ANTLR 4 C# target. It does include a Creating Targets document, but it's not up to date and some sections (notably the Release Structure section) are not correct. The Git history for the project shows how I implemented the C# target starting from the Java runtime.
You should note the following:
Despite the removal of the AST and StringTemplate features from the ANTLR runtime, the ANTLR 4 runtime is very complicated due to the addition of the ALL(*) parsing algorithm. You should be very well versed in data structures, parallel programming, the semantics of Java code, and the semantics/libraries of your target language before attempting to create an ANTLR 4 target.
Creation and maintenance of the C# target is tremendously simplified by using a modified version of Sharpen to automate a large portion of the Java-to-C# conversion.

Making sense of Scala development tools

There is a myriad of development tools and terms in the ecosystem, for example, language server, build server, Metals, BSP, LSP, Bloop, Zinc, Coursier, incremental compiler, presentation compiler, etc.
I was wondering if someone could demonstrate how they fit together and briefly explain their relations and differences. Specifically, I am hoping for a diagram and answer along the lines of Making sense of Scala FP Libraries. For example, here is my attempt:
(Concept)                      (Example implementation)
--------------------------------------------------------------------
IDE                            Visual Studio Code
  |                              |
Scala IDE plugin               Metals Visual Studio Extension
  |                              |
Language Server Protocol       Microsoft LSP
  |                              |
Scala language server          Metals
  |                              |
Build Server Protocol          BSP from JetBrains and Scala Center
  |                              |
Scala build server             Bloop
  |                              |
Build tool                     sbt
  |                              |
Dependency resolution          Coursier
  |                              |
Incremental compiler           Zinc
  |                              |
Presentation compiler          parser and typer phases of scalac
  |                              |
Bytecode generation            remaining compiler phases
IDEs like IntelliJ or (once) Scala IDE are intended for "smart" development, where the editor tells you if your code is correct, suggests fixes, autogenerates some code, and provides code navigation and refactoring - in other words, many features that go way beyond simple text editing with perhaps only syntax highlighting.
IntelliJ has a Scala extension which makes use of their own reimplementation of the Scala compiler - they needed that for better partial compilation and for IntelliSense that keeps working even if part of the code is broken. It imports the build definition from another build tool (e.g. sbt or Bloop), and from then on IntelliJ doesn't rely on anything external (unless you use an option like "build with sbt").
Scala IDE relied on the Scala presentation compiler for IntelliSense. As you can read on scala-ide.org:
The Scala IDE for Eclipse uses the Scala Presentation Compiler, a faster asynchronous version of the Scala Compiler. The presentation compiler only runs the phases up until and including the typer phase, that is, the first 4 of the 27 scala compilation phases. The IDE uses the presentation compiler to provide semantic features such as live error markers, inferred type hovers, and semantic highlighting. This document describes the key classes you need to know in order to understand how the Scala IDE uses the presentation compiler and provides some examples of interactions between the IDE and the presentation compiler.
Every other editor/IDE is intended to use the Language Server Protocol - LSP is Microsoft's invention to standardize the way different editors support languages (though they invented it for the sake of VS Code), allowing those editors to provide IDE features. Metals (from ScalaMeta Language Server) is the LSP implementation for Scala. As you can read here:
Code completions, type at point and parameter hints are implemented using the Scala presentation compiler, which is maintained by the Scala compiler team at Lightbend.
You can add it to VS Code using the Scala Metals extension.
sbt, gradle, mill, fury, cbt, etc. are build tools. They use something like Ivy2 or Coursier to resolve and download dependencies, and then use the Zinc incremental compiler so that they can (re)build things incrementally instead of handing everything to a normal full compilation. Build tools can also run tests, generate artifacts and deploy them to a repository.
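To make that concrete, here is a minimal sbt sketch (the library and version numbers are arbitrary examples). With sbt 1.3+ nothing else is needed: the dependency is resolved and downloaded by Coursier, and recompilation goes through Zinc:

// build.sbt - minimal sketch; the library and versions are arbitrary examples
ThisBuild / scalaVersion := "2.13.12"

// Resolved and downloaded by Coursier (sbt's default resolver since 1.3).
// Running `sbt ~compile` then recompiles only changed sources via Zinc.
libraryDependencies += "org.typelevel" %% "cats-core" % "2.10.0"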
Bloop is a solution to the problem that compilation is fast when the JVM is hot, and the JVM goes cold every time you kill your build tool/IDE. For that reason it uses Nailgun to keep a JVM warm, running build tasks in the background. On its own Bloop cannot generate its configuration; in general its configuration should be generated by another build tool, and Bloop's job is to speed up compilation during development. The protocol used to communicate with the Bloop server running in the background is the Build Server Protocol (BSP).
Coursier, while used primarily for dependency resolution, can also be used to install Scala programs. Some of the noteworthy programs that you can install are:
scalafmt - a Scala code formatter
ammonite - an alternative REPL to scala which provides a lot of nice features
scalafix - a code-rewriting tool used to provide automatic code migrations
I gave up on describing things in a table, as they would be much better shown on a graph, but since SO doesn't support such visuals I resorted to plain text.

Publish Local Is Adding Scala Version To Name of Library

The underlying mechanism used to indicate which version of Scala a library was compiled against is to append _<scala-version> to the library's name. This fairly simple approach allows interoperability with users of Maven, Ant and other build tools.
-- sbt Documentation: Cross-Build Publishing Conventions
While this is a simple approach, the interoperability with Maven and other build tools leaves something to be desired. Because the artifactId is different (e.g. scalatest_2.9.0 and scalatest_2.10.0), Maven treats them as different artifacts. Maven's dependency resolution mechanism is thus compromised and multiple versions of the same artifact (built against different scala versions) can wind up on the classpath.
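To see the convention in action, here is a minimal sbt sketch (the Scala version numbers are arbitrary examples); cross-publishing yields one Maven artifactId per Scala version:

// build.sbt - minimal sketch; version numbers are arbitrary examples
name := "my-library"
organization := "com.example"
version := "0.1.0"

// Running `+publishLocal` publishes my-library_2.12 and my-library_2.13
// as two separate Maven artifactIds built from the same sources.
crossScalaVersions := Seq("2.12.18", "2.13.12")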
Why not put the scala version in the classifier? This seems to be one of the primary intended use cases for the classifier:
The classifier allows [Maven] to distinguish artifacts that were built from the same POM but differ in their content. As a motivation for this element, consider for example a project that offers an artifact targeting JRE 1.5 but at the same time also an artifact that still supports JRE 1.4. The first artifact could be equipped with the classifier jdk15 and the second one with jdk14 such that clients can choose which one to use.
-- Maven Documentation: POM Reference
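For comparison, consuming a classified artifact from sbt would look something like the following hypothetical sketch (the coordinates are made up; note that all classifiers of an artifact still share a single POM):

// Hypothetical dependency on a classifier-distinguished artifact.
// sbt supports this, but the classified jar shares the POM (and hence
// the declared dependencies) of the main artifact.
libraryDependencies += "com.example" % "some-library" % "1.0" classifier "jdk15"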
Appending the version to the name is a historical decision that was made a long time ago, so it's likely not going to change: many libraries are already published with that convention.
Having said that, as Seth noted, there was a discussion reviewing this topic a few years ago, when sbt 0.12 shortened the "_2.10.0" postfix to "_2.10" to take advantage of the Scala library's binary compatibility across patch releases within a minor series. Here's Mark from the [0.12] plan thread:
By cross versioning, I mean the practice of including some part of the Scala version in the module ID to distinguish an artifact generated by compiling the same source code against different Scala versions. I do not mean the ability to build against multiple Scala versions using +task, which will stay; I am just referring to the cross version convention.
[snip]
It has always been a hack to encode this in the inflexible pom.xml format and I think it may be best to move away from this for projects built against Scala 2.10 and later. However, perhaps this is better than any ad hoc solutions that might take its place. I don't see users of other build tools doing this, so I expect nothing would replace it.
Somewhere down the thread Josh suggested:
(1) Scala classifiers. These can be custom strings and can be specified with dependencies. At least, IIRC this should work.
Here's Mark's response:
What do you mean by "can be specified with dependencies"? There is only one pom for all of the classifiers, right? How can you declare different dependencies for each classifier?
Here are some more interesting remarks on classifiers from Geoff Reedy:
I too thought that classifiers would be the perfect way to deal with this issue, especially in light of the suggestion in the maven docs that classifiers java-1.4 and java-1.5 be used to distinguish between jars appropriate for the respective platform. The fatal flaw seems to be transitive dependency management. That is, there's no way to choose the transitive dependency set based on the classifier used to require the module. We'd need to be able to say that when you're using this module with the scala-2.10 classifier it brings its own dependencies using the scala-2.10 classifier, and when used with the 2.9 classifier it brings in its own deps with the scala-2.9 classifier.
I think with the jvm versions it's possible to make this work because jvm versioning has special support in the profile activation which can control dependencies.

How to discard files from the Xtext indexing process?

I have built an Xtext-based editor for our DSL which works fine, but we get an out-of-memory error while the workspace is building or when we force a project clean. Our DSL plug-in is used in conjunction with the Eclipse CDT to build microcontroller test programs. A test program project is made of C++ files and ".xxx" files, for which I have built the DSL editor. The out-of-memory error occurs when the test program project contains at least one large ".xxx" file (~300 MB). We don't even open this large file; we simply clean the project and the memory error occurs.
This seems to be an Xtext indexer issue. Is there a way to tell the Xtext indexer to ignore ".xxx" files located in a particular folder of the project? I have read the Scoping chapter of the excellent "Implementing DSLs with Xtext and Xtend" by Lorenzo Bettini several times, but did not find any solution to this issue. Can you help me, please?
The extension points for this are org.eclipse.xtext.resource.IResourceServiceProvider.canHandle(URI) and org.eclipse.xtext.ui.resource.IResourceUIServiceProvider.canHandle(URI, IStorage).
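A minimal sketch of such a filter, written here in Scala (Xtext services are usually Java or Xtend, but any JVM language works; the superclass and the folder name below are assumptions, not your DSL's actual types):

import org.eclipse.emf.common.util.URI

// Hypothetical subclass of the resource service provider that your
// DSL's module binds; only the canHandle override matters here.
class MyDslResourceServiceProvider extends MyDslDefaultResourceServiceProvider {
  // Skip ".xxx" files under a "large-data" folder so the Xtext builder
  // never loads or indexes them.
  override def canHandle(uri: URI): Boolean =
    super.canHandle(uri) && !uri.toString.contains("/large-data/")
}

You would then rebind this provider in your DSL's module so that Xtext picks it up during the build.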

Why do scala maven artifacts have an artifact for each scala version instead of a classifier per scala version?

Since you only have source compatibility between Scala versions, you unfortunately need to compile libraries like scalatest or scalamock for each Scala version they support. What puzzles me is that the libraries are provided with loads of artifacts (scalatest_2.9.0, scalatest_2.9.1, scalatest_2.10 and so forth) - one for each Scala version - such that the Maven repository is littered with many artifacts built from the same source. My instinct tells me rather to use one artifact with a classifier for each Scala version. (In fact, the Maven POM reference mentions that this was sometimes done with jdk14 and jdk15 classifiers for artifacts, which seems similar to me.) So, why did the Scala people go for the many-artifact overkill :-) instead?
I may be wrong, but if a classifier's purpose is to "distinguish artifacts that were built from the same POM but differ in their content", then I see a very good reason not to use them for scala versioning: scala versions are not just binary incompatible, they can very well be source incompatible.
For example, when upgrading Scala from 2.7 to 2.8, I had to make some significant changes to the code base. If I had wanted to keep both a Scala 2.7 and a Scala 2.8 version at the same time, I would have needed to create a parallel branch, and the two branches would definitely not have had the same source code.
When I read "from the same POM", I understand that it means from the same source code too, which would clearly not be the case with those two branches of code.
Another more important reason is that a classifier is essentially a single string which is already used for many things. More or less standard classifiers include "sources", "javadoc" or "resources". The meanings of these classifiers are the same in a Scala project and are totally orthogonal to the Scala version, as I'll try to show.
Maven's documentation suggests using classifiers such as "jdk15" or "jdk14" to denote which version of the jvm the binary artifact was compiled against.
Given that Java code is backward compatible, in principle the artifacts with both classifiers ("jdk15" and "jdk14") are compiled from the same source code. This is why you don't need to duplicate classifiers for the "sources" artifact, or in other words you don't need classifiers named "sources-jdk14" and "sources-jdk15".
But you cannot apply the same rationale to the Scala version: given that you might need different source code depending on whether you compile against Scala 2.7 or Scala 2.8, you would indeed need two different artifacts with classifiers such as "sources-scala2.7" and "sources-scala2.8". So we have composite classifiers already.
As for binary artifacts, you would not only need to distinguish between the target jvm versions (remember, you can compile your Scala code to target different jvm versions) but also the Scala version it was compiled against. So you would end up with something like "jdk14-scala2.7" or "jdk14-scala2.8" or "jdk15-scala2.7" or "jdk15-scala2.8". Yet another set of composite classifiers.
So the take-home message is that the Scala version really is a separate way of classifying artifacts, one that is totally orthogonal to all the existing classifiers.
Yes, we could really use composite classifiers as above (such as "sources-scala2.7"), but then we would not be using standard classifiers, which is confusing enough in itself, and it would also require modifying all the tooling around classifiers: what if I use a build tool that has no knowledge of Scala (only Java) but knows how to automatically publish a "sources" artifact? Will I need to modify this build tool so that it knows to publish a "sources-scala2.7" artifact instead? On the other hand, if I encode the Scala version in the (base) artifact name and give that to the build tool, everything works as usual and I get an artifact with the "sources" classifier.
All in all, and contrary to immediate intuition, encoding the scala version in the name allows for better integration in the existing java build ecosystem.
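This is also exactly what sbt's %% operator automates: it appends the Scala binary version to the artifact name and leaves the classifiers alone (the version numbers below are arbitrary examples):

// %% appends the Scala binary suffix to the artifact name...
libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.18" % Test
// ...which on Scala 2.13 is equivalent to:
libraryDependencies += "org.scalatest" % "scalatest_2.13" % "3.2.18" % Test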
Scala provides inter-version compatibility of its bytecode output (.class files) only across patch releases (the third component of a Major.Minor.Patch version).
Maven has no place to properly encode this as a first-class property of the artifact, so it has to be encoded by convention in the name.
Sadly...

Is there a Sonar-level code coverage equivalent for Scala?

I'm trying to set up simple code coverage reports for a team coding in mixed Scala/Java at approx. a 90/10 ratio and running into some serious roadblocks. I've previously set up and administered Sonar to great success with a Java-only team, but it doesn't appear to be an option here.
Sonar w/Scala plugin is buggy and appears to support Scala-only projects, not mixed ones.
SCCT integrates with our Maven build, but repeatedly fails with false-negative test failures.
Undercover has been my best luck so far; it's integrated with our Maven build and generates reports, but they aren't archived or hosted anywhere as they would be with Sonar. There also appears to be no central index to make it simple to navigate the generated reports.
I've read the answers here on StackOverflow, but they largely date back to 2010 and suggest that no decent solution is available. Has this changed?
Is there something obvious I'm missing?
About the Sonar side:
yes, the Scala Sonar Plugin development is currently stalled. It was initiated by the community, but nobody has offered to take it over yet. If there are some volunteers, we'll be glad to guide and help them.
concerning the support of several languages inside a single project: this will be coming in Sonar. I can't give you a roadmap for it, but we're currently thinking about how to add it in the next releases, so this is a short-term issue.
You can either use SCCT or JaCoCo.
SCCT: It supports Scala up to version 2.10, but development seems to have stalled for about 9 months. It supports Scala natively and works with both Maven and SBT.
JaCoCo is under active development. It supports any version of Scala, though not natively: it works at the bytecode level. So you might get some artifacts, e.g. some code gets only partial coverage because the generated bytecode contains theoretical code paths that JaCoCo sees but which can never be executed from Scala code.
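As an illustration of such an artifact, consider an exhaustive pattern match (a hypothetical example; the exact bytecode depends on the compiler version):

// Although this match is exhaustive, the compiler may still emit a
// default branch that throws a MatchError in the bytecode; a
// bytecode-level tool like JaCoCo can report that branch as uncovered.
def describe(o: Option[Int]): String = o match {
  case Some(n) => s"value $n"
  case None    => "empty"
}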
JaCoCo can be a little tricky to set up with Maven and Scala. Here are a few tricks:
Use the variant with the agent launcher. Do not use the variant that preprocesses the bytecode (offline instrumentation).
When using JaCoCo with Maven: there is a Maven goal (jacoco:prepare-agent) which produces the correct expression for the agent launcher and stores it in a property (argLine by default). You can then use this property as a command-line parameter when running the Java virtual machine.
Parametrize the agent launcher so that multiple launches (e.g. for running different tests) write to the same log file; with the plain agent this is controlled via options such as destfile and append. Some IDE plugins will have problems parsing such a file, but the JaCoCo Hudson plugin, for example, works fine.