Has anyone here tried to adopt Xtext 2 and migrate from Xtext 1.x to Xtext 2.0?
Xtext 2 seems to bring many attractive new features, such as a reusable expression language (Xbase) and Xtend, a code generation language. Many performance enhancements have also been made to the Xtext workbench, along with rename refactoring support. Can anyone share their experience with Xtext 2 so far? This is probably a bit of an early question, but I'd rather ask than just wait and see.
Xtext 2 homepage
I updated an existing, not-too-complex language from Xtext 1 to Xtext 2, and tried to develop a new one using Xtext 2 and Xbase. I had to re-run the code generation step, and also had to modify the hand-written validators, because error and warning locations are now specified using structural feature literals instead of feature ID integers. E.g.
error("File does not exist with path: " + path, fileReference, ViatraTestDslPackage.FILE__PATH);
is to be replaced with
error("File does not exist with path: " + path, ViatraTestDslPackage.Literals.FILE__PATH);
Similarly, the workflow has to be changed to incorporate some new features: the outline API uses different fragments (outline.OutlineTreeProviderFragment and outline.QuickOutlineFragment), and new fragments have to be added for rename and compare support (refactoring.RefactorElementNameFragment and compare.CompareFragment).
From my experiments with Xbase, adding it to a language that already supports some kind of expressions can be labour-intensive: either the old expressions have to be replaced with Xbase expressions (or at least altered so they can be used inside Xbase expressions), or you have to maintain two kinds of expression support in your code generator or interpreter.
To conclude my answer: if you have a simple Xtext 1.0 editor where you mostly relied on the automatically generated features, migrating to Xtext 2.0 seems easy and is recommended; however, if you customized a lot of things in manually written code, be careful, because the migration might not be straightforward, and I have found no real migration guide.
http://www.eclipse.org/Xtext/documentation/2_0_0/213-migrating-from-1.0.php#migrating_from_1_0_x_5_4
I just found this useful link.
I also ran into some problems, especially in the serialization module. Luckily, the mwe2 workflow file still includes a version 1.0 serialization fragment; I used that, and it fixed the problem I had with the version 2.0 serialization module. I don't know why.
Another problem is a strange bug in the Xtext validation: it keeps complaining about a ClassCastException, a cast from String to QualifiedName.
It is still early, considering the recent release date: the team only just presented and demoed Xtend2 at DemoCamps last month (June 2011).
I want to generate code from my state machine in MagicDraw. MagicDraw supports code generation for classes but does not include an option for state machines. I tried using the SinelaboreRT software; however, it generates only limited code, and we need to manually add the 'Main' function and the other functions defined inside the states. I wanted to know whether it is possible to generate an executable, or a C/C++/Java code file, with all the code mentioned inside the states as well as a 'Main' function?
Yes, there are three main options that I am aware of: 1) write your own code generator, 2) buy a commercial code generator (e.g. LieberLieber provides what appears to be a fairly sophisticated one), or 3) use one of the open-source code generators such as Papyrus-RT.
The first option isn't actually that difficult, depending on your target language and framework. For my work at MITRE, I have written a generator that takes properly formed MagicDraw models and creates deployable Spring microservices. I used the Spring state machine library to simplify the state machine code generation.
Personally, I found that the most convenient way to create state machines from models, whether UML or any other DSL, is the combination of
Eclipse Papyrus / Eclipse Xtext / Eclipse Xtend
There is also a new kid on the block, if you don't want to work Eclipse-based:
Langium
It still suffers a little from being new, but I would advise you to check it every six months; it seems promising.
If you want to see how it is done, I have several blog posts about it (and a small sketch of the kind of generated target code follows the links below).
UML Based:
Akka Finite State Machine Generation Blog2
Papyrus and Spring State Machine
DSL Based:
XText and Spring State Machine
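To give a concrete idea of what the generated target code can look like, here is a small hand-written sketch of a state machine using Akka's classic FSM API (the states, events and names are invented for illustration; a real generator would emit something along these lines from the model):

import akka.actor.{ActorSystem, FSM, Props}

// States, events and (empty) state data for a trivial door state machine
sealed trait DoorState
case object Closed extends DoorState
case object Opened extends DoorState
case object NoData

case object Open
case object Close

class Door extends FSM[DoorState, NoData.type] {
  startWith(Closed, NoData)

  when(Closed) {
    case Event(Open, _) => goto(Opened)   // transition triggered by the Open event
  }

  when(Opened) {
    case Event(Close, _) => goto(Closed)
  }

  initialize()
}

object Demo extends App {
  val system = ActorSystem("doors")
  val door = system.actorOf(Props[Door], "door")
  door ! Open
  door ! Close
}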
I am using Eclipse (version: Kepler Service Release 1) with Prolog Development Tool (PDT) plug-in for Prolog development in Eclipse. Used these installation instructions: http://sewiki.iai.uni-bonn.de/research/pdt/docs/v0.x/download.
I am working with Multi-Agent IndiGolog (MIndiGolog) 0 (the preliminary prolog version of MIndiGolog). Downloaded from here: http://www.rfk.id.au/ramblings/research/thesis/. I want to use MIndiGolog because it represents time and duration of actions very nicely (I want to do temporal planning), and it supports planning for multiple agents (including concurrency).
MIndiGolog is a high-level programming language based on situation calculus. Everything in the language follows situation calculus exactly. This, however, does not fit the project I'm working on.
This other high-level programming language, Incremental Deterministic (Con)Golog (IndiGolog) (download from here: http://sourceforge.net/p/indigolog/code/ci/master/tree/) (also written in Prolog), is also (loosely) based on situation calculus, but uses fluents in a very different way. It makes use of causes_val predicates to denote which action changes which fluent in what way, and it does not include the situation in the fluent!
However, this is what the rest of the team actually wants. I need to rewrite MIndiGolog so that it is still an offline planner, with the nice representation of time and duration of actions, but with the causes_val predicate of IndiGolog to change the values of the fluents.
I find this extremely hard to do, as my knowledge in Prolog and of situation calculus only covers the basics, but they see me as the expert. I feel like I'm in over my head and could use all the help and/or advice I can get.
I already removed the situations from my fluents, made a planning domain with causes_val predicates, and tried to add IndiGolog code into MIndiGolog. But with no luck. Running the planner just returns "false." And I can make little sense of the trace, even when I use the GUI-tracer version of the SWI-Prolog debugger or when I try to place spy points as strategically as possible.
Thanks in advance,
Best, PJ
If you are still interested (sounds like you might not be): this isn't actually very hard.
If you look at Reiter's book, you will find that the causes_val clauses are just effect axioms, while the fluents that mention the situation are usually defined by successor-state axioms. There is a deterministic way to convert from the former to the latter, and the correct interpretation of the causes_val clauses is done in the implementation of regression. This is always the same, and you can just copy that part of the Prolog code from IndiGolog to your flavor.
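For reference, Reiter's construction combines the positive and negative effect axioms for a fluent F into a single successor-state axiom (standard notation; the concrete predicate names in your Prolog code will of course differ):

\gamma_F^{+}(\vec{x},a,s) \rightarrow F(\vec{x},do(a,s)) \qquad \text{(positive effect axiom)}
\gamma_F^{-}(\vec{x},a,s) \rightarrow \neg F(\vec{x},do(a,s)) \qquad \text{(negative effect axiom)}
F(\vec{x},do(a,s)) \;\leftrightarrow\; \gamma_F^{+}(\vec{x},a,s) \,\lor\, \bigl(F(\vec{x},s) \land \neg\gamma_F^{-}(\vec{x},a,s)\bigr) \qquad \text{(successor-state axiom)}

In other words, F holds after doing a exactly when a makes it true, or it was already true and a does not make it false; regression repeatedly replaces fluents about do(a,s) by the right-hand side until only the initial situation remains.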
I've used lettuce for Python in the past. It is a simple BDD framework where specs are written in an external plain-text file. The implementation uses regexes to identify each step, providing reusable code for each sentence in the specification.
Using Scala, with either specs2 or ScalaTest, I'm being forced to write the specification alongside the implementation, making it impossible to reuse the implementation in another test (sure, we could implement it in a function somewhere) and making it impossible to separate the test implementation from the specification itself (something I used to do, providing acceptance tests to clients for validation).
So, to conclude, my question: considering the importance of having tests validated by clients, is there a way in BDD frameworks for Scala to load the tests from an external file, raising an exception if a sentence in the test is not implemented yet and executing the test normally if all sentences have been implemented?
I've just discovered a Cucumber plugin for sbt. Tests would be implemented under test/scala and specifications would be kept in test/resources as plain text files. I'm just not sure how reliable the library is and whether it will be supported in the future.
Edit:
The above is a wrapper for the following project, which solves the problem perfectly and supports Scala.
https://github.com/cucumber/cucumber-jvm
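To illustrate the split, here is a rough sketch with cucumber-jvm's Scala DSL (the feature text lives as plain text under test/resources, the glue code under test/scala; the DSL's package name has varied between releases, and all the step names below are invented):

import cucumber.api.scala.{ScalaDsl, EN}
import org.junit.Assert.assertEquals

// Glue code matching a plain-text scenario kept in test/resources, e.g.:
//   Scenario: adding two numbers
//     Given the number 2
//     When I add 3
//     Then the result is 5
class ArithmeticSteps extends ScalaDsl with EN {
  private var current = 0

  Given("""^the number (\d+)$""") { (n: Int) =>
    current = n
  }

  When("""^I add (\d+)$""") { (n: Int) =>
    current += n
  }

  Then("""^the result is (\d+)$""") { (expected: Int) =>
    assertEquals(expected, current)
  }
}

A sentence in the feature file that has no matching step definition is reported as an undefined step when the suite runs, which is exactly the "not implemented yet" signal asked for in the question.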
This is all about trade-offs. The Cucumber style of specification is great because it is pure text, easily editable and readable by non-coders.
However, such specifications are also pretty rigid, because they impose a strict format based on features and Given-When-Then. In specs2, for example, we can write any text we want and annotate only the lines which are meant to be actions on the system or verifications. The drawback is that the text becomes annotated, and that pending must be specified explicitly to indicate what hasn't been implemented yet. Also, the annotation is just a reference to some code living somewhere, so you can of course use the usual programming techniques to get reusability.
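As a rough sketch of what that looks like (using the specs2 acceptance syntax of that era; the specification text and example names are invented):

import org.specs2._

// Free text, with only the example lines annotated with "!"; steps that are
// not implemented yet are marked pending so the report shows what is missing.
class LoginSpec extends Specification { def is =

  "A registered user logs into the application"           ^
    "with valid credentials the account page is shown"    ! validLogin^
    "with a wrong password an error message is shown"     ! wrongPassword^
                                                           end

  def validLogin    = 1 must_== 1   // stands in for the real verification
  def wrongPassword = pending
}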
BTW, the link above is an interesting example of this trade-off: in that file, the first spec is "uglier", but there are more compile-time checks that the When step uses the information from a Given step, or that we don't have a sequence of Then -> When steps. The second specification is nicer but also more error-prone.
Then there is the issue of maintaining the regular expressions. If there is a strict separation between the people writing the features and the people implementing them, then it's very easy to break the implementation even if nothing substantial changes.
Finally, there is the question of version control. Who owns the document? How can we be sure that the code is in sync with the spec? Who refactors the specification when required?
There is no perfect solution, by far. My own conclusion is that BDD artifacts should be in the hands of developers and verified by the other stakeholders, either by reading the code directly if it is readable or by reading an HTML/PDF output. And if the BDD artifacts are owned by developers, they might as well use their own tools to make their life easier with verification (using a compiler when possible) and maintenance (using automated refactorings).
You said yourself that it is easy to make the implementation reusable by the normal means Scala provides for this kind of stuff (methods, functions, traits, classes, types, ...), so there isn't really a problem there.
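For instance, the step implementations can live in an ordinary trait and be mixed into as many specifications as needed; a framework-agnostic sketch, here using ScalaTest's FunSuite (all names are invented):

import org.scalatest.FunSuite

// Reusable step implementations live in an ordinary trait...
trait AccountSteps {
  private var balance = 0

  def givenAnAccountWithBalance(amount: Int): Unit = { balance = amount }
  def whenIDeposit(amount: Int): Unit = { balance += amount }
  def thenTheBalanceIs(expected: Int): Unit = assert(balance == expected)
}

// ...and can be mixed into as many test classes as needed.
class DepositSpec extends FunSuite with AccountSteps {
  test("depositing increases the balance") {
    givenAnAccountWithBalance(100)
    whenIDeposit(50)
    thenTheBalanceIs(150)
  }
}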
If you want to give a version without code to your customer, you can still give them the code files, and if they can't ignore a little syntax, you could probably write a custom reporter that writes all the text out to a file, maybe even formatted as HTML or something.
Another option would be to use JBehave or any other JVM based framework, they should work with Scala without a problem.
Eric's main design criterion was the sustainability of executable-specification development (through refactoring), not the initial convenience offered by the "beauty" of plain text.
see http://etorreborre.github.io/specs2/
The features of specs2 are:
Concurrent execution of examples by default
ScalaCheck properties
Mocks with Mockito
Data tables
AutoExamples, where the source code is extracted to describe the example
A rich library of matchers
Easy to create and compose
Usable with must and should
Returning "functional" results or throwing exceptions
Reusable outside of specs2 (in JUnit tests for example)
Forms for writing Fitnesse-like specifications (with Markdown markup)
Html reporting to create documentation for acceptance tests, to create a User Guide
Snippets for documenting APIs with always up-to-date code
Integration with sbt and JUnit tools (maven, IDEs,...)
Specs2 is quite impressive in both design and implementation.
If you look closely, you will see that the DSL can be extended while keeping type safety and a strong command of the domain code under development.
Anyone who sets aside the "it is uglier" argument and tries this seriously will find real power.
Check out the structured Forms and Snippets.
Just starting to learn scala for a new project. Have got to the point where I would like to define different properties files for the different environments the app is going to run on, ideally in a similar way to Rails - very lightweight, just one different properties file per environment that is loaded based on its name. I don't really care if it's a java properties file, YML or scala code.
In the spirit of not reinventing the wheel, I've been looking to see if there is some accepted, standard Scala way of doing this, but I can't find one. I've found a few similar but not identical questions here where people suggest using system properties in the startup script, but that feels like it would end up being a nightmare.
I could obviously implement it if needs be but feels like the sort of thing that should already exist. So - does it?
I'm using sbt if that makes a difference.
I know of Configgy. Also, Akka/Play 2.0 will be using Config, which looks nice too. See blog about the latter.
Basically, Configgy has been used for a while now, but has been deprecated, while Config will be all-new. However, having Config as the default Typesafe Stack configuration tool will probably make it the preferred tool for that pretty fast.
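For a flavour of what this looks like with Typesafe Config, here is a minimal sketch of per-environment configuration (the app.env property name and the per-environment file names are my own invention, not anything the library mandates):

import com.typesafe.config.{Config, ConfigFactory}

object AppConfig {
  // Pick the environment, e.g. with -Dapp.env=prod on the command line.
  private val env: String = sys.props.getOrElse("app.env", "dev")

  // Loads dev.conf / prod.conf / ... from the classpath and falls back
  // to the shared defaults in application.conf.
  val config: Config =
    ConfigFactory.load(env).withFallback(ConfigFactory.load())
}

// Usage:
//   val dbUrl = AppConfig.config.getString("db.url")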
I have written a Configgy replacement called Configrity. It can use different input formats (like YAML), it is immutable, supports functional patterns, and uses type classes to automatically convert values to the desired type.
I have written BeeConfig, a replacement for java.util.Properties, except that it is a Scala API and uses UTF-8 encoded configuration files. It supports string interpolation, chaining and a bunch of other features, but its main objective is simplicity.
Bitbucket | Blog post
Rick
I have just finished the first version of a Java 6 compiler plugin, that automatically generates wrappers (proxy, adapter, delegate, call it what you like) based on an annotation.
Since I am doing mixed Java/Scala projects, I would like to be able to use the same annotation inside my Scala code, and get the same generated code (except of course in Scala). That basically means starting from scratch.
What I would like to do, and for which I haven't found an example yet, is to generate the code inside a Scala compiler plugin in the same way as in the Java compiler plugin. That is, I match/find where my annotation is used, get the AST for the annotated interface, and then ask the API to give me a Stream/Writer into which I output the generated Scala source code, using string manipulation.
That last part is what I could not find. So how do I tell the API to create a new Scala source file, and give me a Stream/Writer/File/Handle, so I can just write in it, and when I'm done, the Scala compiler compiles it, within the same run in which the plugin was invoked?
Why would I want to do that? Firstly, because then both plugins have the same structure, so maintenance is easy. Secondly, I want to open-source it, and there is just no way to support every option that anyone would want, so I expect potential users to want to extend the generation with their own code. This will be a lot easier for them if they just have to do some printf(), instead of learning the AST API (this also applies to me).
Short answer:
It can't be done
Long answer:
You could conceivably generate your source file and push that through a parser instance within your plugin. But not in any way that's likely to be of any use to you, because you'd now have a bigger problem to contend with:
In order to grab all the type/name information for generating the delegate/proxy, you'll have to pick up the annotated type's AST after it has run through both the namer and typer phases (which are inseparable). The catch is that any attempt to call your generated code will already have failed typechecking, the compiler will have thrown an error, and any further bets are off.
Method synthesis is possible in limited cases, so long as you can somehow fool the typechecker for just long enough to get your code generated, which is the trick I pulled with my Autoproxy 'lite' plugin. Even then, you're far better off working with TreeDSL to generate code instead of pumping out raw source.
Kevin is entirely correct, but just for completeness it's worth mentioning that there is another alternative - write a compiler plugin that generates source. This is the approach that I've adopted in Borachio. It's not a very satisfactory solution, but it can be made to work.
Edit - I just reread your question and realised that you're actually asking about generating source anyway
So there is no support for this directly, but it's basically just a question of opening a file and writing the relevant "print" statements. There's no way to invoke the compiler "inside" a plugin AFAIK, but I've written an sbt plugin which hides most of the complexity of invoking the compiler twice.
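For completeness, here is a rough sketch of wiring up such a two-pass build as an sbt source generator (sbt 0.13-style syntax; the file names and generated content are purely illustrative and not part of any of the plugins mentioned above):

// build.sbt -- write the generated wrapper source in a task and let sbt
// compile it together with the hand-written code.
sourceGenerators in Compile += Def.task {
  val out = (sourceManaged in Compile).value / "generated" / "Wrappers.scala"
  IO.write(out,
    """package generated
      |
      |object Wrappers {
      |  // generated delegate methods would go here
      |}
      |""".stripMargin)
  Seq(out)
}.taskValue

The real work of inspecting the annotated types would of course still need the compiler (or a separate analysis step) in the first pass; the snippet only shows how the second compilation pass can be left to sbt.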