Xtext - Operation - Eclipse

I have a question about Xtext. I know that Xtext creates an Ecore model for the DSL that is programmed in the .xtext file. Am I right that Xtext only creates EClass, EAttribute, EEnum and EReference elements in the Ecore model? Is there no way for a rule to produce an EOperation?

Xtext allows you to import an existing EPackage or to infer a new one from the grammar definition. Since EOperations are not relevant to the concrete syntax, there is nothing from which they could be inferred. If you want to use EOperations, I suggest switching to a manually maintained, imported package.

Adding to Sebastian's answer: if you still want to use an inferred model, you can use a model post-processor to adjust it. This is the easier route if you only want to adjust one or two things in the model, such as adding additional operations.
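Below is a minimal sketch of such a post-processor in Java, assuming Xtext's IXtext2EcorePostProcessor hook (how it gets registered differs between Xtext versions); the class name "Entity" and the operation "fullName" are made-up placeholders:

import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EClassifier;
import org.eclipse.emf.ecore.EOperation;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;
import org.eclipse.xtext.GeneratedMetamodel;
import org.eclipse.xtext.xtext.ecoreInference.IXtext2EcorePostProcessor;

public class AddOperationPostProcessor implements IXtext2EcorePostProcessor {
    @Override
    public void process(GeneratedMetamodel metamodel) {
        for (EClassifier classifier : metamodel.getEPackage().getEClassifiers()) {
            // "Entity" stands for an EClass inferred from one of your rules
            if (classifier instanceof EClass && "Entity".equals(classifier.getName())) {
                EOperation op = EcoreFactory.eINSTANCE.createEOperation();
                op.setName("fullName"); // hypothetical operation to add
                op.setEType(EcorePackage.Literals.ESTRING);
                ((EClass) classifier).getEOperations().add(op);
            }
        }
    }
}

Note that Ecore only stores the operation's signature; the implementation still has to be provided elsewhere, e.g. in the generated model code.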

Related

Domain-Specific Languages in Racket compared to Model-Driven frameworks such as Xtext

Racket and Xtext are both considered language workbenches, but they are based on different concepts and workflows.
As an experienced Xtext user, I find it difficult to adapt my thought process to Racket.
In Xtext, the grammar of a language is converted into, or mapped to, a set of classes (also called a metamodel).
Xtext also generates a parser that converts a source file into a set of instances of those classes. A scoping API makes it possible to resolve named references, so that the result is an object graph (also called a model) rather than an abstract syntax tree (AST).
Such a model can be queried, transformed, or fed into a template engine to generate code.
In Racket, the reader produces an AST in the form of a syntax object. However, most examples that I have found seem to make ad-hoc use of this syntax object: either they are toy languages that do not need a complete object graph, or they are so complex that it is difficult to infer a general methodology.
For my current language project, after struggling with syntax objects, I have created the equivalent of a metamodel using Racket structs. It was then fairly easy to convert a syntax object into an object graph that I could manipulate as if it were a model in EMF. However, I feel like I am not using syntax objects the way they are intended to be used.
Here are my questions:
What tools or APIs are available to work on syntax objects and achieve an ease of use similar to that of a model-driven framework?
Are there documents that describe a general language development methodology in Racket that could be applied to non-trivial languages?
Are there documents that explain the Racket way, compared to Xtext or any other model-driven language framework?
EDIT:
Based on the documentation for Metaprogramming helpers, syntax classes can be used to specify and compose syntax patterns, and attach attributes to their elements. They can achieve a similar purpose as the classes of a metamodel.
However, as far as I can see, syntax classes are not classes, and syntax objects are not linked to syntax classes in a class-instance relationship.
This has the following consequences:
Syntax classes do not support inheritance directly, but we can achieve a similar effect with ~or* and attribute declarations for subclasses.
Syntax classes do not come with accessors for their attributes: you have to call syntax-parse every time you want to read an attribute.
At this point, there are still two missing features that are not addressed in the documentation that I have found:
Traversing a syntax tree from child to parent: how can I get a reference to the syntax object enclosing a given syntax object?
Scoping: how can I define specific scoping rules for my language?

Automatically creating indirect class dependencies in Enterprise Architect

I have several internal logic dependencies in my source code. For example:
Class A accepts an object, and for that object to be valid in Class A it needs to implement particular interfaces such as InterfaceOne and InterfaceTwo.
I would like a way to represent the interface dependencies of Class A visually in Enterprise Architect. Right now I'm generating the base class by importing the source code, then manually creating the dependencies between the classes and interfaces.
In my source code these dependencies are all listed in a variable of the class:
$requiredDependencies = array('InterfaceOne', 'InterfaceTwo')...
Is there a way to programmatically parse this code, or does Enterprise Architect perhaps have a way to read comments (like doxygen) in which I could specify such a relation?
The Grammar Framework lets you generate in-EA parsers for custom languages, allowing you to reverse-engineer code in whatever language you choose. This is a pretty complex beast, but have a look in the help file under Extending UML Models -- MDG Technology SDK -- Grammar Framework.
If the language is already supported by EA, then that reverse-engineering process cannot be modified (other than what's available in the options), although you can of course write your own parser from scratch using the grammar framework.
If you want to do additional processing for a reverse-engineered class based on what's in its source file, then you can find the source file in Element.GenFile. You would then have to parse it yourself, of course.
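As a sketch of that do-it-yourself parsing step, the Java snippet below extracts the interface names from a $requiredDependencies declaration; the regular expression and the class name are illustrative only:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RequiredDependenciesParser {

    // Matches e.g.: $requiredDependencies = array('InterfaceOne', 'InterfaceTwo')
    private static final Pattern DEPS =
            Pattern.compile("\\$requiredDependencies\\s*=\\s*array\\(([^)]*)\\)");

    public static List<String> parse(String source) {
        List<String> names = new ArrayList<>();
        Matcher matcher = DEPS.matcher(source);
        if (matcher.find()) {
            for (String part : matcher.group(1).split(",")) {
                // Strip whitespace and the surrounding quotes from each entry
                names.add(part.trim().replaceAll("^['\"]|['\"]$", ""));
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        String source = new String(Files.readAllBytes(Paths.get(args[0])));
        System.out.println(parse(source)); // e.g. [InterfaceOne, InterfaceTwo]
    }
}

Each extracted name could then be turned into a Dependency connector through EA's automation interface (Element.Connectors.AddNew with type "Dependency"), after looking up the corresponding interface element in the repository.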

Xtext multiple file extensions

I'd like to define a language with different elements that shall be contained in different kinds of files, though linked (i.e. similarly to C++ with .cpp and .h files).
Is grammar mixin the right way to do that? If so how should I proceed?
Different elements in different file kinds sounds like a use case for Grammar Mixins. The base grammar should define the language concepts common to both languages, and the sub-languages would inherit from the base grammar.
Ideally, create a manually written Ecore metamodel and map the concepts to it (i.e. don't use 'generate').
Since 2.10, Xtext supports parser rule fragments. This means you can define certain reusable parts of rules with the 'fragment' keyword. See https://github.com/eclipse/xtext-core/blob/761ffeac7e62525be5a5473988d7f1d577298b67/org.eclipse.xtext.tests/src/org/eclipse/xtext/parser/fragments/FragmentTestLanguage.xtext.

How to create dialects with Xtext

The project I'm working on has a custom file format with a pre-defined structure. The structure is really simple and generic (and I cannot change it): it is composed of (nested) commands and typed properties.
Using this structure, several dialects have been created. The dialects are an "instantiation" of the generic grammar, and specify the name and the meaning of commands and the expected properties.
I created a model with EMF for one of these dialects, and I would like to reuse Xtext to easily create a professional text editor and be able to read and write my model in the correct format.
Now I have a choice. On one side, I can directly target the dialect and mix, in the same grammar, the concepts from the custom file structure and those from the dialect. On the other side, I can create a grammar describing the file structure and describe my dialect on top of it.
Which way should I follow? I think the latter is the best one, but how can I create a grammar describing those two layers?
Xtext allows extending existing languages: in the head of the grammar you can specify a parent grammar that gets inherited.
For an example, see the Domain model example from Xtext 2.0, which extends the Xbase language:
grammar org.eclipse.xtext.example.domainmodel.Domainmodel with org.eclipse.xtext.xbase.Xbase
Every grammar element can be replaced by new syntax, new validation can be added, etc. See the following blog post for further ideas: http://koehnlein.blogspot.com/2011/07/extending-xbase.html
You can use the same approach: create a base language, then extend it for your various dialects.

Project Lombok vs. Eclipse templates / code generation

Does Project Lombok offer any benefit compared to code templates / code generation in Eclipse? Are there any downsides (apart from including the .jar)?
One advantage of Lombok is that once you've annotated a class with, say, the @Data annotation, you never need to regenerate the code when you make changes. For example, if you add a new field, @Data would automatically include that field in the equals, hashCode and toString methods. You'd need to make that change manually when using Eclipse-generated methods. Some of the time you may prefer the manual control, but for most cases I expect not.
The advantage of Lombok is that the code isn't actually there - i.e. classes are much more readable and are not cluttered.
Advantages:
Very easy to use
Classes are much cleaner ('no boilerplate code'); especially
'struct'-like inner classes shrink to a bare minimum:
@Data
private class AttrValue {
    private String attribute;
    private MyType value;
}
This will create both getters and setters, a toString(), and correct hashCode() / equals() methods including both variables.
The variant with @Value creates an immutable structure (no setters, all fields final).
No need to generate/remove code when you change fields (getters, setters, toString, hashCode, equals)
No interference with hand-coded methods: just add your own specific setter to the class where needed; Lombok skips it and generates everything else (see the sketch after this list)
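To illustrate the last point, here is a small, hypothetical example (class and field names are made up): @Data generates everything except the setter that is already written by hand.

import lombok.Data;

@Data
public class Account {
    private String owner;
    private int balance;

    // Hand-written setter: Lombok detects it and generates only the rest
    // (getOwner, setOwner, getBalance, equals, hashCode, toString).
    public void setBalance(int balance) {
        if (balance < 0) {
            throw new IllegalArgumentException("balance must be >= 0");
        }
        this.balance = balance;
    }
}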
Disadvantages:
No name refactoring yet: renaming value above will not (yet) rename getValue() and setValue()
May slow down Eclipse slightly
toString output is not as nice as, for instance, ToStringBuilder from Apache Commons
Very few come to mind:
it is based on annotations, so it's no good for legacy projects still on pre-Java 5 (delombok can help). Actually, it requires the javac v1.6 compiler.
it still has limitations regarding multiple constructors
The dependency issue is not to be overlooked though, but you have excluded it from your question.
Eclipse EMF offers some very handy features which Lombok does not yet support:
Powerful notification mechanism to get informed about changes in your instances (see the adapter sketch below)
Generic API without Java reflection: access and modify instances without a strong reference to the type
Command and API based editing
Cross-references between models: create and load model trees, and EMF handles the loading by creating a proxy for the cross-reference. This saves memory and boosts performance in huge domain trees
And much more...
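As a small illustration of the notification mechanism mentioned above, the following sketch attaches an EContentAdapter to a model root so that every change anywhere in the containment tree is reported (the logging is illustrative):

import org.eclipse.emf.common.notify.Notification;
import org.eclipse.emf.ecore.util.EContentAdapter;

// An EContentAdapter receives a notification for every change anywhere
// below the object it is attached to.
public class ChangeLogger extends EContentAdapter {
    @Override
    public void notifyChanged(Notification notification) {
        super.notifyChanged(notification); // keeps the adapter attached to newly added children
        if (!notification.isTouch()) {
            System.out.println("Changed feature " + notification.getFeature()
                    + " on " + notification.getNotifier());
        }
    }
}

// Usage: root.eAdapters().add(new ChangeLogger());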