Is it maybe EMF or EMOF? Eclipse? Or something totally different or nothing at all...?
From the EMF page:
EMF - The core EMF framework includes a meta model (Ecore) for describing models and runtime support for the models including:
change notification,
persistence support with default XMI serialization,
and a very efficient reflective API for manipulating EMF objects generically.
So I guess Ecore stands for "EMF core metamodel" here.
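As an aside, to make the "reflective API for manipulating EMF objects generically" from the quote above concrete, here is a minimal Java sketch using dynamic EMF; the package, class and attribute names are purely illustrative:

```java
import org.eclipse.emf.ecore.*;

public class ReflectiveDemo {
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // Build a tiny metamodel at runtime: a package with a "Book" class
        // that has a single "title" attribute (names are made up).
        EPackage pkg = f.createEPackage();
        pkg.setName("library");
        pkg.setNsPrefix("lib");
        pkg.setNsURI("http://example.org/library");

        EClass book = f.createEClass();
        book.setName("Book");
        EAttribute title = f.createEAttribute();
        title.setName("title");
        title.setEType(EcorePackage.Literals.ESTRING);
        book.getEStructuralFeatures().add(title);
        pkg.getEClassifiers().add(book);

        // Instantiate and manipulate an object generically, without any generated code.
        EObject aBook = pkg.getEFactoryInstance().create(book);
        aBook.eSet(title, "Some title");        // reflective setter
        System.out.println(aBook.eGet(title));  // reflective getter
    }
}
```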
From this Eclipse help page:
For those of you that are familiar with OMG (Object Management Group) MOF (Meta Object Facility), you may be wondering how EMF relates to it.
Actually, EMF started out as an implementation of the MOF specification but evolved from there based on the experience we gained from implementing a large set of tools using it.
EMF can be thought of as a highly efficient Java implementation of a core subset of the MOF API.
However, to avoid any confusion, the MOF-like core meta model in EMF is called Ecore.
In the current proposal for MOF 2.0, a similar subset of the MOF model, which it calls EMOF (Essential MOF), is separated out. There are small, mostly naming differences between Ecore and EMOF; however, EMF can transparently read and write serializations of EMOF.
So "Essential" as the meaning of the "E" does have some grounding here.
I am building an editor for manipulating graphical elements; each element must represent a type of element in an API specification.
The elements of the API are basically classes and interfaces. They have certain usage constraints, e.g. element A can't be a child of element B, or can't be connected to element C, etc.
The editor should let you generate code according to what you have drawn, and the generated code must be an implementation of the API that corresponds to the drawing.
I know nothing (in practice) about model-driven architecture or about generating code from a graphical model to some implementation.
I don't want to mix the graphical model (containing graphical information such as sizes, coordinates, etc.) with the business model implementing the API specification.
I am using Eclipse GEF to build the editor.
Here are the problems I am facing:
Since the graphical model and the business model are separate, I was thinking of defining an EMF model; the editor would then be an editor for that EMF model. Is it possible to transform the model drawn in the editor, using the EMF model as the basic construct elements, into a corresponding implementation of the API specification?
I know that, since the graphical model and the business model are separate, I have to implement some sort of grammar specifying the usage constraints. Is ANTLR well suited for what I want to do (regarding code generation and the grammar), or should I go with Xtext?
Which Eclipse framework or tool would help me do what I want to do?
If you already have a graphical editor, then it seems mostly unnecessary to create a lower-level textual model format (e.g. using ANTLR/Xtext) to drive code generation, especially if your model is already in EMF.
There are various code generator technologies for EMF models (e.g. Acceleo or Xtend can be used for that); these generators take the EMF model as input and produce output code specific to the API you seem to be using. In both tools you assemble your output files by defining templates and then serialize the results of these templates. This serialization is automatic in Acceleo, while in Xtend you do it using a Java-like API.
If your model is not in EMF now, you can still use Xtend directly; however, I believe Acceleo will not be useful in that case.
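To give a flavour of what such a generator does under the hood, here is a hand-rolled Java sketch that walks an EMF model and emits a trivial skeleton per class; Acceleo and Xtend provide template syntax and tooling for exactly this, so the file name and the emitted text below are purely illustrative:

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.*;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class SimpleGenerator {
    public static void main(String[] args) throws Exception {
        // Standalone setup: initialize Ecore and register a resource factory.
        EcorePackage.eINSTANCE.eClass();
        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
          .put("ecore", new XMIResourceFactoryImpl());

        // "design.ecore" is an assumed file name for the editor's model.
        Resource res = rs.getResource(URI.createFileURI("design.ecore"), true);
        EPackage pkg = (EPackage) res.getContents().get(0);

        for (EClassifier c : pkg.getEClassifiers()) {
            if (!(c instanceof EClass)) continue;
            // A real generator would write files and target your API's
            // classes/interfaces; here we just print a naive skeleton.
            StringBuilder out = new StringBuilder();
            out.append("public class ").append(c.getName()).append(" {\n");
            for (EStructuralFeature feat : ((EClass) c).getEStructuralFeatures()) {
                out.append("    private ").append(feat.getEType().getName())
                   .append(" ").append(feat.getName()).append(";\n");
            }
            out.append("}\n");
            System.out.println(out);
        }
    }
}
```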
In the Eclipse Modeling Framework (EMF), there are ecore files to define a model. From this model, code (and other things) can be generated. This generation step is described by an "EMF Generator Model". Now my question is: why is this file called a "model" instead of a "configuration" or something like that? In my opinion, it does not model anything; it describes a generation step...
While the other answers are perfectly correct, there is one additional difference between a "model" and a "configuration". All EMF models (including this generator model) can be modified, transformed and so on by every already available EMF tool (because they all use the same meta model).
This is a huge difference compared to the situation that the configuration can only be read by another tool, if it knows the exact format of the configuration serialization.
So you can create a UML diagram of the generator model, you can use it in a model-based graphical editor, you can transform it using model-to-model transformation plugins, you can put it into EMFStore, ... without any of these tools having been prepared specifically for that model.
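To illustrate the point, a .genmodel file can be loaded and inspected like any other EMF model; a short Java sketch (the file name is assumed):

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.codegen.ecore.genmodel.GenModel;
import org.eclipse.emf.codegen.ecore.genmodel.GenModelPackage;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class GenModelAsModel {
    public static void main(String[] args) {
        // Standalone setup: register the generator meta-model and a resource factory.
        GenModelPackage.eINSTANCE.eClass();
        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
          .put("genmodel", new XMIResourceFactoryImpl());

        // "My.genmodel" is an assumed file name.
        Resource res = rs.getResource(URI.createFileURI("My.genmodel"), true);
        GenModel genModel = (GenModel) res.getContents().get(0);

        // The generator "configuration" is ordinary model data:
        System.out.println(genModel.getModelDirectory());
        genModel.getGenPackages().forEach(p -> System.out.println(p.getBasePackage()));
    }
}
```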
The current implementation of EMF was created with a bootstrapping approach.
At first, the models describing the data stored in ecore and genmodel files were written by hand. As soon as EMF was stable enough, these were modeled and generated with EMF itself.
This means the ecore and genmodel files are in every way EMF models.
This is similar to how many compilers for new programming languages are developed. The initial implementation has to be written in a second language, but as soon as the compiler is complete, you can use the new language to write a new implementation, add features, and then use the binaries of the previous version of the compiler to create the next.
From the creator of EMF, Ed Merks:
After all, EMF's generator model generates both the Ecore model and itself, so we're not actually in a position to delete our generated code. We need it to bootstrap the environment. It's a prickly problem.
http://ed-merks.blogspot.de/2008/10/hand-written-and-generated-code-never.html
Actually, the genmodel as well as the ecore files are, technically speaking, EMF models themselves. So it is not a surprise that it is called this way.
In fact, you have to understand that EMF allows you to describe any kind of structured information. So it can be used to describe your own semantics, as well as code generation configurations, or even itself (ecore).
I'm getting to grips with EMF and I'd like to check if a concept I have in my head is accurate.
I understand that one can create an EMF model in Eclipse and then use this to generate Java code.
I further understand that the model can be serialised to disk and then back again, but I don't understand the use of this.
Surely the model file itself can just be saved? Is there an obvious use case for serialization?
I think you're confusing two terms here: "meta-model" and "model".
An EMF model is in fact a meta-model: it is the description of a model that can hold data. An EMF model/meta model can be represented in many different formats. For EMF, we usually use either .ecore/.genmodel or .xcore files.
From the EMF model/meta-model you can generate Java code that represents the model and the operations on the model. Seen from a theoretical level, the EMF model and the Java code are equal as they represent the same information.
With the generated Java code you can instantiate objects to hold model data. These data can then be saved to disk in a number of different formats. EMF can automatically provide the code needed to serialise the data of a model to disk in XML and back. (Actually, there is no generated code involved - it is all based on the description of your model in the ...Factory class.) It is rather easy to implement other formats, such as JSON or database schemas.
An example:
Assume that you have used EMF to describe a model for a bike (wheels, handlebar, frame, saddle, etc). From the EMF model you can generate Java classes that can describe the same bikes in terms of objects and relations between these.
You can now instantiate a number of different bikes in the model by creating/constructing and connecting objects of the Java classes.
These bikes can then be serialised as XML and back, so you can save the bikes to disk.
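A minimal Java sketch of that save/load round trip; Bike, BikesFactory and BikesPackage stand in for classes generated from the hypothetical bike model, and the file name is made up:

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class BikeSerialization {
    public static void main(String[] args) throws Exception {
        // Standalone setup: register XMI support and the (hypothetical) generated package.
        Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap()
            .put("xmi", new XMIResourceFactoryImpl());
        BikesPackage.eINSTANCE.eClass();

        Bike bike = BikesFactory.eINSTANCE.createBike();

        // Save the model data to disk as XMI.
        ResourceSet rs = new ResourceSetImpl();
        Resource out = rs.createResource(URI.createFileURI("myBike.xmi"));
        out.getContents().add(bike);
        out.save(null);   // null = default save options

        // Load it back, e.g. in a later run of the application.
        Resource in = new ResourceSetImpl()
            .getResource(URI.createFileURI("myBike.xmi"), true);
        Bike loaded = (Bike) in.getContents().get(0);
        System.out.println(loaded);
    }
}
```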
With MDA (Model Driven Architecture), we actually talk about 4 levels of models:
M0 is normally the physical artifact. E.g. a bike or a bill on paper.
M1 is the representation of the physical artifact - this is the model
M2 is the description of the model - the meta-model - in this case an EMF-based model that describes the entities, relations and attributes of the model
M3 is the description of the description of the model - the meta-meta-model - which in fact can be represented in EMF as well. The information you find in the .ecore file and in the ...Package class are represented in M3 models as they describe M2 models.
The latter really only matters to those of us who teach MDA... In your normal work, you really only need to think of M0, M1 and M2...
Serialization refers to persisting content of your model instance (your data). You can serialize to XML, JSON, database, etc.
I have a question:
Within my modeling tool (Enterprise Architect) I have modeled a meta-model (UML based).
Now I want to transform the meta-model into Ecore. But I don't know how to do it.
Within Enterprise Architect I can export the Meta-Model to UML XMI. Does anyone know if it is possible to transform the generated XMI to Ecore XMI ?!
Thanks
Does anyone know if it is possible to transform the generated XMI to Ecore XMI ?!
Yes, it's possible - at least in outline. You can think of the problem in two parts:
What's the semantic mapping? In other words, how do you map concepts in the source XMI to concepts in the target eCore model?
How will you implement those mappings in practice?
Semantic Mapping
I'm assuming here your metamodel focuses on static structure. ECore doesn't support dynamic concepts outside of declaring EOperations. More on dynamics below if that's of relevance.
I don't know EA specifically, nor which version(s) of XMI it supports. However, it will be some variant of the core UML concepts: Class, Attribute, Operation, Association, AssociationEnd, etc.
eCore has a similar (if smaller) set of concepts: EClass, EAttribute, EDataType, EReference, EOperation, etc. There's a fairly strong correlation among the 'type' concepts; for example:
UML Class --> EClass
Attribute --> EAttribute
Operation --> EOperation
So the mapping there should be straightforward. Basically, create one instance of the ECore equivalent for each UML concept.
Relationships are a little less obvious but still doable. ECore doesn't support relationships directly; EReference is the only analogous concept. However it's pretty easy to synthesise associations, for example:
A one-way navigable UML association becomes a single EReference with min & max cardinality copied over
A UML bi-directional association becomes two EReferences, one in each direction. You should also set the eOpposite property on each, which effectively says the two EReferences are part of the same association (see the sketch after this list).
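A small Java sketch of the bi-directional case, built directly with the Ecore API (the class and reference names are made up):

```java
import org.eclipse.emf.ecore.*;

public class AssociationMapping {
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // Two EClasses standing in for the UML classes at either end of the association.
        EClass order = f.createEClass();
        order.setName("Order");
        EClass line = f.createEClass();
        line.setName("OrderLine");

        // One EReference per navigable direction, cardinalities copied from the UML ends.
        EReference lines = f.createEReference();
        lines.setName("lines");
        lines.setEType(line);
        lines.setLowerBound(0);
        lines.setUpperBound(-1);   // -1 means unbounded ("*")

        EReference owner = f.createEReference();
        owner.setName("order");
        owner.setEType(order);
        owner.setLowerBound(1);
        owner.setUpperBound(1);

        // Mark the two references as ends of the same association.
        lines.setEOpposite(owner);
        owner.setEOpposite(lines);

        order.getEStructuralFeatures().add(lines);
        line.getEStructuralFeatures().add(owner);
    }
}
```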
Hopefully that gives you the idea.
Implementation
Having defined your conceptual mapping there are lots of options on how to do it. All will generally follow the same basic model:
Parse Source --> Map Source Concepts to Target Concepts --> generate target text.
You could use XSLT (since it's just an XML-to-XML transformation). You could also use one of the many model-to-model (M2M) and/or model-to-text (M2T) toolkits available. See e.g. the Eclipse Modeling Project (M2M, M2T). You could also go directly from EA by reading the model through the EA API instead of generating and parsing XMI. Which you choose will depend on your environment, skill set, etc.
If you want to see what it could look like in practice, you might take a look at MagicDraw. It provides ECore export out of the box. (Note it's a paid-for tool - but eval is available).
It might also be worth asking Sparx directly: I'd be a bit surprised if there isn't some ECore export add-on/plugin available for EA.
hth.
Dynamics
If your model has dynamics (state models etc.) then you have more of a problem. ECore doesn't cover those concepts at all. It's possible to extend ECore and that might be an option - but it's potentially more work as the tools that work with ECore will be less likely to understand your extensions.
You can easily go from Ecore to UML, but the other way around is not really possible. There are a few plugins, but when you try to use them they do not work.
My application uses a model based on an XSD that has been converted to an Ecore model before generating the Java classes.
In a previous version, one of my team members modified an attribute in the .ecore metamodel that used to be generated. He changed the attribute name but not the Extended MetaData specifying the element name used for XML persistence.
<eStructuralFeatures xsi:type="ecore:EReference" name="javaDocsAndUserApi" upperBound="-1"
eType="#//JavaDocsAndUserApi" containment="true" resolveProxies="false">
<eAnnotations source="http:///org/eclipse/emf/ecore/util/ExtendedMetaData">
<details key="kind" value="element"/>
<details key="name" value="docsAndUserApi"/>
</eAnnotations>
</eStructuralFeatures>
So we have an attribute named javaDocsAndUserApi and a persisted element named docsAndUserApi; and of course, if I change the attribute in the XSD to be named javaDocsAndUserApi, the XSD-to-Ecore transformation will generate the metadata name javaDocsAndUserApi as well, which will break compatibility with previously persisted models.
I have looked at the XSD authoring guide for an ecore:... attribute that would let me specify, in the XSD, that the metadata name should stay docsAndUserApi during the XSD-to-Ecore transformation, but I did not find anything.
Does anybody have an idea to help me?
Thank you.
Dealing with evolving (meta-)models is not easy after all. It basically comes down to migrating data from one format (conforming to one Ecore model) into another (conforming to another Ecore model).
You can apply model transformation techniques like ATL and AMW. This allows you to connect (weave) two Ecore (meta-) models (m1 and m2) and automatically generate code that transforms data from format m1 to format m2 and vice versa. (See here for some very interesting research papers on this subject.)
A pragmatic approach might be to manually implement the model transformation using EMF. Since the changes between your models are simple, this shouldn't be too hard to implement.
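For illustration, a very rough Java sketch of such a hand-written migration using the reflective API. It assumes both the old and the new generated packages are available (OldPackage/NewPackage and the file names are made up), that class names did not change, and it only copies top-level attributes; containment children and cross references would need the same treatment recursively:

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.*;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class Migration {
    public static void main(String[] args) throws Exception {
        // Hypothetical generated packages for the two metamodel versions.
        OldPackage.eINSTANCE.eClass();
        NewPackage.eINSTANCE.eClass();

        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
          .put("xmi", new XMIResourceFactoryImpl());
        Resource oldRes = rs.getResource(URI.createFileURI("data-old.xmi"), true);
        Resource newRes = rs.createResource(URI.createFileURI("data-new.xmi"));

        for (EObject oldObj : oldRes.getContents()) {
            // Look up the corresponding class in the new metamodel by name.
            EClass newClass = (EClass) NewPackage.eINSTANCE
                .getEClassifier(oldObj.eClass().getName());
            EObject newObj = EcoreUtil.create(newClass);

            for (EStructuralFeature oldFeat : oldObj.eClass().getEAllStructuralFeatures()) {
                if (!(oldFeat instanceof EAttribute) || !oldObj.eIsSet(oldFeat)) {
                    continue;   // only simple attributes in this sketch
                }
                // Map the renamed feature explicitly, copy everything else by name.
                String newName = "docsAndUserApi".equals(oldFeat.getName())
                        ? "javaDocsAndUserApi" : oldFeat.getName();
                EStructuralFeature newFeat = newClass.getEStructuralFeature(newName);
                if (newFeat != null) {
                    newObj.eSet(newFeat, oldObj.eGet(oldFeat));
                }
            }
            newRes.getContents().add(newObj);
        }
        newRes.save(null);   // null = default save options
    }
}
```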