I'm getting to grips with EMF and I'd like to check if a concept I have in my head is accurate.
I understand that one can create an EMF model in Eclipse and then use this to generate Java code.
I further understand that the model can be serialised to disk and then back again, but I don't understand the use of this.
Surely the model file itself can just be saved? Is there an obvious use case for serialization?
I think you're confusing two terms here: "meta-model" and "model".
An EMF model is in fact a meta-model: it is the description of a model that can hold data. An EMF model/meta model can be represented in many different formats. For EMF, we usually use either .ecore/.genmodel or .xcore files.
From the EMF model/meta-model you can generate Java code that represents the model and the operations on it. Seen at a theoretical level, the EMF model and the Java code are equivalent, as they represent the same information.
With the generated Java code you can instantiate objects to hold model data. This data can then be saved to disk in a number of different formats. EMF can automatically provide the code needed to serialise the data of a model to XML on disk and back. (Actually, no generated code is involved - it is all based on the description of your model in the ...Factory class.) It is rather easy to implement other formats, such as JSON or database schemas.
An example:
Assume that you have used EMF to describe a model for a bike (wheels, handlebar, frame, saddle, etc.). From the EMF model you can generate Java classes that can describe the same bikes in terms of objects and relations between them.
You can now instantiate a number of different bikes in the model by creating/constructing and connecting the objects of the Java classes.
These bikes can then be serialised as XML and back, so you can save the bikes to disk.
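To make this concrete, here is a minimal sketch of saving and loading such instances with EMF's XMI support, assuming hypothetical generated classes Bike and BikeFactory from the bike model above:

import java.util.Collections;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class BikeSerializationDemo {
    public static void main(String[] args) throws Exception {
        // Register XMI serialization for the "xmi" file extension.
        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("xmi", new XMIResourceFactoryImpl());

        // Instantiate a bike via the (hypothetical) generated factory...
        Bike bike = BikeFactory.eINSTANCE.createBike();

        // ...and serialize it to disk as XML (XMI).
        Resource resource = resourceSet.createResource(URI.createFileURI("my.xmi"));
        resource.getContents().add(bike);
        resource.save(Collections.emptyMap());

        // Loading works the same way in reverse.
        Resource loaded = resourceSet.getResource(URI.createFileURI("my.xmi"), true);
        Bike restored = (Bike) loaded.getContents().get(0);
    }
}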
With MDA (Model Driven Architecture), we actually talk about 4 levels of models:
M0 is normally the physical artifact, e.g. a bike or a bill on paper.
M1 is the representation of the physical artifact - this is the model.
M2 is the description of the model - the meta-model - in this case an EMF-based model that describes the entities, relations and attributes of the model.
M3 is the description of the description of the model - the meta-meta-model - which in fact can be represented in EMF as well. The information you find in the .ecore file and in the ...Package class is represented in M3 models, as they describe M2 models.
The latter really only matters to those of us who teach MDA... In your normal work, you really only need to think about M0, M1 and M2...
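To make the levels a bit more tangible, here is a hedged sketch (reusing the hypothetical bike model from the earlier example) of how a generated ...Package class exposes the M2 description at runtime, and how Ecore describes itself one level up:

import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.EcorePackage;

public class MetaLevelsDemo {
    public static void main(String[] args) {
        // M2: the (hypothetical) generated BikePackage describes the Bike class.
        EClass bikeClass = BikePackage.eINSTANCE.getBike();
        for (EStructuralFeature feature : bikeClass.getEAllStructuralFeatures()) {
            System.out.println(feature.getName() + " : " + feature.getEType().getName());
        }

        // M3: Ecore describes itself - the EClass that describes EClass.
        EClass metaMeta = EcorePackage.eINSTANCE.getEClass();
        System.out.println(metaMeta.getName()); // prints "EClass"
    }
}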
Serialization refers to persisting the content of your model instance (your data). You can serialize to XML, JSON, a database, etc.
I'm aware that copying entity classes and properties into DTOs is considered an anti-pattern, so by the Exposed Domain Model pattern the same @Entity can be used both as a database entity class and as a DTO for the service and MVC layers (see https://codereview.stackexchange.com/questions/93511/data-transfer-objects-vs-entities-in-java-rest-server-application).
But suppose we have a microservice architecture where the same set of properties is used as an entity in one project with persistence, and as a DTO in another project that uses the first one as a service. What's the proposed pattern in such a situation?
Because the second project doesn't need the @Entity-related functionality, and if we put that class in a shared library, it will be unnecessarily tied to JPA-specific APIs and libraries. The alternative is to fall back on the separate-DTO-classes anti-pattern again.
When your requirements for a DTO model exactly match your entity model, you are either in a very early stage of the project or very lucky to have such a simple model. If your model is very simple, DTOs won't give you many immediate benefits.
At some point, though, the requirements for the DTO model and the entity model will diverge. Imagine you add some audit aspects, statistics or denormalization to your entity/persistence model. That kind of data is usually never exposed via DTOs directly, so you will need to split the models. It is also often the case that the main driver for DTOs is the fact that you don't need all the data all the time. If you display objects in e.g. a dropdown, you only need a label and the object id, so why would you load the whole entity state for such a use case?
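As a hedged illustration of that last point (the Customer entity and all other names here are made up), a JPA constructor expression can load just the two fields a dropdown needs:

package com.example;

import java.util.List;
import javax.persistence.EntityManager;

// Narrow DTO holding only what the dropdown needs.
public class CustomerOption {
    private final Long id;
    private final String label;

    public CustomerOption(Long id, String label) {
        this.id = id;
        this.label = label;
    }

    // The constructor expression selects two columns instead of the whole
    // (assumed) Customer entity state.
    public static List<CustomerOption> loadOptions(EntityManager em) {
        return em.createQuery(
                "select new com.example.CustomerOption(c.id, c.name) from Customer c",
                CustomerOption.class)
            .getResultList();
    }
}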
The fact that you have annotations on your DTO models shouldn't bother you that much. What is the alternative? An XML-like mapping? Manual object wiring?
If your model is used by third parties directly, you could use subclassing, i.e. keep the main model free of annotations and have annotated subclasses in your project that extend the main model.
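A hedged sketch of what that subclassing could look like, using Jackson annotations as an example (class names made up): the shared model stays clean, and the consuming project annotates overriding getters.

import com.fasterxml.jackson.annotation.JsonProperty;

// Shared/main model: completely free of framework annotations.
class Product {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Consuming project: the annotations live only on the subclass.
class JsonProduct extends Product {
    @Override
    @JsonProperty("product_name") // Jackson honors annotations on overriding methods
    public String getName() {
        return super.getName();
    }
}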
Since implementing a DTO approach correctly can be cumbersome, I created Blaze-Persistence Entity Views, which will not only simplify the way you define DTOs, but will also improve the performance of your queries.
If you are interested, I even have an example for an external model that uses entity view subclasses to keep the main model clean.
Thank you for the answers, but the emphasis in the question is on microservice (MS) architecture and reusing entity POJOs defined in one MS as POJOs in another. From what I've read on microservices, it's closely related to another question - should MSs share any common functionality and classes at all, or be completely independent? It seems there is no definite agreement on this, and also no definite answer or widely accepted pattern.
From my recent experience here is what I adopted, and it works well so far.
Have common functionality across MSs - yes, in the form of a commons project added as a dependency to all MSs, with its own dependencies marked as optional. Share entity classes (expose them in commons) - no.
The main reason is that entity classes are closely tied to the data store of a particular MS. And since the established rule is that MSs shouldn't share data stores, it makes sense not to share the entity classes for those data stores either. It keeps MSs more independent and gives them the freedom to manage their data stores in their own way. It does mean some more typing to add separate DTO classes and conversions between them, but it's a trade-off worth taking to retain MS independence. The reasons Christian Beikov and Maksim Gumerov mentioned apply as well.
What we do share (put in commons) is some common functionality and helper classes (for cloud, discovery, error handling, REST and JSON configuration...), and pure DTOs, where the T stands for transfer between MSs (REST entities or message payloads).
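A hedged sketch of how this splits up in practice (all names made up): the pure DTO lives in commons without any JPA imports, while the owning MS keeps its entity private and maps at the boundary.

// commons project: a pure DTO, no persistence dependencies anywhere.
public class OrderDto {
    public String id;
    public String status;
}

// order MS only: the entity never leaves the service that owns the data store.
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class OrderEntity {
    @Id
    private String id;
    private String status;

    // Conversion at the service boundary.
    public OrderDto toDto() {
        OrderDto dto = new OrderDto();
        dto.id = id;
        dto.status = status;
        return dto;
    }
}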
In the Eclipse Modeling Framework (EMF), there are .ecore files to define a model. From this model, code (and other things) can be generated. This generation step is described by an "EMF Generator Model". Now my question is: why is this file called a "model" instead of a "configuration" or something like that? In my opinion, it does not model anything, but describes a generation step...
While the other answers are perfectly correct, there is one additional difference between a "model" and a "configuration": all EMF models (including this generator model) can be modified, transformed and so on by every EMF tool already available, because they all share the same meta model.
This is a huge difference compared to a configuration, which another tool can only read if it knows the exact serialization format of that configuration.
So you can create a UML diagram of the generator model, use it in a model-based graphical editor, transform it using model-to-model transformation plugins, put it into EMFStore, ... without any of these tools having been prepared specifically for that model.
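For instance, here is a hedged sketch (the file path is made up) of generic EMF code loading a .genmodel exactly like any other model instance:

import org.eclipse.emf.codegen.ecore.genmodel.GenModel;
import org.eclipse.emf.codegen.ecore.genmodel.GenModelPackage;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class GenModelDemo {
    public static void main(String[] args) {
        GenModelPackage.eINSTANCE.eClass(); // ensure the genmodel metamodel is registered

        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("genmodel", new XMIResourceFactoryImpl());

        // A .genmodel is itself just an EMF model instance.
        Resource r = rs.getResource(URI.createFileURI("model/bike.genmodel"), true);
        GenModel genModel = (GenModel) r.getContents().get(0);
        System.out.println(genModel.getModelName());
    }
}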
The current implementation of EMF was created with a bootstrapping approach.
At first, the models that describe the data stored in .ecore and .genmodel files were written by hand. As soon as EMF was stable enough, these were modeled and generated with EMF itself.
This means the .ecore and .genmodel files are in every way EMF models.
This is similar to how many compilers for new programming languages are developed. The initial implementation has to be written in a second language, but as soon as the compiler is complete, you can use the new language to write a new implementation, add features, and then use the binaries of the previous version of the compiler to create the next.
From the creator of EMF, Ed Merks:
After all, EMF's generator model generates both the Ecore model and itself, so we're not actually in a position to delete our generated code. We need it to bootstrap the environment. It's a prickly problem.
http://ed-merks.blogspot.de/2008/10/hand-written-and-generated-code-never.html
Actually, the genmodel as well as the ecore files are also EMF models, technically speaking, so it is no surprise that the file is called this way.
In fact, you have to understand that EMF allows you to describe any kind of structured information. So it can be used to describe your own semantics, as well as code generation configurations, or even to describe itself (Ecore).
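As a hedged sketch of that last point (package and class names made up): a structure can be described with Ecore at runtime and instantiated reflectively, without any generated code.

import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class DynamicEmfDemo {
    public static void main(String[] args) {
        // Describe a structure in Ecore at runtime - no generated code involved.
        EPackage pkg = EcoreFactory.eINSTANCE.createEPackage();
        pkg.setName("shop");
        pkg.setNsURI("http://example.org/shop");

        EClass item = EcoreFactory.eINSTANCE.createEClass();
        item.setName("Item");

        EAttribute label = EcoreFactory.eINSTANCE.createEAttribute();
        label.setName("label");
        label.setEType(EcorePackage.Literals.ESTRING);
        item.getEStructuralFeatures().add(label);
        pkg.getEClassifiers().add(item);

        // Instantiate and use the freshly described model reflectively.
        EObject anItem = pkg.getEFactoryInstance().create(item);
        anItem.eSet(label, "a value");
        System.out.println(anItem.eGet(label));
    }
}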
I reluctantly adopted a data mapper approach for my current project, but I've been left a little confused.
Business Model
A worker has a name, a set of working hours, and a cost.
Data Model
A worker is made up of a labour type and a working pattern (three different tables). The cost of the worker is calculated based on a combination of the labour type and the working pattern.
My problem...
I seem to have two different models: one that represents business logic and one that represents data structure. I was of the understanding that my model should represent the business logic, but what happens when I want to insert a new worker? This is done using a form with two drop-downs, the working pattern & the cost, whose IDs are not needed by the business model.
Confused? I am.
There is no real support for data models in the Zend Framework, but weierophinney does a really good job of showing how they could be implemented. Another very good description is this one.
Usually a model represents the data and includes the logic. The data model is a backend-independent way to write/get data. For the model and the application, it doesn't matter where the data comes from. Thus the data storage can be exchanged without having to touch anything else.
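The pattern itself is language-agnostic; here is a minimal sketch in Java (names made up) where the application depends only on a mapper interface, so the storage backend can be swapped without touching anything else:

import java.util.HashMap;
import java.util.Map;

// The domain object knows nothing about how it is stored.
class Worker {
    int id;
    String name;
}

// The rest of the application talks to this interface only.
interface WorkerMapper {
    Worker find(int id);
    void save(Worker worker);
}

// One interchangeable backend; a database-backed mapper would implement
// the same interface without callers noticing the difference.
class InMemoryWorkerMapper implements WorkerMapper {
    private final Map<Integer, Worker> store = new HashMap<>();

    public Worker find(int id) { return store.get(id); }
    public void save(Worker worker) { store.put(worker.id, worker); }
}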
The default data modelling in Zend Framework (Zend_Db_Table) probably isn't the best choice for object-oriented data modelling.
Try using an ORM like Doctrine (http://www.doctrine-project.org/); it allows you to create a domain object model and store it in the database almost transparently. This way you can have the business model and the data model combined in single classes.
Is it maybe EMF or EMOF? Eclipse? Or something totally different or nothing at all...?
From the EMF page:
EMF - The core EMF framework includes a meta model (Ecore) for describing models and runtime support for the models including:
change notification,
persistence support with default XMI serialization,
and a very efficient reflective API for manipulating EMF objects generically.
So I guess Ecore stands for "EMF core metamodel" here.
From this Eclipse help page:
For those of you that are familiar with OMG (Object Management Group) MOF (Meta Object Facility), you may be wondering how EMF relates to it.
Actually, EMF started out as an implementation of the MOF specification but evolved from there based on the experience we gained from implementing a large set of tools using it.
EMF can be thought of as a highly efficient Java implementation of a core subset of the MOF API.
However, to avoid any confusion, the MOF-like core meta model in EMF is called Ecore.
In the current proposal for MOF 2.0, a similar subset of the MOF model, which it calls EMOF (Essential MOF), is separated out. There are small, mostly naming differences between Ecore and EMOF; however, EMF can transparently read and write serializations of EMOF.
So the "Essential" for "E" does have some ground here.
My application is using a model based on an XSD that has been converted to an Ecore model before the Java classes were generated.
One of my team members modified the .ecore metamodel in a previous version, changing one attribute that used to be generated. He modified the attribute name but not the Extended MetaData that specifies the element name used for XML persistence.
<eStructuralFeatures xsi:type="ecore:EReference" name="javaDocsAndUserApi" upperBound="-1"
    eType="#//JavaDocsAndUserApi" containment="true" resolveProxies="false">
  <eAnnotations source="http:///org/eclipse/emf/ecore/util/ExtendedMetaData">
    <details key="kind" value="element"/>
    <details key="name" value="docsAndUserApi"/>
  </eAnnotations>
</eStructuralFeatures>
So we have an attribute named javaDocsAndUserApi and a persisted element named docsAndUserApi. And of course, if I change the attribute in the XSD to be named javaDocsAndUserApi, the Ecore transformation will generate a metadata name of javaDocsAndUserApi as well, which will break compatibility with previously persisted models.
I have looked at the XSD authoring guide to find an ecore:some_attribute that would allow me to specify which key to use in the XSD, forcing the metadata name to stay docsAndUserApi during the XSD-to-Ecore transformation, but did not find anything.
Does anybody have an idea to help me?
Thank you.
Dealing with evolving (meta-)models is not easy, after all. It basically comes down to migrating data from one format (conforming to one Ecore model) into another (conforming to another Ecore model).
You can apply model transformation techniques like ATL and AMW. These allow you to connect (weave) two Ecore (meta-)models (m1 and m2) and automatically generate code that transforms data from format m1 to format m2 and vice versa. (See here for some very interesting research papers on this subject.)
A pragmatic approach might be to implement the model transformation manually using EMF. Since the changes between your models are simple, this shouldn't be too hard to do.
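A hedged sketch of such a manual migration (the file names and the NewDocsFactory class are made up): load the data conforming to the old metamodel, copy the same-named attributes across, and save under the new one.

import java.util.Collections;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class MigrationDemo {
    public static void main(String[] args) throws Exception {
        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("xmi", new XMIResourceFactoryImpl());

        // Load the data conforming to the old metamodel.
        Resource oldRes = rs.getResource(URI.createFileURI("data_old.xmi"), true);
        EObject oldRoot = oldRes.getContents().get(0);

        // Create the root of the new model (hypothetical factory) and copy
        // every attribute that still exists under the same name.
        EObject newRoot = NewDocsFactory.eINSTANCE.createDocumentRoot();
        for (EAttribute oldAttr : oldRoot.eClass().getEAllAttributes()) {
            EStructuralFeature newFeature =
                    newRoot.eClass().getEStructuralFeature(oldAttr.getName());
            if (newFeature != null) {
                newRoot.eSet(newFeature, oldRoot.eGet(oldAttr));
            }
        }

        // Save under the new metamodel.
        Resource newRes = rs.createResource(URI.createFileURI("data_new.xmi"));
        newRes.getContents().add(newRoot);
        newRes.save(Collections.emptyMap());
    }
}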