When I import an XMI file of a package hierarchy into a local model, it is imported successfully.
When I import the same XMI into a shared model (Oracle DB), all the sequences and messages in the sequence diagrams are deleted.
Any ideas?
That is a known issue (search for "version control" + "sequence diagram" in the EA forum).
Using instances instead of classifiers in sequence diagrams will solve that issue to some degree.
An XMI representation of a model contains information on the elements in the exported package, and their connectors. Structurally, however, the connectors are not stored within the package in EA's data model, so EA simply writes into the XMI file every connector that connects any of the elements to anything else, whether or not the element at the other end of the connector is in scope.
At the same time, a connector is by definition connected at both ends -- you cannot create a connector in EA that is connected to an element at only one end. This means that each connector is written to the XMI file with references to both of its elements.
If both elements are in the scope of the XMI export (in the same package tree), then all's well. But if only one of them is, EA is not able to recreate the connector on import -- only one element is present in the XMI file. When this occurs, EA will ignore the offending connector.
The exception to this is if the element which is missing from the XMI file happens to be in the model already. In that case, EA will recreate the connector. I think this might be what you're seeing in your "local" model.
With Enterprise Architect v13 I managed to get rid of the trouble.
Supposing that your sequence diagrams have Lifelines whose Instance Classifier is set to the Class/Component that you would like to use in your sequence:
1. Right-click on the package that you would like to export.
2. Go to Package Baseline and create a new baseline for the package.
3. Once the baseline is shown, choose "Export File" from the "Baseline" menu.
4. Save it as an XMI file.
5. Import the XMI file into the other EAP project.
I compared the XMI file generated from a baseline with an XMI file generated via the "Import/Export" option, and they differ: it seems that the XMI exported from a baseline contains the full information of the model (including root nodes), so I think the import process can resolve links to objects that are not held in the same package.
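Incidentally, the regular (non-baseline) XMI round trip can be scripted through the automation interface if you need to repeat the comparison; below is a minimal sketch using the Java API (org.sparx), where the file paths and package GUIDs are placeholders. The baseline export itself I have only done through the GUI as described above.

import org.sparx.EnumXMIType;
import org.sparx.Project;
import org.sparx.Repository;

public class XmiRoundTrip {
    public static void main(String[] args) {
        // All paths and GUIDs below are placeholders -- replace with your own.
        Repository source = new Repository();
        source.OpenFile("C:/models/source.eap");

        // Export the package subtree to an XMI file in EA's default XMI flavour.
        Project project = source.GetProjectInterface();
        project.ExportPackageXMI("{PACKAGE-GUID}", EnumXMIType.xmiEADefault,
                1 /* DiagramXML */, 1 /* DiagramImage */,
                1 /* FormatXML */, 0 /* UseDTD */,
                "C:/temp/export.xml");
        source.CloseFile();
        source.Exit();

        // Import the file under a parent package in the target project.
        Repository target = new Repository();
        target.OpenFile("C:/models/target.eap");
        target.GetProjectInterface().ImportPackageXMI(
                "{TARGET-PARENT-PACKAGE-GUID}", "C:/temp/export.xml",
                1 /* ImportDiagrams */, 0 /* StripGUID */);
        target.CloseFile();
        target.Exit();
    }
}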
I have a decision requirements diagram called MyDecision1.dmn for example. I want to then import that as a component in another dmn model, MyDecision2.dmn. MyDecision2.dmn should take the output of MyDecision1 and use it as an input. Am I able to do this in the jBPM workbench when editing dmn files?
I see that I can include models; however, instead of getting the entire DRD as a single asset, I only get the individual components, and the task of assembling them is left to me.
In later versions of the KIE BPMN (Process) modeler you can add DMNs as Activities and use their inputs and outputs.
I was able to define a custom programming language in Enterprise Architect with custom data types by navigating to Project > Settings > Code Engineering Datatypes.... When I create a MDG file, I have the option to include the programming language definition, and as far as I can tell, this is working - at least, in a new project that uses the MDG file, I can see the programming language.
Now I would like to have the same behavior for DBMS and database datatypes defined through Project > Settings > Database Datatypes.... From my tests, I get the impression that these types are not automatically included in the MDG file, and I haven't found a trivial way to include them. Is there a way to add the database datatypes to the MDG file as well? If not, is there a way to achieve the same result through the automation interface, e.g. by writing an add-in that creates the DBMS and the associated datatypes?
Going the MDG Technology way, the answer appears to be no. It's possible to trick EA (11) into exporting DB types in an MDG Technology, but even if they're in there, they will be ignored in projects that use the MDG Technology.
DB types and code engineering (or, sometimes, "programming language") datatypes are both stored in EA's t_datatypes table. The same product name can be used for both a programming language and a DB engine.
It looks like the MDG Technology Wizard scans this table looking for rows with "Code" in the Type column during the setup (Code Modules wizard page), but when the time comes to write the actual datatypes into the output file, it retrieves all rows with the specified ProductName.
This means that if you create a DB product and populate it with a set of datatypes, and then create a programming language product with the same name but just a single dummy datatype, your DB types will be included in the MDG Technology XML file along with the dummy type.
However, it appears that while the regular properties dialog (for classes etc) checks the loaded MDG Technologies in addition to the t_datatypes table in order to populate the Language drop-down list, the specialized properties dialog for database tables does not check the MDG Technologies when populating the corresponding Database drop-down. So even though the datatypes are in the file, you can't use them.
Going the Add-In route, the answer is yes.
Have your Add-In respond to the EA_FileOpen event and check the Repository.Datatypes collection to see if your DB types are installed and if not, add them.
You don't actually need to write an Add-In if you don't want to; you can write an in-EA script instead. The only thing an Add-In can do that a script can't is respond to events (which is why those are listed in the Add-In Model section of the help file), so with a script you would have to trigger the function manually.
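To illustrate, here is what that check-and-add logic could look like when run against the repository from outside EA, as a sketch using the Java automation API (org.sparx). The product name, the type names, and the use of "DDL" as the Type value for database datatypes are my assumptions based on how t_datatypes is populated; an Add-In would run the same logic from its EA_FileOpen handler.

import org.sparx.Collection;
import org.sparx.Datatype;
import org.sparx.Repository;

public class EnsureDbTypes {
    // Example product and datatype names -- substitute your own DBMS definition.
    static final String PRODUCT = "MyCustomDBMS";
    static final String[] TYPE_NAMES = { "MYINT", "MYVARCHAR", "MYBLOB" };

    public static void main(String[] args) {
        Repository repository = new Repository();
        repository.OpenFile("C:/models/project.eap"); // placeholder path

        Collection<Datatype> datatypes = repository.GetDatatypes();
        for (String typeName : TYPE_NAMES) {
            if (!exists(datatypes, typeName)) {
                // Assumption: "DDL" marks a database datatype in t_datatypes,
                // just as "Code" marks a programming language datatype.
                Datatype datatype = datatypes.AddNew(typeName, "DDL");
                datatype.SetProduct(PRODUCT);
                datatype.Update();
            }
        }
        datatypes.Refresh();
        repository.CloseFile();
        repository.Exit();
    }

    static boolean exists(Collection<Datatype> datatypes, String name) {
        for (short i = 0; i < datatypes.GetCount(); i++) {
            Datatype datatype = datatypes.GetAt(i);
            if (PRODUCT.equals(datatype.GetProduct()) && name.equals(datatype.GetName())) {
                return true;
            }
        }
        return false;
    }
}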
There is also an API for managing a project's reference data, of which code/DB datatypes are one category, but it only gives you control over some of the categories (e.g. requirement types and constraint types), and the datatype category is not one of them.
I am learning to use Sparx Enterprise Architect for requirements management. I am wondering what the best practice (or any practice) is for describing the structure of a CSV file (i.e. header names and their types) that will be imported by the designed system.
Do you use tagged values, or simply the "Notes" area? Or maybe just link a sample CSV file -- but then, how do I include it in the printed documentation?
Thank you for your advice!
UML is, of course, not good at describing structures such as CSV formats. You can create classes with attributes "col_1", "col_2", etc., but the simplest way is probably to add a linked document to the requirement (right-click and select Create Linked Document).
The linked document can be included in a generated document quite easily. In the template editor, simply select the Linked Document node under the Element node.
These linked documents are stored inside the EA project. They are RTF, so they are very limited compared to other formats, but they support tables and other basic formatting, which is usually enough for requirement details.
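If you have many requirements to document, the linked document can also be attached programmatically. A minimal sketch using the Java automation API (org.sparx), where the file path and element GUID are placeholders:

import org.sparx.Element;
import org.sparx.Repository;

public class AttachCsvSpec {
    public static void main(String[] args) {
        Repository repository = new Repository();
        repository.OpenFile("C:/models/project.eap");      // placeholder path

        // Look up the requirement element by GUID (placeholder).
        Element requirement = repository.GetElementByGuid("{REQUIREMENT-GUID}");

        // Load an RTF file describing the CSV structure into the
        // element's linked document.
        boolean ok = requirement.LoadLinkedDocument("C:/specs/csv-structure.rtf");
        System.out.println(ok ? "Linked document attached." : "Attach failed.");

        repository.CloseFile();
        repository.Exit();
    }
}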
In a project I work on, there is a C# library containing business objects that are related to the backing database tables/stored procedures.
We imported the code into an EA model (where we already have the database model), and now I'd like to show the dependency between a class and a table (or a stored procedure output).
Since these are loosely coupled (i.e. only a portion of the properties are shared between them), I'd like to have a relation between class A and table B, and to record the mapping (A.a <-> B.a, ...) in the properties of this relation.
Is this possible and how?
You can draw connectors between two elements and then link one or both ends to an element feature (an attribute or an operation). Draw the connector, then right-click near the end and select Link to Element Feature.
You can draw any number of connectors between two elements, and link any number of them to any features at either or both ends.
You should note that this is an EA feature which is not part of the UML standard. As such, it is also a little trickier to automate (the feature link is not documented in the API), but I've done it for a client before, so it can be done. However, from your question I assume it's the manual case you're interested in.
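For completeness, automating it could look roughly like the sketch below. Creating the connector uses the documented API; the feature link itself relies on an undocumented convention (the attribute GUIDs stored in the connector's StyleEx field as LFSP/LFEP values), which is my own observation rather than documented behaviour, so treat it as an assumption. All GUIDs and paths are placeholders.

import org.sparx.Connector;
import org.sparx.Element;
import org.sparx.Repository;

public class LinkFeatures {
    public static void main(String[] args) {
        Repository repository = new Repository();
        repository.OpenFile("C:/models/project.eap");            // placeholder

        Element classA = repository.GetElementByGuid("{CLASS-A-GUID}");
        Element tableB = repository.GetElementByGuid("{TABLE-B-GUID}");

        // Documented part: create a dependency from class A to table B.
        Connector mapping = classA.GetConnectors().AddNew("maps a", "Dependency");
        mapping.SetSupplierID(tableB.GetElementID());
        mapping.Update();

        // Undocumented part (assumption): EA appears to store the linked
        // feature GUIDs in StyleEx -- LFSP for the source end, LFEP for the
        // target end.
        mapping.SetStyleEx("LFSP={ATTR-A-GUID};LFEP={ATTR-B-GUID};");
        mapping.Update();

        repository.CloseFile();
        repository.Exit();
    }
}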
My application uses a model based on an XSD that was converted to an Ecore model before generation of the Java classes.
In a previous version, one of my team members modified the .ecore metamodel: he changed the name of a generated attribute, but not the Extended MetaData specifying the element name used for XML persistence:
<eStructuralFeatures xsi:type="ecore:EReference" name="javaDocsAndUserApi" upperBound="-1"
eType="#//JavaDocsAndUserApi" containment="true" resolveProxies="false">
<eAnnotations source="http:///org/eclipse/emf/ecore/util/ExtendedMetaData">
<details key="kind" value="element"/>
<details key="name" value="docsAndUserApi"/>
</eAnnotations>
</eStructuralFeatures>
So we have an attribute named javaDocsAndUserApi and a persisted element named docsAndUserApi. Of course, if I change the attribute in the XSD to be named javaDocsAndUserApi, the Ecore transformation will generate the metadata name javaDocsAndUserApi as well, which will break compatibility with previously persisted models.
I have looked through the XSD authoring guide for an ecore attribute that would allow me to specify which key to use in the XSD, so as to force the metadata name to be docsAndUserApi during the XSD-to-Ecore transformation, but I did not find anything.
Does anybody have an idea to help me?
Thank you.
Dealing with evolving (meta-)models is not easy, after all. It basically comes down to migrating data from one format (conforming to one Ecore model) into another (conforming to another Ecore model).
You can apply model transformation techniques like ATL and AMW. These allow you to connect (weave) two Ecore (meta-)models (m1 and m2) and automatically generate code that transforms data from format m1 to format m2 and vice versa. (See here for some very interesting research papers on this subject.)
A pragmatic approach might be to manually implement the model transformation using EMF. Since the changes between your models are simple, this shouldn't be too hard to implement.
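To give an idea, here is a minimal reflective sketch of such a hand-written migration. It assumes both generated metamodel versions are registered in the package registry; the nsURI, the file names, and the docsAndUserApi-to-javaDocsAndUserApi mapping are illustrative, and only attributes and containment children are handled (cross-references and enum literals would need extra care).

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class MigrateModel {

    public static void main(String[] args) throws IOException {
        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put(Resource.Factory.Registry.DEFAULT_EXTENSION, new XMIResourceFactoryImpl());
        // Both metamodel versions must be registered first, e.g.:
        // resourceSet.getPackageRegistry().put(OldPackage.eNS_URI, OldPackage.eINSTANCE);
        // resourceSet.getPackageRegistry().put(NewPackage.eNS_URI, NewPackage.eINSTANCE);
        EPackage newPackage = resourceSet.getPackageRegistry()
                .getEPackage("http://example.com/model/new"); // placeholder nsURI

        Resource oldModel = resourceSet.getResource(URI.createFileURI("model.old.xmi"), true);
        Resource newModel = resourceSet.createResource(URI.createFileURI("model.new.xmi"));
        for (EObject root : oldModel.getContents()) {
            newModel.getContents().add(migrate(root, newPackage));
        }
        newModel.save(null);
    }

    // Rebuilds each object in the new metamodel, matching features by name
    // and mapping the renamed attribute explicitly.
    static EObject migrate(EObject oldObject, EPackage newPackage) {
        EClass newClass = (EClass) newPackage.getEClassifier(oldObject.eClass().getName());
        EObject newObject = EcoreUtil.create(newClass);
        for (EStructuralFeature oldFeature : oldObject.eClass().getEAllStructuralFeatures()) {
            if (!oldObject.eIsSet(oldFeature)) continue;
            String name = oldFeature.getName();
            if (name.equals("docsAndUserApi")) name = "javaDocsAndUserApi"; // the rename
            EStructuralFeature newFeature = newClass.getEStructuralFeature(name);
            if (newFeature == null || !newFeature.isChangeable()) continue;
            if (oldFeature instanceof EReference && ((EReference) oldFeature).isContainment()) {
                if (oldFeature.isMany()) {
                    List<EObject> children = new ArrayList<>();
                    for (Object child : (List<?>) oldObject.eGet(oldFeature)) {
                        children.add(migrate((EObject) child, newPackage));
                    }
                    newObject.eSet(newFeature, children);
                } else {
                    newObject.eSet(newFeature, migrate((EObject) oldObject.eGet(oldFeature), newPackage));
                }
            } else if (!(oldFeature instanceof EReference)) {
                newObject.eSet(newFeature, oldObject.eGet(oldFeature)); // plain attribute value
            }
        }
        return newObject;
    }
}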