Loading an XMI representation for MOF into EMF - Eclipse

I'm working with Enterprise Architect. There, it's possible to export your model as MOF 1.4/XMI 1.2 to a file.
<?xml version="1.0" encoding="windows-1252"?>
<XMI xmi.version="1.2" xmlns:Model="org.omg.xmi.namespace.Model" timestamp="2012-03-19 16:16:33">
<XMI.header>
<XMI.documentation>
<XMI.exporter>Enterprise Architect</XMI.exporter>
<XMI.exporterVersion>5.1</XMI.exporterVersion>
</XMI.documentation>
<XMI.metamodel xmi.name="org.omg.mof.Model" xmi.version="1.4"/>
</XMI.header>
<XMI.content>
<Model:Package name="MofModel" xmi.id="EAPK_E660ED7D_A77D_4721_B26B_E43EA754C0F1" isRoot="true" isLeaf="false" isAbstract="false" visibility="public_vis">
<Model:Namespace.contents>
<Model:Class name="Class2" xmi.id="EAID_425DBFFA_432F_4a43_B12B_DEF05643C5A3" isRoot="false" isLeaf="false" isAbstract="false" isSingleton="false" visibility="public_vis">
<Model:GeneralizableElement.supertypes>
<Model:Class xmi.idref="EAID_E6FA2BB0_D81C_4b6c_86EF_9781887F5C26"/>
</Model:GeneralizableElement.supertypes>
</Model:Class>
<Model:Package name="Package1" xmi.id="EAPK_F9D099B3_F646_4ca1_93CE_CBE09014C651" isRoot="true" isLeaf="false" isAbstract="false" visibility="public_vis">
<Model:Namespace.contents>
<Model:Class name="Class1" xmi.id="EAID_E6FA2BB0_D81C_4b6c_86EF_9781887F5C26" isRoot="false" isLeaf="false" isAbstract="false" isSingleton="false" visibility="public_vis"/>
</Model:Namespace.contents>
</Model:Package>
</Model:Namespace.contents>
</Model:Package>
</XMI.content>
<XMI.extensions xmi.extender="Enterprise Architect 2.5"/>
</XMI>
After exporting, I want to load the file into Eclipse EMF, but so far I have found no way to do this. EMF supports the XMI 2.0 standard, not 1.2.
Is there a way to convert the file to that version, or some other approach?

You can tell EA to export in a wide range of XMI versions and other formats.
In the export dialog, click the Publish button and you should see a list of a few dozen different XMI options.
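Once you have an XMI 2.x export, loading it in EMF is straightforward. A minimal sketch (the file name model.xmi is a placeholder, and it assumes the model's metamodel/EPackage is already registered in the package registry):

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class LoadXmi {
    public static void main(String[] args) {
        ResourceSet resourceSet = new ResourceSetImpl();
        // Register the XMI resource factory for *.xmi files
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("xmi", new XMIResourceFactoryImpl());
        // "model.xmi" is a placeholder for your exported file
        Resource resource = resourceSet.getResource(URI.createFileURI("model.xmi"), true);
        // Print the root objects of the loaded model
        resource.getContents().forEach(System.out::println);
    }
}
```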

Related

reltable is not working with DITA Open Toolkit 3.0

I have created the map below, but DITA Open Toolkit 3.0 gives me an error.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map id="ducks">
  <title>Ducks</title>
  <reltable>
    <relheader>
      <relcolspec type="concept"/>
      <relcolspec type="task"/>
      <relcolspec type="reference"/>
    </relheader>
    <relrow>
      <relcell><topicref href="c.dita"/></relcell>
      <relcell><topicref href="t.dita"/></relcell>
      <relcell><topicref href="r.dita"/></relcell>
    </relrow>
  </reltable>
</map>
The files c.dita (concept), t.dita (task), and r.dita (reference) all exist.
Is this the full map? If so, the problem is that you are missing the structure that generates the output: a reltable only defines relationships between topics, it does not itself pull them into the build. Add topicref elements to build the content and it should work.
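For example (a sketch using the same topics as above), a map where topicref elements pull the topics into the build and the reltable only defines their relationships might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map id="ducks">
  <title>Ducks</title>
  <topicref href="c.dita"/>
  <topicref href="t.dita"/>
  <topicref href="r.dita"/>
  <reltable>
    <relheader>
      <relcolspec type="concept"/>
      <relcolspec type="task"/>
      <relcolspec type="reference"/>
    </relheader>
    <relrow>
      <relcell><topicref href="c.dita"/></relcell>
      <relcell><topicref href="t.dita"/></relcell>
      <relcell><topicref href="r.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```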

Extracting data from an OWL file using Java, GWT, Eclipse

I have to display content from an OWL file (namely the class names) in my browser. I am using GWT and Eclipse to do so. Could someone tell me the following:
1) How do I integrate the OWL file with the Eclipse project?
2) How do I run queries from my Java project to extract class names from the OWL file?
3) Where can I get the Protégé API to include in my project?
You can store your .owl file anywhere inside your project, or at any other location on your hard drive; you just provide a path to it when you load or store it (see the code below).
Take a look at the OWLAPI; it allows you to load an existing ontology and retrieve all classes from it. Your code could look like this:
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

private static void loadAndPrintEntities() {
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    IRI documentIRI = IRI.create("file:///C:/folder/", "your_ontology.owl");
    try {
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(documentIRI);
        // Prints all axioms, not just classes
        ontology.axioms().forEach(a -> System.out.println(a));
        // To print only the class names:
        ontology.classesInSignature().forEach(c -> System.out.println(c.getIRI().getShortForm()));
    } catch (OWLOntologyCreationException e) {
        e.printStackTrace();
    }
}
Rather than trying to integrate the Protégé API into your project, I suggest you write a plugin for Protégé. There are some great examples that should get you started. Import this project into Eclipse, modify the content, build your plugin, and drop it into Protégé. That's it, you're ready to go!

UIMA RUTA - Sofa mapping -in Aggregate Pipeline

This is in regard to the question:
UIMA RUTA - how to do find & replace using regular expression and groups
I'm trying to set up sofa mappings as suggested. I have an aggregate AE with several AEs and am trying to incorporate 2 RUTA AEs/scripts within this pipeline. Both RUTA AEs (and associated scripts) are responsible for regexp find and replace using a Modifier. The 2nd AE is dependent on the output of the first AE. I had to configure the modifier's outputView for the 2nd AE, otherwise I was getting a 'Sofa data already set' exception.
In essence, I'm unable to weave the output of one AE into the input of the other.
The setup I have is similar to below,
_initialview --Input> (Normalizer1 RUTA AE) --Output> norm_1_out
norm_1_out --Input> (Normalizer2 RUTA AE) --Output> norm_2_out
norm_2_out --Input> (Other AE)
Here's the Aggregate AE code
<?xml version="1.0" encoding="UTF-8"?>
<analysisEngineDescription xmlns="http://uima.apache.org/resourceSpecifier">
<frameworkImplementation>org.apache.uima.java</frameworkImplementation>
<primitive>false</primitive>
<delegateAnalysisEngineSpecifiers>
<delegateAnalysisEngine key="NormalizerPrepStep1">
<import location="../../../ruta-annotators/desc/NormalizeNumbersEngine.xml"/>
</delegateAnalysisEngine>
<delegateAnalysisEngine key="NormalizerPrepStep2">
<import location="../../../ruta-annotators/desc/NormalizeRangesEngine.xml"/>
</delegateAnalysisEngine>
<delegateAnalysisEngine key="Normalizer">
<import location="../../../ruta-annotators/desc/NormalizerEngine.xml"/>
</delegateAnalysisEngine>
<delegateAnalysisEngine key="SimpleAnnotator">
<import location="../../../textanalyzer/desc/analysis_engine/SimpleAnnotator.xml"/>
</delegateAnalysisEngine>
</delegateAnalysisEngineSpecifiers>
<analysisEngineMetaData>
<name>RUTAAggregatePlaintextProcessor</name>
<description>Runs the complete pipeline for annotating documents in plain text format.</description>
<version/>
<vendor/>
<configurationParameters searchStrategy="language_fallback">
<configurationParameter>
<name>SegmentID</name>
<description/>
<type>String</type>
<multiValued>false</multiValued>
<mandatory>false</mandatory>
<overrides>
<parameter>SimpleAnnotator/SegmentID</parameter>
</overrides>
</configurationParameter>
</configurationParameters>
<configurationParameterSettings/>
<flowConstraints>
<fixedFlow>
<node>NormalizerPrepStep1</node>
<node>NormalizerPrepStep2</node>
<node>Normalizer</node>
<node>SimpleAnnotator</node>
</fixedFlow>
</flowConstraints>
<typePriorities>
<name>Ordering</name>
<description>For subiterator</description>
<version>1.0</version>
<priorityList>
</priorityList>
</typePriorities>
<fsIndexCollection/>
<capabilities>
<capability>
<inputs/>
<outputs/>
<inputSofas>
<sofaName>norm_1_out</sofaName>
<sofaName>norm_2_out</sofaName>
<sofaName>normalized</sofaName>
</inputSofas>
<languagesSupported/>
</capability>
</capabilities>
<operationalProperties>
<modifiesCas>true</modifiesCas>
<multipleDeploymentAllowed>true</multipleDeploymentAllowed>
<outputsNewCASes>false</outputsNewCASes>
</operationalProperties>
</analysisEngineMetaData>
<resourceManagerConfiguration/>
<sofaMappings>
<sofaMapping>
<componentKey>SimpleAnnotator</componentKey>
<aggregateSofaName>normalized</aggregateSofaName>
</sofaMapping>
<sofaMapping>
<componentKey>NormalizerPrepStep2</componentKey>
<aggregateSofaName>norm_1_out</aggregateSofaName>
</sofaMapping>
<sofaMapping>
<componentKey>Normalizer</componentKey>
<aggregateSofaName>norm_2_out</aggregateSofaName>
</sofaMapping>
</sofaMappings>
</analysisEngineDescription>
A few things to note:
- all three RUTA AEs (step1, step2, normalizer) use a RUTA Modifier
- the above setup throws an exception "No sofaFS with name norm_2_out found." - this happens after step 2
- I have tried switching 'norm_2_out' to 'modified' as the input sofa to the normalizer; this seems to move processing to the next step in the pipeline (normalizer), but that throws an exception "Data for Sofa feature setLocalSofaData() has already been set." at org.apache.uima.ruta.engine.RutaModifier.process(RutaModifier.java:107)
- I have tried RUTA 2.2.0 (snapshot) with the same result
As I'm relatively new to both UIMA and RUTA, I'm not sure whether I'm doing something wrong or running into a limitation.
BTW, I'm using RUTA 2.1.0.
Thanks
The first thing I noticed in your example is that you have to specify output sofas in your AAE. Those are all sofas that are created in the AAE, e.g., by one of its components.
Then there are sofa mappings missing. You have to connect the output views of the AEs with the input views of the other AEs. In your example, I only see the default input views.
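As a sketch (using the names from the descriptor above; the component sofa name "modified" is an assumption based on the Ruta modifier's default output view and may differ in your configuration), declaring the created views as output sofas and wiring a delegate's output view to an aggregate sofa could look like:

```xml
<!-- inside <capability>: declare the views the AAE creates -->
<outputSofas>
  <sofaName>norm_1_out</sofaName>
  <sofaName>norm_2_out</sofaName>
</outputSofas>

<!-- inside <sofaMappings>: connect a delegate's output view to an aggregate sofa -->
<sofaMapping>
  <componentKey>NormalizerPrepStep1</componentKey>
  <componentSofaName>modified</componentSofaName>
  <aggregateSofaName>norm_1_out</aggregateSofaName>
</sofaMapping>
```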
I created a unit test, which can be applied as an example for this task.
The test is here: https://svn.apache.org/repos/asf/uima/ruta/trunk/ruta-core/src/test/java/org/apache/uima/ruta/engine/CascadedModifierTest.java
The resources (descriptors) used in the test are here: https://svn.apache.org/repos/asf/uima/ruta/trunk/ruta-core/src/test/resources/org/apache/uima/ruta/engine
Note that I deleted the absolute paths in the Ruta descriptors and adapted the namespace of the imported scripts; for the test they are now loaded from the classpath instead of via absolute paths.
The test calls an aggregate analysis engine AAE.xml, which imports and maps five analysis engines:
CWEngine.xml: simple Ruta script that replaces capitalized words. CW{->REPLACE("CW")}; CW.ruta
ModiferCW.xml: a normal modifier
SWEngine.xml: simple Ruta script that replaces small-written words. SW{->REPLACE("SW")}; SW.ruta
ModiferSW.xml: a normal modifier
SimpleEngine.xml: simple Ruta script that defines a new type and matches on "CW" followed by "SW". DECLARE CwSw; ("CW" "SW"){-> CwSw}; Simple.ruta
The aggregated analysis engine defines three views: global1 (input), global2 (output) and global3 (output). The sofa mapping of the components is the following:
global1 -> [CWEngine, ModiferCW] -> global2 -> [SWEngine, ModiferSW] -> global3-> [SimpleEngine]
Given the text "Peter is tired." in the view global1, the aggregate analysis engine creates two new views, with the view global3 containing the text "CW SW SW." and one annotation of the type Simple.CwSw.

Grizzly logs to stderr, annoying in Eclipse

When I run my JUnit Jersey service tests using the Grizzly framework in Eclipse, the log is directed to stderr. As a result the console window grabs focus, and the log appears red.
I can't figure out the proper configuration steps. From my reading it looks like I need to add the slf4j JAR to my pom.xml and add a logging properties file somewhere? But I'm unsure which slf4j JARs to add (there are many) or where to place the logging properties file.
Or, frankly, whether this is the right approach in general.
P.S. I am also aware that I can turn off the "show console when standard error changes" feature in Eclipse, but I'd rather not paint over the problem. :)
It doesn't look to me like Grizzly uses slf4j, but rather the "standard" java.util.logging framework. If that's the case, you can read about configuring it here: http://docs.oracle.com/javase/6/docs/technotes/guides/logging/overview.html#1.8
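If Grizzly does use java.util.logging, one standard approach is a properties file passed to the JVM with -Djava.util.logging.config.file=/path/to/logging.properties. Note that ConsoleHandler writes to System.err by default, so this controls verbosity rather than the output stream, and the Grizzly logger name below is an assumption; check the actual logger names in your log output. A sketch:

```properties
# Route everything through a single console handler
handlers=java.util.logging.ConsoleHandler
# Default level for all loggers
.level=INFO
# Raise the handler threshold to cut down console noise
java.util.logging.ConsoleHandler.level=WARNING
# Quiet a (hypothetical) Grizzly logger subtree
com.sun.grizzly.level=WARNING
```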
With Eric's help above I created this class:
package org.trebor.www;

import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Logger;

public class LoggerTrap
{
    public LoggerTrap()
    {
        // A ConsoleHandler that writes to System.out instead of the default System.err
        Handler handler = new ConsoleHandler()
        {
            {
                setOutputStream(System.out);
            }
        };
        // Attach the handler to the root logger
        Logger.getLogger("").addHandler(handler);
    }
}
and added this JVM arg:
-Djava.util.logging.config.class=org.trebor.www.LoggerTrap
and all java.util.logging output goes to stdout. In the process I've learned that I don't much like java.util.logging.

EventLogInstaller Full Setup with Categories?

It appears the MSDN docs are incomplete on how to create an event log together with a message definitions file. I am also lost on how to set up categories (I have custom numbers in the 3000s for my messages).
Can anyone point me to a link or show sample code on how to do this right?
You should start (if you haven't done so already) here:
EventLogInstaller Class (System.Diagnostics)
The sample provided there is the foundation for what you want to do. To sum it up: build a public class inheriting from System.Configuration.Install.Installer in an assembly (it could be the same DLL containing the rest of your application, a separate DLL, or an EXE file), decorate it with the RunInstaller attribute, and add your setup code in the constructor:
using System;
using System.Configuration.Install;
using System.Diagnostics;
using System.ComponentModel;

[RunInstaller(true)]
public class MyEventLogInstaller : Installer
{
    private EventLogInstaller myEventLogInstaller;

    public MyEventLogInstaller()
    {
        // Create an instance of an EventLogInstaller.
        myEventLogInstaller = new EventLogInstaller();

        // Set the source name of the event log.
        myEventLogInstaller.Source = "NewLogSource";

        // Set the event log that the source writes entries to.
        myEventLogInstaller.Log = "MyNewLog";

        // For categories, point the source at a compiled resource file
        // (see the .mc file discussion below) and state how many categories it defines:
        // myEventLogInstaller.CategoryResourceFile = pathToCompiledResourceDll;
        // myEventLogInstaller.CategoryCount = 3;

        // Add myEventLogInstaller to the Installer collection.
        Installers.Add(myEventLogInstaller);
    }
}
When you have your assembly compiled, you may use the InstallUtil tool available through the Visual Studio Command Prompt to run the installer code.
Regarding the message definition file (which includes category definitions), the MSDN documentation for EventLogInstaller.MessageResourceFile mentions that you should create an .mc file, compile it, and add it as a resource to your assembly. Digging around, I found an excellent post which should guide you to the end, here:
C# with .NET - Event Logging (Wayback Machine)