I am writing a new command for my Eclipse RCP application that should perform one task if one part is active, and another task if another part is active (e.g. a copy command that copies files if the Project Explorer is active, or copies text if a text editor is active). I was thinking of having two handlers for one command (one defined in the fragment.e4xmi of one plug-in and the other handler in the fragment.e4xmi of another plug-in). Is this doable?
On this page http://www.vogella.com/tutorials/EclipseRCP/article.html#importantmodelelement_examples it says that:
Each command can have only one valid handler for a given scope. The Eclipse framework selects the handler most specific to the model element.
For example, if you have two handlers for the "Copy" command, one for the window and another one for the part then the runtime selects the handlers closest to model element which is currently selected by the user.
Is it possible to have 2 handlers for one command in e4?
If you mean two handlers being called for one invocation of the command, the answer is no.
As the reference you quote says, the handler closest to the current model element is chosen.
For multiple handlers applying to different parts, put each handler in the Handlers list of the part you want it to apply to. This can be done in a fragment or in the main e4xmi file.
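As a rough sketch, the two handler classes could look like this (the class names are made up; each class would be registered in the Handlers list of its own part while pointing at the same command):

import org.eclipse.e4.core.di.annotations.CanExecute;
import org.eclipse.e4.core.di.annotations.Execute;

// CopyFilesHandler.java: registered in the Handlers list of the explorer-like part
public class CopyFilesHandler {
    @Execute
    public void execute() {
        // copy the currently selected files
    }

    @CanExecute
    public boolean canExecute() {
        return true; // e.g. only when at least one file is selected
    }
}

// CopyTextHandler.java: registered in the Handlers list of the text editor part
public class CopyTextHandler {
    @Execute
    public void execute() {
        // copy the currently selected text from the editor widget
    }
}

At runtime the framework picks whichever handler is registered on the currently active part, which gives exactly the copy-files-or-copy-text behaviour asked about.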
I am going through the tutorial for model simulation in Enterprise Architect. In the Dynamic Simulations section, under CreateObject, it says that the output action pin stores the created object, which can be passed via an object flow to another action. The attributes of the created object can then be accessed in the destination action (see the EA Help).
Breakpoint 1: A CreateObject action creates an object 'cb' as a local variable.
Breakpoint 2: The created object is passed to Action3 via an object flow between action pins; control is passed via a control flow.
However, as you can see, the object cb is nowhere to be seen in Action3. I would really appreciate it if someone could help me with this and explain how it works.
I tried a lot of things to do this, and somehow figured out a way to do part of it. So here is the answer.
Prerequisite: a class with a few attributes and an operation, along with the behavior of the operation to be performed. In the example below, the following is the behavior defined for the add operation:
this.x=x;
this.y=y;
return x+y;
Steps:
Create a CreateObject Action Element (Toolbar (space) > Action > Create Object).
In element properties (docked) set the Classifier. (Properties > Element tab > CreateObjectAction section > Classifier > {Class, Actor, Activity}).
The CreateObjectAction has a 'result' ActionPin. Drag this ActionPin from the browser onto the CreateObjectAction element in the diagram.
Set the type of the ActionPin (result) to the classifier, from the Element Properties of the ActionPin.
Add an object flow to a new action or to the destination action to which the object is being passed.
Follow step 4 for the new action pin generated.
Add a control flow from the CreateObject action to a destination action (which might be the same as or different from the action of Step 5).
Drag the Class to the activity diagram and create its instance with a unique name. (IT'S STRANGE THAT WE NEED TO DO THIS, BUT IT DOESN'T WORK WITHOUT IT.)
Finally, after completing the diagram, we can run the simulation. The newly created object can be seen in the Local window as a unique ID (UID).
This is the created object, which can be used to access the operations and attributes of the instance. To do this, copy this name and use it in the effect field of an action:
this.result = DFB9AEAD91_990D_4cec_AE54_93A8A3BC684F.add(7,7);
The output can then be seen in the Local window.
NOTE: The object "cb" is still nowhere to be seen. However, an object is definitely created with a unique ID, as shown in the code snippet above. The part of the question that still remains unanswered is the role of the action pins.
I have a DLL file containing a few different classes. Inside this DLL I've created a Windows Form.
After building the DLL project, I opened up PowerShell (I used the ISE for convenience) and executed the script below:
[reflection.assembly]::LoadFile("...\MyDLL.dll")
$NewForm = New-Object (AssemblyName).(Class Name of a class with a few subs, one of which will show the form I created)
$NewForm.ShowForm()
ShowForm is a simple sub that shows the Windows form by referencing the form's name and calling .Show().
When I execute this, and the form appears, the form hangs. It's almost as if the entire form is disabled. I can't interact with any controls on the form, nor can I close it by pressing the red X at the top.
I'd like to store a few forms that go with methods I've created in DLL files, so that if I decide to use a form to go with them in other applications, I won't have to rebuild the form.
(To anyone who's wondering why, then, I'm using PowerShell: I feel it's easier to test out functions / subs in a DLL from PowerShell first, since three lines in a command-line tool like PowerShell can easily call a sub / function from a DLL instead of creating / modifying / recompiling a .exe project to do the same.)
Any time I try to show a form from PowerShell using the .Show() method, it seems to fail in exactly this way. You might consider trying .ShowDialog() instead: it blocks the PowerShell session until the form is closed, but the form itself stays responsive and nothing hangs. I haven't researched the "why" behind this yet, but for what you've described, I'm assuming .ShowDialog() will work for you.
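For illustration, a minimal sketch of the idea (the path, namespace and class name are placeholders, and it assumes the form class itself is public so it can be instantiated directly):

[reflection.assembly]::LoadFile("C:\full\path\to\MyDLL.dll")   # LoadFile expects an absolute path
$form = New-Object MyNamespace.MyForm                          # hypothetical form class inside the DLL
$form.ShowDialog()                                             # modal: runs its own message loop and returns when the form is closed

Alternatively, change the ShowForm sub inside the DLL to call ShowDialog() instead of Show().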
I need to implement a custom ResultHandler but I am confused about how to actually integrate my custom class into the software package.
I have read this: http://elki.dbs.ifi.lmu.de/wiki/HowTo/InvokingELKIFromJava but my question is how are you meant to implement a custom result handler such that it shows up in the GUI?
The only way I can think of doing it is by extracting the elki.jar package and manually inserting my custom class into the source code, and then re-jarring the package. However I am fairly sure this is not the way it is meant to be done.
Also, in my ResultHandler I need to output all the rows to a single text file, with the cluster that each row belongs to displayed. Any tips on how I can achieve this?
There are two questions in here.
in order to make your class instantiable by the UIs (both MiniGUI and command line), the classes must implement our Parameterization API. There are essentially two choices to make your class instantiable:
Add a public constructor without parameters (the UI won't know how to set your parameters!)
Add an inner static class Parameterizer that handles parameterization
in order to add your class to autocompletion (dropdown menu), the classes must be discovered by the MiniGUI/CLI/other UIs. ELKI uses two methods of discovery:
for .jar files, it reads the META-INF/elki/interfacename service files. This is a classic service-loader approach; except that we also allow ordering instances.
for directories only, ELKI will also scan for all .class files, and inspect them. This is mostly meant for development time, to avoid having to update the service files all the time. For performance reasons, we do not inspect the contents of .jar files; these are expected to use service files.
You do not need your class to be in the dropdown menu - you can always type the full class name. If this does not work, adding the name to the service file will not help either, because then ELKI either cannot find the class at all, or cannot instantiate it.
There is also a tutorial on implementing a custom result handler, but it does not discuss how to add it to the menu. In "development mode" - when having a folder with .class files - it will show up automatically.
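For illustration, a rough sketch of what such a class could look like, assuming ELKI 0.6-style package names and the processNewResult signature of that version (adapt the imports and the signature to your release). If you package it as a .jar, you would additionally list the fully qualified class name in the matching META-INF/elki/<interface name> service file to get it into the dropdown.

import de.lmu.ifi.dbs.elki.result.HierarchicalResult;
import de.lmu.ifi.dbs.elki.result.Result;
import de.lmu.ifi.dbs.elki.result.ResultHandler;
import de.lmu.ifi.dbs.elki.utilities.optionhandling.AbstractParameterizer;

public class MyResultHandler implements ResultHandler {

    // Choice 1: a public constructor without parameters makes the class
    // instantiable, but the UI cannot pass any parameters to it.
    public MyResultHandler() {
        super();
    }

    @Override
    public void processNewResult(HierarchicalResult baseResult, Result newResult) {
        // Here you would locate the Clustering in the result tree and write each
        // row together with its cluster label to a single output file.
    }

    // Choice 2: an inner static Parameterizer, which the UIs use to configure
    // and instantiate the class.
    public static class Parameterizer extends AbstractParameterizer {
        @Override
        protected MyResultHandler makeInstance() {
            return new MyResultHandler();
        }
    }
}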
I am developing an Eclipse RCP application and have defined a custom context (org.eclipse.ui.contexts) for my editors. This context is activated whenever I invoke one of my editors.
Further, I have defined a single-key key binding (org.eclipse.ui.bindings) that I have scoped to this context which, when typed within the editor context, invokes a command/handler (I'll use the letter 'J' for this example).
Everything works as expected. When I launch/select one of my custom editors, the context is activated and 'J' executes my handler. When I launch/select a view part, my custom editor's context is deactivated and 'J' no longer executes the handler. However, when I click in a Text widget somewhere in the window trim area--let's say the Quick Access field--and type the letter 'J', the keystroke is consumed and executes my handler, behavior I do NOT want.
The reason is that selecting another workbench part has the effect of activating its context and deactivating the previous one. However, clicking anywhere else in the workbench window area (other than another part) does NOT deactivate the previous context. I am sure this is by design and is a perfectly reasonable approach. However, it prevents me from defining single-key key bindings.
Has anyone a) run into this problem before and b) if so, how did you solve it?
For now I am using a complete hack that involves using a global listener to disable the key binding service entirely on detecting entry into a Text widget, and re-enabling it on exit from the Text widget.
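For reference, the hack looks roughly like this (a sketch only; the class and method names are made up, but IBindingService.setKeyFilterEnabled() is the switch being toggled):

import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Text;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.keys.IBindingService;

public final class TextWidgetKeyFilterHack {

    // Call once, e.g. from the plug-in's startup code.
    public static void install() {
        final IBindingService bindings =
                (IBindingService) PlatformUI.getWorkbench().getService(IBindingService.class);
        Display display = Display.getDefault();
        // Turn the key binding filter off while any Text widget has focus ...
        display.addFilter(SWT.FocusIn, event -> {
            if (event.widget instanceof Text) {
                bindings.setKeyFilterEnabled(false);
            }
        });
        // ... and back on as soon as focus leaves it.
        display.addFilter(SWT.FocusOut, event -> {
            if (event.widget instanceof Text) {
                bindings.setKeyFilterEnabled(true);
            }
        });
    }
}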
In the <extension point="org.eclipse.ui.bindings"> extension point, define a binding for the key but do not specify the command ID; this will replace the existing key binding.
See the documentation for details.
commandId - The unique identifier of the command to which the key sequence specified by this key binding is assigned. If the value of this attribute is an empty string, the key sequence is assigned to an internal 'no operation' command. This is useful for 'undefining' key bindings in specific key configurations and contexts which may have been borrowed from their parents.
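For example, a sketch of such an 'undefining' binding (the scheme and context IDs shown are assumptions and must match the scope in which you want the key sequence neutralized):

<extension point="org.eclipse.ui.bindings">
   <!-- No commandId attribute: the sequence is bound to the internal
        'no operation' command within this scheme/context. -->
   <key
         sequence="J"
         schemeId="org.eclipse.ui.defaultAcceleratorConfiguration"
         contextId="org.eclipse.ui.contexts.dialogAndWindow">
   </key>
</extension>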
When using the objectContribution element (part of the org.eclipse.ui.popupMenus extension point), I often (practically always) want to delegate to an existing command instead of implementing an action myself, since I usually already have the command and a handler implemented. I'm currently doing this programmatically via ICommandService and IHandlerService, but it feels like there should be a way to achieve this declaratively, without that glue code. I could use viewerContribution instead of objectContribution, but then I would lose the easy way of showing the menu entry only when certain object types are selected. Ideally, I would like the enablement checks that already exist for my handlers to also apply to the menu entry defined by the objectContribution.
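Roughly, the programmatic delegation I am doing today looks like this (a sketch; the command ID and class name are placeholders, and only IHandlerService is actually needed here):

import org.eclipse.core.commands.ExecutionException;
import org.eclipse.core.commands.NotEnabledException;
import org.eclipse.core.commands.NotHandledException;
import org.eclipse.core.commands.common.NotDefinedException;
import org.eclipse.jface.action.IAction;
import org.eclipse.jface.viewers.ISelection;
import org.eclipse.ui.IObjectActionDelegate;
import org.eclipse.ui.IWorkbenchPart;
import org.eclipse.ui.handlers.IHandlerService;

public class DelegateToCommandAction implements IObjectActionDelegate {

    private IWorkbenchPart targetPart;

    @Override
    public void setActivePart(IAction action, IWorkbenchPart targetPart) {
        this.targetPart = targetPart;
    }

    @Override
    public void run(IAction action) {
        IHandlerService handlerService =
                (IHandlerService) targetPart.getSite().getService(IHandlerService.class);
        try {
            // Delegate to the already existing command/handler.
            handlerService.executeCommand("com.example.myCommand", null);
        } catch (ExecutionException | NotDefinedException
                | NotEnabledException | NotHandledException e) {
            // log or report the failure
        }
    }

    @Override
    public void selectionChanged(IAction action, ISelection selection) {
        // nothing to do here
    }
}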
OK, here's what I was missing: instead of using the org.eclipse.ui.popupMenus extension point, I had to use the org.eclipse.ui.menus extension point with a menuContribution whose locationURI attribute points to popup:org.eclipse.ui.popup.any?after=additions. This menuContribution can include a command element, which binds directly to an existing command. That command element's visibleWhen element can be tied to the enablement of the bound command's handler via the checkEnabled attribute, so the popup-menu entry is only visible when the handler's enablement is satisfied.
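In plugin.xml this looks roughly like the following (the command ID and label are placeholders):

<extension point="org.eclipse.ui.menus">
   <menuContribution locationURI="popup:org.eclipse.ui.popup.any?after=additions">
      <command commandId="com.example.myCommand" label="My Command" style="push">
         <!-- Show the entry only while the command's handler is enabled. -->
         <visibleWhen checkEnabled="true"/>
      </command>
   </menuContribution>
</extension>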
What's bad is that the documentation of the org.eclipse.ui.menus-extension point states that the org.eclipse.ui.popupMenus-extension point is to be considered deprecated, but the documentation of org.eclipse.ui.popupMenus does not mention this fact.