Run a model validation in Enterprise Architect from a batch script

I want to initiate a model validation via the menu item "Project > Model Validation > Validate Selected" (1) and via a batch script (2).
To realize (1) I followed the Sparx documentation [1], and it works fine.
But I can't find a suitable API method for starting a model validation for (2).
Does an equivalent function exist?
With kind regards
MK
[1] http://www.sparxsystems.com/enterprise_architect_user_guide/9.2/automation/validation.html

Yes, you're just looking in the wrong place.
What you're looking at there is the documentation for validation callbacks into an Add-In, which is how you implement your own validation rules. The methods to execute a validation are in the Project class.
The Project is a singleton, accessed from the Repository class, which is also a singleton and which is available in the context of an executing script.
Note, however, that there are no API calls to select which validation rules are applied; that can only be done via the GUI, which makes the whole exercise a little frustrating.
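For example, here is a minimal sketch in Java using the org.sparx bridge (eaapi.jar, which ships with EA); the same Project calls are available from VBScript or any other COM client. The model path and package GUID are placeholders, and ValidateElement/ValidateDiagram exist alongside ValidatePackage:

import org.sparx.Project;
import org.sparx.Repository;

// Minimal batch-validation sketch: open a model, fetch the Project
// singleton from the Repository singleton, and validate one package.
public class BatchValidation {
    public static void main(String[] args) {
        Repository repository = new Repository();
        try {
            repository.OpenFile("C:\\models\\MyModel.eap");  // placeholder path
            Project project = repository.GetProjectInterface();
            // Equivalent of "Validate Selected" for a single package
            boolean ok = project.ValidatePackage("{12345678-ABCD-ABCD-ABCD-123456789012}");
            System.out.println("Validation result: " + ok);
        } finally {
            repository.CloseFile();
            repository.Exit();  // releases the underlying COM instance
        }
    }
}

Run it with eaapi.jar on the classpath and the native bridge DLL from the EA installation directory on the java.library.path.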

Related

Symfony Workflow versus custom form validator

I started looking at the Symfony Workflow component. It offers great possibilities for guiding object lifecycle changes.
Workflows are object-related only, though. How can this be linked efficiently with form feedback? Without workflows, I would go for a custom validator where I can put the check logic (possibly stored in a service so it can be reused elsewhere) and then stop things if the criteria don't match what's requested.
Should I trigger workflow transitions within custom form validators? If not, which architecture should I use here?

Drools - EventListener

We are planning to use Drools/JBoss BRMS 6 for business rules management. Our plan is to write rules using the workbench, deploy the rules package in multiple Execution Servers and allow applications to access the Rules package by making calls to the REST API. We do not have any Java wrappers or custom classes in between the calling applications and the rules package.
I am trying to incorporate some logging into the rules engine. I understand that there are EventListener interfaces that can be implemented.
Could you provide some information/guidance on how to implement listeners in our kind of setup? Where would I create and store the Java classes that implement the event listeners?
How can a calling application insert an event listener into the session? Will it be part of the XML/JSON payload?
Thanks
1. Where to implement the listeners?
The listeners obviously have to be implemented in Java. One simple place I found to put those implementations is a separate Maven project; after all, a project in the kie-workbench is itself a Maven project. So you can create a separate project (outside the kie-workbench), implement the listeners you want, and then add this new project as a dependency of your kie-workbench project (check the documentation on how to do that).
The only problem I found with this approach is that once you define the dependency between your projects, the kie-workbench will scan every single class in it and in any other dependency it has. Check this link for more information.
So, if your listener project doesn't have too many dependencies, you should be good to go. Note that, in theory, you can declare any kie/drools dependencies of your listener project as <scope>provided</scope>.
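For illustration, here is a minimal listener sketch (the package name is mine; extending the kie-api convenience base class means you only override the callbacks you care about):

package com.example.rules.listeners;  // hypothetical package

import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;

// Logs every rule firing. DefaultAgendaEventListener provides no-op
// implementations of the whole AgendaEventListener interface, so only
// the callbacks of interest need to be overridden.
public class MyAgendaEventListener extends DefaultAgendaEventListener {

    @Override
    public void afterMatchFired(AfterMatchFiredEvent event) {
        System.out.println("Rule fired: "
                + event.getMatch().getRule().getName());
    }
}

Package this in its own jar; the configuration rule shown in the next section can then instantiate it by name.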
2. How can I configure these listeners?
A trick that I always use is to have what I call a "configuration" rule to do this kind of job.
A "Configuration" rule is a rule without LHS (and, if you are distrustful, a high salience). This kind of rules are guaranteed to be executed only once. Just make sure that you call a fireAllRules() before the first interaction with the kie-server, or that the first interaction always starts with a fireAllRules command.
Your configuration rule could look like this:
/**
 * Configures the session's listeners.
 */
rule "[SUB-CONFIG] Listeners Configuration"
    salience 1000
when
    // empty LHS: this rule activates exactly once per session
then
    // The runtime returned by kcontext.getKieRuntime() (Drools 6 / kie-api,
    // as used by BRMS 6) exposes addEventListener() directly, so no cast to
    // an internal implementation class is needed. On Drools 5 the equivalent
    // is kcontext.getKnowledgeRuntime(), and the rule-runtime listener
    // interface was called WorkingMemoryEventListener there.
    kcontext.getKieRuntime().addEventListener(new MyRuleRuntimeEventListener());
    kcontext.getKieRuntime().addEventListener(new MyAgendaEventListener());
end
You can place this rule in your kie-server project.
Hope it helps.

Spring Roo + GWT: is RequestFactory still the way to go if "dual control" is required for every data operation?

One requirement in our application is to implement "dual control" for everything, including CRUD operations.
Just to be clear, "dual control" is a feature that requires a change to data to be approved by someone other than the person who requested the change. So when a user makes changes to data, they are not committed directly to the production tables. I'm aware of several ways to implement this (e.g. staging tables), but that's for another time.
The question: with such a requirement, do you think we should follow the standard "data-centric" way of generated Roo + GWT (which uses RequestFactory)?
Or would we be better off implementing our own "command pattern" based framework to support dual control?
I'm inclined toward the latter. My intuition (based on three days of playing around with Roo + GWT) says that RequestFactory is not designed with dual control in mind, and that we'll hit a wall if we try to force our way in. I'd be more than happy to be proven wrong here.
Have you looked at RequestFactory's ServiceLayerDecorator? It mediates all interaction between the payload processing and your domain objects and code. As an example, you could override the getProperty and setProperty methods to read from and write into some kind of "shadow" log that holds pending mutations.
If you need to implement ACLs for objects, methods, or properties, the loadDomainObject and resolveX methods can be used to control which server-side classes any given request can interact with.
To wire in a custom decorator, you can subclass RequestFactoryServlet and call the two-arg constructor. Alternatively, you can just instantiate a SimpleRequestProcessor using the object returned from ServiceLayer.create().
Implementation note: all of RequestFactory's default domain-interaction behavior is built using a series of ServiceLayerDecorators; check out the GWT source if you want to see example code for building a ServiceLayerDecorator. One thing to note is that if your decorator calls any methods defined in the ServiceLayer API, it should use the instance provided by getTop(). ServiceLayerDecorator instances are expected to be stateless and reusable, so if you need to maintain state across method calls, consider using ThreadLocal variables, similar to RequestFactoryServlet.getThreadLocalX().
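As a rough sketch (PendingChangeLog is a hypothetical ThreadLocal-backed recorder, not part of the RequestFactory API; the decorator override and servlet wiring follow the API described above):

// --- DualControlDecorator.java ---
import com.google.web.bindery.requestfactory.server.ServiceLayerDecorator;

// Records every property write into a shadow log of pending mutations.
// The decorator itself stays stateless; per-request state lives in the
// ThreadLocal below.
public class DualControlDecorator extends ServiceLayerDecorator {
    @Override
    public void setProperty(Object domainObject, String property,
            Class<?> expectedType, Object value) {
        PendingChangeLog.record(domainObject, property, value);
        // Delegate through getTop() so the rest of the decorator chain runs
        getTop().setProperty(domainObject, property, expectedType, value);
    }
}

// --- PendingChangeLog.java (hypothetical helper) ---
import java.util.ArrayList;
import java.util.List;

final class PendingChangeLog {
    private static final ThreadLocal<List<String>> LOG =
            ThreadLocal.withInitial(ArrayList::new);

    static void record(Object domainObject, String property, Object value) {
        LOG.get().add(domainObject.getClass().getSimpleName()
                + "." + property + " -> " + value);
    }

    // Called from your service code to collect the diff for approval
    static List<String> drain() {
        List<String> entries = new ArrayList<>(LOG.get());
        LOG.get().clear();
        return entries;
    }
}

// --- DualControlRequestFactoryServlet.java ---
import com.google.web.bindery.requestfactory.server.DefaultExceptionHandler;
import com.google.web.bindery.requestfactory.server.RequestFactoryServlet;

// Wires the decorator in via RequestFactoryServlet's two-arg constructor
public class DualControlRequestFactoryServlet extends RequestFactoryServlet {
    public DualControlRequestFactoryServlet() {
        super(new DefaultExceptionHandler(), new DualControlDecorator());
    }
}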
It really depends what "user experience" you want, and particularly whether you want users to validate "diffs" of what has been changed, or approve the "new version" (snapshot).
If you want diffs: because RequestFactory only sends diffs (i.e. the actual changes that the user, or your code, made to the objects) to the server, intercepting setProperty calls as suggested by Bob is certainly one way to do it (to make Bob's suggestion a bit clearer: you would "store" the diff in a static ThreadLocal so you can retrieve it from your service call). You could also use "smarter" domain objects that build an internal diff when their setters are called; the diff would then be accessible on each object itself.
If you want snapshots, then you simply have to implement your services to store the modified objects in "staging tables" (or whatever you choose) rather than in the production tables, and then "move" them to the production tables when the "approve" service is called.
One thing that's clear (to me) is that you have to model your services and/or objects around dual control rather than trying to squeeze it into "simple CRUD" operations: the "save" is not a "save", it's a "send for approval", and there's a separate "approve" operation.
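In code, that modeling decision might look something like this hypothetical service contract (all names are illustrative and independent of RequestFactory):

// "Save" is really "submit for approval"; approval is its own operation
// performed by a second user, which is the essence of dual control.
public interface ChangeRequestService {

    // Persists the requested change to a staging area and returns its id
    long submitForApproval(String entityType, long entityId, String diff);

    // Performed by a second user: moves the staged change into production
    void approve(long changeRequestId, String approverId);

    // Rejects the staged change, with a reason for the requestor
    void reject(long changeRequestId, String approverId, String reason);
}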

Using PostSharp to intercept ADO.Net

I have quite a large code base that uses a variety of ADO technologies (i.e. some EF, and in some cases ADO.NET used directly).
I'm wondering if there is any way to globally intercept any ADO.NET calls so that I can start auditing information: the exact SQL statements executed, the time taken, the results returned, and so on.
The main idea being that if I can do this, I shouldn't have to change any of my existing code and that I should be able to just intercept/wrap the ADO.Net calls... Is this possible?
You can globally intercept any methods that you have access to (i.e. your generated models and context). If you need to intercept methods in the framework BCL, then no.
If you just want to get the SQL generated from your EF models, then intercept one of the desired methods with OnMethodBoundaryAspect and do your logging in the OnEntry and OnExit methods.
Remember, you can only intercept code you have access to. Generated EF code is accessible, but any changes you make to it will be overwritten on regeneration, so you will need to apply the aspect using either a partial class or an assembly-level declaration. I would suggest the latter since you want global interception.
Just my 2 cents: you might want to look at other alternatives for this, such as SQL Server Profiler, or consider redesigning your architecture.
Afterthought is an open-source tool that supports modifying an existing DLL without requiring you to recompile from source to add aspect attributes. For this to work, you would need to create amendments (the way you describe your changes in Afterthought) in a separate DLL, and that DLL would need to have an assembly-level attribute implementing IAmendmentAttribute that identifies the types in your target assembly to process.
Take a look at the logging example to see how this works and let me know if you have any questions/issues.
Please note that Afterthought modifies your target assembly to make calls to static methods in another assembly (your tool). If you want to intercept calls without modifying the target assembly in any way, then I recommend looking into the .NET profiling API.
Jamie Thomas (primary author of Afterthought)

Using the Validation Block of Enterprise Library with Entity Framework

We've used the Validation Block of the MS Enterprise Library for some time with great success, in conjunction with custom DALs, but we've recently started using the Entity Framework and can't get the Validation Block to work with it. The entity classes are generated by EF, so any attributes we put on top of them simply get wiped out when the models are regenerated.
Can these two co-exist? If not, does anyone have any recommendations for what validation library/simple rules engine would be a good candidate to use along with EF?
Thank you.
You need a validator which supports a "buddy class" (like this example for Dynamic Data). This seems to be a work in progress for VAB. I can't find an example of anyone actually using it yet, but it might work.
The Validation Application Block supports configuration-based validation, which lets you keep validation separate from your generated domain entities. You can use the Enterprise Library configuration tool for this: simply right-click your configuration file and start adding validation configuration.
I advise you to read the VAB Hands-On Lab document (ValidationHOL.pdf), which is included in the Hands-On Lab download. After reading that document, read this article; it explains how to integrate VAB with Entity Framework.
Good luck.