Package Validation in Enterprise Architect does not warn about Deployment Specification Instance deployments to Nodes or Devices - enterprise-architect

I have a question about Package Validation in Enterprise Architect. According to the UML Specification v2.5.1 section 19.2.3 "Semantics" paragraph 3:
DeploymentSpecifications can only be associated with DeploymentTargets that are
ExecutionEnvironments
However, when I connect a Deployment Specification Instance to a Node or Device via a Deployment Connector as shown in the following figure ...
... no warning or error is raised when I invoke Validate Current Package:
I enabled all options in the Model Validation Configuration dialogue:
Question
How can I make Enterprise Architect enforce this behaviour, so that it only generates conforming UML?
Or, failing that, how can I at least get a warning that the result is not UML-compliant?

The standard validation in EA only checks a limited set of UML syntax rules.
You could report this as a bug and ask Sparx to add this rule, but you'll need to be patient.
As an alternative you can write your own validation rules that can be executed by the standard model validation. This will require you to write an add-in and validation code for each rule.
Or you can use the open source validation framework we developed, where you can define rules using SQL queries.
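For this specific rule, a query along the following lines could work. This is a hypothetical sketch: it assumes EA's usual t_connector/t_object repository tables, the standard type names, and a framework that flags every element the query returns; verify the names and the connector direction against your own repository before relying on it.

-- Hypothetical rule: flag Deployment connectors whose source is an
-- instance classified by a DeploymentSpecification and whose target
-- is a Node or Device instead of an ExecutionEnvironment.
SELECT source.ea_guid AS ItemGuid, source.Name AS ItemName
FROM ((t_connector c
  INNER JOIN t_object source ON source.Object_ID = c.Start_Object_ID)
  INNER JOIN t_object target ON target.Object_ID = c.End_Object_ID)
  INNER JOIN t_object classifier ON classifier.Object_ID = source.Classifier
WHERE c.Connector_Type = 'Deployment'
  AND classifier.Object_Type = 'DeploymentSpecification'
  AND target.Object_Type IN ('Node', 'Device')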

Related

How to integrate jBoss Drools with WSO2 Micro Integrator to use Drools as a Business Rules Management System

I have a WSO2 Micro Integrator project in hand that has a few hard-coded values and conditions in its business logic. I want to separate these out and implement them as rules using Drools (BRMS).
How can I then apply the rules created in Drools to the WSO2 Micro Integrator carbon applications?
Thanks for the help in advance.

Service Fabric: Plugins vs. Application Types

I'm developing a Service Fabric-based trading platform that will host hundreds of different long-running trading algorithms, all of which conform to a common interface and share a good deal of common code but can be vastly different in their internal specifics. I could model each of the different algos as an application type (which I'd dynamically load), but given the large number of different algos I have to wonder if it makes more sense to create a single plugin-runner application type and then implement the algos as plugins.
In a related question, I understand how to implement a plugin architecture in general, but I'm not quite sure where one would place the actual plugins so that they are discoverable by an instance running on Service Fabric.
Anyway, thanks for your help....
Both approaches can work, I think. Using lots of Application Types adds the (significant) overhead of running lots of processes, but allows you to use and upgrade multiple versions of the same algorithm running simultaneously.
Using the plugin approach requires you to deal with versioning yourself.
Using the Application approach probably requires some kind of request router, while the plugin service could make its own decisions (if it's stateless).
You can create a Stateful service that acts as the plugin repository, or mount a file share, or use a database; there are no restrictions from the platform here. You can use naming conventions to locate the proper plugin.
The following approach could work if an application upgrade is acceptable to you when changing the set of plugins needed for a given application instance.
Recall that Service Fabric apps must be packaged before deployment or upgrade. Using either MSBuild tasks or PowerShell, you could copy your plugin DLLs to the plugin runner service's code package as a post-packaging step prior to the app upgrade. Then your plugin DLLs would be available to the service at startup using Assembly.Load and the code package's path, available in your service implementation's Context.CodePackageActivationContext.GetCodePackageObject("Your-Code-Package-Name").Path property. The code package's name is defined in ServiceManifest.xml, and is named Code by default.
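To make the loading step concrete, here is a minimal C# sketch. It assumes the default code package name Code, a one-DLL-per-algorithm naming convention, and a hypothetical shared ITradingAlgorithm interface; adjust to your actual contract.

using System;
using System.Fabric;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical shared contract, normally defined in a common assembly.
public interface ITradingAlgorithm { void Run(); }

public static class PluginLoader
{
    public static ITradingAlgorithm Load(ServiceContext context, string algoName)
    {
        // Path to the code package the plugin DLLs were copied into
        // as a post-packaging step.
        string codePath = context.CodePackageActivationContext
            .GetCodePackageObject("Code").Path;

        // Naming convention (assumed): one DLL per algorithm.
        string dllPath = Path.Combine(codePath, algoName + ".dll");
        Assembly assembly = Assembly.LoadFrom(dllPath);

        // Instantiate the first concrete type implementing the shared interface.
        Type implType = assembly.GetTypes()
            .First(t => typeof(ITradingAlgorithm).IsAssignableFrom(t) && !t.IsAbstract);
        return (ITradingAlgorithm)Activator.CreateInstance(implType);
    }
}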

MS Project server 2016 update custom fields on tasks

For Project Server 2013 we’ve been using the SOAP API’s QueueUpdateProjectRequest to achieve this, but in 2016 we can’t even check out the project using SOAP.
We try to POST to /PWA/_vti_bin/psi/Project.asmx:
<?xml version='1.0' encoding='UTF-8' ?>
<ns2:Envelope xmlns:ns3="http://schemas.microsoft.com/office/project/server/webservices/Project/"
              xmlns:ns2="http://schemas.xmlsoap.org/soap/envelope/">
  <ns2:Header></ns2:Header>
  <ns2:Body>
    <ns3:CheckOutProject>
      <ns3:projectUid>7475f3ef-226e-e611-80d3-0050568a983b</ns3:projectUid>
      <ns3:sessionUid>c430ce2b-057e-4990-b5b6-9c6f28415739</ns3:sessionUid>
      <ns3:sessionDescription></ns3:sessionDescription>
    </ns3:CheckOutProject>
  </ns2:Body>
</ns2:Envelope>
and get:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <s:Fault>
      <faultcode xmlns:a="http://Microsoft.Office.Project.Server">a:ProjectServerFaultCode</faultcode>
      <faultstring>Unhandled Communication Fault occurred</faultstring>
      <detail>
        <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">Incorrect inproc routing. No inproc host is available for Project.</string>
      </detail>
    </s:Fault>
  </s:Body>
</s:Envelope>
We’ve also tried writing the custom field values using the custom field internal names when merge-posting to /ProjectServer/Projects('{#project}')/Draft/Tasks('{#Id}').
The server seems to ignore the custom field values while correctly updating system field values.
There is documentation for updating custom fields on Project, but not on Task: https://github.com/OfficeDev/Project-REST-Basic-Operations/blob/master/updateprojectcustomfieldvalues.ps1
What is the proper way to update custom fields on tasks in Project Server 2016?
According to Microsoft, there is no Project class in the PSI anymore:
https://technet.microsoft.com/en-us/library/mt422816(v=office.16).aspx#Anchor_2
Project Server Interface (PSI) Project class removed
The Project class in the PSI is not supported in Project Server 2016. For all new development, use the Project Client Side Object Model (CSOM).
I'm getting the same error when calling PSI functions from the Project class.
I'm not 100% sure, but I suspect that on the server itself the REST/SOAP operations still use the PSI in the end, which is why you get the same error.
I have no idea whether you can still achieve what you need via REST/SOAP.
The solution will be to use CSOM (as suggested by Microsoft), but I don't know if it fits your application.
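In case it helps, the CSOM route could look roughly like the sketch below. It is not verified against 2016: it assumes the Microsoft.ProjectServer.Client assemblies, a placeholder PWA URL, and that DraftTask exposes custom fields through its indexer by internal name, the same way the project-level sample linked above does for projects.

using System;
using System.Linq;
using Microsoft.ProjectServer.Client;

class TaskCustomFieldUpdater
{
    // projectUid/taskUid/fieldInternalName/value are placeholders supplied by the caller.
    static void UpdateTaskCustomField(Guid projectUid, Guid taskUid,
                                      string fieldInternalName, object value)
    {
        var projContext = new ProjectContext("https://contoso/PWA"); // placeholder PWA URL

        var projects = projContext.LoadQuery(
            projContext.Projects.Where(p => p.Id == projectUid));
        projContext.ExecuteQuery();

        // Check out the published project to get a draft we can edit.
        DraftProject draft = projects.First().CheckOut();
        projContext.Load(draft.Tasks);
        projContext.ExecuteQuery();

        DraftTask task = draft.Tasks.First(t => t.Id == taskUid);
        task[fieldInternalName] = value;     // e.g. a "Custom_x005f_..." internal name

        draft.Update();
        QueueJob job = draft.Publish(true);  // true = check the project back in
        projContext.WaitForQueue(job, 120);  // wait up to 120 seconds
    }
}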

Behavior difference between Actor and Service projects in Azure Service Fabric

In an Actor project, the AssemblyVersionAttribute value is used to update the ServiceManifest version, along with the code and config version. There is no such behavior for Service projects.
This updated version is also used to update the ServiceManifestVersion of the ServiceManifestRef reference in the ApplicationManifest. And while the ApplicationManifest is modified on every build, a manually set version in a Service project's ServiceManifest doesn't appear to be propagated to the ApplicationManifest either.
Is this planned or intended behavior for Service projects?
I'm running Visual Studio 2015 RC, the first preview of the Service Fabric SDK, and 4.0.95-preview1 of the NuGet packages.
Short answer: This behavior difference is temporary as we improve our tooling support for versioning and upgrade.
Slightly longer answer: Part of the original goal of the Service Fabric actor framework was to abstract away the details of manipulating the application and service manifests so that you can truly focus on your business logic. Hence, the SDK includes a tool (called FabActUtil) which is responsible for doing some of that manipulation on your behalf as a post-build step. There is currently no such tool for reliable services projects. We are considering options for reconciling this difference as part of adding upgrade support to Visual Studio. We need to strike a balance between keeping you in control of your versioning scheme and taking care of the chore of cascading your version changes throughout the application as required.

osgi multiple versions of service

I have OSGi services service-1.0.0.jar and service-1.1.0.jar; both are implementations of service-api-1.0.0.jar.
Both service-1.0.0.jar and service-1.1.0.jar have the same service name and packages.
The services are registered by a bundle activator; let's assume the bundle activator is com.abc.xyz.MyActivator in both 1.0.0 and 1.1.0.
The issue I am facing is that when I deploy these services and look them up using a service tracker with a filter on the version I want, I always get the same implementation regardless of which version I choose.
This makes me believe that what I am trying to achieve is not doable.
I need multiple implementations of the service, packaged in separate bundles that differ only in version, and I need to be able to choose between them dynamically at runtime.
I am trying this in JBoss EAP 6.1.1.
If I keep different package names in the two versions, it seems to understand that these are two different services, but when the package names are the same I get the same service implementation.
Am I doing something wrong? Has anybody tried this?
Is it correct that OSGi allows you to deploy multiple versions of a service?
UPDATE: After using unique package names for MyActivator in 1.0.0 and 1.1.0, it looks like the services maintain their uniqueness.
Does that mean activators have to be unique across bundles?
I assume that service-api-1.0.0.jar exports the package which defines the service interface. In that case, it sounds like you have two implementations of the same version of the service, not different implementations of different versions of the service. So from a service user's point of view, the services are the same: they come from the same service API package version.
I think you are using OSGi services in a strange way. As a client you should not look into the implementation bundles to determine versions or other information.
Instead you should use the service interface and service properties to distinguish between services.
So for example you can have a property version and publish the first service with version=1 and the second with version=2. Then you can filter for this property to get a specific service.
Using reflection is also a rather unusual thing. Instead, try to provide classes in the interface package that you use to exchange data between client and service. This will make the client less dependent on the service implementation.
If you have multiple implementations of the same service API, and a client queries for an implementation of the service API, it could get any of the implementations. And that's good because the client shouldn't care.
Say for example you have a Greeting interface with multiple implementations registered as services, possibly by multiple bundles. If a client asks OSGi for a Greeting service then OSGi will simply pick one. After all, if you just ask for a Greeting without specifying anything else then you should accept any implementation. Clients certainly shouldn't care which particular bundle the service comes from: this is the nature of decoupling.
Incidentally, when this happens OSGi normally chooses the implementation that was registered first (actually the one with the lowest service.id, which is an ever-incrementing number, so it's effectively the same thing). This is probably why you see OSGi consistently choosing one particular service.
If your client needs to distinguish between service implementations then you can add properties to the published services and filter on those services. For example one bundle could publish a Greeting service with property language=en_US (i.e. US English) and another could publish a Greeting service with language=de. If your client only wants greetings in English then it can use a filter like (language=en*).
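In code, the registration and the filtered lookup could look roughly like this Java sketch, reusing the hypothetical Greeting interface from the example above:

import java.util.Collection;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Hypothetical service interface from the example above.
interface Greeting { String greet(); }

public class GreetingExample {

    // Publish two implementations, distinguished by a service property.
    void register(BundleContext context, Greeting english, Greeting german) {
        Hashtable<String, Object> enProps = new Hashtable<>();
        enProps.put("language", "en_US");
        context.registerService(Greeting.class, english, enProps);

        Hashtable<String, Object> deProps = new Hashtable<>();
        deProps.put("language", "de");
        context.registerService(Greeting.class, german, deProps);
    }

    // Look up a service by filtering on the property, not on the
    // implementation bundle.
    Greeting lookupEnglish(BundleContext context) throws Exception {
        Collection<ServiceReference<Greeting>> refs =
                context.getServiceReferences(Greeting.class, "(language=en*)");
        return refs.isEmpty() ? null : context.getService(refs.iterator().next());
    }
}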