How to manage both Deployment and Component UML Models in Sparx EA?

I have an existing suite of SOA-connected applications (mixture of JavaEE, PHP and .Net) for which I need to provide an overall deployment model or architectural diagram.
I have found an example of a UML diagram for J2EE Application Deployment which is attractive because it's at just about the right level of detail (apps, containers, some component manifestation) for my current diagramming requirements.
I may even aggregate those at a higher level using something like the same author's Application Clustering Example.
I'm confident that I can jump right in at the component level or even at the artifact level and build my diagram(s) from there.
However, I also design specific Java components and would like to begin providing overall class diagrams to the development team when this current "architectural" exercise is complete. I expect this involves reverse engineering the Java code and starting from there.
My question is: what is my best strategy for meeting my current deployment and future component modeling needs?
Can I expect to back-fill the current artifacts I create now (e.g. WAR or JAR files) with reverse-engineered components later?
Should I reverse engineer now, create the artifacts from the "bottom up", ignore most of the components, then update the reverse-engineered code later when it's time for component modelling? I would still require only logical (i.e. not backed by code) components for the .Net and PHP pieces since they're not my domain.
Should I make & keep my deployment artifacts separate (either via different EA projects or disconnected models in the same project) from my components, requiring a "manual" update to deployment diagrams / artifacts if/when code changes?
I'm just getting started with Sparx EA (after migrating from RSA) and would appreciate the perspective of anyone with more EA experience than myself... as well as feedback on any anti-pattern red flags raised by my descriptions above.

There is no good, general answer to your question. You should use MDA with CIM/PIM/PSM views, where you put the components in the PSM and the class model in the PIM. Now, to keep all of that in sync, the only reliable way is to do it manually - the hard way. EA does offer a model transformation, but I cannot really recommend it. It pretends to link and sync PIM and PSM (in this case) automagically, but it's just a bad facade: first, it works only one way (PIM to PSM), and second, you soon lose track of the relationship between the two model views because you don't really see the traces. Instead, install the <<trace>> connectors manually and annotate them as needed.
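If you later need to back-fill a large number of those trace links (for example between the WAR/JAR artifacts you create now and components reverse-engineered later), EA's automation interface can script the repetitive part. A minimal sketch using the org.sparx Java API that ships with EA - the file path, the GUIDs and the choice of a Dependency connector stereotyped <<trace>> are assumptions for illustration, not a prescription:

    import org.sparx.Connector;
    import org.sparx.Element;
    import org.sparx.Repository;

    public class TraceBackfill {
        public static void main(String[] args) {
            Repository repo = new Repository();
            repo.OpenFile("C:\\models\\soa-landscape.eap");   // hypothetical model file

            // Hypothetical GUIDs: the WAR artifact and the reverse-engineered component
            Element artifact  = repo.GetElementByGuid("{AAAAAAAA-0000-0000-0000-000000000001}");
            Element component = repo.GetElementByGuid("{AAAAAAAA-0000-0000-0000-000000000002}");

            // Create a Dependency connector from the artifact and stereotype it <<trace>>
            Connector trace = artifact.GetConnectors().AddNew("", "Dependency");
            trace.SetSupplierID(component.GetElementID());
            trace.SetStereotype("trace");
            trace.SetNotes("Trace between deployment artifact and reverse-engineered component");
            trace.Update();
            artifact.GetConnectors().Refresh();

            repo.CloseFile();
            repo.Exit();
        }
    }

The semantics still have to be maintained by hand, as described above; a script like this only saves the clicking.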

Related

Can an artifact consist of a list of .jar files?

I read in a UML manual that when there are many .jar files, it is possible to list them in a single artifact box. However, I have not been able to verify this from other sources, and since Visual Paradigm does not formally allow it, I would like to know if my diagram is compliant with UML notation.
If this is correct, is there a rule for choosing the name of the artifact?
I'm also trying to figure out what manifestations are. Since I don't recognize actual components in my application, but only several layers that I wouldn't define as components, I can't even find manifestations. Is it possible that there are no manifestations in a web application?
The shortcut notation using «artifact» is ambiguous, because the notation refers to a single artifact named File.JAR when in reality there are plenty of them. Moreover, the UML specification does not mention this possibility, so modelling tools shouldn't provide this feature.
However, UML provides a shortcut for deployed targets (such as nodes and execution environments), allowing you to write the list of deployed artifacts directly in the box of the node instead of drawing a lot of nested or related space-consuming artifact symbols. The UML specification explicitly allows it:
DeployedTargets are shown as a perspective view of a cube labeled with the name of the DeployedTarget shown prepended by a colon. System elements deployed on a DeployedTarget, and Deployments that connect them, may be drawn inside the perspective cube. Alternately, deployed system elements can be shown as a textual list of element names.
The UML specification provides several examples on pages 653 and 657.
P.S.: in addition to the UML specs, I've cross-checked UML Distilled, The UML User's Guide, 2nd edition, and The UML Language Reference Manual, 2nd edition. They are all consistent in that regard: they mention the possibility of listing deployments directly in an execution target (the older books clarify that it's in a compartment, i.e. after a separation line), but none of them presents this possibility for artifact symbols.
It depends how much you, not your tooling, care about UML compliance
Broadly, the need for strict UML adherence varies: if you are using UML to generate code, documentation, etc., then yes, you need to adhere to the spec; whereas if you are just trying to communicate ideas to other people then, unless they are UML fanatics, they probably won't care as long as they can clearly understand what you're communicating.
The challenge for tools like Visual Paradigm and Sparx EA is that they need to be UML compliant. That means you get the strict adherence whether you need it or not - unless you find a work-around that lets you communicate your ideas even if, from a UML standpoint, it's a little weird.
I just wanted to complete this with what the UML spec says about artifacts (p. 654):
An Artifact represents some (usually reifiable) item of information that is used or produced by a software development process or by operation of a system. Examples of Artifacts include model files, source files, scripts, executable files, database tables, development deliverables, word-processing documents, and mail messages.
(emphasis by me)
Now, whatever reifiable may mean (probably refinable?), I think the term item of information is broad enough to cover anything that holds information, be it a bit, a sentence in a file or a complete set of files.

Are there any workflow engines in existence that don't use BPMN and BPEL?

Our business is planning on building a rather large business application with about 2000 or so users.
Many objects in the system require a mildly complex series of approvals, notifications, etc.
For various reasons, our company has decided to reject formal use of BPMN or BPEL. What I am looking for is a workflow engine that I can pass these objects to as a means of facilitating, tracking, and managing the state of these objects. We are implementing this project using EJB 3.1 with a WebSphere AS.
Am I correct in my understanding of a workflow engine? Everything seems tied to BPMN or BPEL...am I just missing something here as to why most solutions seem to implement BPMN or BPEL? Some advice would be wonderful!
Workflow engines typically take an active role in an enterprise architecture. They execute a declarative process model, which is basically a directed graph consisting of nodes, which represent activities or tasks, and edges, which represent the control flow between these nodes. Edges can be annotated with conditions to express conditional branching and merging. There are several modelling languages around, like YAWL, XPDL, jPDL, BPEL and BPMN 2.0, which sit on top of these abstract concepts and add some syntactic, visual and functional sugar, but only the last two are official industry standards. This matters because standards help you avoid vendor lock-in and make models interchangeable (at least to a certain extent) and supportable by experts and different tools.

At runtime, process instances are created from a process model and are executed according to the control flow defined by the model. The engine actively navigates from one activity to the next and thus "orchestrates" your business logic.

The main difference between BPMN 2.0 and BPEL is that BPEL is tightly coupled to Web services, i.e. the business functions invoked by activities are supposed to be exposed as Web services. So if you want to orchestrate WS-* services, BPEL is still the best choice, since BPMN 2.0 lacks well-defined and standardized bindings to concrete service implementations. In any case, I'd strongly recommend using one of the standardized languages, since they are both broadly accepted in industry and well supported by various vendors and open source communities.
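To make the above concrete, here is a deliberately minimal sketch of the core idea - not any particular engine's API, and all class and method names are made up: a process model as a directed graph of activities, edges guarded by conditions over process variables, and an instance that the engine navigates from node to node:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // A toy process model: named activities connected by guarded edges.
    class ProcessModel {
        static class Edge {
            final String target;
            final Predicate<Map<String, Object>> guard;   // condition over process variables
            Edge(String target, Predicate<Map<String, Object>> guard) {
                this.target = target;
                this.guard = guard;
            }
        }

        final Map<String, Consumer<Map<String, Object>>> activities = new HashMap<>();
        final Map<String, List<Edge>> outgoing = new HashMap<>();

        void activity(String name, Consumer<Map<String, Object>> work) {
            activities.put(name, work);
        }

        void edge(String from, String to, Predicate<Map<String, Object>> guard) {
            outgoing.computeIfAbsent(from, k -> new ArrayList<>()).add(new Edge(to, guard));
        }
    }

    // A process instance: the engine runs each activity, then follows the first
    // outgoing edge whose guard evaluates to true over the instance's variables.
    class ProcessInstance {
        private final ProcessModel model;
        private final Map<String, Object> variables = new HashMap<>();

        ProcessInstance(ProcessModel model) {
            this.model = model;
        }

        void run(String startActivity) {
            String current = startActivity;
            while (current != null) {
                model.activities.get(current).accept(variables);    // execute business logic
                current = model.outgoing.getOrDefault(current, List.of()).stream()
                        .filter(e -> e.guard.test(variables))       // conditional branching
                        .map(e -> e.target)
                        .findFirst()
                        .orElse(null);                              // no matching edge: done
            }
        }
    }

A real engine adds persistence, wait states, human task lists, timers, compensation and so on; the standardized languages essentially give this kind of graph an agreed serialization and visual notation.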
I tried to explain that in more detail because I was not entirely sure what you mean by "facilitating, tracking, and managing the state of these objects". This sounds a little bit like you are more interested in passively monitoring an object's state changes than in actively controlling state changes through a workflow engine. If that assumption is right, then perhaps an abstract state machine would fit your needs better.
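If passive tracking really is all you need, something as small as an enum-based state machine per business object may be enough. A sketch with hypothetical approval states, not taken from any framework:

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical approval lifecycle for a business object; states and transitions
    // are made up for illustration.
    enum ApprovalState {
        DRAFT, SUBMITTED, APPROVED, REJECTED;

        private static final Map<ApprovalState, Set<ApprovalState>> ALLOWED =
                new EnumMap<>(ApprovalState.class);
        static {
            ALLOWED.put(DRAFT,     EnumSet.of(SUBMITTED));
            ALLOWED.put(SUBMITTED, EnumSet.of(APPROVED, REJECTED));
            ALLOWED.put(APPROVED,  EnumSet.noneOf(ApprovalState.class));
            ALLOWED.put(REJECTED,  EnumSet.of(DRAFT));   // allow rework after rejection
        }

        ApprovalState transitionTo(ApprovalState next) {
            if (!ALLOWED.get(this).contains(next)) {
                throw new IllegalStateException(this + " -> " + next + " is not allowed");
            }
            return next;   // caller persists the new state and fires any notifications
        }
    }

Approvals and notifications then become side effects of successful transitions, which you can wire up with plain EJB interceptors or events rather than a full engine.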
Take a look at jBPM5: it provides a very flexible core that allows you to build your own domain-specific language on top of it. Right now the language provided is BPMN2, but you can easily add your own.
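To give an idea of the plumbing involved, starting a process with the jBPM5 knowledge API looks roughly like the following; the process id, file name and variables are made up for the example, so check the jBPM documentation for the exact setup in your version:

    import java.util.HashMap;
    import java.util.Map;

    import org.drools.KnowledgeBase;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.runtime.StatefulKnowledgeSession;
    import org.drools.runtime.process.ProcessInstance;

    public class StartApprovalProcess {
        public static void main(String[] args) {
            // Compile the BPMN2 process definition (hypothetical file name)
            KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            kbuilder.add(ResourceFactory.newClassPathResource("approval.bpmn"), ResourceType.BPMN2);
            KnowledgeBase kbase = kbuilder.newKnowledgeBase();

            // Create a session and start an instance with some process variables
            StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
            Map<String, Object> params = new HashMap<String, Object>();
            params.put("requester", "jdoe");   // hypothetical process variable
            ProcessInstance pi = ksession.startProcess("com.example.approval", params);

            System.out.println("Started process instance " + pi.getId());
            ksession.dispose();
        }
    }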
Cheers
We are building a product that has a migration path for BPMN 2.0 but does not internally use BPMN. We believe checklists are much easier to use in real-time workflows than flowcharts. It still, however, has rules, triggers, conditionals and more, so it's a tool that effectively models processes as "checklists on steroids":
Check it out at http://tallyfy.com

What has been done in the field of versioning models?

We had a rather nice lecture about Model driven architecture by a guy from Model Labs.
One thing that got me intrigued was version control for models (not to be confused with different models of version control) - or rather the lack thereof. By version control for models he meant a way to version XML and EMF files that preserves their semantics.
So, I'm interested in what has been done in that field so far (he mentioned something about SVN and Moodle, though I could have misheard him). A Google search yielded nothing, so I'm turning to the wisdom of Stack Overflow.
I'm looking mostly for information in the form of books, articles, links.
I don't know of a VCS dedicated to models alone, because model-based design is often part of a whole chain of documents that need to be kept in sync.
Namely (not an exhaustive list):
requirements documents (from which you start modeling)
source code and documentation (generated and implemented from the model)
Plus, I never saw the GUI aspect fully solved in those tools (one model painstakingly organized a certain way might be versioned without layout information, and restored organized another way).
One tool I know of which covers all of those development processes is Modelio, which includes a "teamwork manager"
Another example (which I don't know as much about) would be metaCASE, which has an interesting paper "The Model Repository: More than just XML under version control", about DSM (Domain-Specific Modeling).
DSM: model-based software development approach that uses visual models as primary artifacts in the development process.
DSM raises the level of abstraction beyond normal programming languages by directly specifying the solution in a language that uses concepts and rules from the problem domain – a Domain-Specific Language (DSL).
It does summarize the problem:
There is increased awareness within the modeling arena of the need for a central repository of system description information. This is brought on by a growing recognition that only with a strong central repository can modeling tools be integrated, cope with large projects, provide full life-cycle support, produce complete documentation, perform system-wide validation and verification, and adequately control a project.
A full list of version control tools for models can be found here: http://modeling-languages.com/content/version-control-tools-modeling-artifacts
Check out the EMF framework Edapt.
It provides the following features:
Edapt IDE Tooling:
Ecore Editor enhancement to create and maintain the history of an Ecore
Operation-browser to execute refactorings on an Ecore
Release Tooling to prepare a migration plugin from the Ecore history
Custom Migration Support
Edapt Runtime:
API to detect version of given model instances
API to migrate model instances with registered migration plugins

Need some advice on starting a New Life with MVC 2 and which Tools to use for RAD in MVC2?

I have finally decided to hop on the MVC 2 train.
Now I have been doing a lot of reading lately and following is the architecture which I think will be good enough for most Business Web Applications.
Layered Architecture:-
Model (layer which communicates with the database) - EF4
Repository (Layer which communicates with Model and includes all the queries)
Business Layer (Validations, Helper Functions, Calls to repository)
Controllers (Controls the flow of the application and is responsible for providing data to the view from the Business Layer.)
Views (UI)
Now I have decided to create a separate project for each layer (just to respect separation of concerns - although I know it's not necessary, I think it makes the project look more professional :-)
I am using the AutoMetaData T4 template for validation. I also came across FluentValidation but can't find much on it. Which one should I go with?
Which View Engine to go for?
The Razor view engine was love at first sight, but it's still in beta and I think it won't be easy to find examples of it. Am I right?
Spark... I can't find much on it either and don't want to get stuck somewhere in the middle crying for help when there is no one to listen... :-(
T4 templates auto-generate views and I can customize them to generate the views the way I want. Will this be possible with Razor and Spark, or do I have to create the views manually?
Is there any way to auto-generate the repositories?
I would really appreciate it if I can see a project based on the architecture above.
Kindly let me know if it's a good architecture to follow.
I also have some confusion about the business layer - is it really necessary?
This is a very broad question. I decided to use Fluent NHibernate's autoconfig feature for a greenfield application, and was quite impressed. A lot of my colleagues use CakePHP, and Fluent NHibernate needed very little configuration to generate a database schema compatible with the default conventions Cake uses, which is great for us.
I highly suggest the book ASP.NET MVC2 in Action. This book does a good job at covering the ecosystem of libraries that are used in making a maintainable ASP.NET MVC application.
As for the choice of view engines, that can depend on your background. I personally prefer my view to look as much like the HTML as possible, so I would choose Spark. On the other hand if you are used to working with ASP.NET classic, the WebForms view engine may get you up and running fastest.
Kindly let me know if it's a good architecture to follow?
It's a fine start - the only thing I would suggest you add is a layer of abstraction between your Business Logic and Data Access (i.e. Dependency Inversion / Injection) - see this: An Introduction to Dependency Inversion.
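The idea is the same in any OO language; here is a minimal sketch (in Java, with made-up names) of what that abstraction between the business layer and the data access code looks like:

    import java.util.List;

    // The business layer depends only on this abstraction, not on EF4, NHibernate, files, ...
    interface CustomerRepository {
        Customer findById(long id);
        List<Customer> findOverdue();
        void save(Customer customer);
    }

    // Hypothetical domain type, kept trivial for the sketch.
    class Customer {
        long id;
        String name;
        boolean overdue;
    }

    // Business logic receives its dependency from the outside (constructor injection),
    // so it can be wired to a real database in production and to a fake in tests.
    class CustomerService {
        private final CustomerRepository repository;

        CustomerService(CustomerRepository repository) {
            this.repository = repository;
        }

        void sendOverdueReminders() {
            for (Customer c : repository.findOverdue()) {
                // ... business rules and notifications here ...
                repository.save(c);
            }
        }
    }

Swapping in another data store (or a test double) then means writing another class that implements CustomerRepository, without touching CustomerService.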
I know it's not necessary but I think it makes the project look more professional :-)
Ha! Usually you'll find that a lot of "stuff" isn't necessary - right up until the moment when it is, at which point it's usually too late.
Re view engines: I'm still a newbie to ASP.NET MVC myself and so I'm not familiar with the view engines you're talking about; if I were you I'd dream up some test scenarios and then try tackling them with each product so you can directly compare them. Often you need to take things for a test drive to be comfortable with them - this might take time, but it's usually worth it.
Edit:
If I suggest this layer to my PM and give him the above two reasons then I don't think he will accept it
Firstly, PMs are not tech leads (usually); you have responsibility for the design of the solution - not the PM. This isn't uncommon; in my experience, most of the time the PM isn't even aware they are encroaching on turf that isn't theirs. It's not that I'm a "political land grabber", but I just tend to think of "separation of concerns" and, well, I'm sure you understand.
As the designer / architect it's up to you to interpret requirements and (taking business priorities into account) come up with a solution that provides the best 'platform' going forward.
(Regarding DI) My question is: is it really worth it?
If you put a gun to my head I would say yes, however the real world is a little more complex.
If you answer yes to any of these questions then it's more likely that using DI would be a good idea:
The system is non-trivial
The expected life of the system is more than (not sure what the right figure is here, there probably isn't one, so I'm going to put a stake in the ground at) 2 years.
The system and/or its requirements are fluid.
Splitting up the work (BL / DAL) into different teams would be advantageous to the project (perhaps you're part of a distributed team).
The system is intended for a market with a diverse technical landscape (e.g: not everyone will want to use MS SQL).
You want to perform quality testing (this would make it easier).
The system is large / complex, so splitting up functionality and putting it into other systems is a possibility.
You want to offer more than one way to store data (say a file based repository for free, and a database driven repository for a fee).
Business drivers / environment are volatile - what if they came to you and said "this is excellent but now we want to offer a cloud-based version, can you put it on Azure?"
I'd also like to point out that whilst there's definitely a learning curve involved, it's not that huge, and once you're up to speed you'll still be at least as fast as you are now; at worst you'll take a little longer, but you'll be providing much more value (with relatively less effort).
In terms of how much effort is involved...
One-Off Tasks (beyond getting the team up to speed):
Writing a provider loader or picking a DI framework. Once you've done this it will be reusable in all your projects.
'New' Common Tasks (assuming you're following the approach taken in the article):
Defining the interface (on paper) - this is something you'll be doing right now anyway, except that you might not realise it. Basically it's OO design, but as it's going to be the formal interface between two or more packages you need to give it some thought (and yes, you can still refactor it - but ideally the interface should be "stable" and not change a lot; if it does change, it's better to 'add' than to 'remove or change' existing members).
Writing the interface code. This is very fast (minutes not hours), as you're not writing any implementation; and when you go to implement, you can use tools provided by your IDE to generate code stubs based on the interface.
Things you do now that you'd do differently:
Instantiating a variable (in your BL classes) to hold the provider, probably via a factory (see the sketch after this list). Writing this shouldn't take long (again, minutes not hours) and it's fairly simple code to copy, paste & refactor where required.
Writing the DAL code: should be the same as before.
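For the 'hold the provider, probably via a factory' step, the hand-rolled version (before you adopt a DI framework) is only a few lines. A sketch with made-up names, reusing the repository interface idea sketched earlier:

    import java.io.InputStream;
    import java.util.Properties;

    // A very small hand-rolled provider loader: reads the implementing class name
    // from a config file and instantiates it reflectively. Names are hypothetical.
    final class RepositoryFactory {
        private RepositoryFactory() {}

        static CustomerRepository createCustomerRepository() {
            // providers.properties might contain, e.g. (hypothetical):
            //   customerRepository=com.example.data.SqlCustomerRepository
            try (InputStream in = RepositoryFactory.class
                    .getResourceAsStream("/providers.properties")) {
                Properties props = new Properties();
                props.load(in);
                String className = props.getProperty("customerRepository");
                return (CustomerRepository) Class.forName(className)
                        .getDeclaredConstructor()
                        .newInstance();
            } catch (Exception e) {
                throw new IllegalStateException("Could not load repository provider", e);
            }
        }
    }

Your BL code then just calls new CustomerService(RepositoryFactory.createCustomerRepository()); a DI framework replaces exactly this plumbing once you pick one.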
Sometimes it is much easier to learn patterns from code: Sharp Architecture is a concrete implementation of good practices in MVC, using DDD.

How do you organise your code library?

I am interested to know how people organise their code libraries, particularly with respect to reusable components. I am talking in OO terms below, but I am interested in how you organise libraries for other types of language as well.
For example:
Are you a stickler for class library projects for everything or do you prefer to keep everything in a single project?
Do you reuse your prebuilt DLLs or do you include individual classes from previous projects in your current work? If individual classes, do you share them between the projects to ensure all are kept up to date or do you permit branching?
How large are your reusable elements? How focussed are they? How are they focussed?
What level of reuse do you attain through your preferred practices?
etc.
EDIT
I am not looking for specific guidance here, I am just interested in people's thoughts and practices. I am particularly interested in the reuse of code between disparate projects, rather than within a single project. (Unfortunately the use of 'project' here is misleading - I mean reuse between real-world projects undertaken for customers, not projects in a Visual Studio sense.)
It can generally be guided by deployment considerations:
How will you deploy (i.e. what will you copy onto your production machine)?
If what you are deploying is packaged components (i.e. dll, jar, war, ...), it is wise to organize the "code library" as a collection of packaged sets of files.
That way, you will develop directly against the dll, jar, war, ... files which will be deployed on the production platform.
The idea being: if it works with those packaged files, it may still work in production.
the reuse of code between disparate projects, rather than within a single project.
I maintain that kind of reuse is easier in a "component" approach (like the one discussed in the question "Vendor Branches in GIT")
Over more than 40 current projects, we achieved:
technical reuse, by systematically isolating any purely technical aspect into an independent framework (typically a logging framework, an exception framework, a KPI - Key Performance Indicator - framework, and so on).
Those technical components are reused in every other project.
functional reuse, by setting up a clear applicative architecture in order to divide any functional domain (given the business and functional specifications) into well-defined applications. That would typically involve, for instance, a bus layer, which is also a great candidate for exposing services reused by other projects.
Summary:
For a large functional domain, where a single project is not manageable, a good applicative architecture will lead to natural code reuse.
We follow these principles:
The Release-Reuse Equivalency Principle: The granule of reuse is the granule of release.
The Common Closure Principle: The classes in a package should be closed together against the same kinds of changes.
The Common Reuse Principle: The classes in a package are reused together.
The Acyclic Dependencies Principle: Allow no cycles in the package dependency graph.
The Stable Dependency Principle: Depend in the direction of stability.
The Stable Abstraction Principle: A package should be as abstract as it is stable.
You can find out more over here and over here.
It depends on what platform you work on. I'm a (proud) Java developer, and we have nice tools to organise our dependencies, such as Maven or Ivy.
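For example, with Maven each reusable library is released under its own version number, and a consuming project just declares the coordinates (the group, artifact and version below are made up), so reuse between disparate projects becomes dependency metadata rather than copied source:

    <!-- In the consuming project's pom.xml; the coordinates are hypothetical -->
    <dependency>
      <groupId>com.example.shared</groupId>
      <artifactId>logging-framework</artifactId>
      <version>1.4.2</version>
    </dependency>

Ivy expresses the same idea with its own descriptor format.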
Whatever else you decide, good source code control is crucial to this, as it allows you to implement your strategy whatever way you like without ending up with lots of unrelated copies of your libraries. Good branching support is essential.