Can an artifact consist of a list of .jar files?

I read in a UML manual that when there are many .jar files, it is possible to list them in a single artifact box. However, I have not been able to verify this from other sources, and since Visual Paradigm does not formally allow it, I would like to know if my diagram is compliant with UML notation.
If this is correct, is there a rule for choosing the name of the artifact?
I'm also trying to figure out what manifestations are. Since I don't recognize actual components in my application, but only several layers that I wouldn't define as components, I can't even find manifestations. Is it possible that there are no manifestations in a web application?

The shortcut notation using «artifact» is ambiguous, because the notation suggests a single artifact named File.JAR when in reality there are many. Moreover, the UML specification does not mention this possibility, so modelling tools shouldn't provide this feature.
However, UML provides a shortcut for deployment targets (such as nodes and execution environments), allowing you to write the list of deployed artifacts directly in the box of the node, instead of drawing a lot of nested or related space-consuming artifact symbols. The UML specification explicitly allows it:
DeployedTargets are shown as a perspective view of a cube labeled with the name of the DeployedTarget shown prepended by a colon. System elements deployed on a DeployedTarget, and Deployments that connect them, may be drawn inside the perspective cube. Alternately, deployed system elements can be shown as a textual list of element names.
The UML specification provides several examples on pages 653 and 657.
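To illustrate the textual-list alternative, here is a rough plain-text approximation of such a node symbol (the node and .jar names are invented for the example; the spec draws the box as a perspective cube):

    +--------------------------------+
    | «executionEnvironment»         |
    | :ApplicationServer             |
    |--------------------------------|
    |  orders.jar                    |
    |  billing.jar                   |
    |  shared-utils.jar              |
    +--------------------------------+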
P.S.: in addition to the UML specs, I've cross-checked UML Distilled, The UML User's Guide 2nd edition, and The UML Language Reference Manual 2nd edition. They are all consistent in this regard: they mention the possibility of listing deployments directly in an execution target (the older books clarify that it's in a compartment, i.e. after a separation line); none of them presents this possibility for artifact symbols.

It depends on how much you, not your tooling, care about UML compliance.
Broadly, the need for strict UML adherence varies: if you are using UML to generate code, documentation, etc., then yes, you need to adhere to the spec. Whereas if you are just trying to communicate ideas to other people then, unless they are UML fanatics, they probably won't care as long as they can clearly understand what you're communicating.
The challenge for tools like Visual Paradigm and Sparx EA is that they need to be UML compliant. That means you get the strict adherence whether you need it or not - unless you find a work-around that lets you communicate your ideas even if, from a UML standpoint, it's a little weird.

I just wanted to complete this with what the UML spec says about artifacts (p. 654):
An Artifact represents some (usually reifiable) item of information that is used or produced by a software development process or by operation of a system. Examples of Artifacts include model files, source files, scripts, executable files, database tables, development deliverables, word-processing documents, and mail messages.
(emphasis mine)
Now, whatever reifiable may mean (probably refinable?), I think the term item of information is broad enough to cover anything that holds information, be it a bit, a sentence in a file, or a complete set of files.

Related

How to manage both Deployment and Component UML Models in Sparx EA?

I have an existing suite of SOA-connected applications (mixture of JavaEE, PHP and .Net) for which I need to provide an overall deployment model or architectural diagram.
I have found an example of a UML diagram for J2EE Application Deployment which is attractive because it's at just about the right level of detail (apps, containers, some component manifestation) for my current diagramming requirements.
I may even aggregate those at a higher level using something like the same author's Application Clustering Example.
I'm confident that I can jump right in at the component level or even at the artifact level and build my diagram(s) from there.
However, I also design specific Java components and would like to begin providing overall class diagrams to the development team when this current "architectural" exercise is complete. I expect this involves reverse engineering the Java code and starting from there.
My question is: what is my best strategy for meeting my current deployment and future component modeling needs?
Can I expect to back-fill the current artifacts I create now (e.g. WAR or JAR files) with reverse-engineered components later?
Should I reverse engineer now, create the artifacts from the "bottom up", ignore most of the components, then update the reverse-engineered code later when it's time for component modelling? I would still require only logical (i.e. not backed by code) components for the .Net and PHP pieces since they're not my domain.
Should I make & keep my deployment artifacts separate (either via different EA projects or disconnected models in the same project) from my components, requiring a "manual" update to deployment diagrams / artifacts if/when code changes?
I'm just getting started with Sparx EA (after migrating from RSA) and would appreciate the perspective of anyone with more EA experience than myself... as well as feedback on any anti-pattern red flags raised by my descriptions above.
There is no good/general answer to your question. You should use MDA with CIM/PIM/PSM views, where you put the components in the PSM and the class model in the PIM. Now, to keep all that in sync, the only true way is to do it manually - the hard way. Though EA offers a model transformation, I cannot really recommend it. It pretends to link/sync PIM and PSM (in this case) automagically, but it's just a bad facade. First, it works only one way (PIM to PSM), and second, you soon lose the contact between both model views as you don't really see the traces. Instead, install the <<trace>> connectors manually and annotate them as needed.

What has been done in the field of versioning models?

We had a rather nice lecture about Model driven architecture by a guy from Model Labs.
One thing that got me intrigued was version control for models (not to be confused with different models of version control) - or the lack thereof. By version control for models he meant a way to version XML and EMF files which preserves their semantics.
So, I'm interested in what has been done so far in that field (he mentioned something about SVN and Moodle, though I could have misheard him). A Google search yielded nothing, so I'm turning to the wisdom of Stack Overflow.
I'm looking mostly for information in the form of books, articles, links.
I don't know of a VCS dedicated to models alone, because model-based design is often part of a whole chain of documents that need to be kept in sync.
Namely (not an exhaustive list):
requirements documents (from which you start modeling)
source code and documentation (generated and implemented from the model)
Plus, I never saw the GUI aspect fully solved in those tools (one model painstakingly organized a certain way might be versioned without layout information, and restored organized another way).
One tool I know of which covers all of those development processes is Modelio, which includes a "teamwork manager"
Another example (which I don't know as much about) would be metaCASE, which has an interesting paper "The Model Repository: More than just XML under version control", about DSM (Domain-Specific Modeling).
DSM: a model-based software development approach that uses visual models as primary artifacts in the development process.
DSM raises the level of abstraction beyond normal programming languages by directly specifying the solution in a language that uses concepts and rules from the problem domain – a Domain-Specific Language (DSL).
It does summarize the problem:
There is increased awareness within the modeling arena of the need for a central repository of system description information. This is brought on by a growing recognition that only with a strong central repository can modeling tools be integrated, cope with large projects, provide full life-cycle support, produce complete documentation, perform system-wide validation and verification, and adequately control a project.
A full list of version control tools for models can be found here: http://modeling-languages.com/content/version-control-tools-modeling-artifacts
Check the EMF framework Edapt; it provides the following features:
Edapt IDE Tooling:
Ecore Editor enhancement to create and maintain the history of an Ecore
Operation-browser to execute refactorings on an Ecore
Release Tooling to prepare a migration plugin from the Ecore history
Custom Migration Support
Edapt Runtime:
API to detect version of given model instances
API to migrate model instances with registered migration plugins

How To Create a Flexible Plug-In Architecture?

A repeating theme in my development work has been the use of or creation of an in-house plug-in architecture. I've seen it approached many ways - configuration files (XML, .conf, and so on), inheritance frameworks, database information, libraries, and others. In my experience:
A database isn't a great place to store your configuration information, especially co-mingled with data
Attempting this with an inheritance hierarchy requires knowledge about the plug-ins to be coded in, meaning the plug-in architecture isn't all that dynamic
Configuration files work well for providing simple information, but can't handle more complex behaviors (see the sketch after this list)
Libraries seem to work well, but the one-way dependencies have to be carefully created.
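For illustration, the configuration-file approach I mention above usually ends up looking like this minimal Java sketch - a plugins.txt of fully-qualified class names instantiated via reflection (the Plugin interface and the file name are hypothetical):

    import java.nio.file.*;
    import java.util.*;

    // Hypothetical plugin contract; a real one would have more lifecycle methods.
    interface Plugin {
        void init();
    }

    class ConfigFilePluginLoader {
        // Reads fully-qualified class names, one per line, and instantiates each
        // via reflection. Complex behaviour still has to live in the classes;
        // the file can only select and order them.
        static List<Plugin> load(Path configFile) throws Exception {
            List<Plugin> plugins = new ArrayList<>();
            for (String line : Files.readAllLines(configFile)) {
                line = line.trim();
                if (line.isEmpty() || line.startsWith("#")) continue; // skip comments
                Class<?> cls = Class.forName(line);
                plugins.add((Plugin) cls.getDeclaredConstructor().newInstance());
            }
            return plugins;
        }
    }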
As I seek to learn from the various architectures I've worked with, I'm also looking to the community for suggestions. How have you implemented a SOLID plug-in architecture? What was your worst failure (or the worst failure you've seen)? What would you do if you were going to implement a new plug-in architecture? What SDK or open source project that you've worked with has the best example of a good architecture?
A few examples I've been finding on my own:
Perl's Module::Pluggable and IOC for dependency injection in Perl
The various Spring frameworks (Java, .NET, Python) for dependency injection.
An SO question with a list for Java (including Service Provider Interfaces)
An SO question for C++ pointing to a Dr. Dobbs article
An SO question regarding a specific plugin idea for ASP.NET MVC
These examples seem to play to various language strengths. Is a good plugin architecture necessarily tied to the language? Is it best to use tools to create a plugin architecture, or to do it on one's own following models?
This is not an answer as much as a bunch of potentially useful remarks/examples.
One effective way to make your application extensible is to expose its internals as a scripting language and write all the top-level stuff in that language. This makes it quite modifiable and practically future-proof (if your primitives are well chosen and implemented). A success story of this kind of thing is Emacs. I prefer this to the Eclipse-style plugin system because if I want to extend functionality, I don't have to learn the API and write/compile a separate plugin. I can write a 3-line snippet in the current buffer itself, evaluate it and use it. Very smooth learning curve and very pleasing results.
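To make that concrete, here is a minimal Java sketch of the idea - expose an application object to an embedded script engine and keep the top-level behaviour in scripts (the engine choice and all names are assumptions, not Emacs' actual design):

    import javax.script.*;

    public class ScriptableApp {
        // A "primitive" the application exposes to scripts.
        public String greet(String name) { return "Hello, " + name; }

        public static void main(String[] args) throws ScriptException {
            ScriptEngineManager manager = new ScriptEngineManager();
            // Assumes a JavaScript engine (e.g. GraalJS) is on the classpath.
            ScriptEngine engine = manager.getEngineByName("javascript");
            if (engine == null) throw new IllegalStateException("no JS engine found");
            engine.put("app", new ScriptableApp()); // expose internals to scripts
            // The top-level behaviour lives in the script, not in compiled code.
            System.out.println(engine.eval("app.greet('plugin author')"));
        }
    }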
One application which I've extended a little is Trac. It has a component architecture which in this situation means that tasks are delegated to modules that advertise extension points. You can then implement other components which would fit into these points and change the flow. It's a little like Kalkie's suggestion above.
Another one that's good is py.test. It follows the "best API is no API" philosophy and relies purely on hooks being called at every level. You can override these hooks in files/functions named according to a convention and alter the behaviour. You can see the list of plugins on the site to see how quickly/easily they can be implemented.
A few general points.
Try to keep your non-extensible/non-user-modifiable core as small as possible. Delegate everything you can to a higher layer so that extensibility increases. That leaves less stuff to correct in the core in case of bad choices.
Related to the above point is that you shouldn't make too many decisions about the direction of your project at the outset. Implement the smallest needed subset and then start writing plugins.
If you are embedding a scripting language, make sure it's a full one in which you can write general programs and not a toy language just for your application.
Reduce boilerplate as much as you can. Don't bother with subclassing, complex APIs, plugin registration and stuff like that. Try to keep it simple so that it's easy, and not just possible, to extend. This will let your plugin API be used more and will encourage end users to write plugins, not just plugin developers. py.test does this well. Eclipse, as far as I know, does not.
In my experience I've found there are really two types of plug-in architectures.
One follows the Eclipse model which is meant to allow for freedom and is open-ended.
The other usually requires plugins to follow a narrow API because the plugin will fill a specific function.
To state this in a different way, one allows plugins to access your application while the other allows your application to access plugins.
The distinction is subtle, and sometimes there is no distinction... you want both for your application.
I do not have a ton of experience with the Eclipse/opening-up-your-app-to-plugins model (the article in Kalkie's post is great). I've read a bit about the way Eclipse does things, but nothing more than that.
Yegge's properties blog talks a bit about how the use of the properties pattern allows for plugins and extensibility.
Most of the work I've done has used a plugin architecture to allow my app to access plugins, things like time/display/map data, etc.
Years ago I would create factories, plugin managers and config files to manage all of it and let me determine which plugin to use at runtime.
Now I usually just have a DI framework do most of that work.
I still have to write adapters to use third party libraries, but they usually aren't that bad.
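Those adapters are usually small; something like this hypothetical Java sketch (all names invented):

    // The interface the plugin architecture expects.
    interface MapDataSource {
        String tileFor(double lat, double lon);
    }

    // A third-party class with an incompatible API (stand-in for a real library).
    class ThirdPartyMapService {
        String fetchTile(double[] coords) { return "tile@" + coords[0] + "," + coords[1]; }
    }

    // The adapter translates between the two, so the rest of the
    // application only ever sees MapDataSource.
    class ThirdPartyMapAdapter implements MapDataSource {
        private final ThirdPartyMapService service = new ThirdPartyMapService();

        @Override
        public String tileFor(double lat, double lon) {
            return service.fetchTile(new double[] { lat, lon });
        }
    }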
One of the best plug-in architectures that I have seen is implemented in Eclipse. Instead of having an application with a plug-in model, everything is a plug-in. The base application itself is the plug-in framework.
http://www.eclipse.org/articles/Article-Plug-in-architecture/plugin_architecture.html
I'll describe a fairly simple technique that I have used in the past. This approach uses C# reflection to help in the plugin loading process. This technique can be modified so it is applicable to C++, but you lose the convenience of being able to use reflection.
An IPlugin interface is used to identify classes that implement plugins. Methods are added to the interface to allow the application to communicate with the plugin - for example, an Init method that the application will use to instruct the plugin to initialize.
To find plugins the application scans a plugin folder for .Net assemblies. Each assembly is loaded. Reflection is used to scan for classes that implement IPlugin. An instance of each plugin class is created.
(Alternatively, an XML file might list the assemblies and classes to load. This might help performance, but I never found an issue with performance.)
The Init method is called for each plugin object. It is passed a reference to an object that implements the application interface: IApplication (or something else named specifically for your app, e.g. ITextEditorApplication).
IApplication contains methods that allow the plugin to communicate with the application. For instance, if you are writing a text editor, this interface would have an OpenDocuments property that allows plugins to enumerate the collection of currently open documents.
This plugin system can be extended to scripting languages, e.g. Lua, by creating a derived plugin class, e.g. LuaPlugin, that forwards IPlugin functions and the application interface to a Lua script.
This technique allows you to iteratively implement your IPlugin, IApplication and other application-specific interfaces during development. When the application is complete and nicely refactored you can document your exposed interfaces and you should have a nice system for which users can write their own plugins.
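The description above is C#-specific, but the same shape exists elsewhere; as a hedged illustration, here is a rough Java analogue that swaps the manual assembly scan for the standard ServiceLoader SPI lookup (IPlugin/IApplication are the placeholder names from the answer):

    import java.util.ServiceLoader;

    // Mirrors the contracts described above.
    interface IApplication {
        void log(String message);
    }

    interface IPlugin {
        void init(IApplication app);
    }

    public class PluginHost implements IApplication {
        @Override
        public void log(String message) { System.out.println("[app] " + message); }

        public static void main(String[] args) {
            PluginHost app = new PluginHost();
            // Discovers implementations registered under META-INF/services
            // on the classpath - the Java analogue of scanning a plugin folder.
            for (IPlugin plugin : ServiceLoader.load(IPlugin.class)) {
                plugin.init(app); // hand each plugin the application interface
            }
        }
    }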
I once worked on a project that had to be so flexible in the way each customer could set up the system that the only good design we found was to ship the customer a C# compiler!
If the spec is filled with words like:
Flexible
Plug-In
Customisable
Ask lots of questions about how you will support the system (and how support will be charged for, as each customer will think their case is the normal case and should not need any plug-ins), because in my experience supporting customers (or front-line support people) who write plug-ins is a lot harder than the architecture itself.
Usually I use MEF. The Managed Extensibility Framework (or MEF for short) simplifies the creation of extensible applications. MEF offers discovery and composition capabilities that you can leverage to load application extensions.
If you are interested, read more...
In my experience, the two best ways to create a flexible plugin architecture are scripting languages and libraries. These two concepts are in my mind orthogonal; the two can be mixed in any proportion, rather like functional and object-oriented programming, but find their greatest strengths when balanced. A library is typically responsible for fulfilling a specific interface with dynamic functionality, whereas scripts tend to emphasise functionality with a dynamic interface.
I have found that an architecture based on scripts managing libraries seems to work the best. The scripting language allows high-level manipulation of lower-level libraries, and the libraries are thus freed from any specific interface, leaving all of the application-level interaction in the more flexible hands of the scripting system.
For this to work, the scripting system must have a fairly robust API, with hooks to the application data, logic, and GUI, as well as the base functionality of importing and executing code from libraries. Further, scripts are usually required to be safe in the sense that the application can gracefully recover from a poorly-written script. Using a scripting system as a layer of indirection means that the application can more easily detach itself in case of Something Bad™.
The means of packaging plugins depends largely on personal preference, but you can never go wrong with a compressed archive with a simple interface, say PluginName.ext in the root directory.
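If the archive is a .jar, loading it at runtime can stay small too; a minimal sketch under that assumption (class and file names hypothetical):

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;

    public class ArchivePluginLoader {
        // Loads one named class out of a plugin archive and instantiates it.
        static Object loadFromArchive(Path jar, String className) throws Exception {
            URLClassLoader loader = new URLClassLoader(
                new URL[] { jar.toUri().toURL() },
                ArchivePluginLoader.class.getClassLoader());
            return loader.loadClass(className).getDeclaredConstructor().newInstance();
        }
    }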
I think you need to first answer the question: "What components are expected to be plugins?"
You want to keep this number to an absolute minimum or the number of combinations which you must test explodes. Try to separate your core product (which should not have too much flexibility) from plugin functionality.
I've found that the IoC (Inversion of Control) principle (read: the Spring framework) works well for providing a flexible base, to which you can add specialization to make plugin development simpler.
You can scan the container for the "interface as a plugin type advertisement" mechanism, as sketched below.
You can use the container to inject common dependencies which plugins may require (e.g. ResourceLoaderAware or MessageSourceAware).
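For what that looks like in practice, a minimal Spring-flavoured sketch of the container scan (the MyPlugin contract and the scanned package are assumptions):

    import java.util.Map;
    import org.springframework.context.annotation.AnnotationConfigApplicationContext;

    // Hypothetical contract; implementing it is the "plugin type advertisement".
    interface MyPlugin {
        void start();
    }

    public class PluginBootstrap {
        public static void main(String[] args) {
            // Scans a (hypothetical) package for @Component-annotated plugin beans.
            try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext("com.example.plugins")) {
                // The lookup by interface replaces a hand-rolled plugin registry.
                Map<String, MyPlugin> plugins = ctx.getBeansOfType(MyPlugin.class);
                plugins.values().forEach(MyPlugin::start);
            }
        }
    }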
The Plug-in Pattern is a software pattern for extending the behaviour of a class through a clean interface. Often the behaviour of classes is extended by class inheritance, where the derived class overrides some of the virtual methods of the class. A problem with this solution is that it conflicts with implementation hiding. It also leads to situations where derived classes become gathering places of unrelated behaviour extensions. Scripting can also be used to implement this pattern, as mentioned above: expose the internals as a scripting language and write all the top-level stuff in that language, which makes the application quite modifiable and practically future-proof. Libraries can be combined with scripts as well, with the scripting language allowing high-level manipulation of lower-level libraries (also as mentioned above).

Common Libraries at a Company

I've noticed in pretty much every company I've worked that they have a common library that is generally shared across a number of projects. More often than not this has been a single companyx-commons project that ends up as a dumping ground for common programs including:
Command Line Parsers
File Utilities
Framework Helpers
etc...
Some of these are well thought out and some duplicate functionality found in Apache commons-lang, commons-io, etc.
What are the things you have in your common library and more importantly how do you structure the common libraries to make them easy to improve and incorporate across other projects?
In my experience, the single biggest factor in the success of a common library is user buy-in, users in this case being other developers; and the culture of your workplace/team(s) will be a big factor.
Separate libraries (projects/assemblies if you're in .Net) for different application tiers is essential (e.g: there's obviously no point putting UI and data access code together).
Keep things as simple as possible; what you don't put in a common library is often at least as important as what you do. Users of the library won't want to have to think, so usage needs to be super easy.
The golden rule we stuck to was keeping individual functions focused on a single task - do one thing and do it well (or very very well); don't try and provide something that tries to take every possibility into account, the more reusable you think you're making it - the less likely it is to be used. Code Complete (the book) has some excellent content on common libraries.
A good approach to setting up and improving a library is to do regular code reviews and retrospectives; find good candidates that you've already come up with and consider re-factoring them into a library for future projects; a good candidate will be something that more than one developer has had to do on more than one project (for example).
Set up some sort of simple and clear governance of the libraries - someone who can 'own' a specific library and ensure its overall quality (such as a senior dev or team lead).
I have so far written most of the common libraries we use at our office.
We have certain button classes that are just slightly more useful to us than the standard buttons
A database management class that does some internal caching and can connect to ODBC, OLEDB, SQL, and Access databases without even the flip of a parameter
Some grid and list controls that are multithreaded, so we can add large amounts of data to them without the program slowing down and without having to write all the multithreading code every time there is a performance issue with a list box/combo box.
These classes make it easier for all of us to work on each other's code and know how exactly they work since we all use the exact same interfaces throughout our products.
As far as organization goes, all of the DLL's are stored along with their source code on a shared development drive in the office that we all have access to. (We're a pretty small shop)
We split our libraries by function.
Common.Ui.dll has base classes for UI elements.
Common.Data.dll is sort of a wrapper around the Enterprise Library data access classes.
Common.Business is a dumping ground for other common classes that don't fit into one of those.
We create other specialized dlls as needs arise.

How do you organise your code library?

I am interested to know how people organise their code libraries, particularly with respect to reusable components. I am talking in OO terms below but I am interested in how your organise libraries for other types of language also.
For example:
Are you a stickler for class library projects for everything or do you prefer to keep everything in a single project?
Do you reuse your prebuilt DLLs or do you include individual classes from previous projects in your current work? If individual classes, do you share them between the projects to ensure all are kept up to date or do you permit branching?
How large are your reusable elements? How focussed are they? How are they focussed?
What level of reuse do you attain through your preferred practices?
etc.
EDIT
I am not looking for specific guidance here, I am just interested in people's thoughts and practices. I am particularly interested in the reuse of code between disparate projects, rather than within a single project. (Unfortunately the use of 'project' here is misleading - I mean reuse between real-world projects undertaken for customers, not projects in a Visual Studio sense.)
It can generally be guided by deployment considerations:
How will you deploy (i.e. what will you copy onto your production machine)?
If what you are deploying are packaged components (i.e. dll, jar, war, ...), it is wise to organize the "code library" as a collection of packaged set of files.
That way, you will develop directly with the -- dll, jar, war, ... -- which will be deployed on the production platform.
The idea being: if it works with those packaged files, it may still work in production.
the reuse of code between disparate projects, rather than within a single project.
I maintain that kind of reuse is easier in a "component" approach (like the one discussed in the question "Vendor Branches in GIT")
Over more than 40 current projects, we achieved:
technical reuse by systematically isolating any purely technical aspect into an independent framework (typically a log framework, exception framework, KPI - Key Performance Indicator - framework, and so on).
Those technical components are reused into every other projects.
functional reuse by setting a clear applicative architecture in order to divide any functional domain (given the business and functional specifications) into well-defined applications. That would typically involve, for instance, a bus layer which is also a great candidate for exposing services reused by any other projects.
Summary:
For a large functional domain, where a single project is not manageable, a good applicative architecture will lead to natural code reuse.
We follow these principles:
The Release-Reuse Equivalency Principle: The granule of reuse is the granule of release.
The Common Closure Principle: The classes in a package should be closed together against the same kinds of changes.
The Common Reuse Principle: The classes in a package are reused together.
The Acyclic Dependencies Principle: Allow no cycles in the package dependency graph.
The Stable Dependency Principle: Depend in the direction of stability.
The Stable Abstraction Principle: A package should be as abstract as it is stable.
You can find out more over here and over here.
It depends on what platform you work on. I'm a (proud) Java developer, and we have nice tools to organise our dependencies, such as Maven or Ivy.
Whatever else you decide, good source code control is crucial to this, as it allows you to implement your strategy whatever way you like without ending up with lots of unrelated copies of your libraries. Good branching support is essential.