What is DCPS and what does it have to do with DDS?

The OMG DDS specification, version 1.4, says:
The DDS specification describes a Data-Centric Publish-Subscribe (DCPS) model for distributed application communication and integration.
I have three questions about this:
What is DCPS?
Does DDS always use DCPS?
Is DCPS used for other standards?

Historically speaking, DDS and DCPS were not the same thing. The following excerpt is from the introduction of version 1.2 of the DDS specification, published in 2006:
This specification describes two levels of interfaces:
A lower DCPS (Data-Centric Publish-Subscribe) level that is targeted towards the efficient delivery of the proper information to the proper recipients.
An optional higher DLRL (Data Local Reconstruction Layer) level, which allows for a simple integration of the Service into the application layer.
The specification then goes on to explain this in more detail, so you can read it for yourself there.
It turned out that users adopted DCPS much more widely for their applications than DLRL, and over time DDS became synonymous with DCPS. In 2015, OMG split DLRL off from the main DDS specification and published it as a specification of its own. The original acronym DCPS was kept in the DDS specification.
If you look at Annex A - Compliance Points of the current version 1.4 of the DDS specification, you see:
This specification includes the following compliance profiles.
Minimum profile: This profile contains just the mandatory features of DCPS. None of the optional features are included.
So DDS-compliant products always include a mandatory subset of DCPS.
Note that the name DDS may be used to refer to the one DDS specification specifically, but also to the whole ecosystem of DDS specifications -- it depends on the context.
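To give a feel for the model itself: DCPS revolves around typed topics that decouple anonymous publishers from subscribers. The following is an illustrative sketch only, written in plain Scala; it is not any vendor's DDS API, and real DDS adds QoS policies, discovery, keyed instances, and much more:

    // Illustrative only: topic-based decoupling in the spirit of DCPS.
    final case class Topic[T](name: String)

    class Bus {
      private var subs = Map.empty[String, List[Any => Unit]]

      def subscribe[T](topic: Topic[T])(handler: T => Unit): Unit =
        subs = subs.updated(
          topic.name,
          ((a: Any) => handler(a.asInstanceOf[T])) :: subs.getOrElse(topic.name, Nil))

      // every subscriber of the topic gets the sample; publisher and
      // subscribers know only the topic, never each other
      def publish[T](topic: Topic[T], sample: T): Unit =
        subs.getOrElse(topic.name, Nil).foreach(_(sample))
    }

    object Demo extends App {
      val bus   = new Bus
      val temps = Topic[Double]("SensorTemperature")
      bus.subscribe(temps)(t => println(s"reading: $t"))
      bus.publish(temps, 21.5)   // delivered to every subscriber of the topic
    }

In real DDS, the Bus roughly corresponds to a DomainParticipant with its Publishers and Subscribers, and the handler to a DataReader's listener.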

Related

BPMN 2.0: confused about executing processes

I'm a little confused about BPMN 2.0 engines.
If you have modeled a process on the BPMN 2.0 standard in a BPM engine like Activiti, ActiveVOS, or JBoss and you want to execute that process, does the engine convert that BPMN "code" into another kind of code (like BPEL, XPDL, etc.), or is there a way to "execute BPMN 2.0" directly?
Some engines, like IBM BPM, offer only BPMN in their "basic" product. Does that product "execute" BPMN directly, or does it convert it in some way?
The same question applies to JBoss: if you model a BPMN process, can you execute it?
Kind regards
BPMN 2.0 is an OMG specification that you can find here
The specification describes several conformance levels. In your case, the ones you are interested in are "Process Modeling Conformance" and "Process Execution Conformance".
The specification also provides rules for serialization, which is based on XML.
Some editors use this serialization internally, others do not. In the latter case, they usually provide import/export in BPMN 2.0 format.
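To make that serialized form concrete, here is a minimal, hand-written example of how a trivial executable process looks in BPMN 2.0 XML (the IDs and the script body are made up for illustration; engines such as Activiti can deploy files of this shape directly):

    <?xml version="1.0" encoding="UTF-8"?>
    <definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                 targetNamespace="http://example.org/demo">
      <process id="helloProcess" isExecutable="true">
        <startEvent id="start"/>
        <sequenceFlow id="flow1" sourceRef="start" targetRef="sayHello"/>
        <scriptTask id="sayHello" scriptFormat="groovy">
          <script>println "hello from BPMN"</script>
        </scriptTask>
        <sequenceFlow id="flow2" sourceRef="sayHello" targetRef="end"/>
        <endEvent id="end"/>
      </process>
    </definitions>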
What you have to be aware of is that, even though a lot of execution behavior has been defined, there are still many points where it is missing, or at least where vendors do not all interpret the specification in exactly the same way.
The BPMN Model Interchange Working Group is working to fill that gap and to provide guidelines for proper serialization and exchange of BPMN files between vendors.
To sum up, the short answer is: it doesn't really matter whether they execute it directly or convert it internally. The only important thing is that the behavior respects "Process Execution Conformance".
Regards,

Difference between RF and BPMN2?

I know this question is for the most part a moot point since RF files are no longer supported, but, as someone relatively new to the Drools environment working with a much older project, what is the difference between RF files and the newer BPMN2 processes (aside from RF being discontinued and the names)? Do they handle rule flow differently or is the difference mainly a different file extension?
The only difference is the underlying XML that is used to store the process as a file. RuleFlow (RF) is a proprietary format, created by the Drools team, to store rule-flow information. Once the BPMN 2.0 specification was available (and it fit the requirements the team had), we switched to it instead of our proprietary format.
We see no disadvantages to using BPMN 2.0 compared to RF (the language has even become more expressive), and you can easily transform an RF file to BPMN 2.0.
Execution will be identical; this is only about how the process is stored as an XML file.

Exposing protocol-buffers interfaces

I am building a distributed system that consists of modules/applications with interfaces defined by protobuf messages.
Is it a good idea to expose those protobuf messages to a client directly? Or is it better to prepare a shared library that is responsible for translating a (let's assume) method-based interface into a protobuf-based one for each module, so that clients are not aware of protobuf at all?
It's neither a "good idea" nor a bad one. It depends on whether or not you want to impose protocol buffers onto your consumers. A large part of that decision is, then:
Who are your consumers? Do you mind exposing the protobuf specifics to them?
Will the clients be written in languages which have protobuf support?
My $0.02 is that this is a perfect use case for Protocol Buffers, since they were specifically designed with cross-system, cross-language interchange in mind. The .proto file makes for a concise, language-independent, thorough description of the data format. Of course, there are other similar/competing formats & libraries out there to consider (see: Thrift, Cap'n Proto, etc.) if you decide to head down this path.
If you are planning to define interfaces that take Google Protobuf message classes as arguments, then according to this and that section in Google's Protobuf documentation, it is not a good idea to expose Protobuf messages to a client directly. In short, with every version of Protobuf, the generated code is likely not to be binary compatible with older code. So don't do it!
However, if you are planning to define interfaces that take byte arrays containing serialized Protobuf messages as function/method parameters then I totally agree with Matt Ball's answer.
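To make the shared-library option concrete, here is a hedged sketch in Scala. UserProto stands in for protobuf-generated code and is stubbed with a toy encoding so the sketch is self-contained; the point is that clients only ever see the method-based UserService:

    // Hedged sketch: a facade that keeps the wire format out of the client API.
    final case class User(id: Long, name: String)   // domain type clients see

    object UserProto {                              // stand-in for generated code
      def toBytes(u: User): Array[Byte]   = s"${u.id}:${u.name}".getBytes("UTF-8")
      def fromBytes(b: Array[Byte]): User = {
        val Array(id, name) = new String(b, "UTF-8").split(":", 2)
        User(id.toLong, name)
      }
    }

    // method-based interface: no protobuf types leak to the caller
    trait UserService {
      def lookup(id: Long): Option[User]
    }

    // the shared library translates to and from the wire format internally
    class WireUserService(fetch: Long => Option[Array[Byte]]) extends UserService {
      def lookup(id: Long): Option[User] = fetch(id).map(UserProto.fromBytes)
    }

    // val svc: UserService = new WireUserService(id => Some(UserProto.toBytes(User(id, "demo"))))
    // svc.lookup(7)   // Some(User(7, "demo"))

Swapping the toy encoding for real generated classes changes only UserProto; the UserService that clients compile against stays stable even when the messages are regenerated with a newer Protobuf version, which addresses the binary-compatibility concern above.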

Scala actors & Ambient Reference

In Philipp Haller's PhD thesis, he mentions in section 5.1 (Future Work) that one of the interesting areas of research would be to extend the framework with ambient references, and he cites Van Cutsem's paper.
Excerpt:
The Scala Actors library includes a runtime system that provides basic support for remote (i.e., inter-VM) actor communication. To provide support for fault tolerance (for instance, in mobile ad-hoc networks), it would be interesting to extend the framework with remote actor references that support volatile connections, similar to ambient references [36]. Integrating transactional abstractions for fault-tolerant distributed programming (e.g., [52, 142]) into Scala Actors is another interesting area for future work.
The cited paper is:
[36] Tom Van Cutsem, Jessie Dedecker, Stijn Mostinckx, Elisa Gonzalez Boix, Theo D’Hondt, and Wolfgang De Meuter. Ambient references: addressing objects in mobile networks. [...] pages 986–997. ACM, October 2006.
Is this what Akka did? If not, do you think it is still relevant to research this area given the fact that Akka exists today?
Yes, this is possible with Akka.
There are two ways to achieve this, as far as I know:
akka-remote - provides remote actor refs, but you need to decide where each actor should exist.
akka-cluster - provides cluster sharding: it automatically manages each actor's physical location and ensures that a given shard (actor) is present on at most one node in the cluster. A sketch of this approach follows below.
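As a hedged sketch of the cluster-sharding option, against the classic (untyped) Akka API; the entity, the message names, and the shard count are made up for the example:

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}
    import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

    // message envelope carrying the logical entity id used for routing
    final case class Envelope(entityId: String, payload: Any)

    class Counter extends Actor {
      private var n = 0
      def receive = { case _ => n += 1; sender() ! n }
    }

    object ShardedCounters {
      def start(system: ActorSystem): ActorRef = {
        val extractEntityId: ShardRegion.ExtractEntityId = {
          case Envelope(id, payload) => (id, payload)
        }
        val extractShardId: ShardRegion.ExtractShardId = {
          case Envelope(id, _) => (math.abs(id.hashCode) % 10).toString
        }
        // the shard region routes each entity id to exactly one live actor
        // somewhere in the cluster, and recreates it if its node goes down
        ClusterSharding(system).start(
          typeName        = "Counter",
          entityProps     = Props(new Counter),
          settings        = ClusterShardingSettings(system),
          extractEntityId = extractEntityId,
          extractShardId  = extractShardId)
      }
    }

Messages are then sent to the region ref returned by start, e.g. region ! Envelope("counter-42", "inc"), without knowing which node hosts the entity.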

Is there a standard definition of what constitutes a version (revision) change?

I am currently in bureaucratic hell at my company and need to define what constitutes the different levels of software change to our test programs. We have a rough practice that we follow internally, but I am looking for a standard (if it exists) to reference in our Quality system. I recognize that systems may vary greatly between developers, but ultimately I am looking for a "best practice" guide to what constitutes a major change, a minor change etc. I would like to reference a published doc in my submission to our quality system for ISO purposes if possible.
To clarify: the software developed at my company is used internally for test automation of semiconductors. We are not selling this code, and versioning is really for record keeping only. We are using the x.y.z changes to determine the level of sign-off and approval needed for release.
A good practice is to use three-level revision numbers:
x.y.z
x is the major version
y is the minor version
z is for bug fixes
The important thing is that two different software versions with the same x should be binary compatible. A software version with a greater y than another, but with the same x, may add features but not remove any; this ensures portability within the same major number. Finally, z changes should not alter any functional behavior except for bug fixes.
Edit:
Here are some links to revision-numbering schemes in use:
http://en.wikipedia.org/wiki/Software_versioning
http://apr.apache.org/versioning.html
http://www.advogato.org/article/40.html
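If you need to automate the sign-off decision from the question, classifying the gap between two x.y.z versions takes only a few lines. A sketch in Scala (the level names are illustrative):

    // Hedged sketch: classify the gap between two x.y.z versions so the
    // release process can pick the matching sign-off level.
    final case class Version(x: Int, y: Int, z: Int)

    object Version {
      // "1.4.2" -> Version(1, 4, 2); no validation, for brevity
      def parse(s: String): Version = {
        val Array(x, y, z) = s.split('.').map(_.toInt)
        Version(x, y, z)
      }

      def changeLevel(from: Version, to: Version): String =
        if      (to.x != from.x) "major"  // compatibility may break: full approval
        else if (to.y != from.y) "minor"  // features added, none removed
        else if (to.z != from.z) "patch"  // bug fixes only: lightweight sign-off
        else                     "none"
    }

    // Version.changeLevel(Version.parse("1.4.2"), Version.parse("1.5.0"))  // "minor"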
I would add build number to the x.y.z format:
x.y.z.build
x = major feature change
y = minor feature change
z = bug fixes only
build = incremented every time the code is compiled
Including the build number is crucial for internal purposes where people are trying to figure out whether or not a particular change is in the binaries that they have.
To enlarge on what lewap said, use
x.y.z
where z-level changes are almost entirely bug fixes that don't change interfaces or involve external systems
where y-level changes add functionality and may change the UI/API interface, in addition to fixing more serious bugs that may involve external systems
where x-level changes involve anything from a complete rewrite/redesign, to changing the database structures, to changing databases (i.e., from Oracle to SQL Server); in other words, anything that isn't a drop-in change and requires a "port" or "conversion" process
I think it might differ depending on whether you are working on internal software or on external software (a product).
For internal software, it will almost never be a problem to use a formally defined scheme. For a product, however, the version or release number is in most cases a commercial decision that does not reflect any technical or functional criteria.
In our company, the x.y in an x.y.z numbering scheme is determined by the marketing boys and girls. The z and the internal build number are determined by the R&D department; they track back into our revision control system and are related to the sprint in which they were produced (sprint is the Scrum term for an iteration).
In addition, formally defining some level of compatibility between versions and releases could cause your numbers to move up very rapidly, or to hardly move at all, which might not reflect added functionality.
I think the best approach to take when explaining this to your co-workers is to use examples drawn from well-known and successful software packages, and the way they approach major and minor releases.
The first thing I would say is that the major.minor dot notation for releases is a relatively recent invention. For example, most releases of UNIX actually had names (which sometimes included a meaningless number) rather than version numbers.
But assuming you want to use major.minor numbering, the major number indicates a version that is basically incompatible with much that went before. Consider the change from Windows 2.0 to 3.0: most 2.0 applications simply didn't fit in with the new overlapped windows in Windows 3.0. For less all-encompassing apps, a radical change in file formats (for example) could be a reason for a major version change; word processors and graphics apps often work this way.
The other reason for a major version number change is that the user notices a difference. Once again, this was true for the change from Windows 2.0 to 3.0 and was responsible for the latter's success. If your app looks very different, that's a major change.
As for the minor version number, this is typically used to indicate a change that actually is quite major but won't be noticeable to the user. For example, the internal differences between Win 3.0 and Win 3.1 were actually quite major, but the interface stayed the same.
Regarding the third version number, few people know what it really means, and fewer care. For example, in my everyday work I use the GNU C++ compiler version 3.4.5; how does this differ from 3.4.4? I haven't a clue!
As I said before in an answer to a similar question: the terminology used is not very precise. There is an article describing the five relevant dimensions; data-management tools for software development don't tend to support more than three of them consistently at the same time. If you want to support all five, you have to describe a development process:
Version (semantics: modification)
View (semantics: equivalence, derivation)
Hierarchy (semantics: consists of)
Status (semantics: approval, accessibility)
Variant (semantics: product variations)
Peter van den Hamer and Kees Lepoeter (1996), "Managing Design Data: The Five Dimensions of CAD Frameworks, Configuration Management, and Product Data Management", Proceedings of the IEEE, Vol. 84, No. 1, January 1996.