Business drives me crazy. They want us to connect one application to two versions of the same service. Both versions of the service differ in some properties. In an ideal world, it would only be a configuration change to use one or the other version of the service.
Does anybody know a good pattern that could be used? Or anything else I should think about?
I'm facing an OSGi problem, and I'm not sufficiently well versed in OSGi details to figure out a way forward.
My problem is this:
I have a service, which lives behind a well-defined interface and periodically emits a file in a particular location. This is controlled by Config Admin (via a config file in Karaf).
Some components provide this service to others via a Karaf feature file, bundling my service in a particular version (1.X.0).
Other components provide this service in a newer version (1.Y.0, where Y > X), either via another feature file, or just by adding it to their kar file.
As these are just minor version changes, the consuming services don't really care which service they talk to (the API is the same).
My problem is that both of these bundles are Active in Karaf, and there is a race condition as to who gets to overwrite whose output file.
I tried making the @Component into a singleton (using scope = ServiceScope.SINGLETON), and while this might solve the case of every service consumer using the same implementation, the issue of file overwriting persists, as both services are Active.
Basically, I'm looking for a way to tell OSGi: "don't bother with the older version; the new version (which has the same major version) is fine for all of the consumers (who use the default range of [1.X,2))".
As the config file is akin to an "API" for enabling my service I would like to avoid having multiple config files for the different versions.
If possible, I would like to keep the version selection logic outside of my service. In theory, the service could listen for other versions of bundles providing the same service interface and take appropriate action - but this seems very cumbersome to me. Surely there is a better way, with less impact on the business logic code (i.e. my service)?
The simple answer is of course, why bother with the old bundle? Just uninstall it?
Anyway, the usual answer is then: "I can't, for some reason." Some solutions, in my preferred order:
Remove older bundle
Make your components require a configuration and configure the appropriate component, the other one won't run. This is basically the pattern that gave us the Configurator specification. This is actually a really good solution that I use everywhere. It allows the application to be configured in fine detail.
Just solve the configuration file conflict in the bundles.
Use start levels to never start the older bundle. A bit of a hack.
Register a service property with the service and let the references filter on that property. Rabbit hole.
Use Service Hooks to filter out the old service. This introduces an ordering dependency, since the service hook must be registered before anyone uses it, so I tend to shy away from it. Here is an implementation
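The require-a-configuration approach above can be sketched as a Declarative Services descriptor. A minimal, illustrative example (all names are invented): with configuration-policy="require", the component only activates once a configuration with its PID exists in Config Admin, so the unconfigured version of the service simply never runs.

```xml
<!-- OSGI-INF/com.example.file.emitter.xml (illustrative names) -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
               name="com.example.file.emitter"
               configuration-policy="require">
  <!-- Activates only when a configuration with PID
       com.example.file.emitter is present in Config Admin -->
  <implementation class="com.example.FileEmitter"/>
  <service>
    <provide interface="com.example.api.FileEmitterService"/>
  </service>
</scr:component>
```

The same effect can be had with the annotation form, @Component(configurationPolicy = ConfigurationPolicy.REQUIRE), which SCR compiles into a descriptor like the one above.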
This is, imho, a typical use case that, in retrospect, made the system much more complicated than expected. One hack like this does not seem too bad, but these hacks tend to multiply like rabbits. OSGi is about clean modules communicating over well-defined services. Your description suggests you're there - but not solving the problem correctly will lead you down the path to the big ball of mud again :-(
For Apache Karaf there is a special way to implement the first solution from Peter (Remove older bundle).
Set dependency="true" in the feature file for the bundle that provides the service.
This way Apache Karaf will automatically install the best bundle for the requirements of your other bundles. In this case it should only install the providing bundle with the highest minor version number.
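A sketch of what that feature file might look like (coordinates and versions are illustrative; 1.Y.0 stands in for the newer minor version as in the question):

```xml
<!-- features.xml (illustrative names and versions) -->
<features name="my-features"
          xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
  <feature name="consumer-feature" version="1.0.0">
    <!-- dependency="true": Karaf treats this bundle as a dependency and
         installs the best matching provider for the consumers'
         requirements, i.e. the highest matching minor version -->
    <bundle dependency="true">mvn:com.example/service-provider/1.Y.0</bundle>
    <bundle>mvn:com.example/service-consumer/1.0.0</bundle>
  </feature>
</features>
```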
Getting started with Bluemix: what were your first applications using the platform? Could you advise me? And forgive me if the question is too basic.
Thank you for your attention.
I think it depends on your programming experience and your personal preference. If you want to begin programming with the classic object-oriented paradigm, you should try the Liberty runtime. If you prefer scripting languages, you could give the Python, Ruby or Go runtimes a try. All of them provide a sample application that you can extend as you want, and have very detailed documentation. I also suggest you take a look at IBM Containers; they are very interesting and powerful, and they let you do potentially anything with the platform.
If you come from the on-premise world, please notice that Bluemix is built on Cloud Foundry, and there are two important considerations to think about:
Local file system storage is short-lived. When an application instance crashes or stops, the resources assigned to that instance are reclaimed by the platform including any local disk changes made since the app started. When the instance is restarted, the application will start with a new disk image. Although your application can write local files while it is running, the files will disappear after the application restarts.
Instances of the same application do not share a local file system. Each application instance runs in its own isolated container. Thus if your application needs the data in the files to persist across application restarts, or the data needs to be shared across all running instances of the application, the local file system should not be used.
Personally, since I had some experience with JEE and WAS, my first application was a web app developed on the Liberty runtime.
I suggest you become familiar with IBM Bluemix DevOps Services, which lets you develop, build and deploy from a web IDE.
The various runtimes and services within Bluemix provide two types of samples to help you get started: boilerplates, which are samples you can extend to develop new applications, and samples with a "Deploy to Bluemix" button, which automatically gets the sample installed and ready to try. You usually start with something like this to see it working, and then go from there.
This question already has been answered in a way. But I think what you are looking for are the types of applications you can develop using Bluemix.
To directly answer your question - the first application I developed used the Concept Insights Watson service to extract insights from some news articles and create a concept-based news search. I also experimented with the Language Translation service, wherein I converted the contents of a web page from English to Spanish.
If you look through the documentation pages for the various Watson services, you would come across various use cases where a particular service is applicable.
On a more general note, I can see that Bluemix helps us write some really smart applications in an easy way. The Watson services provide a simple interface to application developers by taking away the highly complex machine learning and AI work that would need a good level of expertise if done on our own.
Additionally, Bluemix is just like any other cloud platform, e.g. AWS, Google Compute Engine or Azure. Bluemix provides relational databases, queues, time-series databases, containers, etc. as part of the platform. These can be used by the application you are developing to cater to use cases such as inter-process communication and data storage.
Hopefully this answer gives you some insight into what applications you can write with the Bluemix Concept Insights service.
Our local applications run on WAS ND. When we tried Liberty on Bluemix (as an application, not a service), the typical problems were mostly around the 12 factors:
Config & dev/prod parity --> Earlier, our configuration lived in files inside the application or was configured in WAS. With Liberty, we were forced to externalize it, and it was easy to set up environment variables in Bluemix.
Processes --> As statefulness was no longer an option, we had to change our application to store session state externally, in relational as well as document DBs.
Logs --> Logs are no longer available in local log files as before.
Ephemeral instances --> As mentioned by Umberto above.
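The config/externalization point above can be sketched in plain Java. This is only an illustration; the variable name DB_URL and the fallback value are invented, not anything Bluemix defines:

```java
// Minimal sketch of 12-factor style externalized configuration:
// read settings from the environment, falling back to a default
// that is only suitable for local development.
public class AppConfig {

    // Look up a setting from the environment, with a local fallback.
    static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // On the platform, DB_URL would be set as an environment
        // variable; locally, the fallback is used.
        System.out.println(get("DB_URL", "jdbc:h2:mem:dev"));
    }
}
```

The same idea is what Liberty's variable substitution and Bluemix's environment variables give you without bundling config files into the application.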
I'm testing out Azure Service Fabric and started adding a lot of actors and services to the same project - is this okay to do, or will I lose any of the Service Fabric features such as failover, scalability, etc.?
My preference here is clearly 1 actor/1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead you have when doing similar implementations on other, somewhat similar platforms.
I think it defies the point of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of these services as small as possible - and rather depend on/call other services in order to provide functionality outside of the responsibility of the project you are currently implementing.
In regards to scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project - at least that's implied by the application manifest format. What you will not be able to do, though, is update services/actors within your project independently. As an example: if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both, since they are part of the same code package and share a version number.
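The independent-scaling point can be seen in the DefaultServices section of an ApplicationManifest, where each service in the same application carries its own instance count. A rough, illustrative excerpt (service names and counts are invented):

```xml
<!-- ApplicationManifest.xml excerpt; names and counts are illustrative -->
<DefaultServices>
  <Service Name="OrderService">
    <!-- InstanceCount="-1" means one instance per node -->
    <StatelessService ServiceTypeName="OrderServiceType" InstanceCount="-1">
      <SingletonPartition />
    </StatelessService>
  </Service>
  <Service Name="ReportingService">
    <!-- Scaled independently of OrderService -->
    <StatelessService ServiceTypeName="ReportingServiceType" InstanceCount="3">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>
```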
When designing a new J2EE based enterprise framework, do I have to prepare for the situation where separate business modules have to use different databases and have to run on different application server instances?
From another point of view: has anyone ever experienced a real life requirement for different databases & servers per module? If yes, what was the size of that enterprise?
Because (as far as I can see) this makes things a lot more complicated, and with the previous version of this framework (and in smaller banks), the case above never happened.
Thanks for the replies!
I'm not sure I've understood this phrasing correctly
where separate business modules have to use different databases
All the time.
Perhaps we're talking about different things here. I've never met any organisation without at least two databases. That includes me and my CD catalogue and guitar tunes databases on my laptop.
Do you mean different database vendors? Or database versions, like Oracle vX and Oracle vY? Even under that definition, I can think of no customers I've encountered who have universally standardised on one vendor or version.
So, would I expect a non-trivial system to have some modules looking at one database and some looking at another? Yes, absolutely.
Would I expect some modules to look at two databases? Yes. Reference data in one, live data in another, history in yet another.
Different modules on different servers - yes. For reasons of isolation and scalability. That's one thing App Servers do quite well.
Overall, why do you see this as a problem? Your modules look up their JDBC connections in JNDI; they don't need to know that they are using different databases. It's an admin problem to wire up the modules correctly.
One major issue could be the use of XA transactions, but it's often possible to avoid updating both databases from the same module - or, if they are updated from the same module, to avoid doing so in the same transaction.
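The "admin wires it up, the module just looks it up" point can be sketched without an app server. This container-free illustration stands in for a JNDI lookup (all names and URLs are invented); in a real server the map would be the server's JNDI context, populated from its configuration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of JNDI-style indirection: modules ask for a logical name,
// and the admin-side wiring decides which physical database it maps to.
public class Wiring {
    // Stand-in for the server's JNDI naming context.
    static final Map<String, String> jndi = new HashMap<>();

    public static void main(String[] args) {
        // Admin-side wiring (in a real server: datasource configuration).
        jndi.put("jdbc/ReferenceData", "jdbc:oracle:thin:@refhost:1521/REF");
        jndi.put("jdbc/LiveData", "jdbc:postgresql://livehost/live");

        // Module-side lookup: the module never hard-codes vendor or host,
        // so pointing it at a different database is purely an admin change.
        System.out.println(jndi.get("jdbc/ReferenceData"));
    }
}
```

In container code this would be an InitialContext lookup of the same logical name; the module's source is identical whether one database or five sit behind it.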
Since I'm not a huge fan of any of the current solutions for managing the resources and knowledge that I have, I was thinking about making my own solution, which will involve custom code as well as possible integration of FOSS solutions. I would start development on my local machine, but if I like it, how difficult would it be to migrate over to a public server and let others also use this tool? What kinds of challenges might I be facing?
In theory, nothing, beyond just the process of moving stuff to the new machine. You can set up your own servers, on your own ports (port 80 for example).
You can even create your own fake domain at home, with just a tweak to the /etc/hosts files (or the equivalent on Windows).
Now, if you're developing on Windows and hosting on Unix, you'll have platform issues, so I'd suggest against that, at least for a first project.
But other than that, it should be straightforward.
You didn't hard-code any paths to "localhost", did you? If so, that should be the first thing to strip out. Either use relative paths, or have a configurable {AppPath} variable of some kind that you only ever need to change once.
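The {AppPath} idea can be sketched in Java, since the asker hasn't said which language they're using (the property name app.base is invented for illustration): resolve every endpoint against one configurable base instead of scattering "localhost" through the code.

```java
import java.net.URI;

// Sketch of a single configurable base URL: change one setting when
// moving from the local machine to a public server, nothing else.
public class BaseUrl {
    public static void main(String[] args) {
        // -Dapp.base=https://example.org would override this at launch;
        // the default is only for local development.
        String base = System.getProperty("app.base", "http://localhost:8080");
        URI endpoint = URI.create(base).resolve("/api/items");
        System.out.println(endpoint);
    }
}
```

The same pattern works for filesystem paths: resolve them against one configurable root rather than hard-coding absolute locations.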
By the way, what language/framework are you using? It would help us provide sample code.
I would add that documentation is a highly important factor in any project that is to be quickly embraced by the public. The tendency when developing in-house projects, especially ones just for your own personal use, is to neglect or even completely ignore documentation of all kinds, both usage documentation and comments in the code. If users aren't told how to use the product, they won't use it; and if other potential developers don't know how or why things are done the way they are, or what the purpose of things is, they either won't bother trying or will cause other problems unintentionally.