Having OSGi services use the latest version of a bundle, even if multiple bundle versions are installed

I'm facing an OSGi problem, and I'm not sufficiently well versed in OSGi details to figure out a way forward.
My problem is this:
I have a service, which lives behind a well-defined interface, and periodically emits a file in a particular location. This is controlled by Config Admin (via a config file in Karaf).
Some components provide this service to others via a Karaf feature file, bundling my service in a particular version (1.X.0).
Other components provide this service in a newer version (1.Y.0, where Y > X), either via another feature file or just by adding it to their KAR file.
As these are just minor version changes, the consuming services don't really care which service they talk to (the API is the same).
My problem is that both of these bundles are Active in Karaf, and there is a race condition as to who gets to overwrite whose output file.
I tried making the @Component into a singleton (using scope = ServiceScope.SINGLETON), and while this might solve the case of every service consumer using the same implementation, the issue of file overwriting persists, as both services are Active.
Basically, I'm looking for a way to tell OSGi: "don't bother with the older versions; the new version (which has the same major version) is fine for all of the consumers (who use the default import range of [1.X,2))".
As the config file is akin to an "API" for enabling my service, I would like to avoid having multiple config files for the different versions.
If possible, I would like to keep the version location logic outside of my service. I guess in theory, the service could listen for other versions of bundles providing the same service interface, and take appropriate action - but this seems very cumbersome to me. Surely there is a better way, which has less impact on the business logic code (i.e. my service)?

The simple answer is of course: why bother with the old bundle? Just uninstall it.
Anyway, the usual answer is then: I can't, for some reason. So here are some solutions, in my preferred order:
Remove older bundle
Make your components require a configuration and only configure the appropriate component; the unconfigured one won't run (see the sketch after this list). This is basically the pattern that gave us the Configurator specification. It is actually a really good solution that I use everywhere, and it allows the application to be configured in fine detail.
Just solve the configuration file conflict in the bundles.
Use startlevels to never start the older bundle. A bit of a hack.
Register a service property with the service and let the references filter on that property. Rabbit hole.
Use Service Hooks to filter out the old service. This introduces an ordering dependency, since the service hook must be registered before anyone uses the service, so I tend to shy away from it.
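A minimal sketch of the configuration-required approach, using Declarative Services annotations; the component name, PID, and FileEmitter interface are made up for illustration. Only the bundle version whose PID you actually configure (for example via a hypothetical etc/com.example.file.emitter.cfg in Karaf) activates; the other stays resolved but never runs, which also removes the file-overwriting race.

import java.util.Map;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

// Hypothetical implementation: without a configuration for the PID
// "com.example.file.emitter", this component is never activated.
@Component(
    name = "com.example.file.emitter",
    configurationPolicy = ConfigurationPolicy.REQUIRE,
    service = FileEmitter.class)
public class FileEmitterImpl implements FileEmitter {

    @Activate
    void activate(Map<String, Object> config) {
        // Read the output location from the configuration supplied by Config Admin,
        // e.g. a hypothetical "output.dir" property.
    }
}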
This is, imho, a typical use case that, in retrospect, makes the system much more complicated than expected. One hack like this does not seem too bad, but these hacks tend to multiply like rabbits. OSGi is about clean modules communicating through well-defined services. From your description it seems you're there, but not solving this problem correctly will lead you down the path to the big ball of mud again :-(

For Apache Karaf there is a special way to implement the first solution from Peter (Remove older bundle).
Set dependency=true in the feature file for the bundle that provides the service.
This way Apache Karaf will automatically install the best bundle for the requirements of your other bundles. In this case it should only install the providing bundle with the highest minor version number.
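A minimal feature file sketch of that setup; the feature name and Maven coordinates are made up, and the relevant part is dependency="true" on the providing bundle:

<features name="example-features" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
    <feature name="file-emitter-consumer" version="1.0.0">
        <!-- dependency="true": Karaf installs this bundle only if another bundle
             requires it, and resolves the best (highest matching) version. -->
        <bundle dependency="true">mvn:com.example/file-emitter/1.X.0</bundle>
        <bundle>mvn:com.example/file-emitter-consumer/1.0.0</bundle>
    </feature>
</features>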

Related

Kubernetes shared library among pods

I have 15 pods that run different PHP applications in a Kubernetes cluster. Each application must include a PHP library that is updated regularly, and it's always the same lib/version for all pods.
What is the best approach to share a library with all pods?
I tried to find a solution, but my doubts persist.
What I thought:
Include the lib in the application container during the build. This creates consistent, self-contained applications, but I have to re-deploy all 15 applications on every library update. Does that make sense?
Include the lib as a shared volume among all pods. In this case I can update only the lib and all applications pick it up. That seems better, even if I don't like that every application then depends on the shared volume.
What are your thoughts regarding this issue?
Always make your images be self-contained. Don't build an image that can't run unless something outside the container space is present, or try to use a volume to share code. (That is, pick your first option over the second.)
Say you have your shared library, and you have your 15 applications A through O that depend on it. Right now they're running last week's build image: service-a:20210929 and so on. Service A needs a new feature with support in the shared library, so you update that and redeploy it and announce the feature to your users. But then you discover that the implementation in the shared library causes a crash in service B on some specific customer requests. What now?
If you've built each service as a standalone image, this is easy. helm rollback service-b, or otherwise change service B back to the 20210929 tag while service A is still using the updated 20211006 tag. You're running mixed versions, but that's probably okay, and you can fix the library and redeploy everything tomorrow, and your system as a whole is still online.
But if you're trying to share the library, you're stuck. The new version breaks a service, so you need to roll it back; but rolling back the library would require you to also know to roll back the service that depends on the newer version, and so on. You'd have to undeploy your new feature even though the thing that's broken is only indirectly related to it.
If the library is really used everywhere, and if disk space is actually your limiting factor, you could restructure your image setup to have three layers:
The language interpreter itself;
FROM language-interpreter, plus the shared library;
FROM shared-library:20211006, plus this service's application code.
If they're identical, the lower layers can be shared between images, and the image pull mechanics know to not pull layers that are already present on the system. However, this can lead to a more complex build setup that might be a little trickier to replicate in developer environments.
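A rough sketch of that layering as two hypothetical Dockerfiles (base image, registry name, and paths are made up); the shared-library image is built once per library release, and each service image builds on top of it:

# Dockerfile for the shared-library base image
FROM php:8.1-apache
COPY shared-lib/ /usr/local/lib/shared-lib/
# built and tagged once, e.g. as registry.example.com/shared-library:20211006

# Dockerfile for one service
FROM registry.example.com/shared-library:20211006
COPY service-a/ /var/www/html/

Because all the service images share the same base layers, a node only pulls the service-specific layer for each of them, while rollbacks still happen per service image.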
Have you considered making a new microservice that owns that shared library, and having the other 15 pods send requests to that one microservice for what they need?
If this architecture works, you would only have to update one deployment when the shared library is updated.
In our company, lots of teams use a common dict(ionary) library. Since we generate this library from tools and make sure the generated code is good to go, what we did in the VM environment was to push the library to all servers, so no one had to worry about library version issues.
We are moving to Kubernetes, which requires each module/service owner to keep up with library version changes by deploying a new image. This changes the way we work; last week we found a bug caused by one module forgetting to update the dict library :)
So we are thinking of moving some of these configuration-like libraries to be deployed by a configuration center, which can publish configurations dynamically.

ADO.NET Provider for SAP HANA - Version mismatch issue

I have a WPF application which uses the ADO.NET client for SAP HANA (version 1.0.9.0) in my local development environment, and I have added a reference to that SAP.DATA.HANA.v4.5.dll in my code. The connection works fine.
When I try to run the same application on a server that has a different version of the ADO.NET client, it throws an error.
Shouldn't it refer to the client from its install location (C:\Program Files\sap\hdbclient\ado.net\v4.5) instead of by version number?
Can someone please explain if I am doing something wrong?
You're not doing anything wrong. But I think you're missing a key piece of knowledge about assembly references. And there are ways to deal with it, both for you and SAP.
When .NET references an assembly, it does so against the exact version used at compile time. Which means if at runtime the version number is different, it fails by default. Differences in the signing key used for strong-named assemblies or cultures can also cause the assembly loading to fail. This previous answer discusses that and suggests an approach to dealing with it using the AssemblyResolve event. You might have a lot more to do if your application has a complex plugin loading mechanism, but if it's really simple, you can get it updated with minimal effort. Of course, you'll still have issues if the client DB makes a breaking change, but they'll be different issues.
Another approach is to edit your application's configuration file to redirect assembly versions yourself. Sometimes NuGet packages are even set up to automatically insert these redirects into the application configuration file. Unfortunately this isn't an easy strategy with HANA given how frequently the versions change, and you need to know the exact version that's present to do this. So if you're deploying your application to multiple clients with different HANA versions, it could be a bit of a nightmare.
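For illustration, a manual redirect in the application's configuration file could look something like the following; the assembly name, public key token, and version numbers here are placeholders, so check the actual values of the installed SAP.DATA.HANA.v4.5.dll (e.g. in the GAC) before using them:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect the version the app was compiled against to the version
             actually installed on this machine. -->
        <assemblyIdentity name="Sap.Data.Hana.v4.5" publicKeyToken="0123456789abcdef" culture="neutral" />
        <bindingRedirect oldVersion="1.0.9.0-1.0.120.0" newVersion="1.0.120.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>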
The sad thing is that there's already an existing mechanism in place for dealing with this sort of thing by the vendor who publishes the library. It's called a publisher policy assembly, but SAP botched its delivery of the policy assembly by including (and embedding) the most useless binding redirect ever conceived.
Normally, with a binding redirect, the purpose is to allow for seamless upgrading of an assembly to a newer version as long as your types haven't changed their public interfaces significantly. All you do is specify the range of versions that can be handled by this version in the XML.
SAP, however, has this as their binding redirect element:
<bindingRedirect oldVersion="1.0.120.0-1.0.120.0" newVersion="1.0.120.0" />
Which does you the wonderful favor of allowing only the currently installed version of their DB client (in this case, 1.0.120.0). And no, that DLL doesn't change its ADO.NET classes enough for this sort of strict versioning. In my case, everything about the different assembly versions was exactly the same.
Had I the ability to get in touch with someone sane on the HANA team with enough clout to fix this, I'd suggest they set the oldVersion attribute to the oldest upgradable version and do the same in every subsequent driver package. So if you've installed 1.0.120.0, it'd look something more like this:
<bindingRedirect oldVersion="1.0.9.0-1.0.120.0" newVersion="1.0.120.0" />
And then it could be used by any software on the machine without extra effort. I mean, it's in the GAC. It wouldn't be that much effort for them to make this significantly easier. Although they'd need to patch all versions of the HANA client, as many applications (even those shipped by SAP) are tied to specific HANA versions. If they only fixed it in the latest version, we wouldn't see the benefits for several years.

Should actors/services be split into multiple projects?

I'm testing out Azure Service Fabric and started adding a lot of actors and services to the same project - is this okay to do, or will I lose any Service Fabric features such as failover, scalability, etc.?
My preference here is clearly 1 actor/1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead you have when doing similar implementations on other, somewhat similar platforms.
I think it defies the point of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of these services as small as possible - and rather depend on/call other services in order to provide functionality outside of the responsibility of the project you are currently implementing.
As regards scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project - at least that's what the application manifest format implies. What you will not be able to do, though, is update services/actors within your project independently. As an example: if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both, since they are part of the same code package and share a version number.
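As an illustrative sketch (service and type names, versions, and instance counts are invented), the application manifest declares each service separately even when both come from the same code package, so their instance counts can be scaled independently:

<ApplicationManifest ApplicationTypeName="MyAppType" ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyServicesPkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <DefaultServices>
    <!-- Two services from the same project/code package, scaled independently. -->
    <Service Name="FrontendService">
      <StatelessService ServiceTypeName="FrontendServiceType" InstanceCount="5">
        <SingletonPartition />
      </StatelessService>
    </Service>
    <Service Name="WorkerService">
      <StatelessService ServiceTypeName="WorkerServiceType" InstanceCount="2">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
</ApplicationManifest>

Note that both services share the one ServiceManifestVersion, which is exactly why changing either of them means redeploying both.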

Web Application deployment and database/runtime data management

I have decided to finally nail down my team's deployment processes, soup-to-nuts. The last remaining pain point for us is managing database and runtime data migration/management. Here are two examples, though many exist:
If releasing a new "Upload" feature, automatically create upload directory and configure permisions. In later releases, verify existence/permissions - forever, automatically.
If a value in the database (let's say an Account Status of "Signup") is no longer valid, automatically migrate data in database to proper values, given some set of business rules.
I am interested in implementing a framework that allows developers to manage and deploy these changes with the same ease that we manage and deploy our code.
So the first question is: 1. What tools/frameworks are out there that provide this capability?
In general, this seems to be an issue in any given language and platform. In my specific case, I am deploying a .NET MVC2 application which uses Fluent NHibernate for database abstraction. I already have in my deployment process a tool which triggers NHibernate's SchemaUpdate - which is awesome.
What I have built to address this issue in my own way is a tool that scans target assemblies for classes which inherit from a certain abstract class (Deployment). That abstract class exposes hooks which you can override to implement your own arbitrary deployment code - in the context of your application's codebase. The Deployment class also provides a versioning mechanism, and the tool manages the current "deployment version" of a given running app. Then, a custom NAnt task glues this together with the NAnt deployment script, triggering the hooks at the appropriate times.
This seems to work well, and does meet my goals - but here's my beef, and leads to my second question: 2. Surely what I just wrote has to already exist. If so, can you point me to it? and 3. Has anyone started down this path and have insight into problems with this approach?
Lastly, if something like this exists, but not on the .NET platform, please still let me know - as I would be more interested in porting a known solution than starting from zero on my own solution.
Thanks everyone, I really appreciate your feedback!
For each major release, have a script that creates the environment with the exact requirements you need.
For minor releases, have a script that is split into the various releases and incrementally alters the environment. There are some big benefits to this:
You can look at the changes to the environment over time by reading the script and matching it with release notes and change logs.
You can create a brand new environment by running the latest major and then latest minor scripts.
You can create a brand new environment of a previous version (perhaps for testing purposes) by specifying it to stop at a certain minor release.

Deployment friendly code

Is writing deployment friendly code considered a good virtue on the part of a programmer?
If yes, then what are the general considerations to be kept in mind when coding so that deployment of the same code later does not become a nightmare?
The biggest improvement to deployment is to minimize manual intervention and manual steps. If you have to type in configuration values or manually navigate through configuration screens there will be errors in your deployment.
If your code needs to "call home", make sure that the user understands why, and can turn the functionality off if necessary. This might only be a big deal if you are writing off-the-shelf software to be deployed on corporate networks.
It's also nice to not have your program be dependent on too many environmental things to run properly. To combat this, I like to define a directory structure with my own bin, etc and other folders so that everything can be self-contained.
The whole deployment process should be automated to minimize human error. The software should not be affected by the environment. Any new deployment should be easily rolled back in case any problem occurs. While coding, you should not hard-code configuration values which may differ for each environment. Configuration should be done in such a way that it can be easily automated depending on the environment.
Client or server?
In general, deployment friendly means that you complete and validate deployment as you complete a small story / unit of work. It comes from continual QA more than style. If you wait till the last minute to build and validate deployment, the cleanest code will never be friendly.
Everything else about deployment, desktop or server, follows from early validation. You can add all the wacky dependencies you want, if you solve the delivery of those dependencies early. Some very convenient desktop deployment mechanisms result in sandboxed / partially trusted applications. Better to discover early that you can't do something (e.g. write your log to c:\log.txt) than to find out late that your customers can't install.
I'm not entirely sure what you mean by "deployment friendly code." What are you deploying? What do you mean by "deploying"?
If you mean that your code should be transferable between computers, I guess the best things to do would be to minimize unnecessary (with a given definition of "unnecessary") dependencies on external libraries, and to document well the libraries that you do depend on.