I am developing a number of microservices which will run on Open Liberty. I have set up a test server in my Eclipse environment which is configured to use all the features required by all the services I am currently working on.
Whilst this works, it seems a heavy-handed approach, and it would be good to test each service in an environment which closely resembles the target server. The services can differ in the set of features they require as well as in the JVM settings they need.
Each service will run in its own Docker container, and the Docker configuration is defined in each project.
Is there a way to better test these services without explicitly setting up a new server for each individual service?
I am not aware of any way to segment the Liberty runtime (its features) or the JVM (for different JVM settings) for different applications running in a single Liberty instance.
You can set app-specific variables and retrieve them using MP Config, but that's not the same as JVM settings, and certainly not the same as trying to segment specific features of the runtime to a specific application.
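For illustration, here is a minimal sketch of that MP Config approach (this assumes CDI is enabled; the class and property name are hypothetical):

```java
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

public class GreetingResource {

    // Resolved per application from any configured source: server.xml
    // variables, environment variables, microprofile-config.properties, etc.
    @Inject
    @ConfigProperty(name = "greeting.mode", defaultValue = "prod")
    String mode;
}
```

That gives you per-app configuration values, but it cannot change JVM settings or the feature set of the shared runtime.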
However, in general when testing, I would highly recommend mimicking your production environment as much as possible. Since you're planning on deploying into Docker, I would do the same locally when testing. Given Liberty's lightweight, composable nature, it's unlikely that you'll hit resource issues locally when doing this (you should enable only the features each app is using on its Liberty instance, to minimize the size of that instance). This approach is one of the big benefits/value provided by containers and Liberty.
In other words, even if you could have one Liberty instance segmented per application, I would not recommend it for your testing because, as you said, "it would be good to test each service in an environment which closely resembles the target server".
I will be building an Android application (not a game) soon. I have heard of containerized development and Docker/Kubernetes, but I'm not well-versed in their functions and use cases.
Why should I build my Android application with Kubernetes?
Your question can be split up into two parts:
1. Why should I containerize my deployment?
I hope that by "deployment" you are referring to the backend services that serve your Android application, not the application itself (I'm not sure how one would do that...). Here is a good article.
Containerization is a powerful abstraction that can help you manage both your code and environment. Setting up a container with the correct dependencies, utilities, etc., and securing them is a lot of work, as is the case with any server setup. However, once you have packaged everything into a container, you can deploy that container multiple times and build on top of it. The value of the grunt work you have done in the past is therefore carried forward in your future deployments; conversely, so are the bugs... Additionally, you can leverage the Docker ecosystem and build on various community contributions, greatly accelerating your workflows.
A possible unintended advantage is also protection against configuration drift. Whenever services fail or your application crashes, you can simply restart your container, and a fresh version of the service will be created again. However, to support these operations, you need to ensure that your containerized service behaves nicely across restarts and fails gracefully. There are many other caveats and advantages that are not listed here; you can find more discussion on Google.
2. Why should I use Kubernetes for my container orchestration?
If you have many containers (think on the order of hundreds), then using a single-node solution like Docker/docker-compose to manage them becomes tedious.
If only there were a tool to manage containers across multiple nodes, implement service discovery between your nodes, provide fault tolerance (i.e. automatic restarts, backoff policies), do health checking of your services, manage storage assets, and conveniently expose your containers to the public. That tool is Kubernetes.
Here is a more in-depth intro.
Hope this helps!
I have OSGi services service-1.0.0.jar and service-1.1.0.jar; they are implementations of service-api-1.0.0.jar.
Both of these, service-1.0.0.jar and service-1.1.0.jar, have the same service name and packages.
The services are registered by a bundle activator.
Let's assume the bundle activator is com.abc.xyz.MyActivator in both 1.0.0 and 1.1.0.
The issue I am facing is that when I deploy these services and look them up using a service tracker with a filter on the version I want, I always get the same implementation regardless of which version I choose.
This makes me believe that what I am trying to achieve is not doable.
I need multiple implementations of a service, packaged in separate bundles that differ only by version, and the ability to choose between them dynamically at runtime.
I am trying this on JBoss EAP 6.1.1.
If I keep different package names across the versions, it looks like the container understands that these are two different services, but when the package names are the same I get the same service implementation.
Am I doing something wrong? Has anybody tried this?
Is it correct that OSGi allows you to deploy multiple versions of a service?
UPDATE: After using unique package names for MyActivator in 1.0.0 and 1.1.0, it looks like the services maintain their uniqueness.
Does that mean activators have to be unique across bundles?
I assume that service-api-1.0.0.jar exports the package which defines the service interface. In that case, it sounds like you have two implementations of the same version of the service, not different implementations of different versions of the service. So from a service user's point of view, the services are the same: they come from the same service API package version.
I think you are using OSGi services in a strange way. As a client you should not look into the implementation bundles to determine versions or other information.
Instead you should use the service interface and service properties to distinguish between services.
So, for example, you can have a property version and publish the first service with version=1 and the second with version=2. Then you can filter on this property to get a specific service.
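A minimal sketch of both sides, assuming a hypothetical GreetingService interface exported by the API bundle (all names here are illustrative):

```java
import java.util.Collection;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class MyActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) throws InvalidSyntaxException {
        // Publisher side: tag the registration with a version property
        // (the other bundle would register with version=1).
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("version", "2");
        context.registerService(GreetingService.class, new GreetingServiceImpl(), props);

        // Client side: select an implementation by property, not by package.
        Collection<ServiceReference<GreetingService>> refs =
                context.getServiceReferences(GreetingService.class, "(version=2)");
        for (ServiceReference<GreetingService> ref : refs) {
            GreetingService service = context.getService(ref);
            // ... use the service, then release it with context.ungetService(ref)
        }
    }

    @Override
    public void stop(BundleContext context) {
        // Registrations are unregistered automatically when the bundle stops.
    }
}
```

The filter uses standard LDAP syntax, so you can combine the version property with any other properties you publish.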
Using reflection is also rather unusual. It is better to provide classes in the interface package that you use to exchange data between client and service. This will make the client less dependent on the service implementation.
If you have multiple implementations of the same service API, and a client queries for an implementation of the service API, it could get any of the implementations. And that's good because the client shouldn't care.
Say for example you have a Greeting interface with multiple implementations registered as services, possibly by multiple bundles. If a client asks OSGi for a Greeting service then OSGi will simply pick one. After all, if you just ask for a Greeting without specifying anything else then you should accept any implementation. Clients certainly shouldn't care which particular bundle the service comes from: this is the nature of decoupling.
Incidentally, when this happens OSGi normally chooses the implementation that was registered first (actually the one with the lowest service.id, which is an ever-incrementing number, so it's effectively the same thing). This is probably why you see OSGi consistently choosing one particular service.
If your client needs to distinguish between service implementations then you can add properties to the published services and filter on those properties. For example, one bundle could publish a Greeting service with the property language=en_US (i.e. US English) and another could publish a Greeting service with language=de. If your client only wants greetings in English then it can use a filter like (language=en*).
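A rough sketch of that client-side filtering with a ServiceTracker, assuming a Greeting interface as in the example above:

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.Filter;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.util.tracker.ServiceTracker;

public class GreetingClient {

    public static Greeting englishGreeting(BundleContext context)
            throws InvalidSyntaxException {
        // LDAP-style filter combining the service interface with the property.
        Filter filter = context.createFilter(
                "(&(objectClass=" + Greeting.class.getName() + ")(language=en*))");
        ServiceTracker<Greeting, Greeting> tracker =
                new ServiceTracker<>(context, filter, null);
        tracker.open();
        return tracker.getService(); // null if no matching service is registered
    }
}
```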
For all the programs I have written so far, if I want one to work on another workstation, I just copy over the executable and the files needed to make it run (e.g. .o files, binary files...).
But programs built for commercial use always come with an installer - PC games, for example. So my question is: what are the main benefits of, and reasons for, running an installer when we could simply copy the files over to the target workstation?
One of the reasons is probably to prevent piracy. But other than that, I'm sure there are stronger reasons?
The Complexity of Deployment
Only the simplest applications can work with a simple file copy, and even then you need a convenient way to actually download and copy the files to the right location - and this is what a setup is for. The setup is also a marketing tool that can be used for branding and consistency across products, as well as allowing installation of a trial version of the product - a very important part of selling software.
Finally, a setup provides upgrade and patching features for new versions, as well as uninstall and cleanup of the system when the user wants to remove your software. A good setup may also be signed with digital certificates to ensure the file cannot be tampered with in transit, and that the vendor is verifiable and hence serious. All of these things are crucial for a serious product.
It is important to remember that the setup experience is the user's first encounter with the quality of your product. If the setup fails, the product can't be evaluated at all. This would seem to be the most expensive error to make in software development.
Errors in deployment are cumulative in the sense that once you have a deployed error, you generally have no access to the machine in question for debugging - and the fix could easily do more damage. You are managing a delivery process, not just debugging code and binaries. Each delivery adds risk and complexity, and pretty soon you can have an unmaintainable mess on your hands if you are not careful. Furthermore, every machine your setup runs on will almost certainly be in a different state from the next.
Deployment (setup) is therefore the complex process of migrating a computer from one stable state to another. This requires a disciplined approach. The setup should install all required files and settings and ensure the product is configured for first launch, or ready to be configured upon launch without failure. This can be a very complex task. The list of things a setup may need to do is growing all the time, and with every new version of Windows it seems new obstacles are put in place to make deployment harder. Such obstacles include UAC prompts, self-repair lockdown on terminal servers, changed core MSI caching behavior, new folder redirects, virtualization features, new and changed signing features with encryption and digital certificates, ActiveX killbit security lockdown, 64-bit complexities, etc... The list goes on.
Application virtualization is a big issue these days. It essentially decouples computer programs from the underlying operating system on which they are executed. This still involves a deployment package for your application, but a fully virtualized application is not installed in the traditional sense. The application behaves at runtime as if it is directly interfacing with the original operating system and all the resources managed by it, but it can be isolated or sandboxed to varying degrees.
An Overview of Deployment Tasks
The tasks and features needed in a setup range from the very fundamental and basic with built-in Windows Installer or third party tool support, to the highly customized ad hoc solutions where you have to actually code something yourself to deal with unique deployment requirements.
Deployment tools really contain most of what you would ever need for any deployment, but certain things are still coded on a case-by-case basis. These ad hoc solutions are implemented as "custom actions" in Windows Installer, and they are without a shadow of a doubt the leading cause of deployment failures. See the "Very Advanced" section for more on custom actions.
Overuse of custom actions and a lot of ad hoc coding tends to indicate flawed application design, but in certain cases you are just dealing with new technology and have to roll your own solution to get your product deployed. This is exactly what custom actions are for. Over time, standardized solutions should be created and preferred, and small changes in application design can often eliminate complicated custom actions. This is a very important fact about software deployment - there are so many variables that one should opt for simplicity whenever possible.
At a basic overview level, deployment must account for:
Setup Fundamentals
All third party tools provide good support for these setup fundamentals, but there are some differences. The installation of prerequisites may be the area where third party tools and free frameworks like WiX differ the most in terms of ease of use - at the time of writing. The support is there, but it can be a little bit challenging to set up.
Check if the system is suitable for installation of the package in question.
Disk space.
OS type & version.
Language version.
Computer architecture x86/x64.
Unsuitable platforms: Thin Client / Citrix / Terminal Services
Customized setup required due to custom lockdown.
Maybe even a malware situation (I wish - it can cause mysterious deployment problems).
etc...
Scan for presence and if necessary install prerequisites and runtimes.
Allowing easy deployment of prerequisites and runtimes is a task with extensive support in third-party deployment tools. There is limited support for this in Windows Installer itself. The basic feature for runtime distribution in Windows Installer is the merge module - essentially the "include file equivalent" for MSI files, and the standard way to deploy shared files. A merge module is compiled into your MSI at build time - a sort of early binding, in developer terms.
Some prerequisites are installed via Windows Installer merge modules. Others are generally installed using their own setup file (various formats).
Examples: Active X for games, Crystal Reports, Microsoft Report Viewer Runtime, MySQL, SQL Server Runtime, VB6 Runtime, ASP.NET MVC Runtime, Java Runtime, Silverlight, Microsoft XNA, VC++ Runtime, .NET runtime versions, Visual Studio Tools For Office Runtime, Visual F# Runtime, MSXML Runtime, MS Access Runtime, Apache Tomcat, Various Primary Interop Assemblies, PowerShell versions, etc...
Finally, several core Microsoft components such as Windows Installer versions and PowerShell versions generally come down via Windows Update and might be better excluded from your setup (just check for existence, and tell the user to run Windows Update if a component is missing). Actual practice here varies.
Provide a GUI suitable for input of required settings from the user.
It is common practice to enter and validate license keys in a setup.
Personally I think this is better done from the application itself for both practical and security reasons - making piracy more difficult, allowing trial installs, reducing excessive setup support calls (you wouldn't believe it...), etc...
For complex setups a lot of GUI could be required to gather deployment settings - particularly for server setups with IIS, MS SQL, COM+ and other advanced components.
Allow installation in silent mode for corporate use.
Extremely important - all corporate deployment is automatic and silent (no GUI shown during installation), except certain server installs.
Smaller companies may run your setup in GUI mode. In my experience they generally do.
Home users almost always run your setup in GUI mode.
Know your target group, and definitely make sure you support silent running if you target corporate customers. However, all setups should work in silent mode, and if you follow MSI design rules and best practices it "comes for free".
Adding Basic Stuff
These basic tasks have full support in the Windows Installer engine itself, and all third party tools provide fairly equivalent support for all of them despite variations in GUI features and ease of use.
Install files and registry settings.
Install ODBC data sources, file associations, shortcuts and icons.
Update application and system-wide path settings.
Update and merge text based files such as INI files.
Register COM files and enable .NET COM Interop if need be.
Install .NET assemblies to the GAC, and run custom .NET installer classes.
Install side-by-side Windows assemblies to WinSxS.
Deliver signed and certified files (also applies to the setup file itself).
Install and control Windows services.
Install Control Panel Applets.
Update environment variables.
I won't dwell on these issues or flesh them out with too many details. All of these deployment tasks should be reasonably well supported in all deployment tools and frameworks available. However, many people mess up their deployment by not using the built-in deployment features and instead relying on custom actions for such trivial tasks. Entirely added risk for no gain whatsoever.
In particular, we often see custom actions used to install Windows services - usually a sign of a very badly designed service, or at other times just ignorance of how to do deployment; both together are also common. Deploying such a service often involves applying custom ACL permissions and modified NT privileges to make a service run with user rights instead of as LocalSystem - which is generally the only correct way to run a Windows service. Running a service with user credentials is a "deployment anti-pattern" worth mentioning in passing (more on this later).
Another common custom action use that is always wrong is installing files to the GAC via a custom action. There is good built-in support for this in Windows Installer, and any excuse to install via a custom action is almost certainly hiding a bad design or some generalized madness :-). It is also a fact that many deploy far too many things to the GAC overall, but that is a development issue: When should I deploy my assemblies into the GAC?
Finally, .NET installer classes are intended for developers to test their components during development - they should not be used for deployment. They are essentially just the .NET equivalent of self-registration (which is also not acceptable for MSI - you need to extract the COM information and add it to the MSI tables - see link for details). An MSI is declarative - it should contain all changes to be applied to the system so that proper rollback and management can be ensured.

So the message is that .NET installer classes should only be used for development and testing. Once you build an MSI to deploy your application, you should use MSI constructs to achieve proper deployment with rollback support and intelligent management. We see these .NET installer classes used mostly for service and GAC installs. In an MSI this translates to using the ServiceInstall and ServiceControl tables for services, and simply marking a component for GAC install (it must be a signed assembly). Once you know how, it is easy, and you won't miss the .NET installer classes because MSI works like "automagic" when you do this right. You get reliable rollback for free, with ease.
Adding Advanced Stuff (often server stuff)
Despite support in all deployment tools for most of these issues, I have often found that I needed to implement custom actions and ad hoc solutions to achieve proper deployment in certain cases. This is particularly the case for COM+ and IIS deployment. WiX provides highly customized support for both types of deployment, but I have limited experience using it.
The update and installation of XML files is a task supported by each deployment tool since there is no built-in support for this in the Windows Installer engine - which is quite amazing at this point.
With regards to database installation, and particularly updates, I am on the fence: perhaps they should be done from the application, with proper user authentication and interactive use, instead of as a "one shot", impersonated deployment operation (which might fail without good exception management or recovery options). In other cases it seems updates should be a managed process involving users raising corporate tickets handled by professional DBOs. Some more details below.
Configuration of IIS, Apache, or other web servers.
This is a whole world of its own, particularly with regards to IIS. I have found deployment tools lacking in features to deploy sites as requested by developers and corporate teams.
Though largely untested by me, the WiX framework provides a very flexible implementation of IIS configuration and deployment.
I expect a lot of custom actions are in use to achieve special deployment configurations.
Run SQL server scripts against databases.
Create db, connect to db, update db, run stored procedures, maybe even trigger backups or schedule new tasks, etc... I don't know all that people do here.
Should this be done by the application instead, or by a DBO? That seems much more reliable. A setup is "one shot"; an application can be restarted and you can try again - better exception handling.
Plus, an MSI setup has a very limited GUI, severely restricted in its events due to the overall MSI design (proper Win32 dialogs can be spawned from the limited MSI GUI, but it takes a lot of effort - I have only done it once).
Crucially a setup can run with elevated rights, but that is just on the local machine. Authentication is still needed against the database (unless Windows Authentication is used).
A database update is a transaction on its own that would run as a part of the overall Windows Installer transaction. It is not obvious how to handle errors or what to do in terms of rollback if the installation fails.
Needless to say this can all get very complex to handle in your setup. It is an (enterprise) configuration task in my view, not just a deployment task. Insightful comments very welcome on this issue - I am on the fence with regards to best practice.
If you are delivering a client / server solution to your customers and need a way to set up the (server side?) databases "fresh" with defaults to help your customers "get going" with your solution, then database deployment definitely makes sense to me. But update scripts run as part of installation targeting existing databases would worry me in terms of reliability and management - not to mention safety.
For corporate database updates it would seem a proper process involving a DBO would be more secure. They can run a proper backup before updates are applied and then true rollback is in place if problems are found in UAT.
Installing ActiveX browser components (certificate based through browser).
Install of signed CAB file downloaded from a Web page (admin only, can be captured as an MSI for mass deployment with elevated rights).
Defaults to install in "C:\Windows\Downloaded Installations".
Complications can arise if the version in the CAB file differs from the version requested by the Web page (triggers CONFLICT folders to be generated as installs keep re-running).
Update and merge XML files.
Advanced because it is (amazingly) not natively supported by Windows Installer.
Supported with extensions by both WiX and third party deployment tools.
Configure and control COM+ components.
Tech note: I have failed several times to achieve this properly with several third party tools. There seems to be an overall lack of required features.
I normally end up manually configuring the COM+ application and then exporting an MSI from the Component Services administrative tool that is then used for deployment.
This exported MSI is not good at all - fragile if you try to make any modifications. It contains an undocumented .apl file with the application's attributes and any dependent DLL or data files are not auto-included.
WiX provides support for COM+ (not tested at all by me). I hope it is good :-).
Just for reference: Understanding COM+ Application Installation.
Add custom event logs, set up performance monitors, add firewall rules, and other Windows extensions. Supported by most deployment tools these days - including WiX. These features are not natively supported by the Windows Installer engine.
Set up connections to mobile devices and deploy.
Can involve "some strangeness" and weird proprietary solutions.
A custom, native dll might be required to achieve smooth deployment (Pocket PC back in the day - not sure how things work these days).
Install drivers of various kinds.
Much easier and more reliable now for signed drivers than before.
Supported by all third party tools and WiX (using dpinst.exe in the background).
Hooking up the application to advanced server features (deployed separately).
Automatic update systems.
License servers. Floating licenses, or regular licenses.
Online resources of various types. Help, templates, discussions, SDKs, developer tools, etc...
Online stores.
Most of the time this just involves setting a link or registry key to point to the server resources, but sometimes it is more complex.
Adding Very Advanced Stuff (custom actions)
When there is no built-in support for a certain operation or task in Windows Installer itself, or in any of the various third party tools available, you are left having to implement the feature yourself.
When you use Windows Installer, this involves running custom actions of various types (Windows Installer's mechanism for running custom, executable installation logic during installation).
Custom actions are purpose built executables (binaries: dll, exe) and scripts capable of making advanced modifications to the system during installation that are not supported by Windows installer natively or by the deployment tool in use (WiX, Installshield, Advanced Installer, etc...).
Custom actions that make modifications to the system run with elevated rights so that changes can be made to the system even if the logged on user does not have admin rights. There is essentially no limit to what these custom actions can do. They are armed and dangerous.
Custom actions are the leading cause of deployment errors and failures.
Hands down. If an MSI install fails, it is most often related to a failing custom action.
Custom actions are difficult to write and debug due to the complexity of Windows Installer. They must be used only when necessary and they must be written with full rollback support so that they are capable of undoing all changes that were applied to the system in case the installer fails and must roll back changes.
This is hard work, and custom actions are a big, complex and error-prone issue - a can of worms.
Often minor application design changes can allow custom actions to be replaced by standard MSI features, or various MSI extensions available in third party tools and in WiX.
Executables and scripts that run correctly on their own may fail when run as part of an MSI due to the complex impersonation, elevation and runtime design of Windows Installer. These are not trivial things to get right. An MSI install is an intricate transaction with elevated and impersonated sequences that is very hard to deal with.
Custom action types
Windows Installer supports custom actions implemented as purpose built, native (win32) executables and dlls as well as scripts such as JavaScript or VBScript.
Some even use .NET binaries (C#, VB.NET, DTF, etc...) to run custom actions - this is not recommended due to their prerequisite need for the .NET Framework. These binaries are referred to as "managed code" and can't run without the correct .NET framework installed.
Finally, there are PowerShell custom actions, which are both scripts and managed code combined - and they should not be used, since they require the .NET Framework.
In the future, when the .NET Framework can be guaranteed to exist on all Windows computers, such managed code might be a viable option for general use, but as of now the consensus seems to be that these actions are too risky and unreliable.
Common sample custom actions (custom tasks that are frequently implemented as custom actions because they are frequently needed but not natively supported by Windows Installer):
Manage Windows Shares (usually create).
Apply custom ACL permissioning (there is some built-in MSI support for this).
Modify NT privileges.
Configure DCOM.
Manage groups and users.
Configure per-user Office Addins.
Persist installer properties (for repair and reinstall).
Custom and company specific launch conditions.
IP-configuration redirects for IIS.
Encrypt or obfuscate content for data security.
Etc...
Most of the custom functionality mentioned above is now available in the WiX framework as a custom C++ dll - and other tools have some similar, custom features. You should always prefer these ready-made solutions to your own custom actions, since rollback is properly implemented in WiX and the implementation is well tested.
Applying custom ACL permissions and modifying NT privileges are considered "deployment anti-patterns" by most deployment specialists. The requirement to do so indicates poor (lazy) application design.
Custom action summary.
Writing a custom action of your own should be a rare event, reserved for something unique that has not been done (better) before.
Minor application redesign can often eliminate unwise and complex deployment constructs - in fact, almost always.
For example: application configuration should happen on first application launch, and not during the setup.
The setup should prepare the application for first launch, and perform only those tasks that require elevated rights.
User data initialization is a particularly bad thing to use setup scripts for. All of this should be done in the application launch sequence.
You should enforce proper rollback support.
This is complex and hard work.
Almost all script custom actions I have seen do not implement rollback at all.
You should write with minimal dependencies.
Preferably use C++ or InstallScript, or maybe JavaScript (only for internal, corporate deployment in my view). Avoid VBScript, and definitely avoid .NET code in C#/DTF or PowerShell scripts. There is some discussion on the issue of managed code. MSI experts like Chris Painter believe C#/DTF custom actions are ready for prime time, whereas the general consensus seems to be to err on the side of caution and rely on C++ dlls until a proper .NET runtime environment can be guaranteed. Here is a long-winded "discussion" of this issue: Windows Installer fails on Win 10 but not Win 7 using WIX
Robust code is difficult to write in script. Scripts are fragile, hard to debug, lack advanced language features (particularly error handling) and are vulnerable to anti-virus blocking.
The only real advantages of scripts are that they are transparent and inspectable, and the whole source is embedded in the MSI file (no version-control issues). Corporate teams that hand off work to each other frequently might use JavaScript (there is a lot of legacy VBScript use as well, but that language is very poor for error handling).
Managed code has runtime requirements that can't be guaranteed at the time of writing - and this has been the situation for a very long time now.
PowerShell is both managed code and a script. Avoid it. Installshield supports it as a type of custom action. It remains to be seen how successful it will be. I would never use it unless forced to.
And much more...
Additional Complications for Deployment
There are many additional complications when delivering a professional setup, such as delivering setups in different languages (localization); branding setups for different resellers (OEM); ensuring the setup works on all required operating systems in different language versions; delivering separate setups for x86 and x64 machines; delivering a scaled-down "viewer version" of the application; making combined setups for client and server installations (which can be run on both the server and the client, installing different components - not recommended if you ask me - details); and not to mention deploying to different embedded devices such as phones, Pocket PCs, smartphones, etc...
Certain "Deployment Anti-Patters" are also problematic to deal with (the linked answer is an "experiment" and I am not too happy with it - a work in progress, but it is intended as a check list for developers for their deployment efforts to avoid really common problems). These are bad constructs required in setups to make poorly designed applications run properly. They include things such as applying custom permissioning (write access in otherwise locked down paths, etc...), customizing NT privileges (typically "run as service" for a user account, or much worse), or applying excessive use of complex custom actions that make unpredictable changes to the system (these can really be anything and be very dangerous). Messing up the silent install is also a huge, common problem - it is terrible for corporate use of your setup. Deploying excessive amounts of user-specific data with your setup can also be problematic (hard to control complications). And there are many other, more specific problems to relate to.
Here is a post with the overall issue of setup and deployment seen in the larger context of application marketing and sales.
Doing Your Own Deployment
You will need a tool or a framework to deliver your own setups. Here is an answer describing different tools used to create installers: What installation product to use? InstallShield, WiX, Wise, Advanced Installer, etc. All attempts have been made to make the descriptions as objective as possible - describing real world experience with positives and negatives.
The commercial tools described in the link above are most excellent tools - and they tend to speed things up with good GUIs and ready-made solutions for common requirements, but developers should consider trying WiX - the new way to create MSI files. Please read this post for background information:
Windows Installer and the creation of WiX (read this if you are trying to "find your feet with WiX" and want to understand what the technology is all about and where it is coming from).
WiX has a learning curve but is "developer friendly" in many ways. For one thing, it is a project type in Visual Studio (once you install it), and it allows a setup to be defined in XML and compiled to an MSI as you would a normal binary. This allows proper source control, branching and collaboration. Plus, it is free and open source. I feel it is OK to recommend a free framework, especially since it is well maintained. Expect a learning experience, though. Here are some suggestions for a "flying start" with WiX.
Many programs make use of graphics, sound, and other drivers which are supplied and maintained by third parties. In many cases, these drivers may use underlying hardware or other system features in ways that Windows itself knows nothing about. If two programs, each with its own driver and unaware of the other's existence, tried to use the same hardware, they would likely interfere with each other in unpredictable and undesirable ways (e.g. one might overwrite graphical textures loaded by the other). To avoid such problems, Microsoft recommends that applications install drivers in such a way that two programs that need the same driver can share the same driver instance.
The approach Microsoft takes is not the only means of ensuring that multiple programs using the same hardware go through the same driver. A system could also have programs temporarily load drivers when they start, and have drivers automatically unload when they're done. The difficulty with that approach is that if a program which uses an old driver is launched, and while it is running a program which needs a newer version of that driver is launched, the new program would not be able to run unless or until the old program shuts down its driver and switches to using a new one. Such a hassle is probably unavoidable, but having to deal with such things every time a program is launched would probably be more bothersome than dealing with them only once, when a program is installed.
All that having been said, while it may be helpful to be able to install a program once and have any "driver" issues taken care of once and for all, there's also something to be said for being able to simply run a program without having to make "permanent" modifications to the system. There shouldn't be any particular obstacles to programs being able to use either "temporary" or permanent drivers, but I know of no particular efforts to facilitate such designs.
Besides copying the files for you, the installer may also add registry entries needed by the program (if any), add values to environment variables (PATH), create icons on the desktop, etc., so you don't have to do this manually.
To quote Wikipedia, "Installation typically involves code being copied/generated from the installation files to new files on the local computer for easier access by the operating system." For simple programs, there is no need to install anything, but more complex ones can update, add links, etc. automatically if installed.
I have a scenario where, at system install time, a few services were deployed onto the OSGi container, and these services will listen for other bundles that provide data and are dynamically installed and uninstalled at runtime.
These data providers do not expose any services and should not even invoke services; my idea is to enable the pre-deployed services to listen for the installation event of these data provider bundles and, if the pattern matches, process and persist the data into the data store.
For example, I have a WidgetService which will listen for installation or uninstallation events of Widget data provider bundles, a ShoppingCartService that will listen for installation/uninstallation events from ShoppableItem data provider bundles, etc.
This helps me keep the processing and persisting logic centralized, and my data providers need not write any code to have their data processed. All that is expected from the data provider bundles is the service name/ID, the service version, prerequisites, and the data that they need to publish.
I have read several articles on OSGi that explain the dynamic pluggability of services, and clients being able to discover or discard services based on their availability; however, those all describe scenarios where the clients have to be intelligent enough to discover and execute the services they are interested in.
My intention is to make the client completely unaware of any service discovery - for that matter, of any code. All that the client passes is the info about the service it is interested in, the dependencies, and the data; the client should be completely dumb.
Is this possible in OSGi? I'm ready to consider this architecture even at the cost of extending a few of the OSGi core framework classes!
I have found a somewhat (maybe remotely) related question on Stack Overflow:
Discovering Bundle MetaData with out installing the bundle
However, I want a hook or an event that will call my respective service when one or more data provider bundles have been installed. These data provider bundles can be interested in any of the services that are installed in the system. I'm even ready to write a central bundle repository manager/listener kind of thing that will listen for any bundle installation and invoke my service facade, which will decide which service to execute based on the metadata provided by the data provider bundle.
I'm just starting with OSGi, so I need a little direction on how to move forward...
I'll be really thankful to you guys/girls :) if you can help me achieve this!
I have a doubt deep in my mind that this may not be readily available in OSGi, and even if that is true I'm ready to spend time and extend the framework to achieve it. All I need is a few guidelines and a clear direction. Who knows - if OSGi is really lacking this functionality, it would be a very useful add-on to a future OSGi spec.
You might have a look at section 4.7 (Events) of the OSGi Core spec. The Framework raises BundleEvents when there is a change in the lifecycle of a bundle, e.g. when it is installed or uninstalled. What you need to do is implement a BundleListener, which will then receive the events, so your service can react to the changes.
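For example, a minimal sketch (the activator class and the "Data-Provider-Service" manifest header are hypothetical):

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleEvent;
import org.osgi.framework.BundleListener;

public class DataProviderListener implements BundleActivator, BundleListener {

    @Override
    public void start(BundleContext context) {
        context.addBundleListener(this);
    }

    @Override
    public void stop(BundleContext context) {
        context.removeBundleListener(this);
    }

    @Override
    public void bundleChanged(BundleEvent event) {
        if (event.getType() == BundleEvent.INSTALLED) {
            // Read the provider's metadata from its manifest headers and
            // dispatch to the matching pre-deployed service.
            String target = event.getBundle().getHeaders().get("Data-Provider-Service");
            // ... match "target" against WidgetService, ShoppingCartService, etc.
        } else if (event.getType() == BundleEvent.UNINSTALLED) {
            // ... clean up anything persisted for this provider if required.
        }
    }
}
```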
I have described a design pattern that I call "OSGi Mediator", which may be a solution to your problem.
The items you want to mediate would only need to register with the service registry; all the dependencies could be managed by your mediator implementation.
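I won't reproduce the pattern's actual implementation here, but as a rough sketch of the idea: the providers register a simple marker service, and the mediator tracks those registrations (the DataProvider interface and the data.type property are hypothetical):

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

public class DataProviderMediator
        implements ServiceTrackerCustomizer<DataProvider, DataProvider> {

    private final BundleContext context;

    public DataProviderMediator(BundleContext context) {
        this.context = context;
        // Track every DataProvider registration; the providers stay "dumb".
        new ServiceTracker<>(context, DataProvider.class, this).open();
    }

    @Override
    public DataProvider addingService(ServiceReference<DataProvider> ref) {
        DataProvider provider = context.getService(ref);
        // Route based on metadata published with the registration,
        // e.g. a service property identifying the kind of data.
        Object type = ref.getProperty("data.type");
        // ... look up the matching processing service and persist the data.
        return provider;
    }

    @Override
    public void modifiedService(ServiceReference<DataProvider> ref, DataProvider provider) {
    }

    @Override
    public void removedService(ServiceReference<DataProvider> ref, DataProvider provider) {
        context.ungetService(ref);
    }
}
```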