OSGi: What happens when an imported service is stopped while the importing service is still running?

Say I have a service A which is imported by another service B. While B is running normally (and A is, of course, active), what will happen if service A is uninstalled while service B is still running?
Service A -> Service B
What are the different scenarios if I am using a ServiceReference, a ServiceTracker, or DS (Declarative Services)?

When a service is unpublished in OSGi, an event is sent to all bundles currently using that service to tell them that they should stop using it.
If you are using DS, your unbind method will be called. When it is called, you should make best efforts to stop using the service as soon as possible. But ultimately OSGi is a cooperative system; it cannot force you to release the service. However, if you don't, you can cause problems: for example, the bundle that published the service will not be fully garbage-collected. You end up sabotaging the dynamics of the OSGi platform, possibly creating memory leaks and so on.
If you are using ServiceTracker then the removedService method will be called, and you need to respond in the same way. But didn't I tell you in the other question not to use ServiceTracker?? ;-)
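For completeness, here is a minimal ServiceTracker sketch showing where removedService fits and what it should do; the Greeter interface and the class and field names are made up for illustration:
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

interface Greeter { String greet(); } // placeholder for whatever service B imports from A

public class GreeterTrackerExample implements ServiceTrackerCustomizer<Greeter, Greeter> {

    private final BundleContext context;
    private final ServiceTracker<Greeter, Greeter> tracker;
    private volatile Greeter greeter; // the service object currently in use

    public GreeterTrackerExample(BundleContext context) {
        this.context = context;
        this.tracker = new ServiceTracker<>(context, Greeter.class, this);
        this.tracker.open();
    }

    public Greeter addingService(ServiceReference<Greeter> reference) {
        Greeter service = context.getService(reference);
        this.greeter = service;
        return service;
    }

    public void modifiedService(ServiceReference<Greeter> reference, Greeter service) {
        // service properties changed; nothing to do in this sketch
    }

    public void removedService(ServiceReference<Greeter> reference, Greeter service) {
        // The service has been unregistered: stop using it and drop the reference
        // so the publishing bundle can be garbage-collected.
        this.greeter = null;
        context.ungetService(reference);
    }
}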
If you are using ServiceReference then you need to explicitly register a ServiceListener in order to receive these events. This is why you really really shouldn't use this low-level API until you have gained a lot more experience (and once you do have that experience, you won't want to use it anyway!).

First of all: one of the advantages of OSGi is that the behaviour of the framework and of the standard services is clearly specified. Those specifications can be downloaded from the OSGi Alliance web site or, if you don't like reading PDFs, ordered in print. The question you are asking is answered in detail in those specifications.
That said, in summary: when a service is unregistered:
The ServiceReference object itself remains as it is. However, a call to BundleContext.getService() with that reference will return null. Note that when using ServiceReferences you should release any references to the actual service object retrieved via getService(); this normally requires some kind of tracking of the service.
For a ServiceTracker, the service is removed from the set of tracked services; this results in a call to removedService() on the ServiceTracker itself or on the ServiceTrackerCustomizer you supplied.
For DS, the unbind method defined for the referenced service is called (if one is specified). Furthermore, if the cardinality of the reference makes the service mandatory, the consuming component's own service may also be unregistered, and the component may even be deactivated or a new instance activated, depending on the availability of alternative services and the policy defined for the reference.
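As a sketch of the DS case, using the standard DS annotations (the Greeter interface and the method names are placeholders), a consuming component with a mandatory reference could look like this:
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

interface Greeter { String greet(); } // placeholder for the service that "B" imports from "A"

@Component
public class GreeterClient {

    private volatile Greeter greeter;

    @Reference // mandatory 1..1 reference with the default static policy
    void bindGreeter(Greeter greeter) {
        this.greeter = greeter;
    }

    void unbindGreeter(Greeter greeter) {
        // Called when A's service is unregistered; with the default static policy the
        // component is deactivated (and reactivated against an alternative service, if any).
        if (this.greeter == greeter) {
            this.greeter = null;
        }
    }
}
With ReferencePolicy.DYNAMIC, SCR would instead rebind to an alternative Greeter without recreating the component; if no alternative exists, a mandatory reference still causes the component to be deactivated.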

Related

Hit REST endpoint on startup - WebLogic + ATG

I have a REST endpoint which starts a scheduler that loads an XML file into memory. Whenever I hit that endpoint, it loads the XML into memory and returns the XML once it is ready (which takes 10 - 15 seconds). When the same endpoint is accessed again, it returns the cached XML. Everything works fine, but for now I have to hit the endpoint manually for the scheduler to start. Is there a way to hit the endpoint automatically via some simple code at startup? Or is there any other solution for this?
Normally, a component in the Nucleus is instantiated at first access, not at system start-up.
The way to have anything done at start-up in ATG is to create your component, and then to add its Nucleus path to the list of initial services in the /Initial component (or in one of the many other Initial components chained off of it).
The component should be globally scoped. Because /Initial is instantiated at start-up, the services it references will also be instantiated as dependencies.
If your component is a POJO, then the no-argument constructor will be invoked on component start-up, and then the setX method will be called for each property that has a value defined in its properties file.
If your component extends GenericService, then additionally the beforeSet and afterSet methods will be called, before and after the set methods are invoked, if they exist, and finally doStartService will be called.
This is all part of the fundamental lifecycle of components that the Nucleus manages.
This gives you a number of hooks with which to invoke your custom code.
Now, in your question, you ask how to call a REST endpoint at start-up. However, I believe what you actually want to ask is how to ensure that a particular piece of code gets executed at system start-up. A REST endpoint is how you are triggering it today, manually, from outside the Nucleus. But that does not mean that it must call a REST end point if it is to be automatically called at start up.
The easiest way to achieve what you want is (see the sketch after these steps):
define a class that extends GenericService
override the doStartService method
put the code you want to execute in this method, or invoke the code on another component from here
define a globally scoped component for the class
add the component to the initialServices property of the /Initial component
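For illustration, a minimal sketch of the pieces involved; the class name, package, and Nucleus paths below are made up, so adjust them to your own module:
import atg.nucleus.GenericService;
import atg.nucleus.ServiceException;

public class XmlCacheWarmupService extends GenericService {

    @Override
    public void doStartService() throws ServiceException {
        if (isLoggingDebug()) {
            logDebug("Warming up the XML cache at server start-up");
        }
        // Call the same code your REST endpoint triggers today, e.g. a method on
        // the component that loads the XML into memory.
    }
}
The component's properties file (for example /mycompany/startup/XmlCacheWarmupService.properties in your config layer) would then contain:
$class=com.mycompany.startup.XmlCacheWarmupService
$scope=global
and /Initial.properties in the same config layer would append it to the initial services:
initialServices+=/mycompany/startup/XmlCacheWarmupService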
Restart the server and check that your code is being called at start-up. Put some debug statements in, and switch debug logging on in your layer.
Note, you may actually also want to think about whether you really need to invoke your code at system start-up. Anything in initial services adds to the start time of the server. Depending on your requirements, it may be better to do it on first access of your application service rather than at server start-up.

MvvmCross DataService in an Android Broadcast listener

I am currently venturing into the MvvmCross realm and making some good headway, but ran into something that I have been unable to figure out on my own. I currently have an android service that is going to be running all the time. That service is going to be started either on a system boot or when the application first fires up.
That service/broadcast receiver will need access to the DataService that is created in a PCL project with MvvmCross. I have not been able to figure out how to get the instantiated data service into that service/broadcast receiver on creation of the service since there are not any view models that are associated with the service.
I know that it's probably relatively simple, but I haven't been able to figure it out on my own.
The easiest way to do this is probably to just request that the full Setup is completed during the first part of OnCreate for your service:
var setupSingleton = MvxAndroidSetupSingleton.EnsureSingletonAvailable(ApplicationContext);
setupSingleton.EnsureInitialized();

Can components be temporarily registered in an Autofac container?

I'm building a plugin for a 3rd party application and my plugin uses Autofac to wire up various components. The container is built at application startup, but the host application invokes my commands at a later time.
When a command is invoked, the host application provides a few instances of types that it defines and that my components will need to use. I'd like to register these instances in the container so that it can take care of wiring up the components that depend on these instances.
I'm aware that I can use a ContainerBuilder to update an existing container, but I'd like to remove these registrations when the command has completed as these instances will no longer be valid. Is this possible?
Maybe a better approach is to use 2 containers... The command could create a new container to register these instances and other components could be resolved from the application scoped container.
How could I hook up the 2 containers so that resolve calls bubble up to the application scoped container?
Are there any gotchas to be aware of with this approach? I imagine there may be component lifetime issues...
Edit: Now I've done a bit more research and testing, and it turns out I can just use the BeginLifetimeScope(Action<ContainerBuilder>) overload to register the instances provided by the host application for the nested lifetime scope only. For some reason I thought that adding registrations to the nested lifetime scope would result in them being added to the root container, but that doesn't seem to be the case.
As noted in my edit above, it turns out that BeginLifetimeScope(Action<ContainerBuilder>) is exactly what I need. For some reason I thought that adding registrations to the nested lifetime would result in them being added to the root container and therefore being resolvable after the nested lifetime scope ends, but that doesn't seem to be the case.

Objective-c asynchronous communication: target/action or delegation pattern?

I'm dealing with some asynchronous communication situations (event-driven XML parsing, NSURLConnection response processing, etc.). I'll try to briefly explain my problem:
In my current scenario, there is a service provider (that can talk to an XML parser or do some network communication) and a client that can ask the service provider to perform some of its tasks asynchronously. In this scenario, when the service provider finishes its processing, it must communicate the results back to the client.
I'm trying to find a pattern or rule of thumb for implementing this kind of thing, and I see 3 possible solutions:
1. Use the delegation pattern: the client is the service provider's delegate and it will receive the results upon task completion.
2. Use a target/action approach: The client asks the service provider to perform a task and pass a selector that will have to be invoked by the service provider once it has finished the task.
3. Use notifications.
(Update) After a while of trying solution #2 (target/action), I came to the conclusion that, in my case, it is better to use the delegation approach (#1). Here are the pros and cons of each option, as I see them:
Delegation approach:
1 (+) The upside of option 1 is that we can check for compile-time errors because the client must implement the service provider's delegate protocol.
1 (-) This is also a downside because it causes the client to be tightly coupled with the service provider, as it has to implement its delegate protocol.
1 (+) It allows the programmer to easily browse the code and find which method of the client the service provider invokes to pass back its results.
1 (-) From the client's point of view, it is not that easy to find which method will be invoked by the service provider once it has the results. It's still easy (just go to the delegate protocol methods and that's it), but the #2 approach is more direct.
1 (-) We have to write more code: Define the delegate protocol and implement it.
1 (-) Also, the delegation pattern should be used, indeed, to delegate behavior. This scenario wouldn't be an exact case of delegation, semantically speaking.
Action/Target Approach
2 (+) The upside of option 2 is that when the service provider's method is called, the #selector specifying the callback action must also be specified, so the programmer knows right there which method will be called back to process the results.
2 (-) By contrast, it's hard to find which method will be called back on the client while browsing the service provider's code. The programmer must go to the service invocation and see which #selector is being passed along.
2 (+) It's a more dynamic solution, and causes less coupling between the parts.
2 (-) Perhaps one of the most important points: it can cause run-time errors and side effects, as the client can pass the service provider a selector that does not exist.
2 (-) Using the simple and standard approach (performSelector:withObject:withObject:), the service provider can only pass up to 2 arguments.
Notifications:
I wouldn't choose notifications because I think they are supposed to be used when more than one object needs to be updated. Also, in this situation, I'd like to tell the delegate/target object directly what to do after the results are built.
Conclusion: At this point, I would choose the delegation mechanism. This approach provides more safety and makes it easy to browse the code and follow the consequences of sending the delegate the results of the service provider's actions. The negative aspects of this solution are that it is a more static solution, we need to write more code (protocol-related stuff), and, semantically speaking, we're not really talking about delegation because the service provider wouldn't be delegating anything.
Am I missing something? what do you recommend and why?
Thanks!
You did miss a third option – notifications.
You could have the client observe for a notification from the service provider indicating that it has new data available. When the client receives this notification it can consume the data from the service provider.
This allows for nice loose coupling; some of the decision is just down to whether you want a push/pull system though.
Very good question.
I don't think I am qualified just yet (as I am a newbie) to comment on which design pattern is better than the other, but I just wanted to mention that the downside you mentioned in point 2 (run-time errors) can be avoided by checking the selector before invoking it:
if ([delegate respondsToSelector:callback]) {
    // invoke the callback here, passing back whatever result object your code produces
    [delegate performSelector:callback withObject:result];
}
Hope that helps to weigh the options
Another downside for the Delegation approach:
A service provider can only have one delegate. If your service provider is a singleton, and you have multiple clients, this pattern does not work.
This caused me to go for the Action/Target approach. My service provider holds state and is shared among multiple clients.

Jboss Service / Managed Bean Question

I have a managed bean / service running inside of JBOSS. I then have a quartz job that will occasionally wake up and call a method of the managed bean. This method is sometimes long and drawn out, and since I don't want the quartz job to time out, I have implemented a thread within the managed bean to perform the processing. When the thread is finished I need to update a database table with the results. This is a very serial process and it needs to be based upon some business rules, etc.
My main question is that I can use an EntityManager within the service without a problem; however, I cannot use it from within the thread: I get a NullPointerException. What would be the best way to address this?
Thanks,
Scott
As creating threads in appservers is discouraged, I'd modify the setup a bit.
I'd move the core of the processing to a message-driven bean, and have the Quartz job just send a message to the queue on which the MDB is listening. The MDB in turn can call your EJB, and that way everything remains within what's allowed by the standard.
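A rough sketch of that setup; the queue name, class names, and the runLongJob method are placeholders, and the Quartz job would simply send a JMS message to the queue instead of calling the bean directly:
import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/LongRunningJobQueue")
})
public class LongRunningJobMdb implements MessageListener {

    @EJB
    private ReportingService reportingService; // your existing managed bean / service

    public void onMessage(Message message) {
        // Runs on a container-managed thread, so container-injected resources such as
        // the EntityManager inside the EJB behave normally here.
        reportingService.runLongJob();
    }
}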
As per the documentation and specification, the EntityManager is not thread-safe and cannot be used across different child threads as I originally had in mind. I ended up going back to the original design, similar to the one provided by fvu; however, I found some annotations that allowed me to modify the bean timeout period and let the long-running process work properly. Here's the annotation that I used:
@PoolClass(value=org.jboss.ejb3.StrictMaxPool.class, timeout=360000000L)