I have a REST endpoint which starts the scheduler that loads an XML file into memory. Whenever I hit that endpoint, it loads the XML into memory and returns the XML once it's ready (this takes 10 - 15 seconds). When the same endpoint is accessed again, it returns the cached XML. Everything works fine, but for now I have to hit the endpoint manually for the scheduler to start. Is there a way to hit the endpoint automatically via some simple code at startup? Or is there any other solution for this?
Normally, a component in the Nucleus is instantiated at first access, not at system start-up.
The way to have anything done at start-up in ATG is to create your component, and then to add its Nucleus path to the list of initial services in the /Initial component (or in one of the many other Initial components chained off of it).
The component should be globally scoped. Because /Initial is instantiated at start-up, the services it references will also be instantiated as dependencies.
If your component is a POJO, then the no-argument constructor will be invoked on component start-up, and then the setX method will be called for each property with a value defined in its properties file.
If your component extends GenericService, then additionally the beforeSet and afterSet methods, if they exist, will be called before and after the set methods are invoked, and finally doStartService will be called.
This is all part of the fundamental lifecycle of components that the Nucleus manages.
This gives you a number of hooks with which to invoke your custom code.
Now, in your question, you ask how to call a REST endpoint at start-up. However, I believe what you actually want to ask is how to ensure that a particular piece of code gets executed at system start-up. A REST endpoint is how you are triggering it today, manually, from outside the Nucleus. But that does not mean it must be a REST endpoint that gets called if the code is to run automatically at start-up.
The easiest way to achieve what you want is (a minimal sketch follows these steps):
Define a class that extends GenericService.
Override the doStartService method.
Put the code you want to execute in this method, or invoke the code on another component from here.
Define a globally scoped component for the class.
Add the component to the initialServices property of the /Initial component.
Restart the server and check that your code is being called at start-up. Put some debug statements in, and switch debug logging on in your layer.
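Here is a minimal sketch of such a component, assuming a hypothetical XmlPreloadService class; the package, Nucleus path and the loader call are placeholders to adapt to your own code:

```java
package com.example.startup;

import atg.nucleus.GenericService;
import atg.nucleus.ServiceException;

public class XmlPreloadService extends GenericService {

    // Nucleus calls the no-arg constructor, then the property setters from the
    // .properties file, and finally doStartService() once the component starts.
    @Override
    public void doStartService() throws ServiceException {
        if (isLoggingDebug()) {
            logDebug("Preloading XML at server start-up");
        }
        // Invoke your existing loading logic here, e.g. on a component
        // injected via a setter:
        // getXmlLoader().loadIntoCache();
    }

    @Override
    public void doStopService() throws ServiceException {
        // Release the cached XML here if needed.
    }
}
```

The wiring is then two properties files in your config layer: the component's own file (e.g. /com/example/startup/XmlPreloadService.properties with $class=com.example.startup.XmlPreloadService; global scope is the default) and an /Initial.properties entry adding initialServices+=/com/example/startup/XmlPreloadService.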
Note, you may actually also want to think about whether you really need to invoke your code at system start-up. Anything in initial services adds to the start time of the server. Depending on your requirements, it may be better to do it on first access of your application service rather than at server start-up.
If I annotate a bean with @RefreshScope, I can get a new instance of it after its configuration changes (e.g. by triggering the refresh by calling /refresh).
But this is exactly what I'd like for each of my beans: why would I change a configuration file and then expect the change to take effect immediately for some parts of the application and only after a restart for others?
So the question is whether it's possible to apply @RefreshScope as the default scope.
Also, in a typical Spring Boot application a lot gets auto-configured (e.g. the datasource), and without a default scope I'd have to build those beans myself and annotate them properly. (Edit: @ConfigurationProperties are automatically refreshed, and since the Spring Boot DataSource auto-configuration is based on that, it is indeed refreshed without @RefreshScope.)
What am I missing here?
https://cloud.spring.io/spring-cloud-static/spring-cloud.html#_environment_changes and https://cloud.spring.io/spring-cloud-static/spring-cloud.html#_refresh_scope have all the answers.
@ConfigurationProperties are automatically refreshed when /refresh is called, so beans using these properties get the fresh values; for the others and for @Value there's @RefreshScope.
I don't think making @RefreshScope the default is possible.
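For the @Value case, a minimal sketch (class, endpoint and property names are hypothetical) of a bean that picks up a changed value once a refresh is triggered:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope // without this, the @Value field keeps the value it had at start-up
public class MessageController {

    @Value("${app.message:hello}")
    private String message;

    @GetMapping("/message")
    public String message() {
        // Reflects the new property value after the refresh has been triggered.
        return message;
    }
}
```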
Beans annotated with @RefreshScope don't get refreshed automatically after a configuration change. They get refreshed only after their cache entry is invalidated.
From the docs:
Refresh scope beans are lazy proxies that initialize when they are used (i.e., when a method is called), and the scope acts as a cache of initialized values. To force a bean to re-initialize on the next method call, you just need to invalidate its cache entry.
One way to invalidate the cache is by using the /refresh endpoint.
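If you prefer to trigger the invalidation from code rather than via the endpoint, here is a hedged sketch using spring-cloud-context's ContextRefresher (the surrounding class is hypothetical):

```java
import java.util.Set;

import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.stereotype.Component;

@Component
public class RefreshTrigger {

    private final ContextRefresher contextRefresher;

    public RefreshTrigger(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    public Set<String> refreshNow() {
        // Rebinds @ConfigurationProperties and clears the refresh-scope cache;
        // refresh-scoped beans are re-initialized on their next method call.
        return contextRefresher.refresh();
    }
}
```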
It's worth noting that a refresh-scoped bean can lead to unexpected behavior; please refer to the docs mentioned above to understand why this is not the default.
I understand that GWT apps are separated into frontend and backend code.
In the little example that I wrote, the backend operations (XXXServiceImpl) are always triggered by the frontend (a button press, etc.).
Question: Is there a way to run code in the Backend automatically? An example would be some initialization stuff that's not triggered by the Frontend (preferably it would be only executed once, during startup of the web app).
Calls to the server do not have to be triggered by a user. When your web app launches, it can make a call like initialize() to the backend, which will tell your server-side code to run some initialization method once.
If this initialization process is not dependent on a single client instance, you can add a check to see if the initialization has already been done and skip it in that case.
Finally, you can run a simple servlet that you can trigger manually (or using a cron-job, deferred task, etc. - depends on your platform) when you deploy your code. The drawback here is that you have to remember to do it every time a new server instance is started.
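To illustrate the initialize()-with-a-guard approach, here is a minimal server-side sketch (the InitService interface and class names are hypothetical); the client would call initialize() once when it starts:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

// Hypothetical RemoteService interface the client calls when the module loads.
interface InitService extends RemoteService {
    void initialize();
}

public class InitServiceImpl extends RemoteServiceServlet implements InitService {

    // Shared across all requests to this servlet; guards the one-time set-up.
    private static final AtomicBoolean INITIALIZED = new AtomicBoolean(false);

    @Override
    public void initialize() {
        // compareAndSet ensures the body runs only for the first caller,
        // no matter how many client instances hit this method.
        if (INITIALIZED.compareAndSet(false, true)) {
            // ... expensive one-time start-up work goes here ...
        }
    }
}
```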
Say I am using a service A which is imported into another service B. While B is running normally (of course A is active), what will happen if service A is uninstalled while service B is still running?
Service A -> Service B
What will be the different scenarios in case I am using ServiceReference, ServiceTracker & DS?
When a service is unpublished in OSGi, an event is sent to all bundles currently using that service to tell them that they should stop using it.
If you are using DS, your unbind method will be called. When it is called, you should make best efforts to stop using the service as soon as possible. But ultimately OSGi is a cooperative system, it cannot force you to release the service. However if you don't then you can cause problems, for example the service publisher will not be fully garbage-collected. You end up sabotaging the dynamics of the OSGi platform, possibly creating memory leaks and so on.
If you are using ServiceTracker then the removedService method will be called, and you need to respond in the same way. But didn't I tell you in the other question not to use ServiceTracker?? ;-)
If you are using ServiceReference then you need to explicitly register a ServiceListener in order to receive these events. This is why you really really shouldn't use this low-level API until you have gained a lot more experience (and once you do have that experience, you won't want to use it anyway!).
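As a concrete illustration of the DS case, here is a minimal sketch with a bind/unbind pair; the component name and the ServiceA interface are placeholders:

```java
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Placeholder for the service interface published by the other bundle (service A).
interface ServiceA { }

@Component
public class ServiceB {

    private volatile ServiceA serviceA;

    @Reference
    void bindServiceA(ServiceA a) {
        this.serviceA = a;
    }

    // Called by SCR when service A is unregistered: drop the reference promptly
    // so the publishing bundle can be released and garbage-collected.
    void unbindServiceA(ServiceA a) {
        this.serviceA = null;
    }
}
```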
First of all: one of the advantages of OSGi is that the behaviour of the framework and standard services are clearly specified. Those specifications can be downloaded from the OSGi Alliance web site, or, if you don't like reading PDFs, ordered for print. The question you are asking is perfectly answered in those specifications.
That said, in summary: when a service is unregistered:
The ServiceReference object remains as it is. However, a call to BundleContext.getService() with that reference will return null. Note that when using ServiceReferences you should release any reference you hold to the actual service object retrieved via getService(); this normally requires some kind of tracking of the service.
For ServiceTracker, the tracker removes the service from its tracked set. This normally results in a call to removedService() on the ServiceTracker or the defined ServiceTrackerCustomizer.
For DS, the defined unbind method for the referenced service is called (if one is specified). Furthermore, if the cardinality of the reference indicates that the service is mandatory, the using component may also be unregistered, possibly deactivated, or a new instance activated, depending on the availability of alternative services and the policy defined for the reference.
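For completeness, a minimal ServiceTracker sketch (reusing the hypothetical ServiceA interface from the previous snippet) showing where removedService() fits in:

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;

public class ServiceATracker extends ServiceTracker<ServiceA, ServiceA> {

    public ServiceATracker(BundleContext context) {
        super(context, ServiceA.class, null);
    }

    @Override
    public void removedService(ServiceReference<ServiceA> reference, ServiceA service) {
        // Stop using 'service' here; the super call ungets the service so the
        // framework can fully release the registering bundle.
        super.removedService(reference, service);
    }
}
```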
I am trying to implement a custom RMI activation scheme, in which remote Activatable objects will be hosted in a custom EXE process, instead of the standard Java.exe/Javaw.exe.
In RMI, 'Activatable' objects can be persisted and restored or launched on demand. After an 'Activatable' object is registered with the RMI registry and requested for the first time, RMID launches a host child process (typically java.exe/javaw.exe), passes two pieces of information through the stdin of the child process, and asks the child process to run the main method of a special hidden class, 'sun.rmi.server.ActivationGroupInit'. This class bootstraps everything else and prepares the process to create and host instances of the 'Activatable' object. Thereafter the client and server communicate over RMI.
I've gotten as far as defining a simple Win32 EXE project, writing some JNI code to launch the JVM inside this EXE, and managing to invoke the main method of 'sun.rmi.server.ActivationGroupInit'. This class is able to parse stdin and extract whatever it needs to create the ActivationGroup.
However, I am running into some issues that are ultimately causing the activation of the remote object to fail (with an UnknownObjectException), and I am in the process of troubleshooting them.
At this point I just wanted to take a step back and ask if anyone has attempted this before, and knows if there are any gotchas that I should know early on?
Thanks,
Ranjit
As we have discussed endlessly on the Oracle forums, you don't need any of this. Just copy java.exe or javaw.exe, or write your own wrapper that simply starts a JVM using all the arguments it is passed, in exactly the same way that java.exe does. You don't need to worry about what the activation system sends you on stdin, etc.; the existing activation classes will do all that for you.
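For reference, pointing rmid at a custom launcher executable is typically done through ActivationGroupDesc.CommandEnvironment; here is a hedged sketch, with placeholder paths, of registering such a group:

```java
import java.rmi.activation.ActivationGroup;
import java.rmi.activation.ActivationGroupDesc;
import java.rmi.activation.ActivationGroupID;
import java.util.Properties;

public class RegisterGroup {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("java.security.policy", "C:/policies/group.policy"); // placeholder

        // Tell rmid to launch this executable instead of the default java.exe.
        ActivationGroupDesc.CommandEnvironment env =
                new ActivationGroupDesc.CommandEnvironment(
                        "C:/myapp/MyLauncher.exe", // your wrapper EXE (placeholder)
                        new String[0]);            // extra command-line options, if any

        ActivationGroupDesc desc = new ActivationGroupDesc(props, env);
        ActivationGroupID gid = ActivationGroup.getSystem().registerGroup(desc);
        // ... register your Activatable objects against 'gid' as usual ...
    }
}
```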
I'm building a plugin for a 3rd party application and my plugin uses Autofac to wire up various components. The container is built at application startup, but the host application invokes my commands at a later time.
When a command is invoked, the host application provides a few instances of types that it defines and that my components will need to use. I'd like to register these instances in the container so that it can take care of wiring up the components that depend on these instances.
I'm aware that I can use a ContainerBuilder to update an existing container, but I'd like to remove these registrations when the command has completed as these instances will no longer be valid. Is this possible?
Maybe a better approach is to use 2 containers... The command could create a new container to register these instances and other components could be resolved from the application scoped container.
How could I hook up the 2 containers so that resolve calls bubble up to the application scoped container?
Are there any gotchas to be aware of with this approach? I imagine there may be component lifetime issues...
Edit: Now I've done a bit more research and testing, and it turns out I can just use the BeginLifetimeScope(Action<ContainerBuilder>) overload to register the host-application-provided instances for the nested lifetime scope only. For some reason I thought that adding registrations to the nested lifetime scope would result in them being added to the root container, but that doesn't seem to be the case.
As noted in my edit above, it turns out that BeginLifetimeScope(Action<ContainerBuilder>) is exactly what I need. For some reason I thought that adding registrations to the nested lifetime would result in them being added to the root container and therefore being resolvable after the nested lifetime scope ends, but that doesn't seem to be the case.