Process Activator on Kubernetes

We have a microservices-ish architecture that currently runs on a VM. Each of our applications is deployed as a set of DLLs with no executable. To run them, we spawn a new instance of our activator, passing the path of the application as an argument. The process activator injects behaviors into the application via DI, such as proxies and service discovery logic.
This has the benefit that the applications and the process activator can be developed, managed and deployed independently of one another. If we have an update for the activator, we only need to deploy it and restart all applications for our changes to take effect; no need to re-deploy an application, much less rebuild it.
As we are now developing a plan to migrate our architecture to Kubernetes, however, we've hit a roadblock because of this design, and we haven't been able to find a way to replicate it. We've thought of simply deploying the two together and setting the activator as the entrypoint; however, that would mean that any time we update the activator, all applications' images would have to be updated as well, which completely defeats the purpose of this design.
We've also thought of deploying them as two different containers and somehow making the activator read the contents of the application container and then load its DLLs, but we don't know if it's possible for a container to read the contents of another.
In the end, is there a way to make this design work in Kubernetes?

If the design requires the following:
Inject files into the main container to change its behaviour
then a viable choice is to use init containers. Init containers can perform operations before the main container (or containers) starts; for example, they can copy files for the main container to use.
You could have the activator as the main container and each of the various apps as its own init container that contains that app's DLLs.
When an init container starts, it copies its app's DLLs onto an ephemeral volume (an emptyDir) to make them available to the main container. Then the activator container starts, finds the DLLs at a known path, and can do whatever it wants with them.
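Below is a minimal sketch of what one such Deployment could look like; the image names, paths and DLL name are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      volumes:
        - name: app-dlls
          emptyDir: {}          # ephemeral volume shared by the two containers
      initContainers:
        - name: app-a-dlls
          image: registry.example.com/app-a:1.0.0      # hypothetical image containing only the app's DLLs
          command: ["sh", "-c", "cp -r /app/. /dlls/"] # assumes the image has a shell
          volumeMounts:
            - name: app-dlls
              mountPath: /dlls
      containers:
        - name: activator
          image: registry.example.com/activator:2.3.0  # hypothetical activator image
          args: ["/dlls/AppA.dll"]                     # activator loads the DLLs copied by the init container
          volumeMounts:
            - name: app-dlls
              mountPath: /dlls
              readOnly: true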
This way:
If you need to update the activator, you need to update the main container image (bump its tag) and then update the definitions of all the Deployments / StatefulSets to use the new image.
If you need to update one of the apps, you need to update its single init container image (bump its tag) and then update the definition of the Deployment / StatefulSet of that particular app.
This strategy works perfectly fine (I think) if you are OK with the idea that you'll still need to define all the apps in the cluster. If you have 3 apps, A, B and C, you'll need to define 3 Deployments (or StatefulSets if the apps are stateful for some reason), each using the right init container.
If the applications are mostly identical and only a few things change, such as the DLLs to inject into the activator, you could consider using Helm to define your resources on the cluster, as it lets you template the resources and customize them with very little overhead.
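For instance (a hypothetical sketch), the chart could template only the parts that differ per app, and each app would get its own small values file:

# values-app-a.yaml (hypothetical)
appName: app-a
appImage: registry.example.com/app-a:1.0.0
appDll: AppA.dll

# templates/deployment.yaml (excerpt; activatorImage would come from the chart's shared values.yaml)
      initContainers:
        - name: "{{ .Values.appName }}-dlls"
          image: "{{ .Values.appImage }}"
      containers:
        - name: activator
          image: "{{ .Values.activatorImage }}"
          args: ["/dlls/{{ .Values.appDll }}"]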
Some documentation:
Init Containers: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Example of making a copy of files between Init container and Main container: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/
Helm: https://helm.sh/docs/

Related

Change sling.home in Azure App Service container

I'm running the "apache/sling" Docker image in an Azure App Service. AFAICT, App Service only supports visibility/persistence in the "/home" directory in the container. Therefore, I think I need to change Sling's "sling.home" property from its default "/opt/sling" to something like "/home/sling".
I suspect I have to create my own Docker image based on "apache/sling", but still, among all the possible ways to set "sling.home" (OSGi framework properties, JAR files, command-line arguments, etc.), I'm at a loss as to what to actually change in my own Docker image. Should I build some part of this from modified source? Or what?
BTW, this is all in order to start working with Sling until my org eventually gets AEM.

How do I find the 'from' chart version during a Helm upgrade?

I am using the Helm built-in object 'Release.IsUpgrade' to ensure an init container is only run at upgrade.
I want to run the init container only when upgrading from a specific chart version.
Is it possible to get the 'from' chart version during a helm upgrade?
It doesn't look like this information is published either in the .Release object or through information available to a hook job.
You probably want a pre-upgrade hook and not an init container. If you have multiple replicas on your deployments, the init container will run on all of them; even if you have just one, if the node it's on fails and is replaced, the replacement will re-run the init container. A pre-upgrade hook will run just once, regardless of how the corresponding deployments are configured.
That hook will be a separate pod (and will require writing code), so within that you can do whatever you want. You can give it read access to the Kubernetes API to get the definition of an existing deployment, for example, and then look at its labels or container image tag to find out what version of the chart/application is running now. (There are standard labels that can help with this.) You could also make the upgrade step just look for its own outputs: if object X is supposed to exist, create it if it's not there, without focusing on specific versions.
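As a rough sketch (the deployment name, image and RBAC wiring are assumptions, and kubectl is used here for brevity where you might write real code), such a hook could be a Job annotated with helm.sh/hook: pre-upgrade that asks the API for the currently running image:

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-version-check
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: upgrade-checker     # hypothetical; needs RBAC permission to get Deployments
      restartPolicy: Never
      containers:
        - name: check
          image: bitnami/kubectl:latest       # any image with kubectl will do; an assumption
          command:
            - sh
            - -c
            - kubectl get deployment my-app -o jsonpath='{.spec.template.spec.containers[0].image}'

The printed image tag tells you which version is running now; the hook's real logic would branch on that value.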

How can I make Tomcat run a command after it finishes deploying a web application's .war files

We know that during startup Tomcat deploys the .war files of its web applications. After the deployment I need to run a command to modify a file inside the web application's WEB-INF/ directory (a file generated during deployment), and I need Tomcat to do this automatically for me. Is this possible to achieve? Something like a post-deployment command.
I found that a CustomEventHookListener could probably do this (How to run script on Tomcat startup?), but that involves writing a new Java class, and I'm not allowed to do so. I have to figure out a way to do it by modifying the existing Tomcat configs, like server.xml or tomcat.conf in TOMCAT_HOME/conf.
The main issue with not using an event hook listener is that there's no reliable way to tell whether the application is ready, as Catalina implements its own lifecycle for each of its components (see https://tomcat.apache.org/tomcat-8.5-doc/api/org/apache/catalina/Lifecycle.html).
Your best shot is to use tail or some external program to infer the component's state, but AFAIK there's no way to implement listeners directly in the configuration files.
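As an illustration of the tail idea (a sketch only; the log path, the message to match and the file to patch are assumptions to verify against your Tomcat setup), a script started alongside Tomcat could look like this:

#!/bin/sh
# Block until Tomcat logs that the webapp deployment finished, then patch the generated file.
# grep -q exits on the first match, which ends the tail through the broken pipe.
tail -n 0 -F /opt/tomcat/logs/catalina.out | grep -q "Deployment of web application"
sed -i 's/old-value/new-value/' /opt/tomcat/webapps/myapp/WEB-INF/generated.properties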

Passing RAILS_ENV into Torquebox without using a Deployment Descriptor

I am wondering if there is a way to pass a value for RAILS_ENV directly into the Torquebox server without going through a deployment descriptor; similar to how I can pass properties into Java with the -D option.
I have been wrestling with various deployment issues with Torquebox over the past couple of weeks. I think a large part of the problem has to do with packaging the gems into the Knob file, which is the most practical way of managing them in a Windows environment. I have tried archive deployment and expanded deployment, with and without an external deployment descriptor.
With an external deployment descriptor, I found the packaged Gem dependencies were not properly deployed and I received errors about missing dependencies.
When expanded, I had to fudge around a lot with the dependencies and what got included in the Knob, but eventually I got it to deploy. However, certain files in the expanded Knob were marked as failed (possible duplicate dependencies?), though they did not affect the overall deployment. The problem was that when the server restarted, deployment would fail the second time, saying it could not redeploy one of the previously failed files.
The only approach I have found to work consistently is archive deployment without an external deployment descriptor. However, I still need a way to tell the application which environment it is running in. I have a different Torquebox instance for each environment and each runs only the one application, so it would be fairly reasonable to configure this at the server level.
Any assistance in this matter would be greatly appreciated. Thank you very much!
The solution I finally came to was to pass in RAILS_ENV as a Java property to the Torquebox server and then to set ENV['RAILS_ENV'] to this value in the Rails boot.rb initializer.
Step 1: Set Java Property
First, you will need to set a Rails environment Java property for your Torquebox server. To keep with standard Java conventions, I called this rails.env.
Depending on your platform and configuration, this change will need to be made in one of the following scripts:
Using JBoss Windows Service Wrapper: service.bat
Standalone environment: standalone.conf.bat (Windows) or standalone.conf (Unix)
Domain environment: domain.conf.bat (Windows) or domain.conf (Unix)
Add the following line to the appropriate file above to set this Java property:
set JAVA_OPTS=%JAVA_OPTS% -Drails.env=staging
The -D option is used for setting Java system properties.
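For the Unix variants mentioned above (standalone.conf / domain.conf), the equivalent line uses shell syntax to append to JAVA_OPTS:

JAVA_OPTS="$JAVA_OPTS -Drails.env=staging"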
Step 2: Set ENV['RAILS_ENV'] based on Java Property
We want to set the RAILS_ENV as early as possible, since it is used by a lot of Rails initialization logic. Our first opportunity to inject application logic into the Rails Initialization Process is boot.rb.
See: http://guides.rubyonrails.org/initialization.html#config-boot-rb
The following line should be added to the top of boot.rb:
# boot.rb (top of the file)
ENV['RAILS_ENV'] = ENV_JAVA['rails.env'] if defined?(ENV_JAVA) && ENV_JAVA['rails.env']
This needs to be the first thing in the file, so Bundler can make intelligent decisions about the environment.
As you can see above, a seldom-mentioned feature of JRuby is that it conveniently exposes all Java system properties via the ENV_JAVA global map (mirroring the Ruby ENV map), so we can use it to access our Java system property.
We check that ENV_JAVA is defined (i.e. JRuby is being used), since we support multiple deployment environments.
I force the rails.env property to be used when present, as it appears that RAILS_ENV already has a default value at this point.

Sharing a fabfile across multiple projects

Fabric has become my deployment tool of choice both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:
copying the fabfile.py from one Django project to another and
modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).
One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt, I have a recreatable virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:
being able to pip install the common tasks defined in the fabfile.py and
having a fab_config file containing the host configuration information for each project and overriding any tasks as needed
Any recommendations on how to increase the DRYness of my Fabric workflow?
I've done some work in this direction with class-based "server definitions" that include connection info and can override methods to do specific tasks in a different way. Then my stock fabfile.py (which never changes) just calls the right method on the server definition object.
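As a hypothetical sketch of that approach (Fabric 1.x API; module names, hosts and tasks are made up):

# server_defs.py -- shared, pip-installable base class (hypothetical)
from fabric.api import env, sudo

class ServerDefinition(object):
    host = 'example.com'
    ssh_port = 22

    def configure(self):
        # point Fabric at this server
        env.hosts = ['%s:%d' % (self.host, self.ssh_port)]

    def webserver_restart(self):
        sudo('service apache2 restart')      # default; override per project

# fab_config.py -- per-project overrides, kept in the project's Git repo
from fabric.api import sudo
from server_defs import ServerDefinition

class ProjectServer(ServerDefinition):
    host = 'myproject.example.com'
    ssh_port = 2222

    def webserver_restart(self):
        sudo('service nginx restart')        # this project runs Nginx

# fabfile.py -- stock, never changes; just delegates to the server definition
from fab_config import ProjectServer

server = ProjectServer()
server.configure()

def webserver_restart():
    server.webserver_restart()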