I am using Helm's built-in object 'Release.IsUpgrade' to ensure an init container is only run at upgrade time.
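For context, the guard currently looks something like this (a minimal sketch; the init container itself is a placeholder):

{{- if .Release.IsUpgrade }}
initContainers:
  - name: upgrade-init               # placeholder init container
    image: example/upgrade-init:latest
{{- end }}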
I want to only run the init-container when upgrading from a specific Chart version.
Is it possible to get the 'from' chart version during a helm upgrade?
It doesn't look like this information is published either in the .Release object or through information available to a hook job.
You probably want a pre-upgrade hook and not an init container. If you have multiple replicas on your deployments, the init container will run on all of them; even if you have just one, if the node it's on fails and is replaced, the replacement will re-run the init container. A pre-upgrade hook will run just once, regardless of how the corresponding deployments are configured.
That hook will be a separate pod (and will require writing code), so within that you can do whatever you want. You can give it read access to the Kubernetes API to get the definition of an existing deployment, for example, and then look at its labels or container image tag to find out what version of the chart/application is running now. (There are standard labels that can help with this.) You could also make the upgrade step just look for its own outputs: if object X is supposed to exist, create it if it's not there, without focusing on specific versions.
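As a hedged sketch of that approach (the deployment name, service account, and image are assumptions, not part of your chart; the label read is the standard app.kubernetes.io/version):

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-preupgrade-check
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: preupgrade-check   # needs RBAC allowing 'get' on deployments
      restartPolicy: Never
      containers:
        - name: check
          image: bitnami/kubectl:latest      # any image with kubectl will do
          command:
            - sh
            - -c
            - |
              # Read the standard version label from the currently running Deployment
              FROM=$(kubectl get deployment my-app \
                -o jsonpath="{.metadata.labels['app\.kubernetes\.io/version']}")
              echo "Upgrading from application version: $FROM"
              # ...run version-specific migration logic here...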
I have an application "A" which requires a postgres database. When I deploy the Helm chart, it also deploys the dependent child chart, postgres.
helm install application_A -n mynamespace ./applicationA-0.1.0.tgz
Now I have another application "B" which also requires a postgres database. I wish to deploy application B in the same namespace, but it should not deploy a new postgres pod, since one is already available from application A's deployment.
helm install application_B -n mynamespace ./applicationB-0.1.0.tgz
It fails with the following error -
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists
I want helm to recognize that the dependency of application 'B' is already deployed with the desired version and hence it should automatically avoid deploying the dependent chart.
I am aware of conditional deployment of subcharts. That would require me to find out what is already deployed, using a shell script, and toggle the condition when deploying application B.
Is there any way in helm to automatically avoid deploying subchart if it is already deployed?
helm version - v3.8.0
Ideally, there should not be any relation between two Helm release installations; each Helm installation always tries to install its own components. Here it is likely failing because of a hardcoded resource name. For your use case, you can use Helm's built-in lookup function to skip creating a resource if it already exists.
link:
https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
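For example, a minimal sketch of gating the postgres resources on a lookup (the resource kind and name are assumptions about what the subchart creates):

{{- if not (lookup "apps/v1" "StatefulSet" .Release.Namespace "postgres") }}
# ...render the postgres resources here, only when nothing with that name exists yet...
{{- end }}

Note that lookup returns an empty result during helm template and client-side dry runs, so the guard only takes effect against a live cluster.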
I'm running the "apache/sling" Docker image in an Azure App Service. AFAICT, App Service only supports visibility/persistence in the "/home" directory in the container. Therefore, I think I need to change Sling's "sling.home" property from its default "/opt/sling" to something like "/home/sling".
I suspect I have to create my own Docker image based on "apache/sling", but still, between all the possible ways to set "sling.home", in OSGi framework properties, JAR files, command-line arguments, etc., I'm at a loss as to what to actually change in my own Docker image. Should I build some part of this from modified source? Or what?
BTW, this is all in order to start working with Sling until my org eventually gets AEM.
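If the custom-image route is the way to go, a hedged sketch might look like the Dockerfile below. The launchpad jar path, and the assumption that replacing the base image's entrypoint is sufficient, are unverified guesses about the apache/sling image; sling.home is set here as a JVM system property, which Sling generally honors.

FROM apache/sling
# Unverified assumptions: the jar path below, and that overriding the
# entrypoint is enough; inspect the base image's actual ENTRYPOINT/CMD first.
ENTRYPOINT ["java", "-Dsling.home=/home/sling", "-jar", "/opt/sling/org.apache.sling.launchpad.jar"]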
We have a microservices-ish architecture that currently runs on a VM. Each of our applications is deployed as DLLs with no executable. To run one, we spawn a new instance of our activator, passing the path of the application as an argument. The process activator injects behaviors into the application via DI, such as proxies and service-discovery logic.
This has the benefit that the applications and the process activator can be developed, managed, and deployed independently of one another. If we have an update for the activator, we only need to deploy it and restart all applications for the changes to take effect; no need to redeploy an application, much less rebuild it.
As we are now developing a plan to migrate our architecture to Kubernetes, however, we've hit a roadblock because of this design. We haven't been able to find a way to replicate it. We've thought of simply deploying the two together and setting the activator as the entrypoint; however, that would mean that any time we update the activator, all applications' images would have to be updated as well, which completely defeats the purpose of this design.
We've also thought of deploying them as two different containers and somehow making the activator read the contents of the application container and then load its DLLs, but we don't know if it's possible for a container to read the contents of another.
In the end, is there a way to make this design work in Kubernetes?
If the design requires the following:
injecting files into the main container to change its behaviour,
then a viable choice is to use init containers. Init containers run before the main container (or containers) starts; for example, they can copy files for the main container to use.
You could have this activator as the main container, with each app being a different init container that contains that app's DLLs.
When an init container starts, it copies the DLLs of its app onto an ephemeral volume (an emptyDir) to make them available to the main container. Then the activator container starts, finds the DLLs at a known path, and can do whatever it wants with them.
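As a concrete sketch (all names, images, and paths here are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      initContainers:
        - name: app-a-dlls                        # image holding only the app's DLLs
          image: registry.example.com/app-a:1.2.3
          command: ["sh", "-c", "cp -r /app/. /shared/"]
          volumeMounts:
            - name: dlls
              mountPath: /shared
      containers:
        - name: activator                          # the activator image
          image: registry.example.com/activator:2.0.0
          args: ["/shared/AppA.dll"]               # path where the init container copied the DLLs
          volumeMounts:
            - name: dlls
              mountPath: /shared
      volumes:
        - name: dlls
          emptyDir: {}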
This way:
If you need to update the activator, you need to update the main container image (bump its tag) and then update the definitions of all the Deployments / StatefulSets to use the new image.
If you need to update one of the apps, you need to update its single init container image (bump its tag) and then update the definition of the Deployment / StatefulSet of that particular app.
This strategy works perfectly fine (I think) if you are OK with the idea that you'll still need to define all the apps in the cluster. If you have 3 apps, A, B, and C, you'll need to define 3 Deployments (or StatefulSets, if the apps are stateful for some reason), each using the right init container.
If the applications are mostly identical and only a few things change, like the DLLs to inject into the activator, you could consider using Helm to define your resources on the cluster, since it lets you template the resources and customize them with very little overhead.
Some documentation:
Init Containers: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Example of making a copy of files between Init container and Main container: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/
Helm: https://helm.sh/docs/
I have a query related to the prometheus-operator Helm chart and Alertmanager combination.
Currently we are using prometheus-operator helm chart:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
and I wrote a simple rule in values.yml (just a sample) to generate an alert. Further, I am using Alertmanager config/routes/receivers to send alerts; it's working perfectly fine.
But in a real-world deployment I may end up with many alert rules. Is there any way to move all these rules into a separate rules file and configure its path in values.yml (under the additionalPrometheusRules section)?
I also saw kube-prometheus-stack & additionalPrometheusRulesMap (in values.yml):
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml
But I didn't find a solution there. Can anyone help me with this?
Helm doesn't typically allow includes in values.yaml files. I've read that there's a way to do it, but it depends on how the chart is built, and upstream maintainers typically don't use templates that way, AFAIK (I could be wrong, but I've never seen it).
Your problem is exactly the same one I've been trying to solve adequately, and I think I came up with something. It's not perfect, but it is better than having one huge monolithic values.yaml file.
Helm allows the operator to specify multiple values files using the pattern -f values1.yaml -f values2.yaml -f some-more-values.yaml, so I broke my values file up into multiple logically divided YAML files.
There might be gotchas, so be aware, but so far for this use-case, it seems to be working. I'm still testing things out. https://helm.sh/docs/helm/helm_install/
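For example (release name and file split are hypothetical), the additionalPrometheusRulesMap block from the question could live in its own values-alert-rules.yaml, deployed with:

helm upgrade --install monitoring prometheus-community/kube-prometheus-stack \
  -f values-core.yaml \
  -f values-alert-rules.yaml

Keep in mind that when the same key appears in several files, the rightmost -f file takes precedence.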
You can also add your own custom rules file using config maps. In this way, you can avoid over-alerting and get notified for specific alerts only.
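Concretely, with the prometheus-operator the usual vehicle for such a custom rules file is a PrometheusRule custom resource rather than a plain ConfigMap. A minimal sketch (the names, the rule itself, and the release label are illustrative; the label must match your Prometheus ruleSelector):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-extra-rules
  labels:
    release: monitoring        # assumption: matches the operator's ruleSelector
spec:
  groups:
    - name: my.rules
      rules:
        - alert: HighErrorRate
          expr: rate(http_requests_total{status="500"}[5m]) > 0.1
          for: 10m
          labels:
            severity: warning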
I am very new to Helm and Kubernetes. I have a use case where, whenever there is a helm upgrade
helm upgrade xxx --values yy.yaml
I need to trigger the invocation of a method in my Docker image, which is deployed onto the Kubernetes pods, so that I can act on the changes in yy.yaml from the image code. Is there some way of doing this? Please help me here.
Though your problem is not entirely clear to me, I think you can use a Helm hook to do this. Please check the documentation: https://helm.sh/docs/topics/charts_hooks/
You can certainly use a pre-upgrade hook to invoke your function.
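A minimal sketch of such a hook (the image and the command are placeholders for whatever invokes your method):

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-pre-upgrade
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: invoke
          image: example/myapp:latest   # placeholder image containing your method
          command: ["/bin/sh", "-c", "invoke-my-method --config /config/yy.yaml"]   # placeholder command

Helm waits for the hook Job to complete before applying the upgraded manifests, so the method runs exactly once per upgrade.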