How to run a script in a pod once, manually, using helm - kubernetes

I'm looking for the correct way to run a one-time maintenance script on my Kubernetes cluster.
I've got my deployment configured via Helm, so everything is bundled in my chart and works extremely well from an automation point of view.
The problem is running a script just once. I know Helm has hooks, but I don't think those can be triggered manually (only pre/post install/upgrade, etc.). Compare that with running kubectl apply -f my-maintenance-script.yaml, which I can do just once and be done with.
Is there a best-practice way of doing this? I want to be able to use Helm since I can feed all my config/template values into the Job.

You can use a Kubernetes Job, and use helm test to run it.
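For example, a minimal chart test under templates/tests/ might look like the sketch below; the image name and script path are placeholders for your own maintenance container. You then trigger it on demand with helm test <release-name>, so it behaves like a manually run, templated one-off script.

apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-maintenance-test"
  annotations:
    "helm.sh/hook": test              # "test-success" on older Helm 2 charts
spec:
  restartPolicy: Never
  containers:
    - name: maintenance
      image: myregistry/maintenance:1.0                       # placeholder image containing your script
      command: ["/bin/sh", "-c", "/scripts/maintenance.sh"]   # placeholder script path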

Related

Stopping all pods in Kubernetes cluster before running database migration job

I deploy my app into the Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run the migrations with a Kubernetes Job object using a Helm "pre-upgrade" hook.
The problem is that when the migration job starts, the old-version pods are still working with the database. They can hold locks on objects in the database, and because of that the migration job may fail.
So I want to somehow automatically stop all the pods in the cluster before the migration job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate all answers.
There are two ways I can see to do this.
The first option is to scale the deployment down to zero replicas before the upgrade (for example, from Jenkins, CircleCI, GitLab CI, etc.):
kubectl scale deployment/{deployment-name} --replicas=0 -n {namespace}
helm install .....
The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook with a higher priority than the migrations hook (i.e. a lower helm.sh/hook-weight), so it runs before the migration job, and have that hook do the kubectl scale down.
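A rough sketch of that second option, assuming hook weights (lower weights run first), a placeholder deployment name, and a service account with RBAC permission to scale deployments:

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-scale-down"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"                  # lower weight than the migration hook, so it runs first
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: scale-down             # placeholder; needs RBAC rights to scale deployments
      restartPolicy: Never
      containers:
        - name: scale-down
          image: bitnami/kubectl:latest          # placeholder kubectl image
          command: ["kubectl", "scale", "deployment/{{ .Release.Name }}-app", "--replicas=0"]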

Kubernetes exec script after helm upgrade

Is there a way to execute some code in pod containers when ConfigMaps are updated via Helm? Preferably without a custom sidecar doing constant file watching.
I am thinking along the lines of the postStart and preStop lifecycle hooks in Kubernetes, but in my case something like a "postPatch".
This might be something a post-install or post-upgrade hook would be perfect for:
https://helm.sh/docs/topics/charts_hooks/
You can trigger these jobs to start after an install (post-install) and/or after an upgrade (post-upgrade) and they will run to completion before the chart is considered installed or upgraded.
So you can do the upgrade, and as part of that upgrade the hook would trigger after the update and run your update code. I know the nginx ingress controller chart does something like this.
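As a sketch, the Job that carries your update code only needs the hook annotations; the names, image, and command below are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-upgrade"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: run-update
          image: myregistry/update-job:1.0                       # placeholder image with your update code
          command: ["/bin/sh", "-c", "/scripts/after-upgrade.sh"] # placeholder script path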

Helm chart copy shell script from local machine to remote pod, change permission and execute

Is there a way I can copy a shell script from my local machine to a pod using charts and Helm, change the script's permissions, and execute the script inside the pod?
No, Helm cannot do this. In effect, the only Kubernetes commands it can run are the equivalents of kubectl apply and kubectl delete, though it can apply templating before sending the YAML off to the Kubernetes server. The sorts of imperative commands you're describing (kubectl cp and kubectl exec) aren't things Helm can do.
(The sorts of imperative commands you're describing aren't generally good form in Kubernetes in any case. Generally you'd need to package your script up in a Docker image to be able to run it in the cluster, and you want your containers to be able to set themselves up as much as they can. Also remember that pods get deleted routinely, sometimes even outside of your control, and anything you've manually copied into a pod will be lost when that happens.)
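As a sketch of that image-based approach, with a hypothetical my-script.sh, you'd bake the script into an image and then have your chart run that image (for example as a Job or hook, as in the other answers):

FROM alpine:3.19                                  # placeholder base image
COPY my-script.sh /usr/local/bin/my-script.sh
RUN chmod +x /usr/local/bin/my-script.sh
CMD ["/usr/local/bin/my-script.sh"]

docker build -t myregistry/my-script:1.0 .        # placeholder image name
docker push myregistry/my-script:1.0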

How to use helm chart test to do integration tests?

I am trying to use helm test to run some integration tests, to verify that the service code I am deploying is actually doing the right thing.
Basically, my setup (as described here: https://docs.helm.sh/developing_charts/#chart-tests) is to create a templates/tests/integration-test.yaml chart test file and specify a container to run in it. That container is a customized Maven image with the test code added in; the test container is simply started with "mvn test", which does some simple curl checks against the Kubernetes service this whole Helm release deploys.
In this way, the helm test does work.
However, the issue is that while the helm test is running, the new version of the service code is already online and exposed to the outside world/users. I can of course immediately roll back if the helm test fails, but that still means the problem version of the service code is served to the outside world for a while.
Is there a way, where one can run a service/integration test on a pod, after the pod is started but before it is exposed to the Kubernetes service?
Ideally you'll install and test on a test environment first, either a dedicated test cluster or namespace. For an additional check you could install the chart first into a new namespace, let the tests run there, and then delete that namespace once everything has passed. This does require writing the tests so that they can hit URLs that are specific to that namespace. Cluster-internal URLs based on service names will be namespace-relative anyway, but if you use external URLs in the tests then you'd either need to switch them to internal URLs or use prefixing.
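A shell sketch of that throwaway-namespace flow, assuming Helm 3 command syntax and placeholder release/namespace/chart names:

kubectl create namespace myapp-test
helm install myapp-test ./mychart --namespace myapp-test   # install the release into the throwaway namespace
helm test myapp-test --namespace myapp-test                # run the chart tests there
helm uninstall myapp-test --namespace myapp-test           # tear down once the tests have passed (or failed)
kubectl delete namespace myapp-test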
Use the readiness and liveness probes in the pod spec to ensure that the deployment won't even roll out if there are probe failures.
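For example, a readiness/liveness probe sketch for the pod spec, assuming the service exposes an HTTP health endpoint at /healthz on port 8080 (both placeholders):

containers:
  - name: my-service
    image: myregistry/my-service:1.2.3          # placeholder
    ports:
      - containerPort: 8080
    readinessProbe:                             # pod receives no Service traffic until this passes
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                              # pod is restarted if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

With a Deployment's rolling update, a new pod that never becomes ready won't be added to the Service, and the rollout stalls instead of replacing the healthy pods.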

Is there a way to make kubectl apply restart deployments whose image tag has not changed?

I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml
I'm tagging all builds with the current git hash, which means that for clean deploys to GKE all the services get a new Docker image tag, so apply will restart them. Locally on minikube, though, the tag is often not changing, which means that new code is not run. Before, I was working around this by calling kubectl delete and then kubectl create when deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far.
Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than just depending on the tag changing.
I'm curious how people have been approaching this problem.
Additionally, I'm building everything with bazel which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just delete/creating the one service I'm working on and leave the others running.
But in that case, maybe I should just look at telepresence and run the service I'm dev'ing on outside of minikube all together? What are best practices here?
I'm not entirely sure I understood your question but that may very well be my reading comprehension :)
In any case here's a few thoughts that popped up while reading this (again not sure what you're trying to accomplish)
Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to 0 and then back up. Given you're using a ConfigMap and maybe only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml, and then back to whatever replica count you want.
Option 2: if you want to apply the deployment and not kill any pods, for example, you could look at the --cascade=false flag (google it).
Option 3: look up the kubectl rollout commands for managing deployments; not sure if they work on services though.
Finally, and that's only me talking, share some more details, like which version of k8s you are using, and maybe provide an actual use-case example to better describe the issue.
Kubernetes only triggers a deployment when something has changed. If your image pull policy is set to Always, you can delete your pods to get the new image; if you want Kubernetes to handle the rollout, you can update the YAML file to contain a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally, you should be tagging your images with unique tags from your CI/CD pipeline, using the commit reference they have been built from; this gets around the issue and allows you to take full advantage of the Kubernetes rollback feature.
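A sketch of the changing-metadata-field trick: put a placeholder annotation in the Deployment's pod template and substitute a timestamp at deploy time, so every kubectl apply changes the template and triggers a rollout (the annotation key and placeholder name are arbitrary):

spec:
  template:
    metadata:
      annotations:
        deploy-timestamp: "TIMESTAMP_PLACEHOLDER"   # replaced at deploy time

sed "s/TIMESTAMP_PLACEHOLDER/$(date +%s)/" deployments-and-services.yaml | kubectl apply -f -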