Helm chart: copy shell script from local machine to remote pod, change permission and execute - kubernetes-helm

Is there a way I can copy a shell script from my local machine to a pod using charts and Helm, change the script's permissions, and execute the script inside the pod?

No, Helm cannot do this. In effect, the only Kubernetes commands it can run are kubectl apply and kubectl delete, though it can apply templating before sending the YAML off to the Kubernetes server. The sorts of imperative commands you're describing (kubectl cp and kubectl exec) aren't things Helm can do.
(The sorts of imperative commands you're describing aren't generally good form in Kubernetes in any case. Generally you'd need to package your script up in a Docker image to be able to run it in the cluster, and you want your containers to be able to set themselves up as much as they can. Also remember that pods get deleted routinely, sometimes even outside of your control, and anything you've manually copied into a pod will be lost when that happens.)
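If the goal is simply to run a script once when the chart is installed, a common declarative workaround (sketched below with placeholder names; this is a general pattern, not something Helm does for you automatically) is to ship the script in a ConfigMap and run it from a Job, both rendered as ordinary chart templates:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-script                    # placeholder name
data:
  setup.sh: |
    #!/bin/sh
    echo "running one-time setup"    # put the real script body here
---
apiVersion: batch/v1
kind: Job
metadata:
  name: run-my-script
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: run-script
          image: busybox             # any image that has a shell
          command: ["sh", "/scripts/setup.sh"]
          volumeMounts:
            - name: script
              mountPath: /scripts
      volumes:
        - name: script
          configMap:
            name: my-script
            defaultMode: 0755        # mounts the script with execute permission
This keeps everything declarative: helm install/upgrade applies the manifests, and the Job runs the script inside its own container rather than inside an existing pod.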

Related

Kubernetes apply to get to desired state

I feel like I have a terrible knowledge gap when it comes to managing the resource states within Kubernetes.
Suppose I have 2 deployments in my cluster, foo1 and foo2. They are both defined in separate yaml files, foo1.yaml and foo2.yaml that are both inside a my-dir directory and have been applied with kubectl apply -f my-dir/
Now I want to make a third deployment, but also delete my second deployment. I know that I can do this in 2 steps:
Make another foo3.yaml file inside the directory and then do kubectl apply -f my-dir/foo3.yaml
Run kubectl delete -f my-dir/foo2.yaml to get rid of the second deployment.
My question is, can I do this in one shot by keeping the "desired state" in my directory? I.e., is there any way that I can delete foo2.yaml, create a new foo3.yaml, and then just do kubectl apply -f my-dir/ to let Kubernetes handle the deletion of the removed resource file as well? What am I missing here?
The best and easiest way is to use DevOps tools like Jenkins, Ansible, or Terraform for managing your deployments. If you don't want to use external tools, there is a Python client library for Kubernetes. You can fetch the details of your Kubernetes resources (deployments, pods, etc.) with this library, and you can also use it to manage your cluster. Similarly, if you want to remove the deployment files, you just need to add a few more lines to delete the file.

Is it possible to view files in a pod without using the kubectl exec command?

I am using PostgreSQL in a pod on a self-hosted Kubernetes 1.23 (using Kubeadm). This pod was configured using the Helm Chart located at https://github.com/bitnami/charts/. I would like to be able to view files within this pod like individual log files (not the one exposed by kubectl logs) in order to find out what is wrong with the information being passed between my Entity Framework Core app and PostgreSQL.
However, this pod does not seem to have the ability to use kubectl exec, which is what I would normally use to view any file within a pod (and is how people seem to suggest doing so online). Is there a way to obtain a copy of or view the files within a pod running in Kubernetes without using kubectl exec, and if so, how would I do so?
Additionally, the storage is managed by Ceph, so I can't easily access the files from the node's filesystem.

How to run a script in a pod once, manually, using helm

I'm looking for the correct way to run a one-time maintenance script on my Kubernetes cluster.
I've got my deployment configured via Helm, so everything is bundled in my chart and works extremely well from an automation point of view.
Problem is running a script just once. I know Helm has hooks, but I don't think those can be configured to run manually (only pre/post upgrade/install etc.). This is compared to running kubectl apply -f my-maintenance-script.yaml, which I can do just once and be done with.
Is there a best-practice way of doing this? I want to be able to use Helm since I can feed all my config/template values into the Job.
You can use a Kubernetes Job, and use helm test to run it.
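For reference, a minimal sketch of what a test hook can look like (the answer suggests a Job; the Pod form below is the shape used in the Helm docs, and the names and image are placeholders). It only runs when you invoke helm test on the release, not during install or upgrade:
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-maintenance"
  annotations:
    "helm.sh/hook": test             # run only by `helm test <release-name>`
spec:
  restartPolicy: Never
  containers:
    - name: maintenance
      image: busybox                 # placeholder image
      command: ["sh", "-c", "echo running one-off maintenance"]
Because it is a chart template, it can use the same values and helpers as the rest of the chart.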

Kubectl drain node failed: "Forbidden: node updates may only change labels, taints, or capacity"

When attempting to drain a node on an AKS K8s cluster using:
kubectl drain ${node_name} --ignore-daemonsets
I get the following error:
"The Node \"aks-agentpool-xxxxx-0\" is invalid: []: Forbidden: node updates may only change labels, taints, or capacity (or configSource, if the DynamicKubeletConfig feature gate is enabled)"
Is there something extra that needs to be done on AKS nodes to allow draining?
(Context: This is part of an automation script I'm writing to drain a Kubernetes node for maintenance operations without downtime, so the draining is definitely a prerequisite here.)
An additional troubleshooting note:
This command is being run via Ansible's "shell" module, but when the command is run directly in Bash, it works fine.
Further, the Ansible playbook is being run via a Jenkins pipeline. Debug statements seem to show:
the command is correctly formed and executed,
the kubectl context seems correct (so the kubeconfig is accessible),
pods can be listed (so the kubeconfig is active and correct).
This command is being run via Ansible's "shell" module, but when the command is run directly in Bash, it works fine.
Further, the Ansible playbook is being run via a Jenkins pipeline.
It's good that you added this information, because it totally changes the perspective from which we should look at the issue you're experiencing.
For debugging purposes, instead of running your command, try running:
kubectl auth can-i drain node --all-namespaces
both directly in a Bash shell and via Ansible's shell module.
It should at least tell you whether or not this is a permission issue.
Other commands that you may use for debugging in this case are:
ls -l .kube/config
cat .kube/config
whoami
The last one is to make sure that Ansible uses the same user. If you already know that it uses a different user, try to run the script as the same user you use when running it in a Bash shell.
Once you check this, we can continue the debugging process.
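For reference, drain itself is a client-side operation made up of several API calls, so if it does turn out to be a permission problem, the identity used from Jenkins/Ansible roughly needs rules along these lines (a sketch with an illustrative name, not tailored to AKS):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-drainer                 # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "patch"]  # patch is what cordoning the node needs
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]                # drain evicts pods via the eviction subresource
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["get", "list"]           # used by the --ignore-daemonsets check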

Deploy a scalable application on Kubernetes which requires each replica Pod to have different args

I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.
I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.
I've read about StatefulSets and searched the documentation, but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a Service (if I understood correctly) and also passed directly as an argument to the pod command in the args.
Is there a way to achieve this without using multiple deployments, one for each pod I need to create? That is the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many deployments and/or services as needed, but I would like to find a solution which can scale at runtime, if possible.
I don't think you can have a Deployment which creates Pods from different specs. You can't have that in Kubernetes, and Helm won't help here (since Helm is just a template manager over Kubernetes configurations).
What you can do is specify each Pod as a separate configuration (for a single Pod you don't necessarily need a Deployment) and let Helm manage it.
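As an illustration of that idea, a chart template can loop over a list of per-pod values and emit one Pod (or one single-replica Deployment) per entry. This is only a sketch; the value names and image are made up:
{{- range .Values.workers }}
---
apiVersion: v1
kind: Pod
metadata:
  name: spark-worker-{{ .name }}
spec:
  containers:
    - name: worker
      image: my-spark-image                                          # placeholder image
      args: ["--port", "{{ .port }}", "--name", "{{ .appName }}"]    # per-pod arguments
{{- end }}
Scaling then means editing the list in values.yaml and running helm upgrade, which matches the "can't scale at runtime" limitation the question is concerned about.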
Posting the solution I used since it could be useful for other people searching around.
In the end I found a great configuration to solve my problem. I used a StatefulSet to declare the deployment of the Spark application, and, associated with the StatefulSet, a headless Service which exposes each pod on a specific port.
A StatefulSet can declare a property spec.serviceName which can be set to the name of a headless Service to give each Pod a unique network name, something like <pod_name>.<service_name>.
Additionally, each Pod gets a unique and unchanging name, built from the application name and an ordinal starting from 0 for each replica Pod.
Using a start script in the Docker image and injecting the pod name from the metadata into each Pod's environment, I was able to use different configurations for each pod: even within the same deployment, each pod has its own unique metadata name, and I can use the StatefulSet's Service to obtain what I needed.
This way, the StatefulSet is scalable at run time and works as expected.
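A sketch of what that looks like (all names, ports and the image are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: spark-app
spec:
  clusterIP: None                    # headless service: gives each pod a stable DNS name
  selector:
    app: spark-app
  ports:
    - port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-app
spec:
  serviceName: spark-app             # ties pod DNS names to the headless service
  replicas: 3
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
        - name: driver
          image: my-spark-image      # placeholder image
          env:
            - name: POD_NAME         # unique per replica: spark-app-0, spark-app-1, ...
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          # the container start script can derive the port and Spark app name from POD_NAME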
Hey, I am not sure if this will exactly match your scenario, but I think this is what you can try: use a sidecar container to run the replica instances. A sidecar is a container which runs alongside the main container, shares the same namespaces, and can share volumes with the other containers.
Now, to pass different arguments to each container or sidecar, you will have to tweak the Dockerfile, or rather tweak the way your container starts.
Create a start.sh script which accepts the arguments and starts the container with them. The trick here is to take the arguments from environment variables, allowing you to configure them later from ConfigMaps or the pod env.
So here is an example of a PHP/Laravel application running the same code and starting with different arguments. The start.sh file looks like this:
#!/bin/sh
if [ "${CONTAINER_ROLE}" = "queue" ]; then
    echo "Running the queue..."
    php artisan queue:work --queue=${QUEUENAME}
    echo "Queue Started"
else
    echo "Running Iceberg."
    exec apache2-foreground
fi
So a sample Dockerfile looks like this:
FROM php:7.1.24-apache
COPY . /srv/app
...
...
RUN chown -R www-data:www-data /srv/app \
&& a2enmod remoteip && a2enmod rewrite
WORKDIR /srv/app
RUN chmod +x .docker/start.sh
CMD [ "sh",".docker/start.sh"]
Let me know how it goes.