Create Symlink in Kubernetes deployment

I want to create a symlink using a kubernetes deployment yaml. Is this possible?
Thanks

Not directly, but you could set your container's command to something like [/bin/sh, -c, "ln -s whatever whatever && exec originalcommand"]. Kubernetes isn't involved per se, but it would do the job. Normally, though, creating the symlink should be part of your image build process, not a deployment-time thing.
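For illustration, a minimal sketch of what that could look like in a Deployment spec; the image name, symlink paths, and original command are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # placeholder image
          command: ["/bin/sh", "-c"]
          # Create the symlink first, then exec the original entrypoint so it
          # stays PID 1 and receives signals correctly.
          args: ["ln -s /data/source /data/link && exec /usr/local/bin/originalcommand"]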

Related

Way to stop deploying to the wrong kubernetes environment

We have a set of kubernetes yamls which are managed by kustomize and deployed to different clusters. Each cluster is slightly different, so every environment has its own subdirectory (environ/<envname>) containing some special kustomization overrides.
We manually deploy new versions to the different environments with kubectl apply -k environ/env. But sometimes we do something stupid like running kubectl apply -k environ/env1 against the cluster env2. So is there some method to stop a kubectl apply against the wrong environment?
This is a community wiki answer. Feel free to expand it.
If you are aware that you made a mistake and want to cancel the command right away, then there are some options for you:
$ kill -9 $! will kill the most recently backgrounded process ($! expands to its process ID).
Suspend the current process by pressing Ctrl+z and then kill it using kill -9 %% or kill -9 %+. More details regarding this approach can be found here.
EDIT:
Including the solution proposed by VASャ from the comments:
I'd use shell scripts and different configs for each cluster, like this: a deploy-cluster1.sh containing kubectl --kubeconfig .kube/cluster1 apply -k environ/cluster1, or even shorter: deploy.sh env1, where deploy.sh contains kubectl --kubeconfig .kube/$1 apply -k environ/$1.
More details regarding that approach can be found here.
Recently I found another solution: direnv, which changes environment variables automatically when you switch into a different directory. This can be used to force the right KUBECONFIG per environment, e.g. with an .envrc in each environment's directory containing a line like export KUBECONFIG=$HOME/.kube/env1. Follow Mastering the KUBECONFIG file to get more details.

Helm chart: copy shell script from local machine to remote pod, change permission and execute

Is there a way I can copy a shell script from my local machine to a pod using charts and helm, change the script's permissions, and execute the script inside the pod?
No, Helm cannot do this. In effect, the only Kubernetes commands it can run are kubectl apply and kubectl delete, though it can apply templating before sending YAML off to the Kubernetes server. The sorts of imperative commands you're describing (kubectl cp and kubectl exec) aren't things Helm can do.
(Those imperative commands aren't generally good form in Kubernetes in any case. Generally you'd package your script up in a Docker image to be able to run it in the cluster, and you want your containers to be able to set themselves up as much as they can. Also remember that pods get deleted routinely, sometimes outside of your control, and anything you've manually copied into a pod will be lost when that happens.)

Deploy a scalable application on Kubernetes which requires each replica Pod to have different args

I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.
I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.
I've read about stateful sets and searched the documentation, but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a service (if I understood correctly) and also passed directly as an argument to the pod command in the args.
Is there a way to obtain this without using multiple deployments, one for each pod I need to create? That is the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many deployments and / or services as needed, but I would like to find a solution which can scale at runtime, if possible.
I don't think you can have a Deployment which creates Pods from different specs. You can't have it in Kubernetes, and Helm won't help here (since Helm is just a template manager over Kubernetes configurations).
What you can do is specify each Pod as a separate configuration (for a single Pod you don't necessarily need a Deployment) and let Helm manage them.
Posting the solution I used since it could be useful for other people searching around.
In the end I found a configuration that solved my problem: a StatefulSet to declare the deployment of the Spark application, associated with a headless Service which exposes each pod on a specific port.
A StatefulSet can declare a property spec.serviceName, set to the name of a headless service, to create a unique network name for each Pod, something like <pod_name>.<service_name>.
Additionally, each Pod gets a unique and stable name, built from the application name plus an ordinal starting from 0 for each replica.
Using a start script in the docker image and injecting the pod name from the metadata into each Pod's environment, I was able to use a different configuration for each pod: even within the same deployment, each pod has its own unique metadata name, and I can use the StatefulSet's service to obtain what I needed.
This way, the StatefulSet is scalable at runtime and works as expected.
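To illustrate, here is a minimal sketch of that setup; the names, port, and image are hypothetical. The headless Service (clusterIP: None) named in spec.serviceName gives each Pod a stable DNS name, and the downward API injects each Pod's unique name into its environment:

apiVersion: v1
kind: Service
metadata:
  name: spark-driver
spec:
  clusterIP: None   # headless Service, required for per-Pod DNS names
  selector:
    app: spark-driver
  ports:
    - port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-driver
spec:
  serviceName: spark-driver   # must match the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: spark-driver
  template:
    metadata:
      labels:
        app: spark-driver
    spec:
      containers:
        - name: driver
          image: my-spark-app:latest   # hypothetical image
          env:
            # Downward API: each replica sees its own stable name
            # (spark-driver-0, spark-driver-1, ...), from which the start
            # script can derive a unique port and spark app name.
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name

Each replica is then reachable as <pod_name>.<service_name>, e.g. spark-driver-0.spark-driver, and scaling the StatefulSet simply adds spark-driver-3, spark-driver-4, and so on.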
Hey, I am not sure if this will exactly match your scenario, but I think it is something you can try: use sidecar containers to run the replica instances. A sidecar is a container which runs alongside the main container, shares the same namespaces, and can share volumes with the other containers.
Now, to pass different arguments to each container or sidecar, you will have to tweak the Dockerfile, or rather the way your container starts.
Create a start.sh script which accepts arguments and starts the container with them; the trick is to take the arguments from environment variables, which lets you configure them later from ConfigMaps or the pod env.
Here is an example of a php/laravel application running the same code and starting with different arguments. The start.sh file looks like this:
#!/bin/sh
# Choose what to run based on the CONTAINER_ROLE environment variable.
if [ "${CONTAINER_ROLE}" = "queue" ]; then
    echo "Running the queue..."
    php artisan queue:work --queue="${QUEUENAME}"
    echo "Queue Started"
else
    echo "Running Iceberg."
    exec apache2-foreground
fi
A sample Dockerfile looks like this:
FROM php:7.1.24-apache
COPY . /srv/app
...
...
RUN chown -R www-data:www-data /srv/app \
 && a2enmod remoteip && a2enmod rewrite
WORKDIR /srv/app
RUN chmod +x .docker/start.sh
CMD ["sh", ".docker/start.sh"]
Let me know how it goes.

Append/Extend LD_LIBRARY_PATH using Kubernetes Source Code

When a pod is being scheduled, I dynamically (and transparently) mount a shared-libraries folder into the client containers through Kubernetes DevicePlugins. Now, in the container, I want to append these dynamically mounted shared libraries to the LD_LIBRARY_PATH environment variable.
Inside the container: this can be achieved by running the following in the shell:
"export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/some/new/directory"
From the Host: I can add the export command to the pod.yaml file under pod.spec.command and args.
But I wanted to do it transparently, without the client/admin specifying it in the yaml file, using Kubernetes DevicePlugins or extended schedulers.
I am looking for a method/hack by which I can append to/extend LD_LIBRARY_PATH inside the container using only the Kubernetes source code.
Thanks.
You can just bake it into your Dockerfile and create an image that you use in Kubernetes. No need to hack the Kubernetes source code.
In your Dockerfile, add a line like:
ENV LD_LIBRARY_PATH /extra/path:$LD_LIBRARY_PATH
Then:
docker build -t <your-image-tag> .
docker push <your-image-tag>
Then, update your pod or deployment definition and deploy to Kubernetes.
Hope it helps.
If I understand your issue, all you need is to transparently add LD_LIBRARY_PATH to the pod as it is scheduled. Maybe you can try a MutatingAdmissionWebhook, which lets you send a patch to Kubernetes to modify the manifest. There's good documentation from Banzai Cloud; I have not tried it myself.
https://banzaicloud.com/blog/k8s-admission-webhooks/
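As a rough sketch of that approach (the webhook service name, namespace, and path are hypothetical), you register a configuration like the one below; the webhook server behind it then returns a JSON patch that appends LD_LIBRARY_PATH to each container's env:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: ld-library-path-injector
webhooks:
  - name: ld-library-path-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: ld-injector        # hypothetical webhook Service
        namespace: kube-system
        path: /mutate
      caBundle: "<base64-encoded CA bundle>"   # placeholder
    rules:
      # Intercept pod creation so the env var can be patched in
      # before the pod is scheduled.
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]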

kubernetes: Call a command in another container in the same pod

Is there any approach by which one container can call a command in another container? The containers are in the same pod.
I need many command-line tools which are shipped as images as well as in packages. But I don't want to install all of them into one container because of some concerns.
This is very possible as long as you have k8s v1.17+. You must enable shareProcessNamespace: true, and then all the container processes are visible to the other containers in the same pod.
Here are the docs, have a look.
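A minimal sketch (the images and container names are placeholders): with the flag set at the pod level, each container can see the other containers' processes under /proc and signal them, subject to the usual permission rules:

apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true   # one PID namespace for the whole pod
  containers:
    - name: app
      image: nginx:stable
    - name: tools
      image: busybox:stable
      command: ["sh", "-c", "tail -f /dev/null"]
      # From this container you can now run e.g. `pgrep nginx`, signal it,
      # or reach the other container's filesystem via /proc/<pid>/root/.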
In general, no, you can't do this in Kubernetes (or in plain Docker). You should either move the two interconnected things into the same container, or wrap some sort of network service around the thing you're trying to call (and then probably put it in a separate pod with a separate service in front of it).
There might be something you could do if you set up a service account, installed a Kubernetes API sidecar container, and used the Kubernetes API to do the equivalent of kubectl exec, but I'd consider this a solution of last resort.
Containers in a pod are isolated from each other, except that they can share volumes and the network namespace. So you would not normally be able to execute a command in one container from another; however, you could expose the commands in a container through an API.
We are currently running EKS v1.20 and we were able to achieve this using the shareProcessNamespace: true that mr haven mentioned. In our particular case, we needed a debian 10 php container to execute a SAS binary command with arguments; SAS is installed and running in a centos 7 container in the same pod. Using helm, we enabled shareProcessNamespace, and in the php container's command and args fields we built symlinks to that binary with bash -c once the pod came online. We grab the pid of the shared container with pgrep: since we know the centos container's entrypoint is tail -f /dev/null, we simply look for that process with $(pgrep tail).
- image: some_php_container
  command: ["bash", "-c"]
  args: ["SAS_PROC_PID=$(pgrep tail) && \
         ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8 /usr/bin/sas && \
         ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS /usr/local/SAS && \
         . /opt/script_runner.sh"]
Now the php container is able to execute the sas command with arguments and process data files using the SAS software running on the centos container.
One issue we quickly found: if the SAS container happened to die in the pod, its pid would change and the symlinks on the php container would be broken. So we put in a liveness probe that frequently checks that the path to the binary under the current pid still exists; if the probe fails, it restarts the php container, rebuilding the symlinks with the right pid.
livenessProbe:
  exec:
    command:
      - bash
      - -c
      # The whole check must be a single shell string passed to -c.
      - SAS_PROC_PID=$(pgrep tail) && test -f /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 1
Hopefully the above info can help someone else.
You can do this without shareProcessNamespace by using a shared volume and some named pipes. The pipe handles all the I/O for you and is trivially simple and extremely fast.
For a complete description and code, see this solution I created. Contains examples.
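A minimal sketch of that idea, with hypothetical image names: both containers mount the same emptyDir; one loops on a FIFO executing whatever lines arrive, the other writes commands into it and reads the result back from a shared file:

apiVersion: v1
kind: Pod
metadata:
  name: pipe-demo
spec:
  volumes:
    - name: shared
      emptyDir: {}
  containers:
    - name: worker                  # the container that owns the CLI tools
      image: tools-image:latest     # hypothetical
      volumeMounts:
        - name: shared
          mountPath: /shared
      command: ["sh", "-c"]
      # Create the FIFO, then run each line read from it, capturing output.
      args:
        - |
          mkfifo /shared/cmd.pipe
          while true; do
            sh -c "$(cat /shared/cmd.pipe)" > /shared/cmd.out 2>&1
          done
    - name: app
      image: app-image:latest       # hypothetical
      volumeMounts:
        - name: shared
          mountPath: /shared
      command: ["sh", "-c"]
      # Example caller: wait for the FIFO, send a command, read the result.
      args:
        - |
          sleep 5
          echo "ls /" > /shared/cmd.pipe
          sleep 1
          cat /shared/cmd.out
          tail -f /dev/null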