Checking result of command in helm chart (helm-hooks)

I am trying to execute a pre-install job using Helm charts. Can someone help me get the result of the command (the command parameter in the YAML file) that I put in the file below:
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
  annotations:
    "helm.sh/hook": "pre-install"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'touch somefile.txt && echo $PWD && sleep 15']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
I want to know where somefile.txt is created and where the echo output is printed. The reason I know the command runs is the "sleep 15": I see a 15-second difference between the start and end times of the pod.

Any file you create in a container environment is created inside the container filesystem. Unless you've mounted some storage into the container, the file will be lost as soon as the container exits.
Anything a Kubernetes process writes to its stdout will be captured by the Kubernetes log system. You can retrieve it using kubectl logs pre-install-job-... -c pre-install.
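If you need somefile.txt to survive the Job, you have to mount storage into the container and write into it. A minimal sketch, assuming a pre-existing PersistentVolumeClaim named pre-install-data (a hypothetical name):
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
  annotations:
    "helm.sh/hook": "pre-install"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        # write into the mounted volume instead of the container's own filesystem
        command: ['sh', '-c', 'touch /data/somefile.txt && echo $PWD && sleep 15']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pre-install-data   # hypothetical, must already exist
      restartPolicy: OnFailure
The echo output still goes to the container's stdout, so kubectl logs remains the way to read it.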

Related

When should I use commands or args in readinessProbes

I am working my way through killer.sh for the CKAD. I encountered a pod definition file that has a command field under the readiness probe, while the container itself executes a different command using args.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - args:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    readinessProbe:              # add
      exec:                      # add
        command:                 # add
        - sh                     # add
        - -c                     # add
        - cat /tmp/ready         # add
      initialDelaySeconds: 5     # add
      periodSeconds: 10          # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
If the readiness probe weren't needed and this pod were created imperatively, args wouldn't be used at all:
kubectl run pod6 --image=busybox:1.31.0 --dry-run=client --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml
The output YAML would look like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Why is command not used on both the readinessProbe and the container?
When do commands become args?
Is there a way to tell?
I've read through this document: https://kubernetes.io/docs/tasks/inject-data-application/_print/
but I still haven't had much luck understanding this situation and when to switch to args.
The reason you have both command and args in Kubernetes is that they give you the option to override the default ENTRYPOINT and CMD of the image you are trying to run.
In your specific case, the busybox image does not define a default ENTRYPOINT, so specifying the starting command in either command or args in the pod YAML is essentially the same.
To your question of when commands become args: they don't. When a container is started from your image, it simply executes command + args. If command is empty (in both the image and the YAML file), then only the args are executed.
The thread here may give you some more explanation
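As a rough sketch of how the two fields map onto the image's settings (the container name demo is just a placeholder), command replaces the image's ENTRYPOINT and args replaces the image's CMD:
spec:
  containers:
  - name: demo
    image: busybox:1.31.0
    # 'command' overrides the image ENTRYPOINT; 'args' overrides the image CMD.
    # busybox defines no ENTRYPOINT, so putting the whole command line in either
    # field ends up starting the same process: sh -c "touch /tmp/ready && sleep 1d"
    command: ["sh", "-c"]
    args: ["touch /tmp/ready && sleep 1d"]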

Handling cronjobs in a Pod with multiple containers

I have a requirement where I need to create a CronJob in Kubernetes, but the pod has multiple containers (with a single container it's working fine).
Is it possible?
The requirement is something like this:
1. First container: Run the shell script to do a job.
2. Second container: run fluentbit conf to parse the log and send it.
Previously I had a Deployment in place and that was working fine, but since it was only used for jobs of about 10 minutes I thought to make it a CronJob.
Any help is really appreciated.
Also, about the CronJob, I am not sure whether a pod can support multiple containers for the same purpose.
Thank you,
Sunny
Yes, you can create a CronJob with multiple containers. A CronJob is an abstraction on top of a pod, so in the pod spec you can have multiple containers just like in a normal pod. As an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          - name: app
            image: alpine
            command:
            - echo
            - Hello World!
          restartPolicy: OnFailure
I agree with the answer provided by @Arghya Sadhu: it shows how you can run a multi-container Pod with a CronJob. Before getting to my answer I would like to draw more attention to the comment provided by @Chris Stryczynski:
It's not clear whether the containers are run in parallel or sequentially
It is not entirely clear if the workload that you are trying to run:
The requirement is something like this:
First container: Run the shell script to do a job.
Second container: run fluentbit conf to parse the log and send it.
could run in parallel (both containers running at the same time) or requires a sequential approach (after X completes successfully, run Y).
If the workload can run in parallel, the answer provided by @Arghya Sadhu is correct; however, if one workload depends on another, I'd reckon you should use initContainers instead of a multi-container Pod.
An example of a CronJob that uses an initContainer could be the following:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: ubuntu
            image: ubuntu
            command: [/bin/bash]
            args: ["-c","cat /data/hello_there.txt"]
            volumeMounts:
            - name: data-dir
              mountPath: /data
          initContainers:
          - name: echo
            image: busybox
            command: ["/bin/sh"]
            args: ["-c", "echo 'General Kenobi!' > /data/hello_there.txt"]
            volumeMounts:
            - name: data-dir
              mountPath: "/data"
          volumes:
          - name: data-dir
            emptyDir: {}
This CronJob writes some specific text to a file in an initContainer, and then a "main" container displays the result. It's worth mentioning that the main container will not start if the initContainer does not complete its operations successfully.
$ kubectl logs hello-1234567890-abcde
General Kenobi!
Additional resources:
Linchpiner.github.io: K8S multi container pods
What about a sidecar container for logging as the second container, one that keeps running and never exits? Even if the job itself runs fine, the state of the Job would still end up failed.

Invoking a script file in helm k8s job template

I am trying to invoke a script inside a helm k8s job template. When I run helm with
helm install ./mychartname/ --generate-name
The job runs; however, it can't find the script file (run.sh). Is this possible with Helm?
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', '../run.sh']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
Here is my directory structure
├── mychartname
│ ├── templates
│ │ ├── test.job
│ │──run.sh
Below is a way to run a script inside a Helm job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job-v05
spec:
  template:
    spec:
      containers:
      - name: pre-install-v05
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", {{ .Files.Get "scripts/run.sh" }}]
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
In general, Kubernetes only runs software that's packaged in Docker images. Kubernetes will never run things off of your local system. In your example, the cluster will create a new unmodified busybox container, then from that container's root directory, try to run sh -c ../run.sh; since that script isn't part of the stock busybox image, it won't run.
The best approach here is to build an image out of your script and push it to a Docker registry. This is the standard way to run any custom software in Kubernetes, so you probably already have a workflow to do it. (For a test setup in Minikube you can point your local Docker at the Minikube environment and build a local image, but this doesn't scale to things hosted in the cloud or otherwise running on a multi-host environment.)
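A rough sketch of that workflow, with placeholder names (example.com/pre-install is hypothetical) and assuming a small Dockerfile that copies run.sh into the image:
docker build -t example.com/pre-install:1.0 .
docker push example.com/pre-install:1.0
The Job's container would then use image: example.com/pre-install:1.0 and run the script from wherever the Dockerfile copied it, instead of using the stock busybox image.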
In principle you could upload the script in a config map in a separate Helm template file, mount it into your job spec, and run it (you may need to explicitly sh run.sh to get around file-permission issues). Depending on your environment this may work as well as actually having an image, but if you need to update and redeploy the Helm chart every time the script changes, it's "more normal" to do the same work by having your CI system build and upload a new image (and it'll be the same approach as for your application deployments).
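A sketch of that ConfigMap approach, assuming the script is moved to scripts/run.sh inside the chart and that the names pre-install-script and /scripts are free to choose (both hypothetical):
# templates/pre-install-script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pre-install-script
data:
  run.sh: |-
{{ .Files.Get "scripts/run.sh" | indent 4 }}
# templates/test.job (relevant parts only)
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        # run the mounted copy through sh to avoid file-permission issues
        command: ['sh', '/scripts/run.sh']
        volumeMounts:
        - name: script
          mountPath: /scripts
      volumes:
      - name: script
        configMap:
          name: pre-install-script
      restartPolicy: OnFailure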

Issue Deleting Temporary pods

I am trying to delete temporary pods and other artifacts using helm delete, and I want to run this delete on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
  namespace: mynamespace
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args: ["delete", "--purge", "$(helm ls -a -q temppods.*)"]
            env:
            - name: TILLER_NAMESPACE
              value: mynamespace-build
            - name: KUBECONFIG
              value: /kube/config
            volumeMounts:
            - mountPath: /kube
              name: kubeconfig
          restartPolicy: OnFailure
          volumes:
          - name: kubeconfig
            configMap:
              name: cronjob-kubeconfig
I ran
oc create -f ./mycron.yaml
This created the cronjob
Every fifth minute a pod gets created and the helm command that is part of the CronJob runs.
I am expecting the artifacts/pods whose names begin with temppods* to be deleted.
What I see in the logs of the pod is:
Error: invalid release name, must match regex ^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])+$ and the length must not longer than 53
The CronJob container spec is trying to delete a release named (literally):
$(helm ls -a -q temppods.*)
This release doesn't exist, and it fails Helm's expected naming conventions.
Why
The alpine/helm:2.9.1 container image has an entrypoint of helm. This means any arguments are passed directly to the helm binary via exec. No shell expansion ($()) occurs because there is no shell running.
Fix
To do what you are expecting, you can use sh, which is available in Alpine-based images.
sh -uexc 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases'
In a Pod spec this translates to:
spec:
  containers:
  - name: cronbox
    command: ['sh']
    args:
    - '-uexc'
    - 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases;'
Helm
As a side note, helm is not the most reliable tool when clusters or releases get into vague states. Running multiple helm commands that interact with the same release at the same time usually spells disaster, and on the surface that seems likely here. Maybe it's worth asking whether there are other ways to achieve the process you are implementing.

Replication Controller replica ID in an environment variable?

I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer not to have to add that special case to the script running inside the container, for compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
      - env:
        - name: INSTANCE_ID
          value: $(replicaID)
I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, and it can be applied to your case as well. It is all about the content of the initContainer spec.
You can use an initContainer to get the hash from the container's hostname and put it in a file on a shared volume that is mounted into the container.
In this example the initContainer puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs:
Create the init.yaml file with the content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
  - name: init-test
    image: ubuntu
    args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: init-init
    image: busybox
    command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
Create the pod using the following command:
kubectl create -f init.yaml
Check that Pod initialization is done and the Pod is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test
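As a side note, if having the full pod name in the variable (rather than only the random suffix) is acceptable, the Downward API can inject it without an initContainer. A minimal sketch (pod and container names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: downward-test
spec:
  containers:
  - name: downward-test
    image: busybox
    command: ["sh", "-c", "echo $INSTANCE_ID && sleep 1000"]
    env:
    - name: INSTANCE_ID
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # injects the pod's own name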