Start kubernetes container with specific command

Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command.
Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables.
Is it possible to specify the command a kubernetes pod runs within the Docker image at startup?

I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later.
There is a hint at what I wanted inside the Cassandra example: the command line below the image:
id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
labels:
  name: cassandra
Despite finding the solution, it would be nice if there was somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
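For reference, the full set of fields for each object can also be browsed from the command line with kubectl explain (a quick sketch; field names on current clusters differ from the legacy v1beta1 manifest above):
kubectl explain pod.spec.containers
kubectl explain service.spec
kubectl explain replicationcontroller.spec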

For those looking to use a command with parameters, you need to provide an array.
For example:
command: [ "/bin/bash", "-c", "mycommand" ]
or, equivalently:
command:
  - "/bin/bash"
  - "-c"
  - "mycommand"

To answer Derek Mahar's question in the comments above:
What is the purpose of args if one could specify all arguments using command?
Dockerfiles can have an ENTRYPOINT only, a CMD only, or both of them together.
If used together, whatever is in CMD is passed to the command in ENTRYPOINT as arguments, i.e.
ENTRYPOINT ["print"]
CMD ["hello", "world"]
So in Kubernetes, when you specify a command, e.g.
command: ["print"]
it will override the value of ENTRYPOINT in the container's Dockerfile.
If you only specify args, then those arguments will be passed to whatever command is in the container's ENTRYPOINT.
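As a small hedged sketch of that difference (the image name is hypothetical, assuming its Dockerfile contains the ENTRYPOINT/CMD pair above):
containers:
  - name: print-demo
    image: my-print-image   # hypothetical image with ENTRYPOINT ["print"] and CMD ["hello", "world"]
    # command is omitted, so the image's ENTRYPOINT ("print") is kept
    args: ["hello", "kubernetes"]   # replaces the image's CMD, so the container runs: print hello kubernetes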

To specify the command a Kubernetes pod runs within the Docker image at startup, include the command and args fields in the pod's YAML file. For example:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demo-command
spec:
  containers:
    - name: command-demo-container
      image: ubuntu
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hello; sleep 10;done"]
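A quick way to try it out (a sketch, assuming the manifest above is saved as command-demo.yaml):
kubectl apply -f command-demo.yaml
kubectl logs command-demo   # should print "hello" every 10 seconds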

In addition to the accepted answer, you can use variables whose values come from secrets in the commands, as follows:
command: ["/some_command","-instances=$(<VARIABLE_NAME>)"]
env:
  - name: <VARIABLE_NAME>
    valueFrom:
      secretKeyRef:
        name: <secret_name>
        key: <secret_key>
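For completeness, a sketch of creating such a secret with kubectl (the placeholder names are the same ones used above; the value is illustrative):
kubectl create secret generic <secret_name> --from-literal=<secret_key>=<value>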


Ensure pod deletion when one of the containers terminates [duplicate]

I have a Kubernetes JOB that does database migrations on a CloudSQL database.
One way to access the CloudSQL database from GKE is to use the CloudSQL-proxy container and then connect via localhost. Great - that's working so far. But because I'm doing this inside a K8s JOB the job is not marked as successfully finished because the proxy keeps on running.
$ kubectl get po
NAME                   READY   STATUS      RESTARTS   AGE
db-migrations-c1a547   1/2     Completed   0          1m
Even though the output says 'Completed', one of the initially two containers is still running: the proxy.
How can I make the proxy exit on completing the migrations inside container 1?
The best way I have found is to share the process namespace between containers and use the SYS_PTRACE securityContext capability to allow you to kill the sidecar.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-db-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      shareProcessNamespace: true
      containers:
        - name: my-db-job-migrations
          command: ["/bin/sh", "-c"]
          args:
            - |
              <your migration commands>;
              sql_proxy_pid=$(pgrep cloud_sql_proxy) && kill -INT $sql_proxy_pid;
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - "/cloud_sql_proxy"
          args:
            - "-instances=$(DB_CONNECTION_NAME)=tcp:5432"
One possible solution would be a separate cloudsql-proxy deployment with a matching service. You would then only need your migration container inside the job that connects to your proxy service.
This comes with some downsides:
higher network latency, no pod local mysql communication
possible security issue if you provide the sql port to your whole kubernetes cluster
If you want to open cloudsql-proxy to the whole cluster you have to replace tcp:3306 with tcp:0.0.0.0:3306 in the -instance parameter on the cloudsql-proxy.
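A minimal sketch of what such a separate deployment and service could look like (names, port, and the instance connection name are illustrative, not from the original answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy", "-instances=<project>:<region>:<instance>=tcp:0.0.0.0:3306"]
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
    - port: 3306
      targetPort: 3306
The migration container in the Job would then connect to cloudsql-proxy:3306 instead of localhost.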
There are 3 ways of doing this.
1- Use private IP to connect your K8s job to Cloud SQL, as described by @newoxo in one of the answers. To do that, your cluster needs to be a VPC-native cluster. Mine wasn't, and I was not willing to move all my stuff to a new cluster. So I wasn't able to do this.
2- Put the Cloud SQL Proxy container in a separate deployment with a service, as described by @Christian Kohler. This looks like a good approach, but it is not recommended by Google Cloud Support.
I was about to head in this direction (solution #2) but I decided to try something else.
And here is the solution that worked for me:
3- You can communicate between different containers in the same Pod/Job using the file system. The idea is to tell the Cloud SQL Proxy container when the main job is done, and then kill the cloud sql proxy. Here is how to do it:
In the yaml file (my-job.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: my-job-pod
  labels:
    app: my-job-app
spec:
  restartPolicy: OnFailure
  containers:
    - name: my-job-app-container
      image: my-job-image:0.1
      command: ["/bin/bash", "-c"]
      args:
        - |
          trap "touch /lifecycle/main-terminated" EXIT
          { your job commands here }
      volumeMounts:
        - name: lifecycle
          mountPath: /lifecycle
    - name: cloudsql-proxy-container
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/bin/sh", "-c"]
      args:
        - |
          /cloud_sql_proxy -instances={ your instance name }=tcp:3306 -credential_file=/secrets/cloudsql/credentials.json &
          PID=$!
          while true
          do
            if [[ -f "/lifecycle/main-terminated" ]]
            then
              kill $PID
              exit 0
            fi
            sleep 1
          done
      securityContext:
        runAsUser: 2 # non-root user
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: lifecycle
          mountPath: /lifecycle
  volumes:
    - name: cloudsql-instance-credentials
      secret:
        secretName: cloudsql-instance-credentials
    - name: lifecycle
      emptyDir:
Basically, when your main job is done, it will create a file in /lifecycle that will be identified by the watcher added to the cloud-sql-proxy container, which will kill the proxy and terminate the container.
I hope it helps! Let me know if you have any questions.
Based on: https://stackoverflow.com/a/52156131/7747292
It doesn't look like Kubernetes can do this alone; you would need to manually kill the proxy once the migration exits. A similar question was asked here: Sidecar containers in Kubernetes Jobs?
Google Cloud SQL has recently launched private IP address connectivity for Cloud SQL. If the Cloud SQL instance and the Kubernetes cluster are in the same region, you can connect to Cloud SQL without using the Cloud SQL proxy.
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#private-ip
A possible solution would be to set concurrencyPolicy: Replace in the CronJob spec ... this will replace the currently running pod with the new instance whenever it needs to run again. But you have to make sure that subsequent cron runs are separated enough.
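A minimal sketch of where that field sits (schedule, names, and image are illustrative; on older clusters the apiVersion is batch/v1beta1):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-migrations
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Replace   # a new run replaces a still-running previous one
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: migrations
              image: my-migrations-image   # hypothetical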
Unfortunately the other answers weren't working for me, because CloudSQLProxy runs in a distroless environment where there is no shell.
I managed to get around this by bundling a CloudSQLProxy binary with my deployment and running a bash script to start up CloudSQLProxy followed by my app.
Dockerfile:
FROM golang:1.19.4
RUN apt update
COPY . /etc/mycode/
WORKDIR /etc/mycode
RUN chmod u+x ./scripts/run_migrations.sh
RUN chmod u+x ./bin/cloud_sql_proxy.linux-amd64
RUN go install
ENTRYPOINT ["./scripts/run_migrations.sh"]
Shell Script (run_migrations.sh):
#!/bin/sh
# This script is run from the parent directory
dbConnectionString=$1
cloudSQLProxyPort=$2
echo "Starting Cloud SQL Proxy"
./bin/cloud_sql_proxy.linux-amd64 -instances=${dbConnectionString}=tcp:5432 -enable_iam_login -structured_logs &
CHILD_PID=$!
echo "CloudSQLProxy PID: $CHILD_PID"
echo "Migrating DB..."
go run ./db/migrations/main.go
MAIN_EXIT_CODE=$?
kill $CHILD_PID;
echo "Migrations complete.";
exit $MAIN_EXIT_CODE
K8s (via Pulumi):
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const jobDBMigrations = new k8s.batch.v1.Job("job-db-migrations", {
  metadata: {
    namespace: namespaceName,
    labels: appLabels,
  },
  spec: {
    backoffLimit: 4,
    template: {
      spec: {
        containers: [
          {
            image: pulumi.interpolate`gcr.io/${gcpProject}/${migrationsId}:${migrationsVersion}`,
            name: "server-db-migration",
            args: [
              dbConnectionString,
            ],
          },
        ],
        restartPolicy: "Never",
        serviceAccount: k8sSAMigration.metadata.name,
      },
    },
  },
},
{
  provider: clusterProvider,
});

Handling cronjobs in a Pod with multiple containers

I have a requirement in which I need to create a cronjob in Kubernetes, but the pod has multiple containers (with a single container it works fine).
Is it possible?
The requirement is something like this:
1. First container: Run the shell script to do a job.
2. Second container: run fluentbit conf to parse the log and send it.
Previously I had a deployment in place, and that was working fine, but since that deployment was used just for 10-minute jobs I thought to make it a cron job.
Any help is really appreciated.
Also, about the cronjob, I am not sure if a pod can support multiple containers to do the same.
Thank you,
Sunny
Yes, you can create a cronjob with multiple containers. CronJob is an abstraction on top of a pod, so in the pod spec you can have multiple containers just like in a normal pod. As an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
            - name: app
              image: alpine
              command:
                - echo
                - Hello World!
          restartPolicy: OnFailure
I need to agree with the answer provided by @Arghya Sadhu. It shows how you can run a multi-container Pod with a CronJob. Before the answer I would like to give more attention to the comment provided by @Chris Stryczynski:
It's not clear whether the containers are run in parallel or sequentially
It is not entirely clear if the workload that you are trying to run:
The requirement is something like this:
First container: Run the shell script to do a job.
Second container: run fluentbit conf to parse the log and send it.
could be used in parallel (both running at the same time) or require sequential approach (after X completed successfully, run Y).
If the workload can be run in parallel, the answer provided by @Arghya Sadhu is correct; however, if one workload depends on another, I'd reckon you should be using initContainers instead of multi-container Pods.
An example of a CronJob that implements an initContainer could be the following:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: ubuntu
              image: ubuntu
              command: [/bin/bash]
              args: ["-c", "cat /data/hello_there.txt"]
              volumeMounts:
                - name: data-dir
                  mountPath: /data
          initContainers:
            - name: echo
              image: busybox
              command: ["/bin/sh"]
              args: ["-c", "echo 'General Kenobi!' > /data/hello_there.txt"]
              volumeMounts:
                - name: data-dir
                  mountPath: "/data"
          volumes:
            - name: data-dir
              emptyDir: {}
This CronJob will write specific text to a file with an initContainer, and then a "main" container will display the result. It's worth mentioning that the main container will not start if the initContainer does not succeed with its operations.
$ kubectl logs hello-1234567890-abcde
General Kenobi!
Additional resources:
Linchpiner.github.io: K8S multi container pods
What about a sidecar container for logging as the second container, which keeps running and never exits? Even though the job might run, the state of the job is still failed.

How to run binary using kubernetes config-map

I have used config maps with files, but I am experimenting with portable services like supervisord and other internal tools.
We have a golang binary that can be run in any image. What I am trying to do is run this binary using a configmap.
Example:
We have an internal tool written in Go (size is less than 7MB) that could be stored in a config map, and we want to mount that config map inside a kubernetes pod and run it inside the pod.
Question: does anyone do this? Is it a good approach? What is the best practice?
I don't believe you can put 7MB of content in a ConfigMap. See here for example. What you're trying to do sounds like a very unusual practice. The standard practice to run binaries in Pods in Kubernetes is to build a container image that includes the binary and configure the image or the Pod to run that binary.
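A minimal sketch of that standard practice for a Go binary (paths and names are illustrative):
# Build stage: compile the Go tool
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/mytool ./cmd/mytool

# Runtime stage: small image that contains only the binary
FROM alpine:3.19
COPY --from=build /out/mytool /usr/local/bin/mytool
ENTRYPOINT ["/usr/local/bin/mytool"]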
I too faced a similar issue while storing an elastic.jks keystore binary file in a k8s pod.
AFAIK there are two options:
Make use of a configmap to store binary data (see the mount sketch after this answer). Check this out.
OR
Store your binary file remotely, for example in an s3 bucket, and pull that binary before running the actual pod using the initContainers concept:
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
    - name: myapp-container
      image: alpine:3.1
      command: ['sh', '-c', 'if [ -f /jks/elastic.jks ]; then sleep 99999; fi']
      volumeMounts:
        - name: jksdata
          mountPath: /jks
  initContainers:
    - name: init-container
      image: atlassian/pipelines-awscli
      command: ["/bin/sh","-c"]
      args: ['aws s3 sync s3://my-artifacts/$CLUSTER /jks/']
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: jksdata
          mountPath: /jks
      env:
        - name: CLUSTER
          value: dev-elastic
  volumes:
    - name: jksdata
      emptyDir: {}
  restartPolicy: Always
As @amit-kumar-gupta mentioned, there is a configmap size constraint.
I recommend the second way.
Hope this helps.
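For completeness, if the binary did fit under the ConfigMap size limit (roughly 1 MiB), a sketch of the first option could look like this: create the ConfigMap with kubectl create configmap mytool-bin --from-file=mytool=./mytool (binary content is stored under binaryData), then mount it as an executable file (names are illustrative):
spec:
  containers:
    - name: runner
      image: alpine:3.19
      command: ["/opt/tools/mytool"]
      volumeMounts:
        - name: tool
          mountPath: /opt/tools
  volumes:
    - name: tool
      configMap:
        name: mytool-bin
        defaultMode: 0755   # make the mounted file executable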

How to set environment variable in container from Kubernetes?

I want to set an environment variable (I'll just name it ENV_VAR_VALUE) to a container during deployment through Kubernetes.
I have the following in the pod yaml configuration:
...
...
spec:
  containers:
    - name: appname-service
      image: path/to/registry/image-name
      ports:
        - containerPort: 1234
      env:
        - name: "ENV_VAR_VALUE"
          value: "some.important.value"
...
...
The container needs to use the ENV_VAR_VALUE's value.
But in the container's application logs, its value always comes out empty.
So, I tried checking its value from inside the container:
$ kubectl exec -it appname-service bash
root@appname-service:/# echo $ENV_VAR_VALUE
some.important.value
root@appname-service:/#
So, the value was successfully set.
I imagine it's because the environment variables defined from Kubernetes are set after the container is already initialized.
So, I tried overriding the container's CMD from the pod yaml configuration:
...
...
spec:
  containers:
    - name: appname-service
      image: path/to/registry/image-name
      ports:
        - containerPort: 1234
      env:
        - name: "ENV_VAR_VALUE"
          value: "some.important.value"
      command: ["/bin/bash"]
      args: ["-c", "application-command"]
...
...
Even still, the value of ENV_VAR_VALUE is still empty during the execution of the command.
Thankfully, the application has a restart function
-- because when I restart the app, ENV_VAR_VALUE gets used successfully.
-- I can at least do some other tests in the meantime.
So, the question is...
How should I configure this in Kubernetes so it isn't a tad too late in setting the environment variables?
As requested, here is the Dockerfile.
I apologize for the abstraction...
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
COPY application-script.sh application-script.sh
RUN ./application-script.sh
# ENV_VAR_VALUE is set in this file which is populated when application-command is executed
COPY app-config.conf /etc/app/app-config.conf
CMD ["/bin/bash", "-c", "application-command"]
You can also try running two commands in the Kubernetes POD spec:
(read in env vars): "source /env/required_envs.env" (would come via secret mount in volume)
(main command): "application-command"
Like this:
containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
      - containerPort: 1234
    command: ["/bin/sh", "-c"]
    args:
      - source /env/db_cred.env;
        application-command;
Why don't you move the
RUN ./application-script.sh
below
COPY app-config.conf /etc/app/app-config.conf
Looks like the app is running before the env conf is available for it.
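A sketch of the Dockerfile from the question with that reordering applied:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
# Copy the config first so it is already in place when the script runs
COPY app-config.conf /etc/app/app-config.conf
COPY application-script.sh application-script.sh
RUN ./application-script.sh
CMD ["/bin/bash", "-c", "application-command"]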

Replication Controller replica ID in an environment variable?

I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer if I didn't have to add the special case into the script running inside the container, due to compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
        - env:
            - name: INSTANCE_ID
              value: $(replicaID)
I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, but it could be applied to your case also. It is all about the content of the InitContainer spec.
You can use an InitContainer to get the hash from the container's hostname and put it in a file on a shared volume that is mounted into the container.
In this example the InitContainer puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs:
Create the init.yaml file with the content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
    - name: init-test
      image: ubuntu
      args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
      volumeMounts:
        - name: config-data
          mountPath: /data
  initContainers:
    - name: init-init
      image: busybox
      command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
      volumeMounts:
        - name: config-data
          mountPath: /data
  volumes:
    - name: config-data
      emptyDir: {}
Create the pod using following command:
kubectl create -f init.yaml
Check if Pod initialization is done and is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test