My problem is the following:
I need to execute the envsubst command from inside a Pod; I'm using Kubernetes.
Currently I'm executing the command manually by accessing the Pod and running it there, but I would like to do it automatically from my configuration file, which is a .yml file.
I've found some references on the web and tried some examples, but the Pod never started correctly, failing with a CrashLoopBackOff error.
I want to execute the following command:
envsubst < /usr/share/nginx/html/env_token.js > /usr/share/nginx/html/env.js
Here's the content of my .yml file (not all of it, just the most relevant part):
spec:
  containers:
    - name: example-1
      image: imagename/docker_console:${deploy.version}
      env:
        - name: PIPPO_ID
          valueFrom:
            secretKeyRef:
              name: pippo-${deploy.env}-secret
              key: accessKey
        - name: PIPPO
          valueFrom:
            secretKeyRef:
              name: pippo-${deploy.env}-secret
              key: secretAccessKey
        - name: ENV
          value: ${deploy.env}
        - name: CREATION_TIMESTAMP
          value: ${deploy.creation_timestamp}
        - name: TEST
          value: ${consoleenv}
      command: ["/bin/sh"]
      args: ["envsubst", "/usr/share/nginx/html/assets/env_token.js /usr/share/nginx/html/assets/env.js"]
Should the final two rows, command and args, be written this way? I've already tried putting envsubst in command, but it didn't work. I've also tried using commas in args to separate each parameter; same error.
Do you have any suggestions that you know work for sure?
Thanks
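For reference, a pattern that typically works here is to run the substitution through sh -c and then exec the image's main process, so the container doesn't exit as soon as envsubst finishes (an immediately exiting main process is exactly what produces CrashLoopBackOff). A minimal sketch, assuming the image normally runs nginx:

command: ["/bin/sh", "-c"]
args:
  - envsubst < /usr/share/nginx/html/assets/env_token.js > /usr/share/nginx/html/assets/env.js && exec nginx -g 'daemon off;'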
Related
I have this Kubernetes script in an Argo Workflows template:
- name: rendition-composer
  inputs:
    parameters:
      - name: original_resolution
  script:
    image: node:9.1-alpine
    command: [node]
    source: |
      // some node.js script
      ...
      console.log($(SD_RENDITION));
    volumeMounts:
      - name: workdir
        mountPath: /mnt/vol
      - name: config
        mountPath: /config
        readOnly: true
    env:
      - name: SD_RENDITION
        valueFrom:
          configMapKeyRef:
            name: rendition-specification
            key: res480p
Here, at console.log($(SD_RENDITION));, I can't get the env value; it returns this error:
ReferenceError: $ is not defined
I already did all the setup for the ConfigMap following the official Kubernetes documentation.
Is there anything I'm missing?
process.env.SD_RENDITION
The above syntax solved my problem. It seems I was missing some essential concepts about JavaScript's process object.
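For anyone else hitting this, the fixed script block just reads the variable through Node's process.env; a minimal sketch of the source section above:

source: |
  // Env vars injected via the env: section are exposed on Node's process object
  console.log(process.env.SD_RENDITION);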
I have the following Argo Workflow using a Secret from Kubernetes:
args:
  - |
    export TEST_FILENAME="./test.txt"
    echo "$TEST_DATA" > $TEST_FILENAME
    chmod 400 $TEST_FILENAME
env:
  - name: TEST_DATA
    valueFrom:
      secretKeyRef:
        name: test_data
        key: testing
I need to redirect TEST_DATA to a file when I run the Argo Workflow, but the contents of TEST_DATA always show up in the Argo UI log. How can I redirect the data to the file without showing it in the log?
echo shouldn't be writing $TEST_DATA to logs the way your code is written. So I'm not sure what's going wrong.
However, I think there's an easier way to write a secret to a file. Add a volume to your Workflow spec, and a volume mount to the container section of the step spec.
containers:
  - name: some-pod
    image: some-image
    volumeMounts:
      - name: test-volume
        mountPath: "/some/path/"
        readOnly: true
volumes:
  - name: test-volume
    secret:
      secretName: test_data
      items:
        - key: testing
          path: test.txt
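With this in place, the step can read the file at /some/path/test.txt (the mountPath plus the item's path) directly; the Secret's contents never pass through echo or the environment, so they never appear in the Argo UI log. Since the mount is read-only, the chmod step is no longer needed either.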
I would like to update the image version for my running Kubernetes pod.
My current config is:
spec:
  containers:
    - name: ADMIN_USER
      valueFrom:
        secretKeyRef:
          key: jenkins-admin-user
          name: jenkins
      image: jenkins/jenkins:latest
I would like to update it to:
spec:
  containers:
    - name: ADMIN_USER
      valueFrom:
        secretKeyRef:
          key: jenkins-admin-user
          name: jenkins
      image: jenkins/jenkins:2.247
I have tried running an apply, as I understood from the documentation: kubectl apply -f jenkins.yaml --namespace=infrastructure, but nothing changed (nor was my pod restarted automatically).
Can someone advise how to do this?
You can use replace:
kubectl replace -f jenkins.yaml --namespace=infrastructure
Probably image: jenkins/jenkins:2.247 resolves to the same image as image: jenkins/jenkins:latest, and because of that no update occurred.
Tip: Try not to use the latest tag; always set a specific tag.
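If the Pod is managed by a Deployment, another common option is kubectl set image; the Deployment and container names below are hypothetical:

kubectl set image deployment/jenkins jenkins=jenkins/jenkins:2.247 --namespace=infrastructure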
I am trying to deploy the Cloud SQL proxy as a sidecar container like this:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=${CLOUDSQL_INSTANCE}=tcp:5432",
            "-credential_file=/secrets/cloudsql/google_application_credentials.json"]
  env:
    - name: CLOUDSQL_INSTANCE
      valueFrom:
        secretKeyRef:
          name: persistence-cloudsql-instance-creds
          key: instance_name
  volumeMounts:
    - name: my-secrets-volume
      mountPath: /secrets/cloudsql
      readOnly: true
But when I deploy this, I get the following error in the logs:
2019/06/20 13:42:38 couldn't connect to "${CLOUDSQL_INSTANCE}": googleapi: Error 400: Missing parameter: project., required
How can I use an environment variable in a command that runs inside a Kubernetes container?
If you want to reference environment variables in the command, you need to put them in parentheses, something like $(CLOUDSQL_INSTANCE).
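Applied to the snippet above, that would look like:

command: ["/cloud_sql_proxy",
          "-instances=$(CLOUDSQL_INSTANCE)=tcp:5432",
          "-credential_file=/secrets/cloudsql/google_application_credentials.json"]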
Using Fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't find anything that says how. It seems like you have to create the container specifically to launch with a certain command.
Having a general-purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables.
Is it possible to specify the command a Kubernetes pod runs within the Docker image at startup?
I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later.
There is a hint at what I wanted inside the Cassandra example: the command line below the image:
id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
labels:
  name: cassandra
Despite finding the solution, it would be nice if there was somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
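As an aside, kubectl explain now provides exactly this; for example:

kubectl explain pod.spec.containers          # documents every field of a container spec
kubectl explain pod.spec.containers.command  # documentation for the command field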
For those looking to use a command with parameters, you need to provide an array.
For example:
command: [ "bin/bash", "-c", "mycommand" ]
or, equivalently:
command:
  - "/bin/bash"
  - "-c"
  - "mycommand"
To answer Derek Mahar's question in the comments above:
What is the purpose of args if one could specify all arguments using command?
Dockerfiles can have an ENTRYPOINT only, a CMD only, or both together.
If both are used, whatever is in CMD is passed to the command in ENTRYPOINT as arguments, i.e.:
ENTRYPOINT ["print"]
CMD ["hello", "world"]
So in Kubernetes, when you specify a command, i.e.:
command: ["print"]
it will override the value of ENTRYPOINT in the container's Dockerfile.
If you only specify args, those arguments will be passed to whatever command is in the container's ENTRYPOINT.
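Putting that together with the hypothetical print image above, specifying only args leaves the ENTRYPOINT in place:

# No command: given, so the image's ENTRYPOINT ["print"] is kept
args: ["goodbye", "world"]   # replaces CMD; the container runs: print goodbye world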
To specify the command a Kubernetes pod runs within the Docker image at startup, we need to include the command and args fields in the Pod's YAML file. For example:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demo-command
spec:
  containers:
    - name: command-demo-container
      image: ubuntu
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hello; sleep 10; done"]
In addition to the accepted answer, you can use variables with values from Secrets in commands as follows:
command: ["/some_command","-instances=$(<VARIABLE_NAME>)"]
env:
- name: <VARIABLE_NAME>
valueFrom:
secretKeyRef:
name: <secret_name>
key: <secret_key>
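Note that the $(VARIABLE_NAME) reference is expanded by Kubernetes itself when it starts the container, so this works even though the command does not run through a shell.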