How to use environment variable in kubernetes container command? - kubernetes

I am trying to deploy the Cloud SQL proxy as a sidecar container like this:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=${CLOUDSQL_INSTANCE}=tcp:5432",
            "-credential_file=/secrets/cloudsql/google_application_credentials.json"]
  env:
    - name: CLOUDSQL_INSTANCE
      valueFrom:
        secretKeyRef:
          name: persistence-cloudsql-instance-creds
          key: instance_name
  volumeMounts:
    - name: my-secrets-volume
      mountPath: /secrets/cloudsql
      readOnly: true
But when I deploy this, I get the following error in the logs:
2019/06/20 13:42:38 couldn't connect to "${CLOUDSQL_INSTANCE}": googleapi: Error 400: Missing parameter: project., required
How can I use an environment variable in a command that runs inside a Kubernetes container?

If you want to reference environment variables in the command, you need to use the Kubernetes $(VAR) substitution syntax rather than shell-style ${VAR}, something like: $(CLOUDSQL_INSTANCE). Kubernetes does not run the command through a shell, so ${VAR} is passed to the binary literally.
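Applied to the manifest above, only the substitution syntax in the command changes:

```yaml
command: ["/cloud_sql_proxy",
          "-instances=$(CLOUDSQL_INSTANCE)=tcp:5432",
          "-credential_file=/secrets/cloudsql/google_application_credentials.json"]
```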

Related

Argo Workflow fails with no such directory error when using input parameters

I'm currently doing a PoC to validate usage of Argo Workflow. I created a workflow spec with the following template (this is just a small portion of the workflow yaml):
templates:
  - name: dummy-name
    inputs:
      parameters:
        - name: params
    container:
      name: container-name
      image: <image>
      volumeMounts:
        - name: vault-token
          mountPath: "/etc/secrets"
          readOnly: true
      imagePullPolicy: IfNotPresent
      command: ['workflow', 'f10', 'reports', 'expiry', '.', '--days-until-expiry', '30', '--vault-token-file-path', '/etc/secrets/token', '--environment', 'corporate', '--log-level', 'debug']
The above way of passing the commands works without any issues upon submitting the workflow. However, if I replace the command with {{inputs.parameters.params}} like this:
templates:
  - name: dummy-name
    inputs:
      parameters:
        - name: params
    container:
      name: container-name
      image: <image>
      volumeMounts:
        - name: vault-token
          mountPath: "/etc/secrets"
          readOnly: true
      imagePullPolicy: IfNotPresent
      command: ['workflow', '{{inputs.parameters.params}}']
it fails with the following error:
DEBU[2023-01-20T18:11:07.220Z] Log line
content="Error: failed to find name in PATH: exec: \"workflow f10 reports expiry . --days-until-expiry 30 --vault-token-file-path /etc/secrets/token --environment corporate --log-level debug\":
stat workflow f10 reports expiry . --days-until-expiry 30 --vault-token-file-path /etc/secrets/token --environment corporate --log-level debug: no such file or directory"
Am I missing something here?
FYI: The Dockerfile that builds the container has the following ENTRYPOINT set:
ENTRYPOINT ["workflow"]
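The error itself points at the cause: Kubernetes execs the command array directly, without a shell, so the templated parameter stays a single string and is looked up as one executable name. A common workaround (a sketch, not from the original post) is to wrap the call in a shell so the string gets word-split before execution:

```yaml
command: ['sh', '-c']
args: ['workflow {{inputs.parameters.params}}']
```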

How to use env variable on kubernetes script?

I have this kubernetes script on argo workflows template
- name: rendition-composer
  inputs:
    parameters:
      - name: original_resolution
  script:
    image: node:9.1-alpine
    command: [node]
    source: |
      // some node.js script
      ...
      console.log($(SD_RENDITION));
    volumeMounts:
      - name: workdir
        mountPath: /mnt/vol
      - name: config
        mountPath: /config
        readOnly: true
    env:
      - name: SD_RENDITION
        valueFrom:
          configMapKeyRef:
            name: rendition-specification
            key: res480p
In the line console.log($(SD_RENDITION)); I can't get the env value; it returns this error:
ReferenceError: $ is not defined
I already did all the setup for the ConfigMap on this kubernetes official documentation
Is there anything I miss?
process.env.SD_RENDITION
The above syntax solved my problem. It seems I missed some essential concepts about Node's process object.
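To spell out why this works: Kubernetes injects each env: entry into the container's process environment, and inside a Node script those values are fields on process.env, not template placeholders. A minimal sketch (getRendition is a hypothetical helper, with a made-up fallback value):

```javascript
// Kubernetes puts `env:` entries into the process environment;
// in Node they are read via process.env, not $(VAR) substitution.
function getRendition(env) {
  // Fall back to a default when the variable is not set.
  return env.SD_RENDITION || "res480p-default";
}

console.log(getRendition(process.env));
```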

Kubernetes - How to execute a command inside the .yml file

My problem is the following:
I need to execute the "envsubst" command from inside a pod in Kubernetes.
Currently I run the command manually after exec-ing into the pod, but I would like it to run automatically from my configuration file, which is a .yml file.
I've found some references on the web and tried some examples, but the pod never started correctly, failing with a CrashLoopBackOff error.
I would execute the following command:
envsubst < /usr/share/nginx/html/env_token.js > /usr/share/nginx/html/env.js
There's the content of my .yml file (not all, just the most relevant part)
spec:
  containers:
    - name: example 1
      image: imagename/docker_console:${deploy.version}
      env:
        - name: PIPPO_ID
          valueFrom:
            secretKeyRef:
              name: pippo-${deploy.env}-secret
              key: accessKey
        - name: PIPPO
          valueFrom:
            secretKeyRef:
              name: pippo-${deploy.env}-secret
              key: secretAccessKey
        - name: ENV
          value: ${deploy.env}
        - name: CREATION_TIMESTAMP
          value: ${deploy.creation_timestamp}
        - name: TEST
          value: ${consoleenv}
      command: ["/bin/sh"]
      args: ["envsubst", "/usr/share/nginx/html/assets/env_token.js /usr/share/nginx/html/assets/env.js"]
Should the final two lines, "command" and "args", be written this way? I've already tried putting "envsubst" in the command, but it didn't work. I've also tried separating each parameter with commas in the args line; same error.
Do you have any suggestions that you know work?
Thanks
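Not from the original post, but the usual pattern is to run the whole pipeline through sh -c, because the redirection operators need a shell, and to end with the image's normal foreground process (assumed here to be nginx) so the container keeps running; otherwise it exits immediately, which is exactly what produces a CrashLoopBackOff:

```yaml
command: ["/bin/sh", "-c"]
args: ["envsubst < /usr/share/nginx/html/assets/env_token.js > /usr/share/nginx/html/assets/env.js && nginx -g 'daemon off;'"]
```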

Argo Workflow args using echo to redirect to file without printing

I have the following Argo Workflow using a Secret from Kubernetes:
args:
  - |
    export TEST_FILENAME="./test.txt"
    echo "$TEST_DATA" > $TEST_FILENAME
    chmod 400 $TEST_FILENAME
env:
  - name: TEST_DATA
    valueFrom:
      secretKeyRef:
        name: test_data
        key: testing
I need to redirect TEST_DATA to a file when I run the Argo Workflow, but the data of TEST_DATA always shows in the argo-ui log. How can I redirect the data to the file without showing the data in the log?
echo shouldn't be writing $TEST_DATA to logs the way your code is written. So I'm not sure what's going wrong.
However, I think there's an easier way to write a secret to a file. Add a volume to your Workflow spec, and a volume mount to the container section of the step spec.
containers:
  - name: some-pod
    image: some-image
    volumeMounts:
      - name: test-volume
        mountPath: "/some/path/"
        readOnly: true
volumes:
  - name: test-volume
    secret:
      secretName: test_data
      items:
        - key: testing
          path: test.txt
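With the secret mounted this way, the step can copy the file directly instead of echoing the variable, so the secret value never appears on a command line or in the log (a sketch; the path matches the mountPath above):

```yaml
args:
  - |
    cp /some/path/test.txt ./test.txt
    chmod 400 ./test.txt
```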

How to pass environmental variables in envconsul config file?

I read in the envconsul documentation this:
For additional security, tokens may also be read from the environment
using the CONSUL_TOKEN or VAULT_TOKEN environment variables
respectively. It is highly recommended that you do not put your tokens
in plain-text in a configuration file.
So, I have this envconsul.hcl file:
# the settings to connect to vault server
# "http://10.0.2.2:8200" is the Vault's address on the host machine when using Minikube
vault {
  address = "${env(VAULT_ADDR)}"
  renew_token = false
  retry {
    backoff = "1s"
  }
  token = "${env(VAULT_TOKEN)}"
}

# the settings to find the endpoint of the secrets engine
secret {
  no_prefix = true
  path = "secret/app/config"
}
However, I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Get $%7Benv%28VAULT_ADDR%29%7D/v1/secret/app/config: unsupported protocol scheme "" (retry attempt 1 after "1s")
As I understand it, it cannot do the variable substitution.
If I set the address to "http://10.0.2.2:8200" directly, it works.
The same happens with the VAULT_TOKEN var.
If I hardcode the VAULT_ADDR, then I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Error making API request.
URL: GET http://10.0.2.2:8200/v1/secret/app/config
Code: 403. Errors:
* permission denied (retry attempt 2 after "2s")
Is there a way for this file to understand the environmental variables?
EDIT 1
This is my pod.yml file
---
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  serviceAccountName: vault-auth
  restartPolicy: Never

  # Add the ConfigMap as a volume to the Pod
  volumes:
    - name: vault-token
      emptyDir:
        medium: Memory
    # Populate the volume with config map data
    - name: config
      configMap:
        # `name` here must match the name
        # specified in the ConfigMap's YAML
        # -> kubectl create configmap vault-cm --from-file=./vault-configs/
        name: vault-cm
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl
          - key: envconsul.hcl
            path: envconsul.hcl

  initContainers:
    # Vault container
    - name: vault-agent-auth
      image: vault
      volumeMounts:
        - name: vault-token
          mountPath: /home/vault
        - name: config
          mountPath: /etc/vault
      # This assumes Vault running on local host and K8s running in Minikube using VirtualBox
      env:
        - name: VAULT_ADDR
          value: http://10.0.2.2:8200
      # Run the Vault agent
      args:
        [
          "agent",
          "-config=/etc/vault/vault-agent-config.hcl",
          "-log-level=debug",
        ]

  containers:
    - name: python
      image: myappimg
      imagePullPolicy: Never
      ports:
        - containerPort: 5000
      volumeMounts:
        - name: vault-token
          mountPath: /home/vault
        - name: config
          mountPath: /etc/envconsul
      env:
        - name: HOME
          value: /home/vault
        - name: VAULT_ADDR
          value: http://10.0.2.2:8200
I. Within the container specification, set the environment variables (values in double quotes):
env:
  - name: VAULT_TOKEN
    value: "abcd1234"
  - name: VAULT_ADDR
    value: "http://10.0.2.2:8200"
Then refer to the values in envconsul.hcl
vault {
  address = ${VAULT_ADDR}
  renew_token = false
  retry {
    backoff = "1s"
  }
  token = ${VAULT_TOKEN}
}
II. Another option is to unseal the vault cluster (with the unseal key which was printed while initializing the vault cluster)
$ vault operator unseal
and then authenticate to the vault cluster using a root token.
$ vault login <your-generated-root-token>
More details
I tried many suggestions and nothing worked until I passed the -vault-token argument to the envconsul command like this:
envconsul -vault-token=$VAULT_TOKEN -config=/app/config.hcl -secret="/secret/debug/service" env
and in config.hcl it should be like this:
vault {
  address = "http://kvstorage.try.direct:8200"
  token = "${env(VAULT_TOKEN)}"
}