Argo Workflows: Set container image from environment variables

I'm trying to set the container image for a template from environment variables.
I've tried this:
- name: sftp-to-gcp-bucket
  script:
    image: "gcr.io/{{$CONTAINER}}/imagename:{{$VERSION}}"
    ...
    ...
    env:
      - name: CONTAINER
        valueFrom:
          secretKeyRef:
            name: enviroment-vars
            key: contenedor
      - name: VERSION
        valueFrom:
          secretKeyRef:
            name: enviroment-vars
            key: version
And I have the k8s secrets set correctly:
Name:         enviroment-vars
Namespace:    argo
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
container:  17 bytes
version:    5 bytes
But the env variables don't seem to get injected into the image field... Do I have to make another template that parses the secrets and then inject its outputs into the image field?

The environment variables only have meaning in the container created by Argo Workflows. They are not accessible in the Workflow itself.
There are a number of ways to load Kubernetes resources and use them as variables in a Workflow.
In this case, I'd recommend loading parameters from a ConfigMap.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
spec:
  templates:
    - name: sftp-to-gcp-bucket
      inputs:
        parameters:
          - name: container
            valueFrom:
              configMapKeyRef:
                name: enviroment-vars
                key: contenedor
          - name: version
            valueFrom:
              configMapKeyRef:
                name: enviroment-vars
                key: version
      script:
        image: "gcr.io/{{inputs.parameters.container}}/imagename:{{inputs.parameters.version}}"

Related

ConfigMap value as input for another variable inside container

How do I use a ConfigMap value for the $LOCAL_IP_DB variable declared in the section below as input for another variable? $LOCAL_IP_DB is a generic key defined inside the db-secret ConfigMap, and another environment variable needs its value. How do I make this work?
spec:
  containers:
    - env:
        - name: LOCAL_IP_DB
          valueFrom:
            configMapKeyRef:
              name: db-secret
              key: LOCAL_IP_DB
        - name: LOG_Files
          value: \\${LOCAL_IP_DB}\redis\files\
The key is to use $() instead of ${}:
example-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: example
      image: bash
      args: [printenv]
      env:
        - name: LOCAL_IP_DB
          valueFrom:
            configMapKeyRef:
              name: db-secret
              key: LOCAL_IP_DB
        - name: LOG_FILES
          value: \$(LOCAL_IP_DB)\redis\files\
example-configmap.yaml:
apiVersion: v1
data:
  LOCAL_IP_DB: 192.168.0.1
kind: ConfigMap
metadata:
  name: db-secret
test:
controlplane $ kubectl apply -f example-pod.yaml -f example-configmap.yaml
controlplane $ kubectl logs example | grep 192
LOCAL_IP_DB=192.168.0.1
LOG_FILES=\192.168.0.1\redis\files\
You can find more information about this syntax here: link
Note: if you want to manage sensitive data, a Secret is the recommended resource for that.
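For completeness, a minimal sketch of the same value stored as a Secret instead (my assumption on naming; data values must be base64-encoded, and the env entry would then use secretKeyRef rather than configMapKeyRef):
apiVersion: v1
kind: Secret
metadata:
  name: db-secret   # keeping the (confusingly named) example name
type: Opaque
data:
  LOCAL_IP_DB: MTkyLjE2OC4wLjE=   # base64 of "192.168.0.1"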

Providing a .env file in Kubernetes

How do I provide a .env file in Kubernetes? I am using a Node.js package that populates my process.env via my .env file.
You can do it in two ways:
Providing env variables for the container:
During creation of a Pod, you can set environment variables for the containers that run in that Pod. To set environment variables, include the env field in the configuration file, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Using ConfigMaps:
First you need to create a ConfigMap; an example is below. The data field holds your values as key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Now, use envFrom to define all of the ConfigMap's data as container environment variables, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
You can even specify individual fields via env, like below:
env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: SPECIAL_LEVEL
  - name: SPECIAL_TYPE_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: SPECIAL_TYPE
Ref: configmap and env set
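If the Node.js package (for example dotenv) insists on reading an actual .env file from disk rather than from process.env, you can alternatively mount a single ConfigMap key as a file. A sketch under assumed names (the app reading /app/.env, an image of node:18, and a ConfigMap env-file carrying a ".env" key are all hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: dotenv-demo
spec:
  containers:
    - name: app
      image: node:18               # hypothetical image
      volumeMounts:
        - name: envfile
          mountPath: /app/.env     # expose just the one file, not a directory
          subPath: .env
  volumes:
    - name: envfile
      configMap:
        name: env-file             # hypothetical ConfigMap with a ".env" key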

Kubernetes job does not recognize environment

I am using the following job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: rotatedevcreds2
spec:
  template:
    metadata:
      name: rotatedevcreds2
    spec:
      containers:
        - name: shell
          image: akanksha/dsserver:v7
          env:
            - name: DEMO
              value: "Hello from the environment"
            - name: personal_AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: rotatecreds-env
                  key: personal_aws_secret_access_key
            - name: personal_AWS_SECRET_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: rotatecreds-env
                  key: personal_aws_secret_access_key_id
            - name: personal_GIT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: rotatecreds-env
                  key: personal_git_token
          command:
            - "bin/bash"
            - "-c"
            - "whoami; pwd; /root/rotateCreds.sh"
      restartPolicy: Never
      imagePullSecrets:
        - name: regcred
The shell script runs some Ansible tasks, which result in:
TASK [Get the existing access keys for the functional backup ID] ***************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "aws iam list-access-keys --user-name ''", "failed_when_result": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
However, if I spin up a pod using the same image with the following:
apiVersion: batch/v1
kind: Job
metadata:
  name: rotatedevcreds3
spec:
  template:
    metadata:
      name: rotatedevcreds3
    spec:
      containers:
        - name: shell
          image: akanksha/dsserver:v7
          env:
            - name: DEMO
              value: "Hello from the environment"
            - name: personal_AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: rotatecreds-env
                  key: personal_aws_secret_access_key
            - name: personal_AWS_SECRET_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: rotatecreds-env
                  key: personal_aws_secret_access_key_id
            - name: personal_GIT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: rotatecreds-env
                  key: personal_git_token
          command:
            - "bin/bash"
            - "-c"
            - "whoami; pwd; /root/rotateCreds.sh"
      restartPolicy: Never
      imagePullSecrets:
        - name: regcred
This creates a pod, and I am able to log in to the pod and run /root/rotateCreds.sh.
While running the job, it seems the aws CLI is not recognized. I tried debugging: whoami and pwd return root and / respectively, which is fine. Any pointers on what is missing? I am new to Jobs.
For further debugging, I added a sleep of 10000 seconds to the job template so that I could log in to the container and see what was happening. I noticed that after logging in I was able to run the script manually too; the aws command was recognized properly.
It is likely your PATH is not set correctly.
A quick fix is to use the absolute path of the AWS CLI, e.g. /usr/local/bin/aws, in the /root/rotateCreds.sh script.
OK, so I added an export command to update the PATH, and that fixed the issue. The problem was that I was using Ansible's command module, so the task was not running in a bash environment. So either use the shell module with a bash executable, as described here:
https://docs.ansible.com/ansible/latest/modules/shell_module.html
or export the new PATH.
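A minimal sketch of both fixes as Ansible tasks (the backup_user variable and the /usr/local/bin location are my assumptions):
# Option 1: run through a real shell so a login-style PATH applies
- name: List access keys via the shell module
  shell: aws iam list-access-keys --user-name "{{ backup_user }}"  # backup_user is hypothetical
  args:
    executable: /bin/bash

# Option 2: keep the command module but extend PATH just for this task
- name: List access keys with an explicit PATH
  command: aws iam list-access-keys --user-name "{{ backup_user }}"
  environment:
    PATH: "/usr/local/bin:{{ ansible_env.PATH }}"  # assumes fact gathering is enabled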

Combining multiple k8s secrets into an env variable

My k8s namespace contains a Secret which is created at deployment time (by svcat), so the values are not known in advance.
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-database-credentials
data:
  hostname: ...
  port: ...
  database: ...
  username: ...
  password: ...
A Deployment needs to inject these values in a slightly different format:
...
containers:
  env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: jdbc:postgresql:<hostname>:<port>/<database>  # ??
    - name: DATABASE_USERNAME
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: username
    - name: DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: password
The DATABASE_URL needs to be composed out of the hostname, port, and database values from the previously defined Secret.
Is there any way to do this composition?
Kubernetes allows you to use previously defined environment variables as part of subsequent environment variables elsewhere in the configuration. From the Kubernetes API reference docs:
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables.
This $(...) syntax defines interdependent environment variables for the container.
So, you can first extract the required secret values into environment variables, and then compose the DATABASE_URL with those variables.
...
containers:
  env:
    - name: DB_URL_HOSTNAME   # part 1
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: hostname
    - name: DB_URL_PORT       # part 2
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: port
    - name: DB_URL_DBNAME     # part 3
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: database
    - name: DATABASE_URL      # combine
      value: jdbc:postgresql:$(DB_URL_HOSTNAME):$(DB_URL_PORT)/$(DB_URL_DBNAME)
...
If all the pre-variables are defined as env variables (e.g. in a Helm chart template):
- { name: DATABASE_URL, value: '{{ printf "jdbc:postgresql:$(DATABASE_HOST):$(DATABASE_PORT)/$(DB_URL_DBNAME)" }}' }
With this statement you may also bring in values from the values.yaml file.
For example, if you have defined DB_URL_DBNAME in the values file:
- { name: DATABASE_URL, value: '{{ printf "jdbc:postgresql:$(DATABASE_HOST):$(DATABASE_PORT)/%s" .Values.database.DB_URL_DBNAME }}' }
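A matching values.yaml fragment for that second form could look like this (the nesting under database is my assumption, mirroring .Values.database.DB_URL_DBNAME):
# values.yaml (hypothetical layout)
database:
  DB_URL_DBNAME: mydb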
You can do a couple of things I can think of:
Use a secrets volume and a startup script that reads the secrets from the volume and then starts your application with the DATABASE_URL environment variable set (a sketch of such a script follows the Pod spec below):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: your_db_container
      command: [ "yourscript.sh" ]
      volumeMounts:
        - name: mycreds
          mountPath: "/etc/credentials"
  volumes:
    - name: mycreds
      secret:
        secretName: my-database-credentials
        defaultMode: 256
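A sketch of what yourscript.sh might do, shipped here as a ConfigMap purely for illustration (in practice it is usually baked into the image; /start/yourdb stands in for your real entrypoint, and the file names follow the mount above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-startup-script        # hypothetical name
data:
  yourscript.sh: |
    #!/bin/sh
    # Compose DATABASE_URL from the secret files mounted at /etc/credentials,
    # then hand off to the real entrypoint.
    export DATABASE_URL="jdbc:postgresql:$(cat /etc/credentials/hostname):$(cat /etc/credentials/port)/$(cat /etc/credentials/database)"
    exec /start/yourdb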
Pass the env variable in the command key of your container spec:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: your_db_container
      command: [ "/bin/sh", "-c", "DATABASE_URL=jdbc:postgresql:<hostname>:<port>/<database>/$(DATABASE_USERNAME):$(DATABASE_PASSWORD) /start/yourdb" ]
      env:
        - name: DATABASE_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-database-credentials
              key: username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-database-credentials
              key: password
There are several ways to go (in increasing order of complexity):
Mangle the parameter before putting it into the Secret (extend whatever you use to insert the info there).
Add a script to your Pod/Container that mangles the incoming parameters (environment variables or command arguments) into what is needed. If you cannot or don't want to build your own container image, you can add the extra script as a Volume and set the Container's command field to override the image's start command.
Add a facility to your Kubernetes cluster that does the mangling automatically "behind the scenes": either a Dynamic Admission Controller, or a Kubernetes Operator with a Custom Resource Definition (the CRD tells the operator which Secrets to watch; the operator reads the values and generates whatever other entries you want).

How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes

With the help of Kubernetes I am running daily jobs on GKE: based on a cron configured in Kubernetes, a new container spins up each day and tries to insert some data into BigQuery.
Our setup has two different GCP projects: in one project we maintain the data in BigQuery, and in the other project we run all of GKE. When GKE has to interact with resources in the other project, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS which points to a service account JSON file. But since Kubernetes spins up a new container every day, I am not sure how and where I should set this variable.
Thanks in advance!
NOTE: this file is parsed as a golang template by the drone-gke plugin.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: "base64JsonServiceAccount"
---
apiVersion: v1
kind: Pod
metadata:
  name: adtech-ads-apidata-el-adunit-pod
spec:
  containers:
    - name: adtech-ads-apidata-el-adunit-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: "/etc/gcp"
          readOnly: true
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
These are our cron jobs for loading the AdUnit data:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: adtech-ads-apidata-el-adunit
spec:
  schedule: "*/5 * * * *"
  suspend: false
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: adtech-ads-apidata-el-adunit-container
              image: {{.image}}
              args:
                - -cp
                - opt/nyt/DFPDataIngestion-1.0-jar-with-dependencies.jar
                - com.nyt.cron.AdUnitJob
              env:
                - name: ENV_APP_NAME
                  value: "{{.env_app_name}}"
                - name: ENV_APP_CONTEXT_NAME
                  value: "{{.env_app_context_name}}"
                - name: ENV_GOOGLE_PROJECTID
                  value: "{{.env_google_projectId}}"
                - name: ENV_GOOGLE_DATASETID
                  value: "{{.env_google_datasetId}}"
                - name: ENV_REPORTING_DATASETID
                  value: "{{.env_reporting_datasetId}}"
                - name: ENV_ADBRIDGE_DATASETID
                  value: "{{.env_adbridge_datasetId}}"
                - name: ENV_SALESFORCE_DATASETID
                  value: "{{.env_salesforce_datasetId}}"
                - name: ENV_CLOUD_PLATFORM_URL
                  value: "{{.env_cloud_platform_url}}"
                - name: ENV_SMTP_HOST
                  value: "{{.env_smtp_host}}"
                - name: ENV_TO_EMAIL
                  value: "{{.env_to_email}}"
                - name: ENV_FROM_EMAIL
                  value: "{{.env_from_email}}"
                - name: ENV_AWS_USERNAME
                  value: "{{.env_aws_username}}"
                - name: ENV_CLIENT_ID
                  value: "{{.env_client_id}}"
                - name: ENV_REFRESH_TOKEN
                  value: "{{.env_refresh_token}}"
                - name: ENV_NETWORK_CODE
                  value: "{{.env_network_code}}"
                - name: ENV_APPLICATION_NAME
                  value: "{{.env_application_name}}"
                - name: ENV_SALESFORCE_USERNAME
                  value: "{{.env_salesforce_username}}"
                - name: ENV_SALESFORCE_URL
                  value: "{{.env_salesforce_url}}"
                - name: GOOGLE_APPLICATION_CREDENTIALS
                  value: "/etc/gcp/sa_credentials.json"
                - name: ENV_CLOUD_SQL_URL
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: cloud_sql_url
                - name: ENV_AWS_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: aws_password
                - name: ENV_CLIENT_SECRET
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: dfp_client_secret
                - name: ENV_SALESFORCE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: salesforce_password
          restartPolicy: OnFailure
So, if your GKE project is my-gke, and the project containing the services/things your GKE containers need access to is my-data, one approach is to:
Create a service account in the my-data project. Give it whatever GCP roles/permissions are needed (e.g. roles/bigquery.dataViewer if your my-gke GKE containers need to read some BigQuery tables).
Create a service account key for that service account. When you do this in the console following https://cloud.google.com/iam/docs/creating-managing-service-account-keys, you should automatically download a .json file containing the SA credentials.
Create a Kubernetes Secret resource for those service account credentials. It might look something like this:
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: <contents of running 'base64 the-downloaded-SA-credentials.json'>
Mount the credentials in the container that needs access:
[...]
spec:
  containers:
    - name: my-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
[...]
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable in the container to point to the path of the mounted credentials:
[...]
spec:
  containers:
    - name: my-container
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/sa_credentials.json
With that, any official GCP client (e.g. the GCP Python client, GCP Java client, the gcloud CLI, etc.) should respect the GOOGLE_APPLICATION_CREDENTIALS env var and, when making API requests, automatically use the credentials of the my-data service account whose .json key file you mounted.
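Putting the pieces together, a complete minimal Pod might look like this (the pod and image names are placeholders; the volume, Secret, mount path, and env var match the snippets above):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod                        # placeholder name
spec:
  containers:
    - name: my-container
      image: gcr.io/my-gke/my-image   # placeholder image
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/sa_credentials.json
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json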