Workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
Template:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
        parameters:
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
            name: "{{inputs.parameters.configmap}}"
          - secretRef:
            name: "{{inputs.parameters.secret}}"
When deployed through the Argo UI I receive the following error from Kubernetes when starting the pod:
spec.containers[1].envFrom: Invalid value: \"\": must specify one of: `configMapRef` or `secretRef`
Using envFrom is supported and documented in the Argo documentation: https://argoproj.github.io/argo-workflows/fields/. Why is Kubernetes complaining here?
As mentioned in the comments, there are a couple of issues with your manifests. They're valid YAML, but that YAML does not deserialize into valid Argo custom resources.
In the WorkflowTemplate, you have duplicated the parameters key under spec.templates[0].inputs.
Also in the WorkflowTemplate, you have placed the configMapRef and secretRef names at the same level as those keys. configMapRef and secretRef are objects, so the name key should be nested under each of them; as written, both refs end up empty, which is exactly what the Kubernetes error is telling you.
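The difference is just the indentation of name: relative to configMapRef: and secretRef: (this sketch reuses the configmap parameter from your template):

# broken: name sits next to configMapRef, so configMapRef itself is empty
envFrom:
  - configMapRef:
    name: "{{inputs.parameters.configmap}}"

# fixed: name is nested under configMapRef
envFrom:
  - configMapRef:
      name: "{{inputs.parameters.configmap}}"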
These are the corrected manifests:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
              name: "{{inputs.parameters.configmap}}"
          - secretRef:
              name: "{{inputs.parameters.secret}}"
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
Argo Workflows supports IDE-based validation which should help you find/avoid these issues.
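For example, if your editor uses yaml-language-server, you can point it at the Argo JSON schema with a modeline at the top of the manifest; the schema URL below is an assumption on my part, so check the Argo docs for the current location:

# yaml-language-server: $schema=https://raw.githubusercontent.com/argoproj/argo-workflows/master/api/jsonschema/schema.json
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
# ... rest of the manifest as above

The argo CLI also has a lint subcommand (argo lint my-workflow.yaml) that catches most of these schema-level mistakes before you submit anything.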
Related
I want to remove a few environment variables from a container with kustomize. Is that possible? When I patch, it just adds to the existing list, as you may know.
If it's not possible, can we replace the environment variable name and the secret key name/key pair all together?
containers:
  - name: container1
    env:
      - name: NAMESPACE
        valueFrom:
          secretKeyRef:
            name: x
            key: y
Any help on this will be appreciated! Thanks!
If you're looking to remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so.
If I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
        - name: example
          image: docker.io/traefik/whoami:latest
          env:
            - name: ENV_VAR_1
              valueFrom:
                secretKeyRef:
                  name: someSecret
                  key: someKeyName
            - name: ENV_VAR_2
              value: example-value
If I write in my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example
      spec:
        template:
          spec:
            containers:
              - name: example
                env:
                  - name: ENV_VAR_1
                    $patch: delete
Then the output of kustomize build is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
        - env:
            - name: ENV_VAR_2
              value: example-value
          image: docker.io/traefik/whoami:latest
          name: example
Using a strategic merge patch like this has an advantage over a JSONPatch-style patch like the one in Nijat's answer, because it doesn't depend on the order in which the environment variables are defined.
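For comparison, an index-based JSON 6902 patch would look roughly like the sketch below (it assumes ENV_VAR_1 sits at index 0 of the env list of the first container); it stops working as soon as the variable moves to a different position:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: example
    patch: |
      - op: remove
        path: /spec/template/spec/containers/0/env/0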
How do I provide a .env file in Kubernetes? I am using a Node.js package that populates my process.env from my .env file.
You can do it in two ways:
Providing env variables for the container:
During the creation of a Pod, you can set environment variables for the containers that run in that Pod. To set environment variables, include the env field in the configuration file. For example:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Using ConfigMaps:
First you need to create a ConfigMap; an example is below. The data field holds your values as key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Now, use envFrom to define all of the ConfigMap's data as container environment variables. For example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
You can even pick out individual keys by using env, like below:
env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: SPECIAL_LEVEL
  - name: SPECIAL_TYPE_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: SPECIAL_TYPE
Ref: configmap and env set
Why does the workflow just end on an arrow pointing down?
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflow-template-whalesay-template
spec:
  templates:
    - name: whalesay-template
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
This is the WorkflowTemplate I'm using. I applied it to k8s before the next step.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: workflow-template-dag-diamond
  generateName: workflow-template-dag-diamond-
spec:
  entrypoint: diamond
  templates:
    - name: diamond
      dag:
        tasks:
          - name: A
            templateRef:
              name: workflow-template-whalesay-template
              template: whalesay-template
            arguments:
              parameters:
                - name: message
                  value: A
This Workflow references the previous template. The Workflow is doing what it's supposed to do, but I can't see the green dots on the UI.
I have multiple Secrets in a Kubernetes cluster. All of them contain many values, for example:
apiVersion: v1
kind: Secret
metadata:
  name: paypal-secret
type: Opaque
data:
  PAYPAL_CLIENT_ID: base64_PP_client_id
  PAYPAL_SECRET: base64_pp_secret
stringData:
  PAYPAL_API: https://api.paypal.com/v1
  PAYPAL_HOST: api.paypal.com
I'm curious how to pass all of the values from all of the Secrets to a ReplicaSet, for example.
I tried this approach:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pp-debts
  labels:
    environment: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      environment: prod
  template:
    metadata:
      labels:
        environment: prod
    spec:
      containers:
        - name: integration-app
          image: my-container-image
          envFrom:
            - secretRef:
                name: intercom-secret
          envFrom:
            - secretRef:
                name: paypal-secret
          envFrom:
            - secretRef:
                name: postgres-secret
          envFrom:
            - secretRef:
                name: redis-secret
But when I connected to the pod and looked at the env variables, I could only see values from the redis-secret.
That's because envFrom is a single list field, and you've repeated the envFrom key four times in the same container; when the YAML is parsed, only the last occurrence survives, so only redis-secret gets applied. Try using one envFrom with multiple entries under it, as below:
- name: integration-app
  image: my-container-image
  envFrom:
    - secretRef:
        name: intercom-secret
    - secretRef:
        name: paypal-secret
    - secretRef:
        name: postgres-secret
    - secretRef:
        name: redis-secret
There's an example at the bottom of this blog post by David Chua.
I'm on Kubernetes 1.3.5; we are using Deployments with rolling updates to update the pods in our cluster. However, on a rolling update, the newly added environment variable never gets added to the pod. Is that by design? What are the ways to get around that?
Following are the sample deployment YAML files. Basically the Deployment was deployed with the first version, then we updated the YAML with the newly added env variable NEW_KEY and ran through the rolling update. But the new env variable does not show up in the pods.
first version yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP_NAME-deployment
  labels:
    name: APP_NAME
    environment: DEV
spec:
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
  replicas: 2
  template:
    metadata:
      labels:
        name: APP_NAME
        environment: DEV
    spec:
      containers:
        - name: APP_NAME
          image: repo.app_name:latest
          env:
            - name: NODE_ENV
              value: 'development'
            - name: APP_KEY
              value: '123'
updated yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP_NAME-deployment
  labels:
    name: APP_NAME
    environment: DEV
spec:
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
  replicas: 2
  template:
    metadata:
      labels:
        name: APP_NAME
        environment: DEV
    spec:
      containers:
        - name: APP_NAME
          image: repo.app_name:latest
          env:
            - name: NODE_ENV
              value: 'development'
            - name: APP_KEY
              value: '123'
            - name: NEW_KEY
              value: 'new'
You can store the env variable in either a ConfigMap or a Secret and reference it from your Deployment. For a ConfigMap you would do:
env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: node-env
        key: node.dev
Or with a secretKeyRef:
env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      secretKeyRef:
        name: node-env
        key: node.dev
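Both snippets assume that a ConfigMap and a Secret with that name and key already exist in the namespace; a minimal sketch of what they could look like (the names, keys, and values here are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-env
data:
  node.dev: development
---
apiVersion: v1
kind: Secret
metadata:
  name: node-env
type: Opaque
stringData:
  node.dev: development

Keep in mind that environment variables are only read when a container starts, so existing pods still need to be rolled (which your RollingUpdate strategy already does) to pick up changed values.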