Is there a way to use an environment variable as the default for another? For example:
apiVersion: v1
kind: Pod
metadata:
  name: Work
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET
      value: "custom"
    - name: RESULT
      value: "$(SOMETIMES_SET) ? $(SOMETIMES_SET) : $(ALWAYS_SET)"
I don't think there is a way to do that, but you can try something like this instead:
apiVersion: v1
kind: Pod
metadata:
  name: Work
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    command:
    - sh
    - -c
    args:
    - RESULT=${SOMETIMES_SET:-${ALWAYS_SET}}; command_to_run_app
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET
      value: "custom"
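The workaround leans on POSIX parameter expansion, which you can check locally without a cluster (command_to_run_app above is just a stand-in for the real entrypoint). A minimal sketch:

```shell
# ${SOMETIMES_SET:-$ALWAYS_SET} yields SOMETIMES_SET when it is set
# and non-empty, and falls back to ALWAYS_SET otherwise.
ALWAYS_SET="default"
SOMETIMES_SET="custom"
echo "${SOMETIMES_SET:-$ALWAYS_SET}"   # prints "custom"

unset SOMETIMES_SET
echo "${SOMETIMES_SET:-$ALWAYS_SET}"   # prints "default"

# Use ':=' instead of ':-' if the fallback should also be assigned:
: "${SOMETIMES_SET:=$ALWAYS_SET}"
echo "$SOMETIMES_SET"                  # prints "default"
```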
I am unable to pass arguments to a container entry point.
The error I am getting is:
Error: container create failed: time="2022-09-30T12:23:31Z" level=error msg="runc create failed: unable to start container process: exec: \"--base-currency=EUR\": executable file not found in $PATH"
How can I pass an arg with -- in it to the arg option in a PodSpec?
I have tried using args directly and via environment variables; neither works. I have also tried arguments without "--" - still no joy.
The Deployment is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: market-pricing
spec:
  replicas: 1
  selector:
    matchLabels:
      application: market-pricing
  template:
    metadata:
      labels:
        application: market-pricing
    spec:
      containers:
      - name: market-pricing
        image: quay.io/brbaker/market-pricing:v0.5.0
        args: ["$(BASE)", "$(CURRENCIES)", "$(OUTPUT)"]
        imagePullPolicy: Always
        volumeMounts:
        - name: config
          mountPath: "/app/config"
          readOnly: true
        env:
        - name: BASE
          value: "--base-currency=EUR"
        - name: CURRENCIES
          value: "--currencies=AUD,NZD,USD"
        - name: OUTPUT
          value: "--dry-run"
      volumes:
      - name: config
        configMap:
          name: app-config # Provide the name of the ConfigMap you want to mount.
          items:           # An array of keys from the ConfigMap to create as files
          - key: "kafka.properties"
            path: "kafka.properties"
          - key: "app-config.properties"
            path: "app-config.properties"
The Dockerfile is:
FROM registry.redhat.io/ubi9:9.0.0-1640
WORKDIR /app
COPY bin/market-pricing-svc .
CMD ["/app/market-pricing-svc"]
The ConfigMap is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  kafka.properties: |
    bootstrap.servers=fx-pricing-dev-kafka-bootstrap.kafka.svc.cluster.local:9092
    security.protocol=plaintext
    acks=all
  app-config.properties: |
    market-data-publisher=kafka-publisher
    market-data-source=ecb
    reader=time-reader
Below is my app definition that uses the Azure CSI secrets store provider. Unfortunately, this definition throws Error: secret 'my-kv-secrets' not found. Why is that?
SecretProviderClass
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-app-dev-spc
spec:
  provider: azure
  secretObjects:
  - secretName: my-kv-secrets
    type: Opaque
    data:
    - objectName: DB-HOST
      key: DB-HOST
  parameters:
    keyvaultName: my-kv-name
    objects: |
      array:
        - |
          objectName: DB-HOST
          objectType: secret
    tenantId: "xxxxx-yyyy-zzzz-rrrr-vvvvvvvv"
Pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: debug
  name: debug
spec:
  containers:
  - args:
    - sleep
    - 1d
    name: debug
    image: alpine
    env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:
          name: my-kv-secrets
          key: DB-HOST
  volumes:
  - name: kv-secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-app-dev-spc
      nodePublishSecretRef:
        name: my-sp-secrets
It turned out that the Secrets Store CSI driver only creates the Kubernetes Secret when the CSI volume is actually mounted into a container. So if you forget to specify volumeMounts in your YAML definition, it will not work! Below is the fix.
Pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: debug
  name: debug
spec:
  containers:
  - args:
    - sleep
    - 1d
    name: debug
    image: alpine
    env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:
          name: my-kv-secrets
          key: DB-HOST
    volumeMounts:
    - name: kv-secrets
      mountPath: /mnt/kv_secrets
      readOnly: true
  volumes:
  - name: kv-secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-app-dev-spc
      nodePublishSecretRef:
        name: my-sp-secrets
I need to use SAML 2.0 authentication (https://www.bookstackapp.com/docs/admin/saml2-auth/) in the Kubernetes deployment file of BookStack.
Is it possible to configure the variables from the above link in the Kubernetes deployment file?
Thanks in advance.
Look into this example - it seems it was created just for you :)
https://github.com/BookStackApp/BookStack/issues/1776
Just use your own variables from the saml2-auth page in the deployment file:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "bookstack-test-x5"
  namespace: "default"
  labels:
    app: "bookstack-test-x5"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "bookstack-test-x5"
  template:
    metadata:
      labels:
        app: "bookstack-test-x5"
    spec:
      containers:
      - name: "bookstack-sha256"
        image: "gcr.io/<PATH_TO_MY_CONTAINER>"
        env:
        - name: "DB_USER"
          value: "bookstack22"
        - name: "DB_HOST"
          value: "127.0.0.1"
        - name: DB_PORT
          value: "3306"
        - name: DB_DATABASE
          value: "bookstack"
        - name: "DB_PSWD"
          value: "my_secret_password"
        - name: "APP_DEBUG"
          value: "true"
        - name: CACHE_DRIVER
          value: "database"
        - name: SESSION_DRIVER
          value: "database"
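The SAML options from the linked page are just more environment variables, so they can be appended to the same env: list. A sketch only: the variable names below follow the BookStack saml2-auth docs, and the values are placeholders to replace with your own IdP details:

```yaml
# Append under the bookstack container's env: list
- name: AUTH_METHOD
  value: "saml2"
- name: SAML2_NAME
  value: "MyIdP"                     # display name on the login button (placeholder)
- name: SAML2_EMAIL_ATTRIBUTE
  value: "email"
- name: SAML2_EXTERNAL_ID_ATTRIBUTE
  value: "uid"
- name: SAML2_IDP_ENTITYID
  value: "https://idp.example.com"   # your IdP entity ID (placeholder)
- name: SAML2_AUTOLOAD_METADATA
  value: "true"
```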
The following YAML file works fine:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: something
spec:
  replicas: 2
  selector:
    matchLabels:
      app: something
  template:
    metadata:
      labels:
        app: something
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}
      containers:
      - name: something
        image: docker.io/manuchadha25/something
        volumeMounts:
        - name: shared-logs
          mountPath: /deploy/codingjediweb-1.0/logs/
        env:
        - name: DB_CASSANDRA_URI
          value: cassandra://34.91.5.44
        - name: DB_PASSWORD
          value: something
        - name: DB_KEYSPACE_NAME
          value: something
        - name: DB_USERNAME
          value: something
        - name: EMAIL_SERVER
          value: something
        - name: EMAIL_USER
          value: something
        - name: EMAIL_PASSWORD
          value: something
        - name: ALLOWED_NODES
          value: 34.105.134.5
        ports:
        - containerPort: 9000
      #- name: logging
      #  image: busybox
      #  volumeMounts:
      #  - name: shared-logs
      #    mountPath: /deploy/codingjediweb-1.0/logs/
      #  command: ['sh', '-c', "while true; do sleep 86400; done"]
But when I add the following two lines to the env section, I get an error:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: something
spec:
  replicas: 2
  selector:
    matchLabels:
      app: something
  template:
    metadata:
      labels:
        app: something
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}
      containers:
      - name: something
        image: docker.io/manuchadha25/something
        volumeMounts:
        - name: shared-logs
          mountPath: /deploy/codingjediweb-1.0/logs/
        env:
        - name: DB_CASSANDRA_URI
          value: cassandra://34.91.5.44
        - name: DB_CASSANDRA_PORT   <--- NEW LINE
          value: 9042               <--- NEW LINE
        - name: DB_PASSWORD
          value: something
        - name: DB_KEYSPACE_NAME
          value: something
        - name: DB_USERNAME
          value: something
        - name: EMAIL_SERVER
          value: something
        - name: EMAIL_USER
          value: something
        - name: EMAIL_PASSWORD
          value: something
        - name: ALLOWED_NODES
          value: 34.105.134.5
        ports:
        - containerPort: 9000
      #- name: logging
      #  image: busybox
      #  volumeMounts:
      #  - name: shared-logs
      #    mountPath: /deploy/codingjediweb-1.0/logs/
      #  command: ['sh', '-c', "while true; do sleep 86400; done"]
$ kubectl apply -f codingjediweb-nodes.yaml
Error from server (BadRequest): error when creating "codingjediweb-nodes.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 9, error found in #10 byte of ...|,"value":9042},{"nam|..., bigger context ...|.1.85.10"},{"name":"DB_CASSANDRA_PORT","value":9042},{"name":"DB_PASSWORD","value":"1GFGc1Q|...
An online YAML validator confirms that the YAML is syntactically correct.
What am I doing wrong?
Could you please put 9042 in double quotes ("9042") and try again? The field expects a string but is getting a number, so the value must be quoted.
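For reference, in the Kubernetes API EnvVar.value is typed as a string, so any YAML scalar that would parse as a number or boolean has to be quoted:

```yaml
- name: DB_CASSANDRA_PORT
  value: "9042"   # quoted, so YAML parses it as a string
```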
How can I use ConfigMap to write cluster node information to a JSON file?
The below gives me Node information :
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use Configmap to write the above output to a text file?
You can save the output of the command to a file, then use that file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap, you can mount it as a file in your deployment/pod.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
      - name: appname
        image: someimage
        imagePullPolicy: Always
        env:
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "3000"
        - name: SOME_VAR
          value: xxx
        volumeMounts:
        - name: your-volume-name
          mountPath: "your/path/to/store/the/file"
          readOnly: true
      volumes:
      - name: your-volume-name
        configMap:
          name: your-configmap-name
          items:
          - key: your-filename-inside-pod
            path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
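Putting the two steps together, a sketch of the full flow (node-info is a hypothetical ConfigMap name; this requires kubectl access to a cluster):

```shell
# Write the node hostnames to a file
kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' \
  > node-info.json

# Wrap the file in a ConfigMap
kubectl create configmap node-info --from-file=node-info.json

# Verify the stored data
kubectl get configmap node-info -o yaml
```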
Or just create the ConfigMap directly with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines under the spec section of your pod configuration file:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
You can also use PodPreset.
PodPreset is an object that lets you inject information, e.g. environment variables, into pods at creation time. (Note that PodPreset was an alpha feature and has been removed in Kubernetes 1.20+.)
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
  - name: DB_PORT
    value: "6379"
  envFrom:
  - configMapRef:         # envFrom takes no "key" field; all keys are imported
      name: etcd-env-config
but remember that you also have to add the
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: PodPreset, pod preset configuration.