Kubernetes - Passing in environment variable and service name (from DNS)

I can't seem to find an example of the correct syntax for inserting an environment variable along with the service name.
So I have a service defined as:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: test
I then use a secrets file with the following:
apiVersion: v1
kind: Secret
metadata:
  name: test
  labels:
    app: test
data:
  password: fxxxxxxxxxxxxxxx787xx==
And just to confirm I'm using envFrom to set that password as an env variable:
apiVersion: v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: xxxxxxxxxxx
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: test
            - secretRef:
                name: test
          ports:
            - containerPort: 3000
Now in my config file I want to refer to that password as well as the service name itself. Is this the correct way to do so?
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:${password}#test"

The YAML configuration does not work the way you provided in your example.
If you want to set up Kubernetes with a complex configuration and
use variables or dynamic assignment for some of them, you have to
use an external parser to replace variable placeholders. I use
bash and sed to accomplish this. I changed your config a bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:VAR_PASSWORD#test"
After saving, I created a simple shell script containing the desired values.
#!/bin/sh
export PASSWORD="verysecretpwd"
cat deploy.yaml | sed "s/VAR_PASSWORD/$PASSWORD/g" | kubectl apply -f -
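As an alternative that avoids external templating entirely, Kubernetes can expand $(VAR_NAME) references in an env value using variables already defined for the container. A minimal sketch, assuming the password variable is populated from the Secret before this entry is processed:
env:
  - name: WORKING_URI
    # $(password) is expanded by the kubelet at container creation,
    # assuming "password" is already defined (e.g. via the Secret)
    value: "http://somedomain:$(password)#test"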

Related

Value of Kubernetes secret in environment variable seems incorrect

I'm deploying a test application onto kubernetes on my local computer (minikube) and trying to pass database connection details into a deployment via environment variables.
I'm passing in these details using two methods - a ConfigMap and a Secret. The username (DB_USERNAME) and connection url (DB_URL) are passed via a ConfigMap, while the DB password is passed in as a secret (DB_PASSWORD).
My issue is that while the values passed via ConfigMap are fine, the DB_PASSWORD from the secret appears jumbled, as if there's some encoding issue.
My deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          envFrom:
            - configMapRef:
                name: gweb-cm
            - secretRef:
                name: password
My ConfigMap and Secret YAML:
apiVersion: v1
data:
  DB_URL: jdbc:mysql://mysql/test?serverTimezone=UTC
  DB_USERNAME: webuser
  SPRING_PROFILES_ACTIVE: prod
  SPRING_DDL_AUTO: create
kind: ConfigMap
metadata:
  name: gweb-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Generic
data:
  DB_PASSWORD: test
Not sure if I'm missing something in my Secret definition?
The secret value should be base64-encoded. Instead of test, use the output of:
echo -n 'test' | base64
P.S. The Secret's type should be Opaque, not Generic.
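For reference, a corrected Secret would look like this (dGVzdA== is the base64 encoding of test):
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
data:
  DB_PASSWORD: dGVzdA== # base64 of "test"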

Can we create a setter containing a list of objects using kpt?

I've noticed that we can create a setter containing a list of strings based on the kpt [documentation][1]. Then I found out that a complex setter containing a list of objects is not supported, based on [this GitHub issue][2]. Since the issue itself mentions that this should be supported by kpt functions, can we use it with the current kpt function version?
[1]: Kpt Apply Setters. https://catalog.kpt.dev/apply-setters/v0.1/
[2]: Setters for list of objects. https://github.com/GoogleContainerTools/kpt/issues/1533
I've discussed this a bit with my coworkers, and it turned out this is possible with the following setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 4 # kpt-set: ${nginx-replicas}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: "nginx:1.16.1" # kpt-set: nginx:${tag}
          ports:
            - protocol: TCP
              containerPort: 80
---
apiVersion: v1
kind: MyKind
metadata:
  name: foo
environments: # kpt-set: ${env}
  - dev
  - stage
---
apiVersion: v1
kind: MyKind
metadata:
  name: bar
environments: # kpt-set: ${nested-env}
  - key: some-key
    value: some-value
After that we can define the following setters:
apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
data:
  env: |-
    - prod
    - dev
  nested-env: |-
    - key: some-other-key
      value: some-other-value
  nginx-replicas: "3"
  tag: 1.16.2
And then we can call the following command:
$ kpt fn render apply-setters-simple
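For completeness, the render step picks up the function from the package's Kptfile pipeline. A minimal sketch, assuming the setters ConfigMap above is saved as setters.yaml inside the apply-setters-simple package:
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: apply-setters-simple
pipeline:
  mutators:
    # apply-setters reads the data keys from setters.yaml and rewrites
    # every field annotated with a matching "# kpt-set: ..." comment
    - image: gcr.io/kpt-fn/apply-setters:v0.2
      configPath: setters.yaml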
I've sent a pull request to the repository to add documentation about this.

Kubernetes env variable not attached via PodDefault

I am working with the kubeflow notebook server. I need to add some configuration in the form of environment variables, so I decided to create a ConfigMap and a PodDefault.
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
  namespace: app
data:
  PLACE: /auth
  USERNAME: root
  PASSWORD: l3tm3in
This is my ConfigMap file. I have attached it to the PodDefault object using the syntax below:
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: test-configmap
namespace: app
spec:
selector:
matchLabels:
test-configmap: "true"
desc: "Test Configmap"
envFrom:
- configMapRef:
name: test-configmap
The values show up in the kubeflow configuration section, but they are not attached to the notebook (Pod).
Does anyone know how to fix this issue?
Thanks in advance.
I have never used kubeflow, but based on the source code, this should be the solution:
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: test-configmap
namespace: app
spec:
selector:
matchLabels:
test-configmap: "true"
desc: "Test Configmap"
containers:
- envFrom:
- configMapRef:
name: test-configmap
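Note that a PodDefault is only applied to Pods whose labels match its selector, so the notebook Pod also needs the matching label. A sketch of the relevant fragment:
metadata:
  labels:
    # must match spec.selector.matchLabels in the PodDefault above
    test-configmap: "true"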

Kubernetes: Is there a way to retrieve or inject local env vars into configmap.yaml? [duplicate]

I am setting up Kubernetes for a Django web app.
I am passing an environment variable while creating the deployment, as below:
kubectl create -f deployment.yml -l key1=value1
I am getting the error below:
error: no objects passed to create
I am able to create the deployment successfully if I remove the env variable flag -l key1=value1.
deployment.yaml is as below:
# Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: sigma-service
  name: $key1
What could be causing the above error while creating the deployment?
I used envsubst (https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) for this. Create a deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $NAME
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Then:
export NAME=my-test-nginx
envsubst < deployment.yaml | kubectl apply -f -
Not sure what OS you are using to run this. On macOS, envsubst can be installed like this:
brew install gettext
brew link --force gettext
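One caveat: by default envsubst replaces every $VAR it can find, which can mangle unrelated $... text in a manifest. Passing an explicit variable list restricts the substitution:
# Only substitute $NAME; leave any other $... text in the file untouched
envsubst '$NAME' < deployment.yaml | kubectl apply -f -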
This isn't the right way to use the deployment; you can't provide half the details in YAML and half in kubectl commands. If you want to pass environment variables in your deployment, you should add those details under the deployment's spec.template.spec.
You should add the following block to your deployment.yaml:
spec:
  containers:
    - env:
        - name: var1
          value: val1
This will export your environment variables inside the container.
The other way to export the environment variable is to use kubectl run (not advisable), as it is going to be deprecated very soon. You can use the following command:
kubectl run nginx --image=nginx --restart=Always --replicas=1 --env=var1=val1
The above command will create a deployment nginx with 1 replica and the environment variable var1=val1.
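Since kubectl run no longer creates Deployments on current versions, here is a sketch of an equivalent with non-deprecated commands (assuming kubectl v1.18 or newer):
# Create the deployment, then attach the environment variable to it
kubectl create deployment nginx --image=nginx
kubectl set env deployment/nginx var1=val1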
You cannot pass variables to "kubectl create -f". YAML files should be complete manifests without variables. Also, you cannot use the "-l" flag with "kubectl create -f".
If you want to pass environment variables to a pod, you should do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          env:
            - name: MY_VAR
              value: MY_VALUE
          ports:
            - containerPort: 80
Read more here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
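To confirm the variable made it into the container, you can print it from the running Pod (a quick check, assuming the deployment above is running):
kubectl exec deploy/nginx-deployment -- printenv MY_VAR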
Follow the steps below.
Create test-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Using the sed command, you can update the deployment name at deployment time:
sed -e 's|MYAPP|my-nginx|g' test-deploy.yaml | kubectl apply -f -
File: ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
File: ./service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: nginx
File: ./kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
If you're using https://kustomize.io/, you can do this trick in a CI:
sh '( echo "images:" ; echo " - name: $IMAGE" ; echo " newTag: $VERSION" ) >> ./kustomization.yaml'
sh "kubectl apply --kustomize ."
I chose yq since it is YAML-aware and gives precise control over where text substitutions happen.
To set an image from bash env var:
export IMAGE="your_image:latest"
yq eval '.spec.template.spec.containers[0].image = "'$IMAGE'"' manifests/daemonset.yaml | kubectl apply -f -
yq is available on MacPorts (as of 19/04/21 v4.4.1) with sudo port install yq
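The same substitution can also be written with yq's own env operator (yq v4), which avoids the nested-quote gymnastics:
export IMAGE="your_image:latest"
# strenv(IMAGE) reads the variable from the environment inside the expression
yq eval '.spec.template.spec.containers[0].image = strenv(IMAGE)' manifests/daemonset.yaml | kubectl apply -f -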
I was facing the same problem. I created a Python script to change simple or complex values in, or add values to, a YAML file.
This became very handy in a situation similar to the one you describe, and switching to the Python domain allows for more complex scenarios.
The code and usage instructions are available in this gist:
https://gist.github.com/washraf/f81153270c80b0b4ecf90a53872abde7
Please try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kdpd00201
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: frontend
          image: ifccncf/nginx:1.14.2
          ports:
            - containerPort: 8001
          env:
            - name: NGINX_PORT
              value: "8001"
My solution is then:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: frontend
  name: frontend
  namespace: kdpd00201
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: frontend
    spec:
      containers:
        - env: # modified level
            - name: NGINX_PORT
              value: "8080"
          image: lfccncf/nginx:1.13.7
          name: nginx
          ports:
            - containerPort: 8080

Add namespace to kubernetes hostname

Currently my pods are all named "some-deployment-foo-bar", which does not help me track down issues when an error is reported with just the hostname.
So I want "$POD_NAMESPACE.$POD_NAME" as the hostname.
I tried pod.beta.kubernetes.io/hostname: "foo", but that only sets an absolute name, and subdomain did not work.
The only other solution I saw was using a wrapper script that modifies the hostname and then executes the actual command, which is pretty hacky and adds overhead to every container.
Is there any way of doing this nicely?
The current config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: foo
  labels:
    project: foo
spec:
  selector:
    matchLabels:
      project: foo
  template:
    metadata:
      name: foo
      labels:
        project: foo
    spec:
      containers:
        - image: busybox
          name: foo
PodSpec has a subdomain field, which can be used to specify the Pod's subdomain. This field value takes precedence over the pod.beta.kubernetes.io/subdomain annotation value.
There is more information here; below is an example.
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
    - name: foo # Actually, no port is needed.
      port: 1234
      targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
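On recent clusters there is also a more direct way to get the namespace into the hostname: the setHostnameAsFQDN Pod field (stable since Kubernetes v1.22) sets the kernel hostname to the Pod's full FQDN. A sketch, assuming a cluster new enough to support it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox3
  labels:
    name: busybox
spec:
  hostname: busybox-3
  subdomain: default-subdomain
  # hostname becomes busybox-3.default-subdomain.<namespace>.svc.cluster.local
  setHostnameAsFQDN: true
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox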