Kubernetes env variable not attached via PodDefault

I am working in a Kubeflow notebook server. I need to add some configuration in the form of environment variables, so I have decided to create a ConfigMap and a PodDefault.
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
  namespace: app
data:
  PLACE: /auth
  USERNAME: root
  PASSWORD: l3tm3in
This is my ConfigMap file. I have referenced it in a PodDefault object using the syntax below:
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: test-configmap
namespace: app
spec:
selector:
matchLabels:
test-configmap: "true"
desc: "Test Configmap"
envFrom:
- configMapRef:
name: test-configmap
The entry shows up in the Kubeflow configurations section, but the variables are not attached to the notebook (Pod).
Does anyone know how to fix this issue?
Thanks in advance

I have never used Kubeflow, but based on the source code, this should be the solution:
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: test-configmap
namespace: app
spec:
selector:
matchLabels:
test-configmap: "true"
desc: "Test Configmap"
containers:
- envFrom:
- configMapRef:
name: test-configmap
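One thing to double-check: the PodDefault is applied only to pods that carry the label from its selector. In the Kubeflow UI this normally happens when you tick the corresponding configuration while creating the notebook; if you label the pod yourself, the metadata would need to look roughly like this (the pod name is hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: my-notebook
  namespace: app
  labels:
    test-configmap: "true"  # must match the PodDefault's matchLabels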

Related

Value of Kubernetes secret in environment variable seems incorrect

I'm deploying a test application onto kubernetes on my local computer (minikube) and trying to pass database connection details into a deployment via environment variables.
I'm passing in these details using two methods - a ConfigMap and a Secret. The username (DB_USERNAME) and connection url (DB_URL) are passed via a ConfigMap, while the DB password is passed in as a secret (DB_PASSWORD).
My issue is that while the values passed via the ConfigMap are fine, the DB_PASSWORD from the Secret appears jumbled, as if there were some encoding issue.
My deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        envFrom:
        - configMapRef:
            name: gweb-cm
        - secretRef:
            name: password
My ConfigMap and Secret yaml
apiVersion: v1
data:
  DB_URL: jdbc:mysql://mysql/test?serverTimezone=UTC
  DB_USERNAME: webuser
  SPRING_PROFILES_ACTIVE: prod
  SPRING_DDL_AUTO: create
kind: ConfigMap
metadata:
  name: gweb-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Generic
data:
  DB_PASSWORD: test
Not sure if I'm missing something in my Secret definition?
The secret value should be base64 encoded. Instead of test, use the output of
echo -n 'test' | base64
P.S. the Secret's type should be Opaque, not Generic
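For completeness, a corrected Secret might look like this (dGVzdA== is the output of echo -n 'test' | base64; alternatively, the stringData field accepts the plain value and Kubernetes encodes it for you):
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
data:
  DB_PASSWORD: dGVzdA==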

Kubernetes pod level configuration externalization in spring boot app

I need some help from the community; I'm still new to K8s and Spring Boot. Thanks all in advance.
What I need is to have 4 pods running in the K8s environment, each with a slightly different configuration. For example, I have a property called regions in one of my Java classes that extracts its value from application.yml, like:
@Value("${regions}")
private String regions;
Now, after deploying it to the K8s environment, I want to have 4 pods running (I can configure that in the Helm file), and in each pod the regions field should have a different value.
Is this achievable? Can anyone please give any advice?
If you want to run 4 pods with different configurations, you have to create 4 different Deployments in Kubernetes.
You can create a different ConfigMap for each, storing either the whole application.yaml file or individual environment variables, and inject it into the corresponding Deployment.
How to store a whole application.yaml inside a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-first
data:
  application.yaml: |-
    data: test,
    region: first-region
In the same way, you can create the ConfigMap for the second deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-second
data:
  application.yaml: |-
    data: test,
    region: second-region
You can mount this ConfigMap into each Deployment, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-app
  name: hello-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-app
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/nginx/app.yaml
          name: yaml-file
          readOnly: true
      volumes:
      - configMap:
          name: yaml-region-second
          optional: false
        name: yaml-file
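One caveat, as a hedged aside: mounting the ConfigMap volume at /etc/nginx/app.yaml as above places a directory at that path with application.yaml inside it. If you want the key mounted as a single file, a subPath mount is the usual Kubernetes approach; a sketch of just the volumeMounts section, reusing the names above:
        volumeMounts:
        - mountPath: /etc/nginx/application.yaml
          name: yaml-file
          subPath: application.yaml  # mount only this key as a file
          readOnly: true
Note that subPath mounts do not pick up later ConfigMap updates automatically.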
Accordingly, you can also create a Helm chart.
If you just want to pass a single environment variable instead of storing the whole file inside a ConfigMap, you can add the value directly to the Deployment.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: REGION
      value: "one"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(REGION) $(HONORIFIC) $(NAME)"]
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
For each Deployment, the environment will be different, and in Helm you can also update or override it dynamically with a CLI command, as sketched below.
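As a sketch of that Helm approach (the chart layout and the region value key are hypothetical, not from the original answer), you can put the region into values.yaml and reference it from the Deployment template:
# values.yaml (hypothetical)
region: first-region

# templates/deployment.yaml, env section of the container
        env:
        - name: REGION
          value: {{ .Values.region | quote }}
Each of the 4 releases can then be installed with its own value, e.g. helm install app-two ./chart --set region=second-region.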

Use variable in a patchesJson6902 of a kustomization.yaml file

I would like to set the name field in a Namespace resource and also replace the namespace field in a Deployment resource with the same value, for example my-namespace.
Here is kustomization.json:
namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /metadata/name
      value: <NAMESPACE>
  target:
    kind: Namespace
    name: system
    version: v1
resources:
- manager.yaml
and manager.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
I tried using kustomize edit set namespace my-namespace && kustomize build, but it only changes the namespace field in the Deployment object.
Is there a way to change both field without using sed, in 'pure' kustomize and without having to change manually value in kustomization.json?
Is there a way to change both field without using sed, in 'pure' kustomize and without having to change manually value in kustomization.json?
I managed to achieve something similar with the following configuration:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
And here is the output of the command that you used:
➜ kustomize kustomize edit set namespace my-namespace7 && kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace7
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
What is happening here is that once you set the namespace globally in kustomization.yaml, it is applied to all your targets, which looks to me like an easier way to achieve what you want.
I cannot test your config without the manager_patch.yaml content. If you wish to pursue your approach further, you will have to update the question with the file content.
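For reference, kustomize edit set namespace only rewrites the namespace field in kustomization.yaml, so after running it the file from above would look like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace7
resources:
- deployment.yaml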

imagePullSecrets not working with Kind deployment

I'm trying to create a deployment with 3 replicas, which will pull an image from a private registry. I have stored the credentials in a secret and am using imagePullSecrets in the deployment file. I'm getting the error below when I deploy it.
error: error validating "private-reg-pod.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "containers" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "imagePullSecrets" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "template" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Any help on this?
Below is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
spec:
  containers:
  - name: test-pod
    image: <private-registry>
  imagePullSecrets:
  - name: regcred
Thanks,
Sundar
The image section should be placed in the container specification, while imagePullSecrets belongs at the pod spec level, so the proper YAML file looks like this (please note the indentation):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: <private-registry>
      imagePullSecrets:
      - name: regcred
This is a very common issue with Kubernetes Deployments.
The valid format for pulling an image from a private repository in your Kubernetes Deployment file is:
spec:
  imagePullSecrets:
  - name: <your secret name>
  containers:
Please make sure you have created the secret, then try to make it like the below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: nginx
      imagePullSecrets:
      - name: regcred
Both @Jakub-Bujny and @itmaven are correct. Indentation is really important when creating and using a .yaml (or .yml) file, because the YAML file is parsed based on that indentation. So, both of these are correct:
1)
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: test-pod
    image:
2)
spec:
  containers:
  - name: test-pod
    image: <private-registry>
  imagePullSecrets:
  - name: regcred
Note: before you use imagePullSecrets, you have to create the secret using the following command:
kubectl create secret docker-registry <private-registry> \
  --docker-server=<cluster_CA_domain>:[some port] \
  --docker-username=<user_name> \
  --docker-password=<user_password> \
  --docker-email=<user_email>
Also check that the secret was created successfully using the following command:
kubectl get secret
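Equivalently, the same secret can be declared as a manifest; this is a sketch, with <base64-encoded-docker-config> standing in for the base64 of a Docker config.json that holds your registry credentials:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-docker-config>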

Kubernetes - Passing in environment variable and service name (from DNS)

I can't seem to find an example of the correct syntax for inserting an environment variable along with the service name.
So I have a service defined as:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  selector:
    app: test
I then use a secrets file with the following:
apiVersion: v1
kind: Secret
metadata:
  name: test
  labels:
    app: test
data:
  password: fxxxxxxxxxxxxxxx787xx==
And just to confirm I'm using envFrom to set that password as an env variable:
apiVersion: v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: xxxxxxxxxxx
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: test
        - secretRef:
            name: test
        ports:
        - containerPort: 3000
Now in my config file I want to refer to that password as well as the service name itself - is this the correct way to do so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:${password}@test"
The YAML configuration does not work the way you provided in your example. If you want to set up Kubernetes with a complex configuration and use variables or dynamic assignment for some of them, you have to use an external parser to replace the variable placeholders. I use bash and sed to accomplish this. I changed your config a bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:VAR_PASSWORD@test"
After saving, I created a simple shell script containing the desired values:
#!/bin/sh
export PASSWORD="verysecretpwd"
cat deploy.yaml | sed "s/VAR_PASSWORD/$PASSWORD/g" | kubectl apply -f -
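A variant of the same idea, assuming you write the placeholder as ${PASSWORD} in deploy.yaml instead of VAR_PASSWORD: envsubst (from the gettext package) performs the substitution without a sed expression.
#!/bin/sh
# assumes deploy.yaml contains ${PASSWORD} as its placeholder
export PASSWORD="verysecretpwd"
envsubst < deploy.yaml | kubectl apply -f -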