Define unique IDs for wso2.carbon in deployment.yaml (Kubernetes deployment)

Requirement:
Need to assign a different ID to each pod via the id parameter in the wso2.carbon section of deployment.yaml,
i.e. wso2-am-analytics_1 and wso2-am-analytics_2.
Setup:
This is a Kubernetes deployment; the relevant manifests are below.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: wso2-am-analytics
      name: WSO2 API Manager Analytics Server
      ports:
        # port offset
        offset: 0
    # ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2apim-mnt-worker
  namespace: wso2
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      deployment: wso2apim-mnt-worker
  # ...

You don't necessarily have to have the IDs be wso2-am-analytics_1 and wso2-am-analytics_2; the value just has to be unique. Hence you can use something like the pod IP for this. If you are strict about the ID value, you can create the ConfigMap from a config file and add some logic to populate the file's id field appropriately. If you use Helm this would be pretty easy, as in the sketch below.
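A minimal Helm sketch of that idea (the template layout, the nodeCount value, and the per-node ConfigMap naming are assumptions, not an official WSO2 chart):

# templates/configmaps.yaml -- hypothetical template
{{- range $i := until (int .Values.nodeCount) }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  # renders wso2apim-mnt-worker-conf-1, wso2apim-mnt-worker-conf-2, ...
  name: wso2apim-mnt-worker-conf-{{ add $i 1 }}
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: wso2-am-analytics_{{ add $i 1 }}
      name: WSO2 API Manager Analytics Server
{{- end }}

With nodeCount: 2 in values.yaml this renders wso2-am-analytics_1 and wso2-am-analytics_2; each Deployment (or each member of a StatefulSet) then mounts its own ConfigMap.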
If you are OK with using some other unique value, you can do the following. WSO2 configs can read values from environment variables, so you can do something like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: ${NODE_ID}
      name: WSO2 API Manager Analytics Server
      ports:
        # port offset
        offset: 0

# Then, in the Deployment, pass the environment variable:
env:
  - name: NODE_ID
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
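The pod IP is just one option; any Downward API field that is unique per pod works. For instance, if the workload were a StatefulSet rather than a Deployment, the pod name gives stable, ordinal-suffixed IDs (a sketch, assuming a StatefulSet named wso2apim-mnt-worker):

env:
  - name: NODE_ID
    valueFrom:
      fieldRef:
        # resolves to wso2apim-mnt-worker-0, wso2apim-mnt-worker-1, ...
        fieldPath: metadata.name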

Related

Value of Kubernetes secret in environment variable seems incorrect

I'm deploying a test application onto Kubernetes on my local computer (minikube) and trying to pass database connection details into a deployment via environment variables.
I'm passing in these details using two methods: a ConfigMap and a Secret. The username (DB_USERNAME) and connection URL (DB_URL) are passed via a ConfigMap, while the DB password is passed in as a Secret (DB_PASSWORD).
My issue is that while the values passed via the ConfigMap are fine, the DB_PASSWORD from the Secret appears jumbled, as if there's some encoding issue.
My deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          envFrom:
            - configMapRef:
                name: gweb-cm
            - secretRef:
                name: password
My ConfigMap and Secret yaml
apiVersion: v1
data:
  DB_URL: jdbc:mysql://mysql/test?serverTimezone=UTC
  DB_USERNAME: webuser
  SPRING_PROFILES_ACTIVE: prod
  SPRING_DDL_AUTO: create
kind: ConfigMap
metadata:
  name: gweb-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Generic
data:
  DB_PASSWORD: test
Not sure if I'm missing something in my Secret definition?
The secret value should be base64 encoded. Instead of test, use the output of
echo -n 'test' | base64
P.S. the Secret's type should be Opaque, not Generic
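Putting both fixes together, the corrected Secret looks like this (dGVzdA== is the output of echo -n 'test' | base64; alternatively, the stringData field accepts the plain value and Kubernetes encodes it for you):

apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
data:
  DB_PASSWORD: dGVzdA==   # echo -n 'test' | base64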

Can we create a setter containing a list of objects using kpt?

I've noticed that we can create a setter containing a list of strings based on the kpt documentation [1]. Then I found out that a complex setter containing a list of objects is not supported, based on this GitHub issue [2]. Since the issue itself mentions that this should be supported in kpt functions, can we use it with the current kpt function version?
[1]: Kpt Apply Setters. https://catalog.kpt.dev/apply-setters/v0.1/
[2]: Setters for list of objects. https://github.com/GoogleContainerTools/kpt/issues/1533
I've discussed this a bit with my coworkers, and it turned out this is possible with the following setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 4 # kpt-set: ${nginx-replicas}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: "nginx:1.16.1" # kpt-set: nginx:${tag}
          ports:
            - protocol: TCP
              containerPort: 80
---
apiVersion: v1
kind: MyKind
metadata:
  name: foo
environments: # kpt-set: ${env}
  - dev
  - stage
---
apiVersion: v1
kind: MyKind
metadata:
  name: bar
environments: # kpt-set: ${nested-env}
  - key: some-key
    value: some-value
After that we can define the following setters:
apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
data:
  env: |-
    - prod
    - dev
  nested-env: |-
    - key: some-other-key
      value: some-other-value
  nginx-replicas: "3"
  tag: 1.16.2
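For kpt fn render to apply these setters, the package's Kptfile also has to wire the apply-setters function to that ConfigMap; a minimal sketch (the setters.yaml file name and the function version tag are assumptions):

apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: apply-setters-simple
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.2
      configPath: setters.yaml  # the setters ConfigMap above, stored in the package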
And then we can call the following command:
$ kpt fn render apply-setters-simple
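Assuming the setters above, the render rewrites the annotated fields in place; the bar resource, for instance, should come out as:

apiVersion: v1
kind: MyKind
metadata:
  name: bar
environments: # kpt-set: ${nested-env}
  - key: some-other-key
    value: some-other-value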
I've sent a Pull Request to the repository to add documentation about this.

Kubernetes pod-level configuration externalization in a Spring Boot app

I need some help from the community; I'm still new to K8s and Spring Boot. Thanks all in advance.
What I need is to have 4 K8s pods running, each with a slightly different configuration. For example, I have a property in one of my Java classes called regions that extracts its value from application.yml, like:
@Value("${regions}")
private String regions;
Now, after deploying to the K8s environment, I want to have 4 pods running (I can configure that in the Helm file), and in each pod the regions field should have a different value.
Is this achievable? Can anyone please give any advice?
If you want to run 4 pods with different configurations, you have to create 4 different Deployments in Kubernetes.
You can create a different ConfigMap for each as needed, storing either the whole application.yaml file or individual environment variables, and inject it into the corresponding Deployment.
Here is how to store the whole application.yaml inside a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-first
data:
  application.yaml: |-
    data: test,
    region: first-region
In the same way, you can create the ConfigMap for the second deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-second
data:
  application.yaml: |-
    data: test,
    region: second-region
You can inject this ConfigMap into each Deployment.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-app
  name: hello-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-app
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            # subPath mounts just the single file from the ConfigMap volume
            - mountPath: /etc/nginx/app.yaml
              subPath: application.yaml
              name: yaml-file
              readOnly: true
      volumes:
        - configMap:
            name: yaml-region-second
            optional: false
          name: yaml-file
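One caveat for the Spring Boot case (nginx above is just a stand-in image): the app still has to be told to read the mounted file. A hedged option, assuming the mount path used above, is Spring Boot's standard config-location override:

env:
  - name: SPRING_CONFIG_ADDITIONAL_LOCATION
    # Spring Boot merges this file over the application.yml packaged in the jar
    value: /etc/nginx/app.yaml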
Accordingly, you can also create a Helm chart.
If you just want to pass a single environment variable instead of storing the whole file inside the ConfigMap, you can add the value to the Deployment directly.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
    - name: env-print-demo
      image: bash
      env:
        - name: REGION
          value: "one"
        - name: HONORIFIC
          value: "The Most Honorable"
        - name: NAME
          value: "Kubernetes"
      command: ["echo"]
      args: ["$(REGION) $(HONORIFIC) $(NAME)"]
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
For each deployment the environment will be different, and with Helm you can also update or override it dynamically from the CLI.
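A sketch of the Helm side (the values layout and template fragment are hypothetical):

# values.yaml
region: one

# templates/deployment.yaml (fragment)
env:
  - name: REGION
    value: {{ .Values.region | quote }}

Each of the four releases can then override it at install time, e.g. helm install app-two ./chart --set region=two.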

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using configmaps.
The configuration file has the following key and we are trying to replace the value of the "ChildKey" without replacing the entire file -
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The configmap looks like -
apiVersion: v1
data:
  ParentKey: |
    ChildKey: 456
kind: ConfigMap
metadata:
  name: cf
And in the deployment, the configmap is used like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
        - env:
            - name: ParentKey
              valueFrom:
                configMapKeyRef:
                  key: ParentKey
                  name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner -
The configmap carries a simpler structure - only the child element -
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the deployment, the environment variable key refers to the child key like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
        - env:
            - name: ParentKey__ChildKey
              valueFrom:
                configMapKeyRef:
                  key: ChildKey
                  name: cf
Posting this for reference.
Use a double underscore (__) for nested environment variables and arrays; this is the standard convention for mapping hierarchical configuration keys to environment variable names (for example, in .NET configuration binding).
To avoid explicit environment variables and typing names twice, you can use envFrom
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
  - name: $(name)
    image: $(image)
    envFrom:
      - configMapRef:
          name: common-config
      - configMapRef:
          name: specific-config
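To confirm the value actually reaches the container, you can print it from a running pod (the pod name is a placeholder):

kubectl exec <pod-name> -- printenv ParentKey__ChildKey
# expected output: 456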

Kubernetes - Passing in environment variable and service name (from DNS)

I can't seem to find an example of the correct syntax for inserting an environment variable along with the service name.
So I have a service defined as:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: test
I then use a secrets file with the following:
apiVersion: v1
kind: Secret
metadata:
  name: test
  labels:
    app: test
data:
  password: fxxxxxxxxxxxxxxx787xx==
And just to confirm I'm using envFrom to set that password as an env variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: xxxxxxxxxxx
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: test
            - secretRef:
                name: test
          ports:
            - containerPort: 3000
Now in my config file I want to refer to that password as well as the service name itself - is this the correct way to do so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:${password}#test"
The YAML configuration does not work the way you provided in your example. If you want to set up Kubernetes with a complex configuration and use variables or dynamic assignment for some of them, you have to use an external parser to replace the variable placeholders. I use bash and sed to accomplish this. I changed your config a bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:VAR_PASSWORD#test"
After saving, I created a simple shell script containing desired values.
#!/bin/sh
export PASSWORD="verysecretpwd"
cat deploy.yaml | sed "s/VAR_PASSWORD/$PASSWORD/g" | kubectl apply -f -
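As an alternative that avoids external templating entirely: Kubernetes itself expands $(VAR) references in an env entry's value, as long as VAR is defined earlier in the same env list, so the URI can be composed in the Deployment rather than in the ConfigMap (and the Service is reachable in-cluster simply by its DNS name, test). A sketch under that assumption:

env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: test
        key: password
  - name: WORKING_URI
    # $(PASSWORD) is expanded by Kubernetes because PASSWORD is defined above
    value: "http://somedomain:$(PASSWORD)#test"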