Can we create a setter containing a list of objects using kpt?

I've noticed that we can create a setter containing a list of strings based on the kpt [documentation][1]. Then I found out that a complex setter containing a list of objects is not supported, based on [this GitHub issue][2]. Since the issue itself mentions that this should be supported as a kpt function, can we use it with the current kpt function version?
[1]: Kpt Apply Setters. https://catalog.kpt.dev/apply-setters/v0.1/
[2]: Setters for list of objects. https://github.com/GoogleContainerTools/kpt/issues/1533

I've discussed this a bit with my coworkers, and it turned out this is possible with the following setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 4 # kpt-set: ${nginx-replicas}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: "nginx:1.16.1" # kpt-set: nginx:${tag}
        ports:
        - protocol: TCP
          containerPort: 80
---
apiVersion: v1
kind: MyKind
metadata:
  name: foo
environments: # kpt-set: ${env}
- dev
- stage
---
apiVersion: v1
kind: MyKind
metadata:
  name: bar
environments: # kpt-set: ${nested-env}
- key: some-key
  value: some-value
After that we can define the following setters:
apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
data:
  env: |-
    - prod
    - dev
  nested-env: |-
    - key: some-other-key
      value: some-other-value
  nginx-replicas: "3"
  tag: 1.16.2
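For kpt fn render to pick these values up, the package also needs a Kptfile whose pipeline wires the apply-setters function to that ConfigMap. A minimal sketch, assuming the ConfigMap above is saved as setters.yaml in the package root and the function version from the linked docs is used:
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: apply-setters-simple
pipeline:
  mutators:
    # apply-setters reads the setter values from the referenced ConfigMap file
    - image: gcr.io/kpt-fn/apply-setters:v0.1
      configPath: setters.yaml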
And then we can call the following command:
$ kpt fn render apply-setters-simple
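After rendering, the list-of-objects setter is expanded in place, so the second MyKind resource should end up roughly like this (trimmed to the affected resource):
apiVersion: v1
kind: MyKind
metadata:
  name: bar
environments: # kpt-set: ${nested-env}
- key: some-other-key
  value: some-other-value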
I've sent a pull request to the repository to add documentation about this.

Related

How to use Kustomize to configure Traefik 2.x IngressRoute (metadata.name, spec.routes[0].services[0].name & spec.routes[0].match = Host() )

We have an EKS cluster running with Traefik deployed in CRD style (full setup on GitHub) and want to deploy our app https://gitlab.com/jonashackt/microservice-api-spring-boot with the Kubernetes objects Deployment, Service and IngressRoute (see the configuration repository here). The manifests look like this:
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
      branch: main
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
        branch: main
    spec:
      containers:
        - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot:c25a74c8f919a72e3f00928917dc4ab2944ab061
          name: microservice-api-spring-boot
          ports:
            - containerPort: 8098
      imagePullSecrets:
        - name: gitlab-container-registry
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: microservice-api-spring-boot
spec:
  ports:
    - port: 80
      targetPort: 8098
  selector:
    app: microservice-api-spring-boot
    branch: main
traefik-ingress-route.yml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: microservice-api-spring-boot-ingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`microservice-api-spring-boot-BRANCHNAME.tekton-argocd.de`)
      kind: Rule
      services:
        - name: microservice-api-spring-boot
          port: 80
We already use Kustomize, especially the kustomize CLI (on a Mac or in GitHub Actions, installed with brew install kustomize), with the following folder structure:
├── deployment.yml
├── kustomization.yaml
├── service.yml
└── traefik-ingress-route.yml
Our kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
  - service.yml
  - traefik-ingress-route.yml
images:
  - name: registry.gitlab.com/jonashackt/microservice-api-spring-boot
    newTag: foobar
commonLabels:
  branch: foobar
nameSuffix: foobar
Now, dynamically adding a suffix to the Deployment's, Service's and IngressRoute's .metadata.name from within our GitHub Actions workflow is easy with the kustomize CLI (because we want the suffix to start with a -, we need the -- -barfoo syntax here):
kustomize edit set namesuffix -- -barfoo
Check the result with
kustomize build .
Also changing the .spec.selector.matchLabels.branch, .spec.template.metadata.labels.branch and .spec.selector.branch in the Deployment and Service is no problem:
kustomize edit set label branch:barfoo
Changing the .spec.template.spec.containers[0].image of our Deployment works with:
kustomize edit set image registry.gitlab.com/jonashackt/microservice-api-spring-boot:barfoo
But looking into our IngressRoute it seems that .spec.routes[0].services[0].name and .spec.routes[0].match = Host() can't be changed with Kustomize out of the box?! So how can we change both fields without needing additional tooling like yq or even sed/envsubst?
1. Change the IngressRoute's .spec.routes[0].services[0].name with Kustomize
Changing the IngressRoute's .spec.routes[0].services[0].name is possible with Kustomize using a NameReference transformer (see docs here) - luckily I found inspiration in this issue. Therefore we need to include the configurations keyword in our kustomization.yaml:
nameSuffix: foobar
configurations:
  # Tie the target Service's metadata.name to the IngressRoute's spec.routes.services.name.
  # Once the Service name is changed, the referred service name in the IngressRoute will be changed as well.
  - nameReference.yml
We also need to add a file called nameReference.yml:
nameReference:
  - kind: Service
    fieldSpecs:
      - kind: IngressRoute
        path: spec/routes/services/name
As you can see, we tie the Service's name to the IngressRoute's spec/routes/services/name. Now running
kustomize edit set namesuffix -- -barfoo
will not only change the metadata.name of the Deployment, Service and IngressRoute, but also the .spec.routes[0].services[0].name of the IngressRoute, since it is now linked to the metadata.name of the Service. Note that this only works if both the referrer and the target have a name tag.
2. Change a part of the IngressRoute's .spec.routes[0].match = Host()
The second part of the question asks how to change a part of the IngressRoute's .spec.routes[0].match = Host(). There's an open issue in the Kustomize GitHub project; right now Kustomize doesn't support this use case out of the box, short of writing a custom generator plugin for Kustomize. As this might not be a preferred option, there's another way, inspired by this blog post: since we can create YAML files inline in our console using the cat > ./myyamlfile.yml <<EOF ... EOF syntax, we can also use inline variable substitution.
So first define the branch name as a variable:
RULE_HOST_BRANCHNAME=foobar
And then use the described syntax to create an ingressroute-patch.yml file inline:
cat > ./ingressroute-patch.yml <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: microservice-api-spring-boot-ingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(\`microservice-api-spring-boot-$RULE_HOST_BRANCHNAME.tekton-argocd.de\`)
      kind: Rule
      services:
        - name: microservice-api-spring-boot
          port: 80
EOF
The last step is to use the ingressroute-patch.yml file as patchesStrategicMerge inside our kustomization.yaml like this:
patchesStrategicMerge:
  - ingressroute-patch.yml
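For reference, after the kustomize edit calls above, the assembled kustomization.yaml ties all of the pieces together and could look roughly like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
  - service.yml
  - traefik-ingress-route.yml
images:
  - name: registry.gitlab.com/jonashackt/microservice-api-spring-boot
    newTag: barfoo
commonLabels:
  branch: barfoo
nameSuffix: -barfoo
configurations:
  - nameReference.yml
patchesStrategicMerge:
  - ingressroute-patch.yml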
Now running kustomize build . should output the correct Deployment, Service and IngressRoute for our setup:
apiVersion: v1
kind: Service
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-barfoo
spec:
  ports:
  - port: 80
    targetPort: 8098
  selector:
    app: microservice-api-spring-boot
    branch: barfoo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-barfoo
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
      branch: barfoo
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
        branch: barfoo
    spec:
      containers:
      - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot:barfoo
        name: microservice-api-spring-boot
        ports:
        - containerPort: 8098
      imagePullSecrets:
      - name: gitlab-container-registry
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-ingressroute-barfoo
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`microservice-api-spring-boot-barfoo.tekton-argocd.de`)
    services:
    - name: microservice-api-spring-boot-barfoo
      port: 80
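Putting it together, the relevant step in the GitHub Actions workflow boils down to something like this sketch (BRANCH_NAME is assumed to come from the workflow context; the heredoc is the one shown above):
#!/bin/sh
# Wire the kustomize edit commands from above into one workflow step
BRANCH_NAME=barfoo
RULE_HOST_BRANCHNAME=$BRANCH_NAME

kustomize edit set namesuffix -- -$BRANCH_NAME
kustomize edit set label branch:$BRANCH_NAME
kustomize edit set image registry.gitlab.com/jonashackt/microservice-api-spring-boot:$BRANCH_NAME

# (re)create ingressroute-patch.yml via the heredoc shown above, then check the result:
kustomize build .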

Kubernetes: Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container

I am doing my first deployment in Kubernetes. I've hosted my API in my namespace and it's up and running, so I tried to connect the API to MongoDB and added my database details in a ConfigMap via Rancher.
I tried to reference the ConfigMap in my deployment YAML file but got an error stating unknown field "ConfigMapref".
Below is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec
  replicas: 2
  selector:
    matchLables:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: always
        ports:
        - containerPort: 80
        configMapRef:
        - name: myfirstprojectdb # This is the name of the config map created via rancher
myfirstprojectdb ConfigMap will store all the details like the database name, username, password, etc.
On executing the pipeline I get the below error.
How do I need to reference my ConfigMap in the deployment YAML?
Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container
There are some more typos (e.g. the missing : after spec, and always should be capitalized as Always). Also, indentation should be consistent in the whole YAML file - see yaml indentation and separation.
I corrected your YAML so that it passes the API server's validation and added the ConfigMap reference (assuming it contains environment variables):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myfirstprojectdb
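For completeness, the ConfigMap referenced by envFrom could look something like this (the key names and values are hypothetical placeholders; each key becomes an environment variable in the container):
apiVersion: v1
kind: ConfigMap
metadata:
  name: myfirstprojectdb
  namespace: Owncloud
data:
  DB_HOST: mongodb.default.svc.cluster.local  # hypothetical value
  DB_NAME: myfirstprojectdb                   # hypothetical value
  DB_USER: apiuser                            # hypothetical value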
Useful link:
Configure all key-value pairs in a ConfigMap as container environment variables, which is related to this question.

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using configmaps.
The configuration file has the following key and we are trying to replace the value of the "ChildKey" without replacing the entire file -
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The configmap looks like -
apiVersion: v1
data:
ParentKey: |
ChildKey: 456
kind: ConfigMap
name: cf
And in the deployment, the configmap is used like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
template:
metadata:
creationTimestamp: null
labels:
app: abc
spec:
containers:
- env:
- name: ParentKey
valueFrom:
configMapKeyRef:
key: ParentKey
name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner -
The configmap carries a simpler structure - only the child element -
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the deployment, the environment variable key refers to the child key like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey__ChildKey
          valueFrom:
            configMapKeyRef:
              key: ChildKey
              name: cf
Posting this for reference.
Use the double underscore for nested environment variables and arrays, as explained here.
To avoid explicit environment variables and typing names twice, you can use envFrom:
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
  - name: $(name)
    image: $(image)
    envFrom:
      - configMapRef:
          name: common-config
      - configMapRef:
          name: specific-config
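To check that the nested value actually arrives in the container, something like the following can be used once the pod is running (the pod name is a placeholder):
# print the double-underscore variable from inside the container
kubectl exec <pod-name> -- printenv ParentKey__ChildKey
# expected output: 456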

Kubernetes - Passing in environment variable and service name (from DNS)

I can't seem to find an example of the correct syntax for inserting an environment variable along with the service name:
So I have a service defined as:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: test
I then use a secrets file with the following:
apiVersion: v1
kind: Secret
metadata:
  name: test
  labels:
    app: test
data:
  password: fxxxxxxxxxxxxxxx787xx==
And just to confirm I'm using envFrom to set that password as an env variable:
apiVersion: v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: xxxxxxxxxxx
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: test
            - secretRef:
                name: test
          ports:
            - containerPort: 3000
Now in my config file I want to refer to that password as well as the service name itself - is this the correct way to do so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:${password}#test"
The YAML configuration does not work the way you provided in your example. If you want to set up Kubernetes with a complex configuration and use variables or dynamic assignment for some of them, you have to use an external parser to replace the variable placeholders. I use bash and sed to accomplish this. I changed your config a bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:VAR_PASSWORD#test"
After saving, I created a simple shell script containing the desired values.
#!/bin/sh
export PASSWORD="verysecretpwd"
cat deploy.yaml | sed "s/VAR_PASSWORD/$PASSWORD/g" | kubectl apply -f -
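If you want to inspect the rendered manifest before piping it to kubectl, the substitution step can be run on its own (same file name as in the script above):
export PASSWORD="verysecretpwd"
# print the manifest with the placeholder replaced, without applying it
sed "s/VAR_PASSWORD/$PASSWORD/g" deploy.yaml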

How to set environment variables in CoreOS

I need to set an environment variable on a Kubernetes worker, which is a CoreOS system.
I have tried using export and declare, but it keeps reading each argument as a separate command.
Don't set variables in the command field; take a look at the env field instead.
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: api
  name: api
spec:
  replicas: 1
  selector:
    name: api
  template:
    metadata:
      labels:
        name: api
    spec:
      containers:
        - env:
            - name: VARIABLE             # <-- declare an env variable NAME
              value: "value-of-variable"  # <-- here is the value
            - name: ANOTHER_VARIABLE
              value: "another-value"
          image: myregistry/api
          imagePullPolicy: Always
          name: api
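If the value is also needed inside the container's command or args, Kubernetes expands $(VAR_NAME) references to previously defined env entries, so a sketch like the following (container fields only, names reused from above) avoids any shell export tricks:
containers:
  - name: api
    image: myregistry/api
    env:
      - name: VARIABLE
        value: "value-of-variable"
    # $(VARIABLE) is expanded by the kubelet before the command runs
    command: ["/bin/echo", "$(VARIABLE)"]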