How can I add a dependency between objects in a K8S configuration file? - kubernetes

I have the below k8s configuration YAML file, but when I run kubectl apply it gives me the error namespaces "aws-observability" not found.
I understand that the aws-observability namespace has not been created yet at the point the ConfigMap is deployed.
It can be solved by splitting this config into two files and deploying the namespace first, then the ConfigMap. But I'd like to put them in one file and deploy them in one go. How can I add a dependency between these two configurations?
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
  labels:
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match *
        region <ap-southeast-2>
        log_group_name elk-fluent-bit-cloudwatch
        log_stream_prefix from-elk-fluent-bit-
        auto_create_group true

You should add a separator (---) between the two documents. I have tested the YAML below on my machine and it's working as expected:
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
  labels:
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match *
        region <ap-southeast-2>
        log_group_name elk-fluent-bit-cloudwatch
        log_stream_prefix from-elk-fluent-bit-
        auto_create_group true
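kubectl apply processes the documents of a multi-document file in the order they appear, so listing the Namespace first ensures it exists before the ConfigMap that references it is created. Assuming the combined manifest is saved as aws-observability.yaml (a file name chosen here for illustration), applying it should create both objects in one go:
$ kubectl apply -f aws-observability.yaml
namespace/aws-observability created
configmap/aws-logging created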

Related

Delete successful jobs with Ansible kubernetes.core.k8s module?

I'm trying to delete successful Kubernetes Jobs with the Ansible kubernetes.core.k8s module.
Job:
apiVersion: v1
kind: Pod
metadata:
  name: helm-install-traefik-crd-n2gbz
  generateName: helm-install-traefik-crd-
  namespace: kube-system
  uid: 8615f527-e6fa-4d48-af5a-8b087d6d229a
  resourceVersion: '2218'
  creationTimestamp: '2023-02-14T02:35:30Z'
  labels:
    controller-uid: 032b353d-24d7-4e8c-a5a6-f77bbf949a36
    helmcharts.helm.cattle.io/chart: traefik-crd
    job-name: helm-install-traefik-crd
  ownerReferences:
    - apiVersion: batch/v1
      kind: Job
      name: helm-install-traefik-crd
      uid: 032b353d-24d7-4e8c-a5a6-f77bbf949a36
      controller: true
      blockOwnerDeletion: true
There are multiple jobs to be deleted, each with different pod names, so I tried:
- name: Get pod info
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Pod
    label_selectors:
      - helmcharts.helm.cattle.io/chart: traefik-crd
      - job-name: helm-install-traefik-crd
    namespace: kube-system
What is the correct format for label_selectors? I could not find any documentation examples.
Ideally, I would like to use kubernetes.core.k8s_info and get the pod names with label_selectors, then use that list of pod names with kubernetes.core.k8s to delete them.
Label and label value should be separated by =:
kind: Pod
label_selectors:
  - helmcharts.helm.cattle.io/chart = traefik-crd
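Putting the two pieces together, here is a sketch of the full flow the question asks for: collect the pods with kubernetes.core.k8s_info, then delete each one with kubernetes.core.k8s. The play name and the job_pods register variable are illustrative choices, not from the original post:
- name: Clean up pods left behind by completed helm-install jobs
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get pod info
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Pod
        namespace: kube-system
        # each selector entry is a single "key=value" string
        label_selectors:
          - helmcharts.helm.cattle.io/chart=traefik-crd
          - job-name=helm-install-traefik-crd
      register: job_pods

    - name: Delete the matching pods
      kubernetes.core.k8s:
        api_version: v1
        kind: Pod
        namespace: kube-system
        name: "{{ item.metadata.name }}"
        state: absent
      loop: "{{ job_pods.resources }}"
      loop_control:
        label: "{{ item.metadata.name }}"
The k8s_info result exposes the matching objects under resources, so the delete task can loop directly over them.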

Use Kustomize to create additional namespaces with a suffix

I'm currently battling an issue with kustomize and not having much luck.
I have my config set up and am using kustomize (v4.5.7) with separate base, variants and environment configuration. I'm trying to use this setup to deploy a copy of my dev environment onto the same cluster, using different namespaces and a suffix.
The idea is that everything would be deployed with a suffix added to its name (I got this working, but it only changes the names, not the namespaces) and dropped into separate namespaces carrying the same suffix.
I'm currently defining all the namespaces with the following config:
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
Now I want to be able to deploy copies of the namespace named mynamespace-mysuffix.
I've managed to implement a suffix for the names of the objects, alongside a PrefixSuffixTransformer that updates the namespace set in the created objects to mynamespace-mysuffix.
Unfortunately this doesn't update the Namespace object itself and leaves it intact. In short, it tries to deploy the objects into namespaces that do not exist.
This is the working PrefixSuffixTransformer amending the namespace set in the various objects:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customSuffixer
suffix: "-mysuffix"
fieldSpecs:
  - path: metadata/name
  - path: metadata/namespace
I'm trying, so far unsuccessfully, to target the Namespace objects with the following additional PrefixSuffixTransformer:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: nsSuffixer
suffix: "-mysuffix"
fieldSpecs:
  - kind: Namespace
    path: metadata/name
I was hoping this last part would work, but no success. Does anyone have any suggestions as to how I can get the additional namespaces created with a suffix?
If I understand your question correctly, the solution is just to add a namespace: declaration to the kustomization.yaml file in your variants.
For example, if I have a base directory that contains:
namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
spec: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: example
resources:
  - namespace.yaml
  - service.yaml
And I create a variant in overlays/example, with this kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
nameSuffix: -mysuffix
Then running kustomize build overlays/example results in:
apiVersion: v1
kind: Namespace
metadata:
  name: example
spec: {}
---
apiVersion: v1
kind: Service
metadata:
  name: example-mysuffix
  namespace: example
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
As you have described in your question, the Namespace resource wasn't renamed by the nameSuffix configuration. But if I simply add a namespace: declaration to the kustomization.yaml, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: example-mysuffix
resources:
  - ../../base
nameSuffix: -mysuffix
Then I get the desired output:
apiVersion: v1
kind: Namespace
metadata:
  name: example-mysuffix
spec: {}
---
apiVersion: v1
kind: Service
metadata:
  name: example-mysuffix
  namespace: example-mysuffix
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
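To apply the overlay, either the standalone kustomize binary or kubectl's built-in kustomize support should work (paths assume the layout above):
$ kustomize build overlays/example | kubectl apply -f -
or, using kubectl's built-in kustomize support:
$ kubectl apply -k overlays/example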

How to put a Deployment, Ingress and Service in one YAML file

I want to make a YAML file with a Deployment, Ingress, and Service (and maybe a ClusterIssuer, Issuer and certificate) in one file. How can I do that? I tried
kubectl apply -f (name_file.yaml)
You can do it with three dashes (---) in your YAML file, like this:
apiVersion: v1
kind: Service
metadata:
  name: mock
spec:
  ...
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mock
spec:
Source : https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a
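Since the question specifically mentions a Deployment, Service and Ingress, here is a minimal sketch of all three in a single file separated by ---. The names, image and host (my-app, nginx:1.25, my-app.example.com) are placeholders, not taken from the question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
Applying the file with kubectl apply -f creates all three objects in one go; a cert-manager Issuer or ClusterIssuer could be appended behind another --- separator in the same way.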

Create a namespace with resource quota in one go

Is there a way to combine a namespace creation with a resource quota in one go?
I'm looking for something like:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
  quota: {"cpu": "400m", "memory": "1Gi"}
You can combine different documents in the same YAML file using three dashes (---) as a separator.
For your example it would look like:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu
  namespace: custom-namespace
spec:
  hard:
    limits.cpu: "400m"
    limits.memory: 1Gi
You can then apply the file or pipe it from stdin.
$ kubectl apply -f temp.yaml
namespace/custom-namespace created
resourcequota/cpu created
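To double-check that the quota is now attached to the new namespace, something like this should do:
$ kubectl describe resourcequota cpu -n custom-namespace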

How to correctly update apiVersion of manifests prior to cluster upgrade?

So I updated the manifest and replaced apiVersion: extensions/v1beta1 with apiVersion: apps/v1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secretmanager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: secretmanager
  template:
    metadata:
      labels:
        app: secretmanager
    spec:
      ...
I then applied the change
k apply -f deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/secretmanager configured
I also tried
k replace --force -f deployment.yaml
That recreated the Pod (downtime :( ), but if I output the YAML config of the deployment I still see the old value:
k get deployments -n kube-system secretmanager -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"secretmanager","namespace":"kube-system"}....}
  creationTimestamp: "2020-08-21T21:43:21Z"
  generation: 2
  name: secretmanager
  namespace: kube-system
  resourceVersion: "99352965"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/secretmanager
  uid: 3d49aeb5-08a0-47c8-aac8-78da98d4c342
spec:
So I still see apiVersion: extensions/v1beta1. What am I doing wrong?
I am preparing an EKS Kubernetes v1.15 cluster to be migrated over to v1.16.
Deployment exists in multiple API groups, so the bare resource name is ambiguous. Specify the group and version explicitly, e.g. apps/v1, with:
kubectl get deployments.v1.apps
and you should see your Deployment served with the apps/v1 apiVersion.
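The underlying object is stored only once; the API server converts it and serves it under every supported group/version, so the apiVersion you see simply reflects which endpoint kubectl queried. Using the fully qualified resource.version.group form from above, the output should start with the new apiVersion:
$ kubectl get deployments.v1.apps -n kube-system secretmanager -o yaml
apiVersion: apps/v1
kind: Deployment
...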