How to debug "unknown field" error in Kubernetes?

This might be a rookie question; I am not well versed in Kubernetes. I added this to my deployment.yaml:
ad.datadoghq.com/helm-chart.check_names: |
  ["openmetrics"]
ad.datadoghq.com/helm-chart.init_configs: |
  [{}]
ad.datadoghq.com/helm-chart.instances: |
  [
    {
      "prometheus_url": "http://%%host%%:7071/metrics",
      "namespace": "custom-metrics",
      "metrics": [ "jvm*" ]
    }
  ]
But I get this error
error validating data: [ValidationError(Deployment.spec.template.metadata): unknown field "ad.datadoghq.com/helm-chart.check_names" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta, ValidationError(Deployment.spec.template.metadata)
What does this error mean? Does it mean that I need to define ad.datadoghq.com/helm-chart.check_names somewhere? If so, where?

You are probably adding this in the wrong place. According to your error message, you are adding these keys directly under Deployment.spec.template.metadata.
You can check the official Helm deployment template and this example in the documentation: values such as ad.datadoghq.com/helm-chart.check_names are annotations, so they need to be defined under Deployment.spec.template.metadata.annotations:
annotations: a map of string keys and values that can be used by
external tooling to store and retrieve arbitrary metadata about this
object
(see the annotations docs)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datadog-cluster-agent
  namespace: default
spec:
  selector:
    matchLabels:
      app: datadog-cluster-agent
  template:
    metadata:                # <- not directly under 'metadata'
      labels:
        app: datadog-cluster-agent
        name: datadog-agent
      annotations:           # <- add here
        ad.datadoghq.com/datadog-cluster-agent.check_names: '["prometheus"]'
        ad.datadoghq.com/datadog-cluster-agent.init_configs: '[{}]'
        ad.datadoghq.com/datadog-cluster-agent.instances: '[{"prometheus_url": "http://%%host%%:5000/metrics","namespace": "datadog.cluster_agent","metrics": ["go_goroutines","go_memstats_*","process_*","api_requests","datadog_requests","external_metrics", "cluster_checks_*"]}]'
    spec:
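Applied to your own snippet, the annotations would go under the pod template's annotations block. A rough sketch (the container name helm-chart mirrors the annotation prefix, since Datadog Autodiscovery matches that prefix against the container name; the image is a placeholder):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-chart
spec:
  selector:
    matchLabels:
      app: helm-chart
  template:
    metadata:
      labels:
        app: helm-chart
      annotations:                              # <- your Datadog keys belong here
        ad.datadoghq.com/helm-chart.check_names: |
          ["openmetrics"]
        ad.datadoghq.com/helm-chart.init_configs: |
          [{}]
        ad.datadoghq.com/helm-chart.instances: |
          [
            {
              "prometheus_url": "http://%%host%%:7071/metrics",
              "namespace": "custom-metrics",
              "metrics": [ "jvm*" ]
            }
          ]
    spec:
      containers:
        - name: helm-chart                      # placeholder, must match the annotation prefix
          image: my-registry/my-app:latest      # placeholder image
          ports:
            - containerPort: 7071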

Related

How to reuse variables in a kubernetes yaml?

I have a number of repeated values in my Kubernetes YAML file, and I am wondering if there is a way to store variables somewhere in the file, ideally at the top, that I can reuse further down,
sort of like
variables:
  - appName: &appname myapp
  - buildNumber: &buildno 1.0.23
that I can reuse further down like
labels:
  app: *appname
  tags.datadoghq.com/version: *buildno
containers:
  - name: *appname
    ...
    image: 123456.com:*buildno
if that is possible.
I know anchors are a thing in YAML; I just couldn't find anything on setting variables.
You can't do this in plain Kubernetes manifests, because you would need a preprocessor to manipulate the YAML files. You can, however, share anchors within the same YAML manifest, like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: &cmname myconfig
  namespace: &namespace default
  labels:
    name: *cmname
    deployedInNamespace: *namespace
data:
  config.yaml: |
    [myconfig]
    example_field=1
This will result in:
apiVersion: v1
data:
  config.yaml: |
    [myconfig]
    example_field=1
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-25T10:06:27Z"
  labels:
    deployedInNamespace: default
    name: myconfig
  name: myconfig
  namespace: default
  resourceVersion: "147712"
  uid: 4039cea4-1e64-4d1a-bdff-910d5ff2a485
As you can see, the labels name and deployedInNamespace have the values resulting from the anchor evaluation.
Based on your use case description, what you actually need is to go the Helm chart route and template your manifests. You can then leverage helper functions and easily customize these fields. In my experience, when you have a use case like this, Helm is the way to go, because it lets you customize everything in your manifests when you later decide to change something.
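For illustration, a minimal Helm sketch of what that could look like, assuming made-up values.yaml keys appName and buildNumber:
# values.yaml
appName: myapp
buildNumber: "1.0.23"

# templates/deployment.yaml (excerpt)
metadata:
  labels:
    app: {{ .Values.appName }}
    tags.datadoghq.com/version: {{ .Values.buildNumber | quote }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "123456.com:{{ .Values.buildNumber }}"
Every place that needs the app name or build number then reads it from the same values.yaml entry.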
I guess there is a similar question with an answer. Please check below:
How to reuse an environment variable in a YAML file?

Helm lookup function to dynamically fetch a value from a ConfigMap

I am attempting to use the Helm lookup function to dynamically look up a key ORGANIZATION_NAME from a ConfigMap and use that value.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-beat
  labels:
    app: celery-beat
    tags.datadoghq.com/env: {{ (lookup "v1" "configmap" "default" "api-env").items.ORGANIZATION_NAME | quote }}
...
But I am getting the error:
Error: UPGRADE FAILED: template: celery-beat/templates/deployment.yaml:9:66: executing "celery-beat/templates/deployment.yaml" at <"api-env">: nil pointer evaluating interface {}.ORGANIZATION_NAME
The .items key is only populated when the lookup returns a list of resources (for example, when the resource name is left empty); when you look up a single named resource, the object itself is returned.
Since you are looking up the single ConfigMap named api-env in the default namespace, you can access the data you want directly:
(lookup "v1" "configmap" "default" "api-env").data.ORGANIZATION_NAME

Why doesn't the label "version" appear in running pod json?

kubectl get pod pod_name -n namespace_name -o json shows:
"labels": {
"aadpodidbinding": "sa-customerxyz-uat-msi",
"app": "cloudsitemanager",
"customer": "customerxyz",
"istio.io/rev": "default",
"pod-template-hash": "b87d9fcbf",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "cloudsitemanager",
"service.istio.io/canonical-revision": "latest"
}
I am deploying with the following manifest yaml snippet:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsitemanager
  labels:
    app: cloudsitemanager
    customer: customerxyz
    version: 0.1.0-beta.201
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsitemanager
      customer: customerxyz
  template:
    metadata:
      labels:
        app: cloudsitemanager
        customer: customerxyz
        version: 0.1.0-beta.201
        aadpodidbinding: sa-customerxyz-uat-msi
I expect to see 4 custom labels in the running pod manifest: app, customer, version, aadpodidbinding. However, I only see 3 of the custom labels. The label "version" does not show.
I had the same issue running Istio + Kiali, where Kiali was not showing the version. I tried adding the "version" label under the spec of the Deployment, but it didn't work. After adding the "version" label to the pod template's labels (.spec.template.metadata.labels), the newly created pods picked it up and Kiali now shows the version number instead of "latest".
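To double-check which labels actually ended up on the Deployment's pod template versus the running pods, something like this helps (a sketch; the namespace and label selector are placeholders):
kubectl get pods -n namespace_name -l app=cloudsitemanager --show-labels
kubectl get deployment cloudsitemanager -n namespace_name -o jsonpath='{.spec.template.metadata.labels}'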

configmap is not working for service and loadBalancerIP

I'm using the following Kubernetes API. The ConfigMap is not working with the Service and load balancer.
Here is the code:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: Resource_group
    valueFrom:
      configMapKeyRef:
        name: app-configmap
        key: Resource_group
  name: appliance-ui
spec:
  loadBalancerIP: Static_public_ip
  valueFrom:
    configMapKeyRef:
      name: app-configmap
      key: Static_public_ip
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: appliance-ui
Here is the error:
error: error validating "ab.yml": error validating data: [ValidationError(Service.metadata.annotations.valueFrom): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string", ValidationError(Service.spec): unknown field "valueFrom" in io.k8s.api.core.v1.ServiceSpec]; if you choose to ignore these errors, turn validation off with --validate=false
I have tried with --validate=false; it didn't work. Please let me know whether a ConfigMap will work for the Service and the loadBalancerIP field or not.
Unfortunately, you cannot fill in values of a Service manifest through configMapKeyRef. A ConfigMap is consumed by a Pod (container), for example as environment variables or mounted files; other resource types such as Service cannot reference ConfigMap values this way. Refer to Configure a Pod to Use a ConfigMap or ConfigMap and Pods for more details.
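If the goal is to avoid hard-coding these values, the usual route is a templating tool (Helm, Kustomize, or even envsubst) rather than a ConfigMap. A minimal sketch with the values simply inlined (the resource group and IP below are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: appliance-ui
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-resource-group   # placeholder
spec:
  type: LoadBalancer
  loadBalancerIP: 20.30.40.50   # placeholder static public IP
  ports:
    - port: 80
  selector:
    app: appliance-ui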

kubernetes: selector field missing in kompose

I have a docker-compose.yml file we have been using to set up our development environment.
The file declares some services, all of them more or less following the same pattern:
services:
  service_1:
    image: some_image_1
    environment:
      - ENV_VAR_1
      - ENV_VAR_2
    depends_on:
      - another_service_of_the_same_compose_file
With a view to migrating to Kubernetes, running:
kompose convert -f docker-compose.yml
produces, for each service, a pair of deployment/service manifests.
Two questions about the generated deployment:
1.
The examples in the official documentation seem to hint that the selector field is needed for a Deployment to be aware of the pods it should manage.
However, the generated deployment manifests do not include a selector field, and look as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.6.0 (e4adfef)
  creationTimestamp: null
  labels:
    io.kompose.service: service_1
  name: service_1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: service_1
    spec:
      containers:
      - image: my_image
        name: my_image_name
        resources: {}
      restartPolicy: Always
status: {}
2.
the apiVersion in the generated deployment manifest is extensions/v1beta1; however, the examples in the Deployments section of the official documentation default to apps/v1.
The recommendation seems to be
for versions before 1.9.0 use apps/v1beta2
Which is the correct version to use? (using kubernetes 1.8)
Let's begin by saying that Kubernetes and Kompose are two different, independent systems. Kompose tries to map everything in the Compose file onto Kubernetes resources.
At the moment, all of the selector fields are generated by Kubernetes itself; in the future, this might be done on the Kompose side.
If you would like to check your selector fields, use the following commands:
kubectl get deploy
kubectl describe deploy DEPLOY_NAME
As of Kubernetes 1.9, all of the long-running workload objects are part of the apps API group.
We’re excited to announce General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.
I have attached the link for further research
kubernetes-19-workloads
As a selector field isn't required for Deployments in extensions/v1beta1, Kompose doesn't set one; the selector tells the Deployment which Pods it manages, by matching their labels.
I wouldn't edit the apiVersion, because Kompose assumes that version when defining the rest of the resource. Also, if you are using Kubernetes 1.8, read the 1.8 docs: https://v1-8.docs.kubernetes.io/docs/
In Kubernetes 1.16 the Deployment's spec.selector became required (the old extensions/v1beta1 API was removed in that release). Kompose (as of 1.20) does not yet add it automatically, so you will have to add this to every *-deployment.yaml file it creates:
selector:
  matchLabels:
    io.kompose.service: alignment-processor
If you use an IDE like JetBrains, you can use the following search/replace patterns on the folder where you put the conversion results.
Search for this multiline regexp:
    io.kompose.service: (.*)
  name: \1
spec:
  replicas: 1
  template:
Replace with this pattern:
    io.kompose.service: $1
  name: $1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: $1
  template:
The (.*) captures the name of the service, the \1 matches the (first and only) capture, and the $1 substitutes the capture in the replacement.
You will also have to substitute all extensions/v1beta1 with apps/v1 in all *-deployment.yaml files.
I also found that secrets have to be massaged a bit, but this goes beyond the scope of this question.
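Putting both changes together, a corrected manifest would look roughly like this sketch (names taken from the generated output above; everything else left as Kompose produced it):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: service_1
  name: service_1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: service_1
  template:
    metadata:
      labels:
        io.kompose.service: service_1
    spec:
      containers:
        - image: my_image
          name: my_image_name
      restartPolicy: Always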