Why doesn't the label "version" appear in the running pod JSON?

kubectl get pod pod_name -n namespace_name -o json shows:
"labels": {
"aadpodidbinding": "sa-customerxyz-uat-msi",
"app": "cloudsitemanager",
"customer": "customerxyz",
"istio.io/rev": "default",
"pod-template-hash": "b87d9fcbf",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "cloudsitemanager",
"service.istio.io/canonical-revision": "latest"
}
I am deploying with the following manifest yaml snippet:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsitemanager
  labels:
    app: cloudsitemanager
    customer: customerxyz
    version: 0.1.0-beta.201
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsitemanager
      customer: customerxyz
  template:
    metadata:
      labels:
        app: cloudsitemanager
        customer: customerxyz
        version: 0.1.0-beta.201
        aadpodidbinding: sa-customerxyz-uat-msi
I expect to see 4 custom labels in the running pod manifest: app, customer, version, aadpodidbinding. However, I only see 3 of the custom labels. The label "version" does not show.

I had the same issue when running Istio + Kiali: Kiali was not showing the version. I tried adding the "version" label at the Deployment level, but it didn't work. After adding the "version" label to the pod template's metadata (.spec.template.metadata.labels), the label was applied to newly created pods, and Kiali now shows the version number instead of "latest".
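A minimal sketch of the relevant part of the Deployment, using the names from the question; the key point is that labels set only on the Deployment's own metadata are not copied to the pods, and only the labels under .spec.template.metadata.labels end up on the pods created for it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsitemanager
  labels:
    version: 0.1.0-beta.201      # labels only the Deployment object itself
spec:
  template:
    metadata:
      labels:
        app: cloudsitemanager
        version: 0.1.0-beta.201  # this is the label that shows up on the pods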

Related

How to debug Unknown field error in Kubernetes?

This might be a rookie question; I am not well versed with Kubernetes. I added this to my deployment.yaml:
ad.datadoghq.com/helm-chart.check_names: |
  ["openmetrics"]
ad.datadoghq.com/helm-chart.init_configs: |
  [{}]
ad.datadoghq.com/helm-chart.instances: |
  [
    {
      "prometheus_url": "http://%%host%%:7071/metrics",
      "namespace": "custom-metrics",
      "metrics": [ "jvm*" ]
    }
  ]
But I get this error:
error validating data: [ValidationError(Deployment.spec.template.metadata): unknown field "ad.datadoghq.com/helm-chart.check_names" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta, ValidationError(Deployment.spec.template.metadata)
What does this error mean? Does it mean that I need to define ad.datadoghq.com/helm-chart.check_names somewhere? If so, where?
You are probably adding this in the wrong place: according to your error message, you are adding it directly under Deployment.spec.template.metadata.
You can check the official Helm deployment template and the example in the documentation: values such as ad.datadoghq.com/helm-chart.check_names are annotations, so they need to be defined under Deployment.spec.template.metadata.annotations.
annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object
(see the annotations docs)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datadog-cluster-agent
  namespace: default
spec:
  selector:
    matchLabels:
      app: datadog-cluster-agent
  template:
    metadata: # <- not directly under 'metadata'
      labels:
        app: datadog-cluster-agent
        name: datadog-agent
      annotations: # <- add here
        ad.datadoghq.com/datadog-cluster-agent.check_names: '["prometheus"]'
        ad.datadoghq.com/datadog-cluster-agent.init_configs: '[{}]'
        ad.datadoghq.com/datadog-cluster-agent.instances: '[{"prometheus_url": "http://%%host%%:5000/metrics","namespace": "datadog.cluster_agent","metrics": ["go_goroutines","go_memstats_*","process_*","api_requests","datadog_requests","external_metrics", "cluster_checks_*"]}]'
    spec:

Is it possible to use an autoscaling/v2beta2 HPA with apiregistration.k8s.io/v1 APIService?

I've created a deployment which exposes a custom metric through an endpoint and an APIService that registers this custom metric, so I can use it in an HPA to autoscale the deployment. To achieve this, I've followed this tutorial.
It worked well while using an apiregistration.k8s.io/v1beta1 APIService. The metric was exposed correctly and the HPA could read it and scale accordingly. I've tried to update the APIService to version apiregistration.k8s.io/v1 (as v1beta1 is deprecated and removed in Kubernetes v1.22), but then the HPA couldn't pick up the metric anymore, with this message:
Message
-------
unable to get metric threatmessages: Service on test services-metrics-service/unable to fetch
metrics from custom metrics API: the server is currently unable to handle the request
(get services.custom.metrics.k8s.io services-metrics-service)
If I manually request the metric, it exists though:
kubectl get --raw /apis/custom.metrics.k8s.io/v1/namespaces/test/services/services-metrics-service/threatmessages |jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1",
  "metadata": {
    "selfLink": "custom.metrics.k8s.io/v1"
  },
  "items": [
    {
      "metricName": "threatmessages",
      "timestamp": "2021-02-09T14:43:39.321Z",
      "value": "0",
      "describedObject": {
        "kind": "Service",
        "namespace": "test",
        "name": "services-metrics-service",
        "apiVersion": "/v1"
      }
    }
  ]
}
Here are my APIService and HPA resources:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: services-metrics-service
    namespace: test
    port: 443
  version: v1
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: services-parallel-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: services-parallel-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        kind: Service
        name: services-metrics-service
      metric:
        name: threatmessages
      target:
        type: AverageValue
        averageValue: 4k
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 30
      policies:
      - type: Pods
        value: 1
        periodSeconds: 30
What am I doing wrong? Or are these 2 versions just not compatible for some reason?
According to the APIService 1.22 documentation, you can find the following information:
Migrate manifests and API clients to use the apiregistration.k8s.io/v1 API version, available since v1.10.
All existing persisted objects are accessible via the new API
No notable changes
First of all, v1 is available for the Kubernetes version you are using (v1.19).
Secondly, and more importantly: "All existing persisted objects are accessible via the new API" and there are "no notable changes". This means that objects created using v1beta1 don't need to be updated or modified; they will remain available and working even though they were created using v1beta1. In other words, after upgrading to v1.22 you should have no issue: the same objects will simply be accessible (and, I would expect, accessed by the HPA) as if they had been created using v1. What is more, they may already be accessible as v1 in version 1.19, as I will explain next, so you can check now whether everything is fine.
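For example, you can confirm which versions of the apiregistration.k8s.io group your cluster serves with a plain kubectl command (grep is only used here to filter the output):
kubectl api-versions | grep apiregistration.k8s.io
On a v1.19 cluster this should list both apiregistration.k8s.io/v1 and apiregistration.k8s.io/v1beta1.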
I have run some quick tests on a v1.19 GKE cluster and found the following: if a manifest containing apiVersion: apiregistration.k8s.io/v1beta1 (in fact, exactly what the OP provided in the question:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: services-metrics-service
    namespace: test
    port: 443
  version: v1
except using v1beta1 as the apiVersion) is applied, and a get command is then used to obtain what was created ($ kubectl get apiservices/v1.custom.metrics.k8s.io --output=json), the object comes back marked with apiVersion v1, not v1beta1 ("apiVersion": "apiregistration.k8s.io/v1"). And if, instead, apiVersion .../v1 is used during creation, the same object is obtained. If either of them is deleted, the other one is gone: it is the same object behind the scenes, yet it is reported as v1.
As a result of all the above, you can simply revert to what you were doing when you deployed using v1beta1, given that it worked and will keep working according to the APIService 1.22 documentation. You can also run $ kubectl get apiservices/v1.custom.metrics.k8s.io --output=json (or $ kubectl get APIService --output=json, or $ kubectl get APIService.apiregistration.k8s.io --output=json) once the v1beta1 version is deployed, to check whether the object is already reported as v1 behind the scenes despite having been created with v1beta1, as is happening in my case.
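For example, to print only the apiVersion the server reports for the object (this assumes jq is installed; any JSON viewer will do):
kubectl get apiservices/v1.custom.metrics.k8s.io -o json | jq .apiVersion
If it prints "apiregistration.k8s.io/v1", the object is already being served as v1.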
Creating objects with v1 is not necessary if they were already created with v1beta1.

Kubectl error: the object has been modified; please apply your changes to the latest version and try again

I am getting the below error while trying to apply a patch:
core#dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"default\":\"{\\\"dedicated_redis_cluster\\\": {\\\"nodes\\\": [{\\\"host\\\": \\\"192.168.1.94\\\", \\\"port\\\": 6379}]}}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-06-27T07:19:13Z\",\"labels\":{\"acp-app\":\"acp-discovery-service\",\"version\":\"1\"},\"name\":\"ads-central-configuration\",\"namespace\":\"acp-system\",\"resourceVersion\":\"1109832\",\"selfLink\":\"/api/v1/namespaces/acp-system/configmaps/ads-central-configuration\",\"uid\":\"64901676-79da-11e8-bd65-fa163eaa7a28\"}}\n"},"creationTimestamp":"2018-06-27T07:19:13Z","resourceVersion":"1109832","uid":"64901676-79da-11e8-bd65-fa163eaa7a28"}}
to:
&{0xc4200bb380 0xc420356230 acp-system ads-central-configuration ads-central-configuration.yaml 0xc42000c970 4434 false}
for: "ads-central-configuration.yaml": Operation cannot be fulfilled on configmaps "ads-central-configuration": the object has been modified; please apply your changes to the latest version and try again
core#dgoutam22-1-coreos-5760 ~ $
It seems likely that your YAML configuration was copy-pasted from what was generated, and thus contains fields such as creationTimestamp (and resourceVersion, selfLink, and uid), which don't belong in a declarative configuration file.
Go through your YAML and clean it up. Remove things that are instance-specific. Your final YAML should be simple enough that you can easily understand it.
Remove these lines from the file:
creationTimestamp:
resourceVersion:
selfLink:
uid:
Then try to apply again.
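As a rough sketch, the cleaned-up ConfigMap from the error output above could look like this (the values are taken from the last-applied-configuration shown in the error; only the instance-specific fields are dropped):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ads-central-configuration
  namespace: acp-system
  labels:
    acp-app: acp-discovery-service
    version: "1"
data:
  default: '{"dedicated_redis_cluster": {"nodes": [{"host": "192.168.1.94", "port": 6379}]}}'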
Pay attention to putting the latest resourceVersion in your update; you can get it by running:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml | grep resourceVersion
You may have edited the same exported deployment file.
1 - try to re-export it with:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml > deployment-file.yaml
2 - make the needed modifications in "deployment-file.yaml"
3 - apply the changes with:
kubectl apply -f deployment-file.yaml
OR:
you may want to edit the deployment directly; use:
kubectl edit deployment <DEPLOYMENT-NAME> -o yaml
Change the default editor if you aren't familiar with the vi editor: export EDITOR=nano
I am able to reproduce the issue in my test environment. Steps to reproduce:
Create a deployment from Kubernetes Engine > Workloads > Deploy
Input your Application Name, Namespace, Labels
Select cluster or create new cluster
You are able to view the YAML file there, and here is the sample:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "nginx-1"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "nginx-1"
  template:
    metadata:
      labels:
        app: "nginx-1"
    spec:
      containers:
      - name: "nginx"
        image: "nginx:latest"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "nginx-1-hpa"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "nginx-1"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80
After deployment if you go to Kubernetes Engine > Workloads > nginx-1 (click on it)
a.) You will get Deployment details (Overview, Details, Revision history, events, YAML)
b.) click on YAML and copy the content from YAML tab
c.) create new YAML file and paste the content and save the file
d.) Now if you run the command $ kubectl apply -f newyamlfile.yaml, it will show you the below error:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-09-17T21:34:39Z\",\"generation\":1,\"labels\":{\"app\":\"nginx-1\"},\"name\":\"nginx-1\",\"namespace\":\"default\",\"resourceVersion\":\"218884\",\"selfLink\":\"/apis/apps/v1/namespaces/default/deployments/nginx-1\",\"uid\":\"f41c5b6f-d992-11e9-9adc-42010a80023b\"},\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":3,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"app\":\"nginx-1\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"app\":\"nginx-1\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:latest\",\"imagePullPolicy\":\"Always\",\"name\":\"nginx\",\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":3,\"conditions\":[{\"lastTransitionTime\":\"2019-09-17T21:34:47Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"MinimumReplicasAvailable\",\"status\":\"True\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-09-17T21:34:39Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"ReplicaSet \\\"nginx-1-7b4bb7fbf8\\\" has successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":3,\"replicas\":3,\"updatedReplicas\":3}}\n"},"generation":1,"resourceVersion":"218884"},"spec":{"replicas":3},"status":{"availableReplicas":3,"observedGeneration":1,"readyReplicas":3,"replicas":3,"updatedReplicas":3}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-1", Namespace: "default"
Object: &{map["apiVersion":"apps/v1" "metadata":map["name":"nginx-1" "namespace":"default" "selfLink":"/apis/apps/v1/namespaces/default/deployments/nginx-1" "uid":"f41c5b6f-d992-11e9-9adc-42010a80023b" "generation":'\x02' "labels":map["app":"nginx-1"] "annotations":map["deployment.kubernetes.io/revision":"1"] "resourceVersion":"219951" "creationTimestamp":"2019-09-17T21:34:39Z"] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["app":"nginx-1"]] "template":map["metadata":map["labels":map["app":"nginx-1"] "creationTimestamp":<nil>] "spec":map["containers":[map["imagePullPolicy":"Always" "name":"nginx" "image":"nginx:latest" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":"25%" "maxSurge":"25%"]] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258'] "status":map["observedGeneration":'\x02' "replicas":'\x01' "updatedReplicas":'\x01' "readyReplicas":'\x01' "availableReplicas":'\x01' "conditions":[map["message":"Deployment has minimum availability." "type":"Available" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z" "lastTransitionTime":"2019-09-17T21:34:47Z" "reason":"MinimumReplicasAvailable"] map["lastTransitionTime":"2019-09-17T21:34:39Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"nginx-1-7b4bb7fbf8\" has successfully progressed." "type":"Progressing" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z"]]] "kind":"Deployment"]}
for: "test.yaml": Operation cannot be fulfilled on deployments.apps "nginx-1": the object has been modified; please apply your changes to the latest version and try again
To solve the problem, you need to find the exact YAML file and then edit it as per your requirement; after that you can run $ kubectl apply -f nginx-1.yaml.
Hope this information helps.
This error occurs because the deployment.yaml has an entry for resourceVersion. Remove it, as it's not needed, and you will be able to apply the new configuration.

How do I update a Kubernetes autoscaler?

I have created a Kubernetes autoscaler, but I need to change its parameters. How do I update it?
I've tried the following, but it fails:
✗ kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6
Error from server: horizontalpodautoscalers.extensions "web" already exists
You can always interactively edit the resources in your cluster. For your autoscale controller called web, you can edit it via:
kubectl edit hpa web
If you're looking for a more programmatic way to update your Horizontal Pod Autoscaler, you would have better luck describing your autoscaler entity in a YAML file as well. For example, here's a simple ReplicationController, paired with a HorizontalPodAutoscaler entity:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: default
spec:
  maxReplicas: 3
  minReplicas: 2
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: nginx
With those contents in a file called nginx.yaml, updating the autoscaler could be done via kubectl apply -f nginx.yaml.
You can use the kubectl patch command as well. To see the autoscaler's current status:
kubectl get hpa <autoscaler-name-here> -o json
An example output:
{
  "apiVersion": "autoscaling/v1",
  "kind": "HorizontalPodAutoscaler",
  "metadata": {
    ...
    "name": "your-auto-scaler",
    "namespace": "your-namespace",
    ...
  },
  "spec": {
    "maxReplicas": 50,
    "minReplicas": 2,
    "scaleTargetRef": {
      "apiVersion": "extensions/v1beta1",
      "kind": "Deployment",
      "name": "your-deployment"
    },
    "targetCPUUtilizationPercentage": 40
  },
  "status": {
    "currentReplicas": 1,
    "desiredReplicas": 2,
    "lastScaleTime": "2017-12-13T16:23:41Z"
  }
}
If you want to update the minimum number of replicas:
kubectl -n your-namespace patch hpa your-auto-scaler --patch '{"spec":{"minReplicas":1}}'
The same logic applies to other parameters found in the autoscaler configuration; change minReplicas to maxReplicas if you want to update the maximum number of allowed replicas, as shown below.
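For example, a patch that raises the maximum to a hypothetical value of 60 replicas, reusing the placeholder names from above:
kubectl -n your-namespace patch hpa your-auto-scaler --patch '{"spec":{"maxReplicas":60}}'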
First delete the autoscaler and then re-create it:
✗ kubectl delete hpa web
✗ kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6
I tried multiple options, but the one that works best is:
First delete the existing HPA:
kubectl delete hpa web
then recreate it:
kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6

Upgrade image in a Deployment's pods

I have a Deployment with 3 replicas of a pod running the image 172.20.20.20:5000/my_app, which is located in my private registry.
I want to do a rolling update of the Deployment when I push a new latest version of that image to my registry.
I push the new image this way (tag v3.0 to latest):
$ docker tag 172.20.20.20:5000/my_app:3.0 172.20.20.20:5000/my_app
$ docker push 172.20.20.20:5000/my_app
But nothing happens. Pods' images are not upgraded. This is my deployment definition:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: 172.20.20.20:5000/my_app:latest
        ports:
        - containerPort: 8080
Is there a way to do that automatically? Should I run a command like rolling-update, as with ReplicationControllers?
In order to upgrade a Deployment you have to modify the Deployment resource with the new image, for example changing 172.20.20.20:5000/my_app:v1 to 172.20.20.20:5000/my_app:v2. Since you're just modifying the image within the Docker registry, Kubernetes doesn't notice the change.
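A minimal sketch of doing this from the command line, assuming the Deployment and container names from the question and an explicit 3.0 tag on the image:
kubectl set image deployment/myapp-deployment app=172.20.20.20:5000/my_app:3.0
Because this changes the pod template, the Deployment rolls out the new image on its own.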
If you (manually) kill the individual Pods, the Deployment will restart them. Since the Deployment image specifies the "latest" tag, Kubernetes will download the latest version (now "v3" in your case) due to the implied "Always" imagePullPolicy.
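If you prefer to keep relying on the latest tag, it may be clearer to state the pull policy explicitly in the container spec; a sketch based on the manifest above:
    spec:
      containers:
      - name: app
        image: 172.20.20.20:5000/my_app:latest
        imagePullPolicy: Always   # re-pull the image whenever a pod is (re)created
        ports:
        - containerPort: 8080
Note that the image is only re-pulled when a pod is recreated, so you still have to delete the pods (or otherwise trigger a rollout) for the new image to be picked up.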