How to extract Kubernetes pods' MAC addresses from the annotations object - kubernetes

I am trying to extract the MAC or IP addresses under metadata.annotations, using either a kubectl get po JSON filter or jq. Other objects are easy to manipulate to get those values.
kubectl get po -o json -n multus|jq -r .items
Under annotations there is duplicated CNI info, but that is OK. I would like to extract those MAC addresses using jq; it seems to be tricky on this one.
[
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"k8s.v1.cni.cncf.io/network-status": "[{\n \"name\": \"eps-cni\",\n \"ips\": [\n \"172.31.83.216\"\n ],\n \"default\": true,\n \"dns\": {}\n},{\n \"name\": \"ipvlan1-busybox1\",\n \"interface\": \"net1\",\n \"ips\": [\n \"172.31.230.70\"\n ],\n \"mac\": \"0a:2d:40:c6:f8:ea\",\n \"dns\": {}\n},{\n \"name\": \"ipvlan2-busybox1\",\n \"interface\": \"net2\",\n \"ips\": [\n \"172.31.232.70\"\n ],\n \"mac\": \"0a:52:8a:62:5d:f4\",\n \"dns\": {}\n}]",
"k8s.v1.cni.cncf.io/networks": "ipvlan1-busybox1, ipvlan2-busybox1",
"k8s.v1.cni.cncf.io/networks-status": "[{\n \"name\": \"eps-cni\",\n \"ips\": [\n \"172.31.83.216\"\n ],\n \"default\": true,\n \"dns\": {}\n},{\n \"name\": \"ipvlan1-busybox1\",\n \"interface\": \"net1\",\n \"ips\": [\n \"172.31.230.70\"\n ],\n \"mac\": \"0a:2d:40:c6:f8:ea\",\n \"dns\": {}\n},{\n \"name\": \"ipvlan2-busybox1\",\n \"interface\": \"net2\",\n \"ips\": [\n \"172.31.232.70\"\n ],\n \"mac\": \"0a:52:8a:62:5d:f4\",\n \"dns\": {}\n}]",
"kubernetes.io/psp": "eps.privileged"
},
"creationTimestamp": "2020-05-24T17:09:10Z",
"generateName": "busybox1-f476958bd-",
"labels": {
"app": "busybox",
"pod-template-hash": "f476958bd"
},
"name": "busybox1-f476958bd-hds4w",
"namespace": "multus",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicaSet",
"name": "busybox1-f476958bd",
"uid": "5daf9b52-e1b3-4df7-b5a1-028b48e7fcc0"
}
],
"resourceVersion": "965176",
"selfLink": "/api/v1/namespaces/multus/pods/busybox1-f476958bd-hds4w",
"uid": "0051b85d-9774-4f89-8658-f34065222bf0"
},
For a basic jq query, this works:
[root@ip-172-31-103-214 ~]# kubectl get po -o json -n multus|jq -r '.items[] | .spec.volumes'
[
{
"name": "test-busybox1-token-f6bdj",
"secret": {
"defaultMode": 420,
"secretName": "test-busybox1-token-f6bdj"
}
}
]
I can switch kubectl get po to YAML format and then use a normal grep command:
kubectl get po -o yaml -n multus|egrep 'mac'|sort -u
"mac": "0a:2d:40:c6:f8:ea",
"mac": "0a:52:8a:62:5d:f4",
Thanks

Starting with the original JSON and using jq's -r command-line option, the following jq filter yields the output shown below:
.[]
| .metadata.annotations[]
| (fromjson? // empty)
| .[]
| select(has("mac"))
| {mac}
Output:
{"mac":"0a:2d:40:c6:f8:ea"}
{"mac":"0a:52:8a:62:5d:f4"}
{"mac":"0a:2d:40:c6:f8:ea"}
{"mac":"0a:52:8a:62:5d:f4"}

Please try the command below and you should get the expected output.
cat abc.json | jq -r '.metadata.annotations."k8s.v1.cni.cncf.io/networks-status" | fromjson | .[].mac '
where abc.json is your JSON file.
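If you would rather not save the output to a file first, the same filter should also work piped directly from kubectl; a sketch using the pod name from the question (note that entries without a mac, such as the default eps-cni interface, come out as null, so you may want to filter those with select):
kubectl get po busybox1-f476958bd-hds4w -n multus -o json | jq -r '.metadata.annotations."k8s.v1.cni.cncf.io/networks-status" | fromjson | .[].mac'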

Related

Patch through the Kubernetes REST API

I am trying to patch a HorizontalPodAutoscaler to set the minimum replicas through the Kubernetes API.
Here is the curl command I am using:
curl -k \
--request PATCH \
--header "Authorization: Bearer $KUBE_TOKEN" \
--header "Content-Type: application/strategic-merge-patch+json" \
--data '{
"apiVersion": "autoscaling/v1",
"kind": "HorizontalPodAutoscaler",
"metadata": {
"labels": {
"app.kubernetes.io/instance": "test"
},
"name": "test",
"namespace": "default"
},
"spec": {
"maxReplicas": 2,
"minReplicas": 1,
"scaleTargetRef": {
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "test"
},
"targetCPUUtilizationPercentage": 60
}
}' \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers
I receive the following response:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "the server does not allow this method on the requested resource",
"reason": "MethodNotAllowed",
"details": {
},
"code": 405
}
Does anyone know what I am missing?
Thanks
The URL path must contain the name:
/apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name}
It's documented on this page: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#horizontalpodautoscaler-v1-autoscaling
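For example, here is a sketch of the corrected request, reusing the values from the question (an HPA named test in the default namespace) and sending only the fields to change:
curl -k \
--request PATCH \
--header "Authorization: Bearer $KUBE_TOKEN" \
--header "Content-Type: application/strategic-merge-patch+json" \
--data '{"spec": {"minReplicas": 1, "maxReplicas": 2}}' \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/test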

How to obtain a specified part of the output content in k8s?

When using kubectl get -o yaml/json to obtain resource information, the output is too detailed. How can I obtain only a specified part of the content?
[root@ops-harbor ~]# kubectl get pod -n monitoring prometheus-prome-prometheus-operator-prometheus-0 -o json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2021-11-02T08:58:33Z",
"generateName": "prometheus-prome-prometheus-operator-prometheus-",
"labels": {
"app": "prometheus",
"controller-revision-hash": "prometheus-prome-prometheus-operator-prometheus-c56894959",
"prometheus": "prome-prometheus-operator-prometheus",
"statefulset.kubernetes.io/pod-name": "prometheus-prome-prometheus-operator-prometheus-0"
},
"name": "prometheus-prome-prometheus-operator-prometheus-0",
"namespace": "monitoring",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "StatefulSet",
"name": "prometheus-prome-prometheus-operator-prometheus",
"uid": "3c02e78b-610c-4e9c-9171-cc47b00274a3"
}
],
"resourceVersion": "2640925",
"selfLink": "/api/v1/namespaces/monitoring/pods/prometheus-prome-prometheus-operator-prometheus-0",
"uid": "e728914c-2a3c-4d6a-8a18-5ebec0e0cebd"
},
# ...long long content
For example, I only want to get the following two sections of information:
"apiVersion": "v1",
"kind": "Pod",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "StatefulSet",
"name": "prometheus-prome-prometheus-operator-prometheus",
"uid": "3c02e78b-610c-4e9c-9171-cc47b00274a3"
}
kubectl get pod -n monitoring prometheus-prome-prometheus-operator-prometheus-0 -o json | jq .metadata.ownerReferences
or
kubectl get pod -n monitoring prometheus-prome-prometheus-operator-prometheus-0 -o jsonpath={.metadata.ownerReferences} | jq
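If you also want apiVersion and kind alongside ownerReferences, as in the example above, a small jq object construction can collect them in one pass; a sketch:
kubectl get pod -n monitoring prometheus-prome-prometheus-operator-prometheus-0 -o json | jq '{apiVersion, kind, ownerReferences: .metadata.ownerReferences}'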

Istio -- Delete istio-control-plane Process Is Frozen

I was trying to uninstall and reinstall Istio from a k8s cluster following the documented steps.
But I made the mistake of deleting the namespace before deleting istio-control-plane with kubectl delete istiooperator istio-control-plane -n istio-system. Now when I try to delete the istio-control-plane again, it freezes.
I tried to remove the finalizer using the following steps, but it said Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found
kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
Here is the content of kubectl get istiooperator -n istio-system -o json:
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "install.istio.io/v1alpha1",
"kind": "IstioOperator",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
},
"creationTimestamp": "2020-12-05T23:39:34Z",
"deletionGracePeriodSeconds": 0,
"deletionTimestamp": "2020-12-07T16:41:41Z",
"finalizers": [
],
"generation": 2,
"name": "istio-control-plane",
"namespace": "istio-system",
"resourceVersion": "11750055",
"selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
"uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
},
"spec": {
"addonComponents": {
"prometheus": {
"enabled": false
},
"tracing": {
"enabled": false
}
},
"hub": "hub.docker.prod.walmart.com/istio",
"profile": "default",
"values": {
"global": {
"defaultNodeSelector": {
"beta.kubernetes.io/os": "linux"
}
}
}
},
"status": {
"componentStatus": {
"Base": {
"status": "HEALTHY"
},
"IngressGateways": {
"status": "HEALTHY"
},
"Pilot": {
"status": "HEALTHY"
}
},
"status": "HEALTHY"
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
Any ideas on how I can uninstall istio-control-plane manually?
You can use the command below to change the Istio operator finalizer and delete it; it's a jq/kubectl one-liner made by @Rico here. I also tried with kubectl patch, but it didn't work.
kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
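If you want to double-check that the finalizers were actually cleared before the stuck delete completes (assuming the resource still exists at that moment), a quick jsonpath query should print an empty result:
kubectl get istiooperator istio-control-plane -n istio-system -o jsonpath='{.metadata.finalizers}'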
Additionally, I used istioctl operator remove:
istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
Results from kubectl get:
kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found

How to get an object's metadata name in Kubernetes

I can get everything for a list of objects, such as Secrets and ConfigMaps:
{
"kind": "SecretList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kube-system/secrets",
"resourceVersion": "499638"
},
"items": [{
"metadata": {
"name": "aaa",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/secrets/aaa",
"uid": "96b0fbee-f14c-423d-9734-53fed20ae9f9",
"resourceVersion": "1354",
"creationTimestamp": "2020-02-24T11:20:23Z"
},
"data": "aaa"
}]
}
but I only want the list of names, in this example: "aaa". Is there any way?
Yes, you can achieve it by using jsonpath output. Note that the specification you posted will look quite different once applied. It will create one Secret object in your kube-system namespace, and when you run:
$ kubectl get secret -n kube-system aaa -o json
the output will look similar to the following:
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"creationTimestamp": "2020-02-25T11:08:21Z",
"name": "aaa",
"namespace": "kube-system",
"resourceVersion": "34488887",
"selfLink": "/api/v1/namespaces/kube-system/secrets/aaa",
"uid": "229edeb3-57bf-11ea-b366-42010a9c0093"
},
"type": "Opaque"
}
To get only the name of your Secret you need to run:
kubectl get secret aaa -n kube-system -o jsonpath='{.metadata.name}'
I think this should work:
kubectl get secrets -o=jsonpath="{.items[*].metadata.name}" | grep -v HEAD | head -n1
Check the link below for more info:
https://kubernetes.io/docs/reference/kubectl/jsonpath/
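Alternatively, to print one name per line for every Secret in the namespace, jsonpath's range syntax can be used; a sketch:
kubectl get secrets -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'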

How can I filter events for the cluster autoscaler in kubernetes?

I see the following event from kubectl get events:
{
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2019-12-04T19:52:51Z",
"involvedObject": {
"apiVersion": "v1",
"kind": "Pod",
"name": "example-deployment-55f789d54c-tlwnz",
"namespace": "default",
"resourceVersion": "82663",
"uid": "2fdbd034-16cf-11ea-bc4a-42010a800186"
},
"kind": "Event",
"lastTimestamp": "2019-12-04T19:52:51Z",
"message": "Unable to mount volumes for pod \"example-deployment-55f789d54c-tlwnz_default(2fdbd034-16cf-11ea-bc4a-42010a800186)\": timeout expired waiting for volumes to attach or mount for pod \"default\"/\"example-deployment-55f789d54c-tlwnz\". list of unmounted volumes=[nfs-volume]. list of unattached volumes=[nfs-volume default-token-kc7ks]",
"metadata": {
"creationTimestamp": "2019-12-04T19:52:51Z",
"name": "example-deployment-55f789d54c-tlwnz.15dd430deb31e8fd",
"namespace": "default",
"resourceVersion": "1529",
"selfLink": "/api/v1/namespaces/default/events/example-deployment-55f789d54c-tlwnz.15dd430deb31e8fd",
"uid": "a7c80266-16cf-11ea-bc4a-42010a800186"
},
"reason": "FailedMount",
"reportingComponent": "",
"reportingInstance": "",
"source": {
"component": "kubelet",
"host": "gke-test-a2e50ea5b9f1dd9-my-node-pool-5a20b1ac-vk9q"
},
"type": "Warning"
}
....
I've tried filtering by: kubectl get events --all-namespaces -o json --field-selector source.component=cluster-autoscaler but that errors with:
{
"apiVersion": "v1",
"items": [],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
Error from server (BadRequest): Unable to find "/v1, Resource=events" that match label selector "", field selector "source.component=cluster-autoscaler": field label not supported: source.component
How can I filter this?
This can be done using jq (though it does not return a JSON array, but individual JSON objects separated by newlines):
kubectl get events --all-namespaces -o json | jq '.items[]|select(.source.component=="cluster-autoscaler")'
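If you need the result as a proper JSON array rather than newline-separated objects, wrapping the selection in map should do it; a sketch:
kubectl get events --all-namespaces -o json | jq '.items | map(select(.source.component=="cluster-autoscaler"))'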