kube-apiserver authentication (Unauthorized) - kubernetes

I created a service account teamcity to be able to use the kube-apiserver:
kubectl create serviceaccount teamcity
With the command below I get the secret's name:
kubectl get serviceaccounts teamcity -o yaml
To find the token that was generated and is referenced by the previous command, I use:
kubectl get secret teamcity-token-lmr6z -o yaml
When I try to connect with curl I get an error, and I don't understand where my mistake is :(
curl -v -Sskk -H "Authorization: bearer ZXlKaGJH......wWHNIVzZ3" https://10.109.0.88:6443/api/v1/namespaces
HTTP/1.1 401 Unauthorized
Content-Type: application/json
Date: Thu, 05 Jul 2018 13:14:00 GMT
Content-Length: 165
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
* Connection #0 to host 10.109.0.88 left intact
I found a short description in the Kubernetes docs about why I get this error (section: Anonymous requests): https://kubernetes.io/docs/reference/access-authn-authz/authentication/
But I still don't understand where my mistake is, because with kubectl it works:
kubectl --token=ZXlKaGJHY2lPaUpTVXpJ........swWHNIVzZ3 get svc
NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes           NodePort    192.2.0.159   <none>        80:17502/TCP   13d
hello-kubernetes-olivier   NodePort    192.2.0.235   <none>        80:17296/TCP   13d
kubernetes                 ClusterIP   192.2.0.1     <none>        443/TCP        14d

It might be a typo in the "bearer" part; as I remember, it should be "Bearer".
A sample set of commands follows, from Kubernetes - Accessing Clusters:
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
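One thing worth checking: the token field in a secret's -o yaml output (data.token) is base64-encoded, while the API server expects the decoded value; the sample above uses kubectl describe secret, which already prints the decoded token. A minimal sketch for extracting a decoded token, assuming the teamcity-token-lmr6z secret name from the question:
$ TOKEN=$(kubectl get secret teamcity-token-lmr6z -o jsonpath='{.data.token}' | base64 -d)
$ curl -k -H "Authorization: Bearer $TOKEN" https://10.109.0.88:6443/api/v1/namespaces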

Related

Installing kong with postgresql and adding a service with the REST API

I currently have a problem installing kong with postgresql and adding a service through REST calls to the kong admin server.
My install command is as below:
helm install kong kong/kong -n kong \
--set ingressController.installCRDs=false \
--set admin.enabled=true \
--set admin.http.enabled=true \
--set postgresql.enabled=true \
--set postgresql.auth.username=kong \
--set postgresql.auth.database=kong \
--set postgresql.service.ports.postgresql=5432 \
--set postgresql.image.tag=13.6.0-debian-10-r52 \
--set migrations.init=false \
--set migrations.preUpgrade=false \
--set migrations.postUpgrade=false
It installs normally.
After registering a service, the error message shown further below appears.
(Don't worry about the LoadBalancer stuck in pending; it will be changed to NodePort later!)
root@nlu-framework-master-1:~# k get all -n kong
NAME                             READY   STATUS    RESTARTS   AGE
pod/kong-kong-5b685cd4b9-t95mx   2/2     Running   1          3m22s
pod/kong-postgresql-0            1/1     Running   1          3m22s

NAME                               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
service/kong-kong-admin            NodePort       10.233.7.63    <none>        8001:31422/TCP,8444:31776/TCP   3m22s
service/kong-kong-proxy            LoadBalancer   10.233.0.19    <pending>     80:30511/TCP,443:30358/TCP      3m22s
service/kong-postgresql            ClusterIP      10.233.42.35   <none>        5432/TCP                        3m22s
service/kong-postgresql-headless   ClusterIP      None           <none>        5432/TCP                        3m22s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-kong   1/1     1            1           3m22s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-kong-5b685cd4b9   1         1         1       3m22s

NAME                               READY   AGE
statefulset.apps/kong-postgresql   1/1     3m22s
My add-service command is as below:
curl -X POST http://10.233.7.63:8001/services \
-H 'Content-Type: application/json' \
-d '{"name":"k8s-api","url":"https://192.168.0.50:6443/api/v1/"}'
The add-service result message is as below:
{"code":12,"message":"cannot create 'services' entities when not using a database","name":"operation unsupported"}
Please, can anybody help me?
I solved the problem by myself.
PostgreSQL seems to have a bug in version 14 or higher (the error above indicates Kong came up without a working database connection, i.e. in DB-less mode), so install the cetic/postgresql helm chart instead. The cetic/postgresql chart ships PostgreSQL version 11.5:
https://artifacthub.io/packages/helm/cetic/postgresql
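Before reinstalling, one way to confirm that diagnosis is to ask the Kong admin API which database mode the node actually runs in; a sketch against the admin ClusterIP from above (expect "postgres" when the database is wired up, "off" in DB-less mode):
curl -s http://10.233.7.63:8001/ | jq .configuration.database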
helm install postgres cetic/postgresql -n kong \
--set postgresql.username=kong \
--set postgresql.password=kong \
--set postgresql.database=kong \
--set postgresql.port=5432
Install bitnami/kong with the external postgresql:
helm install kong -n kong bitnami/kong \
--set postgresql.enabled=false \
--set postgresql.external.host=postgres-postgresql \
--set postgresql.external.user=kong \
--set postgresql.external.password=kong \
--set postgresql.external.database=kong
k get all -n kong
NAME                        READY   STATUS    RESTARTS   AGE
pod/kong-9688f7f55-42cfm    2/2     Running   3          2m15s
pod/kong-9688f7f55-5ntvw    2/2     Running   3          2m15s
pod/postgres-postgresql-0   1/1     Running   0          4m54s

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kong                  ClusterIP   10.233.39.160   <none>        80/TCP,443/TCP   2m15s
service/postgres-postgresql   ClusterIP   10.233.23.169   <none>        5432/TCP         4m54s

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong   2/2     2            2           2m15s

NAME                             DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-9688f7f55   2         2         2       2m15s

NAME                                   READY   AGE
statefulset.apps/postgres-postgresql   1/1     4m54s
Change the service type to NodePort and add the admin ports:
k edit service/kong -n kong
- name: http-admin
  port: 8001
  protocol: TCP
  targetPort: http-admin
- name: https-admin
  port: 8444
  protocol: TCP
  targetPort: https-admin
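Instead of editing the Service interactively, the same ports can be appended with a JSON patch; a sketch, assuming the http-admin and https-admin container port names used above exist in the pod spec:
k patch service kong -n kong --type='json' -p='[
 {"op": "add", "path": "/spec/ports/-", "value": {"name": "http-admin", "port": 8001, "protocol": "TCP", "targetPort": "http-admin"}},
 {"op": "add", "path": "/spec/ports/-", "value": {"name": "https-admin", "port": 8444, "protocol": "TCP", "targetPort": "https-admin"}}]'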
Test adding a service via the admin service ClusterIP:
curl -X POST http://10.233.39.160:8001/services \
> -H 'Content-Type: application/json' \
> -d '{"name":"k8s-api","url":"https://192.168.0.50:6443/api/v1/"}'
Check that the service has been successfully added:
curl http://10.233.39.160:8001/services | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   401  100   401    0     0    97k      0 --:--:-- --:--:-- --:--:--   97k
{
  "next": null,
  "data": [
    {
      "id": "5cc7f7ce-3494-44fa-b76c-47795192f541",
      "host": "192.168.0.50",
      "path": "/api/v1/",
      "protocol": "https",
      "retries": 5,
      "ca_certificates": null,
      "write_timeout": 60000,
      "port": 6443,
      "tags": null,
      "name": "k8s-api",
      "tls_verify": null,
      "client_certificate": null,
      "tls_verify_depth": null,
      "connect_timeout": 60000,
      "enabled": true,
      "created_at": 1649513267,
      "updated_at": 1649513267,
      "read_timeout": 60000
    }
  ]
}
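To actually send traffic through the Kong proxy to this service, a route would normally be attached to it; a minimal sketch against the same admin API (the route name and the /k8s path are hypothetical examples):
curl -X POST http://10.233.39.160:8001/services/k8s-api/routes \
 -H 'Content-Type: application/json' \
 -d '{"name":"k8s-api-route","paths":["/k8s"]}'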

kubectl exec error dialing backend: x509: certificate signed by unknown authority

After a long struggle I just created my cluster and deployed a sample busybox container. Now I am trying to run the exec command and I get the following error:
error dialing backend: x509: certificate signed by unknown authority
How do I solve this one? Here is the command output at the -v=9 log level:
kubectl exec -v=9 -ti busybox -- nslookup kubernetes
I also noticed in the logs that the curl command that failed is actually the second request; the first GET request passed and returned results without any issues (GET https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox 200 OK).
curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.19.0 (linux/amd64) kubernetes/e199641" 'https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox/exec?command=nslookup&command=kubernetes&container=busybox&stdin=true&stdout=true&tty=true'
I1018 02:19:40.776134 129813 round_trippers.go:443] POST https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox/exec?command=nslookup&command=kubernetes&container=busybox&stdin=true&stdout=true&tty=true 500 Internal Server Error in 43 milliseconds
I1018 02:19:40.776189 129813 round_trippers.go:449] Response Headers:
I1018 02:19:40.776206 129813 round_trippers.go:452] Content-Type: application/json
I1018 02:19:40.776234 129813 round_trippers.go:452] Date: Sun, 18 Oct 2020 02:19:40 GMT
I1018 02:19:40.776264 129813 round_trippers.go:452] Content-Length: 161
I1018 02:19:40.776277 129813 round_trippers.go:452] Cache-Control: no-cache, private
I1018 02:19:40.777904 129813 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "error dialing backend: x509: certificate signed by unknown authority",
  "code": 500
}]
F1018 02:19:40.778081 129813 helpers.go:115] Error from server: error dialing backend: x509: certificate signed by unknown authority
goroutine 1 [running]:
Adding more information:
This is on Ubuntu 20.04. I went through creating my cluster manually, step by step; as a beginner I need that experience, instead of spinning one up with tools like kubeadm or minikube.
xxxx@master01:~$ kubectl exec -ti busybox -- nslookup kubernetes
Error from server: error dialing backend: x509: certificate signed by unknown authority
xxxx@master01:~$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                busybox                                      1/1     Running   52         2d5h
kube-system            coredns-78cb77577b-lbp87                     1/1     Running   0          2d5h
kube-system            coredns-78cb77577b-n7rvg                     1/1     Running   0          2d5h
kube-system            weave-net-d9jb6                              2/2     Running   7          2d5h
kube-system            weave-net-nsqss                              2/2     Running   0          2d14h
kube-system            weave-net-wnbq7                              2/2     Running   7          2d5h
kube-system            weave-net-zfsmn                              2/2     Running   0          2d14h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-dhcpn   1/1     Running   0          2d3h
kubernetes-dashboard   kubernetes-dashboard-665f4c5ff-6qnzp         1/1     Running   7          2d3h
xxxx@master01:~$ kubectl logs busybox
Error from server: Get "https://worker01:10250/containerLogs/default/busybox/busybox": x509: certificate signed by unknown authority
xxxx@master01:~$
xxxx@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Edited for simplicity:
My cluster operator kube-apiserver was degraded, which caused my certificate failures and the resulting x509 errors; resolving that degradation was necessary to resolve the overarching problem. Validate that all masters are READY and that the pods in your apiserver projects are scheduled and ready. See the KCS below for more information:
https://access.redhat.com/solutions/4849711
(Removed outdated/incorrect information below about local cert pull/export.)
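Since "error dialing backend" is raised when the apiserver dials the kubelet on the target node (port 10250, as the worker01 log line above shows), one way to narrow the failure down is to look at who signed the kubelet's serving certificate and compare that against the CA the apiserver trusts (its --kubelet-certificate-authority file). A sketch, assuming the worker01 hostname from the logs:
openssl s_client -connect worker01:10250 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates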

panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

When I deploy the kubernetesui/dashboard:v2.0.0 dashboard on Kubernetes v1.16.0, it throws this error:
[miaoyou@MeowK8SMaster1 ~]$ kubectl logs kubernetes-dashboard-56484d4c5-7jcvh -n kubernetes-dashboard
2020/06/14 06:47:33 Starting overwatch
2020/06/14 06:47:33 Using namespace: kubernetes-dashboard
2020/06/14 06:47:33 Using in-cluster config to connect to apiserver
2020/06/14 06:47:33 Using secret token for csrf signing
2020/06/14 06:47:33 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00000d540)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00048e880)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00048e880)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d
The Kubernetes cluster was running fine before I tweaked the token TTL time.
After I tweaked the TTL and restarted, it throws this error. How do I fix this problem?
[miaoyou@MeowK8SMaster1 ~]$ kubectl get pods -n kubernetes-dashboard -o wide
NAME                                        READY   STATUS             RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-c79c65bb7-5bgrw   0/1     CrashLoopBackOff   3569       8d    10.244.0.3   meowk8smaster1   <none>           <none>
kubernetes-dashboard-56484d4c5-7jcvh        0/1     CrashLoopBackOff   13         51m   10.244.0.6   meowk8smaster1   <none>           <none>
[miaoyou@MeowK8SMaster1 ~]$ curl -k https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "secrets \"kubernetes-dashboard-csrf\" is forbidden: User \"system:anonymous\" cannot get resource \"secrets\" in API group \"\" in the namespace \"kubernetes-dashboard\"",
  "reason": "Forbidden",
  "details": {
    "name": "kubernetes-dashboard-csrf",
    "kind": "secrets"
  },
  "code": 403
}
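Note that the 403 above was produced from the master host, so the apiserver is reachable there; the pod's panic is an i/o timeout, meaning the dashboard pod cannot reach the service IP 10.96.0.1 at all, which usually points at pod networking (CNI, kube-proxy/iptables, or firewall rules) rather than at the dashboard itself. A quick way to test from inside the pod network (a sketch; curlimages/curl is just a convenient test image, and -m 5 bounds the wait). Even a 403 here would prove connectivity, while a timeout reproduces the problem:
kubectl run curltest -it --rm --restart=Never --image=curlimages/curl -- curl -k -m 5 https://10.96.0.1:443/version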

Kubernetes API server, serving pod logs

The REST API requests (GET, POST, PUT, etc.) to the Kubernetes API server are plain requests and responses, and simple to understand, such as kubectl create <something>. I wonder how the API server serves pod logs when I do kubectl logs -f <pod-name> (and similar operations like kubectl attach <pod>). Is it just an HTTP response to a GET in a loop?
My advice is to always check what kubectl does under the covers, and for that use -v=9 with your command. It will show you the full requests and responses going between the client and the server.
Yep, looking at the source of logs.go, it's currently just an HTTP GET that kubectl is using, although there seems to be a desire to unify and upgrade a couple of commands (exec, port-forward, logs, etc.) to WebSockets.
Showing Maciej's excellent suggestion in action:
$ kubectl run test --image centos:7 \
-- sh -c "while true ; do echo Work ; sleep 2 ; done"
$ kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
test-769f6f8c9f-2nx7m   1/1     Running   0          2m
$ kubectl logs -v9 -f test-769f6f8c9f-2nx7m
I1019 13:49:34.282007 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.284698 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.292620 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.293136 71247 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m'
I1019 13:49:34.305016 71247 round_trippers.go:405] GET https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m 200 OK in 11 milliseconds
I1019 13:49:34.305039 71247 round_trippers.go:411] Response Headers:
I1019 13:49:34.305047 71247 round_trippers.go:414] Date: Fri, 19 Oct 2018 12:49:34 GMT
I1019 13:49:34.305054 71247 round_trippers.go:414] Content-Type: application/json
I1019 13:49:34.305062 71247 round_trippers.go:414] Content-Length: 2390
I1019 13:49:34.305125 71247 request.go:942] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"test-769f6f8c9f-2nx7m","generateName":"test-769f6f8c9f-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m","uid":"0581b0fa-d39d-11e8-9827-42a64713caf8","resourceVersion":"892912","creationTimestamp":"2018-10-19T12:46:39Z","labels":{"pod-template-hash":"3259294759","run":"test"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"test-769f6f8c9f","uid":"057f3ad4-d39d-11e8-9827-42a64713caf8","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-fbx4m","secret":{"secretName":"default-token-fbx4m","defaultMode":420}}],"containers":[{"name":"test","image":"centos:7","args":["sh","-c","while true ; do echo Work ; sleep 2 ; done"],"resources":{},"volumeMounts":[{"name":"default-token-fbx4m","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-10-19T12:46:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-10-19T12:46:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-10-19T12:46:39Z"}],"hostIP":"192.168.64.13","podIP":"172.17.0.11","startTime":"2018-10-19T12:46:39Z","containerStatuses":[{"name":"test","state":{"running":{"startedAt":"2018-10-19T12:46:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"centos:7","imageID":"docker-pullable://centos#sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b","containerID":"docker://5c25f5fce576d68d743afc9b46a9ea66f3cd245f5075aa95def623b6c2d93256"}],"qosClass":"BestEffort"}}
I1019 13:49:34.316531 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.317000 71247 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m/log?follow=true'
I1019 13:49:34.339341 71247 round_trippers.go:405] GET https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m/log?follow=true 200 OK in 22 milliseconds
I1019 13:49:34.339380 71247 round_trippers.go:411] Response Headers:
I1019 13:49:34.339390 71247 round_trippers.go:414] Content-Type: text/plain
I1019 13:49:34.339407 71247 round_trippers.go:414] Date: Fri, 19 Oct 2018 12:49:34 GMT
Work
Work
Work
^C
If you fetch any Kubernetes object using kubectl at the highest debugging level (-v=9) with the streaming option -f, for example kubectl logs -f <pod-name> -v=9, you can see that kubectl passes the follow=true flag in the API request, acquiring logs from the target Pod and streaming them to the output:
curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.1 (linux/amd64) kubernetes/4ed3216" 'https://API_server_IP/api/v1/namespaces/default/pods/Pod-name/log?follow=true'
You can try issuing your own API requests by following these steps:
Obtain a token for authorization purposes:
MY_TOKEN="$(kubectl get secret <default-secret> -o jsonpath='{$.data.token}' | base64 -d)"
Then you can manually retrieve the required data from the API server directly:
curl -k -v -H "Authorization: Bearer $MY_TOKEN" https://API_server_IP/api/v1/namespaces/default/pods
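Putting the two together, the same token can be used to stream logs the way kubectl logs -f does; a sketch (curl's -N disables output buffering so lines appear as they are produced; the pod name is a placeholder):
curl -k -N -H "Authorization: Bearer $MY_TOKEN" 'https://API_server_IP/api/v1/namespaces/default/pods/<pod-name>/log?follow=true'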

Kubectl patch and curl patch unable to patch a resource

I tried to add an extended resource to one node in my cluster. I followed this task from the official documentation.
I've followed the instructions step by step, but the PATCH doesn't seem to have an effect.
After running:
curl --header "Content-Type: application/json-patch+json" --request PATCH --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' http://localhost:8001/api/v1/nodes/kubernetes-3/status
I get a response with the added extended resource:
"capacity": {
"cpu": "8",
"example.com/dongle": "4",
"memory": "8218052Ki",
"pods": "110"
},
But if I run kubectl describe node kubernetes-3, the capacity has the old values:
Capacity:
 cpu:     8
 memory:  8218052Ki
 pods:    110
I've checked the apiserver logs and everything looks good:
PATCH /api/v1/nodes/kubernetes-3/status: (39.112896ms) 200 [[curl/7.59.0] 127.0.0.1:49234]
However, if I use the kubectl patch command, the command returns node "kubernetes-3" not patched
The command I ran: kubectl patch node kubernetes-3 --type='json' -p '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]'
And again the apiserver logs, which show that the response was successful (status 200):
PATCH /api/v1/nodes/kubernetes-3: (4.831866ms) 200 [[kubectl/v1.8.0+coreos.0 (linux/amd64) kubernetes/a65654e] 127.0.0.1:50004]
kubectl version output:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0+coreos.0", GitCommit:"a65654ef5b593ac19fbfaf33b1a1873c0320353b", GitTreeState:"clean", BuildDate:"2017-09-29T21:51:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0+coreos.0", GitCommit:"a65654ef5b593ac19fbfaf33b1a1873c0320353b", GitTreeState:"clean", BuildDate:"2017-09-29T21:51:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I've tried it on a Kubernetes v1.11.1 cluster.
The curl version works fine, but it takes some time (5-10 seconds) for the change to show up in the "get" output:
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "2"}]' \
http://localhost:8001/api/v1/nodes/node-name/status
kubectl get node node-name -o yaml
...
capacity:
  cpu: "2"
  ephemeral-storage: 20263528Ki
  example.com/dongle: "2"
  example2.com/dongle: "4"
  example3.com/dongle: "4"
  example4.com/dongle: "4"
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 7652316Ki
  pods: "110"
...
The kubectl version still doesn't work, but I guess that's because it requests the wrong path, /api/v1/nodes/node-name, instead of /api/v1/nodes/node-name/status.
The command
kubectl -v=9 patch node/node-name --type='json' -p='[{"op": "add", "path": "/status/capacity/example.com-dongle", "value": "6"}]'
gave me the log:
I0803 13:08:38.552155 694 round_trippers.go:386] curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" -H "User-Agent: kubectl/v1.11.1 (linux/amd64) kubernetes/b1b2997" 'https://10.156.0.8:6443/api/v1/nodes/node-name'
If we check the similar request over the kubectl proxy connection:
It doesn’t work:
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" -H "User-Agent: kubectl/v1.11.1 (linux/amd64) kubernetes/b1b2997" --data '[{"op": "add", "path": "/status/capacity/example4.com~1dongle", "value": "4"}]' \
'http://127.0.0.1:8001/api/v1/nodes/node-name'
But with "/status" at the end it works well:
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" -H "User-Agent: kubectl/v1.11.1 (linux/amd64) kubernetes/b1b2997" --data '[{"op": "add", "path": "/status/capacity/example4.com~1dongle", "value": "4"}]' \
'http://127.0.0.1:8001/api/v1/nodes/node-name/status'
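As a side note, the underlying reason is that capacity lives on the Node's status subresource, and plain kubectl patch in these versions only addressed the main resource. Newer kubectl releases (the --subresource flag was introduced around v1.24) can reportedly target the subresource directly; a sketch:
kubectl patch node node-name --subresource='status' --type='json' -p='[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]'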