I tried to add an extended resource to one node in my cluster. I followed this task from the official documentation.
I followed the instructions step by step, but the PATCH doesn't seem to have any effect.
After running:
curl --header "Content-Type: application/json-patch+json" --request PATCH --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' http://localhost:8001/api/v1/nodes/kubernetes-3/status
I get a response with the added extended resource:
"capacity": {
"cpu": "8",
"example.com/dongle": "4",
"memory": "8218052Ki",
"pods": "110"
},
But if I run kubectl describe node kubernetes-3, the capacity still shows the old values:
Capacity:
  cpu:     8
  memory:  8218052Ki
  pods:    110
I've checked the apiserver logs and everything looks good:
PATCH /api/v1/nodes/kubernetes-3/status: (39.112896ms) 200 [[curl/7.59.0] 127.0.0.1:49234]
However, if I use the kubectl patch command, it returns node "kubernetes-3" not patched.
The command I ran: kubectl patch node kubernetes-3 --type='json' -p '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]'
And again, the apiserver logs show that the response was successful (status 200):
PATCH /api/v1/nodes/kubernetes-3: (4.831866ms) 200 [[kubectl/v1.8.0+coreos.0 (linux/amd64) kubernetes/a65654e] 127.0.0.1:50004]
kubectl version output:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0+coreos.0", GitCommit:"a65654ef5b593ac19fbfaf33b1a1873c0320353b", GitTreeState:"clean", BuildDate:"2017-09-29T21:51:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0+coreos.0", GitCommit:"a65654ef5b593ac19fbfaf33b1a1873c0320353b", GitTreeState:"clean", BuildDate:"2017-09-29T21:51:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I've tried it on a Kubernetes v1.11.1 cluster.
The curl version works fine, but it takes some time (5-10 seconds) for the change to show up in the get output:
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "2"}]' \
http://localhost:8001/api/v1/nodes/node-name/status
kubectl get node node-name -o yaml
...
capacity:
  cpu: "2"
  ephemeral-storage: 20263528Ki
  example.com/dongle: "2"
  example2.com/dongle: "4"
  example3.com/dongle: "4"
  example4.com/dongle: "4"
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 7652316Ki
  pods: "110"
...
The kubectl version still doesn't work, but I guess that's because it requests the wrong address: /api/v1/nodes/node-name instead of /api/v1/nodes/node-name/status.
The command
kubectl -v=9 patch node/node-name --type='json' -p='[{"op": "add", "path": "/status/capacity/example.com-dongle", "value": "6"}]'
gave me the log:
I0803 13:08:38.552155 694 round_trippers.go:386] curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" -H "User-Agent: kubectl/v1.11.1 (linux/amd64) kubernetes/b1b2997" 'https://10.156.0.8:6443/api/v1/nodes/node-name'
If we check a similar request over the kubectl proxy connection, it doesn't work:
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" -H "User-Agent: kubectl/v1.11.1 (linux/amd64) kubernetes/b1b2997" --data '[{"op": "add", "path": "/status/capacity/example4.com~1dongle", "value": "4"}]' \
'http://127.0.0.1:8001/api/v1/nodes/node-name'
But with /status at the end, it works well:
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" -H "User-Agent: kubectl/v1.11.1 (linux/amd64) kubernetes/b1b2997" --data '[{"op": "add", "path": "/status/capacity/example4.com~1dongle", "value": "4"}]' \
'http://127.0.0.1:8001/api/v1/nodes/node-name/status'
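As a side note, much newer kubectl releases (v1.24+) added a --subresource flag to kubectl patch, which should make it target the /status endpoint directly instead of /api/v1/nodes/node-name. A hedged sketch, not verified on the versions above:
# requires kubectl >= 1.24; patches the status subresource that plain `kubectl patch node` misses
kubectl patch node node-name --subresource=status --type='json' \
  -p '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]'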
Related
I installed Kubernetes with kubeadm, then shut the machine down and booted it again.
Now only kubectl port-forward does not work.
The following command works.
kubectl top nodes
But kubectl port-forward fails. Are there any suggestions?
It seems only port-forward goes to localhost:8080.
This does not work:
sudo kubectl port-forward -n istio-system svc/istio-ingressgateway 80:80 --address 0.0.0.0
The log in normal mode is the following:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The debug mode log:
I0930 01:44:46.904299 47718 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.12 (linux/amd64) kubernetes/5ec4722" 'http://localhost:8080/api?timeout=32s'
I0930 01:44:46.904927 47718 round_trippers.go:443] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
I0930 01:44:46.904948 47718 round_trippers.go:449] Response Headers:
I0930 01:44:46.904987 47718 cached_discovery.go:121] skipped caching discovery info due to Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I0930 01:44:46.905009 47718 shortcut.go:89] Error loading discovery information: Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I0930 01:44:46.905067 47718 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.12 (linux/amd64) kubernetes/5ec4722" 'http://localhost:8080/api?timeout=32s'
I0930 01:44:46.905222 47718 round_trippers.go:443] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
I0930 01:44:46.905240 47718 round_trippers.go:449] Response Headers:
I0930 01:44:46.905263 47718 cached_discovery.go:121] skipped caching discovery info due to Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I0930 01:44:46.905325 47718 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.12 (linux/amd64) kubernetes/5ec4722" 'http://localhost:8080/api?timeout=32s'
I0930 01:44:46.905465 47718 round_trippers.go:443] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
I0930 01:44:46.905482 47718 round_trippers.go:449] Response Headers:
I0930 01:44:46.905504 47718 cached_discovery.go:121] skipped caching discovery info due to Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I0930 01:44:46.905556 47718 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.12 (linux/amd64) kubernetes/5ec4722" 'http://localhost:8080/api?timeout=32s'
I0930 01:44:46.905695 47718 round_trippers.go:443] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
I0930 01:44:46.905712 47718 round_trippers.go:449] Response Headers:
I0930 01:44:46.905734 47718 cached_discovery.go:121] skipped caching discovery info due to Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I0930 01:44:46.905759 47718 helpers.go:221] Connection error: Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
F0930 01:44:46.905785 47718 helpers.go:114] The connection to the server localhost:8080 was refused - did you specify the right host or port?
The kubectl config view output is as follows:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.31.3.157:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
For reference, the debug mode output of the working kubectl top nodes is as follows (the log is too long, so only the header is attached):
I0930 01:46:47.625764 49195 loader.go:375] Config loaded from file: /home/ubuntu/.kube/config
I0930 01:46:47.641064 49195 round_trippers.go:423] curl -k -v -XGET -H "User-Agent: kubectl/v1.17.12 (linux/amd64) kubernetes/5ec4722" -H "Accept: application/json, */*" 'https://172.31.3.157:6443/api?timeout=32s'
I0930 01:46:47.680091 49195 round_trippers.go:443] GET https://172.31.3.157:6443/api?timeout=32s 200 OK in 39 milliseconds
I0930 01:46:47.680116 49195 round_trippers.go:449] Response Headers:
I0930 01:46:47.680123 49195 round_trippers.go:452] Cache-Control: no-cache, private
I0930 01:46:47.680128 49195 round_trippers.go:452] Content-Type: application/json
I0930 01:46:47.680134 49195 round_trippers.go:452] Content-Length: 135
I0930 01:46:47.680139 49195 round_trippers.go:452] Date: Wed, 30 Sep 2020 01:46:47 GMT
I0930 01:46:47.680340 49195 request.go:1017] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"172.31.3.157:6443"}]}
I0930 01:46:47.680594 49195 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.12 (linux/amd64) kubernetes/5ec4722" 'https://172.31.3.157:6443/apis?timeout=32s'
I would like to create a Job from a CronJob using a curl command.
I am aware of the Kubernetes kubectl and OpenShift oc commands below, which work, but I am looking for the equivalent curl command.
Kubernetes:
kubectl create job <job-name> --from=cronjob/<cronjob-name>
OpenShift:
oc create job <job-name> --from=cronjob/<cronjob-name>
Please help. I am using OpenShift 3.11.
You can run the kubectl command at a high verbosity level, and it will show the curl commands and request bodies that it uses internally.
kubectl create job job --from=cronjob/test-job --v=10
I0324 10:46:36.071067 44400 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json" -H "User-Agent: kubectl/v1.17.0 (darwin/amd64) kubernetes/70132b0" 'https://127.0.0.1:32768/apis/batch/v1beta1/namespaces/default/cronjobs/test-job'
I0324 10:46:36.110550 44400 round_trippers.go:443] GET https://127.0.0.1:32768/apis/batch/v1beta1/namespaces/default/cronjobs/test-job 200 OK in 39 milliseconds
I0324 10:46:36.110573 44400 round_trippers.go:449] Response Headers:
I0324 10:46:36.110579 44400 round_trippers.go:452] Content-Type: application/json
I0324 10:46:36.110585 44400 round_trippers.go:452] Content-Length: 898
I0324 10:46:36.110590 44400 round_trippers.go:452] Date: Tue, 24 Mar 2020 05:16:36 GMT
I0324 10:46:36.110631 44400 request.go:1017] Response Body: {"kind":"CronJob","apiVersion":"batch/v1beta1","metadata":{"name":"test-job","namespace":"default","selfLink":"/apis/batch/v1beta1/namespaces/default/cronjobs/test-job","uid":"11813788-123d-4379-a103-79e18c7e954c","resourceVersion":"64182","creationTimestamp":"2020-03-24T05:16:03Z"},"spec":{"schedule":"*/1 * * * *","concurrencyPolicy":"Allow","suspend":false,"jobTemplate":{"metadata":{"name":"test-job","creationTimestamp":null},"spec":{"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":[{"name":"test-job","image":"busybox","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"OnFailure","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}}}},"successfulJobsHistoryLimit":3,"failedJobsHistoryLimit":1},"status":{}}
I0324 10:46:36.117139 44400 request.go:1017] Request Body: {"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"job","creationTimestamp":null,"annotations":{"cronjob.kubernetes.io/instantiate":"manual"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"CronJob","name":"test-job","uid":"11813788-123d-4379-a103-79e18c7e954c","controller":true,"blockOwnerDeletion":true}]},"spec":{"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":[{"name":"test-job","image":"busybox","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"OnFailure","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{}}
I0324 10:46:36.117189 44400 round_trippers.go:423] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubectl/v1.17.0 (darwin/amd64) kubernetes/70132b0" 'https://127.0.0.1:32768/apis/batch/v1/namespaces/default/jobs'
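Putting the trace together, a minimal curl sketch of the same flow (assuming kubectl proxy is running on localhost:8001 and the test-job CronJob from the trace; the Job name test-job-manual is made up, and on OpenShift 3.11 CronJobs are still under batch/v1beta1):
# 1) read the CronJob to copy fields from its jobTemplate
curl -s http://localhost:8001/apis/batch/v1beta1/namespaces/default/cronjobs/test-job
# 2) POST a Job built from jobTemplate.spec to the batch/v1 jobs endpoint
curl -s -XPOST -H "Content-Type: application/json" \
  http://localhost:8001/apis/batch/v1/namespaces/default/jobs \
  --data '{"apiVersion":"batch/v1","kind":"Job","metadata":{"name":"test-job-manual","annotations":{"cronjob.kubernetes.io/instantiate":"manual"}},"spec":{"template":{"spec":{"containers":[{"name":"test-job","image":"busybox"}],"restartPolicy":"OnFailure"}}}}'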
I am deploying an HA Kubernetes control plane (stacked etcd) with kubeadm. I followed
the instructions on the official website:
https://kubernetes.io/docs/setup/independent/high-availability/
Four nodes are planned in my cluster for now:
one HAProxy node used for master load balancing;
three stacked-etcd master nodes.
I deployed HAProxy with the following configuration:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend haproxy_kube
    bind *:6443
    mode tcp
    option tcplog
    timeout client 10800s
    default_backend masters

backend masters
    mode tcp
    option tcplog
    balance leastconn
    timeout server 10800s
    server master01 <master01-ip>:6443 check
My kubeadm-config.yaml is like this:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  name: "master01"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "<haproxyserver-dns>"
controlPlaneEndpoint: "<haproxyserver-dns>:6443"
networking:
  serviceSubnet: "172.24.0.0/16"
  podSubnet: "172.16.0.0/16"
My init command is:
kubeadm init --config=kubeadm-config.yaml -v 11
But after running the command above on master01, it kept logging the following information:
I0122 11:43:44.039849 17489 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0122 11:43:44.041038 17489 local.go:57] [etcd] wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
I0122 11:43:44.041068 17489 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0122 11:43:44.042665 17489 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0122 11:43:44.044971 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:44.120973 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 75 milliseconds
I0122 11:43:44.120988 17489 round_trippers.go:444] Response Headers:
I0122 11:43:44.621201 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:44.703556 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 82 milliseconds
I0122 11:43:44.703577 17489 round_trippers.go:444] Response Headers:
I0122 11:43:45.121311 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:45.200493 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 79 milliseconds
I0122 11:43:45.200514 17489 round_trippers.go:444] Response Headers:
I0122 11:43:45.621338 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:45.698633 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 77 milliseconds
I0122 11:43:45.698652 17489 round_trippers.go:444] Response Headers:
I0122 11:43:46.121323 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:46.199641 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 78 milliseconds
I0122 11:43:46.199660 17489 round_trippers.go:444] Response Headers:
After quitting the loop with Ctrl-C, I ran the curl command manually, but everything seems OK:
curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
* About to connect() to <haproxyserver-dns> port 6443 (#0)
* Trying <haproxyserver-ip>...
* Connected to <haproxyserver-dns> (10.135.64.223) port 6443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jan 22 03:43:38 2019 GMT
* expire date: Jan 22 03:43:38 2020 GMT
* common name: kube-apiserver
* issuer: CN=kubernetes
> GET /healthz?timeout=32s HTTP/1.1
> Host: <haproxyserver-dns>:6443
> Accept: application/json, */*
> User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab
>
< HTTP/1.1 200 OK
< Date: Tue, 22 Jan 2019 04:09:03 GMT
< Content-Length: 2
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host <haproxyserver-dns> left intact
ok
I don't know how to find the root cause of this issue; I hope someone who knows about this can give me some suggestions. Thanks!
After several days of searching and trying, I solved this problem by myself. In fact, the problem came from a rather rare situation:
I had set a proxy on the master node in both /etc/profile and docker.service.d, which kept the requests to HAProxy from working properly.
I don't know which setting caused the problem, but after adding a no-proxy rule, the problem was solved and kubeadm successfully initialized a master behind the HAProxy load balancer. Here are my proxy settings:
/etc/profile:
...
export http_proxy=http://<my-proxy-server-dns:port>/
export no_proxy=<my-k8s-master-loadbalance-server-dns>,<my-proxy-server-dns>,localhost
/etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://<my-proxy-server-dns:port>/" "NO_PROXY=<my-k8s-master-loadbalance-server-dns>,<my-proxy-server-dns>,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
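A quick way to see the difference is to issue the same health check with and without the proxy, since curl honors the same http_proxy/no_proxy variables (hostname placeholders as above):
# goes through the proxy if http_proxy is set and the host is not in no_proxy
curl -k 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
# forces a direct connection, which is what kubeadm needs here
curl -k --noproxy '<haproxyserver-dns>' 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'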
The REST API requests to the Kubernetes API server (GET, POST, PUT, etc.) are simple request/response pairs and easy to understand, such as what backs kubectl create <something>. I wonder how the API server serves the pod logs when I do kubectl logs -f <pod-name> (and similar operations like kubectl attach <pod>). Is it just an HTTP response to a GET in a loop?
My advice is to always check what kubectl does under the covers, and for that use -v=9 with your command. It will provide you with the full requests and responses going between the client and the server.
Yep, looking at the source of logs.go, it's currently just an HTTP GET that kubectl uses, although there seems to be a desire to unify and upgrade a couple of commands (exec, port-forward, logs, etc.) to WebSockets.
Showing Maciej's excellent suggestion in action:
$ kubectl run test --image centos:7 \
-- sh -c "while true ; do echo Work ; sleep 2 ; done"
$ kubectl get po
NAME READY STATUS RESTARTS AGE
test-769f6f8c9f-2nx7m 1/1 Running 0 2m
$ kubectl logs -v9 -f test-769f6f8c9f-2nx7m
I1019 13:49:34.282007 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.284698 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.292620 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.293136 71247 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m'
I1019 13:49:34.305016 71247 round_trippers.go:405] GET https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m 200 OK in 11 milliseconds
I1019 13:49:34.305039 71247 round_trippers.go:411] Response Headers:
I1019 13:49:34.305047 71247 round_trippers.go:414] Date: Fri, 19 Oct 2018 12:49:34 GMT
I1019 13:49:34.305054 71247 round_trippers.go:414] Content-Type: application/json
I1019 13:49:34.305062 71247 round_trippers.go:414] Content-Length: 2390
I1019 13:49:34.305125 71247 request.go:942] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"test-769f6f8c9f-2nx7m","generateName":"test-769f6f8c9f-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m","uid":"0581b0fa-d39d-11e8-9827-42a64713caf8","resourceVersion":"892912","creationTimestamp":"2018-10-19T12:46:39Z","labels":{"pod-template-hash":"3259294759","run":"test"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"test-769f6f8c9f","uid":"057f3ad4-d39d-11e8-9827-42a64713caf8","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-fbx4m","secret":{"secretName":"default-token-fbx4m","defaultMode":420}}],"containers":[{"name":"test","image":"centos:7","args":["sh","-c","while true ; do echo Work ; sleep 2 ; done"],"resources":{},"volumeMounts":[{"name":"default-token-fbx4m","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-10-19T12:46:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-10-19T12:46:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-10-19T12:46:39Z"}],"hostIP":"192.168.64.13","podIP":"172.17.0.11","startTime":"2018-10-19T12:46:39Z","containerStatuses":[{"name":"test","state":{"running":{"startedAt":"2018-10-19T12:46:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"centos:7","imageID":"docker-pullable://centos#sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b","containerID":"docker://5c25f5fce576d68d743afc9b46a9ea66f3cd245f5075aa95def623b6c2d93256"}],"qosClass":"BestEffort"}}
I1019 13:49:34.316531 71247 loader.go:359] Config loaded from file /Users/mhausenblas/.kube/config
I1019 13:49:34.317000 71247 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m/log?follow=true'
I1019 13:49:34.339341 71247 round_trippers.go:405] GET https://192.168.64.13:8443/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m/log?follow=true 200 OK in 22 milliseconds
I1019 13:49:34.339380 71247 round_trippers.go:411] Response Headers:
I1019 13:49:34.339390 71247 round_trippers.go:414] Content-Type: text/plain
I1019 13:49:34.339407 71247 round_trippers.go:414] Date: Fri, 19 Oct 2018 12:49:34 GMT
Work
Work
Work
^C
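To replay that streaming GET by hand without worrying about certificates and tokens, one option is to go through kubectl proxy; a small sketch (pod name taken from the session above, -N turns off curl's buffering so lines appear as they are logged):
kubectl proxy &
curl -N 'http://127.0.0.1:8001/api/v1/namespaces/default/pods/test-769f6f8c9f-2nx7m/log?follow=true'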
If you fetch any Kubernetes object using kubectl at the highest debugging level (-v 9) with the streaming option -f, for example kubectl logs -f <pod-name> -v 9, you can see that kubectl passes the follow=true flag in the API request, acquires the logs from the target Pod, and streams them to the output:
curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.1 (linux/amd64) kubernetes/4ed3216" 'https://API_server_IP/api/v1/namespaces/default/pods/Pod-name/log?follow=true'
You can issue your own API requests by following these steps:
Obtain a token for authorization:
MY_TOKEN="$(kubectl get secret <default-secret> -o jsonpath='{$.data.token}' | base64 -d)"
Then you can manually retrieve the required data from the API server directly:
curl -k -v -H "Authorization: Bearer $MY_TOKEN" https://API_server_IP/api/v1/namespaces/default/pods
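The same pattern should work for the streaming log endpoint discussed above, e.g. (pod name is a placeholder):
curl -k -N -H "Authorization: Bearer $MY_TOKEN" "https://API_server_IP/api/v1/namespaces/default/pods/<pod-name>/log?follow=true"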
I created a service account teamcity to be able to use the kube-apiserver:
kubectl create serviceaccount teamcity
With the command below I get the secret's name:
kubectl get serviceaccounts teamcity -o yaml
To find the token that was generated by the last command, I use:
kubectl get secret teamcity-token-lmr6z -o yaml
When I try to connect with curl I get an error, and I don't understand where my mistake is :(
curl -v -Sskk -H "Authorization: bearer ZXlKaGJH......wWHNIVzZ3" https://10.109.0.88:6443/api/v1/namespaces
HTTP/1.1 401 Unauthorized
Content-Type: application/json
Date: Thu, 05 Jul 2018 13:14:00 GMT
Content-Length: 165
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
* Connection #0 to host 10.109.0.88 left intact
I found a short description in the Kubernetes docs about why I get this error (section: Anonymous requests): https://kubernetes.io/docs/reference/access-authn-authz/authentication/
But I still don't understand where my mistake is, because with kubectl it works:
kubectl --token=ZXlKaGJHY2lPaUpTVXpJ........swWHNIVzZ3 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes NodePort 192.2.0.159 <none> 80:17502/TCP 13d
hello-kubernetes-olivier NodePort 192.2.0.235 <none> 80:17296/TCP 13d
kubernetes ClusterIP 192.2.0.1 <none> 443/TCP 14d
It might be a typo on your side in the "bearer" part; as I remember, it should be "Bearer".
A command sample is as follows, from Kubernetes - Accessing Clusters:
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
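Applied to the original command, a hedged sketch: decode the token from the secret (it is base64-encoded in the kubectl get secret -o yaml output) and capitalize "Bearer":
# secret name and API server address taken from the question above
TOKEN=$(kubectl get secret teamcity-token-lmr6z -o jsonpath='{.data.token}' | base64 -d)
curl -k -H "Authorization: Bearer $TOKEN" https://10.109.0.88:6443/api/v1/namespaces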