Kubernetes API: list pods with a label - kubernetes

I have a namespace with a few deployments. One of the deployments has a specific label (my-label=yes). I want to get all the pods with this label.
This is how it's done with kubectl:
kdev get pods -l my-label=yes
It works.
Now I want to do it with the Kubernetes API. This is the closest I've got:
curl https://kubernetes.default.svc/api/v1/namespaces/XXX/pods --silent --header "Authorization: Bearer $TOKEN" --insecure
This command gets all the pods in the namespace. I want to filter the results to only the pods with the requested label. How can I do it?
An even broader question: is it possible to "translate" a kubectl command into a REST API call?

Is it possible to "translate" a kubectl command into a REST API call?
When you execute any command using kubectl, it is internally translated into a REST call with a JSON payload before being sent to the Kubernetes API server. An easy way to inspect this is to run the command with increased verbosity:
kubectl get pods -n kube-system -l=tier=control-plane --v=8
I0730 15:21:01.907211 5320 loader.go:375] Config loaded from file: /Users/arghyasadhu/.kube/config
I0730 15:21:01.912119 5320 round_trippers.go:420] GET https://xx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane&limit=500
I0730 15:21:01.912135 5320 round_trippers.go:427] Request Headers:
I0730 15:21:01.912139 5320 round_trippers.go:431] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0730 15:21:01.912143 5320 round_trippers.go:431] User-Agent: kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141
I0730 15:21:02.071778 5320 round_trippers.go:446] Response Status: 200 OK in 159 milliseconds
I0730 15:21:02.071842 5320 round_trippers.go:449] Response Headers:
I0730 15:21:02.071858 5320 round_trippers.go:452] Cache-Control: no-cache, private
I0730 15:21:02.071865 5320 round_trippers.go:452] Content-Type: application/json
I0730 15:21:02.071870 5320 round_trippers.go:452] Date: Thu, 30 Jul 2020 09:51:02 GMT
I0730 15:21:02.114281 5320 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"1150005"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"integer","format":"","description":"The number of times the containers in this pod have been restarted.","priority":0},{"name":"Age","type":"strin [truncated 16503 chars]

Found it:
curl "https://kubernetes.default.svc/api/v1/namespaces/XXX/pods?labelSelector=my-label%3Dyes" --silent --header "Authorization: Bearer $TOKEN" --insecure
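The %3D is just the URL-encoded =. If you would rather not encode the selector by hand, curl can do it for you; a small sketch using the same placeholder namespace and token variable as above:
curl --silent --insecure \
  --header "Authorization: Bearer $TOKEN" \
  --get --data-urlencode "labelSelector=my-label=yes" \
  "https://kubernetes.default.svc/api/v1/namespaces/XXX/pods"
With --get, curl appends the --data-urlencode value to the URL as a query string instead of sending it as a POST body.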

Related

Helm not adding timeout param in API server call

When we use Helm to create Istio VirtualService or DestinationRule resources, it takes more than 30 seconds, so we have set a higher timeout of 5m.
Command used
helm upgrade --install --wait --timeout 5m --v 9 helloworld ./templateDir
However, I see that Helm is not passing the new timeout value in the API server call.
When we use kubectl to create the same resources, we set the --request-timeout param for kubectl and everything is fine.
Since it is a timeout from the API server, is it possible to set a higher timeout value for all API server requests from Helm? Is there any other workaround you can suggest for this problem?
I0528 17:52:57.664992 11148 round_trippers.go:423] curl -k -v -XPOST -H "Content-Type: application/json" -H "Accept: application/json" 'https://aaaa.sk1.us-east-1.eks.amazonaws.com/apis/networking.istio.io/v1beta1/namespaces/default/destinationrules '
I0528 17:53:27.981691 11148 round_trippers.go:443] POST https://aaaa.sk1.us-east-1.eks.amazonaws.com/apis/networking.istio.io/v1beta1/namespaces/default/destinationrules 504 Gateway Timeout in 30316 milliseconds
I0528 17:53:27.981691 11148 round_trippers.go:449] Response Headers:
I0528 17:53:27.981691 11148 round_trippers.go:452] Audit-Id: cba23005-b8db-47f7-8801-4c89e6447cd3
I0528 17:53:27.981691 11148 round_trippers.go:452] Content-Type: application/json
I0528 17:53:27.981691 11148 round_trippers.go:452] Content-Length: 187
I0528 17:53:27.981691 11148 round_trippers.go:452] Date: Thu, 28 May 2020 12:23:27 GMT
I0528 17:53:27.981691 11148 request.go:1017] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Timeout: request did not complete within requested timeout 30s","reason":"Timeout","details":{},"code":504}
I0528 17:53:27.982759 11148 request.go:1017] Request Body: {"apiVersion":"networking.istio.io/v1beta1","kind":"VirtualService","metadata":{"name":"mbrsvc","namespace":"default"},"spec":{"hosts":["mbrsvc"],"http":[{"route":[{"destination":{"host":"mbrsvc","subset":"1.0.0"},"weight":100}]}]}}
The documentation states that --timeout should be a value in seconds.
--timeout: A value in seconds to wait for Kubernetes commands to complete. This defaults to 5m0s.
--wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable) Pods in ready state and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as successful. It will wait for as long as the --timeout value. If timeout is reached, the release will be marked as FAILED. Note: In scenario where Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of rolling update strategy, --wait will return as ready as it has satisfied the minimum Pod in ready condition.
Hope that helps.
This timeout comes from the server (code 504), not from the client (Helm). Providing a --timeout is not going to have any influence here, I'm afraid.
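For comparison with the kubectl behaviour mentioned in the question: a rough sketch, assuming a recent kubectl where the client-side --request-timeout is also sent along as the timeout= query parameter (visible with increased verbosity):
kubectl get pods --request-timeout=5m --v=8
# the logged request URL then carries the timeout, roughly:
# GET https://<apiserver>:6443/api/v1/namespaces/default/pods?limit=500&timeout=5m0s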

Unable to use-context from kubeconfig file

I'm trying to add a new cluster and its context to the kubeconfig file using a username and password, but it's failing. Below are the commands I'm using to set the context.
kubectl config set-cluster lab101 --server=https://api-kube.example.com:8443 --insecure-skip-tls-verify --context=lab101
kubectl config set-credentials kubeadmin --username=kubeadmin --password=xxxxxxx --cluster=lab101
kubectl config set-context lab101 --cluster=lab101 --namespace=default --user=kubeadmin
kubectl config use-context lab101
Logs:
GET https://api-kube.example.com:8443/api?timeout=32s 403 Forbidden in 19 milliseconds
I0422 11:37:31.741005 18972 round_trippers.go:411] Response Headers:
I0422 11:37:31.741005 18972 round_trippers.go:414] Cache-Control: no-cache, private
I0422 11:37:31.741005 18972 round_trippers.go:414] Content-Type: application/json
I0422 11:37:31.741005 18972 round_trippers.go:414] X-Content-Type-Options: nosniff
I0422 11:37:31.741005 18972 round_trippers.go:414] Content-Length: 188
I0422 11:37:31.741005 18972 round_trippers.go:414] Date: Wed, 22 Apr 2020 15:37:31 GMT
I0422 11:37:31.762977 18972 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/api\"","reason":"Forbidden","details":{},"code":403}
Note: If I use the same username/password with oc login, they work fine. I don't understand why it won't work if I set the kubeconfig manually.
It's unlikely that OpenShift Kubernetes allows authentication using username and password. The oc login command internally authenticates with the username and password against an OAuth server to get a bearer token, which is automatically stored in the kubeconfig file used by kubectl. When you run any kubectl command, that token is used to authenticate with the Kubernetes cluster.
You can check the token via oc config view. You can get the token and set it with kubectl config set-credentials kubeadmin --token=bearertoken and it should work.
Refer to the docs here.
Alternatively you can follow this doc here to get a bearer token.
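A minimal sketch of that flow, assuming oc is already logged in (oc whoami -t prints the current bearer token):
TOKEN=$(oc whoami -t)
kubectl config set-credentials kubeadmin --token="$TOKEN"
kubectl config set-context lab101 --cluster=lab101 --namespace=default --user=kubeadmin
kubectl config use-context lab101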

Setup Kubernetes HA cluster with kubeadm and F5 as load-balancer

I'm trying to set up a Kubernetes HA cluster using kubeadm as the installer and an F5 as the load balancer (I cannot use HAProxy). I'm experiencing issues with the F5 configuration.
I'm using self-signed certificates and passed the apiserver.crt and apiserver.key to the load balancer.
For some reasons the kubeadm init script fails with the following error:
[apiclient] All control plane components are healthy after 33.083159 seconds
I0805 10:09:11.335063 1875 uploadconfig.go:109] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0805 10:09:11.340266 1875 request.go:947] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - $F5_LOAD_BALANCER_VIP\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: $F5_LOAD_BALANCER_VIP:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.1\nnetworking:\n dnsDomain: cluster.local\n podSubnet: 192.168.0.0/16\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n lnxkbmaster02:\n advertiseAddress: $MASTER01_IP\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
I0805 10:09:11.340459 1875 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.15.1 (linux/amd64) kubernetes/4485c6f" 'https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps'
I0805 10:09:11.342399 1875 round_trippers.go:438] POST https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps 403 Forbidden in 1 milliseconds
I0805 10:09:11.342449 1875 round_trippers.go:444] Response Headers:
I0805 10:09:11.342479 1875 round_trippers.go:447] Content-Type: application/json
I0805 10:09:11.342507 1875 round_trippers.go:447] X-Content-Type-Options: nosniff
I0805 10:09:11.342535 1875 round_trippers.go:447] Date: Mon, 05 Aug 2019 08:09:11 GMT
I0805 10:09:11.342562 1875 round_trippers.go:447] Content-Length: 285
I0805 10:09:11.342672 1875 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps is forbidden: User \"system:anonymous\" cannot create resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"kind":"configmaps"},"code":403}
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create ConfigMap: configmaps is forbidden: User "system:anonymous" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
The init is really basic:
kubeadm init --config=kubeadm-config.yaml --upload-certs
Here's the kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$F5_LOAD_BALANCER_VIP:6443"
networking:
podSubnet: "192.168.0.0/16"
If I set up the cluster using HAProxy, the init runs smoothly:
#---------------------------------------------------------------------
# kubernetes
#---------------------------------------------------------------------
frontend kubernetes
    bind $HAPROXY_LOAD_BALANCER_IP:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master01.my-domain $MASTER_01_IP:6443 check fall 3 rise 2
    server master02.my-domain $MASTER_02_IP:6443 check fall 3 rise 2
    server master03.my-domain $MASTER_03_IP:6443 check fall 3 rise 2
My solution was to deploy the cluster without the proxy (F5), with a configuration as follows:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$MASTER_1_IP:6443"
networking:
podSubnet: "192.168.0.0/16"
Afterwards it was necessary to deploy the F5 BIG-IP Controller for Kubernetes on the cluster, to manage the F5 device from Kubernetes.
Detailed guide can be found here:
https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.10/
Beware that it requires an additional F5 license and admin privileges.

error: the server doesn't have a resource type "svc"

Getting error: the server doesn't have a resource type "svc" when testing kubectl configuration whilst following this guide:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
Detailed Error
$ kubectl get svc -v=8
I0712 15:30:24.902035 93745 loader.go:357] Config loaded from file /Users/matt.canty/.kube/config-test
I0712 15:30:24.902741 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:24.902762 93745 round_trippers.go:390] Request Headers:
I0712 15:30:24.902768 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:24.902773 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.425614 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 522 milliseconds
I0712 15:30:25.425651 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.425657 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.425662 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.425670 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.426757 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.428104 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.428239 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.428258 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.428268 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.428278 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.577788 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.577818 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.577838 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.577854 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.577868 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.578876 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.579492 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.579851 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.579864 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.579873 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.579879 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.729513 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.729541 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.729547 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.729552 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.729557 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.730606 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.731228 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.731254 93745 factory_object_mapping.go:93] Unable to retrieve API resources, falling back to hardcoded types: Unauthorized
F0712 15:30:25.731493 93745 helpers.go:119] error: the server doesn't have a resource type "svc"
Screenshot of EKS Cluster in AWS
Version
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:03:09Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Config
Kubctl Config
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: personal
AWS Config
cat .aws/config
[profile personal]
source_profile = personal
AWS Credentials
$ cat .aws/credentials
[personal]
aws_access_key_id = REDACTED
aws_secret_access_key = REDACTED
 ~/.kube/config-test
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: personal
Similar issues
error-the-server-doesnt-have-resource-type-svc
the-connection-to-the-server-localhost8080-was-refused-did-you-specify-the-ri
I just had a similar issue which I managed to resolve with AWS support. The issue was that the cluster was created with a role that was assumed by the user, but kubectl was not assuming this role with the default kubeconfig created by the AWS CLI.
I fixed the issue by providing the role in the users section of the kubeconfig:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      - -r
      - <arn::of::your::role>
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: personal
I believe heptio-authenticator-aws has since been renamed to aws-iam-authenticator, but this change was what allowed me to use the cluster.
The 401s look like a permissions issue. Did your user create the cluster?
In the docs: "When you create an Amazon EKS cluster, the IAM entity (user or role) is automatically granted system:master permissions in the cluster's RBAC configuration. To grant additional AWS users the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes."
If it was created by a different user, you'll need to use that user, configured in the CLI, to execute kubectl.
Just delete the cache and http-cache directories in the .kube folder and try running the command again:
kubectl get svc
Also make sure that your config file is properly indented; syntax errors can sometimes cause that error too.
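For reference, with the default cache locations, the deletion step above is roughly:
rm -rf ~/.kube/cache ~/.kube/http-cache
kubectl get svc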
You need to make sure the credentials used to create the cluster and the credentials used to execute kubectl in the CLI are the same. In my case I created the cluster via the console, which used temporary AWS credentials that expire, whereas kubectl used the actual permanent credentials.
To fix the error, I created the cluster from the AWS CLI as well.
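A rough sketch of the CLI route, so the same permanent credentials both create the cluster and drive kubectl; the cluster name, role ARN, subnets, and security group below are placeholders:
aws eks create-cluster --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc
aws eks update-kubeconfig --name my-cluster
kubectl get svc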
I had this issue where my KUBECONFIG environment variable had more than one value; it looked something like:
:/Users/my-user/.kube/config-firstcluster:/Users/my-user/.kube/config-secondcluster
Try unsetting and resetting the environment variable so it holds only one value and see if that works for you.
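Something along these lines (the paths are the ones from my machine above):
unset KUBECONFIG
export KUBECONFIG=/Users/my-user/.kube/config-firstcluster
kubectl get svc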
Possible solution if you created the cluster in the UI
If you created the cluster in the UI, it's possible the AWS root user created the cluster. According to the docs, "When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:master) permissions. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. "
You'll need to first log in to the AWS CLI as the root user in order to update the permissions of the IAM user you want to have access to the cluster.
You'll need to get an access key for the root user and put this info in .aws/credentials under the default user. You can do this using the command aws configure.
Now kubectl get svc works, since you're logged in as the root user that initially created the cluster.
Apply the aws-auth ConfigMap to the cluster. Follow step 2 from these docs, using the NodeInstanceRole value you got as the Output from Step 3: Launch and Configure Amazon EKS Worker Nodes
To add a non-root IAM user or role to an Amazon EKS cluster, follow step 3 from these docs.
Edit configmap/aws-auth and add the other users that need kubectl access in the mapUsers section (see the sketch after these steps).
Run aws configure again and add the access key info from your non-root user.
Now you can access your cluster from the AWS CLI and using kubectl.
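For reference, the aws-auth edit mentioned above looks roughly like this; the account ID, user name, and group are placeholders following the pattern in the EKS docs, not values from this cluster:
kubectl edit -n kube-system configmap/aws-auth
# then add an entry under data.mapUsers along these lines:
#   mapUsers: |
#     - userarn: arn:aws:iam::111122223333:user/my-user
#       username: my-user
#       groups:
#         - system:masters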
I ran into this error, and it was a DIFFERENT kube config issue, so the
error: the server doesn't have a resource type “svc”
error is probably very generic.
In my case, the solution was to remove the quotes around the certificate-authority-data.
Example
(not working)
certificate-authority-data:"xyxyxyxyxyxy"
(working)
certificate-authority-data: xyxyxyxyxyxy
I had a similar issue where I was not able to list any Kubernetes objects using kubectl. I tried the following commands but got the same "error: the server doesn't have a resource type object_name":
kubectl get pod
kubectl get service
kubectl get configmap
kubectl get namespace
I checked my k8s dashboard and it was working fine, so I understood that there was a problem when kubectl tries to connect to kube-apiserver. I decided to curl the apiserver with the existing certificates, but that requires the certificate key and crt files. By default, kubectl reads the config from $HOME/.kube/config and looks for a context. In the case of multiple clusters, check the value of current-context: your_user#cluster_name. In the users section, check your_user and save the values of client-certificate/client-certificate-data and client-key/client-key-data to files with the following steps.
echo "value of client-certificate-data" | base64 --decode > your_user.crt
echo "value of client-key-data" | base64 --decode > your_user.key
# check the validity of the certificate
openssl x509 -in your_user.crt -text
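# to see only the expiry date of the same certificate:
openssl x509 -in your_user.crt -noout -enddate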
If the certificate has expired, create a new certificate and try to authenticate:
openssl genrsa -out your_user.key 2048
openssl req -new -key your_user.key -subj "/CN=check_cn_from_existing_certificate_crt_file" -out your_user.csr
openssl x509 -req -in your_user.csr -CA /$PATH/ca.crt -CAkey /$PATH/ca.key -CAcreateserial -out your_user.crt -days 30
# Get the apiserver ip
APISERVER=$(cat ~/.kube/config | grep server | cut -f 2- -d ":" | tr -d " ")
# Authenticate with apiserver using curl command
curl $APISERVER/api/v1/pods \
--cert your_user.crt \
--key your_user.key \
--cacert /$PATH/ca.crt
If you are able to see the pods, then update the certificate paths in the config file.
Final output of $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /$PATH/ca.crt
    server: https://192.168.0.143:8443   # ($APISERVER)
  name: cluster_name
contexts:
- context:
    cluster: cluster_name
    user: your_user
  name: your_user#cluster_name
current-context: your_user#cluster_name
kind: Config
preferences: {}
users:
- name: your_user
  user:
    client-certificate: /$PATH/your_user.crt
    client-key: /$PATH/your_user.key
Now you should be able to list pods and other resources using kubectl.

Kubernetes API Access from Pod

I'm trying to access the Kubernetes API in order to discover pods from within a deployed container. Although I'll eventually do this programmatically, right now I'm just using cURL to check for issues.
I run this from a pod terminal:
curl -vvv -H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)" "https://kubernetes.default/api/v1/namespaces/$(</var/run/secrets/kubernetes.io/serviceaccount/namespace)/endpoints" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
And I get a 403 result:
* About to connect() to kubernetes.default port 443 (#0)
* Trying 172.30.0.1...
* Connected to kubernetes.default (172.30.0.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
CApath: none
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
* Server certificate:
* subject: CN=10.0.75.2
* start date: Nov 23 16:55:27 2017 GMT
* expire date: Nov 23 16:55:28 2019 GMT
* common name: 10.0.75.2
* issuer: CN=openshift-signer#1511456125
> GET /api/v1/namespaces/myproject/endpoints HTTP/1.1
> User-Agent: curl/7.29.0
> Host: kubernetes.default
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJteXByb2plY3QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi00cXZidCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjg3NzAzYjEtZDA4OC0xMWU3LTkzZjQtNmEyNGZhYWZjYzQxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om15cHJvamVjdDpkZWZhdWx0In0.yl2HUhmxjrb4UqkAioq1TixWl_YqUPoxSvQPPSgl9Hzr97Hjm7icdL_mdptwEnOSErfzqSUBiMKJcIRdIa3Z7mfkgEk-f2H-M7TUU8GpXmD2Zex6Bcn_dq-Hsoed6W2PYpeFDoy98p5rSNTUL5MPMATOodeAulB0NG_zF01-8qTbLO_I6FRa3BCVXVMaZWBoZgwZ1acQbd4fJqDRsYmQMSi5P8a3nYgjBdifkQeTTb3S8Kmnszct41LoUlh9Xv29YVEyr1uQc5DSLAgQKj_NdSxkVq-MJP8z1PWV3OmHULNChocXr7RGKaNwlVpwpgNqsDAOqIyE1ozxlntIrotLBw
>
< HTTP/1.1 403 Forbidden
< Cache-Control: no-store
< Content-Type: application/json
< Date: Thu, 23 Nov 2017 22:18:01 GMT
< Content-Length: 282
<
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "User \"system:serviceaccount:myproject:default\" cannot list endpoints in project \"myproject\"",
"reason": "Forbidden",
"details": {
"kind": "endpoints"
},
"code": 403
}
* Connection #0 to host kubernetes.default left intact
I've tried to access a number of resources, like endpoints, pods, etc. I've also omitted the namespace (so as to access cluster-wide resources), to no avail.
I'm currently using OpenShift Origin, clean (just ran oc cluster up and deployed a test image to access the terminal in the web console).
It looks like you're on a fully RBAC-enabled cluster, and your default service account, system:serviceaccount:myproject:default, is, as expected, unauthorized. You should create and use a dedicated service account for this pod and explicitly grant it access to what it needs to read.
https://kubernetes.io/docs/admin/authorization/rbac/
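A minimal sketch of that, using kubectl's built-in RBAC helpers; the endpoint-reader names are placeholders, and the verbs/resources should be narrowed to what the pod actually needs:
kubectl create serviceaccount endpoint-reader -n myproject
kubectl create role endpoint-reader -n myproject --verb=get,list,watch --resource=endpoints,pods
kubectl create rolebinding endpoint-reader -n myproject --role=endpoint-reader --serviceaccount=myproject:endpoint-reader
# then set serviceAccountName: endpoint-reader in the pod spec so the mounted token belongs to it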
Pass an authorization bearer token within the curl command. Without it, it's expected to be unauthorized.
More at: kubernetes documentation