Set up a Kubernetes HA cluster with kubeadm and F5 as load balancer

I'm trying to set up a Kubernetes HA cluster using kubeadm as the installer and F5 as the load balancer (I cannot use HAProxy). I'm experiencing issues with the F5 configuration.
I'm using self-signed certificates and passed the apiserver.crt and apiserver.key to the load balancer.
For some reason the kubeadm init script fails with the following error:
[apiclient] All control plane components are healthy after 33.083159 seconds
I0805 10:09:11.335063 1875 uploadconfig.go:109] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0805 10:09:11.340266 1875 request.go:947] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - $F5_LOAD_BALANCER_VIP\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: $F5_LOAD_BALANCER_VIP:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.1\nnetworking:\n dnsDomain: cluster.local\n podSubnet: 192.168.0.0/16\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n lnxkbmaster02:\n advertiseAddress: $MASTER01_IP\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
I0805 10:09:11.340459 1875 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.15.1 (linux/amd64) kubernetes/4485c6f" 'https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps'
I0805 10:09:11.342399 1875 round_trippers.go:438] POST https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps 403 Forbidden in 1 milliseconds
I0805 10:09:11.342449 1875 round_trippers.go:444] Response Headers:
I0805 10:09:11.342479 1875 round_trippers.go:447] Content-Type: application/json
I0805 10:09:11.342507 1875 round_trippers.go:447] X-Content-Type-Options: nosniff
I0805 10:09:11.342535 1875 round_trippers.go:447] Date: Mon, 05 Aug 2019 08:09:11 GMT
I0805 10:09:11.342562 1875 round_trippers.go:447] Content-Length: 285
I0805 10:09:11.342672 1875 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps is forbidden: User \"system:anonymous\" cannot create resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"kind":"configmaps"},"code":403}
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create ConfigMap: configmaps is forbidden: User "system:anonymous" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
The init is really basic:
kubeadm init --config=kubeadm-config.yaml --upload-certs
Here's the kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$F5_LOAD_BALANCER_VIP:6443"
networking:
  podSubnet: "192.168.0.0/16"
If I set up the cluster using HAProxy instead, the init runs smoothly:
#---------------------------------------------------------------------
# kubernetes
#---------------------------------------------------------------------
frontend kubernetes
    bind $HAPROXY_LOAD_BALANCER_IP:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master01.my-domain $MASTER_01_IP:6443 check fall 3 rise 2
    server master02.my-domain $MASTER_02_IP:6443 check fall 3 rise 2
    server master03.my-domain $MASTER_03_IP:6443 check fall 3 rise 2

My solution was to deploy the cluster without the F5 proxy, using a configuration like the following:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$MASTER_1_IP:6443"
networking:
  podSubnet: "192.168.0.0/16"
Afterwards it was necessary to deploy the F5 BIG-IP Controller for Kubernetes on the cluster to manage the F5 device from Kubernetes.
A detailed guide can be found here:
https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.10/
Beware that it requires an additional F5 license and admin privileges.
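One way to narrow down where the handshake breaks is to call the API server through the VIP with one of kubeadm's client certificates. A 403 for system:anonymous via the F5, while the same request works against a master directly, suggests the load balancer is re-terminating TLS and not passing the client certificate through. A sketch, assuming kubeadm's default PKI paths on a master node:
# run on a master node; paths are kubeadm defaults
curl --cacert /etc/kubernetes/pki/ca.crt \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key /etc/kubernetes/pki/apiserver-kubelet-client.key \
  https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps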

Related

Kubernetes API: list pods with a label

I have a namespace with a few deployments. One of the deployments has a specific label (my-label=yes). I want to get all the pods with this label.
This is how it's done with kubectl:
kdev get pods -l my-label=yes
It works.
Now I want to do it with the Kubernetes API. This is the closest I've gotten:
curl https://kubernetes.default.svc/api/v1/namespaces/XXX/pods --silent --header "Authorization: Bearer $TOKEN" --insecure
This command gets all the pods in the namespace. I want to filter the results to only the pods with the requested label. How do I do that?
An even broader question: is it possible to "translate" a kubectl command into a REST API call?
Is this possible to "translate" kubectl command into REST API call?
When you execute any command using kubectl it internally gets translated into a REST call with json payload before sending it to Kubernetes API Server. An easy way to inspect that is to run the command with verbosity increased
kubectl get pods -n kube-system -l=tier=control-plane --v=8
I0730 15:21:01.907211 5320 loader.go:375] Config loaded from file: /Users/arghyasadhu/.kube/config
I0730 15:21:01.912119 5320 round_trippers.go:420] GET https://xx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane&limit=500
I0730 15:21:01.912135 5320 round_trippers.go:427] Request Headers:
I0730 15:21:01.912139 5320 round_trippers.go:431] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0730 15:21:01.912143 5320 round_trippers.go:431] User-Agent: kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141
I0730 15:21:02.071778 5320 round_trippers.go:446] Response Status: 200 OK in 159 milliseconds
I0730 15:21:02.071842 5320 round_trippers.go:449] Response Headers:
I0730 15:21:02.071858 5320 round_trippers.go:452] Cache-Control: no-cache, private
I0730 15:21:02.071865 5320 round_trippers.go:452] Content-Type: application/json
I0730 15:21:02.071870 5320 round_trippers.go:452] Date: Thu, 30 Jul 2020 09:51:02 GMT
I0730 15:21:02.114281 5320 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"1150005"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"integer","format":"","description":"The number of times the containers in this pod have been restarted.","priority":0},{"name":"Age","type":"strin [truncated 16503 chars]
Found it.
curl https://kubernetes.default.svc/api/v1/namespaces/XXX/pods?labelSelector=my-label%3Dyes --silent --header "Authorization: Bearer $TOKEN" --insecure
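For completeness, when running from inside a pod the same call can use the mounted ServiceAccount token and CA bundle instead of --insecure. A sketch, keeping the namespace placeholder XXX from the question:
# standard ServiceAccount mount paths inside a pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --silent --cacert "$CACERT" \
  --header "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/XXX/pods?labelSelector=my-label%3Dyes"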

Unable to use-context from kubeconfig file

I'm trying to add a new cluster and its context to the kubeconfig file using a username and password, but it's failing. Below are the commands I'm using to set the context.
kubectl config set-cluster lab101 --server=https://api-kube.example.com:8443 --insecure-skip-tls-verify --context=lab101
kubectl config set-credentials kubeadmin --username=kubeadmin --password=xxxxxxx --cluster=lab101
kubectl config set-context lab101 --cluster=lab101 --namespace=default --user=kubeadmin
kubectl config use-context lab101
Logs:
GET https://api-kube.example.com:8443/api?timeout=32s 403 Forbidden in 19 milliseconds
I0422 11:37:31.741005 18972 round_trippers.go:411] Response Headers:
I0422 11:37:31.741005 18972 round_trippers.go:414] Cache-Control: no-cache, private
I0422 11:37:31.741005 18972 round_trippers.go:414] Content-Type: application/json
I0422 11:37:31.741005 18972 round_trippers.go:414] X-Content-Type-Options: nosniff
I0422 11:37:31.741005 18972 round_trippers.go:414] Content-Length: 188
I0422 11:37:31.741005 18972 round_trippers.go:414] Date: Wed, 22 Apr 2020 15:37:31 GMT
I0422 11:37:31.762977 18972 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/api\"","reason":"Forbidden","details":{},"code":403}
Note: if I use the same username and password with oc login, they work fine. I don't understand why it won't work when I set up the kubeconfig manually.
It's unlikely that OpenShift Kubernetes allows authentication using a username and password directly. The oc login command internally authenticates with the username and password against an OAuth server to get a bearer token, which is automatically stored in the kubeconfig file used by kubectl. When you run any kubectl command, that token is used to authenticate with the Kubernetes cluster.
You can check the token via oc config view. You can take the token and set it with kubectl config set-credentials kubeadmin --token=bearertoken and it should work.
Refer to the docs here.
Alternatively, you can follow this doc here to get a bearer token.
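A minimal sketch of that token workflow, assuming an active oc login session (oc whoami -t prints the current session token):
# reuse the bearer token that oc login obtained
TOKEN=$(oc whoami -t)
kubectl config set-credentials kubeadmin --token="$TOKEN"
kubectl config set-context lab101 --cluster=lab101 --namespace=default --user=kubeadmin
kubectl config use-context lab101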

Exposing virtual service with istio and mTLS globally enabled

I have this configuration on my service mesh:
- mTLS globally enabled and the default MeshPolicy
- simple-web deployment exposed as a ClusterIP service on port 8080
- HTTP gateway for port 80 and a VirtualService routing to my service
Here are the gateway and virtual service YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # Specify the ingressgateway created for us
  servers:
  - port:
      number: 80 # Service port to watch
      name: http-gateway
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-web
spec:
  gateways:
  - http-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /simple-web
    rewrite:
      uri: /
    route:
    - destination:
        host: simple-web
        port:
          number: 8080
Both vs and gw are in the same namespace.
The deployment was created and exposed with these commands:
k create deployment --image=yeasy/simple-web:latest simple-web
k expose deployment simple-web --port=8080 --target-port=80 --name=simple-web
and with k get pods I receive this:
pod/simple-web-9ffc59b4b-n9f85 2/2 Running
What happens is that from outside, pointing at the ingress-gateway load balancer, I receive an HTTP 503 error.
If I try to curl from the ingressgateway pod, I can reach the simple-web service.
Why can't I reach the website with mTLS enabled? What's the correct configuration?
As @suren mentioned in his answer, this issue is not present in Istio version 1.3.2, so one solution is to use a newer version.
If you choose to upgrade Istio to a newer version, please review the 1.3 Upgrade Notice and Upgrade Steps documentation, as Istio is still in development and changes drastically with each version.
Also, as mentioned in the comments by @Manuel Castro, this is most likely the issue addressed in Avoid 503 errors while reconfiguring service routes, and newer versions simply handle it better.
Creating both the VirtualServices and DestinationRules that define the corresponding subsets using a single kubectl call (e.g., kubectl apply -f myVirtualServiceAndDestinationRule.yaml) is not sufficient because the resources propagate (from the configuration server, i.e., the Kubernetes API server) to the Pilot instances in an eventually consistent manner. If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
It should be possible to avoid this issue by temporarily disabling mTLS or by using permissive mode during the deployment.
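A minimal sketch of switching the mesh-wide policy to permissive mode on the Istio 1.x authentication API used here (verify the exact API version against your installed release):
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE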
I just installed istio-1.3.2 and k8s 1.15.1 to reproduce your issue, and it worked without any modifications. This is what I did:
0.- create a namespace called istio and enable sidecar injection automatically.
1.- $ kubectl run nginx --image nginx -n istio
2.- $ kubectl expose deploy nginx --port 8080 --target-port 80 --name simple-web -n istio
3.- $ kubectl create -f gw.yaml -f vs.yaml
Note: these are your files.
The test:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:04:26 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 24 Sep 2019 14:49:10 GMT
etag: "5d8a2ce6-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 4
[2019-10-11T10:04:26.101Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 6 4 "10.132.0.36" "curl/7.52.1" "4bbc2609-a928-9f79-9ae8-d6a3e32217d7" "a.b.c.d:31380" "192.168.171.73:80" outbound|8080||simple-web.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:37078 - -
And to be sure mTLS was enabled, this is from ingress-gateway describe command:
--controlPlaneAuthPolicy
MUTUAL_TLS
So, I don't know what is wrong, but you might want to go through these steps and rule things out.
Note: the reason I am hitting the Istio gateway on port 31380 is that my k8s cluster is on VMs right now, and I didn't want to spin up a GKE cluster for a test.
EDIT
Just deployed another deployment with your image, exposed it as simple-web-2, and it worked again. Maybe I'm just lucky with Istio:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:28:45 GMT
content-type: text/html
content-length: 354
last-modified: Fri, 11 Oct 2019 10:28:46 GMT
x-envoy-upstream-service-time: 4
[2019-10-11T10:28:46.400Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 5 4 "10.132.0.36" "curl/7.52.1" "df0dd00a-875a-9ae6-bd48-acd8be1cc784" "a.b.c.d:31380" "192.168.171.65:80" outbound|8080||simple-web-2.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:42980 - -
What's your k8s environment?
EDIT2
# istioctl authn tls-check curler-6885d9fd97-vzszs simple-web.istio.svc.cluster.local -n istio
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
simple-web.istio.svc.cluster.local:8080 OK mTLS mTLS default/ default/istio-system

error: the server doesn't have a resource type "svc"

Getting error: the server doesn't have a resource type "svc" when testing kubectl configuration whilst following this guide:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
Detailed Error
$ kubectl get svc -v=8
I0712 15:30:24.902035 93745 loader.go:357] Config loaded from file /Users/matt.canty/.kube/config-test
I0712 15:30:24.902741 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:24.902762 93745 round_trippers.go:390] Request Headers:
I0712 15:30:24.902768 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:24.902773 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.425614 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 522 milliseconds
I0712 15:30:25.425651 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.425657 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.425662 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.425670 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.426757 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.428104 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.428239 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.428258 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.428268 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.428278 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.577788 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.577818 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.577838 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.577854 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.577868 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.578876 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.579492 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.579851 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.579864 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.579873 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.579879 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.729513 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.729541 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.729547 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.729552 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.729557 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.730606 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.731228 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.731254 93745 factory_object_mapping.go:93] Unable to retrieve API resources, falling back to hardcoded types: Unauthorized
F0712 15:30:25.731493 93745 helpers.go:119] error: the server doesn't have a resource type "svc"
Screenshot of EKS Cluster in AWS
Version
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:03:09Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Config
Kubectl Config
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: personal
AWS Config
cat .aws/config
[profile personal]
source_profile = personal
AWS Credentials
$ cat .aws/credentials
[personal]
aws_access_key_id = REDACTED
aws_secret_access_key = REDACTED
 ~/.kube/config-test
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: personal
Similar issues
error-the-server-doesnt-have-resource-type-svc
the-connection-to-the-server-localhost8080-was-refused-did-you-specify-the-ri
I just had a similar issue which I managed to resolve with AWS support. The issue was that the cluster was created with a role that was assumed by the user, but kubectl was not assuming this role with the default kubeconfig created by the AWS CLI.
I fixed the issue by providing the role in the users section of the kubeconfig:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      - -r
      - <arn::of::your::role>
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: personal
I believe the heptio-aws-authenticator has now been changed to the aws-iam-authenticator, but this change was what allowed me to use the cluster.
The 401s look like a permissions issue. Did your user create the cluster?
In the docs: "When you create an Amazon EKS cluster, the IAM entity (user or role) is automatically granted system:master permissions in the cluster's RBAC configuration. To grant additional AWS users the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes."
If it was created by a different user, you'll need to use that user, configured in the CLI, to execute kubectl.
Just delete the cache and http-cache folders in the .kube directory and try running the command again:
kubectl get svc
Also make sure that your config file is properly indented; syntax errors can sometimes cause this error.
You need to make sure the credentials used to create the cluster and to execute kubectl in the CLI are the same. In my case I created the cluster via the console, which used temporary AWS vending-machine credentials that expire, whereas kubectl used the actual permanent credentials.
To fix the error, I created the cluster from the AWS CLI as well.
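A sketch of creating the cluster from the CLI, so the IAM identity you later use with kubectl is also the cluster creator (the name, role ARN, subnets and security group below are placeholders):
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc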
I had this issue where my KUBECONFIG environment variable had more than one value, it looked something like:
:/Users/my-user/.kube/config-firstcluster:/Users/my-user/.kube/config-secondcluster
Try unsetting and resetting the environment variable so it has only one value and see if that works for you, for example:
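A quick sketch, using the first of the two paths from the example above:
# keep only one kubeconfig path, then verify the active context
unset KUBECONFIG
export KUBECONFIG=/Users/my-user/.kube/config-firstcluster
kubectl config current-context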
Possible solution if you created the cluster in the UI
If you created the cluster in the UI, it's possible the AWS root user created the cluster. According to the docs, "When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:master) permissions. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. "
You'll need to first log in to the AWS CLI as the root user in order to update the permissions of the IAM user you want to have access to the cluster.
You'll need to get an access key for the root user and put this info in .aws/credentials under the default user. You can do this using the command aws configure
Now kubectl get svc works, since you're logged in as the root user that initially created the cluster.
1. Apply the aws-auth ConfigMap to the cluster. Follow step 2 from these docs, using the NodeInstanceRole value you got as the output from Step 3: Launch and Configure Amazon EKS Worker Nodes.
2. To add a non-root IAM user or role to an Amazon EKS cluster, follow step 3 from these docs.
3. Edit the configmap/aws-auth and add other users that need kubectl access in the mapUsers section (a sketch of this ConfigMap follows below).
4. Run aws configure again and add the access key info for your non-root user.
Now you can access your cluster from the AWS CLI and using kubectl.
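A sketch of the aws-auth ConfigMap referred to in the steps above (edit it with kubectl edit -n kube-system configmap/aws-auth; the ARNs, the username and the group are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN from the worker node stack>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/another-user
      username: another-user
      groups:
        - system:masters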
I ran into this error and it was a DIFFERENT kubeconfig issue, so the
error: the server doesn't have a resource type "svc"
error is probably very generic.
In my case, the solution was to remove the quotes around the certificate-authority-data.
Example
(not working)
certificate-authority-data:"xyxyxyxyxyxy"
(working)
certificate-authority-data: xyxyxyxyxyxy
I had a similar issue where I was not able to list any Kubernetes objects using kubectl. I tried the following commands but got the same "error: the server doesn't have a resource type object_name":
kubectl get pod
kubectl get service
kubectl get configmap
kubectl get namespace
I checked my k8s dashboard and it was working fine for me. Hence, I understood that there was a problem when kubectl tried to make a connection with the kube-apiserver. I decided to curl the apiserver with the existing certificates, but that requires the certificate key and crt files. By default, kubectl reads the config from $HOME/.kube/config and looks for the current context. In case of multiple clusters, check the value of current-context: your_user#cluster_name. In the users section, check your_user and save the values of client-certificate/client-certificate-data and client-key/client-key-data to files using the following steps.
echo "value of client-certificate-data" | base64 --decode > your_user.crt
echo "value of client-key-data" | base64 --decode > your_user.key
# check the validity of the certificate
openssl x509 -in your_user.crt -text
If the certificate has expired, create a new certificate and try to authenticate:
openssl genrsa -out your_user.key 2048
openssl req -new -key your_user.key -subj "/CN=check_cn_from_existing_certificate_crt_file" -out your_user.csr
openssl x509 -req -in your_user.csr -CA /$PATH/ca.crt -CAkey /$PATH/ca.key -out your_user.crt -days 30
# Get the apiserver ip
APISERVER=$(cat ~/.kube/config | grep server | cut -f 2- -d ":" | tr -d " ")
# Authenticate with apiserver using curl command
curl $APISERVER/api/v1/pods \
--cert your_user.crt \
--key your_user.key \
--cacert /$PATH/ca.crt
If you are able to see the pods, then update the certificate paths in the config file.
Final output of $HOME/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /$PATH/ca.crt
    server: https://192.168.0.143:8443 ($APISERVER)
  name: cluster_name
contexts:
- context:
    cluster: cluster_name
    user: your_user
  name: your_user#cluster_name
current-context: your_user#cluster_name
kind: Config
preferences: {}
users:
- name: your_user
  user:
    client-certificate: /$PATH/your_user.crt
    client-key: /$PATH/your_user.key
Now you should be able to successfully list pods and other resources using kubectl.

How to let the kubelet communicate with the apiserver using HTTPS? (v0.19)

I deployed the apiserver on the master node (core01) with the following configuration:
core01> /opt/bin/kube-apiserver \
--insecure_bind_address=127.0.0.1 \
--insecure_port=8080 \
--kubelet_port=10250 \
--etcd_servers=http://core01:2379,http://core02:2379,http://core03:2379 \
--service-cluster-ip-range=10.1.0.0/16 \
--allow_privileged=false \
--logtostderr=true \
--v=5 \
--tls-cert-file="/var/run/kubernetes/apiserver_36kr.pem" \
--tls-private-key-file="/var/run/kubernetes/apiserver_36kr.key" \
--client-ca-file="/var/run/kubernetes/cacert.pem" \
--kubelet-certificate-authority="/var/run/kubernetes/cacert.pem" \
--kubelet-client-certificate="/var/run/kubernetes/kubelet_36kr.pem" \
--kubelet-client-key="/var/run/kubernetes/kubelet_36kr.key"
On the minion node (core02), I can call the API over HTTPS:
core02> curl https://core01:6443/api/v1/nodes --cert /var/run/kubernetes/kubelet_36kr.pem --key /var/run/kubernetes/kubelet_36kr.key
> GET /api/v1/nodes HTTP/1.1
> Host: core01:6443
> User-Agent: curl/7.42.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sat, 27 Jun 2015 15:33:50 GMT
< Content-Length: 1577
<
{
"kind": "NodeList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/nodes",
"resourceVersion": "510078"
}, ....
However, I cannot start the kubelet on this minion. It always complains about missing credentials.
How can I make it work? Is there any documentation on master <-> minion communication authentication? Could you please point me to the best practice?
FYI, the kubelet command is as follows:
core02> /opt/bin/kubelet \
--logtostderr=true \
--v=0 \
--api_servers=https://core01:6443 \
--address=127.0.0.1 \
--port=10250 \
--allow-privileged=false \
--tls-cert-file="/var/run/kubernetes/kubelet_36kr.pem" \
--tls-private-key-file="/var/run/kubernetes/kubelet_36kr.key"
The kubelet log is as follows:
W0627 23:34:03.646311 3004 server.go:460] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0627 23:34:03.646520 3004 server.go:422] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0627 23:34:03.646710 3004 manager.go:127] cAdvisor running in container: "/system.slice/sshd.service"
I0627 23:34:03.647292 3004 fs.go:93] Filesystem partitions: map[/dev/sda9:{mountpoint:/ major:0 minor:30} /dev/sda4:{mountpoint:/usr major:8 minor:4} /dev/sda6:{mountpoint:/usr/share/oem major:8 minor:6}]
I0627 23:34:03.648234 3004 manager.go:156] Machine: {NumCores:1 CpuFrequency:2399996 MemoryCapacity:1046294528 MachineID:29f94a4fad8b31668bd219ca511bdeb0 SystemUUID:4F4AF929-8BAD-6631-8BD2-19CA511BDEB0 BootID:fa1bea28-675e-4989-ad86-00797721a794 Filesystems:[{Device:/dev/sda9 Capacity:18987593728} {Device:/dev/sda4 Capacity:1031946240} {Device:/dev/sda6 Capacity:113229824}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:71:f6:fc:b8 Speed:0 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:1046294528 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}]}
I0627 23:34:03.649934 3004 manager.go:163] Version: {KernelVersion:4.0.5 ContainerOsVersion:CoreOS 695.2.0 DockerVersion:1.6.2 CadvisorVersion:0.15.1}
I0627 23:34:03.651758 3004 plugins.go:69] No cloud provider specified.
I0627 23:34:03.651855 3004 docker.go:289] Connecting to docker on unix:///var/run/docker.sock
I0627 23:34:03.652877 3004 server.go:659] Watching apiserver
E0627 23:34:03.748954 3004 reflector.go:136] Failed to list *api.Pod: the server has asked for the client to provide credentials (get pods)
E0627 23:34:03.750157 3004 reflector.go:136] Failed to list *api.Node: the server has asked for the client to provide credentials (get nodes)
E0627 23:34:03.751666 3004 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
I0627 23:34:03.758158 3004 plugins.go:56] Registering credential provider: .dockercfg
I0627 23:34:03.856215 3004 server.go:621] Started kubelet
E0627 23:34:03.858346 3004 kubelet.go:662] Image garbage collection failed: unable to find data for container /
I0627 23:34:03.869739 3004 kubelet.go:682] Running in container "/kubelet"
I0627 23:34:03.869755 3004 server.go:63] Starting to listen on 127.0.0.1:10250
E0627 23:34:03.899877 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba23275ceda25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"starting", Message:"Starting kubelet.", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)
I0627 23:34:04.021297 3004 factory.go:226] System is using systemd
I0627 23:34:04.021790 3004 factory.go:234] Registering Docker factory
I0627 23:34:04.022241 3004 factory.go:89] Registering Raw factory
I0627 23:34:04.144065 3004 manager.go:946] Started watching for new ooms in manager
I0627 23:34:04.144655 3004 oomparser.go:183] oomparser using systemd
I0627 23:34:04.145379 3004 manager.go:243] Starting recovery of all containers
I0627 23:34:04.293020 3004 manager.go:248] Recovery completed
I0627 23:34:04.343829 3004 status_manager.go:56] Starting to sync pod status with apiserver
I0627 23:34:04.343928 3004 kubelet.go:1683] Starting kubelet main sync loop.
E0627 23:34:04.457765 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232995c8213", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)
E0627 23:34:04.659874 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232a599cf8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)
The first two lines of the kubelet log file actually point to the underlying problem -- you aren't specifying any client credentials for the kubelet to connect to the master.
The --tls-cert-file and --tls-private-key-file arguments for the kubelet are used to configure the http server on the kubelet (if not specified, the kubelet will generate a self-signed certificate for its https endpoint). This certificate / key pair are not used as the client certificate presented to the master for authentication.
To specify credentials, there are two options: a kubeconfig file and a kubernetes_auth file. The latter is deprecated, so I would recommend using a kubeconfig file.
Inside the kubeconfig file you need to specify either a bearer token or a client certificate that the kubelet should present to the apiserver. You can also specify the CA certificate for the apiserver (if you want the connection to be secure) or tell the kubelet to skip checking the certificate presented by the apiserver. Since you have certificates for the apiserver, I'd recommend adding the CA certificate to the kubeconfig file.
The kubeconfig file should look like:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate-data: <base64-encoded-cert>
    client-key-data: <base64-encoded-key>
clusters:
- name: local
  cluster:
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context
To generate the base64 encoded client cert, you should be able to run something like cat /var/run/kubernetes/kubelet_36kr.pem | base64. If you don't have the CA certificate handy, you can replace the certificate-authority-data: <base64-encoded-ca-cert> with insecure-skip-tls-verify: true.
If you put this file at /var/lib/kubelet/kubeconfig it should get picked up automatically. Otherwise, you can use the --kubeconfig argument to specify a custom location.
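A sketch of passing the custom location explicitly, reusing the flag spelling from the kubelet command in the question:
/opt/bin/kubelet \
  --logtostderr=true \
  --api_servers=https://core01:6443 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --address=127.0.0.1 \
  --port=10250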
All credit to jnoller, as he specifies the commands below in this issue.
He just made a typo there, running kubectl config set-credentials twice.
This is similar to Robert Bailey's accepted answer, except you don't need to base64-encode anything, which makes it easy to script.
kubectl config set-cluster default-cluster --server=https://${MASTER} \
--certificate-authority=/path/to/ca.pem
kubectl config set-credentials default-admin \
--certificate-authority=/path/to/ca.pem \
--client-key=/path/to/admin-key.pem \
--client-certificate=/path/to/admin.pem
kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system
The resulting config generated in ~/.kube/config looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/certs/ca.crt
    server: https://kubernetesmaster
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-system
current-context: default-system
kind: Config
preferences: {}
users:
- name: default-admin
  user:
    client-certificate: /etc/kubernetes/certs/server.crt
    client-key: /etc/kubernetes/certs/server.key
As Robert Bailey said, the main problem has to do with the client credentials used to connect to the master ("Could not load kubeconfig file /var/lib/kubelet/kubeconfig...").
Instead of creating the kubeconfig file manually, I chose to generate it using the kubectl tool.
Example from docs:
$ kubectl config set-credentials myuser --username=myusername --password=mypassword
$ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myuser
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace mynamespace
Those commands will generate a config file in ~/.kube/config
Check the result with:
$ kubectl config view
Then I just created a symbolic link inside /var/lib/kubelet (default place) to my config file:
$ cd /var/lib/kubelet
$ sudo ln -s ~/.kube/config kubeconfig
This worked for me. I hope it works for you too.