How to let kubelet communicate with apiserver using HTTPS? (v0.19)

I deployed the apiserver on the master node (core01) with the following configuration:
core01> /opt/bin/kube-apiserver \
--insecure_bind_address=127.0.0.1 \
--insecure_port=8080 \
--kubelet_port=10250 \
--etcd_servers=http://core01:2379,http://core02:2379,http://core03:2379 \
--service-cluster-ip-range=10.1.0.0/16 \
--allow_privileged=false \
--logtostderr=true \
--v=5 \
--tls-cert-file="/var/run/kubernetes/apiserver_36kr.pem" \
--tls-private-key-file="/var/run/kubernetes/apiserver_36kr.key" \
--client-ca-file="/var/run/kubernetes/cacert.pem" \
--kubelet-certificate-authority="/var/run/kubernetes/cacert.pem" \
--kubelet-client-certificate="/var/run/kubernetes/kubelet_36kr.pem" \
--kubelet-client-key="/var/run/kubernetes/kubelet_36kr.key"
On the minion node (core02), I can call the API over HTTPS:
core02> curl https://core01:6443/api/v1/nodes --cert /var/run/kubernetes/kubelet_36kr.pem --key /var/run/kubernetes/kubelet_36kr.key
> GET /api/v1/nodes HTTP/1.1
> Host: core01:6443
> User-Agent: curl/7.42.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sat, 27 Jun 2015 15:33:50 GMT
< Content-Length: 1577
<
{
"kind": "NodeList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/nodes",
"resourceVersion": "510078"
}, ....
However, I cannot start the kubelet on this minion. It always complains that no credentials were provided.
How can I make it work? Is there any documentation on master <-> minion communication authentication? Could you please share the best practice?
FYI, the kubelet command is as follows:
core02> /opt/bin/kubelet \
--logtostderr=true \
--v=0 \
--api_servers=https://core01:6443 \
--address=127.0.0.1 \
--port=10250 \
--allow-privileged=false \
--tls-cert-file="/var/run/kubernetes/kubelet_36kr.pem" \
--tls-private-key-file="/var/run/kubernetes/kubelet_36kr.key"
The kubelet log is as follows:
W0627 23:34:03.646311 3004 server.go:460] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0627 23:34:03.646520 3004 server.go:422] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0627 23:34:03.646710 3004 manager.go:127] cAdvisor running in container: "/system.slice/sshd.service"
I0627 23:34:03.647292 3004 fs.go:93] Filesystem partitions: map[/dev/sda9:{mountpoint:/ major:0 minor:30} /dev/sda4:{mountpoint:/usr major:8 minor:4} /dev/sda6:{mountpoint:/usr/share/oem major:8 minor:6}]
I0627 23:34:03.648234 3004 manager.go:156] Machine: {NumCores:1 CpuFrequency:2399996 MemoryCapacity:1046294528 MachineID:29f94a4fad8b31668bd219ca511bdeb0 SystemUUID:4F4AF929-8BAD-6631-8BD2-19CA511BDEB0 BootID:fa1bea28-675e-4989-ad86-00797721a794 Filesystems:[{Device:/dev/sda9 Capacity:18987593728} {Device:/dev/sda4 Capacity:1031946240} {Device:/dev/sda6 Capacity:113229824}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:71:f6:fc:b8 Speed:0 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:1046294528 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}]}
I0627 23:34:03.649934 3004 manager.go:163] Version: {KernelVersion:4.0.5 ContainerOsVersion:CoreOS 695.2.0 DockerVersion:1.6.2 CadvisorVersion:0.15.1}
I0627 23:34:03.651758 3004 plugins.go:69] No cloud provider specified.
I0627 23:34:03.651855 3004 docker.go:289] Connecting to docker on unix:///var/run/docker.sock
I0627 23:34:03.652877 3004 server.go:659] Watching apiserver
E0627 23:34:03.748954 3004 reflector.go:136] Failed to list *api.Pod: the server has asked for the client to provide credentials (get pods)
E0627 23:34:03.750157 3004 reflector.go:136] Failed to list *api.Node: the server has asked for the client to provide credentials (get nodes)
E0627 23:34:03.751666 3004 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
I0627 23:34:03.758158 3004 plugins.go:56] Registering credential provider: .dockercfg
I0627 23:34:03.856215 3004 server.go:621] Started kubelet
E0627 23:34:03.858346 3004 kubelet.go:662] Image garbage collection failed: unable to find data for container /
I0627 23:34:03.869739 3004 kubelet.go:682] Running in container "/kubelet"
I0627 23:34:03.869755 3004 server.go:63] Starting to listen on 127.0.0.1:10250
E0627 23:34:03.899877 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba23275ceda25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"starting", Message:"Starting kubelet.", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)
I0627 23:34:04.021297 3004 factory.go:226] System is using systemd
I0627 23:34:04.021790 3004 factory.go:234] Registering Docker factory
I0627 23:34:04.022241 3004 factory.go:89] Registering Raw factory
I0627 23:34:04.144065 3004 manager.go:946] Started watching for new ooms in manager
I0627 23:34:04.144655 3004 oomparser.go:183] oomparser using systemd
I0627 23:34:04.145379 3004 manager.go:243] Starting recovery of all containers
I0627 23:34:04.293020 3004 manager.go:248] Recovery completed
I0627 23:34:04.343829 3004 status_manager.go:56] Starting to sync pod status with apiserver
I0627 23:34:04.343928 3004 kubelet.go:1683] Starting kubelet main sync loop.
E0627 23:34:04.457765 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232995c8213", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)
E0627 23:34:04.659874 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232a599cf8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)

The first two lines of the kubelet log file actually point to the underlying problem -- you aren't specifying any client credentials for the kubelet to connect to the master.
The --tls-cert-file and --tls-private-key-file arguments for the kubelet are used to configure the HTTP server on the kubelet (if not specified, the kubelet will generate a self-signed certificate for its HTTPS endpoint). This certificate/key pair is not used as the client certificate presented to the master for authentication.
To specify credentials, there are two options: a kubeconfig file and a kubernetes_auth file. The latter is deprecated, so I would recommend using a kubeconfig file.
Inside the kubeconfig file you need to specify either a bearer token or a client certificate that the kubelet should present to the apiserver. You can also specify the CA certificate for the apiserver (if you want the connection to be secure) or tell the kubelet to skip checking the certificate presented by the apiserver. Since you have certificates for the apiserver, I'd recommend adding the CA certificate to the kubeconfig file.
The kubeconfig file should look like:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate-data: <base64-encoded-cert>
    client-key-data: <base64-encoded-key>
clusters:
- name: local
  cluster:
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context
To generate the base64 encoded client cert, you should be able to run something like cat /var/run/kubernetes/kubelet_36kr.pem | base64. If you don't have the CA certificate handy, you can replace the certificate-authority-data: <base64-encoded-ca-cert> with insecure-skip-tls-verify: true.
If you put this file at /var/lib/kubelet/kubeconfig it should get picked up automatically. Otherwise, you can use the --kubeconfig argument to specify a custom location.
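For example, a minimal sketch for producing those base64 values from the files already used in your apiserver flags (base64 -w0 avoids line wrapping with GNU coreutils; drop -w0 on platforms that lack it):
base64 -w0 /var/run/kubernetes/kubelet_36kr.pem   # value for client-certificate-data
base64 -w0 /var/run/kubernetes/kubelet_36kr.key   # value for client-key-data
base64 -w0 /var/run/kubernetes/cacert.pem         # value for certificate-authority-data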

All credit to jnoller, who specifies the commands below on this issue. He just made a typo, running kubectl config set-credentials twice.
This is similar to Robert Bailey's accepted answer, except you don't need to base64-encode anything, which makes it easy to script.
kubectl config set-cluster default-cluster --server=https://${MASTER} \
--certificate-authority=/path/to/ca.pem
kubectl config set-credentials default-admin \
--certificate-authority=/path/to/ca.pem \
--client-key=/path/to/admin-key.pem \
--client-certificate=/path/to/admin.pem
kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system
The resulting config generated in ~/.kube/config looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/certs/ca.crt
    server: https://kubernetesmaster
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-system
current-context: default-system
kind: Config
preferences: {}
users:
- name: default-admin
  user:
    client-certificate: /etc/kubernetes/certs/server.crt
    client-key: /etc/kubernetes/certs/server.key
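Every kubectl config subcommand also accepts a --kubeconfig flag, so if you are building the file for the kubelet you can write it straight to the kubelet's default location instead of ~/.kube/config. A sketch reusing the paths above:
kubectl config set-cluster default-cluster --server=https://${MASTER} \
  --certificate-authority=/path/to/ca.pem \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-credentials default-admin \
  --client-certificate=/path/to/admin.pem \
  --client-key=/path/to/admin-key.pem \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-context default-system --cluster=default-cluster --user=default-admin \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config use-context default-system --kubeconfig=/var/lib/kubelet/kubeconfig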

As Robert Bailey said, the main problem has to do with the client credentials used to connect to the master ("Could not load kubeconfig file /var/lib/kubelet/kubeconfig...").
Instead of creating the kubeconfig file manually, I chose to generate it using the kubectl tool.
Example from docs:
$ kubectl config set-credentials myuser --username=myusername --password=mypassword
$ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myuser
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace mynamespace
Those commands will generate a config file in ~/.kube/config
Check the result with:
$ kubectl config view
Then I just created a symbolic link inside /var/lib/kubelet (the default location) to my config file:
$ cd /var/lib/kubelet
$ sudo ln -s ~/.kube/config kubeconfig
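After that, restart the kubelet so it picks up the new kubeconfig and check that the node registers (a sketch assuming the kubelet runs as a systemd unit named kubelet; adjust to however you launch it):
$ sudo systemctl restart kubelet
$ kubectl get nodes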
This worked for me. I hope it works for you too.

Related

Access kubernetes cluster outside of VPN

I configured a kubernetes cluster with rke on premise (for now single node - control plane, worker and etcd).
The VM which I launched the cluster on is inside a VPN.
After successfully initializing the cluster, I managed to access the cluster with kubectl from inside the VPN.
I tried to access the cluster from outside the VPN, so I updated the kubeconfig file and changed the following:
server: https://<the VM IP> to server: https://<the external IP>.
I also exposed port 6443.
When trying to access the cluster I get the following error:
E0912 16:23:39 proxy_server.go:147] Error while proxying request: x509: certificate is valid for <the VM IP>, 127.0.0.1, 10.43.0.1, not <the external IP>
My question is: how can I add the external IP to the certificate so that I can access the cluster with kubectl from outside the VPN?
The RKE configuration YAML:
# config.yml
nodes:
- address: <VM IP>
  hostname_override: control-plane-telemesser
  role: [controlplane, etcd, worker]
  user: motti
  ssh_key_path: /home/<USR>/.ssh/id_rsa
ssh_key_path: /home/<USR>/.ssh/id_rsa
cluster_name: my-cluster
ignore_docker_version: false
kubernetes_version:
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: 30000-32767
    pod_security_policy: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    extra_args:
      max-pods: 110
network:
  plugin: flannel
  options:
    flannel_backend_type: vxlan
dns:
  provider: coredns
authentication:
  strategy: x509
authorization:
  mode: rbac
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
monitoring:
  provider: metrics-server
Thanks,
So I found the solution for the RKE cluster configuration.
You need to add sans to the cluster.yml file in the authentication section:
authentication:
  strategy: x509
  sans:
    - "10.18.160.10"
After you save the file, just run rke up again and it will update the cluster.
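To confirm the new SAN made it into the serving certificate, you can inspect it from outside the VPN; a sketch assuming openssl is installed and substituting your external IP for the placeholder:
echo | openssl s_client -connect <the external IP>:6443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'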

EKS: Use cluster config yaml file with eksctl to create a new cluster but node can't join cluster

I am new to EKS. I used this cluster config YAML file to create a new cluster:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: h2-dev-cluster
  region: us-west-2
nodeGroups:
  - name: h2-dev-ng-1
    instanceType: t2.small
    desiredCapacity: 2
    ssh: # use existing EC2 key
      publicKeyName: dev-eks-node
but eksctl gets stuck at
waiting for at least 1 node(s) to become ready in "h2-dev-ng-1
and then times out.
I have checked all the points from this AWS document: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
All the points are fine except "The ClusterName in your worker node AWS CloudFormation template", which I can't check because the UserData has been encrypted by CloudFormation.
I accessed one of the nodes and ran journalctl -u kubelet, then found these errors:
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.007677 4541 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.391913 4541 kubelet.go:2272] node "ip-192-168-53-151.us-west-2.compute.internal" not found
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.434158 4541 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.492746 4541 kubelet.go:2272] node "ip-192-168-53-151.us-west-2.compute.internal" not found
Then I ran cat /var/lib/kubelet/kubeconfig and saw the following:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: MASTER_ENDPOINT
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"
        - --region
        - "AWS_REGION"
I noticed that the server parameter is MASTER_ENDPOINT. So I ran /etc/eks/bootstrap.sh h2-dev-cluster to set the cluster name, and the parameter became correct, as follows (I masked the URL):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://XXXXXXXX.gr7.us-west-2.eks.amazonaws.com
  name: kubernetes
I ran sudo service restart kubectl, but journalctl -u kubelet still shows the same error, and the nodes still can't join the cluster.
How can I resolve it?
eksctl: 0.23.0-rc1 (also tested with 0.20.0; same error)
kubectl: 1.18.5
os: Ubuntu 18.04 (on a fresh EC2 instance)

certificate signed by unknown authority when connecting to remote kubernetes cluster using kubectl

I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://kube-ctl.example.com. I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "Unauthorized",
  reason: "Unauthorized",
  code: 401
}
What should I do to successfully access the remote k8s cluster using the kubectl client?
One way I have tried is using this .kube/config generated by command, but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem, and since you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address to your cluster.
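For example, with a placeholder address, followed by a quick check that the new context works:
KUBERNETES_PUBLIC_ADDRESS=203.0.113.50   # placeholder; use your cluster's public IP or DNS name
kubectl get nodes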
When kubectl interacts with the kube API server, it validates the kube API server's certificate and also sends the certificate named in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the following:
- the CA that you used to generate the client-certificate is not the CA that was used to start up the kube API server;
- the CA in certificate-authority-data is not the CA used to generate the kube API server's certificate.
If you make sure that you use the same CA to generate all the certificates consistently across the board, it should work.
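One way to check both points is to compare issuers and subjects with openssl; a sketch using the files referenced in your second kubeconfig and assuming the API server is exposed on port 6443 (adjust the port if not):
openssl x509 -in ssl/admin.pem -noout -issuer     # issuer of the client certificate
openssl x509 -in ssl/ca.pem -noout -subject       # subject of the CA in the kubeconfig
echo | openssl s_client -connect k8s-ctl.example.com:6443 2>/dev/null | openssl x509 -noout -issuer   # issuer of the API server's serving certificate
All three should point at the same CA if everything was signed consistently.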

Setup Kubernetes HA cluster with kubeadm and F5 as load-balancer

I'm trying to set up a Kubernetes HA cluster using kubeadm as the installer and F5 as the load balancer (cannot use HAProxy). I'm experiencing issues with the F5 configuration.
I'm using self-signed certificates and passed the apiserver.crt and apiserver.key to the load balancer.
For some reason, the kubeadm init script fails with the following error:
[apiclient] All control plane components are healthy after 33.083159 seconds
I0805 10:09:11.335063 1875 uploadconfig.go:109] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0805 10:09:11.340266 1875 request.go:947] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - $F5_LOAD_BALANCER_VIP\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: $F5_LOAD_BALANCER_VIP:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.1\nnetworking:\n dnsDomain: cluster.local\n podSubnet: 192.168.0.0/16\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n lnxkbmaster02:\n advertiseAddress: $MASTER01_IP\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
I0805 10:09:11.340459 1875 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.15.1 (linux/amd64) kubernetes/4485c6f" 'https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps'
I0805 10:09:11.342399 1875 round_trippers.go:438] POST https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps 403 Forbidden in 1 milliseconds
I0805 10:09:11.342449 1875 round_trippers.go:444] Response Headers:
I0805 10:09:11.342479 1875 round_trippers.go:447] Content-Type: application/json
I0805 10:09:11.342507 1875 round_trippers.go:447] X-Content-Type-Options: nosniff
I0805 10:09:11.342535 1875 round_trippers.go:447] Date: Mon, 05 Aug 2019 08:09:11 GMT
I0805 10:09:11.342562 1875 round_trippers.go:447] Content-Length: 285
I0805 10:09:11.342672 1875 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps is forbidden: User \"system:anonymous\" cannot create resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"kind":"configmaps"},"code":403}
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create ConfigMap: configmaps is forbidden: User "system:anonymous" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
The init is really basic:
kubeadm init --config=kubeadm-config.yaml --upload-certs
Here's the kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$F5_LOAD_BALANCER_VIP:6443"
networking:
podSubnet: "192.168.0.0/16"
If I set up the cluster using HAProxy instead, the init runs smoothly:
#---------------------------------------------------------------------
# kubernetes
#---------------------------------------------------------------------
frontend kubernetes
    bind $HAPROXY_LOAD_BALANCER_IP:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes
backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master01.my-domain $MASTER_01_IP:6443 check fall 3 rise 2
    server master02.my-domain $MASTER_02_IP:6443 check fall 3 rise 2
    server master03.my-domain $MASTER_03_IP:6443 check fall 3 rise 2
My solution has been to deploy the cluster without the proxy (F5) with a configuration as follows:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$MASTER_1_IP:6443"
networking:
podSubnet: "192.168.0.0/16"
Afterwards it was necessary to deploy the F5 BIG-IP Controller for Kubernetes on the cluster in order to manage the F5 device from Kubernetes.
A detailed guide can be found here:
https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.10/
Beware that it requires an additional F5 license and admin privileges.

Kubelet doesn't auth using TLS

I have deployed a Kubernetes cluster on Ubuntu VMs using Docker.
Without TLS, it works fine (on port 8080).
I use Let's Encrypt to secure the API server (port 6443), and that works! My problem appears when my kubelet wants to authenticate to the master using HTTPS.
This is how I launch the API server:
/hyperkube apiserver
--service-cluster-ip-range=10.0.0.1/24
--insecure-bind-address=127.0.0.1
--etcd-servers=http://127.0.0.1:4001
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
--client-ca-file=/srv/kubernetes/ca.crt
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--min-request-timeout=300
--tls-cert-file=/srv/kubernetes/server.cert
--tls-private-key-file=/srv/kubernetes/server.key
--token-auth-file=/srv/kubernetes/known_tokens.csv
--allow-privileged=true --v=4
And this is how I launch the kubelet:
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=https://k8:6443 \
--kubeconfig=/srv/kubernetes/config.yaml \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=k8.local
Here is the config.yaml file:
apiVersion: v1
kind: Config
clusters:
- name: k8.local
  cluster:
    insecure-skip-tls-verify: true
    server: https://k8:6443
contexts:
- context:
    cluster: "k8.local"
    user: "node1"
  name: development
current-context: development
users:
- name: node1
  user:
    client-certificate: /var/run/kubernetes/kubelet.crt
    client-key: /var/run/kubernetes/kubelet.key
When I launch my kubelet, the logs say:
the server has asked for the client to provide credentials.
I think something is wrong with the kubelet's certs, but I don't understand why.
Can you help me?
10xx.
Were your client certs (/var/run/kubernetes/kubelet.crt) signed by the CA file identified in: --client-ca-file=/srv/kubernetes/ca.crt?
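You can check that quickly with openssl, run wherever both files are available (paths taken from your flags and kubeconfig):
openssl verify -CAfile /srv/kubernetes/ca.crt /var/run/kubernetes/kubelet.crt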
Also, you might try replacing
- cluster:
    insecure-skip-tls-verify: true
    server: https://k8:6443
with:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://k8:6443
I've never used basic auth or token auth, but it's possible that having those flags in place requires password-based authentication. I'd try removing these as well if you're doing purely cert-based authentication.
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--token-auth-file=/srv/kubernetes/known_tokens.csv