EKS: Use cluster config yaml file with eksctl to create a new cluster but nodes can't join the cluster - kubernetes

I am new to EKS. I use this cluster config YAML file to create a new cluster:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: h2-dev-cluster
  region: us-west-2
nodeGroups:
  - name: h2-dev-ng-1
    instanceType: t2.small
    desiredCapacity: 2
    ssh: # use existing EC2 key
      publicKeyName: dev-eks-node
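I create the cluster from this file with the usual command (cluster.yaml here stands for whatever name the file is saved under):
# create the control plane and the node group described in the config file
eksctl create cluster -f cluster.yaml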
but eksctl gets stuck at
waiting for at least 1 node(s) to become ready in "h2-dev-ng-1"
and then times out.
I have checked every point in this AWS troubleshooting document: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
All of them check out except "The ClusterName in your worker node AWS CloudFormation template", which I can't easily verify because the UserData shown in the CloudFormation template is encoded.
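(As far as I understand, the UserData is only base64-encoded, not encrypted, so something like this should print it for one of the worker instances; the instance ID below is a placeholder:)
# dump and decode the UserData of one worker node to inspect the ClusterName passed to bootstrap.sh
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute userData \
  --query 'UserData.Value' --output text | base64 --decode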
I logged in to one of the nodes and ran journalctl -u kubelet, and found these errors:
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.007677 4541 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.391913 4541 kubelet.go:2272] node "ip-192-168-53-151.us-west-2.compute.internal" not found
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.434158 4541 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.492746 4541 kubelet.go:2272] node "ip-192-168-53-151.us-west-2.compute.internal" not found
Then I ran cat /var/lib/kubelet/kubeconfig and saw the following:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: MASTER_ENDPOINT
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"
        - --region
        - "AWS_REGION"
I noticed that the server parameter is MASTER_ENDPOINT, so I ran /etc/eks/bootstrap.sh h2-dev-cluster to set the cluster name. The parameter now looks correct, as follows (I masked the URL):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://XXXXXXXX.gr7.us-west-2.eks.amazonaws.com
  name: kubernetes
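(For comparison, the AMI's UserData normally calls the bootstrap script with the API endpoint and cluster CA passed in explicitly, roughly like this, with placeholder values:)
# sketch of a typical bootstrap invocation; the endpoint and CA values are placeholders
/etc/eks/bootstrap.sh h2-dev-cluster \
  --apiserver-endpoint https://XXXXXXXX.gr7.us-west-2.eks.amazonaws.com \
  --b64-cluster-ca <base64-encoded-cluster-CA>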
I then ran sudo service kubelet restart, but journalctl -u kubelet still shows the same errors, and the nodes still can't join the cluster.
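(On the EKS-optimized AMI kubelet runs under systemd, so the equivalent restart would be roughly:)
# restart kubelet via systemd and follow its logs
sudo systemctl daemon-reload
sudo systemctl restart kubelet
journalctl -u kubelet -f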
How can I resolve it?
eksctl: 0.23.0-rc1 (also tested with 0.20.0; same error)
kubectl: 1.18.5
OS: Ubuntu 18.04 (on a new EC2 instance)

Related

kubectl minikube renew certificate

I'm using kubectl to access the API server on my minikube cluster on Ubuntu, but when I try to run a kubectl command I get a certificate-expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
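One way to confirm the expiry, assuming the API server is the one listening on 127.0.0.1:16443 as in the config below, is to read the certificate dates directly:
# print the validity window of the certificate served by the API server
echo | openssl s_client -connect 127.0.0.1:16443 2>/dev/null | openssl x509 -noout -dates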
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How can I renew this certificate?
Posted community wiki for better visibility. Feel free to expand it.
There is a similar issue open on the minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the servers to reboot the microk8s cluster.
minikube delete - which deletes the local Kubernetes cluster - worked for me.
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
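In shell terms, the two options above boil down to roughly the following (pick whichever matches your setup; restarting the microk8s services stands in for the reboot the blog suggests):
# microk8s: regenerate the certificates, then restart the cluster services
sudo microk8s.refresh-certs
sudo microk8s.stop
sudo microk8s.start

# minikube: delete the local cluster and recreate it
minikube delete
minikube start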

How to establish a new service connection for EKS in Azure DevOps?

I want to create a CI/CD pipeline for deploying pods on EKS using Azure DevOps. First I installed kubectl and aws-iam-authenticator on my system (Windows) and created a Kubernetes cluster in AWS. To communicate with the cluster, I updated the kubeconfig using the following command: aws eks --region us-east-2 update-kubeconfig --name cluster_name. Then I copied the updated config file and pasted it into the Azure DevOps new service connection. While verifying the test connection, I get the error below:
No user credentials found for a cluster in KubeConfig content. Make sure that the credentials exist and try again.
Below are my kubeconfig & aws-auth-cm.yaml files.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *********
    server: https://*************.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:*********:cluster/testconnection
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:*********:cluster/testconnection
    user: arn:aws:eks:us-east-1:********:cluster/testconnection
  name: arn:aws:eks:us-east-1:**********:cluster/testconnection
current-context: arn:aws:eks:us-east-1:*******:cluster/testconnection
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:*********:cluster/testconnection
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - testconnection
      command: aws
--- aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::*********:role/eks_nodegroup
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
So, could anyone point out which configuration I have missed to establish the connection from Azure DevOps to EKS?
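For reference, the exec block in the kubeconfig above corresponds to running the following locally, which is one way to check that the AWS-side credentials themselves work (it should print a token document):
# manually request an EKS token with the same arguments the kubeconfig's exec block uses
aws eks get-token --cluster-name testconnection --region us-east-1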

Applying API Server App ID to k8s cluster spec

Team,
I already have a cluster running and I need to update the OIDC value. Is there a way I can do that without having to recreate the cluster?
For example, below is my cluster info, and I need to update the oidcClientID: spn: value.
How can I do this, given that I have 5 masters running?
kubeAPIServer:
  storageBackend: etcd3
  oidcClientID: spn:45645hhh-f641-498d-b11a-1321231231
  oidcUsernameClaim: upn
  oidcUsernamePrefix: "oidc:"
  oidcGroupsClaim: groups
  oidcGroupsPrefix: "oidc:"
You update the kube-apiserver on your masters one by one (update, then restart). If your cluster is set up correctly, when you get to the active kube-apiserver it should automatically fail over to another kube-apiserver master on standby.
You can add the OIDC options in the /etc/kubernetes/manifests/kube-apiserver.yaml pod manifest file.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=172.x.x.x
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --oidc-client-id=...
    - --oidc-username-claim=...
    - --oidc-username-prefix=...
    - --oidc-groups-claim=...
    - --oidc-groups-prefix=...
    ...
Then you can restart your kube-apiserver container, if you are using docker:
$ sudo docker restart <container-id-for-kube-apiserver>
Or if you'd like to restart all the components on the master:
$ sudo systemctl restart docker
Watch for logs on the kube-apiserver container
$ sudo docker logs -f <container-id-for-kube-apiserver>
Make sure you never have fewer running masters than your quorum, which should be 3 for your 5-master cluster, to be safe. If for some reason your etcd cluster falls out of quorum, you will have to recover by recreating the etcd cluster and restoring from a backup.
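While rolling through the masters, a quick way to sanity-check etcd membership and health is something like the following (a sketch run as root on one of the masters; the endpoint and certificate paths are the kubeadm defaults and may differ in your setup):
# check that etcd still reports a healthy quorum
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health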

Connecting to kubernetes cluster from different kubectl clients

I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's call it node A).
I'm trying to connect to the Kubernetes cluster from another server that has kubectl installed on it (node B). I have copied ~/.kube from node A to node B.
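(The copy itself was nothing fancy, roughly the following, with node B's address as a placeholder:)
# copy the kubeconfig directory from node A to node B
scp -r ~/.kube user@node-b:~/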
But when I try to execute a basic command like:
kubectl get pods
I'm getting:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
    server: https://api.kub.domain.com
  name: kub.domain.com
contexts:
- context:
    cluster: kub.domain.com
    user: kub.domain.com
  name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
- name: kub.domain.com-basic-auth
  user:
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
Appreciate any help
Let's try to troubleshoot these two.
Unable to connect to the server:
Check and see whether you have any firewall rules in the way. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers? Are they actually the same certificates?
curl -v -k $(grep 'server:' ~/.kube/config|sed 's/server://')
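To actually compare what each node trusts, you can print the fingerprint of the CA certificate embedded in each kubeconfig, for example:
# run on both node A and node B and compare the output
grep 'certificate-authority-data:' ~/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint -sha256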

Minikube 0.5.0 : cannot validate certificate for 192.168.99.101 because it doesn't contain any IP SANs

I'm starting today with a brand new install of Minikube 0.5.0 and kubectl 1.3.0 (my machine is running Ubuntu 14.04, 64-bit).
I just start Minikube with minikube start and everything seems to run fine (the VirtualBox machine is created and started), but contacting the cluster seems impossible due to a certificate issue:
laurent@ponyo:~$ kubectl cluster-info
Unable to connect to the server: x509: cannot validate certificate for 192.168.99.101 because it doesn't contain any IP SAN
kubectl config view runs fine and outputs the following config
laurent@ponyo:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/laurent/.minikube/apiserver.crt
    server: https://192.168.99.101:443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/laurent/.minikube/apiserver.crt
    client-key: /home/laurent/.minikube/apiserver.key
Any clue on this issue? Is there any extra step regarding cert provisioning before starting Minikube? Any pointer on how to solve this?
Thank you for your help.
You can now pass env vars into the minikube VM (such as HTTP_PROXY or HTTPS_PROXY):
https://github.com/kubernetes/minikube#using-minikube-with-an-http-proxy
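If a proxy is involved, passing it through looks roughly like this (the proxy address is a placeholder):
# recreate the VM, forwarding proxy settings to the Docker daemon inside it
minikube delete
minikube start --docker-env HTTP_PROXY=http://proxy.example.com:3128 \
               --docker-env HTTPS_PROXY=http://proxy.example.com:3128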