kubectl minikube renew certificate - kubernetes

I'm using kubectl to access the API server on my minikube cluster on Ubuntu,
but when I try to run a kubectl command I get a certificate expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How can I renew this certificate?

Posted community wiki for better visibility. Feel free to expand it.
There is a similar issue opened on the minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
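If you want to confirm it really is a certificate-validity problem before wiping anything, here is a minimal sketch; the in-node path /var/lib/minikube/certs/apiserver.crt is an assumption about the default minikube layout:
# inspect the API server certificate inside the minikube node and check its dates
minikube ssh -- sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -dates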

The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the server to reboot the microk8s cluster.
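A minimal sketch of those steps, assuming microk8s is installed as a snap; using microk8s.stop/microk8s.start as the restart step is an assumption (a full host reboot also works):
sudo microk8s.refresh-certs   # regenerate the expired certificates
sudo microk8s.stop            # restart the cluster so components pick up the new certs
sudo microk8s.start
microk8s.kubectl get pods     # verify the API server answers again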

minikube delete - deletes the local Kubernetes cluster - worked for me
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
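For reference, a minimal sketch of that approach; note that it throws away every workload and volume in the local cluster:
minikube delete      # removes the cluster together with its expired certificates
minikube start       # recreates it with freshly generated certificates
kubectl get pods -A  # should connect again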

Related

kubectl config use-context deleting eks user

I'm encountering some really weird behaviour while attempting to switch contexts using kubectl.
My config file declares two contexts; one points to an in-house cluster, while the other points to an Amazon EKS cluster.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: in-house
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: eks
contexts:
- context:
    cluster: in-house
    user: divesh-in-house
  name: in-house-context
- context:
    cluster: eks
    user: divesh-eks
  name: eks-context
current-context: in-house-context
preferences: {}
users:
- name: divesh-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks"
      env: null
- name: divesh-in-house
  user:
    client-certificate-data: <..>
    client-key-data: <..>
I'm also using the aws-iam-authenticator to authenticate to the EKS cluster.
My problem is this - as long as I work with the in-house cluster, everything works fine. But, when I execute kubectl config use-context eks-context, I observe the following behaviour.
Any operation I try to perform on the cluster (say, kubectl get pods -n production) shows me a Please enter Username: prompt. I assumed the aws-iam-authenticator should have managed the authentication for me. I can confirm that running the authenticator manually (aws-iam-authenticator token -i eks) works fine for me.
Executing kubectl config view omits the divesh-eks user, so the output looks like
users:
- name: divesh-eks
  user: {}
Switching back to the in-house cluster by executing kubectl config use-context in-house-context modifies my config file and deletes the divesh-eks user, so the config file now contains
users:
- name: divesh-eks
  user: {}
My colleagues don't seem to face this problem.
Thoughts?
The exec portion of that config was added in 1.10 (https://github.com/kubernetes/kubernetes/pull/59495)
If you use a version of kubectl prior to that version, it will not recognize the exec plugin (resulting in prompts for credentials), and if you use it to make kubeconfig changes, it will drop the exec field when it persists the changes
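A quick way to check this, as a sketch; the point is that every kubectl binary touching this kubeconfig must understand the exec stanza:
kubectl version --client            # must report v1.10 or newer for exec support
aws-iam-authenticator token -i eks  # confirms the plugin itself can mint a token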

Puppet kubernetes module

I installed the puppet kubernetes module to manage pods of my kubernetes cluster with https://github.com/garethr/garethr-kubernetes/blob/master/README.md
I am not able to get any pod information back when I run
puppet resource kubernetes_pod
It just returns an empty line.
I am using a minikube k8s cluster to test the puppet module against.
cat /etc/puppetlabs/puppet/kubernetes.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://<ip address>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
I am able to use curl with the certs to talk to the K8s REST API
curl --cacert /root/.minikube/ca.crt --cert /root/.minikube/apiserver.crt --key /root/.minikube/apiserver.key https://<minikube ip>:8443/api/v1/pods/
It looks like the garethr-kubernetes package hasn't been updated since August 2017, so you probably need a version of the kubeclient gem at least that old. It seems kubeclient 3.0 came out quite recently, so you might want to try the latest version from the 2.5 major (currently 2.5.2).
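A hedged sketch of pinning that gem version; the /opt/puppetlabs/puppet/bin/gem path assumes a standard puppet-agent package install and may differ on your system:
sudo /opt/puppetlabs/puppet/bin/gem install kubeclient -v 2.5.2
sudo /opt/puppetlabs/puppet/bin/gem list kubeclient   # confirm 2.5.2 is the version picked up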
From the requirements, this could be related to a credentials issue, or the configuration points to a namespace with nothing in it.
As shown in this issue, check the following:
kubectl get pods works fine at the command line, and my ~/.puppetlabs/etc/puppet/kubernetes.conf file is generated as suggested:
mc0e@xxx:~$ kubectl config view --raw=true
apiVersion: v1
clusters:
- cluster:
    server: http://localhost:8080
  name: test-doc
contexts:
- context:
    cluster: test-doc
    user: ""
  name: test-doc
current-context: test-doc
kind: Config
preferences: {}
users: []
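Independently of the module, a quick sanity check is to make kubectl read the exact same conf file the module uses; this is only a sketch and assumes the /etc/puppetlabs/puppet/kubernetes.conf path from the question:
# if this fails or returns nothing, the problem is the credentials/namespace in the conf,
# not the puppet module itself
KUBECONFIG=/etc/puppetlabs/puppet/kubernetes.conf kubectl get pods --all-namespaces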

Spinnaker authenticate K8s with service account. Not with system:anonymous

I see Spinnaker using the system:anonymous user to authenticate to K8s, but I want a specific user (which I already created in K8s) to be used instead. I used the kubeconfig below with the user veeru:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: RETRACTED
    server: https://xx.xx.xx.220:8443
  name: xx-xx-xx-220:8443
contexts:
- context:
    cluster: xx-xx-xx-220:8443
    namespace: default
    user: veeru/xx-xx-xx-220:8443
  name: area-51/xx-xx-xx-220:8443/veeru
current-context: area-51/xx-xx-xx-220:8443/veeru
kind: Config
preferences: {}
users:
- name: veeru/xx-xx-xx-220:8443
  user:
    client-certificate-data: RETRACTED
    client-key-data: RETRACTED
And I specified (like here) the user in the config (~/.hal/config) like below:
kubernetes:
  enabled: true
  accounts:
  - name: my-k8s-account
    requiredGroupMembership: []
    providerVersion: V1
    dockerRegistries:
    - accountName: my-docker-registry2
      namespaces: []
    configureImagePullSecrets: true
    namespaces: ["area-51"]
    user: veeru
    omitNamespaces: []
    kubeconfigFile: /home/ubuntu/.kube/config
    oauthScopes: []
    oAuthScopes: []
  primaryAccount: my-k8s-account
But Spinnaker is still using system:anonymous:
2018-01-22 08:35:13.929 ERROR 4639 --- [pool-4-thread-1] c.n.s.c.o.DefaultOrchestrationProcessor : com.netflix.spinnaker.clouddriver.kubernetes.v1.deploy.exception.KubernetesOperationException: Get Service openshifttest-dev in area-51 for account my-k8s-account failed: User "system:anonymous" cannot get services in the namespace "area-51": User "system:anonymous" cannot get services in project "area-51"
Is there any way to make Spinnaker use the configured user rather than system:anonymous?
UPDATE-1
Followed: https://blog.spinnaker.io/spinnaker-kubernetes-rbac-c40f1f73c172
Got the token from kubectl describe secret spinnaker-service-account-token-9sl6q and updated the kubeconfig like below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://xx.xx.xx.xx:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: webapp
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    user: spinnaker-service-account
  name: spinnaker-context
current-context: spinnaker-context
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: spinnaker-service-account
  user:
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9....
Then I ran sudo hal deploy:
....
! ERROR Unable to communicate with your Kubernetes cluster: Failure
executing: GET at: https://xx.xx.xx.xx:6443/api/v1/namespaces. Message:
Forbidden! User spinnaker-service-account doesn't have permission. namespaces is
forbidden: User "system:serviceaccount:default:spinnaker-service-account" cannot
list namespaces at the cluster scope..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
....
I'm able to run
$ kubectl get namespace webapp
NAME STATUS AGE
webapp Active 22m
I have specified the webapp namespace and the user spinnaker-service-account in ~/.hal/config.
I'm using GKE with basic authentication disabled. I have my spinnaker use a dedicated K8s service account that I created. In my ~/.kube/config I have tokens for each K8s cluster.
users:
- name: gke_operation-covfefe-1_asia-east1_testing-asia-east1
  user:
    token: token1
- name: gke_operation-covfefe-1_europe-west1_testing-europe-west1
  user:
    token: token2
- name: gke_operation-covfefe-1_us-central1_testing-us-central1
  user:
    token: token3
I got these tokens by running
kubectl get secret spinnaker-service-account -o json \
| jq -r .data.token \
| base64 -d
and then manually updating my ~/.kube/config file.
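If you prefer not to hand-edit the file, here is a sketch of the same update via kubectl; the user name is one of the entries shown above, and jq and base64 are assumed to be installed:
TOKEN=$(kubectl get secret spinnaker-service-account -o json | jq -r .data.token | base64 -d)
kubectl config set-credentials gke_operation-covfefe-1_us-central1_testing-us-central1 --token="$TOKEN"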
Make sure your service account has the required RBAC permissions. See blog post here.
Update:
Also make sure you give the service account the required RBAC permissions. See the "Role" section of the blog post above or the guide here. When you test the RBAC permissions with kubectl make sure you're using the same service account as the one Spinnaker is using.
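A sketch of such a check using impersonation; it assumes the service account lives in the default namespace (as the error message above suggests) and that your own user is allowed to impersonate:
kubectl auth can-i get services -n webapp \
  --as=system:serviceaccount:default:spinnaker-service-account
kubectl auth can-i list namespaces \
  --as=system:serviceaccount:default:spinnaker-service-account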
Update 2
If you want Spinnaker to act on all namespaces, use ClusterRole and ClusterRoleBinding in your RBAC. The blog post only uses Role and RoleBinding, which restrict actions to particular namespaces. See this guide for the Cluster* way. Note the PR to fix a typo here.
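A sketch of the cluster-wide binding with kubectl; cluster-admin is broader than strictly necessary and is only an example, and the service account name/namespace mirror the error message above:
kubectl create clusterrolebinding spinnaker-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:spinnaker-service-account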

Configure RBAC in K8s v1.8.1

I am following Configure RBAC to create user accounts. Everything works fine, but after updating the context, and before binding any roles to the created user, kubectl get pods against the apiserver already returns the pods.
apiserver configuration
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.crt --tls-private-key-file=/srv/kubernetes/server.key --authorization-mode=RBAC"
kubectl config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: http://172.16.3.23:8080
  name: local
contexts:
- context:
    cluster: local
    namespace: kube-system
    user: devops
  name: devops
current-context: devops
kind: Config
preferences: {}
users:
- name: devops
  user:
    client-certificate: /.cert/devops.crt
    client-key: /.cert/devops.key
P.S.: I am using a CentOS bare-metal environment.
The insecure port (http://...:8080) bypasses all authentication and authorization, which is why kubectl works before any roles are bound.
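To see RBAC actually applied, point the kubeconfig at the secure port with the client certificate. A sketch, assuming the default secure port 6443 and the file paths already present in the config above:
kubectl config set-cluster local \
  --server=https://172.16.3.23:6443 \
  --certificate-authority=/srv/kubernetes/ca.crt
kubectl config set-credentials devops \
  --client-certificate=/.cert/devops.crt \
  --client-key=/.cert/devops.key
kubectl get pods   # should now be denied until a role is bound to devops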

Connecting to kubernetes cluster from different kubectl clients

I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's say node A).
I'm trying to connect to the Kubernetes cluster from another server with kubectl installed on it (node B). I have copied ~/.kube from node A to node B.
But when I try to execute a basic command like:
kubectl get pods
I'm getting:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
    server: https://api.kub.domain.com
  name: kub.domain.com
contexts:
- context:
    cluster: kub.domain.com
    user: kub.domain.com
  name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
- name: kub.domain.com-basic-auth
  user:
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
Appreciate any help
Let's try to troubleshoot these two.
Unable to connect to the server:
Check whether you have any firewall rules in the way. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers and confirm they are the same?
curl -v -k $(grep 'server:' ~/.kube/config|sed 's/server://')
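To compare what each node trusts, here is a sketch that decodes the CA embedded in the kubeconfig; run it on node A and node B and the subject and validity dates should match (assumes base64 and openssl are available):
grep 'certificate-authority-data:' ~/.kube/config | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -subject -dates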