SSL authentication for GCP Kubernetes cluster is not working

For automation purposes, I have generated the Kubernetes configuration file using the API call below.
request = service.projects().zones().clusters().get(
    projectId=project_id, zone=zone, clusterId=cluster_id)
The cluster has both basic and SSL (client-certificate) authentication enabled, but only basic authentication works properly. When I change the user context from admin to ca-user, I get the error below.
Error from server (Forbidden): nodes is forbidden: User "client" cannot list nodes at the cluster scope: Unknown user "client"
The generated configuration file is given below.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *************
    server: https://*******
  name: gke_demo-205812_us-central1-a_cluster-1
contexts:
- context:
    cluster: gke_demo-205812_us-central1-a_cluster-1
    user: ca-user
  name: gke_demo-205812_us-central1-a_cluster-1
current-context: gke_demo-205812_us-central1-a_cluster-1
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: *****************
    username: admin
- name: ca-user
  user:
    client-certificate-data: ******************
    client-key-data: ************************
Thanks in Advance. :)

Try again after running this command:
kubectl create clusterrolebinding client-admin \
  --clusterrole=cluster-admin \
  --user=client
This grants the cluster-admin role to the user "client", which is the identity taken from the Common Name of the client certificate (that is why the error above says User "client").
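If full cluster-admin is more than you want to hand out, a narrower binding is sketched below. The names node-reader and client-node-reader are illustrative, and the rule only covers reading nodes, which is what the failing command needed:
# Hypothetical least-privilege alternative: read-only access to nodes
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader            # illustrative name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-node-reader     # illustrative name
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: client                 # the CN from the client certificate
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
Apply it with kubectl apply -f <file> and retry kubectl get nodes with the ca-user context.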

Related

kubectl minikube renew certificate

I'm using kubectl to access the API server on my minikube cluster on Ubuntu, but when I try to use a kubectl command I get a certificate-expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How can I renew this certificate?
Posted community wiki for better visibility. Feel free to expand it.
There is a similar issue opened on the minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the services to reboot the microk8s cluster (see the sketch after this list)
minikube delete - deletes the local Kubernetes cluster - worked for me
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
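In shell form, the two workarounds above look roughly like this (a sketch, assuming the microk8s snap aliases are available and a default minikube profile):
# microk8s: regenerate the expired certificates, then restart the services
sudo microk8s.refresh-certs
microk8s stop
microk8s start

# minikube: recreate the cluster from scratch (destroys local cluster state)
minikube delete
minikube start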

kubectl: error You must be logged in to the server (Unauthorized)

I've created a service account for CI purposes and am testing it out. Upon trying any kubectl command, I get the error:
error: You must be logged in to the server (Unauthorized)
Below is my .kube/config file
apiVersion: v1
clusters:
- cluster:
    server: <redacted>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: bamboo
  name: default
current-context: 'default'
kind: Config
preferences: {}
users:
- name: bamboo
  user:
    token: <redacted>
The service account exists and has the edit cluster role attached via a cluster role binding.
What am I doing wrong?
I can reproduce the error if I copy the token directly without decoding it. I then applied the following steps to decode and set the token, and it works as expected.
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<serviceaccount-name> -o jsonpath='{.secrets[0].name}'`
$ TOKEN=`kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
$ kubectl config set-credentials <service-account-name> --token=$TOKEN
So, I think that might be your case as well.
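For reference, a quick way to confirm the credentials actually work (a sketch, reusing the bamboo user and default context from the kubeconfig above):
# Set the decoded token on the user, then check what the API server allows
kubectl config set-credentials bamboo --token=$TOKEN
kubectl --context=default auth can-i list pods
kubectl --context=default get pods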

Spinnaker: authenticate to K8s with a service account, not with system:anonymous

I see Spinnaker using the system:anonymous user to authenticate to K8s, but I want a specific user (which I already created in K8s) to authenticate. I used the kubeconfig below to use the user veeru:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://xx.xx.xx.220:8443
  name: xx-xx-xx-220:8443
contexts:
- context:
    cluster: xx-xx-xx-220:8443
    namespace: default
    user: veeru/xx-xx-xx-220:8443
  name: area-51/xx-xx-xx-220:8443/veeru
current-context: area-51/xx-xx-xx-220:8443/veeru
kind: Config
preferences: {}
users:
- name: veeru/xx-xx-xx-220:8443
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
And I specified (like here) the user in the config (~/.hal/config) like below:
kubernetes:
  enabled: true
  accounts:
  - name: my-k8s-account
    requiredGroupMembership: []
    providerVersion: V1
    dockerRegistries:
    - accountName: my-docker-registry2
      namespaces: []
    configureImagePullSecrets: true
    namespaces: ["area-51"]
    user: veeru
    omitNamespaces: []
    kubeconfigFile: /home/ubuntu/.kube/config
    oauthScopes: []
    oAuthScopes: []
  primaryAccount: my-k8s-account
But Spinnaker is still using system:anonymous:
2018-01-22 08:35:13.929 ERROR 4639 --- [pool-4-thread-1] c.n.s.c.o.DefaultOrchestrationProcessor : com.netflix.spinnaker.clouddriver.kubernetes.v1.deploy.exception.KubernetesOperationException: Get Service openshifttest-dev in area-51 for account my-k8s-account failed: User "system:anonymous" cannot get services in the namespace "area-51": User "system:anonymous" cannot get services in project "area-51"
Is there any way to make Spinnaker use the configured user instead of system:anonymous?
UPDATE-1
Followed: https://blog.spinnaker.io/spinnaker-kubernetes-rbac-c40f1f73c172
I got the secret from kubectl describe secret spinnaker-service-account-token-9sl6q and updated the kubeconfig like below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://xx.xx.xx.xx:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: webapp
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    user: spinnaker-service-account
  name: spinnaker-context
current-context: spinnaker-context
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: spinnaker-service-account
  user:
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9....
Then I ran sudo hal deploy:
....
! ERROR Unable to communicate with your Kubernetes cluster: Failure
executing: GET at: https://xx.xx.xx.xx:6443/api/v1/namespaces. Message:
Forbidden! User spinnaker-service-account doesn't have permission. namespaces is
forbidden: User "system:serviceaccount:default:spinnaker-service-account" cannot
list namespaces at the cluster scope..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
....
I'm able to run:
$ kubectl get namespace webapp
NAME STATUS AGE
webapp Active 22m
I have specified the webapp namespace and the user spinnaker-service-account in ~/.hal/config.
I'm using GKE with basic authentication disabled. I have my spinnaker use a dedicated K8s service account that I created. In my ~/.kube/config I have tokens for each K8s cluster.
users:
- name: gke_operation-covfefe-1_asia-east1_testing-asia-east1
  user:
    token: token1
- name: gke_operation-covfefe-1_europe-west1_testing-europe-west1
  user:
    token: token2
- name: gke_operation-covfefe-1_us-central1_testing-us-central1
  user:
    token: token3
I got these tokens by running
kubectl get secret spinnaker-service-account -o json \
| jq -r .data.token \
| base64 -d
and then manually updating my ~/.kube/config file.
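The same update can also be scripted rather than editing ~/.kube/config by hand; a sketch for one of the clusters above, reusing the jq pipeline from the snippet:
kubectl config set-credentials gke_operation-covfefe-1_us-central1_testing-us-central1 \
  --token="$(kubectl get secret spinnaker-service-account -o json | jq -r .data.token | base64 -d)"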
Make sure your service account has the required RBAC permissions. See blog post here.
Update:
Also make sure you give the service account the required RBAC permissions. See the "Role" section of the blog post above or the guide here. When you test the RBAC permissions with kubectl make sure you're using the same service account as the one Spinnaker is using.
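One way to test this is with impersonation; a sketch, assuming the service account lives in the default namespace as in the error message above:
# Check permissions as the exact identity Spinnaker presents
kubectl auth can-i list namespaces \
  --as=system:serviceaccount:default:spinnaker-service-account
kubectl auth can-i get services -n webapp \
  --as=system:serviceaccount:default:spinnaker-service-account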
Update 2
If you want Spinnaker to act on all namespaces, use a ClusterRole and ClusterRoleBinding in your RBAC. The blog post only uses Role and RoleBinding, which restrict actions to particular namespaces. See this guide for the Cluster* way. Note the PR to fix a typo here.
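A minimal sketch of the Cluster* variant for the service account above (the names and the resource/verb list are illustrative; Spinnaker typically needs broader access in practice):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spinnaker-role            # illustrative name
rules:
- apiGroups: [""]
  resources: ["namespaces", "services", "pods", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-role-binding    # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spinnaker-role
subjects:
- kind: ServiceAccount
  name: spinnaker-service-account
  namespace: default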

Kubernetes: Authentication to UI with default config file fails

I have successfully set up a kubernetes cluster on AWS using kops and the following commands:
$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
When accessing the dashboard, I am prompted to either enter a token or
Please select the kubeconfig file that you have created to configure access to the cluster.
When creating the cluster, a ~/.kube/config was generated with the following form:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data: <some_key_or_certificate>
    client-key-data: <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
Why, when pointing the Kubernetes UI to the above file, do I get
Authentication failed. Please try again.
I tried the same and had the same problem. It turns out that kops sets up certificate-based authentication, which can't be used with the web UI. Instead, I tried token-based authentication. Next question: where do you find the token?
kubectl describe secret
This will show you the default token for the cluster. I assume this is very bad security practice, but if you're using the UI to improve your learning and understanding, it will get you moving in the right direction.
This Dashboard wiki page is about authentication. That's where I discovered how to do it.
In order to enable basic auth in the Dashboard, the --authentication-mode=basic flag has to be provided. By default it is set to --authentication-mode=token.
To get the token or to understand more about access control, please refer here.
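A short sketch of pulling a token you can paste into the dashboard login screen (assuming an older cluster where service-account token secrets are created automatically; the default service account in the default namespace is used here purely as an example):
# Find the secret that backs the service account, then decode its token
SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode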

Configure RBAC in K8s v1.8.1

I am following Configure RBAC to create user accounts. Everything works fine, but after updating the context, and before binding any roles to the created user, kubectl get pods against the apiserver still returns the pods.
apiserver configuration
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.crt --tls-private-key-file=/srv/kubernetes/server.key --authorization-mode=RBAC"
kubectl config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: http://172.16.3.23:8080
  name: local
contexts:
- context:
    cluster: local
    namespace: kube-system
    user: devops
  name: devops
current-context: devops
kind: Config
preferences: {}
users:
- name: devops
  user:
    client-certificate: /.cert/devops.crt
    client-key: /.cert/devops.key
P.S.: I am using a CentOS bare-metal environment.
The insecure port (http://...:8080) bypasses all authentication and authorization, so every request sent to it is treated as fully privileged and RBAC is never evaluated. Point the kubeconfig at the secure (HTTPS) port instead so the client certificate and RBAC rules actually apply.
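A minimal sketch of the kubeconfig change, assuming the apiserver listens on the default secure port 6443 (the question does not show --secure-port, so this is an assumption):
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    # use the TLS port so client certificates and RBAC are actually evaluated
    server: https://172.16.3.23:6443
  name: local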