Spinnaker: authenticate to K8s with a service account, not as system:anonymous

I see Spinnaker using the system:anonymous user to authenticate to K8s, but I want it to use a specific user that I already created in K8s. I used the kubeconfig below for the user veeru:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://xx.xx.xx.220:8443
  name: xx-xx-xx-220:8443
contexts:
- context:
    cluster: xx-xx-xx-220:8443
    namespace: default
    user: veeru/xx-xx-xx-220:8443
  name: area-51/xx-xx-xx-220:8443/veeru
current-context: area-51/xx-xx-xx-220:8443/veeru
kind: Config
preferences: {}
users:
- name: veeru/xx-xx-xx-220:8443
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
And I specified the user (like here) in the config (~/.hal/config) like below:
kubernetes:
  enabled: true
  accounts:
  - name: my-k8s-account
    requiredGroupMembership: []
    providerVersion: V1
    dockerRegistries:
    - accountName: my-docker-registry2
      namespaces: []
    configureImagePullSecrets: true
    namespaces: ["area-51"]
    user: veeru
    omitNamespaces: []
    kubeconfigFile: /home/ubuntu/.kube/config
    oauthScopes: []
    oAuthScopes: []
  primaryAccount: my-k8s-account
But Spinnaker is still using system:anonymous:
2018-01-22 08:35:13.929 ERROR 4639 --- [pool-4-thread-1] c.n.s.c.o.DefaultOrchestrationProcessor : com.netflix.spinnaker.clouddriver.kubernetes.v1.deploy.exception.KubernetesOperationException: Get Service openshifttest-dev in area-51 for account my-k8s-account failed: User "system:anonymous" cannot get services in the namespace "area-51": User "system:anonymous" cannot get services in project "area-51"
Is there any way to make Spinnaker authenticate with the configured user instead of system:anonymous?
UPDATE-1
Followed: https://blog.spinnaker.io/spinnaker-kubernetes-rbac-c40f1f73c172
Got the secret from kubectl describe secret spinnaker-service-account-token-9sl6q and updated the kubeconfig like below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://xx.xx.xx.xx:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: webapp
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    user: spinnaker-service-account
  name: spinnaker-context
current-context: spinnaker-context
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: spinnaker-service-account
  user:
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9....
Then I ran sudo hal deploy:
....
! ERROR Unable to communicate with your Kubernetes cluster: Failure
executing: GET at: https://xx.xx.xx.xx:6443/api/v1/namespaces. Message:
Forbidden! User spinnaker-service-account doesn't have permission. namespaces is
forbidden: User "system:serviceaccount:default:spinnaker-service-account" cannot
list namespaces at the cluster scope..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
....
I'm able to run
$ kubectl get namespace webapp
NAME     STATUS   AGE
webapp   Active   22m
I have specified the webapp namespace and the user spinnaker-service-account in ~/.hal/config.
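For reference, the account block in ~/.hal/config at this point is roughly the earlier one with the namespace and user swapped (a sketch; the remaining fields are unchanged):
kubernetes:
  enabled: true
  accounts:
  - name: my-k8s-account
    providerVersion: V1
    namespaces: ["webapp"]
    user: spinnaker-service-account
    kubeconfigFile: /home/ubuntu/.kube/config
  primaryAccount: my-k8s-account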

I'm using GKE with basic authentication disabled. I have Spinnaker use a dedicated K8s service account that I created. In my ~/.kube/config I have tokens for each K8s cluster.
users:
- name: gke_operation-covfefe-1_asia-east1_testing-asia-east1
  user:
    token: token1
- name: gke_operation-covfefe-1_europe-west1_testing-europe-west1
  user:
    token: token2
- name: gke_operation-covfefe-1_us-central1_testing-us-central1
  user:
    token: token3
I got these tokens by running
kubectl get secret spinnaker-service-account -o json \
| jq -r .data.token \
| base64 -d
and then manually updating my ~/.kube/config file.
Make sure your service account has the required RBAC permissions. See the blog post here.
Update:
Also make sure you give the service account the required RBAC permissions. See the "Role" section of the blog post above or the guide here. When you test the RBAC permissions with kubectl make sure you're using the same service account as the one Spinnaker is using.
Update 2
If you want Spinnaker to act on all namespaces, use ClusterRole and ClusterRoleBinding in your RBAC. The blog post only uses Role and RoleBinding, which restrict actions to particular namespaces. See this guide for the Cluster* way. Note the PR to fix a typo here.
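For illustration, a ClusterRole/ClusterRoleBinding pair along the lines of that guide might look roughly like this (a sketch, not the guide's exact manifest; the role and binding names are made up, the service account name and default namespace come from the question above, and the rule list is deliberately broad rather than authoritative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spinnaker-role
rules:
- apiGroups: [""]
  resources: ["namespaces", "services", "pods", "secrets", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "replicasets", "ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spinnaker-role
subjects:
- kind: ServiceAccount
  name: spinnaker-service-account
  namespace: default
With a binding like this in place, the earlier "cannot list namespaces at the cluster scope" error from hal deploy should go away, because the service account can then list namespaces cluster-wide.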

Related

kubectl minikube renew certificate

I'm using kubectl to access the API server on my minikube cluster on Ubuntu, but when I try to run a kubectl command I get a certificate expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How can I renew this certificate?
Posted community wiki for better visibility. Feel free to expand it.
There is a similar issue opened on minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the servers to reboot the microk8s cluster.
minikube delete - deletes the local Kubernetes cluster - worked for me
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
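In short, the two recovery paths above come down to something like this (the microk8s commands apply only if you are actually on MicroK8s, which the 127.0.0.1:16443 config above suggests; the minikube path simply recreates the local cluster with fresh certificates):
# MicroK8s: regenerate the certificates, then restart the cluster
sudo microk8s.refresh-certs
sudo microk8s.stop
sudo microk8s.start

# minikube: delete the local cluster and recreate it
minikube delete
minikube start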

How to establish the New service connection for EKS in AzureDevOps?

I want to create CI/CD for deploying pods on EKS using Azure DevOps. First I installed kubectl and aws-iam-authenticator on my system (Windows) and created a Kubernetes cluster in AWS. To communicate with the cluster, I updated the kubeconfig using the command aws eks --region us-east-2 update-kubeconfig --name cluster_name. Then I copied the updated config file and pasted it into the Azure DevOps New service connection form. While verifying the test connection, I get the error below.
No user credentials found for a cluster in KubeConfig content. Make sure that the credentials exist and try again.
Below are my kubeconfig & aws-auth-cm.yaml files.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *********
    server: https://*************.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:*********:cluster/testconnection
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:*********:cluster/testconnection
    user: arn:aws:eks:us-east-1:********:cluster/testconnection
  name: arn:aws:eks:us-east-1:**********:cluster/testconnection
current-context: arn:aws:eks:us-east-1:*******:cluster/testconnection
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:*********:cluster/testconnection
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - testconnection
      command: aws
--- aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::*********:role/eks_nodegroup
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes
So, could anyone point me to the configuration I have missed to establish the connection from Azure DevOps to EKS?
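One thing worth noting: the users entry above relies on an exec plugin (aws eks get-token), which the service connection cannot run, and that is likely why it reports that no user credentials were found. A kubeconfig whose user carries a static token is the kind of content the connection can parse; a rough sketch, assuming a service account with a token secret already exists in the cluster (the name azure-devops-sa and the token are placeholders, not values from the question):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <same CA data as above>
    server: https://*************.gr7.us-east-1.eks.amazonaws.com
  name: eks
contexts:
- context:
    cluster: eks
    user: azure-devops-sa
  name: eks
current-context: eks
users:
- name: azure-devops-sa
  user:
    token: <decoded service-account token>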

kubectl: error You must be logged in to the server (Unauthorized)

I've created a service account for CI purposes and am testing it out. Upon trying any kubectl command, I get the error:
error: You must be logged in to the server (Unauthorized)
Below is my .kube/config file
apiVersion: v1
clusters:
- cluster:
    server: <redacted>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: bamboo
  name: default
current-context: 'default'
kind: Config
preferences: {}
users:
- name: bamboo
  user:
    token: <redacted>
The service account exists and has the edit cluster role attached via a cluster role binding.
What am I doing wrong?
I can reproduce the error if I copy the token directly without decoding it. I then applied the following steps to decode and set the token, and it works as expected.
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<serviceaccount-name> -o jsonpath='{.secrets[0].name}'`
$ TOKEN=`kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
$ kubectl config set-credentials <service-account-name> --token=$TOKEN
So, I think it might be your case.
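A quick way to confirm the RBAC side independently of the kubeconfig is to impersonate the service account (a sketch; substitute your own namespace and service account name):
# prints "yes" if the edit binding is effective for the service account
kubectl auth can-i get pods \
  --as=system:serviceaccount:<namespace>:<serviceaccount-name>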

ssl authentication for gcp kubernetes cluster is not working

For automation purposes, I have generated the Kubernetes configuration file using the API below.
request = service.projects().zones().clusters().get(
    projectId=project_id, zone=zone, clusterId=cluster_id)
The cluster has both basic and SSL (client certificate) authentication enabled, and only basic authentication works properly. When I change the user context from admin to ca-user, I get the error below.
Error from server (Forbidden): nodes is forbidden: User "client" cannot list nodes at the cluster scope: Unknown user "client"
The generated configuration file is given below.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *************
    server: https://*******
  name: gke_demo-205812_us-central1-a_cluster-1
contexts:
- context:
    cluster: gke_demo-205812_us-central1-a_cluster-1
    user: ca-user
  name: gke_demo-205812_us-central1-a_cluster-1
current-context: gke_demo-205812_us-central1-a_cluster-1
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: *****************
    username: admin
- name: ca-user
  user:
    client-certificate-data: ******************
    client-key-data: ************************
Thanks in Advance. :)
Try after running this command:
kubectl create clusterrolebinding client-admin \
--clusterrole=cluster-admin \
--user=client
You are giving cluster-admin permission to this user.
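For completeness, the declarative equivalent of that command is roughly this manifest (a sketch; the binding name mirrors the command above):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: client
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin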

Configure RBAC in K8s v1.8.1

I am following Configure RBAC to create user accounts. Everything works fine, but after updating the context, and before binding any roles to the newly created user, kubectl get pods against the apiserver already returns the pods.
apiserver configuration
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.crt --tls-private-key-file=/srv/kubernetes/server.key --authorization-mode=RBAC"
kubectl config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: http://172.16.3.23:8080
  name: local
contexts:
- context:
    cluster: local
    namespace: kube-system
    user: devops
  name: devops
current-context: devops
kind: Config
preferences: {}
users:
- name: devops
  user:
    client-certificate: /.cert/devops.crt
    client-key: /.cert/devops.key
P.S.: I am using a CentOS bare-metal environment.
The insecure port (http://...:8080) bypasses all authentication and authorization.
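So any kubeconfig that talks to http://...:8080 never exercises RBAC, which is why the devops user can list pods before any role is bound. To actually test the user against RBAC, point the cluster entry at the secure port instead, roughly like this (a sketch; 6443 is the usual default secure port and is an assumption here, since the apiserver flags above don't show it):
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://172.16.3.23:6443
  name: local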