Kubernetes: Authentication to UI with default config file fails

I have successfully set up a kubernetes cluster on AWS using kops and the following commands:
$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
When accessing the dashboard, I am prompted to either enter a token or
Please select the kubeconfig file that you have created to configure access to the cluster.
When creating the cluster, ~/.kube/config was created that has the following form:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data: <some_key_or_certificate>
    client-key-data: <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
Why, when pointing the Kubernetes UI to the above file, do I get
Authentication failed. Please try again.

I tried the same and had the same problem. It turns out that kops creates certificate-based authentication, and certificate-based authentication can't be used with the web UI. Instead, I tried using token-based authentication. Next question: where do you find the token?
kubectl describe secret
This will show you the default token for the cluster. I assume this is very bad security practice, but if you're using the UI to improve your learning and understanding then it will get you moving in the right direction.
This Dashboard wiki page is about authentication. That's where I discovered how to do it.
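For example, a minimal sketch of pulling out one of the service account tokens to paste into the Dashboard login (this assumes the default service account token in kube-system is acceptable for a learning setup; the grep pattern is illustrative):
$ kubectl -n kube-system get secrets
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep default-token | awk '{print $1}')
The long token: value at the bottom of the output is what the Dashboard token prompt expects.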

In order to enable basic auth in the Dashboard, the --authentication-mode=basic flag has to be provided. By default it is set to --authentication-mode=token.
To get the token or to understand more about access control, please refer here.
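As a concrete illustration (the namespace and Deployment names here assume the standard recommended manifest; adjust them for your install), the flag goes into the Dashboard container's args:
$ kubectl -n kubernetes-dashboard edit deployment kubernetes-dashboard
        args:
          - --auto-generate-certificates
          - --authentication-mode=basic
Keep in mind basic auth can only succeed end to end if the API server itself accepts basic auth, which is where the admin/password pair in the kubeconfig above comes from.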

Related

How to add more nodes in the self signed certificate of Kubernetes Dashboard

I finally managed to resolve my question about how to add more nodes to the CAs of the Master nodes (How to add extra nodes to the certificate-authority-data from a self signed k8s cluster?).
Now the problem that I am facing is that I want to use the kubeconfig file, e.g. ~/.kube/config, to access the Dashboard.
I managed to figure it out with the following syntax:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
The problem that I am having is that I need to use the IP of one of the Master nodes in order to be able to reach the Dashboard. I would like to be able to use the LB IP to reach the Dashboard.
I assume this is related to the same problem that I had before as I can see from the file that the CAs are autogenerated.
args:
  - --auto-generate-certificates
  - etc etc
...
Apart from creating the CAs yourself in order to use them, is there any option to pass e.g. IP1 / IP2 etc. in a flag within the file?
Update: I am deploying the Dashboard the recommended way, kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (Deploying the Dashboard UI). The deployment is on prem, but I have configured the cluster with an external load balancer (HAProxy) towards the API, plus Ingress with type: LoadBalancer. Everything seems to be working as expected apart from the Dashboard UI (through the LB IP). I am also using authorization-mode: Node,RBAC (if relevant).
I am accessing the Dashboard through Ingress over HTTPS, e.g. https://dashboard.example.com.
I get the error Not enough data to create auth info structure. I found the token: xxx solution in this question: Kubernetes Dashboard access using config file Not enough data to create auth info structure.
If I swap the LB IP for one of the Master node IPs, then I can access the UI with the kubeconfig file.
I just updated to the latest version of the dashboard, v2.0.5; it is not working with the kubeconfig button / file, but it works with the token directly (kubernetes/Dashboard v2.0.5). With the previous version everything works as described above. No errors in the pod logs.

aws-iam-authenticator & EKS

I've deployed a test EKS cluster with the appropriate configMap, and users that are SSO'd in can access the cluster by exporting session creds (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, etc.) and having the aws-iam-authenticator client installed in their terminal. The problem comes in when users attempt to use an AWS SSO profile stored in ~/.aws/config with the aws-iam-authenticator. The error that's received when running any kubectl command is the following:
$ kubectl get all
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've tested this on my local machine (AWS CLI v2) and I haven't had any success. I've exported an AWS profile found in the ~/.aws/config file via export AWS_PROFILE=User1, and running aws sts get-caller-identity correctly shows the profile being exported. I've switched between multiple named profiles and each one gets the correct identity and permissions; however, when running any kubectl command I get the above error. I've also tried symlinking config -> credentials, but no luck. The only way it works is if I export the access key, secret key, and session token as environment variables.
I suppose I can live with having to paste in the dynamic creds that come from AWS SSO, but my need to solve problems won't let me give up :(. I was following the thread found in this GitHub issue, but no luck. The kubeconfig file that I have set up follows AWS's documentation.
I suspect there may be something off with the aws-iam-authenticator server deployment, but nothing shows in the pod logs. Here's a snippet from the tool's GitHub page, which I think I followed correctly, although I did skip step 3 for reasons that I forgot:
The Kubernetes API integrates with AWS IAM Authenticator for
Kubernetes using a token authentication webhook. When you run
aws-iam-authenticator server, it will generate a webhook configuration
file and save it onto the host filesystem. You'll need to add a single
additional flag to your API server configuration:
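(The flag itself isn't reproduced in the snippet; for reference, the kube-apiserver option being described is --authentication-token-webhook-config-file, pointed at the webhook kubeconfig the authenticator generates. The path below is only illustrative:)
--authentication-token-webhook-config-file=/var/aws-iam-authenticator/webhook-kubeconfig.yaml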
Kube Config File
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-cluster-name"
        - "-r"
        - "EKS-ADMIN-ROLE:ARN:::::::"
      env:
        - name: AWS_PROFILE
          value: "USER"
The AWS CLI v2 now supports AWS SSO, so I decided to update my kubeconfig file to leverage the aws command instead of aws-iam-authenticator. Authentication via SSO is now a breeze! It looks like AWS wanted to get away from needing an additional binary to authenticate to EKS clusters, which is fine by me! Hope this helps.
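For reference, a sketch of what the users entry looks like after that switch (the cluster name and profile value are placeholders, not values from the question):
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - eks-cluster-name
      env:
        - name: AWS_PROFILE
          value: "User1"
With this in place, aws eks get-token picks up the SSO profile the same way aws sts get-caller-identity does.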

Can kubectl work from an assumed role from AWS

I'm using Amazon EKS for Kubernetes deployment (initially created by an AWS admin user), and I'm currently having difficulty using the AWS credentials from AWS STS assume-role to execute kubectl commands against the stack.
I have 2 EKS stacks in 2 different AWS accounts (PROD & NONPROD), and I'm trying to get the CI/CD tool to deploy to both Kubernetes stacks with the credentials provided by AWS STS assume-role, but I'm constantly getting an error such as error: You must be logged in to the server (the server has asked for the client to provide credentials).
I have followed the following link to add additional AWS IAM role to the config:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
But I'm not sure what I'm not doing right.
I ran "aws eks update-kubeconfig" to update the local .kube/config file, contents populated as below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
    user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
        - token
        - -i
        - triage-eks
      command: aws-iam-authenticator
and had previously updated Kubernetes aws-auth ConfigMap with an additional role as below:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
My CI/CD EC2 instance can assume the ci_deployer role for either AWS accounts.
Expected: I can call "kubectl version" and see both Client and Server versions.
Actual: I get "the server has asked for the client to provide credentials".
What is still missing?
After further testing, I can confirm kubectl will only work from an environment (e.g. my CI EC2 instance with an AWS instance role) in the same AWS account where the EKS stack is created. This means that my CI instance in account A will not be able to communicate with EKS in account B, even if the CI instance can assume a role from account B and the account B role is included in the aws-auth ConfigMap of the account B EKS cluster. I hope it's due to missing configuration, as I find it rather undesirable if a CI tool can't deploy to multiple EKS clusters in multiple AWS accounts using role assumption.
Look forward to further #Kubernetes support on this
Can kubectl work from an assumed role from AWS
Yes, it can work. A good way to troubleshoot it is to run from the same command line where you are running kubectl:
$ aws sts get-caller-identity
You can see the Arn for the role (or user) and then make sure there's a trust relationship in IAM between that and the role that you specify here in your kubeconfig:
command: aws-iam-authenticator
args:
  - "token"
  - "-i"
  - "<cluster-name>"
  - "-r"
  - "<role-you-want-to-assume-arn>"
or with the newer option:
command: aws
args:
  - eks
  - get-token
  - --cluster-name
  - <cluster-name>
  - --role-arn
  - <role-you-want-to-assume-arn>
Note that if you are using aws eks update-kubeconfig you can pass in the --role-arn flag to generate the above in your kubeconfig.
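For example (the cluster name and role ARN below are placeholders), something along these lines writes that role-assuming exec stanza into your kubeconfig for you:
$ aws eks update-kubeconfig --name <cluster-name> --role-arn <role-you-want-to-assume-arn>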
In your case, some things that you can look at:
Are the credential environment variables set in your CI?
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Is your ~/.aws/credentials file populated correctly in your CI, with something like this:
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxx
Generally, the environment variables take precedence so it could be that you could have different credentials altogether in those environment variables too.
It could also be the AWS_PROFILE env variable or the AWS_PROFILE config in ~/.kube/config
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        - "-r"
        - "<role-arn>"
      env:
        - name: AWS_PROFILE   # <== is this value set?
          value: "<aws-profile>"
Is the profile set correctly under ~/.aws/config?
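For illustration only (the profile name and values here are made up, not taken from the question), a role-assuming profile in ~/.aws/config usually looks something like:
[profile ci-deployer]
region = eu-west-1
role_arn = arn:aws:iam::[hidden]:role/ci_deployer
source_profile = default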
From Step 1: Create Your Amazon Cluster
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
As you have discovered you can only access the cluster with the same user/role that created the EKS cluster in the first place.
There is a way to add additional roles to the cluster after creation by editing the aws-auth ConfigMap that has been created.
Add User Role
By editing the aws-auth ConfigMap you can add different levels of access based on the role of the user.
First you MUST have the "system:node:{{EC2PrivateDNSName}}" user
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This is required for Kubernetes to even work, giving the nodes the ability to join the cluster. The "ARN of instance role" is the role that includes the required policies (AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, etc.).
Below that add your role
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: ci-deployer
      groups:
        - system:masters
The 'username' can actually be set to almost anything. It appears to only be important if there are custom roles and bindings added to your EKS cluster.
Also, use the command aws sts get-caller-identity to validate that the environment/shell and the AWS credentials are properly configured. When correctly configured, get-caller-identity should return the same role ARN specified in aws-auth.
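As a rough illustration (the account ID and session name are placeholders), when the assumed role is active the output shows the STS assumed-role form of that ARN:
$ aws sts get-caller-identity
{
    "UserId": "AROAXXXXXXXXXXXXXXXXX:ci-session",
    "Account": "[hidden]",
    "Arn": "arn:aws:sts::[hidden]:assumed-role/ci_deployer/ci-session"
}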

ssl authentication for gcp kubernetes cluster is not working

For automation purposes, I have generated the Kubernetes configuration file using the API below.
request = service.projects().zones().clusters().get(
    projectId=project_id, zone=zone, clusterId=cluster_id)
The cluster has both basic and SSL (client certificate) authentication enabled, and only the basic authentication is working properly. When I change the user context from admin to ca-user, I get the below error.
Error from server (Forbidden): nodes is forbidden: User "client" cannot list nodes at the cluster scope: Unknown user "client"
The generated configuration file is given below.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *************
    server: https://*******
  name: gke_demo-205812_us-central1-a_cluster-1
contexts:
- context:
    cluster: gke_demo-205812_us-central1-a_cluster-1
    user: ca-user
  name: gke_demo-205812_us-central1-a_cluster-1
current-context: gke_demo-205812_us-central1-a_cluster-1
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: *****************
    username: admin
- name: ca-user
  user:
    client-certificate-data: ******************
    client-key-data: ************************
Thanks in Advance. :)
Try after running this command:
kubectl create clusterrolebinding client-admin \
--clusterrole=cluster-admin \
--user=client
You are giving cluster-admin permission to this user.
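To check that the binding took effect, something like the following can be run (a sketch; it assumes the certificate's common name really is client, as the error message suggests):
$ kubectl auth can-i list nodes --as=client
yes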

How to refresh cluster access-token in GKE

I have three clusters in Google Kubernetes Engine and I am trying to see the Kubernetes dashboard, but I get the same access token for two different clusters.
Using the kubectl config view command I get:
- name: gke_PROJECT_ZONE_A_NAME_A
  user:
    auth-provider:
      config:
        access-token: TOKEN-A
- name: gke_PROJECT_ZONE_B_NAME_B
  user:
    auth-provider:
      config:
        access-token: TOKEN-B
- name: gke_PROJECT_ZONE_C_NAME_C
  user:
    auth-provider:
      config:
        access-token: TOKEN-B
where gke_PROJECT_ZONE_B_NAME_B and gke_PROJECT_ZONE_C_NAME_C share the same access token; hence when I connect via kubectl proxy and insert the token I get the same dashboard.
How can I refresh the access token for cluster B or C so I'll get the desired dashboard?
I've tried to use gcloud container clusters get-credentials CLUSTER-C --zone ZONE-C --project MY_PROJECT, which returns
Fetching cluster endpoint and auth data. kubeconfig entry generated
for CLUSTER-C.
and afterwards I don't get any access token for CLUSTER-C.
thank you
Restarting the UI service by running kubectl proxy, entering the UI via http://localhost:8001/ui and refreshing the page causes the access token to refresh.
If you know your access token for CLUSTER-C, you can set it explicitly:
$ kubectl config set-credentials gke_PROJECT_ZONE_C_NAME_C --token="<access-token-for-cluster-c>"
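If you don't have a token handy, one way to obtain a fresh one (a sketch, assuming your gcloud account has access to CLUSTER-C) is to print the current OAuth access token and plug it into the command above:
$ gcloud auth print-access-token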