aws-iam-authenticator & EKS - kubernetes

I've deployed a test EKS cluster with the appropriate ConfigMap, and users who are signed in via SSO can access the cluster by exporting session credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, etc.) and having the aws-iam-authenticator client installed in their terminal. The problem comes in when users attempt to use an AWS SSO profile stored in ~/.aws/config with aws-iam-authenticator. The error received when running any kubectl command is the following:
$ kubectl get all
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've tested this on my local machine (AWS CLI v2) without any success. I've exported an AWS profile found in the ~/.aws/config file via export AWS_PROFILE=User1, and running aws sts get-caller-identity correctly shows the profile being used. I've switched between multiple named profiles and each one gets the correct identity and permissions; however, when running any kubectl command I get the above error. I've also tried symlinking config -> credentials, but no luck. The only way it works is if I export the access key, secret key, and session token as environment variables.
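For reference, the named SSO profile in ~/.aws/config looks roughly like the sketch below; the portal URL, account ID, role name, and region are placeholders rather than my actual values:
[profile User1]
# AWS CLI v2 SSO settings (placeholder values)
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = eu-west-1
sso_account_id = 111122223333
sso_role_name = EKSAdmin
region = eu-west-1
output = json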
I suppose I can live with having to paste in the dynamic credentials that come from AWS SSO, but my need to solve problems won't let me give up :(. I was following the thread in this GitHub issue, but no luck. The kubeconfig file that I have set up follows AWS's documentation.
I suspect there may be something off with the aws-iam-authenticator server deployment, but nothing shows up in the pod logs. Here's a snippet from the tool's GitHub page, which I think I followed correctly, although I did skip step 3 for reasons I've forgotten:
The Kubernetes API integrates with AWS IAM Authenticator for
Kubernetes using a token authentication webhook. When you run
aws-iam-authenticator server, it will generate a webhook configuration
file and save it onto the host filesystem. You'll need to add a single
additional flag to your API server configuration:
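For context, the flag the quote refers to is the kube-apiserver webhook token authentication flag; a rough example is shown below, where the path is illustrative and depends on where the authenticator writes its generated webhook kubeconfig on the API server host:
# kube-apiserver flag (illustrative path)
--authentication-token-webhook-config-file=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml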
Kube Config File
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-cluster-name"
        - "-r"
        - "EKS-ADMIN-ROLE:ARN:::::::"
      env:
        - name: AWS_PROFILE
          value: "USER"

The AWS CLI v2 now supports AWS SSO, so I decided to update my kubeconfig file to leverage the aws command instead of aws-iam-authenticator. Authentication via SSO is now a breeze! It looks like AWS wanted to get away from requiring an additional binary to authenticate to EKS clusters, which is fine by me! Hope this helps.
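A rough sketch of what the users section can look like with the aws CLI; the cluster name, region, and profile name below are placeholders, not my actual values:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - eks-cluster-name   # placeholder
        - --region
        - eu-west-1          # placeholder
      env:
        - name: AWS_PROFILE  # named SSO profile from ~/.aws/config
          value: "User1"     # placeholder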

Related

How to add more nodes in the self signed certificate of Kubernetes Dashboard

I finally managed to resolve my earlier question about adding more nodes to the CAs of the master nodes (How to add extra nodes to the certificate-authority-data from a self-signed k8s cluster?).
Now the problem that I am facing is that I want to use a kubeconfig file, e.g. ~/.kube/config, to access the Dashboard.
I managed to figure it out with the following syntax:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
The problem that I am having is that I need to use the IP of one of the Master nodes in order to be able to reach the Dashboard. I would like to be able to use the LB IP to reach the Dashboard.
I assume this is related to the same problem that I had before as I can see from the file that the CAs are autogenerated.
args:
  - --auto-generate-certificates
  - etc etc
...
Apart from creating the CAs yourself in order to use them, is there any option to pass e.g. IP1 / IP2 etc. in a flag within the file?
Update: I am deploying the Dashboard the recommended way, kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (Deploying the Dashboard UI). The deployment is on-prem, but I have configured the cluster with an external load balancer (HAProxy) in front of the API, plus Ingress, and type: LoadBalancer on the Ingress. Everything seems to be working as expected apart from the Dashboard UI (through the LB IP). I am also using authorization-mode: Node,RBAC (if relevant).
I am accessing the Dashboard through Ingress over HTTPS, e.g. https://dashboard.example.com.
I get the error Not enough data to create auth info structure. I found the token: xxx solution in this question: Kubernetes Dashboard access using config file Not enough data to create auth info structure.
If I swap the LB IP for one of the master node IPs, then I can access the UI with the kubeconfig file.
I have just updated to the latest version of the Dashboard, v2.0.5; it does not work with the kubeconfig button / file, but it does work with the token directly (kubernetes/Dashboard v2.0.5). With the previous version everything works as described above. There are no errors in the pod logs.

Azure Devops kubernetes service connection for "kubeconfig" option does not appear to work against an AAD openidconnect integrated AKS cluster

When using the "kubeconfig" option, I get the following error when I click on "verify connection":
Error: TFS.WebApi.Exception: No user credentials found for cluster in KubeConfig content. Make sure that the credentials exist and try again.
The kubeconfig I pasted in (and selected the correct context from) is a direct copy-paste of what is in my ~/.kube/config file, and that file works fine with kubectl:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxx
    server: https://aks-my-stage-cluster-xxxxx.hcp.eastus.azmk8s.io:443
  name: aks-my-stage-cluster-xxxxx
contexts:
- context:
    cluster: aks-my-stage-cluster-xxxxx
    user: clusterUser_aks-my-stage-cluster-xxxxx_aks-my-stage-cluster-xxxxx
  name: aks-my-stage-cluster-xxxxx
current-context: aks-my-stage-cluster-xxxxx
kind: Config
preferences: {}
users:
- name: clusterUser_aks-my-stage-cluster-xxxxx_aks-my-stage-cluster-xxxxx
  user:
    auth-provider:
      config:
        access-token: xxxxx.xxx.xx-xx-xx-xx-xx
        apiserver-id: xxxx
        client-id: xxxxx
        environment: AzurePublicCloud
        expires-in: "3599"
        expires-on: "1572377338"
        refresh-token: xxxx
        tenant-id: xxxxx
      name: azure
Azure DevOps has an option to save the service connection without verification:
Even though the verification fails when editing the service connection, pipelines that use the service connection do work in my case.
Depending on the pasted kubeconfig, you might encounter a second problem where the Azure DevOps GUI for the service connection doesn't save or close, but also doesn't give you any error message. By inspecting the network traffic in e.g. Firefox's developer tools, I found out that the problem was the kubeconfig value being too long. Only about 20,000 characters are allowed. After removing irrelevant entries from the config, it worked.
PS: Another workaround is to run kubelogin in a script step in your pipeline.
It seems it's not enough just to use a kubeconfig converted with kubelogin. The plugin is required for kubectl to make a test connection, and it probably isn't used by the Azure DevOps service connection verification.
As a workaround that can work on a self-hosted build agent, you can install kubectl, kubelogin, and whatever other software you need to work with your AKS cluster, and use shell scripts like:
export KUBECONFIG=~/.kube/config
kubectl apply -f deployment.yaml
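A rough sketch of such a pipeline step, assuming the agent has the Azure CLI and kubelogin installed and is already logged in (the resource group and cluster names are placeholders):
# fetch credentials for the cluster (placeholder names)
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
# rewrite the kubeconfig to use kubelogin with the Azure CLI token
kubelogin convert-kubeconfig -l azurecli
# now kubectl can authenticate non-interactively
kubectl apply -f deployment.yaml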
You can try running the command below to get the kubeconfig, and then copy the content of the ~/.kube/config file into the service connection and try again.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
After running the above command and copying the config from ~/.kube/config on my local machine, I successfully added my Kubernetes connection using the kubeconfig option.
You can also refer to the steps here.

What is the difference between a service account and a context in Kubernetes?

What are the practical differences between the two? When should I choose one over the other?
For example, suppose I'd like to give a developer in my project access to just view the logs of a pod. It seems either a service account or a context could be assigned these permissions via a RoleBinding.
What is Service Account?
From Docs
User accounts are for humans. Service accounts are for processes,
which run in pods.
User accounts are intended to be global...Service
accounts are namespaced.
Context
A context relates to the kubeconfig file (~/.kube/config). As you know, the kubeconfig file is a YAML file; the contexts section holds your user/token and cluster references. Contexts are really useful when you have multiple clusters: you can define all your clusters and users in a single kubeconfig file and then switch between them with the help of contexts (example: kubectl config --kubeconfig=config-demo use-context dev-frontend).
From Docs
apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp
As you can see above, there are 3 contexts, each holding references to a cluster and a user.
..if I'd like to give a developer in my project access to just view the
logs of a pod. It seems both a service account or a context could be
assigned these permissions via a RoleBinding.
That's correct: you need to create a service account, a Role (or ClusterRole), and a RoleBinding (or ClusterRoleBinding), then generate a kubeconfig file that contains the service account token and give it to your developer.
I have a script that generates a kubeconfig file and takes the service account name as an argument. Feel free to check it out.
UPDATE:
If you want to create Role and RoleBinding, this might help
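For the pod-logs example from the question, a minimal sketch might look like the manifests below, assuming a namespace dev and a service account named log-viewer (both placeholder names):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-viewer          # placeholder name
  namespace: dev            # placeholder namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]   # pods/log is the subresource behind kubectl logs
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-log-reader-binding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: log-viewer
  namespace: dev
roleRef:
  kind: Role
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io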
Service Account: A service account represents an identity for processes that run in a pod. When a process is authenticated through a service account, it can contact the API server and access cluster resources. If a pod doesn’t have an assigned service account, it gets the default service account.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace and you can access the API from inside a pod using automatically mounted service account credentials.
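As a rough illustration of those auto-mounted credentials, from inside a pod you could call the API server with something like the following; the paths are the standard in-pod locations and the pods listing is just an example request:
# read the auto-mounted service account credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# call the API server through the in-cluster service name
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods"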
Context: A context is just a set of access parameters that contains a Kubernetes cluster, a user, and a namespace.
The current context is the cluster that is currently the default for kubectl and all kubectl commands run against that cluster.
They are two different concepts. A context most probably refers to an abstraction relating to the kubectl configuration, where a context can be associated with a service account.
For some reason I assumed a context was just another method of authentication.

Can kubectl work from an assumed role from AWS

I'm using Amazon EKS for Kubernetes deployment (the cluster was initially created by an AWS admin user), and I'm currently having difficulty using the AWS credentials from AWS STS assume-role to execute kubectl commands against the stack.
I have 2 EKS stacks in 2 different AWS accounts (PROD & NONPROD), and I'm trying to get the CI/CD tool to deploy to both Kubernetes stacks with the credentials provided by AWS STS assume-role, but I constantly get errors such as error: You must be logged in to the server (the server has asked for the client to provide credentials).
I have followed the following link to add additional AWS IAM role to the config:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
But I'm not sure what I'm doing wrong.
I ran "aws eks update-kubeconfig" to update the local .kube/config file, contents populated as below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
    user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - triage-eks
      command: aws-iam-authenticator
and had previously updated the Kubernetes aws-auth ConfigMap with an additional role as below:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
My CI/CD EC2 instance can assume the ci_deployer role for either AWS accounts.
Expected: I can call "kubectl version" to see both Client and Server versions
Actual: but I get "the server has asked for the client to provide credentials"
What is still missing?
After further testing, I can confirm kubectl will only work from an environment (e.g. my CI EC2 instance with an AWS instance role) in the same AWS account where the EKS stack is created. This means that my CI instance in account A will not be able to communicate with EKS in account B, even if the CI instance can assume a role in account B and the account B role is included in the aws-auth ConfigMap of the account B EKS cluster. I hope it's due to missing configuration, as it would be rather undesirable if a CI tool couldn't deploy to multiple EKS clusters across multiple AWS accounts using role assumption.
Looking forward to further Kubernetes support on this.
Can kubectl work from an assumed role from AWS
Yes, it can work. A good way to troubleshoot it is to run from the same command line where you are running kubectl:
$ aws sts get-caller-identity
That shows the ARN for the role (or user); then make sure there's a trust relationship in IAM between it and the role that you specify here in your kubeconfig:
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "<cluster-name>"
- "-r"
- "<role-you-want-to-assume-arn>"
or with the newer option:
command: aws
args:
- eks
- get-token
- --cluster-name
- <cluster-name>
- --role
- <role-you-want-to-assume-arn>
Note that if you are using aws eks update-kubeconfig you can pass in the --role-arn flag to generate the above in your kubeconfig.
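Roughly, that looks like the command below; the cluster name, region, and role ARN are placeholders:
# placeholders: cluster name, region, and role ARN
aws eks update-kubeconfig \
  --name demo-eks \
  --region eu-west-1 \
  --role-arn arn:aws:iam::111122223333:role/ci_deployer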
In your case, some things that you can look at:
Are the credential environment variables set in your CI?
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Is your ~/.aws/credentials file populated correctly in your CI, with something like this:
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxx
Generally, the environment variables take precedence, so you could have different credentials altogether in those environment variables.
It could also be the AWS_PROFILE env variable or the AWS_PROFILE entry in ~/.kube/config:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        - "-r"
        - "<role-arn>"
      env:
        - name: AWS_PROFILE  # <== is this value set?
          value: "<aws-profile>"
Is the profile set correctly under ~/.aws/config?
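For a cross-account CI setup like this one, the named profile in ~/.aws/config might look something like the sketch below, where the profile name, account ID, role name, and region are placeholders:
[profile ci-deployer]
# role in the target account that the CI instance is allowed to assume
role_arn = arn:aws:iam::111122223333:role/ci_deployer
# use the EC2 instance role as the source of the base credentials
credential_source = Ec2InstanceMetadata
region = eu-west-1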
From Step 1: Create Your Amazon Cluster
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
As you have discovered, you can only access the cluster with the same user/role that created the EKS cluster in the first place.
There is a way to add additional roles to the cluster after creation by editing the aws-auth ConfigMap that has been created.
Add User Role
By editing the aws-auth ConfigMap you can add different levels of access based on the role of the user.
First you MUST have the "system:node:{{EC2PrivateDNSName}}" user
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This is required for Kubernetes to even work, giving the nodes the ability to join the cluster. The "ARN of instance role" is the role that includes the required policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly etc.
Below that, add your role:
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: ci-deployer
      groups:
        - system:masters
The 'username' can actually be set to almost anything. It only appears to matter if custom roles and bindings have been added to your EKS cluster.
Also, use the command 'aws sts get-caller-identity' to validate that the environment/shell and the AWS credentials are properly configured. When correctly configured, 'get-caller-identity' should return the same role ARN specified in aws-auth.
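Putting that together, a quick check from the CI shell might look like this; the ARN shown is a placeholder and should match the role you mapped in aws-auth:
# confirm the shell is actually using the assumed role
aws sts get-caller-identity
# expected Arn (placeholder): arn:aws:sts::111122223333:assumed-role/ci_deployer/...

# inspect or edit the mapping on the cluster (from a context that already has access)
kubectl -n kube-system get configmap aws-auth -o yaml
kubectl -n kube-system edit configmap aws-auth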

kubernetes: Authentication to ui with default config file fails

I have successfully set up a kubernetes cluster on AWS using kops and the following commands:
$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
When accessing the dashboard, I am prompted to either enter a token or
Please select the kubeconfig file that you have created to configure access to the cluster.
When the cluster was created, a ~/.kube/config file was generated with the following form:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data: <some_key_or_certificate>
    client-key-data: <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
Why, when pointing the Kubernetes UI to the above file, do I get
Authentication failed. Please try again.
I tried the same thing and had the same problem. It turns out that kops sets up certificate-based authentication, and certificate-based authentication can't be used with the web UI. Instead, I tried token-based authentication. Next question: where do you find the token?
kubectl describe secret
This will show you the default token for the cluster. I assume this is very bad security practice, but if you're using the UI to improve your learning and understanding, then it will get you moving in the right direction.
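In practice that means something like the commands below, assuming you are happy to use one of the default service account tokens purely for experimenting; the secret name will differ in your cluster:
# list token secrets in the kube-system namespace
kubectl -n kube-system get secrets
# print the token from one of them (the name here is a placeholder)
kubectl -n kube-system describe secret default-token-abcde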
This Dashboard wiki page is about authentication. That's where I discovered how to do it.
In order to enable basic auth in Dashboard --authentication-mode=basic flag has to be provided. By default it is set to --authentication-mode=token
To get the token or understand more about access control please refer here