Error in connecting to GKE cluster from Google Cloud Build - kubernetes

I am trying to connect to a GKE cluster from Google Cloud Build. I used the build step below:
steps:
- name: gcr.io/cloud-builders/gcloud
  args:
  - container
  - clusters
  - get-credentials
  - cluster-name
  - '--zone=us-central1-a'
Error: Kubernetes cluster unreachable: Get "https://IP/version": error executing access token command "/builder/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=exit status 127 output= stderr=/builder/google-cloud-sdk/bin/gcloud: exec: line 192: python: not found
Update: Though it can be done using gcloud, using the kubectl cloud builder seems to be a better fit here for connecting to the GKE cluster.
You can simply set a few environment variables and kubectl will do the rest for you.
Refer to this link: https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/kubectl#usage
My build step looks like this:
- name: gcr.io/cloud-builders/kubectl
  env:
  - CLOUDSDK_COMPUTE_ZONE=us-central1-a
  - CLOUDSDK_CONTAINER_CLUSTER=cluster-name
  - KUBECONFIG=/workspace/.kube/config
  args:
  - cluster-info
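A follow-up step that reuses the same builder to actually deploy manifests might look like this (the k8s/ path is a placeholder for wherever your manifests live, not something from the original setup):
- name: gcr.io/cloud-builders/kubectl
  env:
  - CLOUDSDK_COMPUTE_ZONE=us-central1-a
  - CLOUDSDK_CONTAINER_CLUSTER=cluster-name
  - KUBECONFIG=/workspace/.kube/config
  args:
  - apply
  - -f
  - k8s/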

Related

aws-iam-authenticator & EKS

I've deployed a test EKS cluster with the appropriate configMap, and users that are SSO'd in can access the cluster by exporting session credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, etc.) and having the aws-iam-authenticator client installed in their terminal. The problem comes in when users attempt to use an AWS SSO profile stored in ~/.aws/config with the aws-iam-authenticator. The error received when running any kubectl command is the following:
$ kubectl get all
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've tested this on my local machine (AWS CLI v2) and haven't had any success. I exported an AWS profile found in the ~/.aws/config file via export AWS_PROFILE=User1, and running aws sts get-caller-identity correctly shows the profile being exported. I've switched between multiple named profiles and each one gets the correct identity and permissions; however, when running any kubectl command I get the above error. I've also tried symlinking config -> credentials, but no luck. The only way it works is if I export the access key, secret key, and session token as environment variables.
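For reference, a minimal sketch of the kind of SSO profile in ~/.aws/config being described here (all values are placeholders, not taken from the actual setup):
[profile User1]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = PowerUserAccess
region = us-east-1
output = json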
I suppose I can live with having to paste in the dynamic creds that come from AWS SSO, but my need to find a solution won't let me give up :(. I was following the thread found in this GitHub issue but no luck. The kubeconfig file that I have set up follows AWS's documentation.
I suspect there may be something off with the aws-iam-authenticator server deployment, but nothing shows in the pod logs. Here's a snippet from the tool's GitHub page, which I think I followed correctly, although I did skip step 3 for reasons that I forget:
The Kubernetes API integrates with AWS IAM Authenticator for Kubernetes using a token authentication webhook. When you run aws-iam-authenticator server, it will generate a webhook configuration file and save it onto the host filesystem. You'll need to add a single additional flag to your API server configuration:
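The flag the quote refers to is kube-apiserver's token authentication webhook option, roughly like this (the path is a placeholder for wherever aws-iam-authenticator server wrote the generated webhook config):
--authentication-token-webhook-config-file=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml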
Kube Config File
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-cluster-name"
        - "-r"
        - "EKS-ADMIN-ROLE:ARN:::::::"
      env:
        - name: AWS_PROFILE
          value: "USER"
The AWS CLI v2 now supports AWS SSO, so I decided to update my kubeconfig file to use the aws command instead of aws-iam-authenticator. Authentication via SSO is now a breeze! It looks like AWS wanted to get away from requiring an additional binary to authenticate into EKS clusters, which is fine by me! Hope this helps.
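For reference, a minimal sketch of what the updated users entry can look like with the aws command (cluster name and profile are placeholders):
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - eks-cluster-name
      env:
        - name: AWS_PROFILE
          value: "User1"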

Deploying to Kubernetes cluster using runner from GitLab

I've integrated GitLab with my Digital Ocean Kubernetes cluster. I am trying to set up a simple manual build that will deploy to my Kubernetes cluster.
My .gitlab-ci.yml file is below:
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl version
    - kubectl apply -f web.yaml
I am not sure why this is not working. Currently getting the following error:
Error from server (Forbidden): error when retrieving current configuration ... from server for: "web.yaml": ingresses.extensions "hmweb-ingress" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "ingresses" in API group "extensions" in the namespace "hm-ns01"
As far as I can understand, it cannot execute the kubectl apply .. commands.
Am I doing something wrong?
I think you are missing the environment in your deploy job.
Modify your job definition to look something like this:
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  environment:
    name: production
  script:
    - kubectl version
    - kubectl apply -f web.yaml
Where "production" is interchangeable with any environment name.
At least that fixed the issue for me.
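If the environment trick alone doesn't cover it, note that the Forbidden error itself says the system:serviceaccount:gitlab-managed-apps:default service account lacks RBAC permissions in the hm-ns01 namespace. A minimal sketch of a RoleBinding that would grant it access there, assuming the built-in edit ClusterRole is acceptable for your deployments:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deploy
  namespace: hm-ns01
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-managed-apps
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io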

Can kubectl work from an assumed role from AWS

I'm using Amazon EKS for Kubernetes deployment (the cluster was initially created by an AWS admin user), and I'm currently having difficulty using the AWS credentials from AWS STS assume-role to execute kubectl commands to interact with the stack.
I have 2 EKS stacks in 2 different AWS accounts (PROD & NONPROD), and I'm trying to get the CI/CD tool to deploy to both Kubernetes stacks with the credentials provided by AWS STS assume-role, but I constantly get errors such as error: You must be logged in to the server (the server has asked for the client to provide credentials).
I have followed the following link to add an additional AWS IAM role to the config:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
But I'm not sure what I'm doing wrong.
I ran "aws eks update-kubeconfig" to update the local .kube/config file, contents populated as below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
    user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - triage-eks
      command: aws-iam-authenticator
and had previously updated Kubernetes aws-auth ConfigMap with an additional role as below:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
My CI/CD EC2 instance can assume the ci_deployer role in either AWS account.
Expected: I can call "kubectl version" and see both Client and Server versions
Actual: I get "the server has asked for the client to provide credentials"
What is still missing?
After further testing, I can confirm kubectl will only work from an environment (e.g. my CI EC2 instance with an AWS instance role) in the same AWS account where the EKS stack is created. This means that my CI instance from account A will not be able to communicate with EKS in account B, even if the CI instance can assume a role from account B and the account B role is included in the aws-auth ConfigMap of the account B EKS cluster. I hope it's due to missing configuration, as I find it rather undesirable if a CI tool can't deploy to multiple EKS clusters in multiple AWS accounts using role assumption.
Look forward to further #Kubernetes support on this
Can kubectl work from an assumed role from AWS
Yes, it can work. A good way to troubleshoot it is to run from the same command line where you are running kubectl:
$ aws sts get-caller-identity
You can see the ARN for the role (or user), and then make sure there's a trust relationship in IAM between that and the role that you specify here in your kubeconfig:
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "<cluster-name>"
- "-r"
- "<role-you-want-to-assume-arn>"
or with the newer option:
command: aws
args:
- eks
- get-token
- --cluster-name
- <cluster-name>
- --role
- <role-you-want-to-assume-arn>
Note that if you are using aws eks update-kubeconfig you can pass in the --role-arn flag to generate the above in your kubeconfig.
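For example, something along these lines (region, cluster name, and role ARN below are placeholders based on the config shown above):
aws eks update-kubeconfig --name demo-eks --region eu-west-1 --role-arn arn:aws:iam::<account-id>:role/ci_deployer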
In your case, some things that you can look at:
Are the credential environment variables set in your CI?
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Is your ~/.aws/credentials file populated correctly in your CI? With something like this:
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxx
Generally, the environment variables take precedence, so you could have different credentials altogether set in those environment variables.
It could also be the AWS_PROFILE env variable or the AWS_PROFILE setting in ~/.kube/config:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        - "-r"
        - "<role-arn>"
      env:
        - name: AWS_PROFILE   # <== is this value set?
          value: "<aws-profile>"
Is the profile set correctly under ~/.aws/config?
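For instance, a profile that assumes the deployment role could look roughly like this in ~/.aws/config (profile name, source profile, and ARN are placeholders):
[profile ci_deployer]
role_arn = arn:aws:iam::<account-id>:role/ci_deployer
source_profile = default
region = eu-west-1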
From Step 1: Create Your Amazon EKS Cluster
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
As you have discovered you can only access the cluster with the same user/role that created the EKS cluster in the first place.
There is a way to add additional roles to the cluster after creation by editing the aws-auth ConfigMap that has been created.
Add User Role
By editing the aws-auth ConfigMap you can add different levels of access based on the role of the user.
First you MUST have the "system:node:{{EC2PrivateDNSName}}" user
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This is required for Kubernetes to even work, giving the nodes the ability to join the cluster. The "ARN of instance role" is the role that includes the required policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly etc.
Below that, add your role:
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: ci-deployer
      groups:
        - system:masters
The 'username' can actually be set to about anything. It appears to only be important if there are custom roles and bindings added to your EKS cluster.
Also, use the command 'aws sts get-caller-identity' to validate that the environment/shell and the AWS credentials are properly configured. When correctly configured, 'get-caller-identity' should return the same role ARN specified in aws-auth.
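When the role has been assumed correctly, the output looks along these lines (the account ID, session name, and IDs below are placeholders):
$ aws sts get-caller-identity
{
    "UserId": "AROAXXXXXXXXXXXXXXXXX:ci-session",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/ci_deployer/ci-session"
}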

Pulling Images from GCR into GKE

Today is my first day playing with GCR and GKE. So apologies if my question sounds childish.
So I have created a new registry in GCR. It is private. Using this documentation, I got hold of my Access Token using the command
gcloud auth print-access-token
#<MY-ACCESS_TOKEN>
I know that my username is oauth2accesstoken
On my local laptop when I try
docker login https://eu.gcr.io/v2
Username: oauth2accesstoken
Password: <MY-ACCESS_TOKEN>
I get:
Login Successful
So now it's time to create a docker-registry secret in Kubernetes.
I ran the below command:
kubectl create secret docker-registry eu-gcr-io-registry --docker-server='https://eu.gcr.io/v2' --docker-username='oauth2accesstoken' --docker-password='<MY-ACCESS_TOKEN>' --docker-email='<MY_EMAIL>'
And then my Pod definition looks like:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest
    ports:
    - containerPort: 8090
  imagePullSecrets:
  - name: eu-gcr-io-registry
But when I spin up the pod, I get the ERROR:
Warning Failed 4m (x4 over 6m) kubelet, node-3 Failed to pull image "eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I verified my secret by checking the YAML file and doing a base64 --decode on the .dockerconfigjson, and it is correct.
So what have I missed here?
If your GKE cluster & GCR registry are in the same project: you don't need to configure authentication. GKE clusters are authorized to pull from private GCR registries in the same project without any configuration. (Very likely this is your case!)
If your GKE cluster & GCR registry are in different GCP projects: follow these instructions to give the service account of your GKE cluster access to read private images in your GCR registry: https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry
In a nutshell, this can be done by:
gsutil iam ch serviceAccount:[PROJECT_NUMBER]-compute@developer.gserviceaccount.com:objectViewer gs://[BUCKET_NAME]
where [BUCKET_NAME] is the GCS bucket storing your GCR images (like artifacts.[PROJECT-ID].appspot.com) and [PROJECT_NUMBER] is the numeric GCP project ID hosting your GKE cluster.
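For example, since the images here are pushed to eu.gcr.io, the backing bucket is typically eu.artifacts.[PROJECT-ID].appspot.com; with placeholder values it would look something like:
gsutil iam ch serviceAccount:123456789012-compute@developer.gserviceaccount.com:objectViewer gs://eu.artifacts.my-registry-project.appspot.com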

Error when deploying kube-dns: No configuration has been provided

I have just installed a basic kubernetes cluster the manual way, to better understand the components, and to later automate this installation. I followed this guide: https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
The cluster is completely empty, without addons, after this. I've already deployed kubernetes-dashboard successfully; however, when trying to deploy kube-dns, it fails with the log:
2017-01-11T15:09:35.982973000Z F0111 15:09:35.978104 1 server.go:55]
Failed to create a kubernetes client:
invalid configuration: no configuration has been provided
I used the following yaml template for kube-dns without modification, only filling in the cluster IP:
https://coreos.com/kubernetes/docs/latest/deploy-addons.html
What did I do wrong?
Experimenting with kubedns arguments, I added --kube-master-url=http://mykubemaster.mydomain:8080 to the yaml file, and suddenly it reported in green.
How did this solve it? Was the container not aware of the master for some reason?
In my case, I had to put a numeric IP in "--kube-master-url=http://X.X.X.X:8080". It goes in the YAML file of the RC (ReplicationController), like this:
...
spec:
  containers:
  - name: kubedns
    ...
    args:
    # command = "/kube-dns"
    - --domain=cluster.local
    - --dns-port=10053
    - --kube-master-url=http://192.168.99.100:8080