Google Cloud get cluster credentials unauthorized - kubernetes

I've created a service account on the IAM page of the Google Cloud console, but unfortunately I'm unable to assign roles to this account - or I'm missing something.
When attempting to get the cluster credentials for kubectl, GCloud always responds with the following:
gcloud container clusters get-credentials api --zone europe-west1-b --project *****
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/*****/zones/europe-west1-b/clusters/api".
I've also added all the roles to the account as demonstrated here:
gcloud projects get-iam-policy project-tilas
bindings:
- members:
  - serviceAccount:travis@*****.iam.gserviceaccount.com
  role: roles/container.admin
- members:
  - serviceAccount:travis@*****.iam.gserviceaccount.com
  role: roles/editor
- members:
  - user:Tj****n@gmail.com
  role: roles/owner
- members:
  - serviceAccount:travis@*****.iam.gserviceaccount.com
  role: roles/viewer
etag: BwVqZB734TY=
version: 1
What am I missing?
Authentication is successful, and the project ID/number matches what I see in the GCloud dashboard...

While not a definitive answer, I actually solved this by just deleting the service account and recreating it.
Frustrating.
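If you prefer to recreate the account from the CLI rather than the console, a rough sketch of the equivalent steps would be the following (PROJECT_ID is a placeholder for your real project ID, and key.json is wherever you want the new key written):
gcloud iam service-accounts create travis
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:travis@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/container.admin"
gcloud iam service-accounts keys create key.json \
  --iam-account="travis@PROJECT_ID.iam.gserviceaccount.com"
gcloud auth activate-service-account --key-file=key.json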

I was facing the same issue; my mistake was that I was specifying the project name instead of the project ID. I solved the issue with:
gcloud config set project <project_id>
and then:
gcloud container clusters get-credentials <cluster-name> --zone=<zone>
Just thought I'd leave this here as it may help someone in the future.
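A quick way to check which value is the actual ID is to list your projects; the PROJECT_ID column is what gcloud config set project expects, not the NAME column:
gcloud projects list
gcloud config set project <project_id>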

Related

DO Kubernetes Cluster + GCP Container Registry

I have a Kubernetes cluster in DigitalOcean, and I want to pull images from a private repository in GCP.
I tried to create a secret that lets me pull the images, following this article: https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps
In the GCP account, create a service account key, with a JSON credential
Execute
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
In the deployment YAML, reference the secret:
imagePullSecrets:
- name: gcr-json-key
I don't understand why I am getting a 403, whether there are some restrictions on using the registry outside Google Cloud, or whether I missed some configuration.
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden
Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permission to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group.
This documentation has details on prerequisites for container registry.
Note:
Ensure that you are running the latest version of kubectl.
I tried replicating this by following the document you provided and it worked on my end, so ensure that all the prerequisites are met.
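Since Container Registry serves images from a Cloud Storage bucket, pull access for the service account effectively means read access on that bucket's objects. A minimal sketch of granting it at the project level (the service account name and project ID below are placeholders for yours):
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:gcr-puller@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"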
That JSON string is not a password.
The documentation suggests either activating the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
Or add the configuration to $HOME/.docker/config.json
And then run docker-credential-gcr configure-docker.
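Putting those two suggestions together on a workstation would look roughly like this (the key path and image name come from the question; docker-credential-gcr has to be installed separately):
gcloud auth activate-service-account --key-file=~/json-key-file.json
docker-credential-gcr configure-docker
docker pull gcr.io/myapp/backendnodeapi:latest    # check that the pull now succeeds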
Kubernetes seems to demand a service-account token secret,
and this requires the annotation kubernetes.io/service-account.name.
Also see Configure Service Accounts for Pods.
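As a concrete illustration of the "Configure Service Accounts for Pods" approach, one option (a sketch, assuming the gcr-json-key secret from the question lives in the same namespace as your workloads) is to attach the pull secret to the default service account so pods use it without listing imagePullSecrets themselves:
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'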

Permission error using kubectl after GKE Kubernetes cluster migration from one organization to another

I had a GKE cluster my-cluster created in a project that belonged to organization org1.
When the cluster was created I logged in with user@org1.com using gcloud auth login and configured the local kubeconfig using gcloud container clusters get-credentials my-cluster --region europe-west4 --project project.
Recently we had to migrate this project (with the GKE cluster) to another organization, org2. We did it successfully following the documentation.
The IAM owner in org2 is user@org2.com. In order to reconfigure the kubeconfig I followed the previous steps, logging in this time with user@org2.com:
gcloud auth login
gcloud container clusters get-credentials my-cluster --region europe-west4 --project project
When I execute kubectl get pods I get an error referencing the old org1 user:
Error from server (Forbidden): pods is forbidden: User "user@org1.com" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).
What's the problem here?
I've accepted DazWilking's answer since, in a way, he was right: the config file was "inconsistent".
The problematic bit was in the user section:
user:
  auth-provider:
    config:
      access-token: xxxxxx
      expiry: "2021-07-11T18:36:42Z"
For some reason, when using the gcloud container clusters get-credentials command it created all items (cluster, context and user) with an invalid user section.
To fix it I connected to a cloud shell directly from the Google Cloud web console and checked the ~/.kube/config file there. My local config was missing the cmd-path, cmd-args, expiry-key and token-key entries:
user:
  auth-provider:
    config:
      access-token: xxx
      cmd-args: config config-helper --format=json
      cmd-path: /Users/xxx/applications/google-cloud-sdk/bin/gcloud
      expiry: "2021-07-11T18:36:42Z"
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
This may not be the answer but hopefully it's part of the answer.
gcloud container clusters get-credentials is a convenience function that mutates the local ${KUBECONFIG} (often ~/.kube/config) and populates it with cluster, context and user properties.
I suspect (!?), your KUBECONFIG has become inconsistent.
You should be able to edit it directly to better understand what's happening.
There are 3 primary blocks: clusters, contexts and users. You're looking to find entries (one each of cluster, context and user) for your old GKE cluster and for your new GKE cluster.
Don't delete anything
Either back the file up first, or rename the entries.
Each section will have a name property that reflects the GKE cluster name: gke_${PROJECT}_${LOCATION}_${CLUSTER}
It may be simply that the current-context is incorrect.
NOTE Even though gcloud creates user entries for each cluster, these are usually identical (per user) and so you can simplify this section.
NOTE If you always use gcloud, it does a decent job of tidying up (removing entries) too.
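Rather than editing the file by hand, kubectl can do most of the inspection for you; for example:
kubectl config get-contexts        # one line per context, * marks the current one
kubectl config current-context
kubectl config use-context gke_${PROJECT}_${LOCATION}_${CLUSTER}
kubectl config view --minify       # show only the entries behind the current context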

How to easily switch gcloud / kubectl credentials

At work we use Kubernetes hosted in GCP. I also have a side project hosted in my personal GCP account using Google App Engine (deploy using gcloud app deploy).
Often when I try to run a command such as kubectl logs -f service-name, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email@gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various kubectl and gcloud documentation but it doesn't make much sense or talks in circles.
Edit: this is on Linux.
Had the same problem and doing all of the:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
didn't help. I knew gcloud switched fine as I was able to list other resources with it directly.
But it seems kubectl can't pick those changes up automatically, as the kubectl/gcloud integration relies on a pre-generated key, which has a 1h expiration (not sure if that's the default, but it's what it is on my machine right now).
So, on top of setting the right user/project/account with gcloud, you should re-generate the creds:
gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone>
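If you want to confirm that an expired cached token is the culprit before re-generating, the expiry is stored in plain text in the kubeconfig (default path assumed here):
grep 'expiry:' ~/.kube/config    # timestamps in the past mean the cached token is stale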
I'm in the same boat as you - apps deployed in GKE for work and personal projects deployed in my personal GCP account.
gcloud stores a list of logged in accounts that you can switch between to communicate with associated projects. Take a look at these commands:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
To work with a specific project under one of your accounts you need to set that configuration via gcloud config set project PROJECT_ID
kubectl has a list of "contexts" on your local machine in ~/.kube/config. Your current context is the cluster you want to run commands against - similar to the active account/project in gcloud.
Unlike gcloud these are cluster specific and store info on the cluster endpoint, default namespaces, the current context, etc. You can have contexts from GCP, AWS, on-prem...anywhere you have a cluster. We have different clusters for dev, qa, and prod (thus different contexts) and switch between them a ton. Take a look at the kubectx project (https://github.com/ahmetb/kubectx) for an easier way to switch between contexts and namespaces.
kubectl will use the keys from whatever GCP account you are logged in with against the cluster that is set as your current context. I.e., from your error above, if your active gcloud account is your personal one but you try to list pods from a cluster at work, you will get an error. You either need to set the active account/project for gcloud to your work email or change the kubectl context to a cluster that is hosted in your personal GCP account/project.
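Putting that together, a typical switch from personal to work might look roughly like this (the account, project, zone and cluster names are placeholders):
gcloud config set account me@work-email.com
gcloud config set project work-project-id
# regenerates credentials and also sets the current kubectl context
gcloud container clusters get-credentials work-cluster --zone us-central1-a
# or, if the context already exists, just switch to it
kubectl config use-context gke_work-project-id_us-central1-a_work-cluster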
For me, updating ~/.kube/config and setting the expiry to a date in the past fixes it.
TL;DR
Use gcloud config configurations to manage your separate profiles with Google Cloud Platform.
Add an explicit configuration argument to the cmd-args of your kubeconfig's user to prevent gcloud from producing an access token for an unrelated profile.
users:
- user:
    auth-provider:
      config:
        cmd-args: config --configuration=work config-helper --format=json
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects?
Sure! By following Google's suggested instructions that lead to running gcloud container clusters get-credentials ... when configuring a kubernetes cluster, you will end up with a section of your kubeconfig that contains information on what kubectl should do to acquire an access token when communicating with a cluster that is configured with a given user. That will look something like this:
users:
- name: gke_project-name_cluster-zone_cluster-name
  user:
    auth-provider:
      config:
        access-token: &Redacted
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2022-12-25T01:02:03Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Basically, this tells kubectl to run gcloud config config-helper --format=json when it needs a new token, and to parse out the access_token using the json-path .credential.access_token from that command's response. This is the crux of understanding how kubectl communicates with gcloud.
Like you, I use Google Cloud both personally and at work. The issue is that this user configuration block does not take into account the fact that it shouldn't use the currently active gcloud account when generating a credential. Even if you don't use kubernetes in one of your two projects, extensions in vscode, for example, might try to run a kubectl command when you're working on something in a different project. If this were to happen after your current token has expired, gcloud config config-helper might get invoked to generate a token using a personal account.
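You can see exactly what kubectl consumes by running that helper yourself and pulling out the same fields that token-key and expiry-key point at (assuming jq is installed):
gcloud config config-helper --format=json | jq -r '.credential.access_token, .credential.token_expiry'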
To prevent this from happening, I suggest using gcloud config configurations. Configurations are global configuration profiles that you can quickly switch between. For example, I have two configurations that look like:
> gcloud config configurations list
NAME      IS_ACTIVE  ACCOUNT             PROJECT           COMPUTE_DEFAULT_ZONE       COMPUTE_DEFAULT_REGION
work      False      zev@work.email      work-project      us-west1-a                 us-west1
personal  True       zev@personal.email  personal-project  northamerica-northeast1-a  northamerica-northeast1
With configurations you can alter your kubeconfig to specify which configuration to always use when creating an access token for a given kubernetes user by altering the kubeconfig user's auth-provider.config.cmd-args to include one of your gcloud configurations. With a value like config --configuration=work config-helper --format=json, whenever kubectl needs a new access token, it will use the account from your work configuration regardless of which account is currently active with the gcloud tool.
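For completeness, creating and switching between such profiles is just a few commands (names here mirror the listing above):
gcloud config configurations create work          # creates and activates the new profile
gcloud config set account zev@work.email
gcloud config set project work-project
gcloud config configurations activate personal    # switch back to the personal profile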

Always getting error: You must be logged in to the server (Unauthorized) EKS

I am currently playing around with AWS EKS
But I always get "error: You must be logged in to the server (Unauthorized)" when trying to run the kubectl cluster-info command.
I have read a lot of AWS documentation and looked at lots of similar issues from people facing the same problem. Unfortunately, none of them resolves my problem.
So, this is what I did
install all required packages
create a user to access the AWS CLI, named crop-portal
create a role for EKS, named crop-cluster
create an EKS cluster via the AWS console with the role crop-cluster, named crop-cluster (the cluster and role have the same name)
run aws configure for user crop-portal
run aws eks update-kubeconfig --name crop-cluster to update the kube config
run aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access
copy the accessKey, secretKey and sessionToken into the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN accordingly
run aws sts get-caller-identity and now the result shows it is using the assumed role already
{
    "UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
    "Account": "529972849116",
    "Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
run kubectl cluster-info and always get error: You must be logged in to the server (Unauthorized)
when I run aws-iam-authenticator token -i crop-cluster, it gave me the token and
when I run aws-iam-authenticator verify -t token -i crop-portal, it also passed
&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}
I don't know what is wrong or what I'm missing. I've tried so hard to get it to work but I really don't know what to do after this.
Some people suggest creating the cluster with the AWS CLI instead of the GUI. I tried both methods and neither works; creating with the AWS CLI or the GUI gives the same result for me.
Please, someone help :(
I will try to add some more information here, and I hope it will be helpful for setting up access to the EKS cluster.
When we create an EKS cluster by any method (CloudFormation/CLI/eksctl), the IAM role/user who created the cluster is automatically bound to the default Kubernetes RBAC group "system:masters" (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), and in this way the creator of the cluster gets admin access to the cluster.
To verify which role or user created the EKS cluster, we can search for the CreateCluster API call in CloudTrail, and it will tell us the creator of the cluster.
Now, generally, if we used a role to create the cluster, as you did (for example "crop-cluster"), we have to make sure that we are assuming this role before making any API calls using kubectl. The easiest way to do this is to set this role in the kubeconfig file, which we can do by running the command below from the terminal.
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn crop-cluster-arn
If we run the above command, it will set the role (with the -r flag) in the kubeconfig file. In that way we are telling aws/aws-iam-authenticator that before making any API call it should first assume the role, and so WE DON'T HAVE TO ASSUME THE ROLE MANUALLY via the CLI using "aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access".
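You can confirm that the role actually made it into the generated user entry by viewing just the active context's configuration:
kubectl config view --minify    # the user section should reference the crop-cluster role ARN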
Once the kubeconfig file is set properly, make sure that the CLI is configured properly with the IAM user credentials for "crop-portal". We can confirm this by running the "aws sts get-caller-identity" command, and the output should show the user ARN in the "Arn" section, like below.
$ aws sts get-caller-identity
{
    "Account": "xxxxxxxxxxxxx",
    "UserId": "xxxxxxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxxx:user/crop-portal"
}
Once that is done, you should be able to run kubectl commands directly without any issue.
Note: I have assumed that the user "crop-portal" does have enough permission to assume the role "crop-cluster".
Note:
For more details, we can also refer to the answer to this question: Getting error "An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied" after setting up EKS cluster

How to refresh kubernetes config access-token in GKE

The kubeconfig expires after 1 hour. How do I refresh the token in gcloud or a GKE cluster in Spinnaker?
This appears to be a known issue; could you try updating the credential file using the following command:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
This GitHub thread suggested the above solution and it worked for other users. Additionally, review this thread to get an overview of the issue.