I am not able to get write access to a GCS bucket from within a GKE pod.
I have a GKE pod running and have not changed any Kubernetes configuration regarding service accounts. I have docker exec'd into the pod and installed gcloud/gsutil. gcloud auth list shows a 1234-compute@developer.gserviceaccount.com entry. In GCS I have added that same account as Storage Admin, Storage Legacy Bucket Owner, and Storage Object Creator (i.e., I just tried a bunch of things). I am able to run gsutil ls gs://bucket. However, when running gsutil cp file gs://bucket, it prints:
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
gsutil acl get gs://bucket prints:
AccessDeniedException: Access denied. Please ensure you have OWNER permission on gs://bucket
I have also tried adding allUsers and allAuthenticatedUsers as creators and owners of the bucket, with no change. I am able to write to the bucket from my dev machine just fine.
When I run gsutil acl get gs://bucket from another machine, it lists the same account shown by gcloud auth list inside the pod as an OWNER.
What is the special sauce I need to allow the pod to write to the bucket?
You need to set OAuth scopes for the cluster (or, better, for the particular node pool; here in Terraform):
oauth_scopes = [
  "https://www.googleapis.com/auth/devstorage.read_write", // 'ere we go!
  "https://www.googleapis.com/auth/logging.write",
  "https://www.googleapis.com/auth/monitoring",
  "https://www.googleapis.com/auth/service.management.readonly",
  "https://www.googleapis.com/auth/servicecontrol",
  "https://www.googleapis.com/auth/trace.append",
  "https://www.googleapis.com/auth/compute",
]
The GKE cluster was created with the default permissions, which include only read-only scope for GCS. Solutions:
Apply advice from Changing Permissions of Google Container Engine Cluster
Set GOOGLE_APPLICATION_CREDENTIALS as described in https://developers.google.com/identity/protocols/application-default-credentials
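For the GOOGLE_APPLICATION_CREDENTIALS route, a minimal sketch with gcloud and kubectl; the service account, secret name, deployment name, and mount path below are placeholders, and the secret still has to be mounted at that path in your pod spec:
# Create a key for a service account that already has storage access (names are examples)
gcloud iam service-accounts keys create sa-key.json \
    --iam-account=gcs-writer@my-project.iam.gserviceaccount.com
# Store the key as a Kubernetes secret and point the app at the mounted file
kubectl create secret generic gcs-writer-key --from-file=sa-key.json
kubectl set env deployment/my-app \
    GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/sa-key.json
# (mount the gcs-writer-key secret at /var/secrets/google in the pod spec)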
Had the same issue; I had to recreate the node pool with a custom security config (access scopes) in order to get that access.
Also, in my pod I mounted the service account provided in a secret (default-token-XXXXX).
Then, once gcloud is installed in the pod (via the Dockerfile), it works like a charm.
The key is the node-pool config and mounting the SA.
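For the node-pool part, a rough sketch of what that recreation can look like with gcloud (cluster, zone, and pool names are examples); as far as I know, scopes on an existing node pool can't be changed in place, so a new pool is needed:
gcloud container node-pools create storage-rw-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --scopes=storage-rw,logging-write,monitoring
Once the new pool is ready, reschedule the pods onto it and delete the old pool.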
Related
I am using GoCD for CI/CD. The result is a tar archive. I need to copy the resulting tar to a GCS bucket.
I have a gocd-agent Docker image with the Google Cloud SDK included.
I know how to use gcloud with a service account from my local machine, but not from inside a pod.
How do I use the service account assigned to the pod with gcloud auth inside the pod?
The final goal is to use gsutil to copy the above-mentioned archive to a bucket in the same project.
My first thought would be to create a Secret based on the service account key, reference it in the pod YAML definition so it is mounted as a file, and then run gcloud auth from the pod using that file. There's more info in the Google Cloud docs.
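A minimal sketch of that approach, assuming a key file key.json for a suitable service account and a secret mounted at /secrets inside the pod (names and paths are placeholders):
# On your machine: put the service account key into a secret
kubectl create secret generic gcloud-sa-key --from-file=key.json
# Inside the pod, with the secret mounted at /secrets:
gcloud auth activate-service-account --key-file=/secrets/key.json
gsutil cp result.tar gs://my-bucket/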
Another, quite new, option is to use Workload Identity. You'd need to configure the GKE cluster to enable it, and it is only available on some GKE versions.
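For the Workload Identity route, the rough shape is below; project, cluster, namespace, and account names are placeholders, and the exact flags can differ between gcloud versions:
# Enable Workload Identity on the cluster
gcloud container clusters update my-cluster \
    --zone=us-central1-a --workload-pool=my-project.svc.id.goog
# Let the Kubernetes service account impersonate a GCP service account
gcloud iam service-accounts add-iam-policy-binding \
    gcs-writer@my-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"
# Annotate the Kubernetes service account so pods using it get those credentials
kubectl annotate serviceaccount my-ksa --namespace my-namespace \
    iam.gke.io/gcp-service-account=gcs-writer@my-project.iam.gserviceaccount.com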
At work we use Kubernetes hosted in GCP. I also have a side project hosted in my personal GCP account using Google App Engine (deploy using gcloud app deploy).
Often when I try to run a command such as kubectl logs -f service-name, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email@gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various kubectl and gcloud documentation but it doesn't make much sense or talks in circles.
Edit: this is on Linux.
Had the same problem and doing all of the:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
didn't help. I knew gcloud switched fine as I was able to list other resources with it directly.
But it seems kubectl can't pick those changes up automatically, as the kubectl/gcloud integration relies on a pre-generated access token, which has a 1-hour expiration (not sure if that's the default, but it's what it is on my machine right now).
So, on top of setting the right user/project/account with gcloud, you should re-generate the credentials:
gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone>
I'm in the same boat as you - apps deployed in GKE for work and personal projects deployed in my personal GCP account.
gcloud stores a list of logged in accounts that you can switch between to communicate with associated projects. Take a look at these commands:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
To work with a specific project under one of your accounts you need to set that configuration via gcloud config set project PROJECT_ID
kubectl has a list of "contexts" on your local machine in ~/.kube/config. Your current context is the cluster you want to run commands against - similar to the active account/project in gcloud.
Unlike gcloud, these are cluster specific and store info on the cluster endpoint, default namespaces, the current context, etc. You can have contexts from GCP, AWS, on-prem... anywhere you have a cluster. We have different clusters for dev, qa, and prod (thus different contexts) and switch between them a ton. Take a look at the kubectx project (https://github.com/ahmetb/kubectx) for an easier way to switch between contexts and namespaces.
kubectl will use the keys from whatever GCP account you are logged in with against the cluster that is set as your current context. That is, from your error above, if your active gcloud account is your personal one but you try to list pods from a cluster at work, you will get an error. You either need to set the active account/project for gcloud to your work email or change the kubectl context to a cluster that is hosted in your personal GCP account/project.
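For example, a typical switch between the two setups could look like this (the emails, project, and context name are placeholders; kubectl config get-contexts shows your real context names):
# Point gcloud at work
gcloud config set account me@work.example.com
gcloud config set project work-project
# Point kubectl at the work cluster
kubectl config get-contexts
kubectl config use-context gke_work-project_us-west1-a_work-cluster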
For me, updating ~/.kube/config and setting the expiry to a date in the past fixes it.
TL;DR
Use gcloud config configurations to manage your separate profiles with Google Cloud Platform.
Add an explicit configuration argument to the cmd-args of your kubeconfig's user to prevent gcloud from producing an access token for an unrelated profile.
users:
- user:
    auth-provider:
      config:
        cmd-args: config --configuration=work config-helper --format=json
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects?
Sure! By following Google's suggested instructions that lead to running gcloud container clusters get-credentials ... when configuring a kubernetes cluster, you will end up with a section of your kubeconfig that contains information on what kubectl should do to acquire an access token when communicating with a cluster that is configured with a given user. That will look something like this:
users:
- name: gke_project-name_cluster-zone_cluster-name
  user:
    auth-provider:
      config:
        access-token: <redacted>
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2022-12-25T01:02:03Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Basically, this tells kubectl to run gcloud config config-helper --format=json when it needs a new token, and to parse the access_token using the json-path .credential.access_token in the response from that command. This is the crux in understanding how kubectl communicates with gcloud.
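You can run the same helper yourself to see exactly what kubectl consumes; this is purely for inspection and assumes nothing beyond a logged-in gcloud:
gcloud config config-helper --format=json
# or just the two fields kubectl reads:
gcloud config config-helper --format='json(credential.access_token, credential.token_expiry)'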
Like you, I use google cloud both personally and at work. The issue is that this user configuration block does not take into account the fact that it shouldn't use the currently active gcloud account when generating a credential. Even if you don't use kubernetes in either one of your two projects, extensions in vscode for example might try to run a kubectl command when you're working on something in a different project. If this were to happen after your current token is expired, gcloud config config-helper might get invoked to generate a token using a personal account.
To prevent this from happening, I suggest using gcloud config configurations. Configurations are global configuration profiles that you can quickly switch between. For example, I have two configurations that look like:
> gcloud config configurations list
NAME      IS_ACTIVE  ACCOUNT             PROJECT           COMPUTE_DEFAULT_ZONE       COMPUTE_DEFAULT_REGION
work      False      zev@work.email      work-project      us-west1-a                 us-west1
personal  True       zev@personal.email  personal-project  northamerica-northeast1-a  northamerica-northeast1
With configurations you can alter your kubeconfig to specify which configuration to always use when creating an access token for a given kubernetes user by altering the kubeconfig user's auth-provider.config.cmd-args to include one of your gcloud configurations. With a value like config --configuration=work config-helper --format=json, whenever kubectl needs a new access token, it will use the account from your work configuration regardless of which account is currently active with the gcloud tool.
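Concretely, the setup could look like this; the configuration values and the kubeconfig user name are examples, and on newer kubectl versions the gcp auth-provider is replaced by a plugin, so adjust accordingly:
# Create and populate a dedicated work profile
gcloud config configurations create work
gcloud config set account zev@work.email
gcloud config set project work-project
# Pin the kubeconfig user for the work cluster to that profile
kubectl config set-credentials gke_work-project_us-west1-a_work-cluster \
    --auth-provider=gcp \
    --auth-provider-arg=cmd-args="config --configuration=work config-helper --format=json"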
I am currently playing around with AWS EKS
But I always get the error You must be logged in to the server (Unauthorized) when trying to run the kubectl cluster-info command.
I have read a lot of AWS documentation and looked at lots of similar issues where people face the same problem. Unfortunately, none of them resolves my problem.
So, this is what I did
install all required packages
create a user named crop-portal to access aws-cli
create a role for EKS named crop-cluster
create the EKS cluster via the AWS console with the role crop-cluster, named crop-cluster (the cluster and role have the same name)
run aws configure for the user crop-portal
run aws eks update-kubeconfig --name crop-cluster to update the kube config
run aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access
copy accessKey, secretKey and sessionToken into the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN accordingly
run aws sts get-caller-identity and now the result says it used the assumed role already
{
    "UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
    "Account": "529972849116",
    "Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
run kubectl cluster-info and always get error: You must be logged in to the server (Unauthorized)
when I run aws-iam-authenticator token -i crop-cluster, it gave me the token and
when I run aws-iam-authenticator verify -t token -i crop-portal, it also passed
&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}
I don't know what is wrong or what I'm missing. I have tried hard to get it to work, but I really don't know what to do next.
Some people suggest creating the cluster with awscli instead of the GUI. I tried both methods and neither works; creating with awscli or with the GUI gives the same result for me.
Please, someone help :(
I will try to add some more information here and I hope it will be more helpful while setting up the access to the EKS cluster.
When we create an EKS cluster by any method (CloudFormation/CLI/eksctl), the IAM role/user who created the cluster is automatically bound to the default Kubernetes RBAC group "system:masters" (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), and in this way the creator of the cluster gets admin access to the cluster.
To verify which role or user created the EKS cluster, we can search for the CreateCluster API call in CloudTrail; it will tell us the creator of the cluster.
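If you prefer the CLI to the console for that check, something along these lines lists recent CreateCluster calls and their callers (CloudTrail only keeps events within its lookup window):
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
    --max-results 5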
Now, if we used a role to create the cluster as you did (for example "crop-cluster"), we have to make sure that we are assuming this role before making any API calls with kubectl. The easiest way to do this is to set the role in the kubeconfig file, which we can do by running the command below from the terminal.
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn crop-cluster-arn
Running the above command sets the role (via the -r flag) in the kubeconfig file. That tells aws/aws-iam-authenticator that before making any API call it should first assume the role, so WE DON'T HAVE TO ASSUME THE ROLE MANUALLY via the CLI using "aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access".
Once the kubeconfig file is set properly, make sure that the CLI is configured with the IAM user credentials "crop-portal". We can confirm this by running the "aws sts get-caller-identity" command; the output should show the user ARN in the "Arn" section like below.
$ aws sts get-caller-identity
{
    "Account": "xxxxxxxxxxxxx",
    "UserId": "xxxxxxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxxx:user/crop-portal"
}
Once that is done, you should be able to run kubectl commands directly without any issue.
Note: I have assumed that the user "crop-portal" has enough permission to assume the role "crop-cluster".
Note: For more details, we can also refer to the answer to this question: Getting error "An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied" after setting up EKS cluster
If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?
You can find the configuration on the master node under /etc/kubernetes/admin.conf (on v1.8+).
On some versions of kubernetes, this can be found under ~/.kube
I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't ssh into the master, but if it was possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to the Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
So the answer to your question is yes, if you SSH into a node, you can then jump into a Pod and collect information to compose your own kubeconfig file. (See this question on how to disable this. I think there are solutions to disable it by default as well by forcing RBAC and disabling ABAC, but I might be wrong.)
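As a rough sketch of that last point, from inside such a container (assuming kubectl is available there) you could assemble a working kubeconfig like this; the context and user names are arbitrary, and the server address comes from the in-cluster environment variables:
cd /var/run/secrets/kubernetes.io/serviceaccount
kubectl config set-cluster in-cluster \
    --server=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT \
    --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials sa-user --token="$(cat token)"
kubectl config set-context in-cluster --cluster=in-cluster --user=sa-user \
    --namespace="$(cat namespace)"
kubectl config use-context in-cluster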
I'm trying to create and mount a Google Storage bucket on an Ubuntu Linux instance using gsutil.
sudo gsutil mb -c STANDARD -l us-central1-a gs://test-bucket
Here's what I'm getting:
Creating gs://test-bucket/...
AccessDeniedException: 403 Insufficient Permission
I've been searching around for a solution with no success. Can anyone help?
Check which account is managing your VM instance from the GCloud Dashboard. It should be the Compute Engine or App Engine service account that is created automatically.
In the instance's initial configuration settings you should see the Cloud API access list, which states whether or not that account has Cloud Storage capability.
If not, you will have to recreate your VM instance:
Create a GCP snapshot of your VM
Delete the VM instance
Create a new instance using the existing snapshot (this lets you start where you left off in the new VM)
When creating the VM, under the API access settings give it full access, which will allow it to write to Cloud Storage using the gsutil/gcsfuse commands.
After that, the Cloud Storage permissions themselves will be the remaining concern, but your root user should be able to write.
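If you do the recreation from the command line, the scope part corresponds to passing --scopes when creating the replacement VM; the instance, disk, and snapshot names below are examples:
# Snapshot the old VM's boot disk, then create the replacement with storage write access
gcloud compute disks snapshot test-instance --zone us-central1-a \
    --snapshot-names test-instance-snap
gcloud compute disks create test-instance-disk --zone us-central1-a \
    --source-snapshot test-instance-snap
gcloud compute instances create test-instance-2 --zone us-central1-a \
    --disk name=test-instance-disk,boot=yes \
    --scopes storage-rw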