I’m investigating this letsencrypt controller (https://github.com/tazjin/kubernetes-letsencrypt).
It requires that pods have permission to make changes to records in Cloud DNS. I thought that with the pods running on GKE I'd get that access via the default service account, but the requests are failing. What do I need to do to allow the pods access to Cloud DNS?
The Google Cloud DNS API's changes.create call requires either the https://www.googleapis.com/auth/ndev.clouddns.readwrite or the https://www.googleapis.com/auth/cloud-platform scope, neither of which is enabled by default on a GKE cluster.
You can add a new Node Pool to your cluster with the DNS scope by running:
gcloud container node-pools create np1 --cluster my-cluster --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite
Or, you can create a brand new cluster with the scopes you need, either by passing the --scopes flag to gcloud container clusters create, or in the New Cluster dialog in Cloud Console, click "More", and set the necessary scopes to "Enabled".
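For example, something like this should work (the cluster name and zone here are placeholders):
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite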
I've got a container inside a GKE cluster and I want it to be able to talk to the Kubernetes API of another GKE cluster to list some resources there.
This works well if I run the following command in a separate container to proxy the connection for me:
gcloud container clusters get-credentials MY_CLUSTER --region MY_REGION --project MY_PROJECT; kubectl --context MY_CONTEXT proxy --port=8001 --v=10
But this requires me to run a separate container that, due to the size of the gcloud CLI, is more than 1 GB in size.
Ideally I would like to talk directly from my primary container to the other GKE cluster, but I can't work out how to determine the cluster's IP address and set up the authentication required for the connection.
I've seen a few questions:
How to Authenticate GKE Cluster on Kubernetes API Server using its Java client library
Is there a golang sdk equivalent of "gcloud container clusters get-credentials"
But it's still not really clear to me whether and how this would work with the Java libraries, if it is possible at all.
Ideally I would write something like this:
var info = gkeClient.GetClusterInformation(...);
var auth = gkeClient.getAuthentication(info);
...
// using the io.fabric8.kubernetes.client.ConfigBuilder / DefaultKubernetesClient
var config = new ConfigBuilder().withMasterUrl(info.url())
    .withNamespace(null)
    // certificate or other authentication mechanism
    .build();
return new DefaultKubernetesClient(config);
Does that make sense, is something like that possible?
There are multiple ways to connect to your cluster without using the gcloud CLI. Since you are trying to access the cluster from another cluster within Google Cloud, you can use the Workload Identity authentication mechanism. Workload Identity is the recommended way for workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services in a secure and manageable way. For more information, refer to this official document; it details a step-by-step procedure for configuring Workload Identity and provides reference links for the client libraries.
This answer is drafted based on information provided in Google's official documentation.
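As a rough sketch of how that can look without shipping the gcloud CLI in the image (assuming Workload Identity is enabled, the bound GSA has container.clusters.get on the target project plus an RBAC binding in the target cluster, curl and jq are available, and MY_PROJECT, MY_REGION and MY_CLUSTER are placeholders): fetch an access token from the metadata server, look up the cluster endpoint and CA via the GKE API, and call the Kubernetes API directly.
# access token for the GSA bound via Workload Identity
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" | jq -r .access_token)
# cluster endpoint and CA certificate from the GKE API
CLUSTER=$(curl -s -H "Authorization: Bearer ${TOKEN}" \
    "https://container.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/clusters/MY_CLUSTER")
ENDPOINT=$(echo "${CLUSTER}" | jq -r .endpoint)
echo "${CLUSTER}" | jq -r .masterAuth.clusterCaCertificate | base64 -d > ca.crt
# talk to the other cluster's Kubernetes API directly, no gcloud CLI involved
curl --cacert ca.crt -H "Authorization: Bearer ${TOKEN}" "https://${ENDPOINT}/api/v1/pods"
The same flow should translate to the Java client from the question: resolve the endpoint and CA via the GKE API, then pass the access token to the fabric8 ConfigBuilder as an OAuth bearer token instead of a certificate.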
I've built a Google Kubernetes Engine (GKE) cluster in a GCP project.
According to the different use cases of the applications running on the cluster, I associated the applications with different service accounts granted different permissions. To do so, I bound a Google Service Account (GSA) to a Kubernetes Service Account (KSA) as follows:
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]" \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
kubectl annotate serviceaccount \
--namespace K8S_NAMESPACE \
KSA_NAME \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#option_2_node_pool_modification
Everything I have described works as expected.
Currently, there are many GKE clusters in different projects, and the service accounts assigned to the applications are expected to be created in the same project that hosts each GKE cluster. I am planning to centralise the GSAs used by the KSAs into a single GCP project.
Questions
Would it be possible to build a GKE cluster in a project and create a GSA for an application running on the GKE cluster in another project?
If so, what roles do I have to grant to the GSA associated with the GKE cluster in order to access the GSAs in the other project and bind them to the KSAs?
Note: This thread is only about the Google Service Account (GSA) associated with the application running on a GKE cluster, not about the Google Service Account (GSA) associated with the GKE cluster itself, and about how to bind a Google Service Account (GSA) in one GCP project to a Kubernetes Service Account (KSA) in a GKE cluster in another GCP project.
This should be possible. You can definitely create service accounts in one project and attach them to resources in another project.
In the project which "hosts" your service account(s):
ensure that the iam.disableCrossProjectServiceAccountUsage constraint is NOT enforced for the project (this is done by updating the organization policy for the project)
I believe you will also need to grant roles/iam.serviceAccountTokenCreator to the GSA associated with each cluster.
See https://cloud.google.com/iam/docs/impersonating-service-accounts#attaching-different-project
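For instance, a sketch of the cross-project binding (central-project hosts the GSA, cluster-project hosts the GKE cluster; all names are placeholders). The member string still references the cluster project's workload identity pool, while the GSA itself lives in the central project:
gcloud iam service-accounts add-iam-policy-binding \
    GSA_NAME@central-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:cluster-project.svc.id.goog[K8S_NAMESPACE/KSA_NAME]"
kubectl annotate serviceaccount \
    --namespace K8S_NAMESPACE \
    KSA_NAME \
    iam.gke.io/gcp-service-account=GSA_NAME@central-project.iam.gserviceaccount.com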
Kubernetes RBAC can be used to give permissions to a subject in a particular Namespace. Can the same be accomplished with Cloud IAM?
Not at the moment, no. IAM is used to assign and verify permissions when interacting with GCP APIs. IAM can only provide access at the level of the GKE API, which does not take namespaces into account.
As you mentioned, RBAC is your option for more granular permissions within the cluster.
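For example, a minimal sketch of the RBAC side (the namespace, user and binding name here are placeholders), granting a Google identity read-only access in a single namespace:
kubectl create rolebinding dev-view \
    --clusterrole=view \
    --user=alice@example.com \
    --namespace=dev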
If I understand your point correctly: the IAM roles for a GKE Kubernetes cluster are very coarse ("Admin", "Read/Write", "Read"), but you need more fine-grained control over the Kubernetes cluster.
In that case:
There's a new "Alpha" feature in Google Cloud's IAM which wasn't available previously.
Under IAM > Roles
You can now create custom IAM roles with your own subset of permissions.
You can create a minimal role which allows, for example, gcloud container clusters get-credentials to work but nothing else, so that permissions within the Kubernetes cluster are managed entirely by RBAC.
This gives you more fine-grained access configuration for the Kubernetes cluster.
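As an illustrative sketch (the role ID and project are placeholders), a custom role carrying only the permission that gcloud container clusters get-credentials needs could look like this, leaving everything inside the cluster to RBAC:
gcloud iam roles create minimalGkeAccess \
    --project my-project \
    --title "Minimal GKE access" \
    --permissions container.clusters.get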
Our infrastructure currently has two Kubernetes clusters, with one cluster (cluster-1) creating pods in another cluster (cluster-2). Since we are on Kubernetes 1.7.x, we are able to make this work.
However, with 1.8 Kubernetes added support for RBAC, as a result of which we cannot create pods in the new cluster anymore.
We already added support for service accounts and made sure that the RoleBindings are properly set up. But the main issue is that the service account is not propagated outside of the cluster (and rightly so). The user as which cluster-2 receives the request is called 'client', so when we added a RoleBinding with 'client' as a User, everything worked.
This is most certainly not the correct solution, as now any cluster that talks to Kubernetes API server can create a pod.
Is there support for RBAC that works cross cluster? Or, is there a way to propagate the service info through to the cluster we want to create the pods in?
P.S.: Our Kubernetes clusters are currently on GKE, but we would like this to work on any Kubernetes engine.
Your cluster-1 SA uses a kubecfg (for cluster-2) which resolves to the user "client". The only way to solve this is to generate a kubecfg (for cluster-2) with an identity (cert/token) associated with your cluster-1 SA. There are lots of ways to do that: https://kubernetes.io/docs/admin/authentication/
The simplest way is to create an identical SA in cluster-2 and use its token in the kubecfg in cluster-1. Grant RBAC permissions only to that SA.
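A rough sketch of that approach (the names and namespace are placeholders; on the Kubernetes versions mentioned above a token Secret is created automatically for the SA):
# in cluster-2: dedicated SA plus an RBAC binding scoped to what cluster-1 needs
kubectl create serviceaccount cluster-1-deployer --namespace default
kubectl create rolebinding cluster-1-deployer \
    --clusterrole=edit \
    --serviceaccount=default:cluster-1-deployer \
    --namespace default
# read the SA's token and put it into the kubecfg that cluster-1 uses for cluster-2
kubectl get secret \
    $(kubectl get serviceaccount cluster-1-deployer --namespace default -o jsonpath='{.secrets[0].name}') \
    --namespace default -o jsonpath='{.data.token}' | base64 -d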
Where does GKE log RBAC permission events?
Google Container Engine (GKE) clusters with Kubernetes version 1.6 enable RBAC authorization by default. Apparently ABAC is enabled as a fallback authorizer as well, in order to ease the transition of existing clusters to the new authorization scheme. The idea is that RBAC is tried first to authorize an action; if that fails, this should be logged somewhere and then ABAC is consulted to allow the action. This should enable cluster admins to inspect the logs for missed RBAC permissions before finally switching off ABAC.
We have some clusters with GCP logging/monitoring disabled that use our own ELK stack instead. Just to be sure, I've created a test cluster with GCP's cloud logging and monitoring enabled, but I still can't find any RBAC events anywhere. The test pod is a Prometheus server that discovers and scrapes other pods and nodes.
To make this more comprehensive, from Using RBAC Authorization:
When run with a log level of 2 or higher (--v=2), you can see RBAC denials in the apiserver log (prefixed with RBAC DENY:).
In GKE the apiserver logs can be accessed via HTTP like this:
kubectl proxy &
curl -s http://localhost:8001/logs/kube-apiserver.log
The RBAC denials are logged to the master apiserver log.
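For example, to pull only the denial entries out of that log:
curl -s http://localhost:8001/logs/kube-apiserver.log | grep 'RBAC DENY:'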