app in its own namespace with a service account available in any namespace - kubernetes

I have a very specific scenario I'm trying to solve for:
Using Kubernetes (single cluster)
Installing Vault on that cluster
Sending GitLab containers to the same cluster.
I need to install Vault in such a way that:
Vault lives in its own namespace (easy/solved)
Vault's service account (vault-auth) is available to all other namespaces (unsolved)
GitLab's default behavior is to put each app/service into its own namespace, named with the project ID, e.g. repo_name+project_id. It's predictable, but that leaves two options:
When the app is in its own namespace it cannot access the Vault service account in the 'vault' Namespace. It requires you to create a vault service account in each application namespace; hot garbage, or...
Put ALL apps + Vault in the default namespace and applications can easily find the 'vault-auth' service account. Messy but totally works.
Using GitLab the way it is intended (and I don't disagree) means leaving each app in its own namespace. The question then becomes:
How would one create the Kubernetes service account for Vault (vault-auth) so that Vault the application is in its own namespace but the service account itself is available to ALL namespaces?
Then, no matter the namespace that GitLab creates, the containers have equal access to the 'vault-auth' service account.
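For concreteness, the first (per-namespace) option described above amounts to stamping out the same service account in every GitLab-created namespace; a rough sketch, with an illustrative namespace name:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vault-auth
      namespace: my-repo-12345   # repeated for every repo_name+project_id namespace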

Related

Default ServiceAccount k8s

I'm a little confused about the default Service Account in a newly created Namespace in my Minikube.
Does it have any permissions? It seems not, because I can't find any RoleBinding or ClusterRoleBinding that references this SA.
Then why is it created when it has no permissions, or is there a use case for that?
And lastly, why are service accounts mounted into pods by default?
Regards
ralph
The default service account doesn’t have enough permissions to retrieve the services running in the same namespace.
Kubernetes follows a closed-by-default convention, which means that out of the box no user or service account has any permissions.
To grant that access, we need to create a role binding associating the default service account with an appropriate role. For example, assigning a viewer role to the service account can give it permission to list pods.
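As a rough sketch (the namespace name is illustrative, and the built-in view ClusterRole is just one reasonable choice), such a binding could look like:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: default-sa-view
      namespace: demo
    subjects:
    - kind: ServiceAccount
      name: default        # the namespace's default service account
      namespace: demo
    roleRef:
      kind: ClusterRole
      name: view           # built-in read-only role; allows listing pods, services, etc.
      apiGroup: rbac.authorization.k8s.io

After applying this, pods in the demo namespace that run under the default service account can list resources such as pods and services in that namespace.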
Pods have the default service account assigned even when you don’t ask for it. This is because every pod in the cluster needs to have one (and only one) service account assigned to it.
Refer to Kubernetes namespace default service account for more information.

How to configure an AKS cluster to use secrets from an external Vault installed on a different AKS cluster

I have two kubernetes clusters running on Azure AKS.
One cluster named APP-Cluster, which hosts the application pods.
One cluster named Vault-Cluster, on which HashiCorp Vault is installed.
I have installed HashiCorp Vault with Consul in HA mode according to the official document below. The installation is successful.
https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
But I am quite lost on how to connect to the Vault cluster and retrieve its secrets from the other cluster. I would like to use Vault's sidecar injection method for my app cluster to communicate with the Vault cluster. I tried to follow the steps in the official document below, but that document uses minikube rather than a public cloud Kubernetes service. How do I define the "EXTERNAL_VAULT_ADDR" variable for AKS, as described in the document for minikube? Is it the API server DNS address, which I can get from the Azure portal?
https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault?in=vault/kubernetes
You interact with Vault via its HTTP(S) API. That means you need to expose the vault service running in your Vault-Cluster using one of the usual methods.
As an example you could:
use a Service of type LoadBalancer (this works because you are running Kubernetes on a cloud provider that supports this feature);
install an ingress controller, expose it (again with a load balancer), and define an Ingress resource for your vault service; or
use a NodePort Service.
The EXTERNAL_VAULT_ADDR value depends on which strategy you want to use.
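For example, a minimal sketch of the LoadBalancer option (the namespace, service name, and selector labels below assume a standard Helm-chart install of Vault and should be checked against your actual release):

    apiVersion: v1
    kind: Service
    metadata:
      name: vault-external
      namespace: vault
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: vault   # labels used by the official Helm chart; verify on your install
        component: server
      ports:
      - name: http
        port: 8200
        targetPort: 8200

Once Azure assigns an external IP to this Service (kubectl get svc -n vault vault-external), EXTERNAL_VAULT_ADDR in the tutorial would be something like http://<EXTERNAL-IP>:8200, not the API server address.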

Using a Google service account keyfile in a Kubernetes serviceaccount as a testing environment replacement for GKE workload identity

I have a GKE app that uses Kubernetes service accounts linked to Google service accounts for API authorization in-app.
Up until now, to test these locally, I had two versions of my images: one with and one without a test-keyfile.json copied into them for authorization. (The production images used the service account for authorization; the test environment would ignore the service accounts and instead look for a keyfile, which gets copied in during the image build.)
I was wondering if there was a way to merge the images into one, and have both prod/test use the Kubernetes serviceaccount for authorization. On production, use GKE's workload identity, and in testing, use a keyfile(s) linked with or injected into a Kubernetes serviceaccount.
Is such a thing possible? Is there a better method for emulating GKE workload identity on a local test environment?
I do not know a way of emulating workload identity on a non-Google Kubernetes cluster, but you could change your app to read the auth credentials from a volume/file or the metadata server, depending on an environment setting. See this article (and particularly the code linked there) for an example of how to authenticate using local credentials or a Google SA depending on environment variables. The article also shows how to use Pod overlays to keep the prod vs dev changes separate from the bulk of the configuration.
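As an illustration of the file-based branch, a minimal test-environment Pod sketch (the Secret name, mount path, and image are assumptions; on GKE with Workload Identity you would simply omit the env var and volume and let the linked service account do the work):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app-test
    spec:
      serviceAccountName: my-app          # same Kubernetes service account as production
      containers:
      - name: app
        image: my-app:latest
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS   # standard variable read by Google client libraries
          value: /var/secrets/google/test-keyfile.json
        volumeMounts:
        - name: gsa-key
          mountPath: /var/secrets/google
          readOnly: true
      volumes:
      - name: gsa-key
        secret:
          secretName: test-gsa-key        # Secret created from the test keyfile

This keeps a single image: the credentials come from the cluster (a mounted Secret in test, Workload Identity in prod) instead of being baked in at build time.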

Kubernetes service account role using OIDC

I am trying out the capability where two pods deployed to the same worker node in EKS are associated with different service accounts. Below are the steps:
Each service account is associated with a different role: one with access to SQS and the other without access.
Used eksctl to associate an OIDC provider with the cluster and also created an iamserviceaccount, with a service account in Kubernetes and a role with a policy for accessing SQS attached (implicit annotation of the service account with the IAM role, provided by eksctl create iamserviceaccount).
But when I start the pod whose service account is tied to the role with SQS access, I get access denied for SQS; however, if I add SQS permissions to the worker node instance role, it works fine.
Am I missing any steps and is my understanding correct?
So, there are a few things required to get IRSA to work:
There has to be an OIDC provider associated with the cluster, following the directions here.
The IAM role has to have a trust relationship with the OIDC provider, as defined in the AWS CLI example here.
The service account must be annotated with a matching eks.amazonaws.com/role-arn.
The pod must have the appropriate service account specified with a serviceAccountName in its spec, as per the API docs.
The SDK for the app needs to support the AssumeRoleWithWebIdentity API call. Weirdly, the aws-sdk-go-v2 SDK doesn't currently support it at all (the "old" aws-sdk-go does).
It's working with the node role because one of the requirements above isn't met, meaning the credential chain "falls through" to the underlying node role.
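To make the Kubernetes side of that checklist concrete, a hedged sketch (the role ARN, names, and namespace are placeholders):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sqs-reader
      namespace: default
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/sqs-reader-role   # must match the IAM role's trust policy
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sqs-consumer
      namespace: default
    spec:
      serviceAccountName: sqs-reader      # without this, the pod uses the default SA and falls back to the node role
      containers:
      - name: app
        image: my-sqs-consumer:latest

If the annotation, trust policy, and serviceAccountName all line up, the pod is injected with AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables, which an SDK that supports AssumeRoleWithWebIdentity will pick up.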

Should the Kubernetes API server be accessible as https://kubernetes:443 from any pod in the cluster?

According to the Kubernetes docs,
The kubernetes service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
For some reason I can't access kubernetes from a non-default namespace, unless I manually create the service there (or use kubernetes.default). Looking at the code, I see the kubernetes service is created in the default namespace. Is it also available in other namespaces? If so, how is that accomplished? How might I debug it?
I've been finding it difficult to Google this, since "kubernetes service" is not really a great search keyword.
For the record, I'm using GKE.
The kubernetes Service is only available in the default Namespace.
If you want to access the API server using this Service, you need to use kubernetes.default.
Services are assigned a DNS A record for a name of the form
my-svc.my-namespace.svc.cluster.local
This resolves to the cluster IP of the Service.
That means you need to use kubernetes.default.svc.cluster.local.
You can skip the svc.cluster.local suffix, so to access the kubernetes Service you need to provide kubernetes.default.
If you are accessing it from the default namespace itself, you can skip the namespace part as well and use just kubernetes.
See details here.
Also,
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster.
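As a quick check, a throwaway Pod like the sketch below (image and name are arbitrary) can call the API server from any namespace via kubernetes.default.svc, using exactly those automatically mounted credentials:

    apiVersion: v1
    kind: Pod
    metadata:
      name: api-check
    spec:
      restartPolicy: Never
      containers:
      - name: curl
        image: curlimages/curl
        command:
        - sh
        - -c
        - >
          TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) &&
          curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          -H "Authorization: Bearer $TOKEN"
          https://kubernetes.default.svc/api

Whether the request is authorized then depends on what RBAC grants the pod's service account, but the DNS name and the mounted token/CA bundle work the same way in every namespace.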