Can we configure AWS Secrets Manager to integrate with an on-premises k8s cluster?

I set up an EKS cluster and integrated AWS Secrets Manager into it following the steps in https://github.com/aws/secrets-store-csi-driver-provider-aws, and it worked as expected.
Now we have a requirement to integrate AWS Secrets Manager with an on-premises k8s cluster, and I am unable to follow the same steps as they seem to be explicitly for AWS EKS-based clusters.
I googled around a bit and found you can call Secrets Manager programmatically using one of the methods in https://docs.aws.amazon.com/secretsmanager/latest/userguide/asm_access.html, but this approach won't work for us.
Is there a k8s way to connect directly to AWS Secrets Manager without setting up the AWS CLI and the OIDC cluster ID on the on-premises cluster?
Any help would be highly appreciated.

You can set up an external OIDC provider with AWS and also configure K8s with OIDC, but that is a lot of work.
AWS recently announced IAM Roles Anywhere, which lets you use host-based certificates to authenticate, but you will still have to call the Secrets Manager APIs yourself.
If you are willing to retrieve secrets through etcd (which may store the secrets base64-encoded on the cluster), you can look at using the open-source External Secrets solution.
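As a rough sketch of the External Secrets Operator approach (all names below are placeholders, and the fields should be checked against the operator version you install): a SecretStore tells the operator how to reach Secrets Manager, and an ExternalSecret syncs one secret into a native Kubernetes Secret. This assumes static IAM user credentials stored in a Kubernetes Secret called awssm-credentials, since an on-premises cluster has no node IAM role to fall back on.

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1                  # adjust to your region
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: awssm-credentials      # hypothetical Secret holding the IAM user keys
            key: access-key-id
          secretAccessKeySecretRef:
            name: awssm-credentials
            key: secret-access-key
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-password
spec:
  refreshInterval: 1h                    # how often to re-sync from Secrets Manager
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: db-password                    # Kubernetes Secret the operator creates
  data:
    - secretKey: password
      remoteRef:
        key: my-app/db-password          # hypothetical secret name in Secrets Manager

The operator then keeps the Kubernetes Secret in sync, and your pods consume it like any other Secret (environment variable or volume).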

Related

How to authenticate to a GKE cluster without using the gcloud CLI

I've got a container inside a GKE cluster and I want it to be able to talk to the Kubernetes API of another GKE cluster to list some resources there.
This works well if I run the following command in a separate container to proxy the connection for me:
gcloud container clusters get-credentials MY_CLUSTER --region MY_REGION --project MY_PROJECT; kubectl --context MY_CONTEXT proxy --port=8001 --v=10
But this requires me to run a separate container that, due to the size of the gcloud CLI, is more than 1 GB in size.
Ideally I would like to talk directly from my primary container to the other GKE cluster, but I can't figure out how to determine the IP address and set up the authentication required for the connection.
I've seen a few questions:
How to Authenticate GKE Cluster on Kubernetes API Server using its Java client library
Is there a golang sdk equivalent of "gcloud container clusters get-credentials"
But it's still not really clear to me if/how this would work with the Java libraries, if at all possible.
Ideally I would write something like this:
var info = gkeClient.getClusterInformation(...);
var auth = gkeClient.getAuthentication(info);
...
// using the io.fabric8.kubernetes.client.ConfigBuilder / DefaultKubernetesClient
var config = new ConfigBuilder().withMasterUrl(info.url())
        .withNamespace(null)
        // certificate or other authentication mechanism
        .build();
return new DefaultKubernetesClient(config);
Does that make sense, is something like that possible?
There are multiple ways to connect to your cluster without using the gcloud CLI. Since you are trying to access the cluster from another cluster within Google Cloud, you can use the Workload Identity authentication mechanism. Workload Identity is the recommended way for your workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services in a secure and manageable way. For more information, refer to the official documentation, which details a step-by-step procedure for configuring Workload Identity and provides reference links for the client libraries.
This answer is drafted based on information provided in Google's official documentation.
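As a rough sketch of the Kubernetes side of Workload Identity (all names below are placeholders): you annotate the Kubernetes ServiceAccount that your pod runs under so it is bound to a Google service account, and grant that Google service account access to the target cluster (for example roles/container.developer).

# KSA used by the calling pod; the annotation links it to a Google service
# account (hypothetical names) that IAM allows to reach the other cluster's API.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-client
  namespace: my-namespace
  annotations:
    iam.gke.io/gcp-service-account: gke-api-caller@MY_PROJECT.iam.gserviceaccount.com

The Google service account also needs a roles/iam.workloadIdentityUser binding for that KSA, as described in the Workload Identity documentation; your Java code can then obtain Google credentials via the google-auth-library and present the resulting access token as a bearer token to the other cluster's API endpoint.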

How to set up a GKE cluster whose pods communicate with Cloud SQL, with the Cloud SQL password stored in Google Cloud Secret Manager

I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch the credentials from Secret Manager, and if the credentials in Secret Manager are updated, how will the pods pick up the new secret?
How do I set up the above requirement? Can someone please help with this?
Thanks,
Anand
You can make your deployed application get the secret (password) programmatically from Google Cloud Secret Manager. You can find an example in many languages at the following link: https://cloud.google.com/secret-manager/docs/samples/secretmanager-access-secret-version
But first make sure that your GKE setup, and more specifically your application, is able to authenticate to Google Cloud Secret Manager. The following links can help you choose the appropriate approach:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
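If you go the Workload Identity route, the application pod just needs to run under a Kubernetes ServiceAccount that is bound to a Google service account holding the Secret Manager Secret Accessor role; the Google Cloud client libraries in the pod then pick up credentials automatically. A minimal sketch with placeholder names (the ServiceAccount my-app-sa is assumed to carry the iam.gke.io/gcp-service-account annotation as in the Workload Identity guide):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-sa              # KSA bound to a GSA with roles/secretmanager.secretAccessor
      containers:
        - name: app
          image: gcr.io/MY_PROJECT/my-app:latest # placeholder image
          env:
            - name: DB_PASSWORD_SECRET           # resource name the app reads at startup
              value: projects/MY_PROJECT/secrets/db-password/versions/latest

The application reads DB_PASSWORD_SECRET and calls the Secret Manager access API at startup (see the first link above).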
You can find information regarding that particular solution in this doc.
There are also good examples on medium here and here.
To answer your question regarding updating the secrets:
Usually secrets are pulled when the container is being created, but if you expect the credentials to change often (or for the pods to stick around for very long) you can adjust the code to update the secrets on every execution.

How to configure an AKS cluster to use secrets from external Vault installed on different AKS Cluster

I have two kubernetes clusters running on Azure AKS.
One cluster, named APP-Cluster, hosts the application pods.
One cluster, named Vault-Cluster, has HashiCorp Vault installed on it.
I installed HashiCorp Vault with Consul in HA mode according to the official document below, and the installation was successful.
https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
But I am quite lost on how to connect to the Vault cluster and retrieve its secrets from the other cluster. I would like to use Vault's sidecar injection method for my app cluster to communicate with the Vault cluster. I tried to follow the steps in the official document below, but that document uses minikube instead of a public cloud Kubernetes service. How do I define the EXTERNAL_VAULT_ADDR variable for AKS, as described in the document for minikube? Is it the API server DNS address that I can get from the Azure portal?
https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault?in=vault/kubernetes
The way you interact with Vault is via its HTTP(S) API. That means you need to expose the vault service running in your Vault-Cluster using one of the usual methods.
As an example you could:
use a Service of type LoadBalancer (this works because you are running Kubernetes on a cloud provider that supports this feature);
install an ingress controller, expose it (again with a load balancer) and define an Ingress resource for your vault service;
use a NodePort Service.
The EXTERNAL_VAULT_ADDR value depends on which strategy you want to use.
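As a rough sketch of the LoadBalancer option (assuming the Vault pods carry the default labels from the official Helm chart and listen on 8200; verify the selector and namespace against your installation):

apiVersion: v1
kind: Service
metadata:
  name: vault-external
  namespace: vault                       # namespace where Vault runs
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault        # label used by the official Helm chart
  ports:
    - name: http
      port: 8200
      targetPort: 8200

EXTERNAL_VAULT_ADDR would then be http://<load-balancer-ip>:8200 (or https:// if Vault is serving TLS).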

Vault deployment in a live cluster using Terraform

I want to deploy Vault in a cluster that contains microservices, my Vault shouldn't have external access, and everything should be done using Terraform. Does anyone know how to do it?
Please read the Terraform Getting Started guide on how to write Terraform code.
You will need to use the Google provider to deploy your resources. On that page you can also find providers such as Kubernetes, Vault and many others.

Anthos showing wrong status of Deployment on on-premise external cluster

I wanted to give a try to GCP's Anthos On-Premise GKE offering.
For sake of my demo I setup a Kubernetes cluster in GCP itself using Google Compute Engine following instructions from (https://kubernetes.io/docs/setup/production-environment/turnkey/gce/)
After this I followed the Anthos documentation to register my cluster with Anthos. I was able to register the cluster and log in to it using both the token-based and basic-authentication mechanisms.
Now when I try to deploy anything from the GCP console, I get the following error.
But the deployment succeeds: I can see the deployment and its associated pods in the Running state on my cluster.
Also, when I try to deploy using Marketplace, I get the following error.
I wish to know whether this is a bug in Anthos or whether my cluster is missing some configuration.
You're not running Anthos GKE On-Prem, you're running open-source Kubernetes on Google Cloud. Things designed for Anthos - the marketplace and connecting clusters to Cloud Console - are not supposed to work in your setup. The fact that they mostly work despite that is an accident (and a testament to the portability and compatibility of Kubernetes).
To get Cloud Console integration and use the marketplace, you need to use either Anthos GKE On-Prem, which runs on VMware, or regular GKE.