How to create an AKS StatefulSet with Azure Storage Key - kubernetes

I know it is possible to use Azure Files from AKS if you grant permissions over the storage account to your service principal or managed identity.
But is it possible to create a StatefulSet if you are not allowed to grant such access and only have the storage account key?
With a normal Deployment it is possible, since you only need the secret holding the key and can reference it with secretName, or even include shareName so that the share name is always the same.
But when it comes to a StatefulSet, which uses volumeClaimTemplates, it seems impossible unless you have permissions over the storage account, as mentioned before.
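For reference, here is a minimal sketch of the "normal Deployment" case described above: an inline azureFile volume that only needs the storage account key stored in a Secret. All names and the share are hypothetical placeholders.

apiVersion: v1
kind: Secret
metadata:
  name: azure-storage-key              # hypothetical name
type: Opaque
stringData:
  azurestorageaccountname: mystorageaccount
  azurestorageaccountkey: "<storage-account-key>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels: { app: demo }
  template:
    metadata:
      labels: { app: demo }
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /mnt/azure
      volumes:
        - name: data
          azureFile:                        # in-tree Azure Files volume
            secretName: azure-storage-key   # Secret holding account name/key
            shareName: myshare              # fixed, pre-created share name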

Related

Rename the EKS creator's IAM user name via aws cli

If we have a role change in the team, I read that the EKS creator can NOT be transferred. Can we instead rename the creator's IAM user name via the aws cli? Will that break EKS?
I only find ways to add a new user via the aws-auth ConfigMap, but this ConfigMap does not contain the creator/root user.
$ kubectl edit configmap aws-auth --namespace kube-system
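(For reference, an entry added through that ConfigMap typically looks something like the sketch below; the ARN and group are hypothetical. This only grants access to the cluster, it does not change the creator record.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/new-admin   # hypothetical ARN
      username: new-admin
      groups:
        - system:masters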
There is no way to transfer the creator/root user of an existing EKS cluster to another IAM user. The only option would be to delete the cluster and recreate it with the new IAM user as the creator.
Can we instead rename the creator's IAM user name via aws cli? Will that break EKS?
The creator record is immutable and managed internally by EKS. It is simply not accessible via the CLI and cannot be amended or deleted.
How do we know whether a cluster was created by an IAM role or an IAM user?
If you cannot find the identity (userIdentity.arn) in CloudTrail that invoked CreateCluster (eventName) for the cluster (responseElements.clusterName) in the last 90 days, you need to raise a case with AWS Support to obtain the identity.
Is it safe to delete the creator IAM user?
Typically, you start by deactivating the creator IAM user account if you are not sure about side effects. You can proceed to delete the account later once you are confident it is safe to do so.
As already mentioned in the answer by Muhammad, it is not possible to transfer the root/creator role to another IAM user.
To avoid getting into the situation that you describe, or any other situation where the creator of the cluster should not stay root, it is recommended not to create clusters with IAM users but with assumed IAM roles instead.
This leads to the IAM role becoming the "creator", meaning that you can use IAM access management to control who can actually assume the given role and thus act as root.
You can either have dedicated roles for each cluster or one role for multiple clusters, depending on how you plan to do access management. The same limitation still applies later on, meaning that you cannot switch the creator role afterwards, so this must be planned properly in advance.

K8s RBAC needed when no API calls?

My pod is running with the default service account. It uses Secrets through mounted files and ConfigMaps, but this is all defined in YAML and the pod does not contain kubectl or any similar component.
Is there any point in using RBAC if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For a basic application, you as the user need permission to create Deployments, Secrets, etc., but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
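A minimal sketch of that setup might look like the following (all names are hypothetical; the orchestrator's Deployment would then set spec.template.spec.serviceAccountName to the ServiceAccount below):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: default
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
  namespace: default
subjects:
  - kind: ServiceAccount
    name: orchestrator
    namespace: default
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io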
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are right there to read; they are base64 encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, anyone who can create a Pod can mount the Secret and make the main container command dump the Secret value somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not required per se, particularly in an evaluation or test cluster.
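A sketch of such a Pod, to make the point concrete (names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: busybox
      # prints every key of the mounted Secret into the container log
      command: ["sh", "-c", "cat /secrets/*"]
      volumeMounts:
        - name: creds
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: my-app-credentials   # any Secret the Pod author can reference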

How to pass configuration via ArgoCD and Crossplane

We are trying to create an environment using Crossplane and ArgoCD. Crossplane generates the database and saves the credentials to a Secret on the management cluster. We then deploy the credentials from the management cluster into a Secret on our destination cluster.
Now we need to pass the credentials from Secret A to Secret B, which the application knows about. The issue is that ArgoCD does not run helm install but helm template, so the lookup function does not work. We thought about using Vault as a middleman, but we are not sure how to load values from a Secret into Vault.
If you have encountered such an issue, or have some sort of solution, we would be very happy to hear it.
Thank you
You need to commit the (encrypted) secrets somewhere for ArgoCD to pick them up. That is the whole point of GitOps.
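As an illustration only (the tool is not named above, so treat it as one possible assumption): with Bitnami Sealed Secrets you encrypt the Secret with kubeseal and commit the resulting SealedSecret manifest to Git for ArgoCD to sync; the in-cluster controller decrypts it back into a normal Secret.

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials            # hypothetical name
  namespace: my-app
spec:
  encryptedData:
    password: AgB3...             # ciphertext produced by kubeseal, safe to commit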
Alternatively, you can try using https://argo-cd.readthedocs.io/en/stable/user-guide/parameters/ but this is considered a temporary workaround.

Kubernetes secret is really secret?

While developing an API server, I needed to give it some account information that should not be shown to anyone.
K8s recommends a Secret for this kind of situation, so that is what I used.
But I wonder if the secret is really secret.
A Secret is just base64-"encoded" text, not "encrypted".
When I see an arbitrary secret like the one below,
namespace: ZGVmYXVsdA==
I can easily know the real value of it by decoding.
namespace: default
In this situation, is a Secret really helpful for security?
The only security advantage of a Secret that I know of is that it lives in memory rather than on the node's filesystem.
But I think that is not enough for security.
Thank you.
From Kubernetes Secrets documentation:
Risks
In the API server, secret data is stored in etcd (by default, etcd data is not encrypted); therefore:
Administrators should enable encryption at rest for cluster data (requires v1.13 or later).
Administrators should limit access to etcd to admin users.
Administrators may want to wipe/shred disks used by etcd when no longer in use.
If running etcd in a cluster, administrators should make sure to use SSL/TLS for etcd peer-to-peer communication.
If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a source repository means the secret is compromised. Base64 encoding is not an encryption method and is considered the same as plain text.
Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party.
A user who can create a Pod that uses a secret can also see the value of that secret. Even if the API server policy does not allow that user to read the Secret, the user could run a Pod which exposes the secret.
Currently, anyone with root permission on any node can read any secret from the API server, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node.
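As a concrete example of the first risk item above ("enable encryption at rest"), the API server is pointed at an EncryptionConfiguration file via its --encryption-provider-config flag; a minimal sketch (the key name and value are placeholders) looks like:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte, base64-encoded key>   # placeholder
      - identity: {}   # fallback so existing unencrypted data can still be read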
Also check the great post Can Kubernetes Keep a Secret? It all depends what tool you're using, especially the "What's wrong with Kubernetes plain Secrets?" part.
I hope that answers your question, but generally @Harsh Manvar is right: you need access to that Secret in the first place.
You should limit access using authorization policies such as RBAC.
You'll need to create a Role/ClusterRole with the appropriate permissions and then bind it (using a RoleBinding/ClusterRoleBinding) to a user and/or a service account (which can then be used in the pod definition), depending on your use case.
You can look at the documentation here to create Role & ClusterRole and the docs here for RoleBinding and ClusterRoleBinding.
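A minimal sketch of such a setup, assuming a namespace-scoped Role bound to a ServiceAccount (all names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]       # only the identities bound below get this
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: default
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io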

How to write secrets to HashiCorp Vault or Azure Key Vault from Kubernetes?

I have come across injectors, drivers, et cetera for Kubernetes for most major secret providers, but the common theme with those solutions is that they only sync one way, i.e., from the vault to the cluster. I also want to be able to update the secrets from my Kubernetes cluster.
What is the recommended pattern for doing this? (Apart from the obvious solution of writing a custom service that communicates with the vault)
I'd say that this is an anti-pattern, meaning you shouldn't do that.
If you create your secret in k8s from a file, that would mean you either have it in version control, which you should never do, or you don't have it in version control (or create it from a literal), which is good, but then you have neither a change history/log nor real documentation of your secret. I guess that explains why the major secret providers don't support it.
You should set up the secret in the key vault and apply it to your cluster using Terraform, for example.
Terraform supports both Azure Key Vault secrets (https://www.terraform.io/docs/providers/azurerm/r/key_vault_secret.html) and Kubernetes secrets (https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
You can simply read the key vault secret and use it in the k8s secret. Every time you update the key vault secret, you apply the change with Terraform.
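A minimal Terraform sketch of that pattern, assuming the Key Vault and its secret already exist (the names below are hypothetical):

# Read the existing secret from Azure Key Vault
data "azurerm_key_vault_secret" "db_password" {
  name         = "db-password"
  key_vault_id = azurerm_key_vault.example.id
}

# Mirror it into a Kubernetes Secret in the cluster
resource "kubernetes_secret" "db_credentials" {
  metadata {
    name      = "db-credentials"
    namespace = "default"
  }
  data = {
    password = data.azurerm_key_vault_secret.db_password.value
  }
}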