Is it possible to prevent a k8s Secret from being empty (zero bytes)? - kubernetes

Is it possible to configure k8s so that empty Secrets are not possible?
I had a problem in a service where the Secret somehow got overwritten with an empty one (zero bytes), and thereby my service malfunctioned. I see no advantage to having an empty Secret at any time and would like to prevent empty Secrets altogether.
Thanks for your help!

While it's not a simple answer to implement, as best I can tell what you are looking for is an admission controller, a very popular one being OPA Gatekeeper.
The theory is that kubernetes, as a platform, does not understand your business requirement to keep mistakes from overwriting Secrets. But OPA, as a policy rules engine, allows you to specify those rules without requiring upstream kubernetes to adopt those policies for everyone.
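As a rough illustration, a Gatekeeper ConstraintTemplate along these lines could reject any Secret whose data map is missing or empty; the template name, constraint name, and message are illustrative, not something from this answer:

```yaml
# Hypothetical ConstraintTemplate: reject Secrets with no data entries.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyemptysecrets
spec:
  crd:
    spec:
      names:
        kind: K8sDenyEmptySecrets
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyemptysecrets

        violation[{"msg": msg}] {
          input.review.kind.kind == "Secret"
          count(object.get(input.review.object, "data", {})) == 0
          msg := "Secrets must not be empty (zero bytes)"
        }
---
# The matching Constraint that applies the template to Secret objects.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyEmptySecrets
metadata:
  name: deny-empty-secrets
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Secret"]
```

With both objects applied, an attempt to create or update a Secret with no data would be rejected at admission time (note this simple check would not catch a Secret whose keys exist but hold empty values).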
An alternative is to turn on audit logging and track down the responsible party for re-education
A further alternative is to correctly scope RBAC Roles so that only trusted credentials can write to Secrets (RBAC is additive, so "denying" writes really means never granting the write verbs in the first place), as sketched below.
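A minimal sketch of that scoping, with illustrative names: this Role grants only the read verbs on Secrets, and anything not granted is implicitly denied:

```yaml
# Hypothetical read-only Role: write verbs (create/update/patch/delete)
# are simply not listed, so subjects bound to it cannot overwrite Secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: my-app          # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
```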

Related

OPA Gatekeeper: possible to check for Network Policies on Ingress Update?

I want to make sure that a NetworkPolicy exists when an Ingress is created / updated. CertManager spawns a Pod to get an ACME certificate for the URL when an Ingress is created, and it fails if no NetworkPolicy is defined.
Sadly I haven't found a way to access the NetworkPolicies for the Namespace the Ingress is created in.
You can do this with a custom admission controller. I suggest this because, while OPA implements policy and compliance checking, it might not come out of the box with functionality for dependency checking.
The problem you have, however, is more of a workflow/dependency problem: you want to enforce resource creation/deletion based on dependency resolution. This is best done through a custom admission controller, which has the ability to query your API server for information about existing resources before allowing certain requests to pass. You can read more about admission controllers here in the k8s docs.
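A hedged sketch of what registering such a controller might look like: a ValidatingWebhookConfiguration that routes Ingress create/update requests to a custom webhook service, which would then look up NetworkPolicies in the target Namespace before admitting them. The service name, namespace, and path are all illustrative:

```yaml
# Hypothetical webhook registration; the webhook server itself (which
# queries the API server for NetworkPolicies) is not shown here.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-networkpolicy-for-ingress
webhooks:
  - name: ingress.networkpolicy-check.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    clientConfig:
      service:
        namespace: webhook-system      # illustrative
        name: networkpolicy-check      # illustrative
        path: /validate
      # caBundle: <base64-encoded CA for the webhook's serving cert>
```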

custom rule for NetworkPolicy

Could you please help me understand how to configure a NetworkPolicy so that only a predefined user role may have access to a specific pod (or service)?
I have just begun with Kubernetes and read "Kubernetes in Action", but didn't find any description of how to do this. In general, this request is an authorisation task, and the only solution (I suppose) is to apply some kind of CustomResourceDefinition and create my own controller to manage the behaviour of a CustomNetworkPolicy. Am I on the right track, or is there an appropriate solution?
My microservices are currently equipped with authorisation on the application level, but I need to move this task to the cluster level. One reason is that I could then orchestrate user access without changing the source code of the microservices.
I will be very thankful for an example or clarification.
Using NetworkPolicy you can only manage the incoming and outgoing traffic to/from pods. For authorization, you can leverage a service mesh, which provides many more functionalities without changing your source code. The most popular one is Istio (https://istio.io/docs/tasks/security/authorization/authz-http/); you can check out others as well.
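As a minimal sketch of what that can look like in Istio (assuming a RequestAuthentication already validates JWTs, and with the workload label, namespace, and claim value purely illustrative), an AuthorizationPolicy can allow only requests carrying a given role claim:

```yaml
# Hypothetical AuthorizationPolicy: only requests whose validated JWT
# carries role=admin may reach the selected workload.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-admin-role           # illustrative name
  namespace: my-namespace          # illustrative namespace
spec:
  selector:
    matchLabels:
      app: my-service              # illustrative workload selector
  action: ALLOW
  rules:
    - when:
        - key: request.auth.claims[role]
          values: ["admin"]        # illustrative role claim value
```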
You could use RBAC to control your cluster access permissions.
This link shows how you could use RBAC to restrict a namespace to a specific user.
It works well if you need your pods to have limited access to other pods or resources. You could create a ServiceAccount with defined permissions and reference that account in your deployment, for example. See this link.
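A minimal sketch of that pattern, assuming a Role named pod-reader already exists and with all other names illustrative:

```yaml
# Hypothetical ServiceAccount bound to a narrowly scoped Role, then
# referenced from a Deployment's pod template.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: limited-sa
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: limited-sa-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: limited-sa
    namespace: my-namespace
roleRef:
  kind: Role
  name: pod-reader               # assumes a Role granting only get/list on pods
  apiGroup: rbac.authorization.k8s.io
---
# In the Deployment's pod template:
#   spec:
#     serviceAccountName: limited-sa
```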
References:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/reference/access-authn-authz/authorization/

how could `secret` protect sensitive information in Kubernetes

I am fresh to Kubernetes.
My understanding of a Secret is that it encodes information in base64. And from the resources I have seen, it is claimed that a Secret can protect sensitive information. I do not get this.
Besides encoding information with base64, I do not see any real difference between a Secret and a ConfigMap. And we can decode base64-encoded information so easily. That means there is no protection at all...
Is my understanding wrong?
The thing which protects a Secret is the fact that it is a distinct resource type in kubernetes, and thus can be subject to a different RBAC policy than a ConfigMap.
If you are currently able to read Secrets in your cluster, that's because your ClusterRoleBinding (or RoleBinding) has a rule that specifically grants access to those resources. It can be due to you accessing the cluster through its "unauthenticated" port from one of the master Nodes, or due to the [Cluster]RoleBinding attaching your Subject to cluster-admin, which is probably pretty common in hello-world situations, but I would guess less common in production cluster setups.
That's the pedantic answer, however, really guarding the secrets contained in a Secret is trickier, given that they are usually exposed to the Pods through environment injection or a volume mount. That means anyone who has exec access to the Pod can very easily exfiltrate the secret values, so if the secrets are super important, and must be kept even from the team, you'll need to revoke exec access to your Pods, too. A middle ground may be to grant the team access to Secrets in their own Namespace, but forbid it from other Namespaces. It's security, so there's almost no end to the permutations and special cases.
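To make the "distinct resource type" point concrete, here is a minimal sketch (names illustrative) of a Role that lets a subject read ConfigMaps while leaving Secrets out entirely, so Secret reads are denied by default:

```yaml
# Because Secret and ConfigMap are distinct resource types, a Role can
# grant one and not the other.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader-only
  namespace: my-namespace        # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["configmaps"]    # "secrets" is deliberately absent
    verbs: ["get", "list"]
```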

management of kubernetes secrets

we are starting with Kubernetes and wondering how other projects manage Kubernetes secrets:
Since Kubernetes Secret values are just base64 encoded, it's not recommended to commit the secrets into source control.
If not committing them to source control, they should be kept in some central place somewhere else, otherwise there's no single source of truth. If they are stored in some other place (e.g. HashiCorp Vault), how does the integration with CI work? Does CI get values from Vault and create Secret resources on demand in Kubernetes?
Another approach is probably to have a dedicated team handle infrastructure, and only that team knows and manages secrets. But this team can potentially become a bottleneck if the number of projects is large.
how other projects manage Kubernetes secrets
Since they are not (at least not yet) proper secrets (just base64 encoded), we commit them to a separate, restricted-access git repository.
Most of our projects have a code repository (with non-secret-related manifests such as deployments and services as part of the CI/CD pipeline) and a separate manifest repository (holding namespaces, shared database inits, secrets, and more or less anything that is either a one-time init separate from CI/CD, requires additional permission to implement, or should be restricted in any other way, such as secrets).
With that being said, although a regular developer doesn't have access to the restricted repository, special care must be given to CI/CD pipelines: even if you secure the secrets, they are known (and can be displayed/misused) during the CI/CD stage, so that can be a weak security spot. We mitigate that by having one of our DevOps engineers supervise and approve (via protected branches) any change to the CI/CD pipeline, in much the same manner that a senior lead supervises code changes to be deployed to the production environment.
Note that this is highly dependent on project volume and staffing, as well as actual project needs in term of security/development pressure/infrastructure integration.
I came across this project on GitHub called SealedSecrets: https://github.com/bitnami-labs/sealed-secrets. I haven't used it myself, though it seems to be a good alternative.
But take note of this GitHub issue (https://github.com/bitnami-labs/sealed-secrets/issues/92). It may cause you to lose labels and annotations.
In a nutshell, SealedSecrets gives you a custom resource that holds an encrypted copy of your secret. When you deploy the resource, the in-cluster controller decrypts it and turns it into a regular Kubernetes Secret. This way you can commit your SealedSecret resource to your source code repo.
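A rough sketch of what such a committed resource looks like; the name, namespace, and ciphertext below are placeholders (real ciphertext is generated by the kubeseal CLI against the controller's public key):

```yaml
# SealedSecret as it would sit in the repo; only the in-cluster controller
# can decrypt encryptedData back into a Kubernetes Secret.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials           # illustrative
  namespace: my-namespace        # illustrative
spec:
  encryptedData:
    password: AgB4kx...          # placeholder ciphertext produced by kubeseal
```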
I use k8s Secrets as the store where secrets are kept. That is, when I define a secret, I define it in k8s, not somewhere else only to then figure out how to inject it into k8s. I have a handy client to create, look up, and modify my secrets. I don't need to worry about my secrets leaving the firewall, and they are easily injected into my services.
If you want an extra layer of protection, you can encrypt the secrets in k8s yourself with a KMS or something like that, as sketched below.
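One way to read "encrypt the secrets in k8s yourself with a KMS" is the API server's built-in encryption-at-rest support. A sketch of the EncryptionConfiguration, with the plugin name and socket path illustrative and the wiring to the kube-apiserver (--encryption-provider-config) omitted:

```yaml
# Envelope-encrypts Secret data in etcd via an external KMS plugin.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin              # illustrative plugin name
          endpoint: unix:///var/run/kms.sock
          timeout: 3s
      - identity: {}                       # fallback for reading pre-existing plaintext data
```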
We recently released a project called Kamus. The idea is to allow developers to encrypt secrets for a specific application (identified with a Kubernetes service account), while only this application can decrypt it. Kamus was designed to support GitOps flow, as the encrypted secret can be committed to source control. Take a look at this blog post for more details.

Kubernetes secrets and service accounts

I've been working with kubernetes for the past 6 months and we've deployed a few services.
We're just about to deploy another which stores encrypted data and puts the keys in KMS. This requires two service accounts, one for the data and one for the keys.
Data access to this must be audited. Since access to this data is very sensitive, we are reluctant to put both service accounts in the same namespace: if it were compromised in any way, the attacker could gain access to the data and the keys without it being audited.
For now we have one key in a secret and the other we're going to manually post to the single pod.
This is horrible as it requires that a single person be trusted with this key, and limits scalability. Luckily this service will be very low volume.
Has anyone else come up against the same problem?
How have you gotten around it?
cheers
Requirements
No single person ever has access to both keys (datastore and KMS)
Data access to this must be audited
If you enable audit logging, every API call done via this service account will be logged. This may not help you if your service isn't ever called via the API, but considering you have a service account being used, it sounds like it would be.
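A minimal sketch of an audit Policy that records every request touching Secrets at the Metadata level (passing the file to the API server via --audit-policy-file is omitted):

```yaml
# Logs request metadata (user, verb, resource) for Secret access without
# recording the secret payloads themselves.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
```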
For now we have one key in a secret and the other we're going to manually post to the single pod.
You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have it pushed down into the pod as an environment variable automatically. This is a little more involved than your process, but considerably more secure.
You can also use Vault alongside Google Cloud KMS which is detailed in this article
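A hedged sketch of one common Vault integration pattern, the Vault Agent sidecar injector: pod annotations ask the injector (assumed to be installed in the cluster) to render a secret into the pod. The role name, secret path, and image are all illustrative:

```yaml
# Hypothetical pod using the Vault Agent injector; the sidecar authenticates
# via the pod's ServiceAccount and writes the secret into the pod at runtime.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app-role"                                   # illustrative Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/my-app/db" # illustrative path
spec:
  serviceAccountName: my-app
  containers:
    - name: app
      image: my-app:latest       # illustrative image
```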
What you're describing is pretty common - using a key/ service account/ identity in Kubernetes secrets to access an external secret store.
I'm a bit confused by the double-key concept - what are you gaining by having one key in Secrets and the other in the pod? If Secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down Secrets, using audit logs, and making the key easy to rotate in case of compromise.
A few items to consider:
If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes secrets.
If you're storing secrets centrally outside of Kubernetes, like you're describing, consider just using a single Kubernetes secret - you will get Kubernetes audit logs for access to the secret (see the recommended audit-policy), and Cloud KMS audit logs for use of the key.