Is there a way to programmatically update a Kubernetes secret from a pod? That is, without using kubectl.
I have a secret mounted on a pod and also exposed via an environment variable. I would like to modify it from my service, but it looks like it's read-only by default.
You can use the Kubernetes REST API with the pod's ServiceAccount token as credentials (found at /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod); you just need to allow the service account to edit secrets in the namespace via a Role.
See Secret for the API docs.
The API server is internally reachable via https://kubernetes.default.
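For example, a minimal sketch with curl from inside the pod (my-secret and the password key are placeholders; Secret values must be base64-encoded in the data field):
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/merge-patch+json" \
    -X PATCH \
    -d '{"data":{"password":"'"$(echo -n 'new-value' | base64)"'"}}' \
    https://kubernetes.default/api/v1/namespaces/$NS/secrets/my-secret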
Related
I have a pod that exposes a port (Server). Other pods (Clients) can communicate with it.
The server can see the remote IP and port on the socket when a client connects to it. I am looking for a way to get the client's pod name from its IP and port.
I saw a bunch of questions/answers about getting pod names via kubectl. However, I am not sure whether I can use kubectl from within the cluster itself.
I am trying to figure out what is available to something running on the cluster. It's OK if it requires some special privileges. It's more complicated if it requires authentication.
List all the Pods with the List Pods API operation and parse the JSON response for the status.podIP field (e.g. with jq or some other JSON parsing tool) to find the JSON object of the Pod that has your desired IP address. Then, extract the metadata.name field from this JSON object to get the name of the Pod.
You can do this either by using the Kubernetes API directly (e.g. with curl) or with kubectl (e.g. kubectl get pods -o json | jq ...). In either case, you must include the ServiceAccount token of the ServiceAccount used by the Pod from which you issue the request (as a Bearer token in the Authorization header if you use the Kubernetes API directly, or via the --token command-line flag if you use kubectl).
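For example, directly against the API from inside the pod (the namespace and the IP address are placeholders):
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default/api/v1/namespaces/default/pods \
    | jq -r '.items[] | select(.status.podIP == "10.244.1.23") | .metadata.name'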
Regarding authorisation, you need a Role allowing the list verb on the pods resource and a RoleBinding that binds this Role to the ServiceAccount your Pod is using (by default, Pods use a ServiceAccount named default in their namespace, but you can specify a custom ServiceAccount with the serviceAccountName field of the Pod spec).
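A minimal sketch of that Role and RoleBinding, assuming the default namespace and the default ServiceAccount (pod-lister is a placeholder name):
$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-lister
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-lister
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-lister
EOF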
Whatever can be done via kubectl can be done via direct requests to the API server as well. All you need is a properly set-up ServiceAccount for your pod. Once you have it, you can use plenty of libraries dedicated to communication with the K8s API server.
I am trying to create a K8s cluster in Azure AKS, and when the cluster is ready I can see a couple of resources created within the default namespace, for example a Secret and a ConfigMap:
As a security recommendation, no K8s resources should be created under the default namespace, so how can this be avoided? They are created by default during cluster creation.
I have found the same question asked here:
User srbose-msft (Microsoft employee) explained the principle of operation very well:
In Kubernetes, a ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named "default" exists in every active namespace. [Reference]
TokenController runs as part of kube-controller-manager. It acts asynchronously. It watches ServiceAccount creation and creates a corresponding ServiceAccount token Secret to allow API access. [Reference] Thus, the secret for the default ServiceAccount token is also created.
Trusting the custom CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate chain and adding the parsed certificates to the RootCAs field in the tls.Config struct.
You can distribute the CA certificate as a ConfigMap that your pods can access. [Reference] AKS implements this in all active namespaces through ConfigMaps named kube-root-ca.crt in those namespaces.
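For example, you can read that bundle back with kubectl (the same ConfigMap exists in every active namespace):
$ kubectl get configmap kube-root-ca.crt -n default -o jsonpath='{.data.ca\.crt}'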
You shall also find a Service named kubernetes in the default namespace. It has a ServiceType of ClusterIP and exposes the API Server Endpoint also named kubernetes internally to the cluster in the default namespace.
All the resources mentioned above are created by design at the time of cluster creation and their creation cannot be prevented. If you try to remove these resources manually, they will be recreated by the kube-controller-manager to ensure the desired goal state.
Additionally:
The "Kubernetes clusters should not use the default namespace" Policy is still in Preview. Currently the schema does not explicitly allow Kubernetes resources in the default namespace to be excluded during policy evaluation. However, at the time of writing, the schema allows labelSelector.matchExpressions[].operator, which can be set to NotIn with appropriate labelSelector.matchExpressions[].values for the Service default/kubernetes with label:
component=apiserver
The default ServiceAccount, the default ServiceAccount token Secret and the RootCA ConfigMap themselves are not created with any labels and hence cannot be added to this list. If this is impeding your use-case, I would urge you to share your feedback at https://techcommunity.microsoft.com/t5/azure/ct-p/Azure
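As a hypothetical sketch of such an exclusion (the exact parameter layout depends on the policy definition's schema), the selector fragment would look something like:
labelSelector:
  matchExpressions:
  - key: component
    operator: NotIn
    values:
    - apiserver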
We're trying to create different Kubernetes secrets and offer access to specific secrets through specific service accounts that are assigned to pods. For example:
Secrets
- User-Service-Secret
- Transaction-Service-Secret
Service Account
- User-Service
- Transaction-Service
Pods
- User-Service-Pod
- Transaction-Service-Pod
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set all this up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it is assigned to doesn't have get permission on User-Service-Secret.
How do we actually enforce the RBAC system?
FYI we are using EKS
First, it is important to distinguish between API access to the secret and consuming the secret as an environment variable or a mounted volume.
TLDR:
RBAC controls who can access a secret (or any other resource) using K8s API requests.
Namespaces or the service account's secrets attribute control whether a pod can consume a secret as an environment variable or through a volume mount.
API access
RBAC is used to control whether an identity (in your example, the service account) is allowed to access a resource via the K8s API. You control this by creating a RoleBinding (namespaced) or a ClusterRoleBinding (cluster-wide) that binds your identity (the service account) to a Role (namespaced) or a ClusterRole (not namespaced). Then, once you assign the service account to a pod by setting its serviceAccountName attribute, running kubectl get secret in that pod, or making the equivalent call through one of the client libraries, uses the service account's credentials for the API request.
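For example, a minimal sketch that only allows the User-Service account to get its own secret (assuming everything lives in a namespace called services; note that real Kubernetes object names must be lowercase, so the names from the example are lower-cased here):
$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-service-secret-reader
  namespace: services
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["user-service-secret"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-service-secret-reader
  namespace: services
subjects:
- kind: ServiceAccount
  name: user-service
  namespace: services
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-service-secret-reader
EOF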
Consuming Secrets
This, however, is independent of configuring the pod to consume the secret as an environment variable or a volume mount. If the container spec in a pod spec references the secret, it is made available inside that container. Note: per container, not per pod. You can limit which secrets a pod can mount by putting pods in different namespaces, because a pod can only refer to secrets in its own namespace. Additionally, you can use the service account's secrets attribute to limit which secrets a pod with that service account can refer to.
$ kubectl explain sa.secrets
KIND:     ServiceAccount
VERSION:  v1

RESOURCE: secrets <[]Object>

DESCRIPTION:
     Secrets is the list of secrets allowed to be used by pods running using
     this ServiceAccount. More info:
     https://kubernetes.io/docs/concepts/configuration/secret

     ObjectReference contains enough information to let you inspect or modify
     the referred object.
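A minimal sketch of such a ServiceAccount (reusing the lower-cased placeholder names from above):
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user-service
secrets:
- name: user-service-secret
EOF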
You can learn more about the security implications of Kubernetes secrets in the secret documentation.
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set all this up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it is assigned to doesn't have get permission on User-Service-Secret.
Yes, this is correct.
This is documented in the Kubernetes documentation on privilege escalation via pod creation within a namespace:
Users who have the ability to create pods in a namespace can potentially escalate their privileges within that namespace. They can create pods that access secrets the user cannot themselves read, or that run under a service account with different/greater permissions.
To actually enforce this kind of security policy, you probably have to add an extra layer of policies via an admission controller. The Open Policy Agent, in the form of OPA Gatekeeper, is most likely a good fit for this kind of policy enforcement.
We have an AWS EKS Kubernetes cluster with two-factor authentication for all the kubectl commands.
Is there a way of deploying an app into this cluster using a pod deployed inside the cluster?
Can I deploy using Helm charts or by specifying a service account instead of a kubeconfig file?
Can I specify a service account (the one that is assigned to the pod running kubectl) for all kubectl actions?
All this is meant to bypass two-factor authentication for continuous deployment via Jenkins, by deploying a Jenkins agent into the cluster and using it for deployments. Thanks.
You can use a supported Kubernetes client library, kubectl, or curl directly to call the REST API exposed by the Kubernetes API server from within a pod.
You can use Helm as well, as long as you install it in the pod.
When you call the Kubernetes API from within a pod, the pod's service account is used by default. The service account mounted in the pod needs an associated Role and RoleBinding to be able to call the Kubernetes API.
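For example, something like this should work from inside the pod (assuming kubectl is installed in the image, a deployment.yaml exists, and the pod's service account is allowed to create/update Deployments):
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ kubectl --server https://kubernetes.default \
    --certificate-authority /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    --token "$TOKEN" \
    apply -f deployment.yaml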
How can a service be accessed via the service proxy running at the master in Kubernetes?
For example, the kube-ui or fluentd-elasticsearch services can be accessed at a URL like http://[masterIP:port]/api/v1/proxy/namespaces/kube-system/services/kube-ui/.
But I cannot access http://[masterIP:port]/api/v1/proxy/namespaces/test/services/myweb after creating a service named myweb in the test namespace.
How do I do this?
If you're trying to access it from a pod running in the cluster, you're best off just accessing the service directly. Services are made available using DNS within the cluster. If your pod is in the same namespace as the service, you should be able to access it simply using its name, e.g. at myweb in this case. If your pod is in a different namespace, you can hit it at service-name.namespace, e.g. myweb.test in this case.
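For example, from a pod in another namespace (assuming the myweb service listens on port 80):
$ curl http://myweb.test/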
If you're trying to access it from outside the cluster, then you shouldn't need to do anything different than you do for the default services. If you're unable to access it in the same way, it's likely that your service doesn't have any pods backing it, or that those pods aren't working. You can check which pods are backing your service using kubectl get endpoints myweb --namespace=test. If that's empty, then you should make sure you've scheduled the pods that are meant to implement the service, and if so, that their labels are correct.
You might find the documentation on services useful.