Only enable ServiceAccounts for some pods in Kubernetes

I use the Kubernetes ServiceAccount plugin to automatically inject a ca.crt and token into my pods. This is useful for applications such as kube2sky, which need to access the API server.
However, I run many hundreds of other pods that don't need this token. Is there a way to stop the ServiceAccount plugin from injecting the default-token into these pods (or, even better, have it off by default and turn it on explicitly for a pod)?

As of Kubernetes 1.6+ you can disable automounting of API credentials for a particular pod, as stated in the Kubernetes Service Accounts documentation:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
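If you'd rather have injection off by default and only turn it on explicitly, you can also set automountServiceAccountToken: false on the ServiceAccount itself; the pod-level field takes precedence when both are set. A minimal sketch, with an illustrative ServiceAccount name:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa                    # illustrative name
automountServiceAccountToken: false    # pods using this SA get no token by default
---
apiVersion: v1
kind: Pod
metadata:
  name: needs-api-access
spec:
  serviceAccountName: no-token-sa
  automountServiceAccountToken: true   # explicit opt-in for this pod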

Right now there isn't a way to enable a service account for some pods but not others, although you can use ABAC for some service accounts to restrict access to the apiserver.
This issue is being discussed in https://github.com/kubernetes/kubernetes/issues/16779 and I'd encourage you to add your use case to that issue and see when it will be implemented.

Related

Determine Service Account for Airflow in Kubernetes

I am running Airflow in Kubernetes:
One pod, 2 containers - the webserver and the scheduler.
KubernetesExecutor in the configs.
But due to organizational settings, the scheduler can't work with the default service account; it doesn't have enough roles, and I can't change this setting:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:<account_name>:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"<namespace_name>\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
So I created a ServiceAccount with the needed Roles, RoleBindings, etc. How can I set Airflow to run the scheduler with that SA?
You can specify the desired SA to use in your pod spec as discussed in the link below:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot   # set this to the SA you created
  ...
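As a sanity check, you can verify that the new SA actually has the permission the scheduler was missing before wiring it in; the namespace and SA names below are placeholders, and running this requires that your own user is allowed to impersonate service accounts:
kubectl auth can-i list pods \
  --as=system:serviceaccount:<namespace_name>:<airflow-sa> \
  -n <namespace_name>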

StatefulSet without service and rollback support yaml

I need to deploy a pod with Persistent Volume Claim support and, at the same time, I also need support for modifying the pod (editing any configuration) as well as rolling back to the previous container image version.
I went through the docs, but everywhere they include a Service in the statefulset.yaml file.
I don't want a Service here; it should just deploy the StatefulSet pod with rollback support.
Can you help me with a sample StatefulSet YAML file?
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
......................
.................
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
spec.serviceName in a StatefulSet is required by the API, hence you have to have it.
kubectl explain statefulset.spec.serviceName
KIND: StatefulSet
VERSION: apps/v1
FIELD: serviceName <string>
DESCRIPTION:
    serviceName is the name of the service that governs this StatefulSet. This
    service must exist before the StatefulSet, and is responsible for the
    network identity of the set. Pods get DNS/hostnames that follow the
    pattern: pod-specific-string.serviceName.default.svc.cluster.local where
    "pod-specific-string" is managed by the StatefulSet controller.
As you can see above, this Service must exist before the StatefulSet.
Actually, that's one of StatefulSet's limitations: it's mandatory to have a headless Service. ✅
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
Also, if you'd like to access your Redis instance from other pods in your Kubernetes cluster or from somewhere outside the cluster, it is a must-have.
If you don't want to use Services, you can switch 🔀 your StatefulSet to a regular Deployment.
✌️
A Service is needed only when you want to expose your application. Without a Service, you can only access your StatefulSet via IP within the cluster. You can find more details in the official docs.
Your requirements of PVC, editing, and rollback are built-in features of StatefulSet (though you can only edit a few fields of a StatefulSet), so you are good to go.
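For reference, a minimal sketch of what the headless Service and StatefulSet pair could look like; the names, image, and storage size below are illustrative, not from your setup. Rollback then works with the usual rollout commands, e.g. kubectl rollout undo statefulset/redis.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None          # headless: only provides the StatefulSet's network identity
  selector:
    app: redis
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis       # must reference the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6.2
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:    # gives each replica its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi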

How to allow/deny http requests from other namespaces of the same cluster?

In a cluster with 2 namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service.
I thought separate namespaces would prevent executing curl http://deployment.ns1 from a pod in ns2, but apparently it's possible.
So my question is, how to allow/deny such cross namespaces operations? For example:
pods in ns1 should accept requests from any namespace
pods (or service?) in ns2 should deny all requests from other namespaces
Good that you are working with namespace isolation.
Deploy a new NetworkPolicy in ns1 that allows all ingress. You can look up the documentation to define a network ingress policy that allows all inbound traffic.
Likewise for ns2, you can create another NetworkPolicy that denies all ingress and deploy that config in ns2. Again, the docs will come to the rescue to help you with the YAML construct.
It may look something like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns1
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns1
  ingress:
  - from:
    - namespaceSelector: {}
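For ns2, the opposite policy could look something like this (the policy name is illustrative): an empty podSelector applies it to every pod in ns2, and allowing ingress only from a podSelector in the same namespace effectively denies requests coming from other namespaces. Note that NetworkPolicy is only enforced if your cluster's network plugin supports it.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns2
  name: deny-from-other-namespaces
spec:
  podSelector: {}          # applies to all pods in ns2
  ingress:
  - from:
    - podSelector: {}      # only pods in the same namespace (ns2) may connect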
This may not be the answer you want, but I can provide some helpful feature information to implement your requirements.
AFAIK, Kubernetes can define a NetworkPolicy to limit network access.
Refer to Declare Network Policy for more details of NetworkPolicy, and to Default policies.
In the case of OpenShift, see Setting a Default NetworkPolicy for New Projects.

How to disable the use of a default service account by a statefulset/deployments in kubernetes

I am setting up a namespace for my application that has statefulsets, deployments, and secrets into that namespace. Using RBAC, I am defining specific roles and binding them to a service account that is used by the deployment/statefulset. This works as expected.
Now when I try to test whether the secrets are secure by not assigning any service account to the deployment, it still pulls down the secrets. The default service account in the namespace is bound to the view ClusterRole, which should not have access to secrets.
Any clue what is happening here?
Thanks in advance.
I believe you need to assign a RoleBinding to the default service account on your namespace. For example:
kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=default:default --namespace=default
The view role should prevent you from reading secrets.
Now when I try to test if the secrets are secure by not assigning any service account to the deployment...
If you don't assign a service account to your deployment, the default service account in the deployment's namespace will be used.
... it still pulls down the secrets
Try setting automountServiceAccountToken: false on the pod. That will ensure the service account token is not automatically mounted. So something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod
spec:
  ...
  template:
    ...
    spec:
      serviceAccountName: default
      automountServiceAccountToken: false
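To verify the change took effect, you can check from inside a running pod that the token directory is no longer mounted; the deployment name below is just the one from the example above:
kubectl exec deploy/my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expect an error such as "No such file or directory" once automounting is disabled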

How to add or introduce a kubernetes normal user?

I saw it in the official docs, but I don't know how to add or introduce a normal user outside the Kubernetes cluster, and I searched a lot about normal users in Kubernetes but found nothing useful.
I know it's different from a ServiceAccount and that we cannot add a normal user through the Kubernetes API.
Any idea how to add or introduce a normal user to a Kubernetes cluster, and what a normal user is for?
See "Comparing Kubernetes Authentication Methods" by Etienne Dilocker
A possible solution is the x509 client certs:
Advantages
operating the Kubernetes cluster and issuing user certificates is decoupled
much more secure than basic authentication
Disadvantages
x509 certificates tend to have a very long lifetime (months or years). So, revoking user access is nearly impossible. If we instead choose to issue short-lived certificates, the user experience drops, because replacing certificates involves some effort.
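For illustration, creating such a user typically looks something like this; the user name, group, validity, and CA paths are illustrative and depend on how your cluster was set up:
# Generate a key and a CSR for user "jane" in group "dev"
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev"

# Sign the CSR with the cluster CA (paths shown are typical for kubeadm clusters)
openssl x509 -req -in jane.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out jane.crt -days 30

# Register the credentials in your kubeconfig and grant access via RBAC
kubectl config set-credentials jane --client-certificate=jane.crt --client-key=jane.key
kubectl create rolebinding jane-view --clusterrole=view --user=jane --namespace=default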
But Etienne recommends OpenID:
Wouldn’t it be great if we could have short-lived certificates or tokens, that are issued by a third-party, so there is no coupling to the operators of the K8s cluster.
And at the same time all of this should be integrated with existing enterprise infrastructure, such as LDAP or Active Directory.
This is where OpenID Connect (OIDC) comes in.
For my example, I’ve used Keycloak as a token issuer. Keycloak is both a token issuer and an identity provider out-of-the box and quite easy to spin up using Docker.
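Wiring such an issuer into the cluster is done through the kube-apiserver OIDC flags, roughly like this; the issuer URL, client ID, and claim names are illustrative and must match your Keycloak realm and client:
kube-apiserver \
  --oidc-issuer-url=https://keycloak.example.com/auth/realms/myrealm \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=preferred_username \
  --oidc-groups-claim=groups \
  ...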
Using RBAC with that kind of authentication is not straightforward, but it is possible.
See "issue 118; Security, auth and logging in"
With 1.3 I have SSO into the dashboard working great with a reverse proxy and OIDC/OAuth2. I wouldn't create an explicit login screen, piggy back off of the RBAC model and the Auth model that is already supported. It would be great to have something that says who the logged in user is though.
Note that since 1.3, there might be simpler solution.
The same thread includes:
I have a prototype image working that will do what I think you're looking for: https://hub.docker.com/r/mlbiam/openunison-k8s-dashboard/
I removed all the requirements for user provisioning and stripped it down to just:
reverse proxy
integration with openid connect
display the user's access token
simple links page
It includes the role binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: openunison
  namespace: default
roleRef:
  kind: ClusterRole
  name: admin-role
Again, this was specific to the dashboard RBAC access, and has since been improved with PR 2206 Add log in mechanism (to dashboard).
It can still give you some clues on how to link a regular user to Kubernetes RBAC.