Let us assume we are the owners of a Kubernetes cluster and we give other users in our organization access to individual namespaces, so they are not supposed to know what is going on in other namespaces.
If user A deploys a resource like a Grafana-Prometheus monitoring stack to namespace A, how do we ensure that he cannot use the monitoring stack to see anything from namespace B, where he should not have any access?
Of course, we will have to limit the rights of user A anyhow, but how do we automatically limit the rights of his deployed resources in namespace A? If you have any suggestions, perhaps with some Kubernetes configuration examples, that would be great.
The most important aspects here are controlling the access permissions of the service accounts used in the Pods and adding a network policy that limits traffic into the namespace.
Hence we arrive at this approach:
Prerequisite:
Creating the user and namespace
sudo useradd user-a
kubectl create ns ns-user-a
Limiting the access permissions of user-a to the namespace ns-user-a:
kubectl create clusterrole permission-users --verb=* --resource=*
kubectl create rolebinding permission-users-a --clusterrole=permission-users --user=user-a --namespace=ns-user-a
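For reference, a declarative equivalent of those two commands could look roughly like this (a sketch; the broad ClusterRole only takes effect in ns-user-a because the binding is namespace-scoped):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: permission-users
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: permission-users-a
  namespace: ns-user-a # the RoleBinding is namespace-scoped, so user-a gets these rights only here
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: permission-users
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user-a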
Limiting the access permissions of all service accounts in the namespace ns-user-a:
kubectl create clusterrole permission-serviceaccounts --verb=* --resource=*
kubectl create rolebinding permission-serviceaccounts --clusterrole=permission-serviceaccounts --namespace=ns-user-a --group=system:serviceaccounts:ns-user-a
kubectl auth can-i create pods --namespace=ns-user-a --as-group=system:serviceaccounts:ns-user-a --as sa
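To double-check a concrete service account, impersonation works here as well (ns-user-b below is just a hypothetical second namespace, and the expected "no" assumes no other bindings grant that account access there):
kubectl auth can-i list pods --namespace=ns-user-a --as=system:serviceaccount:ns-user-a:default
# expected: yes
kubectl auth can-i list pods --namespace=ns-user-b --as=system:serviceaccount:ns-user-a:default
# expected: no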
A network policy in the namespace ns-user-a to limit incoming traffic from other namespaces:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: ns-user-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
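To roll this out and confirm it is in place (assuming the manifest is saved as deny-from-other-namespaces.yaml):
kubectl apply -f deny-from-other-namespaces.yaml
kubectl describe networkpolicy deny-from-other-namespaces -n ns-user-a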
Edit: Allowing traffic from selected namespaces
Assign a custom label to the monitoring namespace.
kubectl label ns monitoring nsname=monitoring
Or, use the following reserved label from Kubernetes to make sure nobody can edit or update it. By convention, this label has the value "monitoring" for the "monitoring" namespace.
https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-metadata-name
kubernetes.io/metadata.name
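You can check which labels are already present on the namespace before relying on them; on reasonably recent clusters the reserved label is set automatically:
kubectl get namespace monitoring --show-labels
# look for kubernetes.io/metadata.name=monitoring in the output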
Applying a network policy to allow traffic from the internal and the monitoring namespaces.
Note: Network policies are additive, so you can keep both policies or only the new one. I am keeping both here for example purposes.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-monitoring-and-internal
  namespace: ns-user-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {} # allows traffic from the ns-user-a namespace (same as earlier)
    - namespaceSelector: # allows traffic from the monitoring namespace
        matchLabels:
          kubernetes.io/metadata.name: monitoring
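As a rough connectivity check (the pod name, image and service name below are only placeholders), a probe from the monitoring namespace should get through, while the same probe from any other namespace should time out:
# should succeed: the caller runs in the monitoring namespace
kubectl run probe --rm -it --image=busybox -n monitoring -- wget -qO- -T 5 http://some-service.ns-user-a.svc.cluster.local
# should time out: the caller runs in an unrelated namespace
kubectl run probe --rm -it --image=busybox -n default -- wget -qO- -T 5 http://some-service.ns-user-a.svc.cluster.local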
Related
I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet etc. are configured automatically. The endpoint for Alertmanager, for example, refers to the IP address of the specific pod. I also configured multiple targets successfully, like Jira and Confluence.
Since the service external-dns is running in the namespace kube-system, it is also configured automatically. But only this service gets the error Context deadline exceeded.
I checked from a random pod whether those metrics are accessible by running curl -s http://<IP-ADDRESS-POD>:7979/metrics. I also did this with the service IP address (kubectl get service external-dns and curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both of these curl commands returned the metrics within a second. So increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as in my browser: external-dns returns a timeout with both of the IP addresses, while any other target just returns the metrics.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the Jira and Confluence targets.
Does anybody have an idea? :)
I had to edit the NetworkPolicy in the kube-system namespace. The containers from the cattle-monitoring-system namespace are now allowed to access the containers in the kube-system namespace. You can upload your NetworkPolicies to a policy visualizer, and it shows which resources have access and which do not. The NetworkPolicy now looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
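One thing worth double-checking with this policy: it matches on a plain name label, which Kubernetes does not set automatically (unlike kubernetes.io/metadata.name), so the cattle-monitoring-system namespace has to carry it:
kubectl get namespace cattle-monitoring-system --show-labels
# if the label is missing, add it (or switch the selector to kubernetes.io/metadata.name)
kubectl label namespace cattle-monitoring-system name=cattle-monitoring-system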
I currently have multiple NFS server Pods running in different namespaces (one replica per namespace). I have a Service per namespace to wrap this Pod, just to have a fixed endpoint. A PersistentVolume connects to this server via the fixed endpoint, so other Pods in the namespace can mount it as a volume using a PVC. Since I create a PV per NFS server, how can I prevent a PV that is not bound to a PVC in the same namespace from reading from it? I tried using a NetworkPolicy, but it looks like the PV (not tied to a namespace) can go around it. Unfortunately, the application deployed in K8s currently has a field where a user can provide any nfs:// endpoint to instruct the PV where it needs to access the files.
Using GKE 1.17.
I'm trying this NP:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: team1-ns
spec:
  podSelector:
    matchLabels:
      role: nfs-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          nfs-server: team1-ns
    - podSelector: {}
Am I missing something in the NP, or can PVs actually go around NPs?
Help is really appreciated...
I would not say that PVs "can go around" NPs, but rather that they are not applicable.
PVs, Volumes and StorageClasses provide a layer of abstraction between your pod(s) and the underlying storage implementation. The actual storage is attached/mounted to the node, not directly to the container(s) in the pods.
In your case with NFS, the storage driver/plugin attaches the actual NFS share to the node running your pod(s), so a NetworkPolicy cannot possibly apply.
I have an EKS cluster with an nginx deployment in the namespace gitlab-managed-apps, exposing the application to the public through an ALB ingress. I'm trying to block a specific public IP (e.g. x.x.x.x/32) from accessing the webpage. I tried Calico and Kubernetes network policies; nothing worked for me. I created this Calico policy with my limited knowledge of network policies, but it blocks everything from accessing the nginx app, not just the external IP x.x.x.x/32, showing everyone a 504 Gateway Timeout from the ALB:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ingress-external
  namespace: gitlab-managed-apps
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - x.x.x.x/32
Try this:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ingress-external
  namespace: gitlab-managed-apps
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - x.x.x.x/32
  - action: Allow
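Since this is a projectcalico.org/v3 resource, it is usually applied with calicoctl (or through the Calico API server) rather than plain kubectl; assuming the manifest is saved as ingress-external.yaml:
calicoctl apply -f ingress-external.yaml
calicoctl get networkpolicy -n gitlab-managed-apps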
The Calico docs suggest:
If one or more network policies apply to a pod containing ingress rules, then only the ingress traffic specifically allowed by those policies is allowed.
So this means that any traffic is denied by default and only allowed if you explicitly allow it. This is why adding the additional rule action: Allow should allow all other traffic that was not matched by the previous rule.
Also remember what the docs mention about rules:
A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order.
So the default Allow rule has to follow the Deny rule for the specific IP, not the other way around.
I have seen a few related posts, but none of them answered my question, so I thought I would ask a new one based on suggestions from other users here.
I need a selector label for a network policy for a running CronJob that is responsible for connecting to some other services within the cluster. As far as I know, there is no easy, straightforward way to set a selector label for the Job's pods, as that would be problematic with duplicate job labels if they ever existed. I am not sure why the CronJob can't have a selector itself that is then applied to the Job and the pod.
There might also be the possibility to put this CronJob in its own namespace and then allow everything from that namespace to whatever is needed in the network policy, but that does not feel like the right way to overcome the problem.
Using k8s v1.20
First of all, to select pods (spawned by your CronJob) that should be allowed by the NetworkPolicy as ingress sources or egress destinations, you can set a specific label on those pods.
You can easily set a label for Jobs spawned by a CronJob using the labels field (another example with an explanation can be found in the OpenShift CronJobs documentation):
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mysql-test
spec:
  ...
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            workload: cronjob # Sets a label for jobs spawned by this CronJob.
            type: mysql # Sets another label for jobs spawned by this CronJob.
  ...
Pods spawned by this CronJob will have the labels type=mysql and workload=cronjob. Using these labels, you can create/customize your NetworkPolicy:
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
mysql-test-1615216560-tkdvk 0/1 Completed 0 2m2s ...,type=mysql,workload=cronjob
mysql-test-1615216620-pqzbk 0/1 Completed 0 62s ...,type=mysql,workload=cronjob
mysql-test-1615216680-8775h 0/1 Completed 0 2s ...,type=mysql,workload=cronjob
$ kubectl describe pod mysql-test-1615216560-tkdvk
Name: mysql-test-1615216560-tkdvk
Namespace: default
...
Labels: controller-uid=af99e9a3-be6b-403d-ab57-38de31ac7a9d
job-name=mysql-test-1615216560
type=mysql
workload=cronjob
...
For example, this mysql-workload NetworkPolicy allows connections to all pods in the mysql namespace from any pod with the labels type=mysql and workload=cronjob (logical conjunction) in a namespace with the label namespace-name=default:
NOTE: Be careful to use correct YAML (take a look at this namespaceSelector and podSelector example).
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-workload
  namespace: mysql
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          namespace-name: default
      podSelector:
        matchLabels:
          type: mysql
          workload: cronjob
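Keep in mind that namespace-name=default is a custom label, so this policy only matches once the default namespace actually carries it:
kubectl label namespace default namespace-name=default
kubectl get namespace default --show-labels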
To use network policies, you must be using a networking solution which supports NetworkPolicy:
Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
You can learn more about creating Kubernetes NetworkPolicies in the Network Policies documentation.
In a cluster with 2 namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service.
I thought separate namespaces would prevent executing curl http://deployment.ns1 from a pod in ns2, but apparently it's possible.
So my question is, how to allow/deny such cross namespaces operations? For example:
pods in ns1 should accept requests from any namespace
pods (or service?) in ns2 should deny all requests from other namespaces
Good that you are working with namespace isolation.
Deploy a new NetworkPolicy in ns1 that allows all ingress. You can look up the documentation to define an ingress policy that allows all inbound traffic.
Likewise for ns2, create a new NetworkPolicy and deploy it in ns2 to deny all ingress. Again, the docs will come to the rescue and help you with the YAML construct.
It may look something like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns1
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns1
  ingress:
  - from:
    - namespaceSelector: {}
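For ns2, the opposite policy could look roughly like the sketch below: it selects every pod in ns2 and only admits ingress from pods in the same namespace, which effectively denies requests from other namespaces.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns2
  name: deny-from-other-namespaces
spec:
  podSelector: {} # applies to every pod in ns2
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {} # only pods from ns2 itself may connect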
It may not be the answer you want, but I can provide helpful feature information to implement your requirements.
AFAIK, Kubernetes can define network policies to limit network access.
Refer to Declare Network Policy for more details on NetworkPolicy.
Default policies
Setting a Default NetworkPolicy for New Projects in the case of OpenShift.
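For plain Kubernetes, the Default policies section of the documentation describes a per-namespace default deny for ingress; a minimal sketch (the target namespace ns2 is only an example) looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ns2
spec:
  podSelector: {}
  policyTypes:
  - Ingress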