My scenario is like the image below:
After a couple of days trying to find a way to block connections among pods based on a rule, I found Network Policy. But it's not working for me, neither on Google Cloud Platform nor on local Kubernetes!
My scenario is quite simple: I need a way to block connections among pods based on a rule (e.g. namespace, workload label and so on). At first glance I thought NetworkPolicy would work for me, but I don't know why it's not working on Google Cloud, even when I create a cluster from scratch with the "Network policy" option enabled.
Network Policy will allow you to do exactly what you described in the picture. You can allow or block traffic based on labels or namespaces.
It's difficult to help when you don't explain exactly what you did and what is not working. Update your question with the actual NetworkPolicy YAML you created, and ideally also the output of kubectl get pod --show-labels from the namespace with the pods.
What you mean by 'local Kubernetes' is also unclear, but it depends largely on the network CNI you're using, as it must support network policies. Calico and Cilium, for example, support them. Minikube in its default setup doesn't, so you should follow a guide such as this one: https://medium.com/#atsvetkov906090/enable-network-policy-on-minikube-f7e250f09a14
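For reference, a minimal sketch of the kind of policy you would write to block everything except traffic from pods carrying a given label (the namespace and label values here are placeholders, not taken from your setup):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-frontend
  namespace: default            # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: backend              # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only these pods may connect
    # a namespaceSelector could be used here instead to allow whole namespaces

Any pod in the namespace that matches app: backend would then reject ingress from everything except pods labeled app: frontend, assuming the cluster's CNI enforces NetworkPolicy.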
You can use an Istio Sidecar to solve this: https://istio.io/latest/docs/reference/config/networking/sidecar/
Another Istio solution is to use an AuthorizationPolicy: https://istio.io/latest/docs/reference/config/security/authorization-policy/
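As a rough illustration (the names, labels and service account below are placeholders), an AuthorizationPolicy that only lets one specific workload call another could look like this:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: default              # placeholder namespace
spec:
  selector:
    matchLabels:
      app: backend                # workload being protected
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/frontend"]  # placeholder service account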
Just to give an update, because I was involved in the problem behind this post.
The problem was with pods that had the Istio sidecar injected; in this case, all pods in the namespace, because it was labeled with istio-injection=enabled.
The NetworkPolicy rule was not taking effect when the selection was made by a label selector, egress or ingress, and the pods involved were already running before the NetworkPolicy was created. After killing and restarting the pods, the ones whose labels matched had access as expected. I don't know if there is a way to refresh the sidecar inside a pod without restarting it.
Pods started after the creation of the NetworkPolicy did not show the problem described in this post.
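If anyone hits the same thing, the workaround we used was simply to bounce the affected pods so the policy is picked up (the deployment name, namespace and label below are placeholders):

# restart all pods managed by a deployment
kubectl rollout restart deployment my-deployment -n my-namespace

# or delete the pods and let their controller recreate them
kubectl delete pod -n my-namespace -l app=my-app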
I'm trying to figure out the best approach to verifying the network policy configuration for a given cluster.
According to the documentation
Network policies are implemented by the network plugin, so you must be using a networking solution which supports NetworkPolicy - simply creating the resource without a controller to implement it will have no effect.
Assuming I have access to my cluster only through kubectl, what should I do to ensure that a NetworkPolicy resource deployed to the cluster will be honored?
I'm aware of the available CNIs and the related capability matrix.
I know you could check the pods deployed under kube-system that belong to those CNIs and verify their capabilities using, for example, the matrix I shared, but I wonder if there's a more structured approach to determining the currently installed CNI and its capabilities.
Regarding the "controller to implement it", is there a way to get the list of add-ons/controllers related to network policy?
What is the best approach to verify the network policy configuration for a given cluster?
If you have access to the pods, you can run tests to check whether your NetworkPolicies are effective. There are two ways to check:
Reading your NetworkPolicies with kubectl (kubectl get networkpolicies).
Testing your endpoints to check whether the NetworkPolicies are actually enforced (see the sketch below).
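For the second check, a quick probe from a throwaway pod is usually enough to tell whether a policy is enforced (service name, namespace and port here are placeholders):

# run a temporary pod and try to reach an endpoint that should be blocked or allowed
kubectl run np-test --rm -it --image=busybox --restart=Never -n my-namespace \
  -- wget -qO- --timeout=2 http://my-service:80
# if the NetworkPolicy is enforced, blocked traffic times out instead of answering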
I wonder if there's a more structured approach to determining the currently installed CNI and its capabilities.
There is no fully structured way to check your CNI. You need to understand how your CNI works to be able to identify it in your cluster. For Calico, for example, you can identify it by checking whether the Calico pods are running (kubectl get pods --all-namespaces --selector=k8s-app=calico-node).
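Probing for the pods of a few common CNIs gets you most of the way; the label selectors below are commonly used defaults and may differ depending on how the CNI was installed:

# Calico
kubectl get pods --all-namespaces --selector=k8s-app=calico-node
# Cilium
kubectl get pods --all-namespaces --selector=k8s-app=cilium
# Weave Net
kubectl get pods --all-namespaces --selector=name=weave-net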
Regarding the "controller to implement it", is there a way to get the list of add-ons/controllers related to network policy?
"controller to implement it" is a reference to the CNI you are using.
There is a tool called Kubernetes Network Policies Viewer that lets you see your NetworkPolicies graphically. This is not directly connected to your question, but it might help you visualize your NetworkPolicies and understand what they are doing.
We have a Kubernetes deployment on Google Cloud Platform. Recently we hit one of the well-known issues with kube-dns that occurs under a high volume of requests, https://github.com/kubernetes/kubernetes/issues/56903 (it's more related to SNAT/DNAT and conntrack, but the end result is that kube-dns goes out of service).
After a few days of digging into the topic, we found that k8s already has a solution, which is currently in alpha (https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/).
The solution is to run a caching CoreDNS as a DaemonSet on each k8s node. So far so good.
The problem is that after you create the DaemonSet you have to tell the kubelet to use it via the --cluster-dns option, and we can't find any way to do that in the GKE environment. Google bootstraps the cluster with a "configure-sh" script in the instance metadata. There is an option to edit the instance template and "hardcode" the required values, but that is not a real option: if you upgrade the cluster or use horizontal autoscaling, all of the modified values will be lost.
The last idea was to use a custom startup script that pulls the configuration and updates the metadata server, but that is too complicated a task.
As of 2019-12-10, GKE now supports this through the gcloud CLI in beta:
Kubernetes Engine
Promoted NodeLocalDNS Addon to beta. Use --addons=NodeLocalDNS with gcloud beta container clusters create. This addon can be enabled or disabled on existing clusters using --update-addons=NodeLocalDNS=ENABLED or --update-addons=NodeLocalDNS=DISABLED with gcloud container clusters update.
See https://cloud.google.com/sdk/docs/release-notes#27300_2019-12-10
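So enabling it boils down to something like the following (the cluster name is a placeholder; the flags are taken from the release note above):

# new cluster with the addon enabled
gcloud beta container clusters create my-cluster --addons=NodeLocalDNS

# enable it on an existing cluster
gcloud container clusters update my-cluster --update-addons=NodeLocalDNS=ENABLED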
You can spin up another kube-dns deployment, e.g. in a different node pool, and thus have two nameservers in the pods' resolv.conf.
This would mitigate the evictions and other failures and generally allow you to completely control your kube-dns service in the whole cluster.
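A sketch of how a pod could be pointed at both nameservers explicitly via dnsConfig, assuming the two kube-dns services got the (placeholder) ClusterIPs 10.0.0.10 and 10.0.0.20:

apiVersion: v1
kind: Pod
metadata:
  name: dns-example              # placeholder
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
  dnsPolicy: "None"              # ignore the node/cluster defaults
  dnsConfig:
    nameservers:
    - 10.0.0.10                  # first kube-dns service IP (assumed)
    - 10.0.0.20                  # second kube-dns service IP (assumed)
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local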
In addition to what was mentioned in this answer: with beta support on GKE, the NodeLocal caches now listen on the kube-dns service IP, so there is no need to change a kubelet flag.
I'm new to K8s, so still trying to get my head around things. I've been looking at deployments and can appreciate how useful they will be. However, I don't understand why they don't support services (only replica sets and pods).
Why is this? Does this mean that services would typically be deployed outside of a deployment?
To answer your question: Kubernetes Deployments are used for managing stateless services running in the cluster, as opposed to StatefulSets, which are built for stateful applications. With a Deployment you describe the update strategy and the desired state of all the underlying objects that have to be created, using dedicated specification fields such as the desired number of Pod replicas and the Pod template that lists the containers that should run in each Pod.
However, as @P Ekambaram already mentioned in his answer, Services represent the network abstraction layer of the Kubernetes cluster: they declare a way to access Pods within the cluster via the corresponding Endpoints. Services are kept separate from the Deployment manifest because their purpose is to dynamically provide specific network behavior for the underlying Pods, via the appropriate Service type, without affecting or restarting those Pods whenever that network configuration changes.
Yes, Services should be deployed as separate objects. Note that a Deployment is used to upgrade or roll back the image and works on top of a ReplicaSet.
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. ReplicaSets in particular create and destroy Pods dynamically (e.g. when scaling out or in). While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
Services come to the rescue.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is (usually) determined by a Label Selector.
Something I've just learnt that is somewhat related to my question: multiple K8s objects can be included in the same YAML file, separated by ---. Something like:
apiVersion: apps/v1
kind: Deployment
# other stuff here
---
apiVersion: v1
kind: Service
# other stuff here
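For completeness, a minimal working pairing might look like the following (names, labels, image and ports are placeholders); the Service's selector is what ties it to the Deployment's Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25          # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app                    # matches the Pod template labels above
  ports:
  - port: 80
    targetPort: 80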
I think the intent is to keep things decoupled and fine-grained.
I have a StatefulSet with pods server-0, server-1, etc. I want to expose them directly to the internet with URLs like server-0.mydomain.com or like mydomain.com/server-0.
I want to be able to scale the StatefulSet and automatically be able to access the new pods from the internet. For example, if I scale it up to include a server-2, I want mydomain.com/server-2 to route requests to the new pod when it's ready. I don't want to have to also scale some other resource or create another Service to achieve that effect.
I could achieve this with a custom proxy service that just checks the request path and forwards to the correct pod internally, but this seems error-prone and wasteful.
Is there a way to cause an Ingress to automatically route to different pods within a StatefulSet, or some other built-in technique that would avoid custom code?
I don't think you can do it. Being part of the same StatefulSet, all pods, up to pod-x, are targeted by the same Service. As you can't define which pod is going to get a request, you can't force "pod-1.yourapp.com" or "yourapp.com/pod-1" to be sent to pod-1. It will be sent to the Service, and the Service might send it to pod-4.
Even if you could, you would need to dynamically update your Ingress rules, which can easily cause minutes of downtime.
With the custom proxy, I see it as impossible too. Note that it would basically need to replace the Service in front of the pods. If your Ingress controller knows that it needs to deliver a packet to a Service, you now have to force it to deliver the packet to your proxy instead. But how?
A Kubernetes Service is a set of iptables (or IPVS) rules that redirect a packet with the Service IP as its destination address to ONE OF THE PODS that match the Service's label selector.
from Kubernetes Services documentation
The service installs iptables rules which select a backend Pod. By default, the choice of backend is random.
Which refers to the fact that a service is not able to distinguish between different pods in the same set.
Forcing the selection of a specific Pod out of the set, whether by changing the iptables rules (fairly simple) or by adding any type of proxy, is problematic:
let's say you configured pod-1 and pod-2 (1.1.1.1 and 1.1.1.2 respectively), and you configured iptables rules to DNAT requests destined for pod-1.myserver.com to 1.1.1.1, and the same for pod-2 (you may ask why use the IP; it's simply because it's the only way to distinguish between these pods).
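As a rough sketch of what such rules could look like, assuming pod-1.myserver.com and pod-2.myserver.com resolve to dedicated frontend IPs 10.0.0.1 and 10.0.0.2 (these frontend IPs and the port are made up; the Pod IPs come from the example above):

# DNAT traffic arriving for pod-1's frontend IP to pod-1's Pod IP
iptables -t nat -A PREROUTING -d 10.0.0.1 -p tcp --dport 80 -j DNAT --to-destination 1.1.1.1:80
# same idea for pod-2
iptables -t nat -A PREROUTING -d 10.0.0.2 -p tcp --dport 80 -j DNAT --to-destination 1.1.1.2:80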
This approach will fail whenever a pod restarts. Let's say pod-1 failed: Kubernetes won't recreate it with the same IP; the replacement pod comes up with a different IP, and the Service's iptables rules are updated accordingly. As a result, all the packets going toward 1.1.1.1 will be dropped until you update the proxy or the iptables rules again.
In fact, that's one of the reasons why we use a Service to access pods instead of accessing them directly: the Pod IP can change, but the Service IP won't.
However, since this very specific part of Kubernetes has been my work for the last 4 months, I have developed a Python script to edit the iptables rules and select a specific pod. My conclusion from that work is that it's costly and time-consuming, and it forces the server to go offline for a couple of seconds whenever the pods change. You can take a look at the code; it definitely works, but it's not recommended.
This is really a Kubernetes-level problem, and the solution would be to change the kube-proxy source code, which is my current work.
I suggest you read my answer explaining how kubernetes services exactly work in this question: Which service is doing load balancing between kubernetes nodes?
We need to know about pod network isolation.
Is there a way to control whether one pod can access another pod in the cluster? Maybe by dividing them into namespaces?
We also need pods to be members of local networks that are not accessible from outside.
Are there any plans for this? Will it come soon?
In a standard Kubernetes installation, all pods (even across namespaces) share a flat IP space and can all communicate with each other.
To get isolation, you'll need to customize your install to prevent cross-namespace communication. One way to do this is to use OpenContrail. They recently wrote a blog post describing an example deployment using the Guestbook from the Kubernetes repository.