Is it possible to block egress network access from a sidecar container?
I'm trying to implement the capability to run some untrusted code in a sidecar container, exposed via another, trusted container in the same pod that has full network access.
It seems two containers in a pod can't have different network policies. Is there some way to achieve similar functionality?
As a side note, I do control the sidecar image, which provides the runtime for the untrusted code.
You are correct: all containers in a pod share the same network namespace, so you can't easily apply different network rules to each of them. In general, Kubernetes is not suitable for running code you assume to be actively malicious. You can build such a system around Kubernetes, but K8s itself is not nearly enough.
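To make the limitation concrete: the smallest unit a NetworkPolicy can select is a pod, never a single container. A minimal deny-all-egress sketch (the namespace and label are hypothetical, and enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico or Cilium):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: sandbox            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: untrusted-runner     # hypothetical label on the pod
  policyTypes:
  - Egress
  egress: []                    # no allow rules: all egress is dropped
```

Note that this blocks egress for every container in the matched pod, including the trusted one, which is exactly the limitation described above.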
Related
I have a Kubernetes cluster with my application running inside of it, and I also have a host machine that my application needs to access.
All the infrastructure is located inside a VPN network.
How can I set up egress so that my application can send requests from the cluster to this host machine? (Are Kubernetes Network Policies an appropriate way to handle this, and would they actually solve the problem?)
(Sorry if this is too obvious a question; I haven't found any working solution for this yet.)
I'm not sure if I get your question right, but by default no network connectivity is blocked by Kubernetes. I assume you haven't set up any NetworkPolicies; this means all ingress and egress communication is open and nothing will block access, at least from the K8s perspective.
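A NetworkPolicy would only become relevant if you introduced a default-deny policy and then needed to explicitly allow egress to that host. A sketch of such an allow rule, with a made-up host address and pod label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-host
spec:
  podSelector:
    matchLabels:
      app: my-app               # hypothetical label on the application pods
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.5/32       # hypothetical address of the VPN host
```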
Note, however, that if you have only deployed your application but haven't exposed it yet (with an Ingress or a Service of type LoadBalancer), you will not be able to reach your application from outside the cluster. If you're running on-prem, you will need to install MetalLB or some sort of service that allows you to create Services of type LoadBalancer. The same goes for Ingress, as the Ingress controller will need some sort of access in the first place.
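For the exposure side, a minimal Service of type LoadBalancer might look like this (name, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # hypothetical Service name
spec:
  type: LoadBalancer            # needs a cloud provider, or MetalLB on-prem
  selector:
    app: my-app                 # hypothetical label on the application pods
  ports:
  - port: 80                    # port exposed by the load balancer
    targetPort: 8080            # hypothetical container port
```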
I was trying out a fortio server/client application on Istio. I used istioctl to inject the Istio sidecar, and my server pod came up fine. But the client pod was getting a connection refused error because the proxy sidecar was not yet ready to handle the client's connection request. Please help me address this issue. For reference, I'm attaching my YAML files.
This is by design and there is no way around it.
The part responsible for configuring the iptables rules that capture traffic runs as an init container, which ensures that the required rules are in place before any of the normal pod containers start up. If you route all traffic through Istio, then until its sidecar container is ready, no network traffic can get into or out of the pod.
You should make sure your application handles this correctly. Apps should be able to withstand unavailability of their dependencies for a time, both on startup and during operation. In the worst case you can introduce your own handling, e.g. a custom entrypoint that waits for connectivity to come up, as sketched below.
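One common form of such an entrypoint polls the sidecar's readiness endpoint before starting the real workload. A sketch, assuming the Istio agent's health port 15021 (verify the port for your Istio version) and a client image that contains sh and curl:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fortio-client           # hypothetical pod name
spec:
  containers:
  - name: client
    image: my-client:latest     # hypothetical image; must include sh and curl
    command: ["sh", "-c"]
    args:
    - |
      # Wait until the Istio sidecar reports ready, then start the workload.
      until curl -fsS http://localhost:15021/healthz/ready; do
        sleep 1
      done
      exec ./client             # hypothetical original entrypoint
```

Newer Istio releases also provide a `holdApplicationUntilProxyStarts` proxy config option that delays application containers until the sidecar is ready; check whether your version supports it.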
When using Istio with Kubernetes, is an overlay network still required for each node?
I have read the FAQ's and the documentation, but cannot see anything that directly references this.
Istio is built on top of Kubernetes. It creates sidecar containers in Kubernetes Pods for routing requests, gathering metrics, and so on. But Pods still need a way to communicate with each other, so a Kubernetes network overlay is still required.
For additional information, you can start from the following link.
In preparation for HIPAA compliance, we are transitioning our Kubernetes cluster to use secure endpoints across the fleet (between all pods). Since the cluster is composed of about 8-10 services currently using HTTP connections, it would be super useful to have this taken care of by Kubernetes.
The specific attack vector we'd like to address with this is packet sniffing between nodes (physical servers).
This question breaks down into two parts:
Does Kubernetes encrypt the traffic between pods & nodes by default?
If not, is there a way to configure it such?
Many thanks!
Actually, the correct answer is "it depends". I would split the cluster into two separate networks.
Control Plane Network
This network is the physical network, or the underlay network in other words.
The k8s control-plane elements - kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet - talk to each other in various ways. Except for a few endpoints (e.g. metrics), it is possible to configure encryption on all of them.
If you're also defending against pentesting, kubelet authn/authz should be switched on too; otherwise the encryption doesn't prevent unauthorized access to the kubelet. This endpoint (at port 10250) can be hijacked with ease.
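As a sketch of what that looks like, the relevant fields in a KubeletConfiguration file are roughly as follows (the CA path is a placeholder; verify the layout for your Kubernetes version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false              # reject unauthenticated requests
  webhook:
    enabled: true               # delegate token review to the API server
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # hypothetical CA path
authorization:
  mode: Webhook                 # ask the API server to authorize each request
```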
Cluster Network
The cluster network is the one used by the Pods, which is also referred to as the overlay network. Encryption here is left to the 3rd-party overlay plugin to implement; failing that, the application has to implement it itself.
The Weave overlay supports encryption. The linkerd service mesh that @lukas-eichler suggested can also achieve this, but on a different networking layer.
The replies here seem to be outdated. As of 2021-04-28 at least the following components seem to be able to provide an encrypted networking layer to Kubernetes:
Istio
Weave
Linkerd
Cilium
Calico (via WireGuard)
(the list above was compiled by consulting the respective projects' home pages; Calico's WireGuard option is sketched below as an example)
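As one concrete example from that list, Calico toggles WireGuard encryption through its FelixConfiguration resource. A minimal sketch (verify the field names against the Calico docs for your version):

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true        # encrypt pod traffic between nodes with WireGuard
```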
Does Kubernetes encrypt the traffic between pods & nodes by default?
Kubernetes does not encrypt any traffic.
There are service meshes like linkerd that allow you to easily introduce HTTPS communication between your HTTP services.
You would run an instance of the service mesh on each node, and all services would talk to the service mesh. The communication inside the service mesh would be encrypted.
Example:
your service --http--> service mesh node on localhost --https--> service mesh node on the remote node --http--> remote service on its localhost
When you run the service mesh node in the same pod as your service, the localhost communication runs on a private virtual network device that no other pod can access.
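For reference, in current Linkerd versions (2.x) the proxy runs as a per-pod sidecar rather than a per-node instance, and is injected via an annotation. A sketch (the Deployment name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
      annotations:
        linkerd.io/inject: enabled   # ask Linkerd to inject its mTLS proxy
    spec:
      containers:
      - name: app
        image: my-service:latest     # hypothetical image
        ports:
        - containerPort: 8080
```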
No, Kubernetes does not encrypt traffic by default.
I haven't personally tried it, but the description of the Calico software-defined network seems oriented toward what you are describing, with the additional benefit of already being Kubernetes-friendly.
I thought that Calico did native encryption, but based on this GitHub issue it seems they recommend using a solution like IPsec to encrypt traffic, just as you would on a traditional host.
We need to know about pod network isolation.
Is it possible to restrict access between pods in the cluster? Maybe by dividing them into namespaces?
We also need pods to be members of local networks that are not accessible from outside.
Any plans? Will this be available soon?
In a standard Kubernetes installation, all pods (even across namespaces) share a flat IP space and can all communicate with each other.
To get isolation, you'll need to customize your install to prevent cross namespace communication. One way to do this is to use OpenContrail. They recently wrote a blog post describing an example deployment using the Guestbook from the Kubernetes repository.