Is it possible to deploy OpenShift or Kubernetes in a DMZ zone? - kubernetes

Is it possible to deploy OpenShift in a DMZ zone (restricted zone)? What challenges will I face? What do I have to do in the DMZ network?

You can deploy Kubernetes and OpenShift in a DMZ.
You can also place a DMZ in front of Kubernetes and OpenShift.
The Kubernetes and OpenShift network model is a flat SDN model. All pods get IP addresses from the same network CIDR and live in the same logical network regardless of which node they reside on.
We have ways to control network traffic within the SDN using the NetworkPolicy API. NetworkPolicies in OpenShift represent firewall rules and the NetworkPolicy API allows for a great deal of flexibility when defining these rules.
With NetworkPolicies it is possible to create zones, but one can also be much more granular in the definition of the firewall rules. Separate firewall rules per pod are possible and this concept is also known as microsegmentation (see this post for more details on NetworkPolicy to achieve microsegmentation).
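As a minimal sketch of such a rule (the namespace, labels and port are hypothetical), a NetworkPolicy that lets only pods from namespaces labeled as the DMZ zone reach a backend could look like this:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-dmz-only      # hypothetical name
      namespace: internal-zone       # hypothetical "internal" zone namespace
    spec:
      podSelector:
        matchLabels:
          app: backend               # the pods being protected
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              zone: dmz              # only namespaces carrying the DMZ zone label
        ports:
        - protocol: TCP
          port: 8080                 # hypothetical service port

Once a pod is selected by such a policy, all ingress not explicitly allowed is dropped, which is what makes zone-style segmentation possible inside the flat SDN.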
The DMZ is in certain aspects a special zone. This is the only zone exposed to inbound traffic coming from outside the organization. It usually contains software such as IDS (intrusion detection systems), WAFs (Web Application Firewalls), secure reverse proxies, static web content servers, firewalls and load balancers. Some of this software is normally installed as an appliance and may not be easy to containerize and thus would not generally be hosted within OpenShift.
Regardless of the zone, communication internal to a specific zone is generally unrestricted.
Variations on this architecture are common and large enterprises tend to have several dedicated networks. But the principle of purpose-specific networks protected by firewall rules always applies.
In general, traffic is supposed to flow only in one direction between two networks (as in an osmotic membrane), but often exceptions to this rule are necessary to support special use cases.
Useful article: openshift-and-network-security-zones-coexistence-approache.
It's very secure if you follow standard security practices for your cluster, but nothing is 100% secure, so adding a DMZ helps reduce your attack surface.
In terms of protecting your ingress from the outside, you can limit access to your external load balancer to HTTPS only, as most people do, but note that HTTPS and your application itself can also have vulnerabilities.
As for pods and workloads, you can increase security (at some performance cost) using things like a well-crafted seccomp profile and/or setting the right capabilities in your pod security context. You can also add more security with AppArmor or SELinux, but many people don't since it can get very complicated.
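As a hedged sketch of what that can look like (the image, profile and capability set are placeholders, and the seccompProfile field assumes Kubernetes 1.19+; older clusters used an annotation instead):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app                        # hypothetical name
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault                  # or Localhost with a custom profile
      containers:
      - name: app
        image: registry.example.com/app:1.0     # placeholder image
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
            add: ["NET_BIND_SERVICE"]           # add back only what the app needs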
There are also alternatives to Docker that make it easier to sandbox your pods (still early in their lifecycle as of this writing): Kata Containers, Nabla Containers and gVisor.
Take a look at: dmz-kubernetes.
Here is a similar problem: dmz.

Related

Kubernetes and Service Mesh load-balancing misalignments

Kubernetes supports Pod load-balancing and session affinity through its kube-proxy. Kube-proxy is essentially an L4 load balancer, so we cannot rely on it to load-balance L7 traffic, e.g. multiple live gRPC connections or load balancing based on HTTP headers, cookies, etc.
A Service Mesh implementation like Istio can handle these patterns at the L7 level, including gRPC. But I always thought that a Service Mesh is just another layer on top of Kubernetes with additional capabilities (encrypted traffic, blue/green deployments, etc.). My assumption has always been that Kubernetes applications should be able to work both on vanilla Kubernetes without a mesh (e.g. for development/testing) and with a mesh on. Adding this advanced traffic management at L7 breaks that assumption: I won't be able to work on vanilla Kubernetes anymore; I will be tied to a specific implementation of the Istio data plane (Envoy).
Please let me know whether my assumption is correct, and if not, why. There is not much information about this kind of separation of concerns on the internet.
Let me refer first to the following statement of yours:
My assumption always was that Kubernetes applications should be able to work on both vanilla Kubernetes without Mesh (e.g. for development/testing) or with a Mesh on. Adding this advanced traffic management on L7 breaks this assumption.
I have a different view on that: Service Meshes are transparent to the application, so they don't break anything in it; they just add extra network, security and monitoring functions at no cost (OK, the cost is a rather complex configuration from the mesh operator's perspective). A Service Mesh like Istio doesn't need to occupy all Kubernetes namespaces, so you can still have mixed types of workloads in your cluster (with and without proxies). In the case of Istio, to enable full interoperability between them (mixed workloads) you may combine two of its features, sketched below:
Peer authentication set to PERMISSIVE, so that workloads without sidecars (proxies) can accept both mutual TLS and plain-text traffic.
Manual protocol selection, e.g. if you prefer your app to speak raw TCP instead of the application protocol determined by Envoy itself (e.g. HTTP), to avoid extra decorations injected by the proxy into intercepted requests.
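A minimal sketch of those two features together (the namespace, service name and port are hypothetical):

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: mixed-workloads     # hypothetical namespace with and without sidecars
    spec:
      mtls:
        mode: PERMISSIVE             # accept both mutual TLS and plain-text traffic
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: legacy-backend           # hypothetical service
      namespace: mixed-workloads
    spec:
      selector:
        app: legacy-backend
      ports:
      - name: tcp-data               # the "tcp-" name prefix tells Istio to treat this port as raw TCP
        port: 9000

With PERMISSIVE peer authentication, pods without a sidecar keep working, and the tcp- port name opts the service out of Envoy's HTTP handling.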
Alternatively, you can write your own custom tcp_proxy EnvoyFilter to use Envoy as an L4 network proxy.
References:
https://istio.io/latest/docs/reference/config/networking/envoy-filter/
https://istio.io/latest/docs/concepts/security/#peer-authentication
https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/

Is authentication required on RESTful services interacting with each other on Kubernetes cluster?

We have a microservice architecture and there are REST services interacting with each other through HTTP. All of these services are hosted on a Kubernetes cluster. Do we need to have explicit authentication for such service interaction or does Kubernetes provide enough security for it?
Kubernetes provides only orchestration for your containerized applications. It helps you run, update and scale your services, and provides a way of delivering traffic to them inside the cluster. Most of Kubernetes' security relates to traffic management and role-based administration of the cluster.
Additional tools like Istio can provide secure communication between pods and some other traffic-management capabilities.
Applications in pods should have their own means of authentication and authorization, based on local files/databases or network services like LDAP or OpenID, etc.
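That said, if you do adopt a mesh like Istio, token validation can also be enforced at the mesh layer rather than in each application. A hedged sketch (the namespace, labels, issuer and JWKS URL are placeholders):

    apiVersion: security.istio.io/v1beta1
    kind: RequestAuthentication
    metadata:
      name: require-jwt
      namespace: services              # hypothetical namespace
    spec:
      selector:
        matchLabels:
          app: orders                  # hypothetical service
      jwtRules:
      - issuer: "https://idp.example.com"                          # placeholder issuer
        jwksUri: "https://idp.example.com/.well-known/jwks.json"   # placeholder JWKS endpoint
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: require-jwt
      namespace: services
    spec:
      selector:
        matchLabels:
          app: orders
      rules:
      - from:
        - source:
            requestPrincipals: ["*"]   # reject any request without a valid JWT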
It's purely based on how you design and architect your system and how you create its software design document (SDD). While designing it, security hardening must be considered and given priority. Software and tools bring their own features, but how you adopt them is what matters; Kubernetes is no exception.
You are running your microservices over HTTP, and in a production system you cannot assume your system is secure just because it runs in a Kubernetes cluster. Kubernetes brings useful security features such as RBAC, CRDs, etc., as described in Kubernetes 1.8 Security, Workloads and Feature Depth. But leveraging only these features is not sufficient: internal services should be as secure as external ones. The following are a few things you should take care of once you are running your workload in a Kubernetes cluster:
Scan all your Docker images for vulnerabilities.
Use RBAC over ABAC and assign least-privilege roles to the respective teams (a sketch follows this list).
Configure a security context for the pods running your service.
Prevent unauthorized internal access to service data and protect all microservice endpoints.
Rotate encryption keys after a certain period of time.
The datastore backing your Kubernetes cluster, such as etcd, must be secured.
Only admins should have access to kubectl.
Use token-based validation and enable authentication on all REST API calls.
Continuously monitor all services and the processes running inside containers, and collect logs for analysis and health checks.
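As a sketch of the RBAC point above (the namespace, group and resource list are hypothetical), a least-privilege role for one team could look like this:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: deployment-reader
      namespace: team-a                   # hypothetical team namespace
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch"]     # read-only: no create, update or delete
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-deployments
      namespace: team-a
    subjects:
    - kind: Group
      name: team-a-devs                   # hypothetical group from your identity provider
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: deployment-reader
      apiGroup: rbac.authorization.k8s.io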
Hope this helps.

VPN access for applications running inside a shared Kubernetes cluster

We currently provide our software as a service on Amazon EC2 machines. Our software is a microservice-based application with around 20 different services.
For bigger customers we use dedicated installations on a dedicated set of VMs, the number of VMs (and number of instances of our microservices) depending on the customer's requirements. A common requirement of any larger customer is that our software needs access to the customer's datacenter (e.g., for LDAP access). So far, we solved this using Amazon's virtual private gateway feature.
Now we want to move our SaaS deployments to Kubernetes. Of course we could just create a Kubernetes cluster across an individual customer's VMs (e.g., using kops), but that would offer little benefit.
Instead, the longer-term goal is to run a single large Kubernetes cluster on which we deploy the individual customer installations into dedicated namespaces, thereby increasing resource utilization and lowering cost compared to the fixed allocation of machines to customers that we have today.
From the Kubernetes side of things, our software works fine already, we can deploy multiple installations to one cluster just fine. An open topic is however the VPN access. What we would need is a way to allow all pods in a customer's namespace access to the customer's VPN, but not to any other customers' VPNs.
When googling the topic, I found approaches that add a VPN client to the individual containers (e.g., https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/), which is obviously not an option.
Other approaches seem to describe running a VPN server inside K8s (which is also not what we need).
Again others (like the "Strongswan IPSec VPN service", https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/ ) use DaemonSets to "configure routing on each of the worker nodes". This also does not seem like a solution that is acceptable to us, since that would allow all pods (irrespective of the namespace they are in) on a worker node access to the respective VPN... and would also not work well if we have dozens of customer installations each requiring its own VPN setup on the cluster.
Is there any approach or solution that provides what we need, i.e., VPN access for the pods in a specific namespace only?
Or are there any other approaches that could still satisfy our requirement (lower cost due to Kubernetes worker nodes being shared between customers)?
For LDAP access, one option might be to setup a kind of LDAP proxy, so that only this proxy would need to have VPN access to the customer network (by running this proxy on a small dedicated VM for each customer, and then configuring the proxy as LDAP endpoint for the application). However, LDAP access is only one out of many aspects of connectivity that our application needs depending on the use case.
If your IPsec concentrator supports VTI, it's possible to route the traffic using firewall rules. For example, pfSense supports it: https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html.
Using VTI, you can direct traffic using some kind of policy routing: https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html
However, I can see two big problems here:
You cannot have two IPsec tunnels with conflicting networks. For example, your kube network is 192.168.0.0/24 and you have two customers: A (172.12.0.0/24) and B (172.12.0.0/12). Unfortunately, this can happen (unless your customers are able to NAT those networks).
Finding the ideal criteria for the rule match (to allow the routing), since your source network is always the same. Using packet marks (via iptables mangle, or even set by the application) can be an option, but you will still get stuck on the first problem.
A similar scenario is found in the WSO2 (API gateway provider) architecture. They solved it using a reverse proxy in each network (sad but true): https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN
Regards,
UPDATE:
I don't know whether you use GKE. If so, alias IP ranges may be an option: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips. The pod IPs will be routable from the VPC, so you can apply some kind of routing policy based on their CIDR.
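Whatever routing mechanism you end up with, a per-namespace egress NetworkPolicy can at least ensure that only the matching customer's pods may send traffic toward that customer's network. This is a sketch, not a routing solution; the namespace and CIDRs are placeholders, and you would also need to allow DNS and any shared services:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: egress-customer-a-only
      namespace: customer-a            # hypothetical customer namespace
    spec:
      podSelector: {}                  # applies to every pod in the namespace
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 10.32.0.0/16         # placeholder: the cluster-internal pod network
      - to:
        - ipBlock:
            cidr: 172.16.1.0/24        # placeholder: customer A's on-premises network

Applied in every customer namespace (each with its own on-premises CIDR), this blocks cross-customer egress even though all worker nodes carry all the routes.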

Solution for HA production infrastructure and server management

My company mostly specializes in web and mobile development. Some of our clients want their backend or web applications hosted and managed by us; because of that, we have several apps and servers to manage. I'm looking for a solution to have all these servers under one panel and, most of all, to deploy all these applications in high availability. Moreover, we have servers with many different cloud providers, and it would be nice if it were possible to use them all.
I've already found and tested a few solutions. Maybe someone has had the same problem and found a better one, or can advise which of these is best?
1. Rancher + DNS Round Robin
This would be setting up Rancher in HA mode using Cattle or Kubernetes, then setting up a few hosts just for load balancers and achieving HA through DNS round-robin: put the IPs of all load balancers in the DNS records of every web application.
Pros:
Easy to set up
Multiple environments; one panel to administer development and production infrastructure.
No single point of failure
Very cheap
Cons:
Leaves failover to the client-side application
Not reliable
When one node is down, some clients see high response times (they have to wait for the request to time out)
2. Rancher + Cloudflare Load Balancer
As before, set up Rancher in HA mode using Cattle or Kubernetes. Then set up a few hosts just for load balancers and achieve HA by pointing the Cloudflare Load Balancer at the Rancher nodes used as load balancers.
Pros:
Easy to set up
Multiple environments; one panel to administer development and production infrastructure.
Theoretically, the Cloudflare LB has a 100% SLA
Cons:
The biggest problem is that the Cloudflare LB uses DNS records for load balancing, so our clients would need to point their domains to our DNS servers on Cloudflare or add a CNAME record for our domain. Neither is an ideal solution :/ and a CNAME would be bad for SEO, I think.
With many domains and many requests it can get expensive.
Notes: I've tested this solution and it works quite well; after shutting down a node with a load balancer or with the application, downtime was about 20-60s, i.e. just the time needed to spin up a new container.
3. Rancher + Floating IP + Keep alive
As before, set up Rancher in HA mode using Cattle or Kubernetes, and set up a few hosts just for load balancers. Then set up keepalived and a (DigitalOcean) floating IP for the load balancer nodes.
[DigitalOcean floating IP diagram]
Pros:
Easy to set up
Multiple environments; one panel to administer development and production infrastructure.
No single point of failure
Cons:
Load balancer nodes need to be on DigitalOcean
4. Kubernetes on Google Cloud Platform with Kubernetes Engine
Setting up Kubernetes in HA mode on GCP.
Pros:
Super easy to set up on GCP; just one click
Cons:
I couldn't find an SLA for GCP load balancers, but it is probably a single point of failure and the SLA is not 100%
We would be tied to one cloud provider with this Kubernetes cluster
Having an LB for every application, even a small one, could get expensive.
A worse web panel than Rancher's
5. Rancher 2.0: use all of the above depending on the environment
With Rancher 2.0 we could use all of the above solutions, since it allows adding existing Kubernetes clusters to Rancher. So it would work with Kubernetes Engine on GCP. However, it's in alpha and doesn't have an HA deployment yet.
Mostly I'm thinking about setting up option 3, then switching to Rancher 2.0 once it's released, and using GCP with Kubernetes Engine for larger applications. Does someone have a better solution, or should I use a different one of the options above?

How to configure Kubernetes to encrypt the traffic between nodes, and pods?

In preparation for HIPAA compliance, we are transitioning our Kubernetes cluster to use secure endpoints across the fleet (between all pods). Since the cluster is composed of about 8-10 services currently using HTTP connections, it would be super useful to have this taken care of by Kubernetes.
The specific attack vector we'd like to address with this is packet sniffing between nodes (physical servers).
This question breaks down into two parts:
Does Kubernetes encrypt the traffic between pods and nodes by default?
If not, is there a way to configure it to do so?
Many thanks!
Actually, the correct answer is "it depends". I would split the cluster into two separate networks.
Control Plane Network
This network is the physical network or, in other words, the underlay network.
The k8s control-plane elements - kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet - talk to each other in various ways. Except for a few endpoints (e.g. metrics), it is possible to configure encryption on all of them.
If you're also pentesting, then kubelet authn/authz should be switched on too; otherwise the encryption doesn't prevent unauthorized access to the kubelet, whose endpoint (on port 10250) can be hijacked with ease.
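For instance, a kubelet configuration along these lines (a sketch; the CA path is a placeholder) disables anonymous access to port 10250 and delegates authorization to the API server:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false                            # reject unauthenticated requests
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt  # placeholder path to the cluster CA
      webhook:
        enabled: true                             # verify bearer tokens against the API server
    authorization:
      mode: Webhook                               # delegate authorization decisions to the API server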
Cluster Network
The cluster network is the one used by the Pods, also referred to as the overlay network. Encryption is left to the third-party overlay plugin to implement; failing that, the app has to implement it.
The Weave overlay supports encryption. The linkerd service mesh that #lukas-eichler suggested can also achieve this, but on a different networking layer.
The replies here seem to be outdated. As of 2021-04-28, at least the following components can provide an encrypted networking layer to Kubernetes:
Istio
Weave
linkerd
cilium
Calico (via Wireguard)
(the list above was compiled by consulting the respective projects' home pages)
Does Kubernetes encrypt the traffic between pods and nodes by default?
Kubernetes does not encrypt any traffic.
There are service meshes like linkerd that allow you to easily introduce HTTPS communication between your HTTP services.
You would run an instance of the service mesh on each node and all services would talk to it. The communication inside the service mesh would be encrypted.
Example:
your service -HTTP-> service-mesh proxy on localhost -HTTPS-> proxy on the remote node -HTTP-> remote service on localhost.
When you run the service-mesh node in the same pod as your service, the localhost communication runs on a private virtual network device that no other pod can access.
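For comparison, with a modern sidecar-based mesh (Linkerd 2.x style, rather than the per-node deployment described above), enabling the encrypting proxy is typically a single annotation. A sketch; the workload name and image are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-http-service                       # hypothetical workload
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-http-service
      template:
        metadata:
          labels:
            app: my-http-service
          annotations:
            linkerd.io/inject: enabled            # inject the sidecar; proxy-to-proxy traffic is mTLS
        spec:
          containers:
          - name: app
            image: registry.example.com/app:1.0   # placeholder image
            ports:
            - containerPort: 8080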
No, Kubernetes does not encrypt traffic by default.
I haven't personally tried it, but the description of the Calico software-defined network seems oriented toward what you are describing, with the additional benefit of already being Kubernetes-friendly.
I thought Calico did native encryption, but based on this GitHub issue it seems they recommend using a solution like IPsec to encrypt, just as you would on a traditional host.