My company mostly specializes in web and mobile development. Some of our clients want their backend or web applications hosted and managed by us, so we have several apps and servers to manage. I'm looking for a solution to bring all these servers under one panel and, above all, to deploy all these applications in high availability (HA). Moreover, we have servers at many different cloud providers, and it would be nice if it were possible to use them all.
I've already found and tested a few solutions. Maybe someone has had the same problem and found a better solution, or can you advise which of these is best?
1. Rancher + DNS Round Robin
Set up Rancher in HA mode using Cattle or Kubernetes, then dedicate a few hosts to load balancers and achieve HA with DNS round robin: put the IPs of all the load balancers in the DNS records of every web application (see the zone-file sketch after the cons below).
Pros:
Easy to set up
Multiple environments: one panel to administer development and production infrastructure.
No single point of failure
Very cheap
Cons:
Leaves failover to the client-side application
Not reliable
When one node is down, some clients see high response times (they have to wait for the request to time out)
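For illustration, a minimal BIND-style zone sketch of what the round-robin records could look like (hostnames and IPs are hypothetical):

```
; app1.example.com resolves to all load-balancer hosts in turn
app1.example.com.   300  IN  A  203.0.113.10   ; LB node 1
app1.example.com.   300  IN  A  203.0.113.11   ; LB node 2
app1.example.com.   300  IN  A  203.0.113.12   ; LB node 3
```

A low TTL limits how long clients keep resolving a dead node, but as noted in the cons, a client that picked the dead address still has to wait for its request to time out.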
2. Rancher + Cloudflare Load Balancer
As before, set up Rancher in HA mode using Cattle or Kubernetes, then dedicate a few hosts to load balancers and achieve HA with a Cloudflare Load Balancer pointing at the Rancher nodes used as load balancers.
Pros:
Easy to set up
Multiple environments: one panel to administer development and production infrastructure.
Theoretically, the Cloudflare LB has a 100% SLA
Cons:
The biggest problem is that the Cloudflare LB uses DNS records for load balancing, so our clients would need to point their domains at our DNS servers on Cloudflare or add a CNAME record for our domain. Neither is an ideal solution :/ and I think a CNAME would be bad for SEO.
With many domains and many requests, it can get expensive.
Notes: I've tested this solution and it works quite well; after shutting down a node running a load balancer or an application, downtime was about 20-60 s, i.e. just the time needed to spin up a new container.
3. Rancher + Floating IP + keepalived
As before, set up Rancher in HA mode using Cattle or Kubernetes and dedicate a few hosts to load balancers. Then set up keepalived and a (DigitalOcean) floating IP on the load-balancer nodes (see the keepalived sketch after the cons below).
(Diagram: DigitalOcean floating IP failover)
Pros:
Easy to set up
Multiple environments: one panel to administer development and production infrastructure.
No single point of failure
Cons:
Load-balancer nodes need to be on DigitalOcean
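A minimal keepalived sketch for two load-balancer nodes. Since DigitalOcean floating IPs are reassigned via their API rather than gratuitous ARP, failover is handled by a hypothetical helper script called on promotion (interface name, password and script path are assumptions):

```
# /etc/keepalived/keepalived.conf on the primary LB node
vrrp_instance LB_HA {
    state MASTER            # use "state BACKUP" and a lower priority on the second node
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    # Hypothetical script that reassigns the floating IP via the DigitalOcean API
    notify_master /usr/local/bin/assign-floating-ip.sh
}
```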
4. Kubernetes on Google Cloud Platform with Kubernetes Engine
Set up Kubernetes in HA mode on GCP.
Pros:
Super easy to set up on GCP, just one click (roughly equivalent to the single gcloud command sketched after the cons below)
Cons:
I couldn't find the SLA of GCP load balancers, but it's probably a single point of failure and the SLA is not 100%
This Kubernetes cluster would tie us to one cloud provider
Having an LB for every application, even a small one, could get expensive
Worse web panel than Rancher's
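For reference, the "one click" is roughly this single command (cluster name and zone are hypothetical; flags may vary by SDK version):

```
gcloud container clusters create my-cluster \
    --zone europe-west1-b \
    --num-nodes 3
```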
5. Rancher 2.0: use any of the above depending on the environment
With Rancher 2.0 we could use any of the above solutions, since it allows adding existing Kubernetes clusters to Rancher; it would therefore work with Kubernetes Engine on GCP. However, it's in alpha and doesn't support HA deployment yet.
Mostly I'm thinking about setting up option 3, then switching to Rancher 2.0 once it's released, and using GCP with Kubernetes Engine for larger applications. Does someone have a better solution, or should I use one of the other options listed?
We're integrating with a new partner that requires us to use a VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster, and only a single pod needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection inside your pod; it's possible, but it looks like an antipattern.
I would recommend using CloudVPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC, and then route that traffic to the pod, which can sit in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Redirect traffic between VPCs:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with a specific IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool (see the sketch below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
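A rough sketch of those last two steps. The names are hypothetical, and the gcloud flag assumes a VPC-native cluster with a pre-created secondary range, so verify it against the reference above:

```
# Create a node pool whose pods draw from a smaller, dedicated range (e.g. a /26)
gcloud container node-pools create vpn-pool \
    --cluster my-cluster \
    --pod-ipv4-range vpn-pod-range
```

Then pin the deployment to that pool using the node label GKE puts on every node:

```
# Snippet from the Deployment's pod template
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: vpn-pool   # GKE's built-in node-pool label
```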
I am looking at strategies for bidirectional communication between applications hosted on separate clusters. Some of them are hosted in Service Fabric and the others in Kubernetes. One of the options is to use a DNS service on the Service Fabric side and its counterpart on Kubernetes; on the other hand, a reverse proxy seems to be the way to go. After going through the options, I was wondering: what is actually the best way to build microservices that can be deployed either in SF or in K8s without worrying about the communication model, requiring the fewest changes if we suddenly wish to migrate one app from SF to K8s while keeping it available to the SF apps, and vice versa?
I am a newbie in Kubernetes.
I have 19 LANs with 190 machines.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps, and their environments are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager for the websites and environments (dev, QA, prod) while keeping them isolated.
Is that possible?
I only got a vague idea of what you want to achieve, so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik; it has great support for HTTP and Kubernetes.
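As an illustration, a minimal Ingress routing a hostname to an in-cluster service via Traefik (hostname and service name are placeholders; older clusters used the annotation form shown here rather than ingressClassName):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    kubernetes.io/ingress.class: traefik   # let Traefik claim this Ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port:
              number: 80
```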
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and a headless Service for stateful workloads that need peer discovery. This is obviously overgeneralized and has many exceptions.
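For the stateless case, a minimal sketch of the Deployment-plus-Service pattern (image and names are placeholders):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:stable        # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp                    # routes to the Deployment's pods
  ports:
  - port: 80
    targetPort: 80
```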
For managing different environments, you could split your resources into different namespaces.
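For example, one namespace per environment to guard against accidental cross-talk (names are arbitrary):

```
kubectl create namespace dev
kubectl create namespace qa
kubectl create namespace prod
kubectl -n dev apply -f webapp.yaml   # deploy the same manifests per environment
```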
As for the single portal to manage it all, I don't think anything like that exists, but I might be wrong. That said, depending on your workflow, you could build your own portal using the Kubernetes API, but that requires a good understanding of Kubernetes itself.
Is it possible to deploy OpenShift in a DMZ (restricted zone)? What challenges will I face? What do I have to do in the DMZ network?
You can deploy Kubernetes and OpenShift in a DMZ.
You can also add a DMZ in front of Kubernetes and OpenShift.
The Kubernetes and OpenShift network model is a flat SDN model. All pods get IP addresses from the same network CIDR and live in the same logical network regardless of which node they reside on.
We have ways to control network traffic within the SDN using the NetworkPolicy API. NetworkPolicies in OpenShift represent firewall rules and the NetworkPolicy API allows for a great deal of flexibility when defining these rules.
With NetworkPolicies it is possible to create zones, but one can also be much more granular in the definition of the firewall rules. Separate firewall rules per pod are possible and this concept is also known as microsegmentation (see this post for more details on NetworkPolicy to achieve microsegmentation).
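A minimal NetworkPolicy sketch in that spirit: it only admits traffic to pods labeled app=backend from pods labeled zone=dmz in the same namespace (labels and port are hypothetical):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dmz-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend            # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: dmz           # only pods labeled as DMZ may connect
    ports:
    - protocol: TCP
      port: 8080
```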
The DMZ is in certain aspects a special zone. This is the only zone exposed to inbound traffic coming from outside the organization. It usually contains software such as IDS (intrusion detection systems), WAFs (Web Application Firewalls), secure reverse proxies, static web content servers, firewalls and load balancers. Some of this software is normally installed as an appliance and may not be easy to containerize and thus would not generally be hosted within OpenShift.
Regardless of the zone, communication internal to a specific zone is generally unrestricted.
Variations on this architecture are common and large enterprises tend to have several dedicated networks. But the principle of purpose-specific networks protected by firewall rules always applies.
In general, traffic is supposed to flow only in one direction between two networks (as in an osmotic membrane), but often exceptions to this rule are necessary to support special use cases.
Useful article: openshift-and-network-security-zones-coexistence-approache.
It's very secure if you follow standard security practices for your cluster. But nothing is 100% secure. So adding a DMZ would help reduce your attack vectors.
In terms of protecting your Ingress from the outside, you can limit access to your external load balancer to HTTPS only, and most people do that, but note that HTTPS and your application itself can also have vulnerabilities.
As for pods and workloads, you can increase security (at some performance cost) using things like a well-crafted seccomp profile and/or adding the right capabilities in your pod security context. You can also add more security with AppArmor or SELinux, but lots of people don't since it can get very complicated.
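As a sketch of the kind of security context this points at (the seccompProfile field assumes Kubernetes 1.19+; on older clusters it was set via annotations, and the image name is a placeholder):

```
# Snippet from a pod spec
securityContext:
  seccompProfile:
    type: RuntimeDefault        # enable the runtime's default seccomp filter
containers:
- name: app
  image: myapp:latest           # placeholder image
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]             # drop everything...
      add: ["NET_BIND_SERVICE"] # ...then add back only what's needed
```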
There are also other alternatives to Docker in order to more easily sandbox your pods (still early in their lifecycle as of this writing): Kata Containers, Nabla Containers and gVisor.
Take a look at: dmz-kubernetes.
Here is a similar problem: dmz.
In my company we have a few public websites and many internal web apps. Currently they are running in different AWS security groups.
Is it possible to run both kind of services on the same OpenShift cluster and make sure internal services are not accessible from the Internet?
Thanks!
The traditional(?) way this is solved is with Internet-facing ELBs/ALBs pointed at the NodePorts on the cluster. I personally haven't tried a Service of kind: LoadBalancer since 1.2, so I can't speak to its current functionality, but I do know Kubernetes has a lot of users on AWS, so it's plausible it works fine by now.
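For the internal apps specifically, cloud providers expose "internal" load balancers through Service annotations; on AWS it looks roughly like this (names are placeholders, and the annotation value has varied across versions, so treat this as a sketch):

```
apiVersion: v1
kind: Service
metadata:
  name: internal-webapp
  annotations:
    # ELB gets a private address only, unreachable from the Internet
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-webapp
  ports:
  - port: 443
    targetPort: 8443
```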
You can also run your own Ingress Controller, several of which support IP whitelisting/blacklisting, authentication, SSL/TLS, and all the other fancy toys, if you'd prefer not to deal with the ELB headache.
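With the NGINX ingress controller, for example, IP whitelisting is a single annotation on the Ingress (the CIDRs are placeholders):

```
metadata:
  annotations:
    # Only these source ranges may reach this Ingress; everyone else gets 403
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
```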
If you're not already considering it, the Calico SDN supports in-cluster network policies, so you could also apply an extra level of locked-down-ness to ensure no Internet-facing app breaks out of its allowed network path; in effect, security groups move down into the cluster.