Both public and intranet services on the same OpenShift cluster - kubernetes

In my company we have a few public websites and many internal webapps. Currently they are running in different AWS security groups.
Is it possible to run both kinds of services on the same OpenShift cluster and make sure the internal services are not accessible from the Internet?
Thanks!

The traditional(?) way this is solved is through Internet-facing ELBs/ALBs pointed at NodePorts on the cluster. I personally haven't tried a Service of kind: LoadBalancer since 1.2, so I can't speak to its current functionality, but I do know Kubernetes has a lot of users on AWS, so it's plausible it works fine by now.
You can also run your own Ingress Controller, several of which support IP whitelisting/blacklisting, authentication, SSL/TLS, all the fancy toys, if you'd prefer not to deal with the ELB headache.
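As a rough illustration, IP whitelisting on an Ingress could look like the following (this assumes the community ingress-nginx controller; the hostname, service name and CIDRs are made-up placeholders):

    # Ingress for an intranet-only app: ingress-nginx only accepts
    # requests originating from the listed (hypothetical) internal CIDRs.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-app
      annotations:
        nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
    spec:
      ingressClassName: nginx
      rules:
      - host: internal-app.corp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80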
If you're not already considering it, the Calico SDN supports in-cluster network policies, so you could also apply an extra layer of lockdown to ensure no Internet-facing app breaks out of its allowed network path; in effect, security groups move down into the cluster.
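As a sketch of that extra layer, a plain Kubernetes NetworkPolicy (which Calico enforces) that only admits traffic into an internal namespace from pods in that same namespace might look like this (the namespace name is an assumption):

    # Deny all ingress to pods in the "intranet" namespace except traffic
    # coming from pods in that same namespace.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: intranet-only
      namespace: intranet
    spec:
      podSelector: {}        # applies to every pod in the namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}    # any pod in the same namespace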

Related

How to access monitoring services (Prometheus, Kibana, etc.) deployed in Kubernetes production

I have web services running in GKE (Google Kubernetes Engine). I also have monitoring services running in the cloud that are monitoring these services. Everything is working fine... except that I don't know how to access the Prometheus and Kibana dashboards. I know I can use port-forward to temporarily forward a local port and access them that way, but that cannot scale as more and more engineers use the system. I was thinking of a way to provide access to these dashboards to engineers, but I'm not sure what the best way would be.
Should I create a load balancer for each of these?
What about security? I only want a few engineers to have access to these systems.
There are other considerations as well, would love to get your thoughts.
Should I create a load balancer for each of these?
No. You can, but it's not a good idea.
What about security? I only want a few engineers to have access to
these systems.
You can create accounts in Kibana and manage access there, or you can use IAP (Identity-Aware Proxy) to restrict access. Ref doc
You have multiple options. You can use a LoadBalancer as you did, though as noted it's not a good idea.
A good way to expose different applications is to use an Ingress. Say you are running Prometheus, Jaeger, and Kibana in your GKE cluster.
You can create different hosts with the domains prom.example.com, tracing.example.com, and kibana.example.com, so there is a single ingress controller Service of type LoadBalancer, and you can map its IP to DNS.
Ref doc
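To make the single-ingress idea concrete, here is a hedged sketch of a name-based Ingress fanning out to the three dashboards (hostnames, service names and ports are assumptions; the ingress controller itself is exposed once through a Service of type LoadBalancer):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: monitoring
      namespace: monitoring
    spec:
      ingressClassName: nginx    # assumes an nginx ingress controller
      rules:
      - host: prom.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus
                port:
                  number: 9090
      - host: tracing.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jaeger-query
                port:
                  number: 16686
      - host: kibana.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana
                port:
                  number: 5601

On top of this you would still restrict who can reach these hosts, for example with IAP as mentioned above.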

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection within your pod; it is possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC, and then route that traffic to the pod, which could be in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Route traffic between VPCs:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with an IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool (a sketch of this step follows below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
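For the last step, a minimal sketch of pinning a Deployment to the new pool via GKE's node-pool label (the pool name "vpn-pool" and the image are hypothetical placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: partner-client
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: partner-client
      template:
        metadata:
          labels:
            app: partner-client
        spec:
          # GKE labels every node with the name of its node pool;
          # "vpn-pool" stands for the smaller-range pool created above.
          nodeSelector:
            cloud.google.com/gke-nodepool: vpn-pool
          containers:
          - name: partner-client
            image: example/partner-client:latest   # placeholder image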

What do I need to provide to make calls from my k8s cluster?

I have a Kubernetes cluster with my application running inside it, and I also have a host machine that my application needs to access.
All the infrastructure is located inside the VPN network.
How can I set up egress to let my application send requests from the cluster to this host machine? (Are Kubernetes Network Policies an appropriate way to handle this and actually solve the problem?)
(Sorry if this is too obvious a question; I haven't found any working solution for this yet.)
I'm not sure if I get your question right, but by default no network connectivity is blocked by Kubernetes. I assume you haven't set up any NetworkPolicies; this means all ingress and egress communication is open and nothing will block access, at least from the K8s perspective.
However, if you have only deployed your application but haven't exposed it yet (with an Ingress or a Service of type LoadBalancer), you will not be able to reach it from outside the cluster. If you're running on-prem, you will need to install MetalLB or something similar that lets you create Services of type LoadBalancer. The same goes for Ingress, as the Ingress Controller itself needs to be reachable in the first place.
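If you do introduce NetworkPolicies later, keep in mind that you will then have to allow this egress explicitly. A hedged sketch of what that could look like (the pod label and host IP are placeholders):

    # Allows the selected pods to reach the host machine. Note that once
    # a pod is selected by any egress policy, all its other egress
    # (e.g. DNS) also has to be allowed explicitly.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-host
    spec:
      podSelector:
        matchLabels:
          app: my-app              # hypothetical label on the calling pods
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 10.20.30.40/32   # placeholder IP of the host machine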

Multiple environments for websites in Kubernetes

I am a newbie in Kubernetes.
I have 19 LANs with 190 machines.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps, and their environments are assigned to the different LANs.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) and keep them isolated.
Is that possible?
I only have a vague idea of what you want to achieve, so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik; it has great support for HTTP and Kubernetes.
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service really depends on the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and a headless Service for stateful workloads that need peer discovery. This is obviously generalized and has many exceptions.
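As a minimal sketch of the stateless case (names, image and ports are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: website
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: website
      template:
        metadata:
          labels:
            app: website
        spec:
          containers:
          - name: website
            image: example/website:1.0   # placeholder image
            ports:
            - containerPort: 8080
    ---
    # ClusterIP Service fronting the Deployment; the Ingress Controller
    # routes external traffic to it.
    apiVersion: v1
    kind: Service
    metadata:
      name: website
    spec:
      selector:
        app: website
      ports:
      - port: 80
        targetPort: 8080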
For managing different environments, you could split your resources into different namespaces.
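For example, one namespace per environment (the names are just placeholders):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: qa
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: prod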
As for the single portal to manage it all, I don't think anything like that exists, but I might be wrong. Depending on your workflow, you could also build your own portal using the Kubernetes API, but that requires a good understanding of Kubernetes itself.

Solution for HA production infrastructure and server management

My company mostly specializes in web and mobile development. Some of our clients want their backend or web applications hosted and managed by us; because of that we have several apps and servers to manage. I'm looking for a solution to have all these servers under one panel and, most of all, to deploy all these applications in high availability. Moreover, we have servers with many different cloud providers and it would be nice if it were possible to use them.
I've already found and tested a few solutions. Maybe someone has had the same problem and found a better solution, or maybe you can advise which of these is the best?
1. Rancher + DNS Round Robin
This would mean setting up Rancher in HA mode using Cattle or Kubernetes, then setting up a few hosts just for load balancing and achieving HA via DNS round robin: put the IPs of all load balancers in the DNS records of every web application.
Pros:
Easy to setup
Multiple environments. One panel to administrate development, production infrastructure.
No single point of failure
Very cheap
Cons:
Leaves failover to the client-side application
Not reliable
When one node is down, some clients see high response times (they have to wait for the request to time out)
2. Rancher + Cloudflare Load Balancer
As before, set up Rancher in HA mode using Cattle or Kubernetes, then set up a few hosts just for load balancing and achieve HA by using the Cloudflare Load Balancer pointing to the Rancher nodes used as load balancers.
Pros:
Easy to setup
Multiple environments. One panel to administrate development, production infrastructure.
Theoretically, the Cloudflare LB has a 100% SLA
Cons:
The biggest problem is that the Cloudflare LB uses DNS records for load balancing, so our clients would need to point their domains to our DNS servers on Cloudflare or add a CNAME record to our domain. Neither of these is an ideal solution :/ and I think the CNAME would be bad for SEO.
With many domains and many requests it can get expensive.
Notes: I've tested this solution and it works quite well; after shutting down a node running a load balancer or the application, downtime was about 20-60 s, i.e. just the time needed to spin up a new container.
3. Rancher + Floating IP + keepalived
As before, set up Rancher in HA mode using Cattle or Kubernetes and set up a few hosts just for load balancing. Then set up keepalived and a (DigitalOcean) floating IP for the load balancer nodes.
(Diagram: DigitalOcean floating IP)
Pros:
Easy to setup
Multiple environments. One panel to administrate development, production infrastructure.
No single point of failure
Cons:
Load balancer nodes need to be on DigitalOcean
4. Kubernetes on Google Cloud Platform with Kubernetes Engine
Setting up Kubernetes in HA mode on GCP.
Pros:
Super easy to set up on GCP. Just one click
Cons:
I couldn't find the SLA of GCP Load Balancers, but it's probably a single point of failure and the SLA is not 100%
With this Kubernetes cluster we would be tied to one cloud provider
Having an LB for every application, even a small one, could get expensive
Worse web panel than Rancher's
5. Rancher 2.0 use all from above depending on environment
With Rancher 2.0 we could use any of the above solutions, since it allows adding existing Kubernetes clusters to Rancher. So it would work with Kubernetes Engine on GCP. However, it's in alpha and doesn't have an HA deployment yet.
Mostly I'm thinking about setting up option 3, then switching to Rancher 2.0 once it is released, and using GCP with Kubernetes Engine for larger applications. Does anyone have a better solution, or should I use one of the other options described above?