I'm new to Kubernetes and OpenShift (I came from the Docker Swarm world) and I'm having trouble with some of the Kubernetes and OpenShift documentation, especially the parts related to Routes and Services. I was looking for how to expose a replica set of containers externally, and I found that the Kubernetes documentation uses a Service to expose the Pods while OpenShift uses Routes. Can anyone explain the differences to me?
There are only minor differences in the tools being used. OpenShift is a Kubernetes distribution, meaning it is a collection of opinionated, pre-selected components. For ingress, OpenShift uses HAProxy to get (HTTP) traffic into the cluster; other Kubernetes distributions may use the NGINX Ingress Controller or something similar.
Services are used to load-balance traffic inside the cluster. When you create a ReplicaSet, you'll have multiple Pods running. To "talk" to these Pods, you typically create a Service, and that Service will distribute the traffic evenly between your Pods.
So to get HTTP(S) traffic from the outside to your Service, OpenShift uses Routes (Ingress in other Kubernetes distributions):
                                            +-----+
                                        +-->+ Pod |
           +-------+       +---------+  |   +-----+
Traffic--->+ Route +------>+ Service +--+-->+ Pod |
           +-------+       +---------+  |   +-----+
                                        +-->+ Pod |
                                            +-----+
So to expose your application to the outside world, you typically create an internal Service using oc create service and then create a Route using oc expose:
# Create a new ClusterIP service named myservice
oc create service clusterip myservice --tcp=8080:8080
# Expose the service externally via a Route
oc expose service myservice
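For reference, the Route that oc expose generates looks roughly like this (the hostname here is made up; if you omit it, OpenShift generates one from the router's default domain):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myservice
spec:
  host: myservice.apps.example.com   # hypothetical; omit to let OpenShift generate one
  to:
    kind: Service
    name: myservice                  # the Service created above
  port:
    targetPort: 8080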
"Routes"in OCP do not compare with K8S"Services"but with K8S"Ingress"
Comparison betweenRoutesandIngressis here
Doc on how to expose "Services" in OCP outside the cluster is here
Red Hat needed an automated reverse-proxy solution for containers running on OpenShift long before Kubernetes came up with Ingress. So now in OpenShift we have the Route object, which does almost the same job as Ingress in Kubernetes. The main difference is that Routes are implemented by good old HAProxy, which can be replaced by a commercial solution based on F5 BIG-IP. On Kubernetes, however, you have much more choice, as Ingress is an interface implemented by multiple servers, from the most popular NGINX through Traefik, AWS ELB/ALB, GCE, Kong, and others, including HAProxy as well.
So which one is better, you may ask? Personally, I think HAProxy in OpenShift is much more mature, although it doesn't have as many features as some Ingress implementations. On Kubernetes, however, you can use various enhancements; my favorite is the integration with cert-manager, which lets you automate the management of SSL certificates. No more manual actions for issuing and renewing certificates, and on top of that you can use a trusted CA for free thanks to the integration with Let's Encrypt!
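As a rough sketch of that cert-manager integration (the names myapp and letsencrypt-prod are placeholders, and it assumes you have already created a ClusterIssuer with that name), an annotated Ingress can look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # hypothetical ClusterIssuer name
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls          # cert-manager stores the issued certificate here
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80

cert-manager watches the annotation, obtains a certificate from the configured issuer, and keeps it renewed in the referenced Secret.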
As an interesting fact, I want to mention that starting with OpenShift 3.10, Kubernetes Ingress objects are recognized by OpenShift and are translated/implemented by... a router. It's a big step towards compatibility, since configuration prepared for Kubernetes can now be launched on OpenShift without any modifications.
Related
My infrastructure is based on Kubernetes (k3s, with an Istio ingress gateway). I would like to use Istio to expose an application that is not in my cluster.
outside (internet) --https--> my router --> [cluster] istio --> [not cluster] application (192.168.1.29:8123)
I tried creating a HAProxy container, but it didn't work...
Any ideas?
If you insist on piping your traffic to the non-cluster application through the Kubernetes cluster, there are a couple of ways to handle this. One is a Kubernetes-native ExternalName Service.
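A minimal sketch of that option, assuming the application is reachable via a DNS name (ExternalName expects a hostname rather than an IP, so for a bare address like 192.168.1.29 you would need a headless Service with manually managed Endpoints instead):

apiVersion: v1
kind: Service
metadata:
  name: home-app                 # hypothetical name
spec:
  type: ExternalName
  externalName: app.home.lan     # hypothetical DNS name resolving to 192.168.1.29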
The Istio way, though, would be to create a ServiceEntry and then use a VirtualService combined with a Gateway to direct traffic to your application outside of the cluster.
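A sketch of the Istio way, under the assumption that a Gateway named my-gateway already exists and with made-up hostnames (the 192.168.1.29:8123 endpoint comes from the question):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-app
spec:
  hosts:
  - app.internal.example          # hypothetical internal name used for mesh routing
  location: MESH_EXTERNAL         # the workload lives outside the mesh
  resolution: STATIC
  ports:
  - number: 8123
    name: http
    protocol: HTTP
  endpoints:
  - address: 192.168.1.29
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: external-app
spec:
  hosts:
  - app.example.com               # hypothetical public hostname
  gateways:
  - my-gateway                    # assumes an existing Istio Gateway
  http:
  - route:
    - destination:
        host: app.internal.example
        port:
          number: 8123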
I have different deployments across different namespaces, and I would like to expose some of them to the Internet even though I don't have a static public IP available.
The different services are deployed on Rancher k3s and every service which should be publicly accessible has an Ingress defined in the same namespace.
I was trying to follow Rancher - How to expose my services publicly?, but I didn't really get what I have to do and, moreover:
Why do we need to define a LoadBalancer? It seems to me that the ingress controller used by k3s (Traefik?) already creates one. If this is a must (or a good way to go), how exactly should the service be defined?
I don't have any Rancher UI in my environment. Therefore, is there a way to achieve what is described in that link in a declarative way?
Is there a way to use services like No-IP or FreeDNS for the final hostname?
If I get it right, you deployed Kubernetes manually on bare-metal/VM nodes, and now you want to reach the deployments running inside that cluster.
There are two levels of load balancing in this setup: the one managed by your ingress controller (it sounds like Traefik in your case), and a second L4 load balancer that is recommended in front of your workers to reach the ingress pods, which are usually deployed on multiple/all nodes. Traefik, or any other ingress controller, will load-balance traffic inside the k8s cluster without issue even if you don't have an L4 load balancer, but that is not recommended: if you lose the node it runs on, no traffic can reach the Kubernetes cluster anymore. You "just" need to have your DNS resolution pointing at your public IP and routed to one of your workers, or to the LB in front of them. However, if you don't have an L4 LB, you'll need to have your ingress pods listening on ports 80 and/or 443.
Most things that you do in the Rancher UI are just an easier way to see your k8s objects; all ingress configuration can be achieved via kubectl, k9s (strongly recommend that one!), Lens, or other methods. Either way, k8s objects are still k8s objects. In this case, you need to have your services exposed as ClusterIP so that they are reachable by the ingress pods; a declarative sketch follows below.
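To make the declarative part concrete, a minimal sketch (all names and the hostname are placeholders; the hostname could just as well be one managed by No-IP or FreeDNS):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP              # only reachable inside the cluster; the ingress pods route to it
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com    # e.g. a No-IP / FreeDNS hostname pointing at your public IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80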
I've never used such a solution natively from k8s, but when I had to, the internet router was able to do this part; once the traffic is in, it's internal routing.
I hope this helps. Ingress can definitely be a tough one to grasp!
What is the best approach to creating an Ingress resource that interacts with an ELB in a target deployment environment running on Kubernetes?
As we all know, there are different cloud providers and many types of settings related to the deployment of your ingress resource, which depend on your target environment: AWS, OpenShift, plain-vanilla K8s, Google Cloud, Azure.
On cloud deployments like Amazon, Google, etc., ingresses also need special annotations, most of which are common to all microservices in need of an ingress.
If we also deploy a mesh like Istio on top of k8s, then we need to use an Istio Gateway with the ingress. If we use OCP, then it has a special kind called "Routes".
I'm looking for the best solution that targets more standard options, decreasing the differences between platforms when deploying ingress resources.
So, given the many different setups here, is the best approach perhaps to create an operator that deploys the Ingress resource?
Is it important to create some generic component to deploy the Ingress while staying cloud-agnostic?
How do other companies deploy their ingress resources to the k8s cluster?
What is the best approach to creating an Ingress resource that interacts with an ELB in a target deployment environment running on Kubernetes?
On AWS the common approach is to use an ALB with the AWS ALB Ingress Controller, but it has its own drawbacks in that it creates one ALB per Ingress resource.
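As a sketch of what that looks like (the annotation values are typical for the ALB Ingress Controller, while the app name and host are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: alb                   # handled by the ALB Ingress Controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # provisions a public ALB
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80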
If we also deploy a mesh like Istio, then we need to use an Istio Gateway with the ingress.
Yes, then the situation is different, since you will use a VirtualService from Istio or use AWS App Mesh. That approach looks better, and you will not have an Ingress resource for your apps.
I'm looking for the best solution that targets more standard options, decreasing the differences between platforms when deploying ingress resources.
Yes, this sits at the intersection between the cloud provider's infrastructure and your cluster, so unfortunately there are many different setups here. It also depends on whether your ingress gateway is inside or outside of the cluster.
In addition, the Ingress resource only just became GA (stable) in the most recent Kubernetes release, 1.19.
I am using Kubernetes on Google Container Engine, and I still don't understand how the load balancers are "magically" configured when I create or update any of my Ingresses.
My understanding was that I needed to deploy a glbc / GCE L7 container, and that container would watch the Ingresses and do the job. I've never deployed such a container. So maybe it is part of the glbc cluster addon, and that's why it works even before I do anything?
Yet, on my cluster, I can see an "l7-default-backend-v1.0" Replication Controller in kube-system, with its pod and NodePort service, and it corresponds to what I see in the LB configs/routes. But I can't find anything like an "l7-lb-controller" that would do the provisioning; no such container exists on the cluster.
So where is the magic? What is the glue between the Ingresses and the LB provisioning?
Google Container Engine runs the glbc "glue" on your behalf unless you explicitly request it to be disabled as a cluster add-on (see https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing).
Just like you don't see a pod in the system namespace for the scheduler or controller manager (like you do if you deploy Kubernetes yourself), you don't see the glbc controller pod either.
I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service of Type: LoadBalancer. I think this is the optimal solution, so that you don't need to add an external load balancer outside Kubernetes like nginx or HAProxy. I was wondering if it is possible to create this using DO's Floating IPs.
Things have changed: DigitalOcean created their own cloud provider implementation, as answered here, and they maintain a Kubernetes "Cloud Controller Manager" implementation:
Kubernetes Cloud Controller Manager for DigitalOcean
Currently digitalocean-cloud-controller-manager implements:
nodecontroller - updates nodes with cloud-provider-specific labels and addresses, and deletes Kubernetes nodes when they are deleted on the cloud provider.
servicecontroller - responsible for creating load balancers when a Service of Type: LoadBalancer is created in Kubernetes.
To try it out, clone the project on your master node.
Next, get the token key from https://cloud.digitalocean.com/settings/api/tokens and run:
# Make your DO API token available to the secret generator
export DIGITALOCEAN_ACCESS_TOKEN=abc123abc123abc123
# Create the Kubernetes Secret holding the token
scripts/generate-secret.sh
# Deploy the cloud controller manager itself
kubectl apply -f do-cloud-controller-manager/releases/v0.1.6.yml
There are more examples here.
What will happen once you do the above? DO's cloud controller manager will create a load balancer (one that has a failover mechanism out of the box; more on that in the load balancer's documentation).
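For example, applying a Service like this (the names are made up) is what triggers the cloud controller manager to provision a DO load balancer:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer        # the DO cloud controller manager reacts to this type
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80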
Things will change again soon, as DigitalOcean is jumping on the Kubernetes bandwagon; check here. You will have the choice to let them manage your Kubernetes cluster instead of worrying about a lot of the infrastructure yourself (this is my understanding of the service; let's see how it works when it becomes available...).
The LoadBalancer type of Service is implemented by code in the Kubernetes master that is specific to each cloud provider. There isn't a cloud provider for DigitalOcean (see the supported cloud providers), so the LoadBalancer type will not be able to take advantage of DigitalOcean's Floating IPs.
Instead, you should consider using a NodePort service or attaching an external IP to your service and mapping the exposed IP to a DO Floating IP.
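A rough sketch of the second option (the name and nodePort are placeholders, and the IP is the droplet's anchor IP, as explained by the anchor-IP trick in the next answer):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080         # hypothetical port in the default NodePort range
  externalIPs:
  - 10.x.x.x                # the droplet's anchor IP, as explained below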
It is actually possible to expose a service through a Floating IP. The only catch is that the external IP you need to use is a little unintuitive.
From what it seems, DO has some sort of overlay network for their Floating IP service. To get the actual IP you need to expose, ssh into your gateway droplet and find its anchor IP by querying the metadata service:
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
and you will get something like
10.x.x.x
This is the address that you can use as the external IP in a LoadBalancer-type Service in Kubernetes.
Example:
# Recent kubectl versions use --external-ip for this (older ones used --public-ip)
kubectl expose rc my-nginx --port=80 --external-ip=10.x.x.x --type=LoadBalancer