I'm comparing layer 7 HTTP(S) load balancers to use with Kubernetes on Google Cloud Platform.
GCP has its own managed service called Google Cloud Load Balancer.
Also popular to use with Kubernetes is Envoy, an open-source "cloud native" proxy that has many contributions from Google staff.
Is Google Cloud Load Balancer a managed version of Envoy, perhaps just with some added integrations with GCP's CDN? If they are not actually the same, what are the key differences between the two options (beyond just that one is managed and the other is self-deployed)?
Right now, the new version of Google Cloud Load Balancer uses the Envoy proxy to handle advanced traffic management (here).
I'm trying to understand why and when to use the Spring Cloud K8s Discovery Server when K8s has native service discovery, which does:
Service discovery
Ensures compatibility with additional tooling (e.g. Istio)
Isn't it simpler if your app can just rely on the DNS name of the service it needs? On top of that, it gets load balancing as well. Why should one even think about a Discovery Server?
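To illustrate what I mean by relying on DNS, here's a minimal sketch (the service name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders          # hypothetical service name
  namespace: default
spec:
  selector:
    app: orders         # pods backing the service
  ports:
  - port: 80            # port clients connect to
    targetPort: 8080    # port the pods listen on
# Any pod in the cluster can now call the service at
# http://orders.default.svc.cluster.local, and kube-proxy
# load-balances the requests across the backing pods.
```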
My application has a backend and a frontend. The frontend is currently hosted in a Google Cloud Storage bucket and I am migrating the backend from Compute Engine VMs to Kubernetes Engine Autopilot.
When migrating, does it make more sense for me to migrate everything to Kubernetes Engine or would I be better off keeping the frontend in the bucket? Backend and frontend are different projects in different git repositories.
I am asking because I saw that it is possible to manage the exposure of Kubernetes services even at the level of URL maps and load balancers, so I thought of perhaps entrusting the hosting of both projects (backend and frontend) to Kubernetes, since I know that Kubernetes is a very complete and powerful solution.
There is no problem with keeping your frontend on Cloud Storage (or elsewhere) and your backend on Kubernetes (GKE).
It's not a "perfect pattern" because you can't manage and deploy every part of your application with Kubernetes alone, and you don't get end-to-end control-plane management.
On one side you have to deploy your frontend and configure your load balancer; on the other, you have to deploy your backend on Kubernetes with YAML.
In addition, your application is not portable to other Kubernetes clusters (it's not a pure Kubernetes deployment but a hybrid between Kubernetes and Google Cloud, so you are partly tied to Google Cloud). If portability is not a requirement, that's fine.
In the end, if you expose your app behind a load balancer with the frontend on Cloud Storage and the backend on GKE, users won't see any of this. If one day you want to package your frontend in a container and deploy it on GKE, keep the same load balancer (or at least the same domain name) and your users won't notice the difference!
No worries for now, you can keep going! (And it's cheaper for now: with Cloud Storage you don't pay for processing to serve static resources.)
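For illustration, if you did later move both parts into the cluster, a single Ingress could keep the same domain and route by path. A minimal sketch, assuming hypothetical Services named frontend and backend (on GKE this provisions a Google Cloud HTTP(S) load balancer and its URL map):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app          # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /api      # API traffic goes to the backend Service
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /         # everything else goes to the frontend Service
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
```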
I am currently running Kubespray-configured Kubernetes clusters into which I am trying to integrate Istio as an ingress controller. I am trying to figure out not how to set up the internal workings of Istio (there are tons of tutorials for that), but how to connect the Istio ingress to cloud-agnostic load balancers that route traffic into the cluster.
I see a lot of tutorials that cover cloud-specific approaches such as AWS or GCP load balancers driven from within Kubernetes (which are utterly useless to me), but I want a Kubernetes cluster that knows and cares nothing about the external cloud environment, which makes it easier to port or to build hybrid/multi-cloud environments. I am having trouble finding information on this kind of setup. Can anyone point me to information about manually configuring external load balancers to route external traffic into the cluster without relying on Kubernetes cloud extensions?
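To make this concrete, the kind of setup I'm after is the Istio ingress gateway exposed as a NodePort Service, with any external L4 load balancer (HAProxy, nginx, a hardware appliance) pointed at the node IPs. A minimal sketch of what I imagine; the nodePort values are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort            # no cloud provider integration required
  selector:
    istio: ingressgateway   # standard label on the Istio gateway pods
  ports:
  - name: http2
    port: 80
    targetPort: 8080        # default gateway container port for HTTP in recent Istio
    nodePort: 30080         # external LB forwards to <node-ip>:30080
  - name: https
    port: 443
    targetPort: 8443        # default gateway container port for HTTPS
    nodePort: 30443
```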
Thanks for any information you can provide or references you can point me to!
I have a question around K8S Service Type "LoadBalancer".
I am working on developing a new "Kubernetes as a Service" platform (like GKE, etc.) for multiple clouds.
My question: the K8S Service type "LoadBalancer" works with cloud load balancers (which are external to Kubernetes). GKE and other cloud-based solutions provide direct integration with them, so if I create a GKE cluster and create a Service of type "LoadBalancer", it will transparently create a new GCP load balancer and show the load balancer's IP in Kubernetes (as the external IP). The same applies to the other cloud providers.
I want to offer a similar feature on my new "Kubernetes as a Service" platform, where users can choose a cloud provider, create a Kubernetes cluster, and then apply a Service of type "LoadBalancer", and this will result in a load balancer being created on the (user-selected) cloud platform.
I am able to automate the flow up to Kubernetes cluster creation, but I am clueless when it comes to the integration between the K8S Service and the external load balancer.
Can anyone please help me with how to approach integrating the Service type "LoadBalancer" with specific cloud load balancers? Do I need to write a new CRD, or is there any similar code available on Git (in case anyone knows a link for reference)?
You have to understand how Kubernetes interacts with the cloud provider. For example, I previously deployed Kubernetes on AWS with kops, and I saw that Kubernetes uses an AWS access key and secret to interact with AWS. If I remember correctly, there were some CLI options on kube-proxy or the kubelet to support AWS. (I have searched the man pages of all the Kubernetes binaries for AWS options, but I couldn't find any to point you to.)
For example, look at the kubelet man page: it provides an option called --google-json-key to authenticate against GCP. You will get some idea if you deploy Kubernetes on AWS with kops or kube-aws and dig through the setup and its configuration/options, etc. (The same applies to the other cloud providers.)
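For context, in current Kubernetes this integration is handled by a cloud-controller-manager: a provider-specific controller that implements the LoadBalancer interface from k8s.io/cloud-provider, watches Services of type LoadBalancer, and provisions the external load balancer through the provider's API. The user-facing side is just a regular Service; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
  # provider-specific annotations usually tune the created LB
  # (e.g. internal vs. internet-facing)
spec:
  type: LoadBalancer      # the cloud-controller-manager watches for this type
  selector:
    app: my-app
  ports:
  - port: 80              # port exposed on the cloud load balancer
    targetPort: 8080      # port the pods listen on
# Once the controller finishes provisioning, the LB's address appears in
# the Service's status.loadBalancer.ingress field (the EXTERNAL-IP column
# in kubectl get svc).
```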
I have a Cloud SQL (MySQL) instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
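For reference, a minimal sketch of the sidecar pattern, assuming hypothetical image names and a placeholder instance connection name (pin the proxy image to a current release):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: gcr.io/my-project/my-app    # hypothetical application image
    env:
    - name: DB_HOST
      value: "127.0.0.1"               # the proxy listens on localhost inside the pod
    - name: DB_PORT
      value: "3306"
  - name: cloud-sql-proxy
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
    args:
    - "--port=3306"
    - "my-project:my-region:my-instance"   # placeholder instance connection name
```

Because the app talks to the proxy over localhost and the proxy makes an authorized, encrypted connection to Cloud SQL, no IP whitelisting is needed at all.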
If you prefer something a little more hands-on, this codelab will walk you through taking an app from running locally to running on a Kubernetes cluster.
I am using Google Cloud Platform, so my solution was to add the Google Compute Engine VM instance External IP to the whitelist.