Can this work - Google Cloud Endpoints as API Management layer and Istio as Service Mesh on Kubernetes (GKE)

We would like to use Kubernetes for microservices and Google Cloud Endpoints as the API management layer.
If I understand correctly, to get Google Cloud Endpoints functionality, each real microservice needs a sidecar proxy (image: gcr.io/endpoints-release/endpoints-runtime:1).
So if we were to use Istio as the service mesh technology, how would the Envoy proxy work alongside the Google Cloud Endpoints proxy? Would Envoy actually proxy even the Cloud Endpoints container?
Or is this a bad strategy?
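For context, the Cloud Endpoints sidecar pattern the question refers to typically looks like the sketch below: the Extensible Service Proxy (ESP) container from the image mentioned above runs next to the application container and receives traffic first. This is a minimal, hypothetical Deployment — the names, ports, and Endpoints service name are illustrative assumptions, not taken from the original setup.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      # ESP sidecar: enforces API keys/auth per the Endpoints service
      # config, then forwards to the app over localhost.
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args:
        - --http_port=8081
        - --backend=127.0.0.1:8080                           # the real microservice
        - --service=my-api.endpoints.my-project.cloud.goog   # assumed Endpoints service name
        - --rollout_strategy=managed
        ports:
        - containerPort: 8081
      # The actual microservice container (hypothetical image).
      - name: app
        image: gcr.io/my-project/my-api:latest
        ports:
        - containerPort: 8080
```

If Istio sidecar injection were added to such a Pod, Envoy would sit in front of the Pod's listening port (8081 here), so inbound requests would traverse Envoy → ESP → app.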

Related

Why do we need API gateway when using Kubernetes?

In a microservices environment deployed to a Kubernetes cluster, why would we use an API gateway (for example, Spring Cloud Gateway) if Kubernetes supplies the same service with Ingress?
An Ingress controller is exposed as a single Kubernetes Service of type LoadBalancer. For a simple mental model, you can think of Ingress as an Nginx server that just forwards traffic to Services based on a rule set. Ingress does not have as much functionality as an API gateway: some Ingress controllers don't support authentication, rate limiting, application routing, security features, request/response transformation, and other add-on/plugin options.
An API gateway can also do simple routing, but it is mostly used when you need greater flexibility, security, and configuration options. While multiple teams or projects can share a set of Ingress controllers, or Ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than rely on the existing Ingress controller. Using both an Ingress controller and an API gateway inside Kubernetes gives organizations the flexibility to meet their business requirements.
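To make the "rule set" idea concrete, here is a minimal, hypothetical Ingress that forwards traffic by path; the host, backend Service names, and ports are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routes          # hypothetical
spec:
  rules:
  - host: api.example.com       # assumed host
    http:
      paths:
      # Path-based forwarding: this is essentially all plain Ingress does.
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders-svc    # hypothetical backend Service
            port:
              number: 80
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: users-svc     # hypothetical backend Service
            port:
              number: 80
```

Note there is nothing here about authentication, rate limiting, or request transformation — those cross-cutting concerns are what an API gateway layer adds.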
For accessing a database:
If the database and the cluster are both in the cloud, you can use the database's internal IP. If not, you should provide the IP of the machine where the database is hosted.
You can also refer to this Kubernetes Access External Services article.
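The usual way to give Pods a stable in-cluster name for such an external database is a Service without a selector plus a manually managed Endpoints object. A sketch, with an assumed port and a hypothetical database IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db        # in-cluster DNS name Pods will use
spec:
  ports:
  - port: 5432             # assumed PostgreSQL port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.50          # hypothetical IP of the external database host
  ports:
  - port: 5432
```

Pods can then connect to `external-db:5432`, and only the Endpoints object needs updating if the database moves.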

I use a Kubernetes cluster and Istio service mesh. Can Spring Cloud Gateway add something to this?

I'm studying Spring Cloud Gateway, but it seems that with a Kubernetes & Istio stack it's useless, because everything it offers comes with Kubernetes & Istio out of the box.
Am I right, or am I missing something Spring Cloud Gateway can do that they cannot? In other words, are there use cases which I cannot solve with Kubernetes & Istio, but rather need Spring Cloud Gateway for?

API gateway for services running with Kubernetes?

We have all our services running on Kubernetes. We want to know the best practice for deploying our own API gateway; we thought of 2 solutions:
Deploy API gateways outside the Kubernetes cluster(s), e.g. with Kong. This means the clusters' ingress will connect to the external gateways. Each gateway is a VM or physical machine, and you can scale by replicating many gateway instances.
Deploy the gateway from within Kubernetes (then maybe connect it to an external L4 load balancer), e.g. Ambassador. However, with this approach, each cluster can only have 1 gateway, and the only way to achieve fault tolerance is to actually replicate the entire K8s cluster.
What is the typical setup and what is better?
The typical setup for an API gateway in Kubernetes is either a Service of type LoadBalancer, if the cloud provider you are using supports dynamic provisioning of load balancers (all major cloud vendors like GCP, AWS, and Azure support it), or, even more commonly, an Ingress controller.
Both of these options can scale horizontally, so you have fault tolerance. In fact, there is already an Ingress controller solution for Kong:
https://github.com/Kong/kubernetes-ingress-controller
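As a sketch of the first option, a gateway Deployment running inside the cluster can be exposed through a Service of type LoadBalancer; the names, labels, and ports below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-gateway          # hypothetical gateway Service
spec:
  type: LoadBalancer         # the cloud provider provisions an external LB
  selector:
    app: api-gateway         # label on the gateway Pods (assumed)
  ports:
  - port: 443
    targetPort: 8443
```

Running several replicas of the gateway Deployment behind this Service gives horizontal scaling and fault tolerance without replicating the cluster.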

How to integrate Kubernetes Service Type "LoadBalancer" with Specific Cloud Load Balancers

I have a question around K8S Service Type "LoadBalancer".
I am working on developing a new "Kubernetes As a Service" Platform (like GKE etc.) for multi cloud.
Question is: the K8S Service type "LoadBalancer" works with cloud load balancers (which are external to Kubernetes). GKE & other cloud-based solutions provide direct integration with them, so if I create a GKE cluster & deploy a Service of type "LoadBalancer", it will transparently create a new GCP load balancer & show the load balancer's IP in Kubernetes (as the External IP). The same applies to other cloud providers.
I want to offer a similar feature on my new "Kubernetes As a Service" platform, where users can choose a cloud provider, create a Kubernetes cluster, & then apply a K8S Service of type "LoadBalancer", and this will result in a load balancer being created on the (user-selected) cloud platform.
I am able to automate the flow up to Kubernetes cluster creation, but I am clueless when it comes to the "K8S Service & external load balancer" integration.
Can anyone please help me with how to approach integrating the K8S Service type "LoadBalancer" with specific cloud load balancers? Do I need to write a new CRD, or is there similar code available on Git (in case anyone knows any link for reference)?
You have to understand how Kubernetes interacts with the cloud provider. For example, I previously deployed Kubernetes on AWS with kops, and I saw that Kubernetes uses an AWS access key & access secret to interact with AWS. If I remember correctly, I saw some CLI options on kube-proxy or the kubelet to support AWS. (I have searched the man pages of all Kubernetes binaries for AWS options, but I couldn't find any to point you to.)
For example, the kubelet man page documents an option called --google-json-key to authenticate against GCP. You will get some idea if you deploy Kubernetes on AWS with kops or kube-aws and dig through the setup and its configuration/options. (The same applies to other cloud providers.) In current Kubernetes, this integration lives in the cloud-controller-manager, which implements a per-cloud load-balancer provider interface rather than a CRD.
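To make the contract concrete: the platform-side controller watches Service objects of type LoadBalancer, provisions a load balancer on the chosen cloud, and writes the resulting address back into the Service's status, which kubectl then shows as the External IP. A sketch of the object before and after (names and IP are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                  # hypothetical user-created Service
spec:
  type: LoadBalancer         # the trigger the controller watches for
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
# The controller fills in the status after provisioning the cloud LB:
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10       # hypothetical address written back by the controller
```

In upstream Kubernetes, this reconciliation is done by the cloud-controller-manager's service controller through the cloud-provider LoadBalancer interface (EnsureLoadBalancer, UpdateLoadBalancer, EnsureLoadBalancerDeleted in k8s.io/cloud-provider); implementing that interface for each target cloud is the standard approach for a multi-cloud platform.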

Is Google Cloud Load Balancing a managed version of Envoy?

I'm comparing layer 7 HTTP(S) load balancers to use with Kubernetes on Google Cloud Platform.
GCP has their own managed service called Google Cloud Load Balancer.
Also popular to use with Kubernetes is Envoy, an open-source "cloud native" proxy that has many contributions from Google staff.
Is Google Cloud Load Balancer a managed version of Envoy? Perhaps just with some added integrations with GCP's CDN? If they are not actually the same, what are the key differences between the two options (beyond just that one is managed and the other is self-deployed)?
Right now, the new version of Google Cloud Load Balancing uses the Envoy proxy to handle advanced traffic management (here)