Should I use an API Gateway or Service Mesh? - kubernetes

Say you are using Microservices with Docker Containers and Kubernetes.
If you use an API Gateway (e.g. Azure API Gateway) in front of your microservices to handle a composite UI and authentication, do you still need a Service Mesh to handle service discovery and circuit breaking? Is there any functionality in Azure API Gateway to handle these kinds of challenges? How?

API gateways operate at Layer 7 of the OSI model and manage traffic coming from outside the network (sometimes called north/south traffic), whereas a Service Mesh operates down at Layer 4 and manages inter-service communication (sometimes called east/west traffic). Typical API Gateway features are reverse proxying, load balancing, authentication and authorization, IP allow/deny listing, rate limiting, etc.
A Service Mesh, on the other hand, works as a proxy deployed in a sidecar pattern: it decouples the communication responsibility from the service itself and handles concerns such as circuit breaking, timeouts, retries, service discovery, etc.
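To make that concrete, here is a minimal sketch of how a mesh like Istio expresses circuit breaking, timeouts, and retries declaratively, outside the application code. The orders service name and all thresholds are hypothetical, illustrative values:

```yaml
# DestinationRule: connection limits and outlier detection (circuit breaking)
# for a hypothetical in-mesh service named "orders"
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # queue limit before requests are rejected
    outlierDetection:                  # eject unhealthy pods from the load-balancing pool
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
---
# VirtualService: timeouts and retries for the same hypothetical service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
      timeout: 3s
      retries:
        attempts: 3
        perTryTimeout: 1s
```

The point is that none of this lives in the service's code; the sidecar proxy enforces it for every call.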
If you happen to use Kubernetes and microservices, then you might want to explore solutions such as Ambassador + Istio, or Kong, which works as a gateway as well as a service mesh.

An API Gateway only handles the entry point into your Kubernetes cluster, e.g. it sends a request to your frontend microservice. However, it can do nothing after the request enters your cluster. There might still be multiple calls between microservices. You still want to verify authentication for those requests, you still want to make sure that there are circuit breakers between the services, etc. Theoretically, you could make all your microservices call each other via the API gateway, but I do not think that is what you want.
In short: no. Because an API Gateway is only an entry point, any service-to-service communication is better handled with a Service Mesh.

You can use an API gateway to handle service discovery and circuit breaking, but that would make it a central point in your deployment, i.e. all calls, external and internal, would have to be routed via the gateway.
A service mesh deploys an additional edge component (a "sidecar") alongside each service, making the overall behavior distributed (but also more complex).
Depending on your particular requirements you may use one, the other, both, or neither.

Nicely explained by fatcook above. See Azure Front Door, as it is attempting to do the same as Kong on Azure: API gateway plus control-plane-level features.

Related

Ingress controller vs API gateway

I would like to know what the differences are between an API gateway and an Ingress controller. People tend to use these terms interchangeably due to the similar functionality they offer. When I say "Ingress controller", don't confuse it with the Ingress objects provided by Kubernetes. Also, it would be nice if you could explain a scenario where one would be more useful than the other.
Is "API gateway" a generic term used for traffic routers in the cloud-native world, and is an "Ingress controller" the implementation of an API gateway in the Kubernetes world?
An Ingress controller allows a single IP and port to reach all services running in Kubernetes through Ingress rules. The Ingress controller's Service is set to type LoadBalancer so it is accessible from the public internet.
An API gateway is used for application routing, rate limiting, security, request and response handling, and other application-related tasks. Say you have a microservice-based application in which a request needs information collected from multiple microservices. You need a way to distribute the user request to the different services, gather the responses from all the microservices, and prepare the final response to send back to the user. An API gateway does this kind of work for you.
Ingress
Ingress manages and routes external traffic into Kubernetes Services.
Ingress rules are defined in YAML config and are backed by an Ingress controller (the NGINX ingress controller being a famous one).
The Ingress controller sits behind a single Kubernetes Service, which gets exposed as a LoadBalancer.
A list of other ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
For a simple mental model, you can consider an Ingress to be an NGINX server that just forwards traffic to Services based on a ruleset (see the minimal example below).
An Ingress doesn't have as much functionality as an API gateway. Some Ingress controllers don't support authentication, rate limiting, application-level routing, security features, merging of requests and responses, or other add-on/plugin options.
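Here is a minimal sketch of such a ruleset, assuming hypothetical backend Services named api-service and web-service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx            # served by the NGINX ingress controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /api               # requests under /api go to the API backend
            pathType: Prefix
            backend:
              service:
                name: api-service    # hypothetical backend Service
                port:
                  number: 80
          - path: /                  # everything else goes to the web frontend
            pathType: Prefix
            backend:
              service:
                name: web-service    # hypothetical backend Service
                port:
                  number: 80
```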
API gateway
An API gateway can also do simple routing, but it mostly gets used when you need more flexibility, security, and configuration options.
There are lots of parameters to compare when choosing between an Ingress and an API gateway; it depends mostly on your use case.
API gateways like KrakenD and Kong are way better than an Ingress when it comes to security integrations, such as an OAuth plugin or API key support, and they support rate limiting and API aggregation.
The Kong API gateway also has a good plugin ecosystem, which you can use if you want to configure logging/monitoring of traffic as well (see the sketch below).
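As a rough sketch of what that looks like (assuming Kong's ingress controller is installed in the cluster; the plugin, Ingress, and service names are hypothetical), a rate-limiting plugin can be declared as a KongPlugin resource and attached to an Ingress with an annotation:

```yaml
# Hypothetical rate-limiting plugin: at most 5 requests per minute
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    konghq.com/plugins: rate-limit-5-per-minute   # attach the plugin declared above
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service                 # hypothetical backend Service
                port:
                  number: 80
```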
There are many API gateways on the market, just as there are many ingress controllers; you can check the API gateway features and comparison below.
Read more at: https://medium.com/#harsh.manvar111/api-gateway-identity-server-comparison-ec439468cc8a
If your use case is small and you are sure about your requirements, you can use an Ingress even in production; an API gateway is not strictly necessary.
Indeed, both have a set of features that intersect: path mapping, path conversion, load balancing, etc.
However, they do differ. I may be wrong, but you create an Ingress (1) to run it in Kubernetes and (2) to act more like a "Kubernetes-native" reverse proxy.
An API gateway can be installed anywhere (although there are now many that run natively in Kubernetes, like Ambassador, Gloo, and Kong), and they have more functionality available, like a developer portal, rate limiting, etc.
Personally, I use an Ingress as a reverse proxy for a website, and an API gateway for APIs. This does not mean you can't use an Ingress for APIs; you are just not taking full advantage of it.

Synchronize HTTP requests between several service instances in Kubernetes

We have a service with multiple replicas that works with storage without transactions or locking. So we somehow need to synchronize concurrent requests between multiple instances by some "sharding" key. Right now we host this service in a Kubernetes environment as a ReplicaSet.
Do you know of any simple out-of-the-box approaches for this, so we don't have to implement it from scratch?
Here are several of our ideas on how to do this:
Deploy the service as a StatefulSet and implement a proxy API that routes traffic to a specific pod in this StatefulSet based on a sharding key from the HTTP request. In this scenario, all requests that need to be synchronized will be handled by one instance, and handling this case wouldn't be a problem.
Deploy the service as a StatefulSet and implement custom logic in the service itself to re-route traffic to the specific instance (or process on that exact instance). As I understand it, it's not possible to keep this implementation abstract, and it would work only in a Kubernetes environment.
Somehow expose each pod IP outside the cluster and implement routing logic on the client-side.
Just implement synchronization between instances through some third-party service like Redis.
I would like to try routing traffic to specific pods. If you know standard approaches for handling this case, I would much appreciate it.
Thank you a lot in advance!
Another approach would be to put a message queue (like Kafka or RabbitMQ) in front of your service.
Then your pods will subscribe to the MQ topic/stream. The pod will decide if it should process the message or not.
Also, try looking into service meshes like Istio or Linkerd.
They might have an OOTB solution for your use-case, although I wasn't able to find one.
Remember that a NetworkPolicy is not traffic routing!
Pods are intended to be stateless and indistinguishable from one another; see pod-networking.
I recommend Istio. It has a special component that is responsible for routing: Envoy. It is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh.
Useful article: istio-envoy-proxy.
Istio documentation: istio-documentation.
A useful Istio explanation: https://www.youtube.com/watch?v=e2kowI0fAz0.
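If Istio is an option, one way to approach the sharding-key routing from the question is consistent-hash load balancing in a DestinationRule, which keeps all requests carrying the same key on the same pod. A minimal sketch, where the service name and the x-shard-key header are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: storage-service
spec:
  host: storage-service               # hypothetical service whose requests need sharding
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-shard-key   # requests with the same header value always
                                      # land on the same pod (as long as the pod set is stable)
```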
You should also be able to create a Deployment per customer group, and a Service per Deployment. The NGINX Ingress can then be told to map incoming requests to the specific customer-group Services by whatever attributes are relevant (a sketch follows).
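A minimal sketch of that idea, assuming two hypothetical customer groups routed by hostname (all names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-groups
spec:
  ingressClassName: nginx
  rules:
    - host: group-a.example.com        # requests for customer group A
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-group-a  # Service in front of group A's Deployment
                port:
                  number: 80
    - host: group-b.example.com        # requests for customer group B
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-group-b  # Service in front of group B's Deployment
                port:
                  number: 80
```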
Another solution is to use kube-router.
Kube-router can be run as an agent or a Pod (via a DaemonSet) on each node and leverages the standard Linux technologies iptables, IPVS/LVS, ipset, and iproute2.
Kube-router uses the IPVS/LVS technology built into Linux to provide L4 load balancing. Each ClusterIP, NodePort, and LoadBalancer Kubernetes Service type is configured as an IPVS virtual service. Each Service endpoint is configured as a real server for the virtual service. The standard ipvsadm tool can be used to verify the configuration and monitor the active connections.
How it works: service-proxy.

How to consume Istio-based Service that enables `mtls`?

Currently, I want to introduce Istio as our service-mesh framework for our microservices. I have played with it for some time (< 1 week), and my understanding is that Istio really provides an easy way to secure service-to-service communication. Much (or all?) of the Istio docs/articles provide examples of how a client and a server that both have istio-proxy (Envoy) installed as a sidecar container can establish secure communication using mTLS.
However, since our existing clients (which I don't have any control over) that consume our service (which will be migrated to use Istio) don't have Istio, I still don't understand well how we should best handle this.
Is there any tutorial or example that covers my use case better?
How can a non-Istio-based client use mTLS to consume our Istio-based service? Think of using a basic curl command to simulate such a thing.
Also, I am thinking of distributing a specific service account (Kubernetes, GCP IAM service account, etc.) to the client to limit the client's privileges when calling our service. I have many questions about how these things (GCP IAM service accounts, Istio, RBAC, mTLS, JWT tokens, etc.) contribute to securing our service API.
Any advice?
You want to add a third party outside of your network to your Istio mesh via SSL over the public internet?
I don't think Istio is really meant for federating external services, but you could just have an Istio ingress gateway proxy sitting at the edge of your network for routing into and back out of your application (a rough sketch follows the link below).
https://istio.io/docs/tasks/traffic-management/ingress/
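A rough sketch of that setup, assuming hypothetical host, secret, and service names: external clients terminate ordinary TLS at the Istio ingress gateway (no client certificates required), and the gateway forwards the traffic into the mesh, where mTLS between sidecars takes over.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway                    # use Istio's default ingress gateway pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE                         # plain server-side TLS for external clients
        credentialName: api-example-com-cert # hypothetical TLS secret with cert + key
      hosts:
        - "api.example.com"                  # hypothetical public hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - "api.example.com"
  gateways:
    - public-gateway
  http:
    - route:
        - destination:
            host: my-service                 # hypothetical in-mesh service
            port:
              number: 80
```

With something like this in place, an external client can simply run a plain `curl https://api.example.com/...` without knowing anything about Istio or mTLS.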
If you're building microservices then surely you have an endpoint or gateway; that seems more sensible to me. Try Apigee or something similar.

Kubernetes: should I use HTTPS to communicate between services

Let's say I'm using a GCE ingress to handle traffic from outside the cluster and terminate TLS (https://example.com/api/items); from there the request gets routed to one of two services that are only available inside the cluster. So far so good.
What if I have to call service B from service A, should I go all the way and use the cluster's external IP/domain and use HTTPS (https://example.com/api/user/1) to call the service or could I use the internal IP of the service and use HTTP (http://serviceb/api/user/1)? Do I have to encrypt the data or is it "safe" as long as it isn't leaving the private k8s network?
What if I want to have "internal" endpoints that should only be accessible from within the cluster - when I'm always using the external https-url those endpoints would be reachable for everyone. Calling the service directly, I could just do a http://serviceb/internal/info/abc.
What if I have to call service B from service A, should I go all the way and use the cluster's external IP/domain and use HTTPS (https://example.com/api/user/1) to call the service or could I use the internal IP of the service and use HTTP (http://serviceb/api/user/1)?
If you need to use the features that your API Gateway is offering (authentication, caching, high availability, load balancing), then YES; otherwise, DON'T. The external-facing API should contain only endpoints that are used by external clients (from outside the cluster).
Do I have to encrypt the data or is it "safe" as long as it isn't leaving the private k8s network?
"safe" is a very relative word and I believe that there are no 100% safe networks. You should put in the balance the probability of "somebody" or "something" sniffing data from the network and the impact that it has on your business if that happens.
If this helps you: for any project that I've worked for (or I heard from somebody I know), the private network between containers/services was more than sufficient.
What if I want to have "internal" endpoints that should only be accessible from within the cluster - when I'm always using the external https-url those endpoints would be reachable for everyone.
Exactly what I was saying at the top of the answer: keeping those endpoints inside the cluster makes them inaccessible from the outside by design.
One last thing: managing a lot of SSL certificates for a lot of internal services is a pain that you should avoid if it isn't necessary.

Low Level Protocol for Microservice Orchestration

Recently I started working with microservices. I wrote a library for service discovery using Redis to store every service's URL and port number, along with a TTL value for the entry. It turned out to be an expensive approach, since every cross-service call to any other service required an additional call to Redis. Caching didn't seem like a good idea, since the services won't be up all the time and there can be downtimes as well.
So I wanted to write a separate microservice that could take care of the orchestration part. For this I need to figure out a really low-level network protocol to take care of the exchange of heartbeats (which would help me figure out when any service instance becomes unavailable). How do applications like zookeeperClient and redisClient take care of heartbeats?
Moreover, what is the industry's preferred protocol for cross-service calls?
I have been calling REST APIs over HTTP and have eliminated every possibility of joins across different collections.
Is there a better way to do this?
Thanks.
I think the term "Orchestration" is not good for what you are asking. From what I've encountered so far in microservices world the term "Orchestration" is used when a complex business process is involved and not for service discovery. What you need is a Service registry combined with a Load balancer. You can find here all the information you need. Here are some relevant extras that great article:
There are two main service discovery patterns: client‑side discovery and server‑side discovery. Let’s first look at client‑side discovery.
The Client‑Side Discovery Pattern
When using client‑side discovery, the client is responsible for determining the network locations of available service instances and load balancing requests across them. The client queries a service registry, which is a database of available service instances. The client then uses a load‑balancing algorithm to select one of the available service instances and makes a request.
The network location of a service instance is registered with the service registry when it starts up. It is removed from the service registry when the instance terminates. The service instance’s registration is typically refreshed periodically using a heartbeat mechanism.
Netflix OSS provides a great example of the client‑side discovery pattern. Netflix Eureka is a service registry. It provides a REST API for managing service‑instance registration and for querying available instances. Netflix Ribbon is an IPC client that works with Eureka to load balance requests across the available service instances. We will discuss Eureka in more depth later in this article.
The client‑side discovery pattern has a variety of benefits and drawbacks. This pattern is relatively straightforward and, except for the service registry, there are no other moving parts. Also, since the client knows about the available service instances, it can make intelligent, application‑specific load‑balancing decisions such as using hashing consistently. One significant drawback of this pattern is that it couples the client with the service registry. You must implement client‑side service discovery logic for each programming language and framework used by your service clients.
The Server‑Side Discovery Pattern
The client makes a request to a service via a load balancer. The load balancer queries the service registry and routes each request to an available service instance. As with client‑side discovery, service instances are registered and deregistered with the service registry.
The AWS Elastic Load Balancer (ELB) is an example of a server-side discovery router. An ELB is commonly used to load balance external traffic from the Internet. However, you can also use an ELB to load balance traffic that is internal to a virtual private cloud (VPC). A client makes requests (HTTP or TCP) via the ELB using its DNS name. The ELB load balances the traffic among a set of registered Elastic Compute Cloud (EC2) instances or EC2 Container Service (ECS) containers. There isn’t a separate service registry. Instead, EC2 instances and ECS containers are registered with the ELB itself.
HTTP servers and load balancers such as NGINX Plus and NGINX can also be used as a server-side discovery load balancer. For example, this blog post describes using Consul Template to dynamically reconfigure NGINX reverse proxying. Consul Template is a tool that periodically regenerates arbitrary configuration files from configuration data stored in the Consul service registry. It runs an arbitrary shell command whenever the files change. In the example described by the blog post, Consul Template generates an nginx.conf file, which configures the reverse proxying, and then runs a command that tells NGINX to reload the configuration. A more sophisticated implementation could dynamically reconfigure NGINX Plus using either its HTTP API or DNS.
Some deployment environments such as Kubernetes and Marathon run a proxy on each host in the cluster. The proxy plays the role of a server‑side discovery load balancer. In order to make a request to a service, a client routes the request via the proxy using the host’s IP address and the service’s assigned port. The proxy then transparently forwards the request to an available service instance running somewhere in the cluster.
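In Kubernetes specifically, a plain ClusterIP Service plays that role: clients simply call the Service's DNS name, and kube-proxy on each node forwards the request to one of the matching pods. A minimal sketch, using a hypothetical user-service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service        # clients inside the cluster call http://user-service/...
spec:
  selector:
    app: user-service       # pods labeled app=user-service are the "registered instances"
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # port the pods actually listen on
```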
The server‑side discovery pattern has several benefits and drawbacks. One great benefit of this pattern is that details of discovery are abstracted away from the client. Clients simply make requests to the load balancer. This eliminates the need to implement discovery logic for each programming language and framework used by your service clients. Also, as mentioned above, some deployment environments provide this functionality for free. This pattern also has some drawbacks, however. Unless the load balancer is provided by the deployment environment, it is yet another highly available system component that you need to set up and manage.