Why does Kubernetes use both REST APIs and gRPC?

Why does k8s use a REST API for the NodeStats/PodStats summary, but gRPC for the CRI (RunPodSandbox/CreateContainer/StartContainer)?
Why doesn't k8s use gRPC across the whole project?
Thank you.

Without knowing anything about the internals of k8s, it's reasonable to assume that it would serve generic REST endpoints for services it provides to unknown clients, but would be a gRPC client for other internal k8s services, and possibly serve gRPC to other internal k8s services.
gRPC is certainly more efficient for cooperating services, but it would be quite awkward for a random client not fully integrated with k8s to make a gRPC call, so k8s exposes a simplified REST interface for those external clients.
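To make the contrast concrete, here is a minimal Go sketch, not taken from the Kubernetes source: it reads the kubelet's REST stats summary over plain HTTP, then calls the CRI runtime service over gRPC. The read-only port 10255 (often disabled on hardened nodes) and the containerd socket path are assumptions about a typical node; adjust them for your environment.

```go
// Sketch: the same node exposes a REST summary endpoint for generic clients
// and a gRPC CRI contract for the kubelet <-> runtime path.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// REST: anyone with an HTTP client can read the node/pod stats summary.
	// (The kubelet normally requires auth/TLS; this assumes the read-only port.)
	if resp, err := http.Get("http://localhost:10255/stats/summary"); err == nil {
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("summary (JSON): %d bytes\n", len(body))
	}

	// gRPC: the kubelet talks to the container runtime over the CRI,
	// a strongly typed protobuf contract on a local unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version is the simplest CRI call; RunPodSandbox/CreateContainer/
	// StartContainer go through the same typed client in the real kubelet.
	v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", v.RuntimeName, v.RuntimeVersion)
}
```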

Related

Should I use an API Gateway or Service Mesh?

Say you are using Microservices with Docker Containers and Kubernetes.
If you use an API Gateway (e.g. Azure API Gateway) in front of your microservices to handle composite UI and authentication, do you still need a Service Mesh to handle Service Discovery and Circuit Breakers? Is there any functionality in Azure API Gateway to handle these kinds of challenges? How?
API gateways operate at Layer 7 of the OSI model and manage traffic coming from outside the network (sometimes called north/south traffic), whereas a service mesh is applied at Layer 4 and manages inter-service communication (sometimes called east/west traffic). Some examples of API gateway features are reverse proxying, load balancing, authentication and authorization, IP listing, and rate limiting.
A service mesh, on the other hand, works as a proxy deployed in a sidecar pattern, decoupling the communication responsibility from the service itself and handling other concerns such as circuit breakers, timeouts, retries, service discovery, etc.
If you happen to use Kubernetes and microservices, then you might want to explore other solutions such as Ambassador + Istio, or Kong, which work as a gateway as well as a service mesh.
An API Gateway only handles the entry point into your Kubernetes cluster, e.g. it sends a request to your frontend microservice. However, it can do nothing after the request enters your cluster. There might still be multiple calls between microservices. You still want to verify authentication for those requests, you still want to make sure that there are circuit breakers between the services, and so on. Theoretically, you could make all your microservices call each other via the API gateway, but I do not think that is what you want.
In short: no. An API Gateway is only an entry point; any service-to-service communication is better handled with a Service Mesh.
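To make the circuit-breaker concern concrete, here is a minimal Go sketch of what each service would otherwise have to implement (or pull in a library for) if the mesh didn't provide it. The thresholds, cool-down, and the downstream URL are arbitrary assumptions, not a production-ready implementation.

```go
// Minimal circuit breaker sketch: after too many consecutive failures the
// breaker "opens" and calls fail fast until a cool-down period has passed.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"sync"
	"time"
)

type Breaker struct {
	mu        sync.Mutex
	failures  int
	openUntil time.Time
	// Assumed thresholds; a real library (or a mesh) makes these configurable.
	maxFailures int
	cooldown    time.Duration
}

var errOpen = errors.New("circuit open: failing fast")

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return errOpen
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openUntil = time.Now().Add(b.cooldown)
			b.failures = 0
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &Breaker{maxFailures: 3, cooldown: 10 * time.Second}
	err := b.Call(func() error {
		// Hypothetical downstream microservice call.
		_, err := http.Get("http://orders.internal/healthz")
		return err
	})
	fmt.Println("call result:", err)
}
```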
You can use an API gateway to handle service discovery and circuit breaking, but that would make it a central point in your deployment, i.e. all calls, external and internal, would have to be routed via the gateway.
A service mesh deploys an additional edge component (a "sidecar") alongside each service, making the overall behavior distributed (but also more complex).
Depending on your particular requirements you may use one, the other, both, or neither.
Nicely explained by fatcook above. See Azure Front Door, as it is attempting to do the same as Kong on Azure: an API gateway plus control-plane-level features.

Which ingress controller should I use to support WebSocket in an AWS k8s cluster deployed by kops?

I have a cluster on AWS installed via kops. Now I need to expose a WebSocket service (with security enabled, i.e. wss://) to the outside world. There are different ingress controllers: nginx, Traefik, ELBs, ALBs. Which one is suggested, and is it:
easy to deploy and configure
able to support http://, https://, ws://, and wss://
In my opinion this question is opinion based and too broad. Please try to avoid such questions as there is not one solution that is the best.
I was able to find plenty of resources about nginx and WebSockets. I do not have production experience configuring this, but I think you might find this helpful.
NGINX is a popular choice for an Ingress Controller for a variety of features:
Websocket, which allows you to load balance Websocket applications.
SSL Services, which allows you to load balance HTTPS applications.
Rewrites, which allows you to rewrite the URI of a request before sending it to the application.
Session Persistence (NGINX Plus only), which guarantees that all the requests from the same client are always passed to the same backend container.
Support for JWTs (NGINX Plus only), which allows NGINX Plus to authenticate requests by validating JSON Web Tokens (JWTs).
The most important part with nginx is the annotation, which specifies which services are WebSocket services. There is more information about usage and configuration in the docs, and also a useful tutorial about configuring the nginx ingress; although it is about GKE, it might still be useful.
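To illustrate "the annotation", here is a sketch of the relevant Ingress object written with the Kubernetes Go API types (rather than YAML). The annotation keys shown are the ones commonly documented for the NGINX Inc. controller (nginx.org/websocket-services) and the community ingress-nginx controller (proxy timeouts); verify them against the controller version you actually deploy. The hostname, service name, and TLS secret name are made up.

```go
// Sketch: an Ingress whose annotations mark a backend as a WebSocket service
// and keep long-lived connections from being timed out by the proxy.
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pathType := networkingv1.PathTypePrefix

	ing := networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{
			Name: "ws-ingress",
			Annotations: map[string]string{
				// NGINX Inc. controller: list the WebSocket backends explicitly.
				"nginx.org/websocket-services": "ws-backend",
				// Community ingress-nginx: WebSockets work out of the box, but
				// long-lived connections usually need longer proxy timeouts.
				"nginx.ingress.kubernetes.io/proxy-read-timeout": "3600",
				"nginx.ingress.kubernetes.io/proxy-send-timeout": "3600",
			},
		},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "ws.example.com",
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "ws-backend",
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
			// TLS termination at the ingress is what gives you wss:// externally.
			TLS: []networkingv1.IngressTLS{{
				Hosts:      []string{"ws.example.com"},
				SecretName: "ws-tls-cert",
			}},
		},
	}

	fmt.Println(ing.Name, ing.Annotations)
}
```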

How to consume an Istio-based service that enables `mtls`?

Currently, I want to introduce Istio as our service mesh framework for our microservices. I have only played with it for a short time (< 1 week), and my understanding is that Istio provides an easy way to secure service-to-service communication. Much (or all?) of the Istio docs/articles provide examples of how a client and server, each with istio-proxy (Envoy) installed as a sidecar container, can establish secure communication using mTLS.
However, our existing clients (over which I have no control) that consume our service (which will be migrated to use Istio) don't have Istio, and I still don't understand how best to handle this.
Is there any tutorial or example that covers my use case?
How can a non-Istio client use mTLS to consume our Istio-based service? Think of using a basic curl command to simulate such a thing.
Also, I am thinking of distributing a specific service account (Kubernetes, GCP IAM service account, etc.) to the client to limit the client's privileges when calling our service. I have many questions about how these things (GCP IAM service accounts, Istio, RBAC, mTLS, JWT tokens, etc.) contribute to securing our service API.
Any advice?
You want to add a third party to your Istio mesh outside of your network via SSL over the public internet?
I don't think Istio is really meant for federating external services, but you could just have an Istio ingress gateway proxy sitting at the edge of your network for routing into and back out of your application.
https://istio.io/docs/tasks/traffic-management/ingress/
If you're building microservices then surely you have an endpoint or gateway; that seems more sensible to me. Try Apigee or something similar.
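For the "simulate with curl" part of the question: a non-Istio client can still do mutual TLS against an Istio ingress gateway that sits at the edge, provided the gateway is configured to trust the CA that issued the client certificate. Here is a hedged Go sketch of such a client; the hostname and certificate file names are placeholders, not anything Istio generates for you.

```go
// Sketch of a plain (non-Istio) client doing mutual TLS against an
// Istio ingress gateway. File names and host are placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Client certificate + key issued by a CA the gateway trusts.
	clientCert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}

	// CA bundle used to verify the gateway's server certificate.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{clientCert},
				RootCAs:      caPool,
			},
		},
	}

	// Roughly equivalent curl:
	//   curl --cert client.crt --key client.key --cacert ca.crt https://api.example.com/hello
	resp, err := client.Get("https://api.example.com/hello")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```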

Is it possible to run both Istio and gRPC in a GKE cluster

Istio and gRPC seem complementary and I'd like to use both in the clusters.
The thing is that they both add an extra container which receives/proxies communication between pods/microservices.
Is it advised or not to use both in parallel in all pods?
Are there particular adaptations to do if one uses both?
Thanks for any advice!
Istio and gRPC do work well together. When declaring your services' ports to Istio, just make sure to name them grpc-something so the proxy knows it is HTTP/2 (gRPC) traffic and routes it properly (see the sketch after this answer).
You mention that gRPC adds an extra container; why not have your service speak gRPC natively?
We do have future plans for protocol transcoding and rich integrated gRPC/Istio libraries that would skip layers, but that's not there yet.
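To illustrate the port-naming convention, here is a small sketch of such a Service expressed with the Kubernetes Go API types instead of YAML. The service name, selector, and port number are made up; the point is only the "grpc-" prefix on the port name (newer clusters can use the appProtocol field instead).

```go
// Sketch: a Service whose port name carries the "grpc-" prefix so Istio's
// sidecar classifies the traffic as gRPC/HTTP2. Names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "orders"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "orders"},
			Ports: []corev1.ServicePort{{
				// The "grpc-" prefix is what drives protocol selection here.
				Name:       "grpc-orders",
				Port:       50051,
				TargetPort: intstr.FromInt(50051),
			}},
		},
	}
	fmt.Println(svc.Name, svc.Spec.Ports[0].Name)
}
```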

Low Level Protocol for Microservice Orchestration

Recently I started working with microservices. I wrote a library for service discovery using Redis to store every service's URL and port number, along with a TTL value for the entry. It turned out to be an expensive approach, since every cross-service call to any other service required an extra call to Redis first. Caching didn't seem to be a good idea, since the services won't be up all the time; there can be downtimes as well.
So I wanted to write a separate microservice which could take care of the orchestration part. For this I need to figure out a really low-level network protocol to take care of the exchange of heartbeats (which would help me figure out when any service instance becomes unavailable). How do clients like the ZooKeeper client or the Redis client take care of heartbeats?
Moreover, what is the industry's preferred protocol for cross-service calls?
I have been calling REST APIs over HTTP and have eliminated every possibility of joins across different collections.
Is there a better way to do this?
Thanks.
I think the term "orchestration" is not a good fit for what you are asking. From what I've encountered so far in the microservices world, the term "orchestration" is used when a complex business process is involved, not for service discovery. What you need is a service registry combined with a load balancer. You can find all the information you need here. Here are some relevant extracts from that great article:
There are two main service discovery patterns: client‑side discovery and server‑side discovery. Let’s first look at client‑side discovery.
The Client‑Side Discovery Pattern
When using client‑side discovery, the client is responsible for determining the network locations of available service instances and load balancing requests across them. The client queries a service registry, which is a database of available service instances. The client then uses a load‑balancing algorithm to select one of the available service instances and makes a request.
The network location of a service instance is registered with the service registry when it starts up. It is removed from the service registry when the instance terminates. The service instance’s registration is typically refreshed periodically using a heartbeat mechanism.
Netflix OSS provides a great example of the client‑side discovery pattern. Netflix Eureka is a service registry. It provides a REST API for managing service‑instance registration and for querying available instances. Netflix Ribbon is an IPC client that works with Eureka to load balance requests across the available service instances. We will discuss Eureka in more depth later in this article.
The client-side discovery pattern has a variety of benefits and drawbacks. This pattern is relatively straightforward and, except for the service registry, there are no other moving parts. Also, since the client knows about the available service instances, it can make intelligent, application-specific load-balancing decisions such as using consistent hashing. One significant drawback of this pattern is that it couples the client with the service registry. You must implement client-side service discovery logic for each programming language and framework used by your service clients.
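Here is a minimal Go sketch of the client-side pattern described above, which also shows the TTL-based heartbeat the original question asks about: instances register themselves with an expiry and refresh it periodically, and clients fetch the live set and round-robin over it. The in-memory registry stands in for Redis/Eureka; the addresses, TTL, and helper names are assumptions for illustration only.

```go
// Client-side discovery sketch: a registry with TTL entries refreshed by
// heartbeats, and a client that round-robins over the live instances.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type Registry struct {
	mu      sync.Mutex
	entries map[string]time.Time // instance address -> expiry
	ttl     time.Duration
}

func NewRegistry(ttl time.Duration) *Registry {
	return &Registry{entries: map[string]time.Time{}, ttl: ttl}
}

// Heartbeat (re)registers an instance; roughly what SET with EX does in Redis.
func (r *Registry) Heartbeat(addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.entries[addr] = time.Now().Add(r.ttl)
}

// Live returns the instances whose TTL has not expired yet.
func (r *Registry) Live() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	var out []string
	now := time.Now()
	for addr, exp := range r.entries {
		if exp.After(now) {
			out = append(out, addr)
		}
	}
	return out
}

// Client-side load balancing: pick the next live instance round-robin.
type Client struct {
	reg *Registry
	n   atomic.Uint64
}

func (c *Client) Pick() (string, bool) {
	live := c.reg.Live()
	if len(live) == 0 {
		return "", false
	}
	i := c.n.Add(1)
	return live[int(i)%len(live)], true
}

func main() {
	reg := NewRegistry(5 * time.Second)
	reg.Heartbeat("10.0.0.1:8080")
	reg.Heartbeat("10.0.0.2:8080")

	c := &Client{reg: reg}
	for i := 0; i < 4; i++ {
		addr, _ := c.Pick()
		fmt.Println("calling", addr)
	}
}
```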
The Server‑Side Discovery Pattern
The client makes a request to a service via a load balancer. The load balancer queries the service registry and routes each request to an available service instance. As with client‑side discovery, service instances are registered and deregistered with the service registry.
The AWS Elastic Load Balancer (ELB) is an example of a server-side discovery router. An ELB is commonly used to load balance external traffic from the Internet. However, you can also use an ELB to load balance traffic that is internal to a virtual private cloud (VPC). A client makes requests (HTTP or TCP) via the ELB using its DNS name. The ELB load balances the traffic among a set of registered Elastic Compute Cloud (EC2) instances or EC2 Container Service (ECS) containers. There isn’t a separate service registry. Instead, EC2 instances and ECS containers are registered with the ELB itself.
HTTP servers and load balancers such as NGINX Plus and NGINX can also be used as server-side discovery load balancers. For example, this blog post describes using Consul Template to dynamically reconfigure NGINX reverse proxying. Consul Template is a tool that periodically regenerates arbitrary configuration files from configuration data stored in the Consul service registry. It runs an arbitrary shell command whenever the files change. In the example described by the blog post, Consul Template generates an nginx.conf file, which configures the reverse proxying, and then runs a command that tells NGINX to reload the configuration. A more sophisticated implementation could dynamically reconfigure NGINX Plus using either its HTTP API or DNS.
Some deployment environments such as Kubernetes and Marathon run a proxy on each host in the cluster. The proxy plays the role of a server‑side discovery load balancer. In order to make a request to a service, a client routes the request via the proxy using the host’s IP address and the service’s assigned port. The proxy then transparently forwards the request to an available service instance running somewhere in the cluster.
The server‑side discovery pattern has several benefits and drawbacks. One great benefit of this pattern is that details of discovery are abstracted away from the client. Clients simply make requests to the load balancer. This eliminates the need to implement discovery logic for each programming language and framework used by your service clients. Also, as mentioned above, some deployment environments provide this functionality for free. This pattern also has some drawbacks, however. Unless the load balancer is provided by the deployment environment, it is yet another highly available system component that you need to set up and manage.
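And a sketch of the server-side variant: a small Go reverse proxy (standing in for the per-host proxy or ELB described above) that picks a live backend per request, so clients only ever talk to the proxy. The backend addresses are hard-coded assumptions; in a real setup they would come from the service registry.

```go
// Server-side discovery sketch: a reverse proxy picks a backend per request,
// keeping the discovery logic out of the client entirely.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// In a real deployment this list would be fed from the registry
	// (Consul, Eureka, the Kubernetes Endpoints API, ...).
	backends := []*url.URL{
		mustParse("http://10.0.0.1:8080"),
		mustParse("http://10.0.0.2:8080"),
	}

	var counter atomic.Uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Round-robin selection happens on the server side,
			// invisible to the client.
			b := backends[int(counter.Add(1))%len(backends)]
			req.URL.Scheme = b.Scheme
			req.URL.Host = b.Host
		},
	}

	log.Fatal(http.ListenAndServe(":8000", proxy))
}

func mustParse(s string) *url.URL {
	u, err := url.Parse(s)
	if err != nil {
		panic(err)
	}
	return u
}
```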