As I understand it, when two services communicate in App Mesh, a service doesn't call another service directly; it communicates through the Envoy sidecar. In that case the Envoy sidecar acts as the client and the remote service acts as the server. Can Envoy speak HTTP/2 to the server if the server supports HTTP/2?
On the other hand, when the service calls its own Envoy sidecar, the service acts as the client and Envoy acts as the server. Can Envoy handle an HTTP/2 connection in that direction, i.e. does Envoy act as an HTTP/2 listener?
One of our internal services is attached to an Application Load Balancer (ALB). The ALB supports HTTP/2, so the front-end-to-ALB leg can use HTTP/2. If that service supports HTTP/2, can the ALB talk to it over HTTP/2 as well? And does that traffic go directly to the service, or to its Envoy sidecar?
I'm asking to check whether the whole internal communication path can run over HTTP/2. Please correct me if I'm mistaken.
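For context, here is a hedged sketch of how I understand the HTTP/2 listener would be declared on a virtual node, assuming the App Mesh controller for Kubernetes is in use (all names are hypothetical):

```yaml
# Hypothetical names; a sketch of a virtual node whose Envoy listener is declared as http2,
# assuming the App Mesh controller for Kubernetes manages the mesh resources.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: internal-service
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: internal-service
  listeners:
    - portMapping:
        port: 8080
        protocol: http2   # the Envoy sidecar serves and originates HTTP/2 on this port
  serviceDiscovery:
    dns:
      hostname: internal-service.demo.svc.cluster.local
```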
I have a gRPC-based web service that runs in Google Kubernetes Engine (GKE), and I have had no luck applying Cloud Armor to it.
Currently, this web service is exposed via a Kubernetes Service of type LoadBalancer, which is bound to an external TCP/UDP Network Load Balancer in Google Cloud, and that all works fine.
The issue is that Cloud Armor cannot be applied to an External TCP/UDP Network load balancer.
So I've tried exposing the web service via Kubernetes Services of type NodePort and ClusterIP so that it can be bound to an Ingress that uses a load balancer supported by Cloud Armor (Global External HTTP(S), Global External HTTP(S) (classic), External TCP proxy, or External SSL proxy).
But I can't seem to find a configuration that actually handles the gRPC traffic correctly and has a working health check.
Has anyone else been able to get a gRPC-based web service running in GKE and protected with Cloud Armor?
More background:
The web service is Go-based, and it has two features to facilitate Kubernetes health checks. First, it supports the standard gRPC health-checking protocol, and the container it is built into also has the grpc-health-probe executable (this looks to be working correctly for the pod liveness/readiness checks). Second, it serves an HTTP/1 200/OK on the '/' route on the same port on which it listens for the HTTP/2 gRPC traffic.
The web service runs with TLS using a CA-signed cert and a 4096-bit key, and it currently terminates the TLS client traffic itself. But I am open to having the TLS traffic terminated at the edge/load balancer, if it can be made to work for gRPC calls.
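For concreteness, this is roughly the kind of wiring I've been attempting; all names are hypothetical, and it is only a sketch assuming container-native load balancing (NEGs), TLS kept on the pod, and the HTTP/1 '/' handler used for the load-balancer health check:

```yaml
# Hypothetical names; a sketch, not a known-working configuration.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: grpc-backendconfig
spec:
  healthCheck:
    type: HTTPS        # the pod serves TLS itself and answers 200/OK on "/"
    requestPath: /
    port: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'                         # container-native LB
    cloud.google.com/app-protocols: '{"grpc":"HTTP2"}'                # LB -> backend over HTTP/2
    cloud.google.com/backend-config: '{"default": "grpc-backendconfig"}'
spec:
  type: ClusterIP
  selector:
    app: grpc-app
  ports:
  - name: grpc
    port: 8443
    targetPort: 8443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  defaultBackend:
    service:
      name: grpc-service
      port:
        number: 8443
```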
Cloud Armor support for the SSL/TCP proxy load balancers is available, but there are some limitations:
Users can reuse existing Cloud Armor security policies (Backend security policies) or create new ones.
Only security policies with the following rule properties are supported for TCP/SSL proxy backend services:
Match Conditions: IP, Geo, ASN
Action: Allow, deny, throttle, rate-based-ban
Availability and limitations:
Security policies can be created/configured in the Console or via API/CLI
New or existing security policies can be attached to backend services fronted by TCP/SSL Proxies only via the API/CLI (see the example below).
To enable Cloud Logging events, leverage CLI/API to enable TCP/SSL Proxy logging on the relevant backend service as described in the Load Balancer documentation
There is also a Network Load Balancer option that is currently pre-GA; it is expected to be generally available sometime in H1 2023.
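As a hedged sketch with hypothetical names, attaching an existing policy to the SSL/TCP proxy backend service and enabling logging would look roughly like this on the CLI:

```sh
# Attach an existing Cloud Armor policy to the proxy's backend service (hypothetical names)
gcloud compute backend-services update my-ssl-proxy-backend \
    --security-policy=my-edge-policy \
    --global

# Turn on logging for the same backend service (CLI/API only, per the notes above)
gcloud compute backend-services update my-ssl-proxy-backend \
    --enable-logging --logging-sample-rate=1.0 \
    --global
```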
I have an existing microservice architecture that uses Netflix Eureka and Zuul services.
I've deployed a pod that successfully registers with the discovery server, but when I hit the API it times out. My guess is that the pod registers with its container IP, which is why it is not reachable.
Is there a way to either map the correct address or redirect the call to the proper URL? I'm looking for an easy way, as this needs to be done for multiple services.
I think you should rethink your design the Kubernetes way! Eureka (service discovery) and the Zuul server (API gateway/load balancer) are extra services that you really don't need on the Kubernetes platform.
For service discovery and load balancing, you can use Services in Kubernetes.
From Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a
network service. With Kubernetes, you don't need to modify your
application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them.
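As a minimal sketch with hypothetical names, each microservice simply gets its own Service, and other pods reach it through its stable DNS name instead of a Eureka lookup:

```yaml
# Hypothetical names; a sketch of a Service for one microservice.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders        # matches the labels of the orders pods
  ports:
  - port: 80           # clients call http://orders (or orders.<namespace>.svc.cluster.local)
    targetPort: 8080   # container port of the pods
```

Kubernetes keeps the endpoint list up to date and load-balances across the matching pods, so there is nothing to register at startup.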
And for the API gateway, you can look at Ingress in Kubernetes.
There are different Ingress Controller implementations for Kubernetes; I'm using the Ambassador API gateway implementation.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
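For example, a plain Ingress (again with hypothetical names) can take over the path-based routing that Zuul was doing; this is only a sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
  - host: api.example.com      # hypothetical external hostname
    http:
      paths:
      - path: /orders          # route /orders traffic to the orders Service
        pathType: Prefix
        backend:
          service:
            name: orders
            port:
              number: 80
```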
Is there currently a way to serve WebSockets from an application deployed on Okteto Cloud, given the Okteto-specific limitations around Ingresses and Services?
I've read that this would only be possible using a Service or Ingress of type LoadBalancer, so that is what I've tried.
But according to the Okteto docs, Services of type LoadBalancer (or NodePort) are managed: in practice they seem to get transformed automatically into a ClusterIP Service and exposed to the internet on an auto-generated URL.
Do these handle only HTTP requests? Or is there a way to make them handle other kinds of TCP- or UDP-based connections (like WebSockets)?
You don't need a LoadBalancer to use WebSockets; they can be served from an Ingress backed by a ClusterIP Service as well (this is what Okteto Cloud uses for our endpoints). This setup supports HTTPS, WebSockets, and even gRPC-based endpoints.
This sample shows you how to use WebSockets in a Node app deployed in Okteto Cloud; hope it helps! (It uses Okteto-generated Kubernetes manifests, but you can also bring your own.)
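A minimal sketch of that setup, with hypothetical names (Okteto Cloud assigns the public HTTPS endpoint for the Ingress):

```yaml
# Hypothetical names; a plain ClusterIP Service plus an Ingress is enough for WebSocket traffic.
apiVersion: v1
kind: Service
metadata:
  name: ws-app
spec:
  type: ClusterIP
  selector:
    app: ws-app
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-app
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ws-app
            port:
              number: 8080
```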
I have a cluster on AWS installed via kops. Now I need to expose a WebSocket service (with security enabled, i.e. wss://) to the outside world. There are different ingress controllers: NGINX, Traefik, ELBs, ALBs. Which one is suggested, given that it should be:
easy to deploy and configure
able to support http://, https://, ws://, and wss://
In my opinion this question is opinion-based and too broad; please try to avoid such questions, as there is no single solution that is the best.
I was able to find plenty of resources about NGINX and WebSockets. I don't have production experience configuring this, but I think you might find them helpful.
NGINX is a popular choice for an Ingress Controller for a variety of
features:
Websocket, which allows you to load balance Websocket applications.
SSL Services, which allows you to load balance HTTPS applications.
Rewrites, which allows you to rewrite the URI of a request before sending it to the application.
Session Persistence (NGINX Plus only), which guarantees that all the requests from the same client are always passed to the same
backend container.
Support for JWTs (NGINX Plus only), which allows NGINX Plus to authenticate requests by validating JSON Web Tokens (JWTs).
The most important part with NGINX is the annotation that specifies which services are WebSocket services. There is more information about usage and configuration in the docs, and also a useful tutorial about configuring the NGINX ingress; although it is about GKE, it might still be useful.
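For example, assuming the NGINX Inc. kubernetes-ingress controller, a hedged sketch with hypothetical names would mark the WebSocket backend via the nginx.org/websocket-services annotation and terminate wss:// with a TLS secret:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-ingress
  annotations:
    nginx.org/websocket-services: "ws-backend"   # services that should be proxied as WebSockets
spec:
  tls:
  - hosts:
    - ws.example.com
    secretName: ws-example-tls                   # terminates wss:// at the ingress
  rules:
  - host: ws.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ws-backend
            port:
              number: 8080
```

Note that the community ingress-nginx controller proxies WebSockets out of the box; there you typically only raise the nginx.ingress.kubernetes.io/proxy-read-timeout and proxy-send-timeout annotations so long-lived connections aren't dropped.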
I have a workload deployed in Kubernetes. I have exposed it using a LoadBalancer Service because I need an external IP to communicate with the workload.
The external IP is now publicly accessible. How do I secure it so that only I can access it from an external application?
Kubernetes doesn't come with out-of-the-box authentication for externally exposed services. If you have more services and security is important to you, I would take a look at the Istio project. You can configure authentication for your services in a declarative way using an authentication policy:
https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication
Using Istio you can secure not only incoming connections, but also outgoing and internal traffic.
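With recent Istio versions, the end-user authentication linked above is expressed as a RequestAuthentication plus an AuthorizationPolicy; a hedged sketch with hypothetical names:

```yaml
# Hypothetical names; validates JWTs on the selected workload and rejects requests without one.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-workload
  jwtRules:
  - issuer: "https://accounts.example.com"
    jwksUri: "https://accounts.example.com/.well-known/jwks.json"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-workload
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # only requests carrying a valid JWT are allowed
```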
If you are new to the service mesh concept and don't know where to start, you can check out the kyma-project, where Istio is already configured and you can apply token validation with one click in the UI or with a single kubectl command. Check the example:
https://github.com/kyma-project/examples/tree/master/gateway