Zuul Deployment in Kubernetes

This is my first time trying to deploy a microservices architecture into Kubernetes. At the beginning, I was considering using Ambassador as my API Gateway. I also have an authentication service which validates users and generates a JWT token; however, I need to validate this token every time a service is called. This creates an overhead problem, since every time the API Gateway receives traffic it has to go to this external authentication service to validate the JWT token, and Ambassador does not have an option to do this filtering without the use of the external service.
Using the Zuul Gateway seems like the best option in this case, since it allows me to validate the JWT token inside the gateway (not through an external service, as Ambassador does). However, I'm not sure how Zuul is going to work if I deploy it in Kubernetes since, as I understand it, Zuul requires the address of the service discovery (like Eureka).
If I deploy Zuul in my Kubernetes cluster, how will it be able to locate my services?
Locally, for example, there is no problem, since I was using Eureka before and I knew its address. Also, I don't think having Eureka deployed in Kubernetes is a good idea, since it would be redundant.
If it is not possible to do this with Zuul, is there another API Gateway or approach where I can validate tokens inside the gateway instead of relying on an external authentication service, as Ambassador does?
Thank you.

In Kubernetes you already have a "discovery" service: the Kubernetes Service. It locates pods and serves as a load balancer for them.
Let's say you have a Zuul configuration like this:
zuul:
  routes:
    books-service:
      path: /books/**
      serviceId: books-service
which routes requests matching /books/** to the service books-service. Usually Eureka would give you the real address of books-service, but not here.
And this is where Ribbon can help you: it allows you to manually tune routing after Zuul has matched the request. So you need to add this to the configuration:
books-service.ribbon.listOfServers: "http://books:8080"
and after Zuul has resolved the serviceId (books-service) it will route the request to books:8080.
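Putting the pieces together, a minimal application.yml sketch could look like this; note that ribbon.eureka.enabled: false (a standard Spring Cloud Netflix property) is needed so Ribbon uses the static server list instead of asking Eureka:

zuul:
  routes:
    books-service:
      path: /books/**
      serviceId: books-service

ribbon:
  eureka:
    enabled: false   # use the static listOfServers instead of Eureka

books-service:
  ribbon:
    listOfServers: http://books:8080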
And books:8080 is just a Kubernetes Service:
kind: Service
apiVersion: v1
metadata:
  name: books
spec:
  selector:
    app: spring-books-service
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 9376
You can think of it as a load balancer that takes traffic on :8080 and redirects it to pods with the label app: spring-books-service.
All you have to do next is assign labels to the pods (via Deployments, for example).
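For illustration, a minimal Deployment sketch that attaches that label (the image name is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: books
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spring-books-service
  template:
    metadata:
      labels:
        app: spring-books-service   # must match the Service selector above
    spec:
      containers:
        - name: books
          image: my-registry/spring-books-service:latest   # placeholder image
          ports:
            - containerPort: 9376   # the Service's targetPort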
By the way, you can configure Ribbon like this in any app, and Kubernetes will locate all your apps (pods) with its Services, so you don't need any discovery service at all! And since Kubernetes Services are much more reliable than Eureka, you can simply remove it.

Related

Kubernetes with cloud providers - How to route SSH traffic to services with LoadBalancers

I'm trying to build a Kubernetes cluster to allow multi-website testing, with multiple database engines, multiple PHP versions, multiple dependencies, multiple front-end stacks, ...
So, my goal is to build something similar to this: [infrastructure schema]
When using ingress-nginx, my cloud provider gives me a LoadBalancer IP.
I was able to deploy ingress-nginx to route my HTTP/HTTPS traffic to the right service using ingress host rules.
Now, I want to be able to connect via SSH to the project1_ssh service with the LoadBalancer IP on port 2022, and to the project2_ssh service with the same LoadBalancer IP on port 2023.
Can I achieve that?
I'm not sure ingress-nginx will allow me to do that.
I was able to successfully connect to my SSH service by declaring this kind of service:
kind: Service
apiVersion: v1
metadata:
  name: ssh-service
spec:
  selector:
    app: project1
  ports:
    - port: 2300
      targetPort: 23
  type: LoadBalancer
But doing this creates a new LoadBalancer IP, and a new bill from the cloud provider.
I want to have only one LoadBalancer service.
Any suggestions?
The idea is to run ~50 websites, each one in a pod.
OK, I finally removed ingress-nginx and switched to Traefik v2, and I achieved what I wanted.
Now I will try to figure out if SNI can make it even simpler (one single port for all my SSH services, where the host named in my SSH request would be used to route the connection to the right service inside the cluster).
https://kupczynski.info/2019/05/21/traefik-sni.html
Will let you know if it finally works.
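For reference, a sketch of what the per-port routing might look like with Traefik v2 (the IngressRouteTCP CRD is real; the entrypoint and service names are assumptions, and the entrypoint itself has to be declared in Traefik's static configuration, e.g. --entrypoints.ssh2022.address=:2022):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: project1-ssh
spec:
  entryPoints:
    - ssh2022                # assumed entrypoint bound to port 2022
  routes:
    - match: HostSNI(`*`)    # plain TCP (SSH carries no SNI), so match everything
      services:
        - name: project1-ssh-service   # assumed ClusterIP service in front of the SSH pod
          port: 22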

Within a k8s cluster, should I always call the ingress rule or the NodePort service name?

I have a number of RESTful services within our system.
Some are within the Kubernetes cluster.
Others are on legacy infrastructure and are hosted on VMs.
Many of our RESTful services make synchronous calls to each other (so not asynchronously, using message queues).
We also have a number of UIs (fat clients or web apps) that make use of these services.
We might define a simple k8s manifest file like this, containing a Pod, a Service, and an Ingress:
apiVersion: v1
kind: Pod
metadata:
  name: "orderManager"
  labels:
    app: "orderManager"   # needed so the Service selector below matches this Pod
spec:
  containers:
    - name: "orderManager"
      image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
  name: "orderManager-service"
spec:
  type: NodePort
  selector:
    app: "orderManager"
  ports:
    - protocol: TCP
      port: 50588
      targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orderManager-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: "orderManager-service"
                port:
                  number: 50588
I am really not sure what the best way is for RESTful services on the cluster to talk to each other.
It seems like there is only one good route for callers outside the cluster, which is to use the URL built by the ingress rule.
Within the cluster there are two options.
This might illustrate it further with an example:

| Caller | Receiver | Example URL | Notes |
| --- | --- | --- | --- |
| UI | On cluster | http://clusterip/orders | The UI would use the cluster IP and the ingress rule to reach the order manager |
| Service off cluster | On cluster | http://clusterip/orders | Just like the UI |
| On cluster | On cluster | http://clusterip/orders | Could use the ingress rule like the above approach |
| On cluster | On cluster | http://orderManager-service:50588/ | Could use the service name and port directly |
I write "cluster IP" a few times above, but in real life we put something on top so there is a friendly name, like http://mycluster/orders.
So when caller and receiver are both on the cluster, is it either:
Use the ingress rule, which is also used by services and apps outside the cluster
Use the NodePort service name, which is used in the ingress rule
Or perhaps something else!
One benefit of using the NodePort service name is that you do not have to change your base URL. The ingress rule adds an extra element to the route (in the above case, orders), so when I move a RESTful service from legacy to the k8s cluster it will increase the complexity.
It depends on whether you want requests to be routed through your ingress controller or not.
Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself (NGINX in this case) will proxy the request to the Service, and the request will then be routed to a Pod.
Sending the request directly to the Service's URL simply skips your ingress controller, and the request is routed directly to a Pod. Inside the cluster the Service is reachable through its DNS name, e.g. http://orderManager-service:50588/ or, fully qualified, http://orderManager-service.default.svc.cluster.local:50588/ (assuming the default namespace).
The trade-offs between the two options depend on your setup.
Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.
However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.
For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.
Whether the trade-offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.

How to create an HTTPS endpoint in Google Cloud from an HTTP-based server for Kubernetes Engine?

I have been trying to create an HTTPS endpoint in the Google Cloud K8s environment.
I have built a flask application in Python that is served by the waitress production server on port 5000.
serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30)
I created a Dockerfile and pushed the image to the Google Cloud repository. Then I created a Kubernetes cluster with one workload containing this image. After that, I exposed it via an external IP by creating a LoadBalancer. (After pushing the image to the Google repository, everything is managed through the Google Cloud Console. I do not have any configuration file; it should be done through the Google Cloud Console.)
Now I do have an exposed IP address and port number to access my application. Let's say this IP address and port are 11.111.11.222:1111. I can access this IP via Postman and get a result.
My goal is to expose this IP address via HTTPS as well, if possible, by using any Google Cloud resources (redirection, creating an ingress, etc.).
So, in the end, I want to reach the application through both http://11.111.11.222:1111 and https://11.111.11.222:1111.
Any suggestions?
A LoadBalancer translates to a network load balancer. You can configure multiple ports for it, e.g. 80 and 443. Then your application must handle the TLS part.
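A minimal sketch of such a two-port Service (the name, selector, and target port are assumptions based on the question's waitress setup):

apiVersion: v1
kind: Service
metadata:
  name: flask-lb
spec:
  type: LoadBalancer
  selector:
    app: flask            # assumed pod label
  ports:
    - name: http
      port: 80
      targetPort: 5000    # the waitress port from the question
    - name: https
      port: 443
      targetPort: 5000    # only useful if the app itself terminates TLS here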
The Ingress resource, by contrast, creates an HTTP(S) load balancer.
From the GKE perspective, you can try to configure an Ingress resource with HTTPS enabled:
Steps:
Create a basic flask app inside a pod (for example purposes only)
Expose the app via a Service object of type NodePort
Create a certificate
Create an Ingress resource
Test
Additional information (added by EDIT)
Create a basic flask app inside a pod (for example purposes only)
Below is a flask script which will respond with <h1>Hello!</h1>:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<h1>Hello!</h1>"

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080)
By default it will respond on port 8080.
Link to an answer with above script.
Expose the app via a Service object of type NodePort
Assuming that the Deployment is configured correctly, with a working app inside, you can expose it via a Service object of type NodePort with the following YAML definition:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: ubuntu
  ports:
    - name: flask-port
      protocol: TCP
      port: 80
      targetPort: 8080
Please make sure that:
the selector is configured correctly
targetPort points to the port the app is actually running on
Create a certificate
For the Ingress object to work with HTTPS you will need to provide a certificate. You can create one by following the official GKE documentation: Cloud.google.com: Managed certificates
Be aware that you will need a domain name to do that.
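A sketch of such a ManagedCertificate object, matching the flask-certificate name used in the Ingress below (the apiVersion has varied across GKE versions; older clusters used networking.gke.io/v1beta2):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: flask-certificate
spec:
  domains:
    - DOMAIN.NAME   # must be a domain you control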
Create an Ingress resource
Below is an example Ingress resource which will point your requests to your flask application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    networking.gke.io/managed-certificates: flask-certificate
    kubernetes.io/ingress.global-static-ip-name: flask-static-ip
spec:
  rules:
    - host: DOMAIN.NAME
      http:
        paths:
          - path: /
            backend:
              serviceName: flask-service
              servicePort: flask-port
Please take a specific look at the part of the YAML definition below and change it according to your case:
networking.gke.io/managed-certificates: flask-certificate
kubernetes.io/ingress.global-static-ip-name: flask-static-ip
Please wait for everything to configure correctly.
After that you will have access to your application via DOMAIN.NAME on ports:
80 (HTTP)
443 (HTTPS)
Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-- Kubernetes.io: Ingress TLS
Test
You can check whether the above steps are configured correctly by:
entering https://DOMAIN.NAME in your web browser and checking that it responds with Hello over HTTPS
using a tool such as curl -v https://DOMAIN.NAME
Please let me know if this solution works for you.
Additional information (added by EDIT)
You can try to configure a Service object of type LoadBalancer, which will operate at layer 4, as @Florian said in his answer.
Please refer to official documentation: Kubernetes.io: Create external load balancer
You can also use the Nginx Ingress controller and either:
Expose a TCP/UDP service by following Kubernetes.github.io: Ingress nginx: Exposing tcp udp services, which operates at L4 (see the sketch after this list).
Create an Ingress resource that will have SSL Passthrough configured by following: Kubernetes.github.io: Ingress nginx: Ssl passthrough
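As a sketch of the first option: ingress-nginx reads TCP mappings from a ConfigMap referenced by its --tcp-services-configmap flag; the external port and service here are assumptions based on the flask example above. The chosen port also has to be exposed on the ingress controller's own Service.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5000": "default/flask-service:8080"   # <external port>: <namespace>/<service>:<port>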
After researching, I found the answer in Google Cloud Run. It is very simple to deploy an HTTP-based flask app in a container, e.g. serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30) (no need for a self-signed certificate or HTTPS in this part, just make sure the HTTP app works), and then push it to Cloud Run.
Adjust the service parameters depending on how many resources you need to run it. In the machine settings, set the port that you are using in the Docker container to be mapped; for instance, in my case it is 5000. When you create the service, Google provides you with a domain address with HTTPS. You can use that URL and access your resources.
That's it!
For more information on Cloud Run:
https://cloud.google.com/serverless-options
The differences between computing platforms: https://www.signalfx.com/blog/gcp-serverless-comparison/

Weighted routing over kubernetes services

I have one master service and multiple slave services. The master service continuously polls a topic using a subscriber for Google Pub/Sub. The slave services are REST APIs. Once the master service receives a message, it delegates the message to a slave service. Currently I'm using a ClusterIP service in Kubernetes. Some of my requests are long running and some are pretty short.
I happen to observe that sometimes, if a short running request arrives while a long running request is in process, it has to wait for the long running request to finish, even though many pods are available without serving any traffic. I think it's due to the round robin load balancing. I have been trying to find a solution and looked into approaches like setting up an external HTTP load balancer with ingress and an internal HTTP load balancer. But I'm really confused about the difference between these two and which one applies to my use case. Can you suggest which of the approaches would solve my use case?
TL;DR
Assuming you want 20% of the traffic to go to service x and the remaining 80% to service y, create 2 ingress files, one for each of the 2 targets, with the same host name; the only difference is that one of them will carry the following ingress annotations: docs
nginx.ingress.kubernetes.io/canary: "true" #--> tell the controller to not create a new vhost
nginx.ingress.kubernetes.io/canary-weight: "20" #--> route here 20% of the traffic from the existing vhost
WHY & HOW TO
Weighted routing is a bit beyond the ClusterIP. As you said yourself, it's time for a new player to enter the game: an ingress controller.
This is a k8s abstraction for a load balancer, a powerful server sitting in front of your app and routing the traffic between the ClusterIPs.
install ingress controller on gcp cluster
Once you have it installed and running, use its canary feature to perform weighted routing. This is done using the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  rules:
    - host: echo.com
      http:
        paths:
          - backend:
              serviceName: http-svc
              servicePort: 80
here is the full guide.
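For completeness, the other ingress (the one taking the remaining 80% of the traffic) would be a plain rule for the same host without the canary annotations; a sketch, with an assumed service name:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc-main
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: echo.com
      http:
        paths:
          - backend:
              serviceName: http-svc-main   # assumed name of the service receiving 80%
              servicePort: 80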
External vs internal load balancing
(this is the relevant definition from the Google Cloud docs, but the concept is similar among other cloud providers)
GCP's load balancers can be divided into external and internal load
balancers. External load balancers distribute traffic coming from the
internet to your GCP network. Internal load balancers distribute
traffic within your GCP network.
https://cloud.google.com/load-balancing/docs/load-balancing-overview

Kubernetes pods cannot make HTTPS requests after deploying Istio service mesh

I am exploring the Istio service mesh on my k8s cluster hosted on EKS (Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the bookinfo demonstration, and most of the use cases I understand properly.
Then I deployed Istio using the helm default profile (recommended for production) on my existing dev cluster with hundreds of microservices running, and what I noticed is that my services can call HTTP endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting:
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
Though I am able to call external HTTPS endpoints from my testing cluster.
To verify, I checked the egress policy and it is mode: ALLOW_ANY in both clusters.
Now I removed Istio completely from my dev cluster and installed the demo.yml to test, but now this is also not working.
I tried to relate my issue to this, but didn't have any success:
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose port name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
    - name: http-ssl # <<<< THIS NAME MATTERS
      port: 443
      targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens to port 443, and make sure the port name doesn't start with http-....
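For reference, the fix is just the port rename in the Service definition above:

ports:
  - name: tcp-https   # or "https"; no longer matches Istio's http- prefix
    port: 443
    targetPort: http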