My Kubernetes service is running on HTTPS; how do I expose it over the Istio ingress gateway?

I deployed NiFi using the cetic Helm chart in GKE, and its service works over HTTPS.
Now I don't want to expose it as a LoadBalancer service; I'm using the Istio ingress gateway to expose it on my DNS name.
I have used Istio for an HTTP service before, but now I'm confused about HTTPS.
Please help me with this.
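One common approach here, for reference, is to let the Istio ingress gateway pass the TLS traffic through to NiFi unterminated, so NiFi keeps serving HTTPS itself. A minimal sketch, assuming the DNS name nifi.example.com, a nifi namespace and Service from the cetic chart, and NiFi's HTTPS port 8443 (verify all of these against your deployment):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nifi-gateway
  namespace: nifi                  # assumed namespace of the NiFi release
spec:
  selector:
    istio: ingressgateway          # Istio's default ingress gateway pods
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH            # do not terminate TLS; NiFi keeps its own cert
    hosts:
    - nifi.example.com             # assumed DNS name
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nifi
  namespace: nifi
spec:
  hosts:
  - nifi.example.com
  gateways:
  - nifi-gateway
  tls:                             # TLS (SNI-based) routing, not plain http routing
  - match:
    - port: 443
      sniHosts:
      - nifi.example.com
    route:
    - destination:
        host: nifi                 # assumed Service name from the chart
        port:
          number: 8443             # assumed NiFi HTTPS port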

Related

Microservices API gateway and IdentityServer4 on Kubernetes

I have microservices and an SPA app. All of them run on Docker with Docker Compose. I have an Ocelot API gateway, but the gateway has to know the IP addresses or container names of the microservices to reach them. I added an aggregator service inside the Ocelot app, and I can reach all services from the aggregator service via their IPs.
Now I want to move to Kubernetes, where I can scale services and there is no static IP. How can I configure this?
I also have an identity service. This service knows the clients' IP addresses. Again, the same problem.
I searched for hours and found some keywords: Envoy, Ingress, Consul, Ocelot. Can someone explain these things?
It sounds like your question is related to service discovery.
In Kubernetes, the native way an "API gateway" is implemented is by using Ingress resources and Ingress controllers. If you use a cloud provider, they usually have a product for this, or you can deploy a custom Ingress controller within the cluster.
Service discovery the Kubernetes way is done by referring to Service resources; e.g. Ingress resources map URLs (in your public API) to Services. Your app is deployed as a Deployment resource, and all its replicas (instances) are exposed via a Service resource. An app can also send requests to other apps, and it should address those requests to the Service resource of the other app. The Service resource load-balances across the replicas of the receiving app.
You can use the service name to connect to a service instead of a client IP.
For example: curl http://<service-name>.<namespace>.svc.cluster.local
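To make that concrete, here is a minimal sketch of a Service in front of a Deployment's replicas; the names, labels and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: orders                 # stable name other apps use to reach this app
  namespace: shop              # illustrative namespace
spec:
  selector:
    app: orders                # must match the pod labels of the Deployment
  ports:
  - port: 80                   # stable port clients connect to
    targetPort: 8080           # container port of the replicas

Any pod in the cluster can then call http://orders.shop.svc.cluster.local, regardless of how many replicas exist or which IPs they currently have.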
Now, if you are looking for a list of API gateways and identity servers for Kubernetes,
there are several options; it all depends on your requirements.
For basic requirements, the NGINX ingress (and other ingress controllers) is available, while if you are looking for an API gateway:
Kong API Gateway
Ambassador API Gateway
Tyk API Gateway
Apart from that, a service mesh can also be useful, though not in every scenario, because it is mostly used for managing internal (east-west) traffic.
An API gateway is mostly used for managing edge (north-south) traffic.
List of identity servers:
Keycloak
Cognito IAM (AWS)
Ingress controllers:
GCE ingress
NGINX ingress controller
Kong ingress controller
Gloo
HAProxy
Azure Application Gateway (AKS)
Istio ingress

Azure Kubernetes Service: using an NGINX ingress controller with an Ocelot-based API gateway

I am planning to deploy to an AKS cluster and use an NGINX ingress controller, so that my microservices will be internal to the cluster and the NGINX ingress controller will be the entry point to them.
One of my microservices acts as an API gateway using the Ocelot library, and it implements the BFF pattern. So my ingress controller will have only one rule, which will route requests made to the path "/(.*)" to the API gateway microservice.
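Concretely, that single catch-all rule would look roughly like this (the service name and port are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bff-entrypoint
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"   # treat the path as a regex
    nginx.ingress.kubernetes.io/rewrite-target: /$1 # forward the captured path as-is
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: ocelot-gateway    # placeholder: the Ocelot BFF service
            port:
              number: 80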
My question is: is this the conventional way to use an ingress controller together with an API gateway microservice? Somehow it feels redundant, although I can see that they have different responsibilities.
I don't think you need an Ingress controller in this case. We use an API gateway (Ambassador) and simply have a public IP assigned to its Kubernetes Service.
If you don't expect other pods to expose themselves using Ingress objects, and all traffic will be coming in through your API gateway, I would simply drop the Ingress controller and use a Service of type LoadBalancer for your API gateway pods.
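A minimal sketch of such a Service; the selector label and ports are assumptions about how the gateway pods are set up:

apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer         # the cloud provider assigns a public IP
  selector:
    app: api-gateway         # assumed label on the Ambassador/gateway pods
  ports:
  - port: 443                # public port
    targetPort: 8443         # assumed container port of the gateway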

Does Istio on AWS require an AWS ALB?

I will install Istio as a service mesh on AWS EKS. I know that Istio provides its own ingress gateway. What I am confused about is: do we still need to use an AWS ALB or ELB in front of the Istio ingress gateway?
Given that Istio creates a Service of type LoadBalancer for its ingress gateway Deployment, Kubernetes will take care of provisioning the ELB for you. There is no need to create it yourself, although you could also configure the Service to point at an existing ELB.
The linked Service definition is outdated and for ease of reference only; the latest Istio chart is actually here. You should be able to download it and confirm the Service configuration.
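For illustration, the relevant part of that Service looks roughly like this; the NLB annotation is optional and only shows how the provisioning can be steered, and the ports/labels should be verified against the chart:

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # optional: ask the AWS cloud provider for an NLB instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer          # Kubernetes provisions the AWS load balancer
  selector:
    istio: ingressgateway
  ports:
  - name: https
    port: 443
    targetPort: 8443          # assumed gateway container port; check the chart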

How to use GKE Ingress along with Nginx Ingress?

GKE Ingress: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Nginx Ingress: https://kubernetes.github.io/ingress-nginx/
Why GKE Ingress
GKE Ingress can be used together with Google's managed SSL certificates. These certificates are deployed on the load balancer's edge servers, which results in a very low TTFB (time to first byte).
What's wrong with GKE Ingress
The HTTP/domain routing is done in the load balancer using 'forwarding rules', which are very pricey: around $7.2 per rule per month, and each domain requires one rule.
Why Nginx Ingress
Nginx Ingress also creates a (TCP/UDP) load balancer, where we can specify the HTTP/domain routing via the ingress controller. Since the routing is done inside the cluster, there is no additional cost for adding domains to the rules.
What's wrong with Nginx Ingress
To enable SSL we can use cert-manager, but then TLS is terminated inside the cluster. As I mentioned above, Google's managed certificates are deployed on edge servers, which results in much lower latency.
My Question
Is it possible to use both of them together, so that HTTPS requests first hit the GKE ingress, which terminates SSL and routes the traffic to the Nginx ingress, which in turn routes it to the corresponding pods?
It is not possible to point an Ingress at another Ingress. Furthermore, in your particular case it is also not possible to point the GCE ingress class at Nginx, since it relies on an HTTP(S) Load Balancer, which can only have GCE instances/instance groups (basically the node pools in GKE) or GCS buckets as backends.
If you were to deploy an Nginx ingress on GKE, it would spin up a Network Load Balancer, which is not a valid backend for the HTTP(S) Load Balancer.
So it is possible neither via Ingress nor via GCP infrastructure features. However, if you need the GCE ingress class to be hit first and then manage further routing with Nginx, you might want to consider running Nginx as a plain Kubernetes Service/Deployment to manage the incoming traffic once it is within the cluster network.
You can create a ClusterIP Service for internally accessing your Nginx deployment and, from there, use cluster-local hostnames to redirect to other services/applications within the cluster.
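A rough sketch of that internal Service; the names and labels are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: internal-nginx
spec:
  type: ClusterIP            # reachable only from inside the cluster
  selector:
    app: internal-nginx      # assumed label on the Nginx Deployment's pods
  ports:
  - port: 80
    targetPort: 80

Other workloads can then reach it at internal-nginx.<namespace>.svc.cluster.local and let Nginx do the further routing.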

Securing Nginx-Ingress > Istio-Ingress

I have an Istio ingress which works with traffic going into my microservices, and traffic between microservices is encrypted within the Istio mesh. But I don't want to expose the Istio ingress to the public.
So I tried deploying an NGINX or HAProxy ingress (with HTTPS certs) and pointing it at the Istio ingress, and everything works great.
My only worry now is that traffic between the NGINX ingress (HTTPS termination) and the Istio ingress is not encrypted.
What is the usual way in Istio to get full encryption of traffic while keeping an NGINX/HAProxy ingress?
I guess one way is to run HAProxy in TCP mode towards the Istio ingress, with certs on the Istio ingress. I haven't tried it, but it should work. A wilder idea is running the NGINX ingress within the Istio mesh, but then I would lose some Istio ingress capabilities.
What is the recommended way, or do you have any suggestions? How is Istio usually exposed in a real production environment?
Since I don't use a cloud load balancer in front of the Voyager instances, but instead expose Voyager/HAProxy on a host port,
I colocated Voyager (HostPort) and Istio (HostPort) on the same machines, called frontend, via a DaemonSet with node selectors (and taints). Then I just pointed Voyager to load-balance to the loopback/localhost address, on the Istio HostPort I specified:
backendRule:
- 'server local-istio localhost:30280'
This way no unencrypted traffic traverses the network between Voyager/HAProxy and the Istio ingress, since they now communicate on the same host. I have two frontend nodes which are being load-balanced, so I have redundancy. It is a bit of an improvisation and breaks the usual Kubernetes logic, but on the other hand it works great.
The other solution was to use self-signed certificates on Istio and then just point Voyager/HAProxy at the Istio instances. But this involves multiple TLS terminations, since Voyager also terminates HTTPS. The advantage is that you can leave it to Kubernetes to distribute the Voyager and Istio instances; there is no need to bind them to specific machines.
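For the second option, the Istio side could look roughly like this: a sketch assuming gateway SDS is available and that a Secret named istio-selfsigned-cert holds the self-signed certificate and key (the hostname is also an assumption):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: internal-tls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                             # Istio terminates TLS here
      credentialName: istio-selfsigned-cert    # assumed Secret with cert/key
    hosts:
    - "*.example.internal"                     # assumed internal hostname

Voyager/HAProxy would then re-encrypt towards port 443 of the Istio ingress, so the hop across the network stays encrypted.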