Is there a limit to the maximum number of Istio Gateway resources? - kubernetes

I've been using Stack Overflow in my work for a long time, but this is actually my first question here :)
We currently have a K8s cluster with an Istio Ingress Gateway and cert-manager for our public-facing, TLS-terminated endpoint, and we want to set up another public endpoint serving customers' own domain names (from their own DNS providers).
Customers could go to their project back office and ask to connect their project to their own domain name by filling in a simple form and pasting the appropriate CNAME/A record into their DNS provider.
These custom domain names would be served by another Ingress Gateway instance configured with one Let's Encrypt certificate per domain, generated by cert-manager using HTTP-01 challenges.
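For context, the per-domain certificate could be something like this minimal cert-manager sketch; the domain, secret, and issuer names are hypothetical, and it assumes a ClusterIssuer already configured for Let's Encrypt HTTP-01 solving:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: customer-acme-tls            # hypothetical, one per customer domain
  namespace: istio-system            # must live where the ingress gateway can read the secret
spec:
  secretName: customer-acme-tls      # consumed by the Gateway's credentialName below
  dnsNames:
  - "www.customer-acme.example"
  issuerRef:
    name: letsencrypt-http01         # hypothetical ClusterIssuer using HTTP-01 challenges
    kind: ClusterIssuer
```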
However, I wonder about the relevance and scalability of this stack.
First, as this thread's title says: is there any limit to the number of Gateway resources that a single ingress gateway instance can handle? We could group all custom rules into a single Gateway resource, but having a separate Gateway per customer would be much easier for maintenance, updates, and cleanup. Theoretically this could go up to a few dozen or even 100+ Gateway resources.
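For illustration, a per-customer Gateway could be a minimal sketch like the following, reusing the secret issued above (again, all names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: customer-acme-gateway        # hypothetical, one Gateway per customer
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway            # targets the custom-domains gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                   # plain TLS termination
      credentialName: customer-acme-tls   # secret created by cert-manager above
    hosts:
    - "www.customer-acme.example"
```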
As a side note, do you have any specific feedback or recommendations on dynamically generating customers' certificates and ingress rules, whether with this approach or a better one?
One consequence that I really dislike is that we have to push K8s manifests from one of our microservices (and grant it permissions to do so), putting infrastructure code inside the application codebase and opening up potential security flaws...
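If you do end up applying manifests from a microservice, one way to limit the blast radius is a narrowly scoped Role bound to that service's ServiceAccount; a minimal sketch, with all names hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: domain-onboarding            # hypothetical
  namespace: istio-system            # wherever the Gateway/Certificate objects live
rules:
- apiGroups: ["networking.istio.io"]
  resources: ["gateways", "virtualservices"]
  verbs: ["create", "get", "list", "update", "delete"]
- apiGroups: ["cert-manager.io"]
  resources: ["certificates"]
  verbs: ["create", "get", "list", "update", "delete"]
```

A matching RoleBinding would tie this Role to the ServiceAccount the microservice runs under, so it can touch nothing else in the cluster.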
Thanks !

Related

How many rules can be added to nginx ingress resource [duplicate]

We have a multi-tenant application, and for each tenant we provision a separate container image.
Likewise, we create a subdomain for each tenant, which is routed to its own container.
There might be scenarios with thousands of tenants, and the set of tenants is dynamic.
So it has become necessary for us to consider the limitations of ingress controllers for Kubernetes in general before choosing one, especially nginx-ingress.
Is there any maximum limit on the number of Ingress resources, or rules inside an Ingress, that can be created? And will there be performance or scaling issues when too many Ingress resources are created?
Is it better to add a new rule (for each subdomain) to the same Ingress resource, or to create a separate Ingress resource for each subdomain?
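For reference, the second option (one Ingress per subdomain) would look something like this minimal sketch; the tenant names and backend service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-a                     # hypothetical, one Ingress per tenant
spec:
  ingressClassName: nginx
  rules:
  - host: tenant-a.example.com       # the tenant's subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant-a           # the tenant's own container, behind its own Service
            port:
              number: 80
```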
AFAIK, there are no hard limits like that; you will either run out of resources or hit a choke point first. This article compares the resource consumption of several load balancers.
As for nginx-ingress, a few features are hidden behind the paid NGINX Plus version, as listed here.
If you want dynamic configuration and scalability, you should try an Envoy-based ingress like Ambassador or Istio.
Envoy offers dynamic configuration updates that do not interrupt existing connections. More info here.
Check out this article, which compares most of the popular Kubernetes ingress controllers.
This article shows a great example of pushing an HAProxy and Nginx combination to its limits.
Hope it helps.

Ingress controller vs api gateway

I would like to know the differences between an API gateway and an Ingress controller. People tend to use these terms interchangeably due to the similar functionality they offer. When I say 'Ingress controller', don't confuse it with the Ingress objects provided by Kubernetes. Also, it would be nice if you could explain a scenario where one is more useful than the other.
Is 'API gateway' a generic term for traffic routers in the cloud-native world, and is an 'Ingress controller' the Kubernetes implementation of an API gateway?
An Ingress controller allows a single IP and port to reach all services running in k8s through ingress rules. The Ingress controller's Service is set to type LoadBalancer, so it is accessible from the public internet.
An API gateway is used for application routing, rate limiting, security, request and response handling, and other application-related tasks. Say you have a microservice-based application in which a request needs information collected from multiple microservices. You need a way to distribute user requests to the different services, gather the responses from all the microservices, and prepare the final response to send back to the user. An API gateway does this kind of work for you.
Ingress
Ingress manages and routes traffic into Kubernetes services.
Ingress rules are configured in YAML and backed by an Ingress controller (the NGINX ingress controller is a famous one).
The Ingress controller runs behind a single Kubernetes Service that gets exposed as a LoadBalancer.
A list of other ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
For a simple mental model, you can think of ingress as an Nginx server that just forwards traffic to services based on a ruleset.
Ingress doesn't have as much functionality as an API gateway. Some ingress controllers don't support authentication, rate limiting, application routing, security, request/response merging, and other add-on/plugin options.
API gateway
An API gateway can also do simple routing, but it mostly gets used when you need more flexibility, security, and configuration options.
There are lots of parameters to compare when choosing between an ingress and an API gateway; it depends largely on your use case.
API gateways like KrakenD and Kong are way better than ingress when it comes to security integrations (OAuth plugins, API key options), and they support rate limiting and API aggregation.
The Kong API gateway also has a good plugin system, which you can use to configure logging/monitoring of traffic as well.
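As an illustration of that plugin model, here is a minimal sketch of Kong's rate-limiting plugin as declared for the Kong Ingress Controller; the plugin name and limits are hypothetical:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5rpm              # hypothetical
plugin: rate-limiting
config:
  minute: 5                          # allow at most 5 requests per minute
  policy: local                      # count per Kong node rather than in a shared store
```

It would then be attached to a Service or Ingress via the `konghq.com/plugins: rate-limit-5rpm` annotation.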
There are many API gateways available on the market, just as with ingress controllers; you can check the API gateway features and a comparison below.
Read more at: https://medium.com/@harsh.manvar111/api-gateway-identity-server-comparison-ec439468cc8a
If your use case is small and you are sure about your requirements, you can also use ingress in production; an API gateway is not necessary.
Indeed, both have a set of features that intersect: path mapping, path conversion, load balancing, etc.
However, they do differ. I may be wrong, but you create an Ingress 1) to run it in Kubernetes and 2) to act more like a 'Kubernetes-native' reverse proxy.
An API gateway can be installed anywhere (although there are now many that run natively in Kubernetes, like Ambassador, Gloo, and Kong), and they have more functionality available, like a developer portal, rate limiting, etc.
Personally, I use an ingress as a reverse proxy for a website, and an API gateway for APIs. This does not mean you can't use ingress for APIs; however, you would not be taking full advantage of it.

What is the general approach to structuring or modeling Istio policies around services in a repository?

Currently our GKE cluster consists of multiple services running in different namespaces, and the services communicate with each other too. We're using:
each service in its own git repo
each service repo containing: source code, its Helm chart defining the app deployment and the infra surrounding it (Service, Istio Ingress/Egress Gateway, etc.), and its own CI/CD (Jenkinsfile).
Right now, I also want to incorporate Istio security policies to enforce the security of svc-to-svc communication. I understand the basic concepts. Now my question is which repo each service policy should go in.
For example, say I have Service A (client) communicating with Service B (server). Istio has 3 different kinds of policy enforcement:
mesh-wide policy
namespace-wide policy
service-specific policy
Since our GKE cluster is still at an early stage of Istio adoption and I want to keep central governance effort low, I prefer to adopt service-specific policies, so each service owner can govern their own policy too.
I am thinking of putting the Policy (the service-specific policy) in each service repo that acts as a server. The reasoning behind this is that the Policy enforces the incoming traffic policy for the service (not the outgoing one).
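In current Istio versions, such a service-specific policy would be expressed as a PeerAuthentication with a workload selector (rather than the older Policy resource); a minimal sketch that would live in Service B's repo, with the namespace and labels hypothetical:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: service-b-mtls               # hypothetical, owned by the Service B team
  namespace: team-b
spec:
  selector:
    matchLabels:
      app: service-b                 # applies only to Service B's workloads
  mtls:
    mode: STRICT                     # require mTLS on all incoming traffic
```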
But I am wondering about the DestinationRule. From the article Istio provides here:
To configure the client side, you need to set destination rules to use mutual TLS.
From the quote above, my understanding is that the DestinationRule is what enforces the client side (which has the Istio sidecar container). So the DestinationRule should be put in the client service's repo (in the given case, Service A's repo).
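For reference, that client-side piece would be a DestinationRule along these lines (host and names are hypothetical); note that recent Istio versions can often handle this automatically via auto-mTLS:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-b-client-mtls        # hypothetical, a candidate for Service A's repo
spec:
  host: service-b.team-b.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL             # originate Istio mTLS from the client sidecar
```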
But of course, on the server side (Service B's repo), the team also wants certain load-balancing and traffic-splitting mechanisms (canary, stable, versioning, etc.), which can only be defined with a VirtualService and a DestinationRule.
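Those server-side concerns would look roughly like this sketch, splitting traffic 90/10 between two hypothetical subsets:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b.team-b.svc.cluster.local
  subsets:
  - name: stable
    labels:
      version: v1                    # pods labeled version=v1
  - name: canary
    labels:
      version: v2                    # pods labeled version=v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b.team-b.svc.cluster.local
  http:
  - route:
    - destination:
        host: service-b.team-b.svc.cluster.local
        subset: stable
      weight: 90
    - destination:
        host: service-b.team-b.svc.cluster.local
        subset: canary
      weight: 10
```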
Any thoughts on this? Does anyone have a general pattern or approach for organizing these policy manifests (Istio YAML files) around services in repositories?

How to consume an Istio-based Service that enables `mtls`?

Currently, I want to introduce Istio as the service-mesh framework for our microservices. I have played with it for some time (< 1 week), and my understanding is that Istio provides an easy way to secure service-to-service communication. Most (or all?) of the Istio docs and articles provide examples of how a client and a server that both have istio-proxy (Envoy) installed as a sidecar container can establish secure communication using mTLS.
However, our existing clients (which I have no control over), who consume our service (which will be migrated to Istio), don't have Istio, and I still don't understand well how we should best handle this.
Is there any tutorial or example that covers my use case?
How can a non-Istio client use mTLS to consume our Istio-based service? Think of using a basic curl command to simulate this.
Also, I am thinking of distributing a specific service account (Kubernetes, GCP IAM service account, etc.) to the client to limit the client's privileges when calling our service. I have many questions about how these things (GCP IAM service accounts, Istio, RBAC, mTLS, JWT tokens, etc.) contribute to securing our service API.
Any advice?
You want to add a third party outside of your network to your Istio mesh via SSL over the public internet?
I don't think Istio is really meant for federating external services, but you could just have an Istio ingress gateway proxy sitting at the edge of your network for routing into and back out of your application.
https://istio.io/docs/tasks/traffic-management/ingress/
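As a rough sketch of that idea, the edge gateway could terminate mTLS from external clients itself, assuming you provision a credential secret holding the server certificate plus the CA used to verify client certificates (all names here are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: external-mtls-gateway        # hypothetical
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL                   # clients must present a certificate
      credentialName: api-credential # secret with server cert/key and the client CA
    hosts:
    - "api.example.com"
```

A non-Istio client could then call the service with something like `curl --cert client.crt --key client.key --cacert ca.crt https://api.example.com/`.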
If you're building microservices then surely you have an endpoint or gateway; that seems more sensible to me. Try Apigee or something.

VPN access for applications running inside a shared Kubernetes cluster

We are currently providing our software as a software-as-a-service on Amazon EC2 machines. Our software is a microservice-based application with around 20 different services.
For bigger customers we use dedicated installations on a dedicated set of VMs, the number of VMs (and number of instances of our microservices) depending on the customer's requirements. A common requirement of any larger customer is that our software needs access to the customer's datacenter (e.g., for LDAP access). So far, we solved this using Amazon's virtual private gateway feature.
Now we want to move our SaaS deployments to Kubernetes. Of course we could just create a Kubernetes cluster across an individual customer's VMs (e.g., using kops), but that would offer little benefit.
Instead, going forward, we would like to run a single large Kubernetes cluster on which we deploy the individual customer installations into dedicated namespaces, thereby increasing resource utilization and lowering cost compared to the fixed allocation of machines to customers that we have today.
From the Kubernetes side of things, our software already works fine; we can deploy multiple installations to one cluster without problems. An open topic, however, is VPN access. What we need is a way to allow all pods in a customer's namespace access to that customer's VPN, but not to any other customer's VPN.
When googling the topic, I found approaches that add a VPN client to the individual containers (e.g., https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/), which is obviously not an option.
Other approaches seem to describe running a VPN server inside K8s (which is also not what we need).
Still others (like the "Strongswan IPSec VPN service", https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/) use DaemonSets to "configure routing on each of the worker nodes". This also does not seem like an acceptable solution for us, since it would give all pods on a worker node (irrespective of the namespace they are in) access to the respective VPN... and it would also not work well if we have dozens of customer installations, each requiring its own VPN setup, on the cluster.
Is there any approach or solution that provides what we need, i.e., VPN access for the pods in a specific namespace only?
Or are there any other approaches that could still satisfy our requirement (lower cost due to Kubernetes worker nodes being shared between customers)?
For LDAP access, one option might be to set up a kind of LDAP proxy, so that only this proxy needs VPN access to the customer network (by running the proxy on a small dedicated VM for each customer and configuring it as the LDAP endpoint for the application). However, LDAP access is only one of many aspects of connectivity that our application needs, depending on the use case.
If your IPsec concentrator supports VTI, it's possible to route the traffic using firewall rules. For example, pfSense supports it: https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html.
Using VTI, you can direct traffic with some kind of policy routing: https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html
However, I can see two big problems here:
You cannot have two IPsec tunnels with conflicting networks. For example, your kube network is 192.168.0.0/24 and you have two customers: A (172.12.0.0/24) and B (172.12.0.0/12). Unfortunately, this can happen (unless your customers are able to NAT those networks).
Finding the ideal criteria for rule matching (to allow the routing), since your source network is always the same. Using packet marks (via iptables mangle, or even set by the application) can be an option, but you will still get stuck on the first problem.
A similar scenario is found in the WSO2 (API gateway provider) architecture. They solved it using a reverse proxy in each network (sad but true): https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN
Regards,
UPDATE:
I don't know if you use GKE. If so, maybe Alias IP can be an option: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips. The pod IPs will be routable from the VPC, so you can apply some kind of routing policy based on their CIDR.