I have a Kubernetes cluster. According to my company's security policies, I need an Application Gateway WAF in front, which hits the cluster (the gateway has a public IP). And as the ingress controller for this cluster I need to configure an Nginx ingress controller (which also has a public IP). How can I connect or point the WAF to the ingress controller? Is this possible?
Thanks!
Native support for the Nginx ingress controller is through a load balancer, not through Application Gateway. One possible approach is to deploy the Nginx ingress controller with a private (internal) load balancer, as described in the docs.
Then add the private IP of that load balancer to the backend pool of the Application Gateway, and the gateway should start serving traffic from the AKS cluster.
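A minimal sketch of that controller Service, assuming the controller is the standard ingress-nginx deployment (the labels and port names below follow its defaults):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
      annotations:
        # Azure-specific: request an internal (private) load balancer
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: http
        - name: https
          port: 443
          targetPort: https

Once this Service gets a private IP from the VNet, that IP is what you register in the Application Gateway backend pool.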
The App Gateway Ingress Controller suggested in another comment is GA now, but it is still buggy: it takes time to update the backend pools when new pods are deployed.
I have Kubernetes objects as below:
a deployment
a service to use with that deployment in step 1
an ingress with backend paths to the service in step 2
I am using Kubernetes Engine in GCP. Once I created an ingress object, it created a load balancer as usual.
So in my DNS I added an A record for the domain name test1.<my-domain>.co pointing to the IP of the load balancer created from the Ingress.
But this is not working. It doesn't load the page.
Then I tried installing the Nginx ingress controller, and once that was deployed, it created another load balancer in GCP. So I took the IP of that newly created load balancer and replaced the IP in the DNS record with it. And voilà, it started working. So does that mean an Ingress always needs an ingress controller to work?
Yes, in order for the Ingress resource to work, the cluster must have an ingress controller running. Only creating an Ingress resource has no effect.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, which is what you see.
A request from the client lands on the Ingress-managed load balancer and is forwarded to the respective Ingress based on the host and path in the original request. Following the routing rules defined in the Ingress, the request is forwarded to the Service, from where it reaches the backend pods.
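For illustration, a minimal Ingress that routes by host and path could look like this (the hostname, service name, and port are hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
        - host: test1.example.com      # hypothetical hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service   # the Service from step 2
                    port:
                      number: 80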
An Ingress resource creating its own load balancer seems to be GKE-specific behaviour. From the GCP docs:
When you specify kind: Ingress in the resource manifest, you instruct GKE to create an Ingress resource. By including annotations and supporting workloads and Services, you can create a custom Ingress controller. Otherwise, GKE makes appropriate Google Cloud API calls to create an external HTTP(S) load balancer.
You can read more about this here.
Yes, an ingress controller is needed to serve requests that come from outside the cluster.
When a user sends a request to the load balancer IP of the ingress controller, the controller reads the routes from the Ingress resources and forwards the request accordingly.
An Ingress resource defines routing for a service: typically you have an Ingress resource (or rule) per service you want to expose, whereas a single ingress controller can serve many Ingress resources.
Two of the most widely used ingress controllers are:
nginx
Contour
You can read about them in detail.
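As a sketch of that relationship, the two hypothetical Ingress resources below are both served by the same nginx controller, because they share an ingressClassName:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-a               # hypothetical
    spec:
      ingressClassName: nginx   # handled by the nginx controller
      rules:
        - host: a.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: service-a   # hypothetical
                    port:
                      number: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-b               # hypothetical
    spec:
      ingressClassName: nginx   # the same controller serves this one too
      rules:
        - host: b.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: service-b   # hypothetical
                    port:
                      number: 80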
I have microservices and an SPA app. All of them run in Docker with docker-compose. I have an Ocelot API gateway, but the gateway has to know the IP addresses or container names of the microservices to reach them. I added an aggregator service inside the Ocelot app, and I can reach all the services from the aggregator by IP.
But I want to move to Kubernetes, where I can scale services and there is no static IP. How can I configure this?
I also have an identity service. This service knows the clients' IP addresses. Same problem again.
I searched for hours and found some keywords: Envoy, Ingress, Consul, Ocelot. Can someone explain these things?
It sounds like your question is related to Service Discovery.
In Kubernetes, the native way an "API gateway" is implemented is by using Ingress resources and ingress controllers. If you use a cloud provider, they usually have a product for this, or you can use a custom controller deployed within the cluster.
Service discovery the Kubernetes way is done by referring to Service resources; e.g., the Ingress resource maps URLs (in your public API) to Services. Your app is deployed as a Deployment resource, and all its replicas (instances) are exposed via a Service resource. An app can also send requests to other apps, and it should address those requests to the other app's Service resource. The Service resource load-balances across the replicas of the receiving app.
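A sketch of that pattern (all names and the image are hypothetical): a Deployment whose replicas are exposed via a Service, which other apps then address by name:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: identity
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: identity
      template:
        metadata:
          labels:
            app: identity
        spec:
          containers:
            - name: identity
              image: myregistry/identity:1.0   # hypothetical image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: identity   # other apps in the namespace reach this as http://identity
    spec:
      selector:
        app: identity
      ports:
        - port: 80
          targetPort: 8080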
You can use the service name to connect to a service instead of the client IP.
For example: curl http://<service-name>.<namespace>.svc.cluster.local
Now, if you are looking for a list of API gateways and identity servers for Kubernetes:
There are several options, but it all depends on your requirements.
For basic requirements, the nginx ingress and other ingress controllers are available, while if you are looking for an API gateway:
Kong API gateway
Ambassador API gateway
Tyk API gateway
A service mesh can also play a part here, though not in every scenario, because a mesh is mostly used for managing internal (east-west) traffic.
An API gateway is mostly used for managing edge (north-south) traffic.
List of identity servers:
Keycloak
Cognito IAM (AWS)
Ingress controllers:
GCE ingress
Nginx ingress controller
Kong ingress controller
Gloo
HAProxy
Azure Application Gateway ingress controller (AGIC)
Istio ingress
I am planning to deploy to an AKS cluster and use an NGINX ingress controller, so that my microservices will be internal to the cluster and the NGINX ingress controller will be the entry point to them.
One of my microservices acts as an API gateway using the Ocelot library, and it implements the BFF pattern. So my ingress controller will have only one rule, which routes requests made to the path "/(.*)" to the API gateway microservice.
My question is: is this the conventional way to use an ingress controller together with an API gateway microservice? Somehow it feels redundant, although I can see that both have different responsibilities.
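For reference, a sketch of that single catch-all rule with the Nginx ingress controller (the gateway Service name is hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api-gateway
      annotations:
        nginx.ingress.kubernetes.io/use-regex: "true"
        # pass the captured path through to the gateway unchanged
        nginx.ingress.kubernetes.io/rewrite-target: /$1
    spec:
      ingressClassName: nginx
      rules:
        - http:
            paths:
              - path: /(.*)
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: ocelot-gateway   # hypothetical gateway Service
                    port:
                      number: 80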
I don't think you would need an ingress controller in this case. We use an API gateway, which is Ambassador, and we simply have a public IP assigned to its Kubernetes Service.
If you don't expect other pods to expose themselves using Ingress objects, and all traffic will be coming through your API gateway, I would simply drop the ingress controller and expose your API gateway pods with a Service of type LoadBalancer.
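A minimal sketch of that alternative, assuming the gateway pods carry a hypothetical app: ocelot-gateway label:

    apiVersion: v1
    kind: Service
    metadata:
      name: ocelot-gateway
    spec:
      type: LoadBalancer      # the cloud provider assigns a public IP
      selector:
        app: ocelot-gateway   # hypothetical pod label
      ports:
        - port: 80
          targetPort: 8080    # hypothetical container port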
I've got a Kubernetes cluster with nginx ingress set up for public endpoints. That works great, but I have one service that I don't want to expose to the public, while still exposing it to people who have VPC access via VPN. The people who need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.
What's the best way to set up ingress for a service that is restricted to people on the VPN?
Edit: thanks for the responses. As a few people guessed, I'm running an EKS cluster in AWS.
It depends a lot on your ingress controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer Service rather than a public LB, and then restrict that Service and/or Ingress to only allow traffic from the IP range of the VPN.
Since you are talking about a "VPC", and assuming you have your cluster in AWS, you probably need to do what @coderanger said.
Deploy a new ingress controller with service type LoadBalancer and add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true".
Check here what annotations you can add to a load balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers
You can also, for example, create a security group and attach it to the load balancer with service.beta.kubernetes.io/aws-load-balancer-security-groups.
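A sketch of that second controller's Service (the security group ID is a placeholder, and the selector assumes the standard ingress-nginx labels):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-internal
      namespace: ingress-nginx
      annotations:
        # Provision an internal (VPC-only) load balancer instead of a public one
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        # Optionally replace the default security groups with your own
        service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: http
        - name: https
          port: 443
          targetPort: https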
GKE Ingress: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Nginx Ingress: https://kubernetes.github.io/ingress-nginx/
Why GKE Ingress
GKE Ingress can be used along with Google's managed SSL certificates. These certificates are deployed on the load balancer's edge servers, which results in a very low TTFB (time to first byte).
What's wrong about GKE Ingress
The HTTP/domain routing is done in the load balancer using 'forwarding rules', which is very pricey: around $7.20 per rule, and each domain requires one rule.
Why Nginx Ingress
Nginx Ingress also creates a (TCP/UDP) load balancer, and we specify the HTTP/domain routing in Ingress resources handled by the controller. Since the routing is done inside the cluster, there are no additional costs for adding domains to the rules.
What's wrong about Nginx Ingress
To enable SSL, we can use cert-manager. But as mentioned above, Google's managed certificates are deployed on edge servers, which results in very low latency.
My Question
Is it possible to use both of them together? That is, HTTPS requests would first hit the GKE ingress, which terminates SSL and routes the traffic to the Nginx ingress, which in turn routes it to the corresponding pods.
It is not possible to point an Ingress at another Ingress. Furthermore, in your particular case, it is also not possible to point the gce ingress class at Nginx, since it relies on an HTTP(S) Load Balancer, which can only have GCE instances/instance groups (basically the node pools in GKE) or GCS buckets as backends.
If you deploy an Nginx ingress in GKE, it spins up a Network Load Balancer, which is not a valid backend for the HTTP(S) Load Balancer.
So it is possible neither via Ingress nor via GCP infrastructure features. However, if you need the gce ingress class to be hit first and then manage further routing with Nginx, you might want to consider running Nginx as a plain Kubernetes Deployment/Service to manage the incoming traffic once it is within the cluster network.
You can create a ClusterIP Service to reach your Nginx Deployment internally, and from there use cluster-local hostnames to redirect to other services/applications within the cluster.
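As a rough sketch (names hypothetical), the Nginx Deployment could be reached in-cluster through a ClusterIP Service like the one below; note that to serve as a backend of the GKE Ingress itself, the Service fronting Nginx may additionally need to be of type NodePort or NEG-enabled:

    apiVersion: v1
    kind: Service
    metadata:
      name: internal-nginx    # hypothetical; reachable in-cluster as
                              # internal-nginx.<namespace>.svc.cluster.local
    spec:
      type: ClusterIP
      selector:
        app: internal-nginx   # hypothetical label on the Nginx pods
      ports:
        - port: 80
          targetPort: 80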