Can Ingress route requests based on IP? - kubernetes

I have been working with Kubernetes Ingress so far, but I have a question.
Can Ingress route requests based on IP?
I already know that Ingress can route based on hosts (e.g. a.com, b.com) to different services, and based on URI paths (e.g. /a-service/, /b-service/) to different services.
However, I'm curious whether Ingress can route by IP. I'd like requests from my office (a certain IP) to be routed to a specific service for testing.
Does that make sense? Any ideas on how to do it?

If this is just for testing, I would simply whitelist the IP. You can read the docs about nginx ingress annotations:
You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma-separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
Example yaml might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelist
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80
It also looks like you can do this in Istio (I have not tried it) using the ServiceRole and ServiceRoleBinding kinds to specify detailed access control requirements. For this you would use the source.ip property. It's explained under Constraints and Properties.
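A minimal sketch of what that might look like, assuming Istio's v1alpha1 RBAC API; the service name and office CIDR are placeholders, and this is untested:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: test-access
  namespace: default
spec:
  rules:
  # allow access to the test service only (placeholder name)
  - services: ["a-service.default.svc.cluster.local"]
    methods: ["GET", "POST"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-office-ip
  namespace: default
spec:
  subjects:
  # source.ip is the property from the Constraints and Properties page;
  # 203.0.113.0/24 stands in for your office IP range
  - properties:
      source.ip: "203.0.113.0/24"
  roleRef:
    kind: ServiceRole
    name: test-access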

This is not part of the core Ingress abstraction, as you noted. However, many Ingress controllers offer extra features through annotations or secondary CRDs, so in theory it could be added that way. I'm not aware of any that do IP-based routing like this, though, so in practical terms it's probably not available off the shelf.

As coderanger stated in his answer, Ingress does not support this by default.
I'm not sure IP-based routing is the way to proceed anyway: how would you hit the actual production deployments/services from the office IPs when needed?
I think you could add a check that routes based on a header instead. For example, you could pass a header 'redirect-to-test: true'; if you leave it unset or set it to false, you can still access the production services. A sketch follows below.
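If you happen to be using the NGINX ingress controller, its canary annotations can express this header-based split. A minimal sketch, assuming ingress-nginx and placeholder names (note the matched header value must be "always" or "never" unless you also set canary-by-header-value):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-test-route
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # requests carrying "redirect-to-test: always" hit the test service;
    # "never" or no header falls through to the main (production) ingress
    nginx.ingress.kubernetes.io/canary-by-header: "redirect-to-test"
spec:
  rules:
  - host: a.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80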

Related

How to use ingress so that services can talk to each other?

On AWS EKS, I have three pods in a cluster, each of which has been exposed by a service. The problem is that the services cannot communicate with each other, as discussed here: Error while doing inter pod communication on EKS. It has not been answered yet, but further searching suggested it can be done through Ingress. I am confused about how to do it. Can anybody help?
Code:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: test
  name: ingress-test
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: server-service
            port:
              number: 8000
My server-service has APIs like /api/v1/getAll, /api/v1/updateAll, etc.
So what should I write in path, and what should I do for a database service?
And say in the future I create another microservice and expose another service with APIs like /api/v1/showImage and /api/v1/deleteImage: will I have to write all the paths in the Ingress, or is there another way to make it work?
An Ingress is a really good solution for exposing both a frontend and a backend at the same domain with different paths (but, reading your other question, it will be of no help in exposing the database).
With that said, you don't have to write all the paths in the Ingress (unless you want to); you can instead use pathType: Prefix, as is already the case in your example.
Let me link you to the documentation Examples, which explain how it works really well. Basically, you can add a rule with:
- path: /api
  pathType: Prefix
in order to expose your backend under /api and all of its child paths.
The case where a second backend wants to be exposed under /api, the same as the first, is much more complex. If two Pods want to be exposed at the same paths, you will probably need to list all the subpaths in a way that differentiates them.
For example:
Backend A
/api/v1/foo/listAll
/api/v1/foo/save
/api/v1/foo/delete
Backend B
/api/v1/bar/listAll
/api/v1/bar/save
/api/v1/bar/delete
Then you could expose one under the prefix /api/v1/foo and the other under /api/v1/bar, as in the sketch below.
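A sketch of that layout, reusing the ALB setup from the question; the service names server-a-service and server-b-service are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-backends
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      # Backend A and all its child paths
      - path: /api/v1/foo
        pathType: Prefix
        backend:
          service:
            name: server-a-service
            port:
              number: 8000
      # Backend B and all its child paths
      - path: /api/v1/bar
        pathType: Prefix
        backend:
          service:
            name: server-b-service
            port:
              number: 8000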
As another alternative, you may want to expose the backends at paths different from what they actually expect by using a rewrite-target rule.
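For example, ingress-nginx (not the AWS ALB controller used in the question, which does not rewrite paths) supports this through its rewrite-target annotation. A sketch in its documented capture-group style, with a placeholder service name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-sketch
  annotations:
    # everything captured by (.*) is forwarded to the backend as /<capture>
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      # exposed externally as /images/..., but the backend sees /...
      - path: /images(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: image-service
            port:
              number: 8000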

force http to https on GKE ingress cloud loadbalancer [duplicate]

Is there a way to force an SSL upgrade for incoming connections on the ingress load balancer? Or, if that is not possible, can I disable port :80? I haven't found a documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately, GCE doesn't handle redirection or rewriting at the L7 layer directly for you yet (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https).
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a way to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller, you could have a look at the NGINX Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and gain support for IPv6 and websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that Ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX ConfigMap.
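A sketch of that global setting, assuming the ConfigMap name and namespace that your controller's --configmap flag points at:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # disable the default TLS-triggered 301 redirect globally
  ssl-redirect: "false"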
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
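Applied to a single Ingress, that might look like the following sketch (the name, host, and service are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: force-https
  annotations:
    # redirect to HTTPS even when TLS is terminated upstream
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80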
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) rewrites are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update, see here:
A FrontendConfig can now be created to configure the Ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    # 301; the field takes an enum name rather than a numeric code
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You'll need to make sure that your load balancer supports HTTP and HTTPS.
Worked on this for a long time. In case anyone isn't clear on the posts above: you would rebuild your Ingress with the annotation kubernetes.io/ingress.allow-http: "false", then delete your Ingress and redeploy. The annotation makes the Ingress create an LB for 443 only, instead of both 443 and 80.
Then you create a Compute Engine HTTP LB yourself, not one managed by GKE.
GUI directions:
Create a load balancer and choose HTTP(S) Load Balancing -- Start configuration.
Choose "From Internet to my VMs" and continue.
Choose a name for the LB.
Leave the backend configuration blank.
Under Host and path rules, select "Advanced host and path rules" with the action set to "Redirect the client to different host/path".
Leave the Host redirect field blank.
Select Prefix Redirect and leave the Path value blank.
Choose the redirect response code 308.
Tick the Enable box for HTTPS redirect.
For the frontend configuration, leave HTTP and port 80; for the IP address, select the static IP address being used for your GKE Ingress.
Create this LB.
You will now have all HTTP traffic go to this LB and get 308-redirected to your HTTPS Ingress for GKE. Super simple config and works well.
Note: If you just try to delete the port 80 LB that GKE makes (without the annotation change and Ingress rebuild) and then add the new redirect Compute LB, it does work, but you will start to see error messages on your Ingress saying error 400: invalid value for field 'resource.ipAddress', " " is in use and would result in a conflict, invalid. GKE is trying to spin the port 80 LB back up and can't, because you already have an LB on port 80 using the same IP. It does work, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment of @Andrej Palicka and according to the page he provided, https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect, I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied to the load balancer) and then set up an HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
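If you're using ingress-nginx, the first two cases map onto ConfigMap options. A sketch (enable only the line matching your load balancer's mode; the ConfigMap name and namespace must match the controller's --configmap flag):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # L7 mode: trust the X-Forwarded-Proto header set by the external LB
  use-forwarded-headers: "true"
  # L4/TCP mode: uncomment instead, to parse the PROXY protocol header
  # use-proxy-protocol: "true"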
Here's a tutorial on how to do this in Ambassador.

Kubernetes (AKS) : Expose multiple ports of different service to common load balancer

I am setting up a Kubernetes cluster on Azure (using AKS) to host Elasticsearch, Kibana, a custom API, a UI, nginx, etc.
As I don't want a separate public IP per service, I need a way to set up a common load balancer/Ingress, add the port numbers to it, and set up routing.
I tried the approach mentioned in this Stack Overflow question - How to expose multiple port using a load balancer services in kubernetes - but it didn't work out.
As there are clients connecting to each technology in my cluster, I need one service per technology.
Basically I need to expose 9200, 5601, and 80 - all on the same IP, but when accessing the IP with a given port, the user must be directed to the appropriate technology's service.
Below is a sample configuration for what I am looking for.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: myurl.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch
          servicePort: 9200
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5602
Any thoughts?
For your issue, an Ingress is what you want. You can create all your services as you wish and expose their ports, then create the Ingress with a public IP and add Ingress rules that route from the Ingress to your backend services.
Take a look at the example in Create an ingress controller in Azure Kubernetes Service (AKS). It shows the steps that need to be done. And if you have more questions, please let me know.
I just finished doing this on a mail server project using HAProxy ingress controller (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) in TCP mode. Works a treat. Working config can be found at https://github.com/funkypenguin/docker-mailserver/blob/fa9bd9c9ed9b66aa6ee1c36ca19a73c558682f24/helm-chart/docker-mailserver/values.yaml#L300
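For comparison, ingress-nginx covers the same raw-TCP case with a tcp-services ConfigMap (referenced by the controller's --tcp-services-configmap flag). A sketch, where the namespace and service names are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "9200": "default/elasticsearch:9200"
  "5601": "default/kibana:5601"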
Sorry, I posted another similar question. I was finally able to solve the issue with simple tagging.
No extra tools/code required.
Please refer to my post for the answer: Kubernetes: Expose multiple services internally & externally

ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?

I'm building a service where users can build web apps - these apps will be hosted under a virtual DNS name *.laska.io.
For example, if Tom and Jerry both built an app, they'd have it hosted under:
tom.laska.io
jerry.laska.io
Now, suppose I have 1000 users. Should I create one big ingress that looks like this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: jerry.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
...and so forth
I'm worried about downtime - if I have an app using websockets, for example. Also, the file will become huge with 1000 users. Will reloading the Ingress be fast enough? And how should I reload it?
A second option in my mind is to simply create one Ingress for every web app. My worry about that is: can ingress-nginx handle many Ingresses? Or is this an anti-pattern?
Which one is better?
You can create one ingress resource for each web app. If you search the official public charts repo, you'll see that many of the charts define an ingress resource within them. It's normal for each app to define its own ingress resource.
It's worth being clear that an ingress resource is just a definition of a routing rule. (It doesn't add an extra ingress controller or any other extra runtime component.) So it makes a lot of sense for an app to define its own routing rule.
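As a sketch of the per-app approach, each app would carry its own small resource like this (the names are placeholders, in the style of the question's example):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tom-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80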
The example you've given has all the ingress routing in one resource definition. This approach is easy to grasp and makes a lot of sense when you've got several related applications as then you might want to see their routing configuration together. You can see this also in the fanout ingress example in the kubernetes docs.
I can't see any performance concerns with defining the rules separately in distinct resources. The ingress controller will listen for new rules and update its configuration only when a new rule is created so there shouldn't be a problem with reading the resources. And I'd expect the combined vs separated options to result in the same routing rules being set in the background in nginx.
The most common pattern in the official charts repo is that the chart for each app defines its ingress resource and also exposes it through the values.yaml so that users can choose to enable or customise it as they wish. You can then compose multiple charts together and configure the rules for each in the relevant section of the values.yaml. (Here's an example I've worked on that does this with wildcard dns.) Or you can deploy each app separately under its own helm release.
An Ingress resource is just a rule. It's better to split your Ingress resources into different files and just reapply the resources that need to change.
I have never noticed downtime when applying a resource.

Kubernetes services for different application tracks

I'm working on an application deployment environment using Kubernetes where I want to be able to spin up copies of my entire application stack based on a Git reference for the primary web application, e.g. "master" and "my-topic-branch". I want these copies of the app stack to coexist in the same cluster. I can create Kubernetes services, replication controllers, and pods that use a "gitRef" label to isolate the stacks from each other, but some of the pods in the stack depend on each other (via Kubernetes services), and I don't see an easy, clean way to restrict the services that are exposed to a pod.
There are a couple of ways to achieve it that I can think of, but neither is ideal:
Put each stack in a separate Kubernetes namespace. This provides the cleanest isolation, in that there are no resource name conflicts and the applications can have the DNS hostnames for the services they depend on hardcoded, but seems to violate what the documentation says about namespaces†:
It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
This makes sense, as putting the app stacks in different namespaces would negate all the usefulness of label selectors: I'd just name the namespace with the Git reference, and none of the other resources would need to be filtered at all.
Create a copy of each service for each copy of the application stack, e.g. "mysql-master" and "mysql-my-topic-branch". This has the advantage that all the app stacks can coexist in the same namespace, but the disadvantage of not being able to hardcode the DNS hostname for the service in the applications that need them, e.g. having a web app target the hostname "mysql" regardless of which instance of the MySQL Kubernetes service it actually resolves to. I would need to use some mechanism of injecting the correct hostname into the pods or having them figure it out for themselves somehow.
Essentially what I want is a way of telling Kubernetes, "Expose this service's hostname only to pods with the given labels, and expose it to them with the given hostname" for a specific service. This would allow me to use the second approach without having to have application-level logic for determining the correct hostname to use.
What's the best way to achieve what I'm after?
[†] http://kubernetes.io/v1.1/docs/user-guide/namespaces.html
The documentation on putting different versions in different namespaces is a bit incorrect, I think. It is actually the point of namespaces to separate things completely like this. You should put a complete version of each "track" or deployment stage of your app into its own namespace.
You can then use hardcoded service names - "http://myservice/" - as DNS resolves to the local namespace by default; see the sketch below.
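To sketch the idea with the MySQL example from the question, the same Service name can live in each track's namespace, and pods resolve "mysql" to their own namespace's instance (the namespace names here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: master            # one namespace per track
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: my-topic-branch   # same service name, different track
spec:
  selector:
    app: mysql
  ports:
  - port: 3306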
For ingresses I have copied my answer here from the GitHub issue on cross-namespace ingresses.
You should use the approach that our group is using for Ingresses.
Think of an Ingress not as much as a LoadBalancer but just a document specifying some mappings between URLs and services within the same namespace.
An example, from a real document we use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
We make one of these for each of our namespaces for each track.
Then, we have a single namespace with an Ingress Controller, which runs automatically configured NGINX pods. Another AWS load balancer points to these pods, which run on a NodePort; a DaemonSet ensures there is at most, and at least, one on every node in our cluster.
As such, the traffic is then routed:
Internet -> AWS ELB -> NGINX (on node) -> Pod
We keep the isolation between namespaces while using Ingresses as they were intended. It's not correct or even sensible to use one Ingress to hit multiple namespaces; it just doesn't make sense, given how they are designed. The solution is to use one Ingress per namespace, with a cluster-scoped ingress controller that actually does the routing.
All an Ingress is to Kubernetes is an object with some data on it. It's up to the Ingress Controller to do the routing.
See the document here for more info on Ingress Controllers.