cert-manager is creating new ingress with acme responder instead of modifying the existing - kubernetes

I'm trying to use cert-manager to issue a certificate via LetsEncrypt.
I've followed through with the steps here http://docs.cert-manager.io/en/latest/getting-started/index.html
However, my existing ingress is not being modified (I assume it needs to be modified to add a path for /.well-known/...).
Instead, I see a new ingress created for this with a name like cm-acme-http-solver-kgpz6, which is rather confusing.
If I get the yaml for that ingress I see the following for rules:
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: cm-acme-http-solver-2dd97
          servicePort: 8089
        path: /.well-known/acme-challenge/2T2D_XK1-zIJJ9_f2ANlwR-AcNTm3-WenOExNpmUytY
How exactly is this meant to work? The documentation seems rather sparse.

The ingress you are seeing is for the ACME challenge. The challenge needs to succeed before the certificate can be configured. If you are literally using "example.com" as the domain, it will not succeed: you'll need a DNS record for a valid hostname you control so that LetsEncrypt can resolve the domain and complete the check.
Usually you will not even see the challenge ingress resource: as long as DNS and the hostname are configured correctly, it runs the challenge and then removes itself. After it is removed, the ingress you created will be loaded into your ingress controller.
There are a few ingress controllers that do not support multiple ingress resources per hostname; they will load one ingress resource and ignore the other, so this behaviour is partly a workaround/fix for that issue.
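For reference, here is a minimal sketch of wiring this up with a recent cert-manager release; the resource names, email address, hostname, and ingress class are assumptions, and older releases used the certmanager.k8s.io/v1alpha1 API instead:

# Hypothetical ClusterIssuer using the HTTP-01 solver; cert-manager creates the
# temporary cm-acme-http-solver ingress/service for each challenge it runs.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com          # replace with a real address
    privateKeySecretRef:
      name: letsencrypt-staging-key   # secret that will hold the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                # class of the ingress controller serving the challenge
---
# Annotating your own ingress lets cert-manager's ingress-shim request the
# certificate for you; the hostname must be publicly resolvable for the challenge to pass.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls            # cert-manager stores the issued certificate here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80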

Related

k8s nginx ingress TLS rules: cert vs. paths

I am struggling to get my nginx ingress (on AWS EKS) working with path rules and TLS.
The ingress is from here.
A snippet from the Ingress looks like:
spec:
  tls:
  - hosts:
    - example.com
    secretName: ingress-tls
  rules:
  - host: example.com
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 443
This ingress creates the AWS network load balancer, with a URL like https://xyz.elb.us-west-1.amazonaws.com/
I am updating the ingress-tls secret with a certificate using cert-manager.
When I access the ingress using the NLB URL https://xyz.elb.us-west-1.amazonaws.com/api, I get:
GOOD: Correct routing based on the path rules from the ingress definition (i.e. it goes to my api-service as expected)
BAD: Certificate errors, since I'm not accessing the ingress with the domain that the certificate is for.
When I access the ingress using the correct domain, e.g. https://example.com/api, which is what I want to do, I get:
BAD: 404; it doesn't respect my path rules and goes to upstream-default-backend instead.
GOOD: The certificate is all good; it's the one for example.com that cert-manager configured.
I tried removing the host: example.com from the rules:, which gives me:
GOOD: Correct routing based on the path rules from the ingress definition
BAD: Certificate errors; it serves up the default ingress "Fake" certificate instead of the one for example.com, I guess because the host is missing from the rules, though I'm not sure of the exact reason.
Can someone please help me get GOOD/GOOD? I'm at a loss here.
After staring at this for several more hours, and digging through the nasty chunk of lua that is the nginx.conf for this, I found it! Maybe someday someone will have this problem, and might find this useful.
The problem was:
rules:
- host: example.com
- http:
This is defining (I think) a host with no forwarding rules, and then some http forwarding rules without a host. What I had intended, obviously, was for the forwarding rules to apply to the host.
And that would be:
rules:
- host: example.com
  http:
I have to say that I'm now even less of a fan of YAML than I was previously, if that's even possible.
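For completeness, here is a sketch of the corrected resource using the names from the question; the only change from the original is that http: is nested under the host entry instead of being a second list item (the metadata name is assumed, as it wasn't shown in the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress                  # name assumed; not shown in the question
spec:
  tls:
  - hosts:
    - example.com
    secretName: ingress-tls
  rules:
  - host: example.com
    http:                            # no leading dash: these paths now belong to the host above
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 443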

Can Ingress route requests based on ip?

I have been getting on well with Kubernetes ingress so far, but I have a question.
Can an ingress route requests based on IP?
I already know that an ingress can route to different services based on hosts such as a.com and b.com, and based on URI paths such as /a-service/ and /b-service/.
However, I'm curious whether an ingress can route by IP. I'd like requests from my office (a certain IP) to be routed to a specific service for testing.
Does that make sense? Any ideas?
If this is just for testing, I would simply whitelist the IP. You can read the docs about the nginx ingress annotations:
You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
Example yaml might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelist
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80
It also looks like you can do this in Istio (I have not tried it) with the ServiceRole and ServiceRoleBinding kinds, which let you specify detailed access control requirements. For this you would use the source.ip property; it's explained under Constraints and Properties.
This is not part of the main Ingress abstraction, as you noted; however, many ingress controllers offer extra features through annotations or secondary CRDs, so in theory it could be added that way. I don't think any of them do routing like this, though, so in practical terms it's probably not available off the shelf.
As coderanger stated in his answer, ingress does not have it by default.
I'm not sure IP-based routing is the way to proceed, because how will you hit the actual deployments/services from the office IPs when needed?
I think you can instead add a check that routes based on IP and a header. For example, you can pass a header redirect-to-test: true; if you set it to false, you can still access the production services.
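If you go the header route with ingress-nginx, its canary annotations can do this. A minimal sketch (hostnames and service names are made up), which assumes a regular ingress for a.com pointing at the production service already exists; requests carrying the header redirect-to-test: true are sent to the test service instead:

# Hypothetical "canary" ingress for the test track; anything without the header
# keeps hitting the production service behind the existing ingress for a.com.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: a-test-track
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "redirect-to-test"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: a.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: a-service-test    # test deployment's service
            port:
              number: 80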

mutual TLS based on specific IP

I'm trying to configure nginx-ingress for mutual TLS but only for specific remote address. I tried to use snippet but no success:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($remote_addr = 104.214.x.x) {
    auth-tls-verify-client: on;
    auth-tls-secret: namespace/nginx-ca-secret;
    auth-tls-verify-depth: 1;
    auth-tls-pass-certificate-to-upstream: false;
  }
The auth-tls annotations work when applied as annotations, but inside the snippet they don't.
Any idea how to configure this or maybe a workaround to make it work?
The job of mTLS is basically restricting access to a service by requiring the client to present a certificate. If you expose a service and then require only clients with specific IP addresses to present a certificate, the entire rest of the world can still access your service without a certificate, which completely defeats the point of mTLS.
If you want more info, here is a good article that explains why TLS and mTLS exist and what the difference between them is.
There are two ways to make a sensible setup out of this:
1. Just use regular TLS instead of mTLS.
2. Make a service in your cluster require mTLS to access it, regardless of IP addresses.
If you go for option 2, you need to configure the service itself to use mTLS, and then configure ingress to pass through the client certificate to the service. Here's a sample configuration for nginx ingress that will work with a service that expects mTLS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: mtls-svc
          servicePort: 443

ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?

I'm building a service where users can build web apps - these apps will be hosted under a virtual DNS name *.laska.io
For example, if Tom and Jerry both built an app, they'd have it hosted under:
tom.laska.io
jerry.laska.io
Now, suppose I have 1000 users. Should I create one big ingress that looks like this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: jerry.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
...and so forth
I'm worried about downtime - if I have an app using websockets, for example. Also, the file will become huge with 1000 users. Will reloading the ingress be fast enough? And how should I reload it?
A second option in my mind is to simply create one ingress for every web app. My worry about that is, can ingress-nginx handle many ingresses? Or is this an anti-pattern?
Which one is better?
You can create one ingress resource for each web app. If you search the official public charts repo, you'll see that many of the charts define an ingress resource within them. It's normal for each app to define its own ingress resource.
It's worth being clear that an ingress resource is just a definition of a routing rule. (It doesn't add an extra ingress controller or any other extra runtime component.) So it makes a lot of sense for an app to define its own routing rule.
The example you've given has all the ingress routing in one resource definition. This approach is easy to grasp and makes a lot of sense when you've got several related applications as then you might want to see their routing configuration together. You can see this also in the fanout ingress example in the kubernetes docs.
I can't see any performance concerns with defining the rules separately in distinct resources. The ingress controller will listen for new rules and update its configuration only when a new rule is created so there shouldn't be a problem with reading the resources. And I'd expect the combined vs separated options to result in the same routing rules being set in the background in nginx.
The most common pattern in the official charts repo is that the chart for each app defines its ingress resource and also exposes it through the values.yaml so that users can choose to enable or customise it as they wish. You can then compose multiple charts together and configure the rules for each in the relevant section of the values.yaml. (Here's an example I've worked on that does this with wildcard dns.) Or you can deploy each app separately under its own helm release.
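The exact shape differs from chart to chart, but the ingress section of a chart's values.yaml typically looks something like the sketch below (all names are placeholders); the chart's templates render an Ingress resource only when ingress.enabled is true:

# Hypothetical values.yaml fragment controlling a chart's Ingress template.
ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.example.com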
An ingress resource is just a rule. It's better to split your ingress resources into different files and reapply only the resources that need to change.
I have never noticed downtime when applying a resource.
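As a sketch of the one-ingress-per-app option, each user's app would get its own small resource like the one below, so adding or removing a user only touches that user's resource. It reuses the nginx-service and hostname from the question, but assumes the newer networking.k8s.io/v1 API and an assumed per-user metadata name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tom-laska-io                 # one ingress per user/app
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: tom.laska.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80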

Kubernetes services for different application tracks

I'm working on an application deployment environment using Kubernetes where I want to be able to spin up copies of my entire application stack based on a Git reference for the primary web application, e.g. "master" and "my-topic-branch". I want these copies of the app stack to coexist in the same cluster. I can create Kubernetes services, replication controllers, and pods that use a "gitRef" label to isolate the stacks from each other, but some of the pods in the stack depend on each other (via Kubernetes services), and I don't see an easy, clean way to restrict the services that are exposed to a pod.
There are a couple of ways to achieve this that I can think of, but neither is ideal:
Put each stack in a separate Kubernetes namespace. This provides the cleanest isolation, in that there are no resource name conflicts and the applications can have the DNS hostnames for the services they depend on hardcoded, but seems to violate what the documentation says about namespaces†:
It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
This makes sense, as putting the app stacks in different namespaces would negate all the usefulness of label selectors: I'd just name the namespace after the Git reference, and none of the other resources would need to be filtered at all.
Create a copy of each service for each copy of the application stack, e.g. "mysql-master" and "mysql-my-topic-branch". This has the advantage that all the app stacks can coexist in the same namespace, but the disadvantage of not being able to hardcode the DNS hostname for the service in the applications that need them, e.g. having a web app target the hostname "mysql" regardless of which instance of the MySQL Kubernetes service it actually resolves to. I would need to use some mechanism of injecting the correct hostname into the pods or having them figure it out for themselves somehow.
Essentially what I want is a way of telling Kubernetes, "Expose this service's hostname only to pods with the given labels, and expose it to them with the given hostname" for a specific service. This would allow me to use the second approach without having to have application-level logic for determining the correct hostname to use.
What's the best way to achieve what I'm after?
[†] http://kubernetes.io/v1.1/docs/user-guide/namespaces.html
I think the documentation on putting different versions in different namespaces is a bit off. Separating things completely like this is exactly what namespaces are for. You should put a complete copy of each "track" or deployment stage of your app into its own namespace.
You can then use hardcoded service names - "http://myservice/" - as DNS resolves to the local namespace by default.
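For illustration, here is a minimal sketch (deployment, image, and env names are made up) of how a pod in one track can keep a hardcoded short hostname: the same manifest works in every namespace, because "mysql" resolves to the mysql service of whichever namespace the pod runs in.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: my-topic-branch        # the per-track namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:latest
        env:
        - name: DATABASE_HOST
          # resolves to mysql.my-topic-branch.svc.cluster.local inside this namespace
          value: mysql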
For ingresses I have copied my answer here from the GitHub issue on cross-namespace ingresses.
You should use the approach that our group is using for Ingresses.
Think of an Ingress not so much as a LoadBalancer but as just a document specifying mappings between URLs and services within the same namespace.
An example, from a real document we use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
We make one of these for each of our namespaces for each track.
Then we have a single namespace with an Ingress Controller, which runs automatically configured NGINX pods. Another AWS load balancer points to these pods, which are exposed on a NodePort and managed by a DaemonSet so that exactly one runs on every node in our cluster.
As such, the traffic is then routed:
Internet -> AWS ELB -> NGINX (on node) -> Pod
We keep the isolation between namespaces while using Ingresses as they were intended. It's not correct, or even sensible, to use one ingress to hit multiple namespaces; it just doesn't make sense given how they are designed. The solution is to use one ingress per namespace, with a cluster-scoped ingress controller that actually does the routing.
All an Ingress is to Kubernetes is an object with some data on it. It's up to the Ingress Controller to do the routing.
See the document here for more info on Ingress Controllers.
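To make the NodePort piece of that setup concrete, here is a hedged sketch of the Service the external load balancer could target; the names, labels, and port numbers are assumptions, not the poster's actual configuration:

# Hypothetical NodePort Service in the ingress controller's namespace; the AWS
# load balancer forwards to these node ports on every node, and kube-proxy
# delivers the traffic to the NGINX controller pods selected below.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443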