I was trying to find annotations to set up basic auth in the NGINX ingress controller.
Most of the resources on the internet specify this annotation prefix:
"nginx.ingress.kubernetes.io/..."
But in the nginx docs:
https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
the prefix is "nginx.org" instead.
Searching external docs for answers seems a bit of a detour.
Is there a way to browse which annotations a controller supports with local commands, maybe something similar to kubectl explain?
Any ideas?
You can use kubectl explain ingress, but it will only show the documentation for the Kubernetes Ingress resource itself, something like:
KIND:     Ingress
VERSION:  networking.k8s.io/v1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status       <Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
In short, for annotations it is better to check the Helm chart and the official documentation of the controller you are running.
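Annotations are free-form metadata rather than part of the API schema, so kubectl explain cannot enumerate them. What you can do locally is list the annotations already set on your cluster's Ingress objects; a rough sketch:

kubectl get ingress -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}{end}'

This only shows annotations already in use, not the controller's full supported set; for that you still need the controller's own docs.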
This is how you can add basic auth to the Ingress.
First, create the basic auth secret:
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
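The file has to be called auth, because ingress-nginx looks for a key named auth inside the secret when auth-type is basic. You can verify what was stored with:

kubectl get secret basic-auth -o jsonpath='{.data.auth}' | base64 -d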
Then refer to this secret in the annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required to access Linkerd'
  name: linker-basicauth-ingress
  namespace: linkerd-viz
spec:
  tls:
  - hosts:
    - mybasic-auth.example.com
    secretName: ingres-tls-secret
  rules:
  - host: mybasic-auth.example.com
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8084
        path: /
        pathType: Prefix
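Once applied, you can check that the challenge is actually enforced; the hostname and the admin user below are the example values from above, assuming the host resolves to your ingress controller:

curl -I https://mybasic-auth.example.com/                       # expect HTTP 401 with a WWW-Authenticate header
curl -I -u admin:<password> https://mybasic-auth.example.com/   # expect HTTP 200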
Reference: nginx-configuration/annotations.md in the ingress-nginx documentation.
Related
I'm migrating an architecture to Kubernetes and I'd like to use the HAProxy ingress controller, which I'm installing with Helm according to the documentation (version 1.3).
The thing is, when I define path rules through an ingress file, I can't use the Regex or Begin path types described in the documentation here: https://haproxy-ingress.github.io/docs/configuration/keys/#path-type.
My configuration file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bo-ingress
  annotations:
    haproxy.org/path-rewrite: "/"
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: Begin
        backend:
          service:
            name: foo-service
            port:
              number: 80
When I install my ingress Helm chart with this configuration, I get this error message:
Error: UPGRADE FAILED: cannot patch "bo-ingress" with kind Ingress: Ingress.extensions "bo-ingress" is invalid: spec.rules[2].http.paths[12].pathType: Unsupported value: "Begin": supported values: "Exact", "ImplementationSpecific", "Prefix"
Am I missing something? Is this feature only available in the Enterprise plan?
Thanks, Greg
HAProxy Ingress follows the Ingress v1 spec, so any Ingress spec configuration should work as stated by the Kubernetes documentation.
As per the Kubernetes documentation, the supported path types are ImplementationSpecific, Exact, and Prefix, and paths that do not include an explicit pathType will fail validation. Begin is not one of them; it is a controller-specific matching mode from the HAProxy Ingress docs, not a Kubernetes pathType. So use one of those three values in the Ingress spec.
For more information on the path types Kubernetes supports, refer to the documentation.
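If you specifically want HAProxy's begin matching semantics, one possible route (a sketch, assuming the haproxy-ingress controller from the linked docs, which also accepts its configuration keys as annotations under the haproxy-ingress.github.io prefix) is to set the Kubernetes-level pathType to ImplementationSpecific and select the matcher through the controller's own path-type key:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bo-ingress
  annotations:
    haproxy-ingress.github.io/path-type: begin   # controller-level matching mode, per the linked docs
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific   # defers path-matching semantics to the controller
        backend:
          service:
            name: foo-service
            port:
              number: 80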
I have a Kubernetes cluster deployed on AWS (EKS). I deployed the cluster using the eksctl command line tool. I'm trying to deploy a Dash Python app on the cluster, without success. The default port for Dash is 8050. For the deployment I used the following resources:
pod
service (ClusterIP type)
ingress
You can check the resource configuration files below:
pod-configuration-file.yml
kind: Pod
apiVersion: v1
metadata:
  name: dashboard-app
  labels:
    app: dashboard
spec:
  containers:
  - name: dashboard
    image: my_image_from_ecr
    ports:
    - containerPort: 8050
service-configuration-file.yml
kind: Service
apiVersion: v1
metadata:
  name: dashboard-service
spec:
  selector:
    app: dashboard
  ports:
  - port: 8050       # exposed port
    targetPort: 8050
ingress-configuration-file.yml (host based routing)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: dashboard.my_domain
    http:
      paths:
      - backend:
          serviceName: dashboard-service
          servicePort: 8050
        path: /
I followed the steps below:
kubectl apply -f pod-configuration-file.yml
kubectl apply -f service-configuration-file.yml
kubectl apply -f ingress-configuration-file.yml
I also noticed that the pod deployment works as expected:
kubectl logs my_pod:
and the output is:
Dash is running on http://127.0.0.1:8050/
Warning: This is a development server. Do not use app.run_server
in production, use a production WSGI server like gunicorn instead.
* Serving Flask app "annotation_analysis" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
You can see from the ingress configuration file that I want to do host-based routing using my domain. For this to work, I have also deployed nginx-ingress, and I created an "A" record set using Route53 that maps "dashboard.my_domain" to the nginx-ingress:
kubectl get ingress
and the output is:
NAME                HOSTS                 ADDRESS                                      PORTS   AGE
dashboard-ingress   dashboard.my_domain   nginx-ingress.elb.aws-region.amazonaws.com   80      93s
Moreover,
kubectl describe ingress dashboard-ingress
and the output is:
Name:             dashboard-ingress
Namespace:        default
Address:          nginx-ingress.elb.aws-region.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host            Path  Backends
  ----            ----  --------
  host.my-domain
                  /     dashboard-service:8050 (192.168.36.42:8050)
Annotations:      nginx.ingress.kubernetes.io/force-ssl-redirect: false
                  nginx.ingress.kubernetes.io/rewrite-target: /
                  nginx.ingress.kubernetes.io/ssl-redirect: false
Events:           <none>
Unfortunately, when I try to access the Dash app in the browser, I get a
502 Bad Gateway error from nginx. Could you please help me, because my Kubernetes knowledge is limited.
Thanks in advance.
It had nothing to do with Kubernetes or AWS settings. I had to change my Python Dash code from:
if __name__ == "__main__":
    app.run_server(debug=True)
to:
if __name__ == "__main__":
    app.run_server(host='0.0.0.0', debug=True)
The addition of host='0.0.0.0' did the trick!
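The 502 is consistent with that fix: the ingress controller proxies to the pod IP on port 8050, but Flask's development server binds to 127.0.0.1 by default, so connections arriving from outside the pod are refused. If you hit this again, one quick check is to inspect the listening socket from inside the pod (assuming the image ships ss; netstat -tln works too if available):

kubectl exec dashboard-app -- ss -tln
# 0.0.0.0:8050   -> reachable from the pod network
# 127.0.0.1:8050 -> only reachable from inside the pod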
I think you'll need to check whether any other service is exposed at path / on the same host.
Secondly, try removing the rewrite-target annotation. Also, can you please update your question with the output of kubectl describe ingress <ingress_name>?
I would also suggest using the backend-protocol annotation with the value HTTP. That is the default, so you can omit it if the dashboard application is not SSL-configured and is the only application served at that host. But you may need it if multiple applications are served at this host: create one Ingress with backend-protocol: HTTP for non-SSL services, and another with backend-protocol: HTTPS to serve traffic to SSL-enabled services.
For more information on the backend-protocol annotation, kindly refer to this link.
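For reference, that annotation on an ingress-nginx Ingress looks like this (a minimal sketch; the HTTPS variant is for backends that themselves serve TLS):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # or "HTTP", the default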
I have often faced this issue in my Ingress setup, and these steps have helped me resolve it.
I was doing some research about Ingress, and it seems I have to create a new Ingress resource for each namespace. Is that correct?
I just created 2 separate Ingress resources in different namespaces in my GKE cluster. They seem to use the same LB (which is great for cost), but I would think clashes are then possible when using the same path. I just tried it: the first one I created still works on the path, while the newer one on the same path just doesn't work.
Can someone explain the correct setup for Ingress to me?
As Kubernetes works today, an ingress controller won't pass a packet to a service that is in a different namespace from the Ingress resource. So if you create an Ingress resource in the default namespace, all your services must be in the default namespace as well.
This is something that won't change. EVER. There was a feature request years ago, and the Kubernetes team announced that it's not going to happen: it would introduce a security hole if the ingress controller were able to cross namespace boundaries.
Now, what we do in these situations is actually pretty neat. You will have to do the following:
Say you have 2 services in the namespaces you need, e.g. service1.foo and service2.bar.
Create 2 headless services without selectors and 2 Endpoints objects pointing to the IP addresses of service1.foo and service2.bar, in the same namespace as the Ingress resource. A headless service without selectors forces kube-dns (or CoreDNS) to look for either an ExternalName-type service or an Endpoints object. The only requirement here is that your headless service and the Endpoints object must have the same name.
Create your Ingress resource pointing to the headless services.
It should look like this (for 1 service):
Say the IP address of service1.foo is 10.10.10.10. Your headless service and the Endpoints object would be:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bait-svc
subsets:
- addresses:
  - ip: 10.10.10.10
  ports:
  - port: 80
    protocol: TCP
and Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: ssl-certs
  rules:
  - host: site1.training.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bait-svc
          servicePort: 80
So, the Ingress points to bait-svc, and bait-svc points to service1.foo. You will do this for each service.
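A slightly simpler variant of the same bait trick follows from the note above that a headless service without selectors resolves to either an Endpoints object or an ExternalName: skip the Endpoints object and point an ExternalName service at the in-cluster DNS name of service1 in the foo namespace. A sketch:

apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  type: ExternalName
  externalName: service1.foo.svc.cluster.local   # in-cluster DNS name of the target service

This avoids hard-coding the 10.10.10.10 ClusterIP, at the cost of relying on DNS resolution inside the proxy.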
UPDATE
Thinking about it now, this might not work with the GKE Ingress Controller, as on GKE you need a NodePort-type service for the HTTP load balancer to reach the service. As you can see, in my example I've got the nginx Ingress Controller.
Independently of whether it works or not, I would recommend using some other Ingress Controller. It's not that the GKE IC is not good; it is quite robust, but you almost always end up hitting some limitation. Other ICs are more flexible.
The behavior of conflicting Ingress routes is undefined and implementation-dependent. In most cases it's just last writer wins.
I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client as with Nginx and rendered in the browser. It works pretty flawlessly either way.
Running it in Express with SSR uses far more resources, however, and Express is more complicated and prone to failure if I misconfigure something. Serving it with Nginx to be rendered client-side, on the other hand, is dead simple and barely uses any resources in my cluster.
What I want to do is have a few replicas of a pod running my Express server that performs the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.
Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service than normal if the normal service is unreachable and/or responding too slowly to requests?
The easiest way to set up NGINX Ingress to meet your needs is with the default-backend annotation.
This annotation is of the form:
nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '404'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
In this example, NGINX serves custom-http-backend as the primary resource, and if that service fails it redirects the end user to default-http-backend.
You can find more details on this example here.
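You can exercise the failover without breaking anything permanently: scale the primary backend to zero so it has no active endpoints, and watch the default backend take over (this assumes custom-http-backend is backed by a Deployment of the same name):

kubectl scale deployment custom-http-backend --replicas=0
curl -i http://myapp.mydomain.com/    # now answered by default-http-backend
kubectl scale deployment custom-http-backend --replicas=1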
Kubernetes dedicated cockroachdb node - accessing admin ui via traefik ingress controller fails - page isn't redirecting properly
I have a dedicated Kubernetes node running cockroachdb. The pods get scheduled and everything is set up. I want to access the admin UI from a subdomain like so: cockroachdb.hostname.com. I have done this with the traefik dashboard and the ceph dashboard, so I know my ingress setup works. I even have cert-manager running to enable HTTPS. The browser reports that the page is not redirecting properly.
Do I have to specify the host name somewhere special?
I have tried adding this with no success: --http-host cockroachdb.hostname.com
This dedicated node has its own public IP, which is not mapped to hostname.com. I think I need to change a setting in cockroachdb, but I don't know which one, because I am new to it.
Does anyone know how to publish the admin UI via an ingress?
EDIT01: Added ingress and service config files
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-public
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/ssl-temporary-redirect: "true"
    ingress.kubernetes.io/ssl-host: "cockroachdb.hostname.com"
    traefik.frontend.rule: "Host:cockroachdb.hostname.com,www.cockroachdb.hostname.com"
    traefik.frontend.redirect.regex: "^https://www.cockroachdb.hostname.com(.*)"
    traefik.frontend.redirect.replacement: "https://cockroachdb.hostname.com/$1"
spec:
  rules:
  - host: cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  - host: www.cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  tls:
  - hosts:
    - cockroachdb.hostname.com
    - www.cockroachdb.hostname.com
    secretName: cockroachdb-secret
Service:
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: grpc
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
EDIT02:
I can access the Admin UI page now, but only by going to the external IP address of the server on port 8080. I think I need to tell my server that its IP address is mapped to the correct subdomain?
EDIT03:
On both scheduled traefik-ingress pods the following logs are created:
time="2019-04-29T04:31:42Z" level=error msg="Service not found for default/cockroachdb-public"
Your referencing looks good on the ingress side. You are using quite a few redirects; unless you really know what each one accomplishes, don't use them, or you might end up in an infinite redirect loop.
You can take a look at the following logs and methods to debug:
Run kubectl logs <traefik pod> and see the last batch of logs.
Run kubectl get service; given the "Service not found for default/cockroachdb-public" log line above, this is likely your main issue. Make sure the service exists in the default namespace.
Run kubectl port-forward svc/cockroachdb-public 8080:8080, try connecting to localhost:8080, and watch the terminal for error messages.
Run kubectl describe ingress cockroachdb-public and look at the events; this should give you something to work with.
Try accessing the service from another pod you have running: ping cockroachdb-public.default.svc.cluster.local and see if it resolves to an IP address.
Take a look at your ClusterRoleBindings and ServiceAccount; Traefik might be limited and lack permission to list services in the default namespace: kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default
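Before creating that cluster-admin binding, it may be worth confirming the permission gap first; assuming Traefik runs under the default service account in the default namespace, as the binding above implies:

kubectl auth can-i list services --as=system:serviceaccount:default:default -n default

If this prints yes, RBAC is not the problem, and the missing-service log points back to the kubectl get service check above.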