Set up a custom default backend for the NGINX Ingress Controller in Kubernetes

I am stuck setting up a custom default backend for the error pages of the NGINX Ingress Controller. By default, the controller pods serve NGINX's stock error pages for errors such as 404 and 50x, and I want to customize them.
I installed the NGINX Ingress Controller as a DaemonSet following this tutorial, applying the DaemonSet YAML file with kubectl.
I found a tutorial on NGINX Annotations that shows how to set up a service that catches 404 errors from the ingress controller, and I created such a service following this GitHub repo. My NGINX Ingress Controller DaemonSet YAML file is:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      annotations:
        nginx.ingress.kubernetes.io/custom-http-errors: "404,415"
        nginx.ingress.kubernetes.io/default-backend: nginx-ingress/custom-default-backend
        #prometheus.io/scrape: "true"
        #prometheus.io/port: "9113"
        #prometheus.io/scheme: http
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: 10.207.149.80:80/smas/nginx/nginx-ingress:1.12.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: readiness-port
          containerPort: 8081
        - name: prometheus
          containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
It does not work :(
I also read the Command-line Arguments and ConfigMap Resource documentation, but I do not see any argument or config option that allows me to specify my custom default backend service.
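For comparison, the community kubernetes/ingress-nginx controller takes the default backend as a command-line flag (--default-backend-service=<namespace>/<service>). Something like the snippet below is what I was expecting to find for my DaemonSet; the last flag is hypothetical for this controller, and custom-default-backend is the service I created:

args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
# hypothetical, modeled on the community controller's
# --default-backend-service=<namespace>/<service> flag
- -default-backend-service=$(POD_NAMESPACE)/custom-default-backend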
Could you show me how to set up a custom default backend service for my DaemonSet?
Many thanks.

Related

Permission problem w/ helm3 installation of traefik on port 80 (hostNetwork)

I'm studying Helm 3 and Kubernetes (microk8s).
While trying the following command:
helm install traefik traefik/traefik -n traefik --values traefik-values.yaml
and traefik-values.yaml has the following values:
additionalArguments:
  - "--certificatesresolvers.letsencrypt.acme.email=<my-email>"
  - "--certificatesresolvers.letsencrypt.acme.storage=/data/acme.json"
  - "--certificatesresolvers.letsencrypt.acme.caserver=https://acme-v02.api.letsencrypt.org/directory"
  - "--certificatesResolvers.letsencrypt.acme.tlschallenge=true"
  - "--api.insecure=true"
  - "--accesslog=true"
  - "--log.level=INFO"
hostNetwork: true
ipaddress: <my-ip>
service:
  type: ClusterIP
ports:
  web:
    port: 80
  websecure:
    port: 443
I receive this bind-permission error:
traefik.go:76: command traefik error: error while building entryPoint web: error preparing server: error opening listener: listen tcp :80: bind: permission denied
On the other hand, I can install Traefik on the same ports (80 and 443) using the following YAML file (approximately the example from Traefik's site):
---
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      containers:
      - image: traefik:2.4
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        # - name: admin
        #   containerPort: 8080
        #   hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --providers.kubernetesingress=true
        # you need to manually set this IP to the incoming public IP
        # that your ingress resources would use. Note it only affects
        # status and kubectl UI, and doesn't really do anything
        # It could even be left out https://github.com/containous/traefik/issues/6303
        - --providers.kubernetesingress.ingressendpoint.ip=<my-server-ip>
        ## uncomment these and the ports above and below to enable
        ## the web UI on the host NIC port 8080 in **insecure** mode
        - --api.dashboard=true
        - --api.insecure=true
        - --log=true
        - --log.level=INFO
        - --accesslog=true
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --certificatesresolvers.leresolver.acme.tlschallenge=true # <== Enable TLS-ALPN-01 to generate and renew ACME certs
        - --certificatesresolvers.leresolver.acme.email=<email> # <== Setting email for certs
        - --certificatesresolvers.leresolver.acme.storage=/data/acme.json # <== Defining acme file to store cert information
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  # - protocol: TCP
  #   port: 8080
  #   name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: traefik
The two specs are not identical but quite similar as far as I can understand. They both create a ServiceAccount in the 'traefik' namespace and grant a ClusterRole.
What part determines the permission on port 80?
There's an open issue on the Traefik helm chart where Jasper Ben suggests a working solution:
hostNetwork: true
ports:
  web:
    port: 80
    redirectTo: websecure
  websecure:
    port: 443
securityContext:
  capabilities:
    drop: [ALL]
    add: [NET_BIND_SERVICE]
  readOnlyRootFilesystem: true
  runAsGroup: 0
  runAsNonRoot: false
  runAsUser: 0
The missing part in the Helm chart values is the NET_BIND_SERVICE capability in the securityContext.
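In other words, binding to a port below 1024 with hostNetwork: true requires either root or the NET_BIND_SERVICE capability, and the chart's defaults provide neither, which is why the listener on port 80 fails. A minimal sketch of the values file from the question with that securityContext merged in (untested; the letsencrypt resolver arguments from the question are omitted here for brevity):

additionalArguments:
  - "--api.insecure=true"
  - "--accesslog=true"
  - "--log.level=INFO"
hostNetwork: true
service:
  type: ClusterIP
ports:
  web:
    port: 80
  websecure:
    port: 443
securityContext:
  capabilities:
    drop: [ALL]
    add: [NET_BIND_SERVICE]
  runAsGroup: 0
  runAsNonRoot: false
  runAsUser: 0

Re-running the same helm install command from the question with this file should then allow Traefik to bind to ports 80 and 443.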

Kong ingress controller has no effect on ingress resource

I have a Kubernetes v1.10 cluster on CentOS 7, installed "the hard way".
I installed the Kong ingress controller using Helm:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/kong
and got this output:
NOTES:
1. Kong Admin can be accessed inside the cluster using:
     DNS=guiding-wombat-kong-admin.default.svc.cluster.local
     PORT=8444
   To connect from outside the K8s cluster:
     HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
     PORT=$(kubectl get svc --namespace default guiding-wombat-kong-admin -o jsonpath='{.spec.ports[0].nodePort}')
2. Kong Proxy can be accessed inside the cluster using:
     DNS=guiding-wombat-kong-proxy.default.svc.cluster.local
     PORT=8443
   To connect from outside the K8s cluster:
     HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
     PORT=$(kubectl get svc --namespace default guiding-wombat-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}')
and I deployed this dummy app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
---
and I deployed this ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
and when I run:
kubectl get ing
NAME      HOSTS     ADDRESS   PORTS     AGE
foo-bar   foo.bar             80        1m
and when I browse
https://node-IP:controller-admin
{"next":null,"data":[]}
How can I troubleshoot this issue and find the solution?
Thank you :D
I recommend installing it using this guide, just not using minikube.
It worked for me on AWS:
$ curl -H 'Host: foo.bar' http://35.162.32.30
Hostname: http-svc-66ffffc458-jkxsl

Pod Information:
    node name:      ip-x-x-x-x.us-west-2.compute.internal
    pod name:       http-svc-66ffffc458-jkxsl
    pod namespace:  default
    pod IP:         192.168.x.x

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=192.168.x.x
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://192.168.x.x:8080/

Request Headers:
    accept=*/*
    connection=keep-alive
    host=192.168.x.x:8080
    user-agent=curl/7.58.0
    x-forwarded-for=172.x.x.x
    x-forwarded-host=foo.bar
    x-forwarded-port=8000
    x-forwarded-proto=http
    x-real-ip=172.x.x.x

Request Body:
    -no body in request-
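If the proxy still returns nothing for foo.bar, one more thing worth checking (an assumption on my part, since the stable chart does not always enable the ingress controller component): the Ingress may need an ingress class annotation matching the class your Kong controller watches, for example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    # the class name here is an assumption; it must match whatever class
    # your Kong ingress controller is configured to watch
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80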

Azure Kubernetes nginx-Ingress: preserve client IP

I am trying to preserve the client IP with the proxy protocol. Unfortunately it does not work.
Azure LB => nginx Ingress => Service
I end up seeing the ingress controller pod's IP instead of the client IP.
Ingress Controller Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=default/nginx-ingress-controller
Ingress Controller Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    k8s-app: nginx-ingress-lb
nginx config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Got it to work.
In the Ingress Controller Deployment I changed the image to
gcr.io/google_containers/nginx-ingress-controller:0.8.3
and removed the configmap.
I am using the ingress to forward to a pod with a .NET Core API.
Adding
var options = new ForwardedHeadersOptions()
{
    ForwardedHeaders = Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.All,
    RequireHeaderSymmetry = false,
    ForwardLimit = null
};
// add known proxy network(s) here
options.KnownNetworks.Add(network);
app.UseForwardedHeaders(options);
to Startup did the trick.
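As a side note (an assumption about newer clusters, not something I tested on this exact setup): the service.beta.kubernetes.io/external-traffic: "OnlyLocal" annotation used above was later replaced by the externalTrafficPolicy field on the Service spec, which preserves the client source IP by only routing to node-local endpoints:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  type: LoadBalancer
  # preserves the client source IP; traffic is only sent to nodes
  # that actually run an ingress controller pod
  externalTrafficPolicy: Local
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    k8s-app: nginx-ingress-lb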

Kubernetes service as env var to frontend usage

I'm trying to configure Kubernetes, and in my project I've separated the UI and the API.
I created one Pod and exposed both as services.
How can I set API_URL inside the pod.yaml configuration so that requests can be sent from the user's browser?
I can't use localhost because the communication isn't between containers.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
    - image: 'ui:v1'
      name: ui
      ports:
        - name: ui
          containerPort: 5003
          hostPort: 5003
      env:
        - name: API_URL
          value: <how can I set the API address here?>
    - image: 'api:v1'
      name: api
      ports:
        - name: api
          containerPort: 5000
          hostPort: 5000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: url
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    name: api
spec:
  type: NodePort
  ports:
    - name: 'http'
      protocol: 'TCP'
      port: 5000
      targetPort: 5000
      nodePort: 30001
  selector:
    name: project
---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    name: ui
spec:
  type: NodePort
  ports:
    - name: 'http'
      protocol: 'TCP'
      port: 80
      targetPort: 5003
      nodePort: 30003
  selector:
    name: project
The service IP is already available in an environment variable inside the pod, because Kubernetes initializes a set of environment variables for each service that exists at that moment.
To list all the environment variables of a pod:
kubectl exec <pod-name> env
If the pod was created before the service, you must delete it and create it again.
Since you named your service api, one of the variables the command above should list is API_SERVICE_HOST.
But you don't really need to look up the service IP address in environment variables. You can simply use the service name as the hostname: any pod can connect to the service api simply by calling api.default.svc.cluster.local (assuming your service is in the default namespace).
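A minimal sketch of what that looks like for the ui container's environment, assuming the api Service lives in the default namespace (note that this only helps for calls made from inside the cluster; a browser on the user's machine cannot resolve cluster DNS names, which is what the Ingress below addresses):

env:
  - name: API_URL
    # <service>.<namespace>.svc.cluster.local plus the api Service port;
    # resolvable only from inside the cluster, not from the user's browser
    value: http://api.default.svc.cluster.local:5000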
I created an Ingress to solve this issue and point to DNS names instead of IPs.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project
spec:
  tls:
    - secretName: tls
  backend:
    serviceName: ui
    servicePort: 5003
  rules:
    - host: www.project.com
      http:
        paths:
          - backend:
              serviceName: ui
              servicePort: 5003
    - host: api.project.com
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 5000
deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
    - image: 'ui:v1'
      name: ui
      ports:
        - name: ui
          containerPort: 5003
          hostPort: 5003
      env:
        - name: API_URL
          value: https://api.project.com
    - image: 'api:v1'
      name: api
      ports:
        - name: api
          containerPort: 5000
          hostPort: 5000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: url

Kubernetes rollout gives 503 errors when switching web pods

I'm running this command:
kubectl set image deployment/www-deployment VERSION_www=newImage
It works fine, but there's a 10-second window where the website returns 503, and I'm a perfectionist.
How can I configure Kubernetes to wait for the new pods to be available before switching the ingress?
I'm using the nginx ingress controller from here:
gcr.io/google_containers/nginx-ingress-controller:0.8.3
And this yaml for the web server:
# Service and Deployment
apiVersion: v1
kind: Service
metadata:
name: www-service
spec:
ports:
- name: http-port
port: 80
protocol: TCP
targetPort: http-port
selector:
app: www
sessionAffinity: None
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: www-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: www
spec:
containers:
- image: myapp/www
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: http-port
name: www
ports:
- containerPort: 80
name: http-port
protocol: TCP
resources:
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /etc/env-volume
name: config
readOnly: true
imagePullSecrets:
- name: cloud.docker.com-pull
volumes:
- name: config
secret:
defaultMode: 420
items:
- key: www.sh
mode: 256
path: env.sh
secretName: env-secret
The Docker image is based on a node.js server image.
/healthz is a path on the webserver which returns ok. I thought the liveness probe would make sure the server was up and ready before switching to the new version.
Thanks in advance!
Within the Pod lifecycle it's defined that:
The default state of Liveness before the initial delay is Success.
To make sure you don't run into issues, configure a ReadinessProbe for your Pods as well, and consider setting .spec.minReadySeconds on your Deployment; a sketch is below.
You'll find the details in the Deployment documentation.
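A minimal sketch of the relevant Deployment fields, reusing the /healthz path and http-port from the question (the probe timings are placeholder values, not tested against this app):

spec:
  minReadySeconds: 5            # count a new pod as available only after it has been ready this long
  template:
    spec:
      containers:
        - name: www
          readinessProbe:       # traffic is only routed to pods that pass this probe
            httpGet:
              path: /healthz
              port: http-port
            initialDelaySeconds: 5
            periodSeconds: 5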