Microk8s ingress - defaultBackend - kubernetes

Inside my Ingress config I changed the default backend:
spec:
  defaultBackend:
    service:
      name: navigation-service
      port:
        number: 80
When I describe the Ingress I get:
Name: ingress-nginx
Namespace: default
Address: 127.0.0.1
Default backend: navigation-service:80 (10.1.173.59:80)
I am trying to access it via localhost and I get a 404. However, when I curl 10.1.173.59 I get my static page, so navigation-service is fine and something is wrong with the default backend? Even if I try
- pathType: Prefix
  path: /
  backend:
    service:
      name: navigation-service
      port:
        number: 80
I get a 500 error.
What am I doing wrong?
Edit: It works via NodePort, but I need to access it via the Ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navigation-deployment
spec:
  selector:
    matchLabels:
      app: navigation-deployment
  template:
    metadata:
      labels:
        app: navigation-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.3-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/index.html
          name: nginx-html
        - mountPath: /etc/nginx/conf.d/default.conf
          name: nginx-default
      volumes:
      - name: nginx-html
        hostPath:
          path: /home/x/navigation/index.html
      - name: nginx-default
        hostPath:
          path: /home/x/navigation/default.conf
---
apiVersion: v1
kind: Service
metadata:
  name: navigation-service
spec:
  type: ClusterIP
  selector:
    app: navigation-deployment
  ports:
  - name: "http"
    port: 80
    targetPort: 80
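For reference, the two Ingress snippets from above combined into one manifest (a sketch; the metadata name is taken from the describe output, and no host or ingress class is set, exactly as shown above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
spec:
  defaultBackend:
    service:
      name: navigation-service
      port:
        number: 80
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: navigation-service
            port:
              number: 80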

If someone has this problem: you need to run the ingress controller with the argument --default-backend-service=&lt;namespace&gt;/&lt;service_name&gt;.
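For example, with the MicroK8s ingress addon this means editing the controller DaemonSet and adding the flag to its args (a sketch; the DaemonSet and namespace names below are what the addon usually creates and may differ in your cluster):
# Open the controller for editing (names are an assumption, adjust to your cluster):
#   microk8s kubectl -n ingress edit daemonset nginx-ingress-microk8s-controller
#
# Then extend the existing container args, keeping everything that is already there:
args:
- /nginx-ingress-controller
- --default-backend-service=default/navigation-service   # <namespace>/<service_name>
# ...rest of the original args unchanged
After the controller pods restart, requests that do not match any rule should be served by navigation-service instead of the controller's built-in 404 backend.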

Related

GCE ingress resource is taking too long to receive an IP address in GKE cluster. What could be the reason?

I am trying to deploy a sample application on a GKE cluster. All the resources are created successfully except the Ingress resource, which takes around 15-20 minutes to receive an IP address. By that time the application has timed out and is in an errored state. The usual time to assign the IP address is 2-3 minutes. Can anyone help me debug this?
This is happening only on one specific cluster, while the same Ingress gets its IP within 2 minutes in other GKE clusters.
Below are the manifest files I am using to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-printer-deployment
spec:
  selector:
    matchLabels:
      app: zone-printer
  template:
    metadata:
      labels:
        app: zone-printer
    spec:
      containers:
      - name: zone-printer
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: zone-printer-service
spec:
  type: ClusterIP
  selector:
    app: zone-printer
  ports:
  - port: 9000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 8000
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 8000
      - path: /zone-printer
        pathType: ImplementationSpecific
        backend:
          service:
            name: zone-printer-service
            port:
              number: 9000
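When the address stays pending like this, the Ingress events are usually the first place to look; a minimal check (a sketch, using the manifest names above) is:
# Watch the Ingress until an ADDRESS shows up
kubectl get ingress awesome-ingress --watch

# The events here normally say what the GCE controller is still waiting on
# (backend health checks, quota, firewall rules, ...)
kubectl describe ingress awesome-ingress
kubectl get events --sort-by='.metadata.creationTimestamp'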

Kubernetes Traefik Ingress getting a bad gateway error

I've set up the following Ingress and deployment for Traefik on Kubernetes. I keep getting a bad gateway error on the actual domain name.
For some reason the service isn't working, or I have got the connections wrong, or something is amiss with the selectors, etc.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: wordpress
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: wordpress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    # traefik.ingress.kubernetes.io/frontend-entry-points: http,https
spec:
  rules:
  - host: test.example.services
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: http
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
My code is above, so if there are any corrections to be made, advice is appreciated.
There are a few things to consider:
I see that you are missing namespace: in your metadata:. Check if that is the case.
Try to create two services: one for wordpress and one for traefik-ingress-lb.
You might have used too many spaces after ports:. Try something like this:
ports:
- name: http
  port: 80
  targetPort: 80
  protocol: TCP
Check if your labels are correctly configured. If you need more details regarding them, try this documentation.
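Putting those points together, a web Service that selects only the WordPress pods would look roughly like this (a sketch based on the Deployment's pod labels above; the traefik-ingress-lb selector should go into its own, separate Service):
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default         # set this to whatever namespace you deploy into
  labels:
    app: wordpress
spec:
  selector:                   # a single selector block, matching the pod template labels
    app: wordpress
    tier: frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP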
Please let me know if that helped.

High latency using gunicorn with django in EKS (Kubernetes) and ELB

I'm running a Django REST API in Kubernetes and noticed really poor performance when I use gunicorn in the same deployment to serve it. On average a simple HttpResponse takes ~6000 ms; without gunicorn, serving with python manage.py runserver, the same request takes only 50 ms.
Here is my deployment for the Django REST API, running gunicorn with the command gunicorn api.wsgi --bind 0.0.0.0:8000:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: djangoapi
    type: web
  name: djangoapi
  namespace: "default"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: djangoapi
        type: web
    spec:
      containers:
      - name: djangoapi
        image: <repo>/app:v0.9a
        imagePullPolicy: Always
        args:
        - gunicorn
        - api.wsgi
        - --bind
        - 0.0.0.0:8000
        envFrom:
        - configMapRef:
            name: djangoapi-config
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: djangoapi-svc
  namespace: "default"
  labels:
    app: djangoapi
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: djangoapi
    type: web
  type: NodePort
If I replace the arguments with the following, the response time immediately drops to 50 ms:
- python
- manage.py
- runserver
- 0.0.0.0:8000
Just in case, here are my YAML files for the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: "default"
  labels:
    app: api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:<>:certificate/<>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '8'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/success-codes: "200"
spec:
  rules:
  - host: test.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: echoheaders
          servicePort: 80
  - host: dev.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: djangoapi-svc
          servicePort: 8000
My alb-ingress-controller is based on https://github.com/kubernetes-sigs/aws-alb-ingress-controller.
I'm wondering if the way I'm deploying gunicorn is wrong, or if there is some other way to resolve the latency issue.
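One thing worth noting about that invocation: gunicorn defaults to a single synchronous worker, so a common experiment (a sketch, not a confirmed fix for this setup) is to pass explicit worker and thread counts through the same container args:
args:
- gunicorn
- api.wsgi
- --bind
- 0.0.0.0:8000
- --workers        # gunicorn's default is 1 worker
- "3"
- --threads        # more than 1 thread switches to the threaded (gthread) worker class
- "2"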

How to setup HTTPS load balancer in kubernetes

I have a requirement to make my application support requests over HTTPS and block the HTTP port. I want to use the certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to make it HTTPS in GKE. I have seen a couple of pieces of documentation, but they are not clear. This is my current Kubernetes deployment file; please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8081
    protocol: TCP
    name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8081",
          "--backend=127.0.0.1:8080",
          "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
          "--rollout_strategy=managed",
        ]
      - name: integeration-container
        image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
        readinessProbe:
          httpGet:
            path: /healthcheck
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: 500M
        env:
        - name: LOGGING_FILE
          value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
    http:
      paths:
      - path: /*
        backend:
          serviceName: oms-integeration-service
          servicePort: 80
You have to create a Secret that contains your SSL certificate and then reference that Secret in your Ingress spec, as explained here.
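A minimal sketch of that (the secret name and certificate/key file names are placeholders; the certificate and key must be PEM files, not a JKS keystore, because that is what kubectl create secret tls expects):
# Create the TLS secret from the files your company provides:
#   kubectl create secret tls oms-tls-secret --cert=tls.crt --key=tls.key
#
# Then reference it from the Ingress spec:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: oms-tls-secret
    hosts:
    - "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
  rules:
  - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
    http:
      paths:
      - path: /*
        backend:
          serviceName: oms-integeration-service
          servicePort: 80
With this, the GKE load balancer serves the certificate; blocking plain HTTP is a separate step, for example the kubernetes.io/ingress.allow-http: "false" annotation on the Ingress.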

Kubernetes ingress with 2 services does not always find the correct service

I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: solidary-life
  annotations:
    kubernetes.io/ingress.global-static-ip-name: sl-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    app: sl
spec:
  rules:
  - host: app-solidair-vlaanderen.com
    http:
      paths:
      - path: /v0.0.1/*
        backend:
          serviceName: backend-backend
          servicePort: 8080
      - path: /auth/*
        backend:
          serviceName: security-backend
          servicePort: 8080
  tls:
  - secretName: solidary-life-tls
    hosts:
    - app-solidair-vlaanderen.com
The backend service is configured like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: sl
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
      - name: backend-app
        image: gcr.io/solidary-life-218713/sv-backend:0.0.6
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /v0.0.1/api/online
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
  - port: 8080
    targetPort: 8080
and the auth server service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: security
  labels:
    app: sl-security
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
      - name: security-app
        image: gcr.io/solidary-life-218713/sv-security:0.0.1
        ports:
        - name: http
          containerPort: 8080
        - name: management
          containerPort: 9090
        - name: jgroups-tcp
          containerPort: 7600
        - name: jgroups-tcp-fd
          containerPort: 57600
        - name: jgroups-udp
          containerPort: 55200
          protocol: UDP
        - name: jgroups-udp-mc
          containerPort: 45688
          protocol: UDP
        - name: jgroups-udp-fd
          containerPort: 54200
          protocol: UDP
        - name: modcluster
          containerPort: 23364
        - name: modcluster-udp
          containerPort: 23365
          protocol: UDP
        - name: txn-recovery-ev
          containerPort: 4712
        - name: txn-status-mgr
          containerPort: 4713
        readinessProbe:
          httpGet:
            path: /auth/
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
  - port: 8080
    targetPort: 8080
Now I can go to the URLs:
https://app-solidair-vlaanderen.com/v0.0.1/api/online
https://app-solidair-vlaanderen.com/auth/
Sometimes this works, sometimes I get 404s. This is quite annoying, and I am quite new to Kubernetes, so I can't find the error.
Can it have something to do with the "sl" label that's on both the backend and security service definitions?
Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each?
In essence, you have two Services that are randomly selecting pods belonging to both the security Deployment and the backend Deployment. One way to determine where a Service is really sending requests is to look at its endpoints by running:
kubectl -n <your-namespace> <get or describe> ep
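As an illustration of the label fix (a sketch; the role label is made up here and would also have to be added to the matching Deployment pod templates), the two Services could select disjoint sets of pods like this:
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
    role: backend          # hypothetical label, set it on the backend pod template too
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
    role: security         # hypothetical label, set it on the security pod template too
  ports:
  - port: 8080
    targetPort: 8080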