I am able to access my Django app deployment using a LoadBalancer Service, but when I switch to a ClusterIP Service with ingress-nginx I get 503 Service Temporarily Unavailable when I access the site via the host URL. Describing the Ingress also shows error: endpoints "django-service" not found and error: endpoints "default-http-backend" not found. What am I doing wrong?
These are my Service and Ingress YAML manifests:
---
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
    - hosts:
        - django.example.com
  rules:
    - host: django.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: django-service
                port:
                  number: 80
  ingressClassName: nginx
kubectl get all
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/django-app-5bdd8ffff9-79xzj 1/1 Running 0 7m44s
pod/postgres-58fffbb5cc-247x9 1/1 Running 0 7m44s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/django-service ClusterIP 10.233.29.58 <none> 80/TCP 7m44s
service/pg-service ClusterIP 10.233.14.137 <none> 5432/TCP 7m44s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/django-app 1/1 1 1 7m44s
deployment.apps/postgres 1/1 1 1 7m44s
NAME DESIRED CURRENT READY AGE
replicaset.apps/django-app-5bdd8ffff9 1 1 1 7m44s
replicaset.apps/postgres-58fffbb5cc 1 1 1 7m44s
describe ingress
$ kubectl describe ing django-ingress
Name: django-ingress
Labels: <none>
Namespace: django
Address: 10.10.30.50
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
SNI routes django.example.com
Rules:
Host Path Backends
---- ---- --------
django.example.com
/ django-service:80 (<error: endpoints "django-service" not found>)
Annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync
I think you forgot to link your Service to your Deployment: the Service has no selector.
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8000
  selector:
    app: your-deployment-name
The same label must be set in your Deployment as well:
spec:
  selector:
    matchLabels:
      app: your-deployment-name
  template:
    metadata:
      labels:
        app: your-deployment-name
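Once the selector matches the Pod labels, the Service gets endpoints, which is exactly what your describe output says is missing. A quick way to confirm (the IP and age below are only illustrative):

$ kubectl get endpoints django-service
NAME             ENDPOINTS           AGE
django-service   10.233.64.12:8000   2m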
Related
I created a Kubernetes cluster and installed the ingress-nginx controller. I am getting a 404 Not Found when I go to the ingress-nginx-controller load balancer external IP, which is aa************.us-east-1.elb.amazonaws.com
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx
to get the service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.219.162 aa************.us-east-1.elb.amazonaws.com 80:32091/TCP,443:32305/TCP 154m
ingress-nginx-controller-admission ClusterIP 10.100.208.135 <none> 443/TCP 154m
my ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: factory
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: factory.**.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: factory
                port:
                  number: 80
    - host: api.factory.**.com # myfactoryapi-factorydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: factory
                port:
                  number: 8082
All my namespaces:
kubectl get namespace
NAME STATUS AGE
default Active 34d
ingress-nginx Active 160m
kerberos-factory Active 34d
kube-node-lease Active 34d
kube-public Active 34d
kube-system Active 34d
mongodb Active 8d
to get my ingress
kubectl get ing -n kerberos-factory
NAME CLASS HOSTS ADDRESS PORTS AGE
factory <none> factory.**.com,api.factory.**.com a****.us-east-1.elb.amazonaws.com 80 65m
to describe the ingress
kubectl describe ing -n kerberos-factory
Name: factory
Namespace: kerberos-factory
Address: a********.us-east-1.elb.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
factory.***.com
/ factory:80 (192.168.34.220:80)
api.factory.***.com
/ factory:8082 (192.168.34.220:8082)
Annotations: kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 3m8s (x3 over 4m51s) nginx-ingress-controller Scheduled for sync
Why am I getting 404 Not Found?
I am trying to use minikube to deploy a sample Flask app, but I'm getting a 503 error from nginx. Please note that I am able to access the app using a NodePort service config.
I checked the minikube IP, which is mapped to localhost, and tried to access the app that way, but I still get the 503 error. I'm not sure what I missed. I enabled the minikube ingress addon for nginx.
Here are my files:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-deployment
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: flaskapp
          image: <repo>/sample-flask-app:1.0
          ports:
            - containerPort: 5000
          env:
            - name: APPLICATION_SETTINGS
              value: prd_config.py
      imagePullSecrets:
        - name: jfrog-secret
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-service
  labels:
    app: flaskapp
spec:
  selector:
    app: flaskapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flaskapp-ingress
  labels:
    app: flaskapp
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
    - host: mydashboard.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flaskapp-service
                port:
                  number: 5000
Ingress status:
minikube kubectl -- get ingress flaskapp-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
flaskapp-ingress nginx mydashboard.com localhost 80 18m
Cluster status:
minikube kubectl -- get all
NAME READY STATUS RESTARTS AGE
pod/flaskapp-deployment-7f59f96fd5-j9mv9 1/1 Running 1 (103m ago) 15h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/flaskapp-deployment ClusterIP 10.103.143.58 <none> 5000/TCP 34m
service/flaskapp-service ClusterIP 10.111.242.99 <none> 5000/TCP 15h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapp-deployment 1/1 1 1 15h
NAME DESIRED CURRENT READY AGE
replicaset.apps/flaskapp-deployment-7f59f96fd5 1 1 1 15h
I installed one Kubernetes master and two Kubernetes workers on-premises.
Then I installed MetalLB as a LoadBalancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my YAML files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
        - name: myapp-tst
          image: myapp-tomcat
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
    - name: myapp-tst-port
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: myapp-tst-service
              servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the browser request is routed to one pod; if I refresh or open a new tab on the same URL, the request is routed to another pod.
I need a user who enters the URL in the browser to stay connected to the same pod for the whole session, not be switched between pods.
How can I solve this, if it is possible?
On my VMs I solved this with sticky sessions; how can I enable them on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file, sessionAffinity is set to None.
You should try setting it to ClientIP.
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."
I'm trying to create a simple nginx service on GKE, but I'm running into strange problems.
Nginx runs on port 80 inside the Pod. The Service is accessible on port 8080. (This works: I can run curl myservice:8080 from inside the pod and see the nginx home page.)
But when I try to make it publicly accessible using an Ingress, I run into trouble. Here are my deployment, service and ingress files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 32111
      targetPort: 80
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - http:
        paths:
          # The * is needed so that all traffic gets redirected to nginx
          - path: /*
            backend:
              serviceName: my-service
              servicePort: 80
After a while, this is what my ingress status looks like:
$ k describe ingress test-ingress
Name: test-ingress
Namespace: default
Address: 35.186.255.184
Default backend: default-http-backend:80 (10.44.1.3:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* my-service:32111 (<none>)
Annotations:
backends: {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
forwarding-rule: k8s-fw-default-test-ingress--ecc76c47732c7f90
target-proxy: k8s-tp-default-test-ingress--ecc76c47732c7f90
url-map: k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 18m loadbalancer-controller default/test-ingress
Normal CREATE 17m loadbalancer-controller ip: 35.186.255.184
Warning Service 1m (x5 over 17m) loadbalancer-controller Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
Normal Service 1m (x5 over 17m) loadbalancer-controller no user specified default backend, using system default
I don't understand why it's saying that it can't find nodeport - the service has nodePort defined and it is of type NodePort as well. Going to the actual IP results in default backend - 404.
Any ideas why?
The configuration is missing a health check endpoint, which the GKE load balancer needs in order to know whether the backend is healthy. The containers section for nginx should also specify:
livenessProbe:
  httpGet:
    path: /
    port: 80
The GET / on port 80 is the default configuration, and can be changed.
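For context, here is a sketch of how that probe might sit inside the Deployment from the question (the initialDelaySeconds and periodSeconds values are illustrative, not required):

    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10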
I have followed the docs and installed Kibana, and I would like to access the UI at a URL like http://grs-preprodkubemaster01:5601/kibana. When I used the service with type: LoadBalancer, the service didn't come up, so I deleted type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note: I don't have AWS.)
But I am not sure how to access the UI. I tried this URL, but it's not working:
http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana
Any ideas how to access the Kibana UI? I checked the service and deployment, and everything is green.
Another thing I tried is this URL, which I got from the command kubectl cluster-info:
https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
However, this shows me this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kibana-logging\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "kibana-logging",
    "kind": "services"
  },
  "code": 403
}
So, as another try, I changed the Kibana service to NodePort, but that didn't work either.
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  selector:
    k8s-app: kibana-logging
  type: NodePort
  ports:
    - port: 5601
      protocol: TCP
      targetPort: ui
      nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy, and I would like to have it work without it.
Use the NodePort you defined in your service:
https://10.123.24.107:30887
The most common way to expose an internal service outside the cluster is an Ingress.
First, you need to have an Ingress controller running in your Kubernetes cluster.
There are two types of maintained Ingress controllers: GCE and nginx.
Then you need to create a YAML file as shown below and change it according to your needs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
When you create it using kubectl create -f, you should see something like this:
$ kubectl get ingress
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 1.2.3.4
In this example, 1.2.3.4 is the IP allocated by Ingress controller.
When you have all things in place, you'll be able to access your application (Kibana) by IP 1.2.3.4
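Assuming 1.2.3.4 is the address the controller allocated, a quick check from outside the cluster could be as simple as (no Host header is needed here because this Ingress only defines a default backend):

$ curl http://1.2.3.4/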
Please find more examples and use cases in the Ingress documentation.
You can also expose a Kubernetes service without using the Ingress resource:
Service.Type=LoadBalancer
Service.Type=NodePort
Port Proxy
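If "Port Proxy" here means something like kubectl port-forward, a quick local check without any Ingress could look like this (assuming the kibana-logging Service in kube-system from the question):

$ kubectl -n kube-system port-forward svc/kibana-logging 5601:5601
# then open http://localhost:5601 in a browser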
I got it to work with these changes in my Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/rewrites: "serviceName=kubernetes-dashboard rewrite=/;serviceName=kibana-logging rewrite=/"
spec:
  rules:
    - host: HOSTNAME_OF_MASTER
      http:
        paths:
          - path: /kube-ui/
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 80
          - path: /kibana/
            backend:
              serviceName: kibana-logging
              servicePort: 5601
and my Kibana service is set up as NodePort:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
    - port: 5601
      protocol: TCP
      targetPort: ui
  selector:
    k8s-app: kibana-logging
and the dashboard is also configured like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
Once you have the svc running, you can access Kibana using the NodePort from any node. Example: http://node01_ip:31325/app/kibana
$ kubectl get svc -o wide -n=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-logging ClusterIP 10.xx.120.130 <none> 9200/TCP 11h k8s-app=elasticsearch-logging
heapster ClusterIP 10.xx.232.165 <none> 80/TCP 11h k8s-app=heapster
kibana-logging NodePort 10.xx.39.255 <none> 5601:31325/TCP 11h k8s-app=kibana-logging
kube-dns ClusterIP 10.xx.0.xx <none> 53/UDP,53/TCP 12h k8s-app=kube-dns
kubernetes-dashboard NodePort 10.xx.xx.xx <none> 80:32086/TCP 11h k8s-app=kubernetes-dashboard
monitoring-influxdb ClusterIP 10.13.199.138 <none> 8086/TCP 11h k8s-app=influxdb