Host 2 (the subdomain) works perfectly, but host 1 returns the error 'default backend - 404'. Both Dockerfiles (for web1 and web2) work on my local machine, and the YAML files are almost identical except for the name variables, nodePort, image location, etc.
Any ideas what could be wrong here?
Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: test-ip
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: host1.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web1
          servicePort: 80
  - host: sub.host1.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web2
          servicePort: 80
YAML for web1
apiVersion: v1
kind: Service
metadata:
  name: web1
spec:
  selector:
    app: web1
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    nodePort: 32112
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web1-deployment
spec:
  selector:
    matchLabels:
      app: web1
  replicas: 1
  template:
    metadata:
      labels:
        app: web1
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: web1
        image: gcr.io/image..
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        # HTTP Health Check
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: host1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web1
          servicePort: 80
  - host: sub.host1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web2
          servicePort: 80
I think this should work; I am using the same config myself. Please correct any indentation issues in the YAML I posted here.
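A quick way to validate the manifests before applying them (a minimal sketch; web1.yaml and ingress.yaml are placeholders for wherever the YAML above is saved), since indentation mistakes show up as parse or validation errors:

# Validate structure without creating anything (older kubectl versions use plain --dry-run)
kubectl apply --dry-run=client -f web1.yaml -f ingress.yaml

# Confirm the Ingress picked up both hosts and that each Service has real endpoints
kubectl describe ingress ingress
kubectl get endpoints web1 web2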
Related
I want to configure the ingress to work with my domain name.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: app1
          servicePort: 80
      - path: /my-api2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - name: app1
    port: 3000
    targetPort: 80
  - name: app2
    port: 4000
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: app1
        image: XXX
        ports:
        - name: app1
          containerPort: 3000
      - name: app2
        image: XXX
        ports:
        - name: app2
          containerPort: 4000
I can reach the app1 service at serverIP:3000 (for example, 172.16.111.211:3000/my-api1), but remotely it always returns a 503 status code (curl https://example.com/my-api1).
# kubectl describe ingress app-ingress
Name:             app-ingress
Namespace:        default
Address:          serverIP
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  app-tls terminates example.com
Rules:
  Host         Path                Backends
  ----         ----                --------
  example.com
               /my-api1(/|$)(.*)   app1:80 (<error: endpoints "app1" not found>)
               /my-api2            app2:80 (<error: endpoints "app2" not found>)
First, your service names do not match: you created a Service named my-api, but in the Ingress you refer to app1 and app2, which do not exist.
Second, the selector labels of your Deployment and Service do not match: the Deployment is created with the label app: user-api, but the Service selector specifies app: my-api.
If you need port 80 for both applications, you have to create two separate Services and reference them in your Ingress.
The following should work for your requirement.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: app1
          servicePort: 80
      - path: /my-api2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  selector:
    app: user-api
  ports:
  - name: app1
    port: 80
    targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: app2
spec:
  selector:
    app: user-api
  ports:
  - name: app2
    port: 80
    targetPort: 4000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: app1
        image: XXX
        ports:
        - name: app1
          containerPort: 3000
      - name: app2
        image: XXX
        ports:
        - name: app2
          containerPort: 4000
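To confirm the fix took effect (a small sketch, assuming the manifests above are applied in the default namespace), each Service should now list the user-api Pod in its endpoints instead of the '<error: endpoints ... not found>' shown in the describe output above:

# Both Services should now resolve to the user-api Pod(s)
kubectl get endpoints app1 app2

# The Service selectors must match the Pod labels exactly
kubectl get pods -l app=user-api -o wide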
You made a mistake with port and targetPort.
It should be:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - name: app1
    port: 3000
    targetPort: 3000
  - name: app2
    port: 4000
    targetPort: 4000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: my-api
          servicePort: app1
      - path: /my-api2
        backend:
          serviceName: my-api
          servicePort: app2
port exposes the Service port; targetPort targets the port exposed by the Pod.
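A quick way to see that difference in a running cluster (a hedged sketch, assuming the Service above has been applied): the PORT(S) column shows the Service ports, while Endpoints shows podIP:targetPort, i.e. where traffic is actually delivered.

# Service-side ports (3000, 4000)
kubectl get svc my-api

# Port, TargetPort and Endpoints lines side by side
kubectl describe svc my-api | grep -E 'Port|Endpoints'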
I want to expose my two API services through the Ingress on a single port, 80.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - name: api1
    port: 3000
    targetPort: 3000
  - name: api2
    port: 4000
    targetPort: 4000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api(/|$)(.*)
        backend:
          serviceName: my-api
          servicePort: 80
But when I try to reach https://example.com/my-api, it always returns a 503 status code.
servicePort: 80 doesn't mean that the nginx ingress serves on port 80; it is the port on your backend Service, and it sounds like you have two ports: 3000 and 4000.
The nginx ingress controller serves on port 80 by default, and also on 443 if you have TLS enabled. In your case, if you want to serve both APIs you can simply separate the paths.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: my-api
          servicePort: 3000
      - path: /my-api2(/|$)(.*)
        backend:
          serviceName: my-api
          servicePort: 4000
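To verify the routing end to end (a hedged example; /status is just a placeholder for whatever your APIs actually expose, and -k is only needed if the app-tls certificate is self-signed):

# /my-api1/status is rewritten to /status and forwarded to Service port 3000
curl -k https://example.com/my-api1/status
# /my-api2/status is rewritten to /status and forwarded to Service port 4000
curl -k https://example.com/my-api2/status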
✌️
Yesterday I was testing running a production and a staging deployment at the same time. The results were erratic: sometimes you'd go to the staging URL and it would load production, and if you refreshed the page it would switch between the two at random.
I won't have time to test until this weekend because QA is in progress, and this issue was disrupting it. I killed the production deployment and removed its routes from my ingress.yaml so QA could continue without issue.
At any rate, my ingress.yaml configuration was likely the cause, so I wanted to share it to see what caused this behavior:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "500m"
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress-service
  namespace: default
spec:
  tls:
  - hosts:
    - domain.com
    - www.domain.com
    - staging.domain.com
    secretName: tls-domain-com
  rules:
  - host: domain.com
    http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: client-cluster-ip-service-prod
          servicePort: 3000
      - path: /admin/?(.*)
        backend:
          serviceName: admin-cluster-ip-service-prod
          servicePort: 4000
      - path: /api/?(.*)
        backend:
          serviceName: api-cluster-ip-service-prod
          servicePort: 5000
  - host: www.domain.com
    http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: client-cluster-ip-service-prod
          servicePort: 3000
      - path: /admin/?(.*)
        backend:
          serviceName: admin-cluster-ip-service-prod
          servicePort: 4000
      - path: /api/?(.*)
        backend:
          serviceName: api-cluster-ip-service-prod
          servicePort: 5000
  - host: staging.domain.com
    http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: client-cluster-ip-service-staging
          servicePort: 3000
      - path: /admin/?(.*)
        backend:
          serviceName: admin-cluster-ip-service-staging
          servicePort: 4000
      - path: /api/?(.*)
        backend:
          serviceName: api-cluster-ip-service-staging
          servicePort: 5000
I have a feeling it is one of the following:
- Having domain.com before the other hosts, when it should be at the end
- Now that I look at it again, the hosts are on the same IP and use the same ports, so the ports need to be changed
- If I want to leave the ports the same, I'd need to deploy another ingress controller, one for staging and one for production, and follow these instructions
At any rate, can anyone confirm?
EDIT
Adding the .yaml files:
# client-staging.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment-staging
spec:
  replicas: 3
  selector:
    matchLabels:
      component: client
  template:
    metadata:
      labels:
        component: client
    spec:
      containers:
      - name: client
        image: testappacr.azurecr.io/test-app-client
        ports:
        - containerPort: 3000
        env:
        - name: DOMAIN
          valueFrom:
            secretKeyRef:
              name: test-app-staging-secrets
              key: DOMAIN
---
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service-staging
spec:
  type: ClusterIP
  selector:
    component: client
  ports:
  - port: 3000
    targetPort: 3000
# client-prod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: client
  template:
    metadata:
      labels:
        component: client
    spec:
      containers:
      - name: client
        image: testappacr.azurecr.io/test-app-client
        ports:
        - containerPort: 3000
        env:
        - name: DOMAIN
          valueFrom:
            secretKeyRef:
              name: test-app-prod-secrets
              key: DOMAIN
---
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service-prod
spec:
  type: ClusterIP
  selector:
    component: client
  ports:
  - port: 3000
    targetPort: 3000
Without seeing the Deployment YAML descriptors, and based on the description of the problem, the most probable cause is that Pods from both Deployments end up in both Services' endpoints. You must differentiate the selectors: add something like env: prod and env: staging and apply those labels to each Deployment.
To check whether this is the issue, run kubectl describe service for each Service and inspect the endpoints.
If that's not it, let me know the output and I can help you debug further.
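For example (a sketch using the Service and label names from the manifests above): if the prod and staging client Pods share only the component: client label, both Services will list all six Pods as endpoints, which would explain the random switching.

# Each Service's Endpoints line should only contain Pods from its own environment
kubectl describe service client-cluster-ip-service-prod | grep -i endpoints
kubectl describe service client-cluster-ip-service-staging | grep -i endpoints

# Compare against the labels on the Pods backing each Deployment
kubectl get pods -l component=client --show-labels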
EDIT: Changes made to the files after posting them:
Production
Service:
spec:
  type: ClusterIP
  selector:
    component: client
    environment: production
Deployment:
replicas: 3
selector:
  matchLabels:
    component: client
    environment: production
template:
  metadata:
    labels:
      component: client
      environment: production
Staging
Service:
spec:
  type: ClusterIP
  selector:
    component: client
    environment: staging
Deployment:
replicas: 3
selector:
  matchLabels:
    component: client
    environment: staging
template:
  metadata:
    labels:
      component: client
      environment: staging
A bit of background: I have set up an Azure Kubernetes Service cluster and deployed a basic .NET Core API as a Deployment object. I then deployed a NodePort Service to expose the API, followed by an nginx ingress controller and an Ingress object to configure it. I use the IP of the ingress controller to route the request, and that works, e.g. http://1.2.3.4/hello-world-one/api/values.
But when I replace the IP with the generated DNS name, the path is somehow ignored and the nginx controller returns 'default backend - 404'. The expected behaviour is that the DNS name resolves and the path "api/values" is routed to my service.
Can anyone help me with this?
Thanks in advance.
My deployment, service and ingress configs are below.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test-service
        image: <my-repo>.azurecr.io/testservice
        imagePullPolicy: Always
        ports:
        - name: tcp
          containerPort: 80
      imagePullSecrets:
      - name: regsecret
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: test
  ports:
  - name: http
    port: 32768
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: dev-pip-usw-qa-aks
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: hello-world-ingress.be543d4af69d4c7ca489.westus.aksapp.io
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: http
      - path: /hello-world-one
        backend:
          serviceName: frontend
          servicePort: http
      - path: /hello-world-two
        backend:
          serviceName: frontend
          servicePort: http
Pretty sure the rules should look like this:
rules:
- host: hello-world-ingress.be543d4af69d4c7ca489.westus.aksapp.io
  http:
    paths:
    - path: /
      backend:
        serviceName: frontend
        servicePort: http
    - path: /hello-world-one
      backend:
        serviceName: frontend
        servicePort: http
    - path: /hello-world-two
      backend:
        serviceName: frontend
        servicePort: http
Further reading: https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress
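One way to confirm the corrected rule is in effect (a hedged sketch; 1.2.3.4 stands in for the ingress controller's IP from the question):

# Should return the API response instead of 'default backend - 404'
curl http://hello-world-ingress.be543d4af69d4c7ca489.westus.aksapp.io/hello-world-one/api/values

# The paths should now be listed under the host rather than under the catch-all rule
kubectl describe ingress hello-world-ingress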
What I need looks like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: http
spec:
  serviceName: "nginx-set"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-set
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: http
  clusterIP: None
  selector:
    app: nginx
Here is the interesting part:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/testPath'
        backend:
          hostNames:
          - web-0
          serviceName: nginx-set #! There is no extra service. This
          servicePort: '80'      # is the Statefulset's Headless Service
I'm able to target a specific pod by setting the hostName as a function of the URL.
Now I'd like to know whether it's possible in Kubernetes to create a rule like this:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/connect/(\d+)'
        backend:
          hostNames:
          - web-(result of regex match with \d+)
          serviceName: nginx-set #! There is no extra service. This
          servicePort: '80'      # is the Statefulset's Headless Service
or whether I have to write a rule for each pod?
Sorry, that isn't possible; the best solution is to create multiple paths, each one referencing one pod:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/connect/0'
        backend:
          hostNames:
          - web-0
          serviceName: nginx-set
          servicePort: '80'
      - path: '/connect/1'
        backend:
          hostNames:
          - web-1
          serviceName: nginx-set
          servicePort: '80'
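If the StatefulSet grows beyond a couple of replicas, the per-pod paths can be generated rather than hand-written. A minimal sketch (assuming two replicas named web-0 and web-1 as above; it only prints the paths block to stdout so it can be pasted into the Ingress):

# Emit one '/connect/N' path per replica of the nginx-set StatefulSet
for i in 0 1; do
  cat <<EOF
      - path: '/connect/${i}'
        backend:
          hostNames:
          - web-${i}
          serviceName: nginx-set
          servicePort: '80'
EOF
done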