I have a small Kubernetes cluster (3 small nodes) on DigitalOcean. I have already configured production and it is working, but for development purposes I want to create a second environment (staging).
I have been working on it and almost everything is working, but I can't get the networking to work.
For example, I own the domain example.com.
Production runs on www.example.com and api.example.com. Staging runs in a different Kubernetes namespace on staging-www.example.com and staging-api.example.com.
The only thing these two environments share is the load balancer. The issue I am facing is that I can't reach staging-www.example.com or staging-api.example.com. I misconfigured something, and I am trying to find where the misconfiguration is.
These two files are the ones I have for staging / ingress / networking. As I wrote, production is working, but something is wrong for staging and I can't figure out where the problem is.
apiVersion: v1
kind: Service
metadata:
annotations:
kubernetes.digitalocean.com/load-balancer-id: "8ce613ac-a217-4537-b6ce-d26b0eef0939"
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
service.beta.kubernetes.io/do-loadbalancer-hostname: "example.com"
labels:
helm.sh/chart: ingress-nginx-4.1.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: staging
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- staging-www.example.com
- staging-api.example.com
secretName: example-tls
defaultBackend:
service:
name: examplecatalogo-svc
port:
number: 80
rules:
- host: staging-www.example.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: examplecatalogo-svc
port:
number: 80
- host: staging-api.example.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: examplenodeapi-svc
port:
number: 80
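Since production and staging share the same load balancer, a few quick checks usually narrow this down: whether the staging hostnames resolve to that load balancer at all, whether the controller admitted the staging Ingress, and whether the cert-manager Issuer it references exists in the staging namespace. A diagnostic sketch, using the names from the manifests above:

# Do the staging hostnames resolve to the same address as the working production hosts?
dig +short staging-www.example.com
dig +short www.example.com

# Was the staging Ingress picked up by ingress-nginx and given an address?
kubectl -n staging describe ingress example-ingress

# cert-manager.io/issuer refers to a namespaced Issuer, so one named
# "letsencrypt-prod" has to exist in the staging namespace.
kubectl -n staging get issuer letsencrypt-prod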
The domain configured is ticket.devaibhav.live
ping ticket.devaibhav.live resolves to the correct IP address of the load balancer provisioned by DigitalOcean. I haven't configured SSL on the cluster yet, but if I try to access my website, http://ticket.devaibhav.live gives a 400 Bad Request. I am new to Kubernetes and networking inside a cluster.
As I understand it, when the browser sends a request to http://ticket.devaibhav.live, the request is sent to the DigitalOcean load balancer, and then the ingress controller (ingress-nginx in my case) routes the traffic based on the rules I have defined.
ingress-nginx service
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-hostname: 'ticket.devaibhav.live'
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
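One thing worth noting about the Service above: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol makes the DigitalOcean load balancer prepend a PROXY protocol header to every connection. If the controller is not also configured to expect that header, it appears as garbage at the start of the HTTP request and nginx replies with 400 Bad Request. This is only one possible cause, but it is cheap to check; a minimal sketch, assuming the Helm chart's default ConfigMap name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Tell ingress-nginx that connections arriving from the load balancer carry the PROXY protocol header.
  use-proxy-protocol: "true"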
ingress resource rules
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticket.devaibhav.live
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /api/tickets/?(.*)
pathType: Prefix
backend:
service:
name: tickets-srv
port:
number: 3000
- path: /api/orders/?(.*)
pathType: Prefix
backend:
service:
name: orders-srv
port:
number: 3000
- path: /api/payments/?(.*)
pathType: Prefix
backend:
service:
name: payments-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
Essentially, when I hit http://ticket.devaibhav.live, the request should match the last rule and be routed to client-srv.
client deployment and service
apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: vaibhav908/client
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
The above configuration works well on the development server where I am using minikube.
I am unable to understand where I am going wrong with the configuration. I will provide more details if necessary.
[edit]
on the cluster that is deployed
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-srv ClusterIP 10.245.100.25 <none> 3000/TCP 2d17h
and some other services
kubectl describe ingress
Name: ingress-service
Labels: <none>
Namespace: default
Address: ticket.devaibhav.live
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ticket.devaibhav.live
/api/users/?(.*) auth-srv:3000 (10.244.1.76:3000)
/api/tickets/?(.*) tickets-srv:3000 (10.244.0.145:3000)
/api/orders/?(.*) orders-srv:3000 (10.244.1.121:3000)
/api/payments/?(.*) payments-srv:3000 (10.244.1.48:3000)
/?(.*) client-srv:3000 (10.244.1.32:3000)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events: <none>
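The describe output shows endpoints behind every rule, so the 400 is most likely produced by the controller itself rather than by one of the backends. The controller logs are the quickest way to see where the request dies; a sketch, assuming the default deployment name from the Helm install:

kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100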
I have a ClusterIP Service and an Ingress. What should my custom domain name point to if I need to route traffic using the Ingress? The backend is plain HTTP.
Do I have to create an AWS load balancer with target groups pointing to the k8s nodes, and use a domain alias pointing to the AWS load balancer? I was reading this K8s article and they're pointing to a subdomain.
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: "2022-08-05T00:50:41Z"
generation: 1
labels:
app: testing
name: httpd
namespace: default
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
service:
name: httpd
port:
number: 8080
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- www.example.com
secretName: tls-secret
status:
loadBalancer: {}
service.yaml:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2022-08-05T00:50:41Z"
labels:
app: testing
name: httpd
namespace: default
spec:
clusterIP: 100.65.xxx.xx
clusterIPs:
- 100.65.xxx.xx
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
name: httpd
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Yes, you have to create the load balancer, but it will be managed automatically through the Kubernetes Service of type LoadBalancer.
You can use the NGINX ingress controller or another ingress controller, depending on your requirements.
You can check out this official AWS doc: https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/
Once you deploy the NGINX ingress controller, it will manage the Ingress resources, and the controller will get a public load balancer.
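In other words, your domain should point to whatever external address the controller's LoadBalancer Service receives; on AWS that is usually an ELB/NLB hostname, so you point a CNAME (or a Route 53 alias) at it rather than an A record. A sketch, assuming the controller was installed with the default names into the ingress-nginx namespace:

# The EXTERNAL-IP column shows the load balancer hostname created by AWS.
kubectl -n ingress-nginx get service ingress-nginx-controller
# Create a CNAME / Route 53 alias record for your domain pointing at that hostname.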
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- anthonycornell.com
secretName: tls-secret
rules:
- host: anthonycornell.com
http:
paths:
- path: /apple
backend:
serviceName: apple-service
servicePort: 5678
- path: /banana
backend:
serviceName: banana-service
servicePort: 5678
I have 2 services: one serves a REST API and the other serves static content via an nginx web server.
I can retrieve the static content over HTTPS from the pod running the nginx web server via the ingress controller, provided that I don't use the following annotation in the Ingress YAML:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
However, the backend API service then no longer works. If I add that annotation back, the backend service URL https://fqdn/restservices/engine-rest/v1/api works, but the front-end web server at https://fqdn/ throws a 502.
Ingress
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress
namespace: namespace-abc
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
rules:
- http:
paths:
- path: /restservices/engine-rest/v1
backend:
serviceName: a
servicePort: 8080
- path: /
backend:
serviceName: b
servicePort: 8011
Service API
kind: Service
apiVersion: v1
metadata:
name: a
namespace: namespace-abc
labels:
app: a
version: 1
spec:
ports:
- name: https
protocol: TCP
port: 80
targetPort: 8080
nodePort: 31019
selector:
app: a
version: 1
clusterIP: <cluster ip>
type: LoadBalancer
sessionAffinity: ClientIP
externalTrafficPolicy: Cluster
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
Service UI
kind: Service
apiVersion: v1
metadata:
name: b
namespace: namespace-abc
labels:
app: b
version: 1
annotations:
spec:
ports:
- name: http
protocol: TCP
port: 8011
targetPort: 8011
nodePort: 32620
selector:
app: b
version: 1
clusterIP: <cluster ip>
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Cluster
If your problem is that adding nginx.ingress.kubernetes.io/backend-protocol: HTTPS makes service-A work but breaks service-B, while removing it makes service-A fail but lets service-B work, then the solution is to create two separate Ingress objects so they can be annotated independently:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress-a
namespace: namespace-abc
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
rules:
- http:
paths:
- path: /restservices/engine-rest/v1
backend:
serviceName: a
servicePort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress-b
namespace: namespace-abc
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: b
servicePort: 8011
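ingress-nginx merges the rules from both objects into a single configuration, so both paths stay reachable on the same external address; only the backend protocol differs between them. A quick check that both were admitted (names from the example above):

kubectl -n namespace-abc get ingress ingress-a ingress-b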
I am setting up Kubernetes in an AWS environment using kubeadm. I have set up ingress-nginx to access the service on port 443. I have checked the Service configurations and they look good. I am receiving a 502 Bad Gateway, and the Address field in the Ingress is empty.
Front end service
apiVersion: v1
kind: Service
metadata:
labels:
name: voyager-configurator-webapp
name: voyager-configurator-webapp
spec:
ports:
-
port: 443
targetPort: 443
selector:
component: app
name: voyager-configurator-webapp
type: ClusterIP
Ingress yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress-resource
spec:
tls:
- hosts:
- kubernetes-test.xyz.com
secretName: default-server-secret
rules:
- host: kubernetes-test.xyz.com
http:
paths:
- backend:
serviceName: voyager-configurator-webapp
servicePort: 443
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress-resource <none> kubernetes-test.xyz.com 80, 443 45m
What could be the issue here? Any help will be appreciated.
Make sure that your Service is created in the proper namespace; if not, add a namespace field to the Service definition. It is not a good approach to add a label called name with the same value as your Service name; instead, use a different label to avoid mistakes and configuration problems.
Read more about selectors and labels: labels-selectors.
Your frontend Service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: voyager-configurator-webapp
  labels:
    component: app
    appservice: your-example-app
spec:
ports:
- protocol: TCP
port: 443
targetPort: 443
selector:
component: app
app: your-example-app
type: ClusterIP
Your ingress should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress-resource
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- kubernetes-test.xyz.com
secretName: default-server-secret
rules:
- host: kubernetes-test.xyz.com
http:
paths:
- path: /
backend:
serviceName: voyager-configurator-webapp
servicePort: 443
You have to define the path to the backend to which the Ingress should send traffic.
Remember that it is good to follow examples and instructions during setup to avoid problems and wasted time during debugging.
Take a look: nginx-ingress-502-bad-gateway, aws-kubernetes-ingress-nginx.
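A 502 from ingress-nginx generally means the controller could not reach any endpoint behind the Service. A quick way to confirm whether the selector actually matches your pods (names taken from the question):

# If ENDPOINTS is empty, the Service selector does not match the pod labels.
kubectl get endpoints voyager-configurator-webapp
kubectl get pods --show-labels -l component=app,name=voyager-configurator-webapp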
How does an Ingress forward HTTPS traffic to port 443 of the Service (and eventually to 8443 on my container)? Do I have to make any changes to my Ingress, or is this done automatically?
On GCP, I have a layer 4 balancer -> nginx-ingress controller -> ingress
My ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-keycloak
annotations:
kubernetes.io/ingress.class: "nginx"
certmanager.k8s.io/issuer: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: http01
spec:
tls:
- hosts:
- mysite.com
secretName: staging-iam-tls
rules:
- host: mysite.com
http:
paths:
- path: /auth
backend:
serviceName: keycloak-http
servicePort: 80
I searched online but I don't see explicit examples of hitting 443. It's always 80 (or 8080).
My service keycloak-http is below (elided; my container is actually listening on 8443):
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2019-05-15T12:45:58Z
labels:
app: keycloak
chart: keycloak-4.12.0
heritage: Tiller
release: keycloak
name: keycloak-http
namespace: default
..
spec:
clusterIP: ..
externalTrafficPolicy: Cluster
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: 8443
selector:
app: keycloak
release: keycloak
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
Try this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-keycloak
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
certmanager.k8s.io/issuer: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: http01
spec:
tls:
- hosts:
- mysite.com
secretName: staging-iam-tls
rules:
- host: mysite.com
http:
paths:
- path: /auth
backend:
serviceName: keycloak-http
servicePort: 443
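With the backend-protocol annotation, nginx still terminates the client's TLS using the staging-iam-tls secret, then opens a new HTTPS connection to the Service on port 443, which the Service maps to 8443 on the container; nothing on the Service side has to change. A quick end-to-end check from outside the cluster:

curl -vk https://mysite.com/auth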