I am still struggling with Kubernetes. I have an issue with preserving the request IP address on a service, for logging purposes; logging is done by a gRPC server. This code works as intended outside Kubernetes.
The Service is defined similar to this:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    name: grpc-api
  name: grpc-api
  namespace: myns
spec:
  ports:
  - name: ext-5000
    port: 5000
    targetPort: 5000
  - name: grpc-5050
    port: 5050
    targetPort: 5050
  selector:
    name: grpc-api
  type: ClusterIP
Ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-myns
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    name: api-grpc
  name: api-grpc
  namespace: myns
spec:
  rules:
  - host: api.example.org
    http:
      paths:
      - backend:
          serviceName: grpc-api
          servicePort: 5000
        path: /
  tls:
  - hosts:
    - api.example.org
    secretName: grpc-api-ingress-cert
The documentation mentions externalTrafficPolicy: Local on a Service of type LoadBalancer. Would it be enough to add that parameter to the ClusterIP-type Service above, or do I have to change the type to something else? For reference, a sketch of the documented pattern is below.
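A minimal sketch of the documented pattern, reusing the names from the Service above as an assumption of what the docs are pointing at; externalTrafficPolicy only applies to Services that receive traffic from outside the cluster (type LoadBalancer or NodePort), so it has no effect on a plain ClusterIP Service:
apiVersion: v1
kind: Service
metadata:
  name: grpc-api
  namespace: myns
spec:
  type: LoadBalancer            # required for externalTrafficPolicy to matter
  externalTrafficPolicy: Local  # preserves the client source IP by routing only to node-local endpoints
  selector:
    name: grpc-api
  ports:
  - name: grpc-5050
    port: 5050
    targetPort: 5050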
Thank you in advance.
How can I communicate with gRPC on ingress nginx controller?
My Ingress and Service code is below.
It was written by referring to a well-known example.
The LoadBalancer was changed to use port 443 and a different certificate.
However, the LB address of the Ingress and the LB address of the Service of type LoadBalancer are different.
Service
apiVersion: v1
kind: Service
metadata:
  name: test-grpc-service
  labels:
    test: grpc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    test: grpc
  ports:
  - port: 8888
    targetPort: 8888
    name: grpc
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-grpc-ingress
  labels:
    test: grpc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
spec:
  tls:
  - hosts:
    - test.test.com
    secretName: test-secret
  rules:
  - host: test.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-grpc-service
            port:
              number: 8888
I have a local website.
The website was created with docker-compose and it is listening on localhost port 3000.
When I try:
curl 127.0.0.1:3000
I can see the response.
What I did:
From my domain provider I edited the DNS to point to my server, then I changed nginx-ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-pp"
spec:
  tls:
  - hosts:
    - nextformulainvesting.com
    secretName: ***
  rules:
  - host: "nextformulainvesting.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: e-frontend-saleor
            port:
              number: 80
and I created the service:
apiVersion: v1
kind: Service
metadata:
  name: e-frontend-saleor
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
But with or without the Service, I receive the error 503 Service Temporarily Unavailable.
How can I use nginx-ingress to point to my local TCP service?
To clarify the issue, I am posting a community wiki answer.
The answer that helped to resolve this issue is available at this link. Based on that, the key is to manually create a Service and an Endpoints object for the external server.
After that, one can create an Ingress object that points to the external-ip Service on the appropriate port.
Here are the example objects provided in a similar question.
Service and Endpoints objects:
apiVersion: v1
kind: Service
metadata:
  name: external-ip
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 5678
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-ip
subsets:
- addresses:
  - ip: 10.0.40.1
  ports:
  - name: app
    port: 5678
    protocol: TCP
Ingress object:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-service
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: external-ip
          servicePort: 80
        path: /
See also this reference.
The Service that you have created forwards traffic to Deployments inside the cluster.
As your application is running outside of the Kubernetes cluster, you should use an Endpoints object in this case:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: <External IP>
  ports:
  - port: 3000
and you can wire this Endpoints object up to the Ingress so that it will route the traffic.
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-pp"
spec:
  tls:
  - hosts:
    - nextformulainvesting.com
    secretName: ***
  rules:
  - host: "nextformulainvesting.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-service
            port:
              number: 3000
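Note that an Ingress backend can only reference a Service, not an Endpoints object directly, so a selector-less Service named my-service is typically also needed to tie the two together. A minimal sketch, mirroring the community-wiki example above:
apiVersion: v1
kind: Service
metadata:
  name: my-service   # must match the name of the Endpoints object above
spec:
  ports:
  - protocol: TCP
    port: 3000       # port the Ingress backend points at
    targetPort: 3000 # port listed in the Endpoints object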
I have 2 services: one serves a REST API and the other serves static content via an nginx web server.
I can retrieve the static content from the pod running the nginx web server via the ingress controller over HTTPS, provided that I don't use the following annotation in the Ingress YAML:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
However, with that annotation removed the backend API service no longer works. If I add the annotation back, the backend service URL https://fqdn/restservices/engine-rest/v1/api works, but the front end https://fqdn/ web server throws a 502.
Ingress
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
  - http:
      paths:
      - path: /restservices/engine-rest/v1
        backend:
          serviceName: a
          servicePort: 8080
      - path: /
        backend:
          serviceName: b
          servicePort: 8011
Service API
kind: Service
apiVersion: v1
metadata:
  name: a
  namespace: namespace-abc
  labels:
    app: a
    version: 1
spec:
  ports:
  - name: https
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 31019
  selector:
    app: a
    version: 1
  clusterIP: <cluster ip>
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Cluster
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
Service UI
kind: Service
apiVersion: v1
metadata:
  name: b
  namespace: namespace-abc
  labels:
    app: b
    version: 1
  annotations: {}
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8011
    targetPort: 8011
    nodePort: 32620
  selector:
    app: b
    version: 1
  clusterIP: <cluster ip>
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
If your problem is that adding nginx.ingress.kubernetes.io/backend-protocol: HTTPS makes service a work but breaks service b, and removing it breaks service a but makes service b work, then the solution is to create two separate Ingress objects so they can be annotated independently:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-a
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
  - http:
      paths:
      - path: /restservices/engine-rest/v1
        backend:
          serviceName: a
          servicePort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-b
  namespace: namespace-abc
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: b
          servicePort: 8011
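On clusters where the extensions/v1beta1 Ingress API has been removed, a hedged equivalent of ingress-a in networking.k8s.io/v1 (same namespace, service name, and port assumed) would look like the following; ingress-b converts the same way:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-a
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
  - http:
      paths:
      - path: /restservices/engine-rest/v1
        pathType: Prefix        # required field in the v1 API
        backend:
          service:
            name: a
            port:
              number: 8080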
I have deployed a Kubernetes cluster on Vagrant machines with the following config:
one master and two worker nodes.
Two services are deployed, named nodeport-svc-rc and nodeport-svc-rs.
Service config:
# nodeport-svc-rc
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-rc
spec:
  type: NodePort
  ports:
  - port: 5001
    targetPort: 5001
    nodePort: 30001
  selector:
    app: controller
# nodeport-svc-rs
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-rs
spec:
  type: NodePort
  ports:
  - port: 5002
    targetPort: 5002
    nodePort: 30002
  selector:
    app: controller-rs
Ingress Config:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /demo
        backend:
          serviceName: nodeport-svc-rc
          servicePort: 5001
      - path: /demof
        backend:
          serviceName: nodeport-svc-rs
          servicePort: 5002
Traefik is able to detect the Ingress resource on its dashboard as backend services, but no frontends are detected on the dashboard and no IP addresses are shown for the backends.
Entry in the /etc/hosts file: XXX.XXX.X.X example.com
I'm unable to route traffic using the Ingress. If I hit example.com/demo from the browser, the error shows "Site can't be reached". Where am I wrong? Can someone help me?
# sudo kubectl describe ing
Name:             ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host         Path    Backends
  ----         ----    --------
  example.com
               /demo   nodeport-svc-rc:5001 (10.244.171.95:5001,10.244.171.96:5001,10.244.235.150:5001)
               /demof  nodeport-svc-rs:5002 (10.244.171.98:5002,10.244.235.157:5002,10.244.235.158:5002)
Annotations:  ingress.kubernetes.io/auth-secret: mysecret
              ingress.kubernetes.io/auth-type: basic
              kubernetes.io/ingress.class: traefik
Events:       <none>
And when I hit the NodePort services directly, at example.com:30001 or example.com:30002, they successfully respond.
Edited: the Traefik controller config is below:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7.26-alpine
        name: traefik-ingress-lb
        args:
        - --web
        - --kubernetes
I am setting up Kubernetes in an AWS environment using kubeadm. I have set up ingress-nginx to access the service on port 443. I have checked the service configuration, which looks good, but I am receiving 502 Bad Gateway and the Address field of the Ingress is empty.
Front end service
apiVersion: v1
kind: Service
metadata:
  labels:
    name: voyager-configurator-webapp
  name: voyager-configurator-webapp
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    component: app
    name: voyager-configurator-webapp
  type: ClusterIP
Ingress yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-resource
spec:
  tls:
  - hosts:
    - kubernetes-test.xyz.com
    secretName: default-server-secret
  rules:
  - host: kubernetes-test.xyz.com
    http:
      paths:
      - backend:
          serviceName: voyager-configurator-webapp
          servicePort: 443
NAME                     CLASS    HOSTS                     ADDRESS   PORTS     AGE
nginx-ingress-resource   <none>   kubernetes-test.xyz.com             80, 443   45m
What could be the issue here? Any help will be appreciated.
Make sure that your Service is created in the proper namespace; if not, add a namespace field to the Service definition (see the note after the example below). It is not a good approach to add a label called name with the same value as your Service's name; instead, use a different label to avoid mistakes and configuration problems.
Read more about selectors and labels: labels-selectors.
Your frontend Service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: voyager-configurator-webapp
  labels:
    component: app
    appservice: your-example-app
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
  selector:
    component: app
    app: your-example-app
  type: ClusterIP
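If the Service is not in the default namespace, the namespace goes under metadata as well; a sketch of just that block, with a placeholder value:
metadata:
  name: voyager-configurator-webapp
  namespace: <your-namespace>   # placeholder; must be the same namespace as the Ingress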
Your ingress should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - kubernetes-test.xyz.com
    secretName: default-server-secret
  rules:
  - host: kubernetes-test.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: voyager-configurator-webapp
          servicePort: 443
You have to define the path to the backend to which the Ingress should send traffic.
Remember that it is good to follow examples and instructions during setup, to avoid problems and wasted time during debugging.
Take a look: nginx-ingress-502-bad-gateway, aws-kubernetes-ingress-nginx.