I set up ingress on my Kubernetes cluster running on VMware virtual machines, following the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After creating the ingress and controller, I need to find the IP of the node where the nginx controller runs:
nginx-ingress-rc-kgfmd 1/1 Running 0 21h 172.16.5.5 x.x.x.12
It usually runs on either x.x.x.12 or x.x.x.13, and when I run the following it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 http://master.federated.fds/coffee
where master.federated.fds is the DNS-resolvable name of the master.
I need to make it work without relying on the IP address, using only the DNS-resolvable name, or at least any of the node IPs.
E.g. http://node2.federated.fds/coffee; when I curl this I get a "Connection refused" error.
Updating with the specifications:
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: jciamaster.federated.fds
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
NGINX ingress controller:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs and not on any other node. Could someone please let me know how to access the application through all node IPs, or through a URL like jciamaster.federated.fds?
Thanks,
Update:
Tried to run the nginx controller as a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        #args:
        #- -v=3
        #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.x:30000/coffee it just hangs and does nothing. Am I doing anything wrong?
You can expose the nginx controller Pod with a NodePort Service, then you can access it on all nodes.
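For example, a minimal sketch based on the manifests above (the Service name and nodePort value are illustrative; note that the selector must use the pod label app: nginx-ingress from the ReplicationController, not name: nginx-ingress):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
spec:
  type: NodePort
  selector:
    app: nginx-ingress  # matches the pod label set by the ReplicationController
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30000  # reachable on every node at <node-ip>:30000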
Related
I'm trying to create a GKE Ingress that points to two different backend services based on path. I've seen a few posts explaining this is only possible with an nginx Ingress because GKE Ingress doesn't support rewrite-target. However, this Google documentation, GKE Ingress - Multiple backend services, seems to imply otherwise. I've followed the steps in the docs but haven't had any success. Only the service that is available on the path prefix of / is returned. Any other path prefix, like /v2, returns a 404 Not Found.
Details of my setup are below. Is there an obvious error here -- is the Google documentation incorrect and this is only possible using nginx ingress?
-- Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app-static-ip
    networking.gke.io/managed-certificates: app-managed-cert
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 8080
-- Service 1
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 5000
-- Service 2
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 8080
    targetPort: 5000
GCP Ingress supports multiple paths. This is also well described in Setting up HTTP(S) Load Balancing with Ingress. For my test I used both hello-app v1 and v2.
There are three possible issues:
1. The container port isn't actually open. You can check it using netstat:
$ kubectl exec -ti first-55bb869fb8-76nvq -c container -- /bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address  Foreign Address  State   PID/Program name
tcp    0       0       :::8080        :::*             LISTEN  1/hello-app
2. The firewall configuration. Make sure you have the proper settings. (In general, on a new cluster I didn't need to add anything, but if you have more infrastructure and specific firewall rules, they might block traffic.)
3. A misconfiguration between port, containerPort, and targetPort.
Below is my example:
1st deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first
  labels:
    app: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 5000
    targetPort: 8080
2nd deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second
  labels:
    app: api-2
spec:
  selector:
    matchLabels:
      app: api-2
  template:
    metadata:
      labels:
        app: api-2
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 6000
    targetPort: 8080
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 5000
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 6000
Outputs:
$ curl 35.190.XX.249
Hello, world!
Version: 1.0.0
Hostname: first-55bb869fb8-76nvq
$ curl 35.190.XX.249/v2
Hello, world!
Version: 2.0.0
Hostname: second-d7d87c6d8-zv9jr
Keep in mind that you can also use NGINX Ingress on GKE by adding a specific annotation:
kubernetes.io/ingress.class: "nginx"
The main reason people use NGINX Ingress on GKE is the rewrite annotation and the ability to use ClusterIP or NodePort as the service type, whereas GCP Ingress allows only the NodePort service type.
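As an illustration, a minimal sketch of an NGINX Ingress using the rewrite annotation (the service name echo-svc is hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /  # strips the /v2 prefix before proxying
spec:
  rules:
  - http:
      paths:
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: echo-svc  # hypothetical backend service
            port:
              number: 80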
You can find additional information in GKE Ingress for HTTP(S) Load Balancing.
I'm using this nginx ingress controller on a Hetzner server. After installing the ingress controller, I'm able to access the worker node by its IP, but not the app running on a pod inside the cluster. Am I missing something?
Are Ingress and Traefik different? I'm a bit confused by the terminology.
service file -
apiVersion: v1
kind: Service
metadata:
  name: service-name-xxx
spec:
  selector:
    app: app-name
  ports:
  - protocol: 'TCP'
    port: 80
    targetPort: 4200
  type: LoadBalancer
deployment file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: app-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      imagePullSecrets:
      - name: my-registry-key
      containers:
      - name: container-name
        image: my-private-docker-img
        imagePullPolicy: Always
        ports:
        - containerPort: 4200
ingress file -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
spec:
  rules:
  - host:
    http:
      paths:
      - pathType: Prefix
        path: "/app"
        backend:
          service:
            name: service-name-xxx
            port:
              number: 4200
I think you have to add the kubernetes.io/ingress.class: "nginx" annotation to your Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
You have set port to 80 and targetPort to 4200 in your service, so you should reference port 80 in your ingress YAML:
backend:
  service:
    name: service-name-xxx
    port:
      number: 80  # the Service's port; targetPort: 4200 belongs in the Service, not the Ingress
This is about Traefik on K3s, not about how Kubernetes works generally, so please don't give me a generic K8s answer.
I have a simple k3s deployment and service that look like this...
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-express
  name: app-tier
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
  selector:
    tier: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-express-deployment
  labels:
    app: hello-express
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: hello-express
        tier: app
    spec:
      containers:
      - name: server
        image: partyk1d24/hello-express:latest
        ports:
        - containerPort: 3000
I can then access the application using the external IP and port 3000. Now I would like to change that port from 3000 to 80. Apparently this is controlled locally on K3s using Traefik. I tried the following while looking here...
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-express-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-tier
            port:
              number: 80
But when I try to go to the site I get...
curl 192.168.X.XXX
Service Unavailable%
The blog is a little old, so I am sure I am doing something wrong. Can someone help me identify it?
You should change the service's port to 80:
ports:
- protocol: TCP
  port: 80
  targetPort: 3000
Keep the target port as 3000.
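Put together, a sketch of the corrected Service (the rest of the manifest is unchanged from the question):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-express
  name: app-tier
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80         # the port the Ingress backend references
    targetPort: 3000 # the containerPort the app listens on
  selector:
    tier: app
With this in place, the Ingress backend's port number of 80 matches the Service port.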
I have a cluster on Digital Ocean. 1 master with 2 nodes.
I'm using the Nginx Controller with the Digital Ocean Load Balancer.
Three items in my Ingress work fine. The fourth, where I use Nginx, doesn't.
Does anyone know why?
Here are the configs. I left out the Services for the 1st through 3rd Deployments, which are working. I can add them if needed.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: hw1.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
  - host: hw2.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-second
          servicePort: 80
  - host: hw3.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-third
          servicePort: 80
  - host: hw4.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-fourth
          servicePort: 80
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-fourth
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-fourth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-fourth
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-fourth
  template:
    metadata:
      labels:
        app: hello-kubernetes-fourth
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 8080
First of all, if you use the nginx image, the containerPort should be the one exposed by the image's Dockerfile.
For the nginx image, the containerPort should be 80:
https://hub.docker.com/layers/nginx/library/nginx/stable/images/sha256-cab8e4374e1e32bac58a8c7ae96c252cadcb1049545ed4bb3e3bfd5d087983b9
Now test whether nginx is reachable by accessing podIP:containerPort from the minikube node:
kubectl get po -o wide
hello-kubernetes-fourth-cb4fb668c-7tkd4 1/1 Running 0 25m 172.17.0.12
curl 172.17.0.12
After this, modify the Service's ports: targetPort should match the containerPort (80), and set port to 8080.
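For example, a sketch of the adjusted Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-fourth
spec:
  type: ClusterIP
  ports:
  - port: 8080     # the port the Service (and later the Ingress servicePort) exposes
    targetPort: 80 # matches the nginx containerPort
  selector:
    app: hello-kubernetes-fourth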
Now access nginx via the Service's ClusterIP:
kubectl describe svc hello-kubernetes-fourth
curl ClusterIP:8080
If that works, also modify the servicePort of the Ingress to match the Service port. Don't forget to enable ingress, as it is disabled by default on minikube:
minikube addons enable ingress
* ingress was successfully enabled
After the ingress pod is up, and after adding "MINIKUBEIP hw4.example.com" to /etc/hosts on your host machine, you should be able to curl from the host machine:
curl hw4.example.com
The deployment configuration is incorrect. Update the YAMLs as shown below:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-fourth
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kubernetes-fourth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-fourth
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-fourth
  template:
    metadata:
      labels:
        app: hello-kubernetes-fourth
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
Get inside the nginx pod and verify that you are able to reach the service and get a response:
curl http://hello-kubernetes-fourth
Then you should be able to reach the service from ingress.
I have an EKS cluster for which I want:
- 1 Load Balancer per cluster,
- Ingress rules to direct to the right namespace and the right service.
I have been following this guide : https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
      - name: bleble
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: bleble
The services for those deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My Load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 80
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 80
I've set up the Nginx Ingress Controller with this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployment...
All the resources create correctly, and I think the ingress is actually working but is getting confused about where to route.
Thanks for your help!
In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some performance (latency) improvement with Local, but you need to configure pod allocation carefully, and misconfigurations lead to 5xx errors. Cluster is also the default option for externalTrafficPolicy.
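For example, in the ingress-nginx Service above, only one field needs to change (a sketch showing just the relevant lines):
spec:
  externalTrafficPolicy: Cluster  # the default; any node can forward traffic to the ingress pods
  type: LoadBalancer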
In your ingress, you route /bleble to service bleble, but your service name is actually bleble-svc; please make them consistent. Also, you need to set your servicePort to 8080, since you exposed 8080 in your service configuration (see the sketch below).
For an internal service like bleble-svc, ClusterIP is good enough in your case, as it does not need external access.
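Put together, a sketch of the corrected rules (based on the manifests in the question; only the backend ports change):
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc    # must match the Service name exactly
          servicePort: 8080          # must match the port the Service exposes
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 8080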
Hope this helps.
Found it!
The containerPort in the Deployments was set to 8000, as was the targetPort of the Services, but the person who wrote the Dockerfile exposed port 80. That was the reason it was getting the 502 Bad Gateway!
Thanks a lot as well to @Fei, who has been a fantastic helper!