Nginx Ingress Failing to Serve - kubernetes

I am new to k8s.
I have a deployment file, shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: jenkins
        image: jenkins
        ports:
        - containerPort: 8080
        - containerPort: 50000
My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    component: web
My ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jenkins.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
I am using the nginx ingress project, and my cluster was created using kubeadm with 3 nodes.
I first ran the mandatory command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com, it didn't work.
When I ran the command
kubectl get ing
the Ingress resource doesn't get an IP address assigned to it.

The ingress resource is nothing but the configuration of a reverse proxy (the Ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you used to get the ingress controller running, there is no sign of exposure to the outside network.
You need at least to define a Service to expose your controller (it could be a LoadBalancer if the provider hosting your cluster supports it); alternatively, you can use hostNetwork: true or a NodePort.
To use the latter option (NodePort), you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
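For reference, that manifest is essentially just a NodePort Service selecting the controller pods. A sketch of its shape (check the linked file for the authoritative version):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
After applying it, kubectl -n ingress-nginx get svc shows the assigned node ports, and jenkins.xyz.com must resolve to a node IP (e.g. via /etc/hosts) with that port appended.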
I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.
https://kubernetes.io/docs/concepts/services-networking/ingress/

In order to access your local Kubernetes cluster pods, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port. You can then access the service using any of the cluster's node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
    targetPort: 8080  # Application port
    nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
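Once applied, a quick sanity check could look like this (assuming the manifest above is saved as nginx-service-np.yaml; the node IPs are the VirtualBox examples from the comments):
kubectl apply -f nginx-service-np.yaml
kubectl get svc nginx-service-np      # should show 8082:30000/TCP
curl http://192.168.50.11:30000/      # any node IP in the cluster works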
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture, to:
Load-balance traffic, external or internal
Control failures, retries, and routing
Apply limits and monitor network traffic between services
Secure communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).

Related

Kubernetes Ingress - how to access my service on my computer?

I have the following template, with a deployment, a service, and an Ingress. I previously ran minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
      - name: fastapi
        image: datamastery/fastapi
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 3000
    nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: datamastery.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: fastapi-service
            port:
              number: 3000
When I run kubectl get services I get:
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
fastapi-service   LoadBalancer   10.108.5.228   <pending>     5000:30002/TCP   5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
Normally I would expect now to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN. Only members of your SDN can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider with LoadBalancer capabilities. When in doubt, you should use ClusterIP as your service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is also not required (it would make sense with a NodePort service, which is likewise useful in a few use cases but should not be used otherwise).
You did create an Ingress. If you have an Ingress controller, you want to connect to that IP/port. The Host header tells your ingress controller where to route the request within your SDN.
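On minikube specifically, a minimal sketch of the fix is to point the hostname at the node IP (from minikube ip) instead of the ClusterIP:
minikube ip                                        # e.g. 192.168.49.2; yours will differ
echo "$(minikube ip) datamastery.com" | sudo tee -a /etc/hosts
curl http://datamastery.com/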
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have just one node OR you really control where your service pods get deployed. Otherwise it is not suitable to use a node IP to access services.
To overcome this issue we usually use an ingress as a service proxy. Incoming traffic is routed to the correct service pods depending on the URL and port. Ingress also manages SSL termination. So basically this is your first "load balancer", as ingress assigns traffic to services across nodes and pods.
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace; for example, for Nginx ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This would spin up a cloud load balancer at your provider and link it to the ingress service in your cluster. So now you would have a real load balancer in place balancing traffic between your nodes, while ingress routes it to your services and services route it to your pods.
Back to your question:
In your config files you try to spin up a service with the type: LoadBalancer. This would skip the ingress part and spin up a second cloud load balancer from your provider dedicated for this single service.
You have to remove the type (and nodePort) to use the default ClusterIP for your service.
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
In addition, your ports did not match: your Ingress object points at port 3000, while your Service object listened on port 5000, so we change this as well.
With this config, traffic to the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and on to your pods.
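To verify without editing /etc/hosts, you could resolve the host for a single request (assuming minikube and its ingress addon):
curl --resolve "datamastery.com:80:$(minikube ip)" http://datamastery.com/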

Connect two pods to the same external access point

I have two pods (deployments) running on minikube. Each pod exposes the same port (say 8081), but they use different images. Now I want to configure things so that I can access either of the pods through the same external URL, in a load-balanced way. So what I tried is putting the same label on both pods, mapping them to the same service, and then exposing it through a NodePort. Example:
#pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont1
        image: cont1
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the second pod:
#pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont2
        image: cont2
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
#service.yaml
apiVersion: v1
kind: Service
metadata:
name: service1
spec:
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30169
selector:
app: deployment1
Now this does not work as intended, as it refuses to connect on my IP:30169. However, I can connect if only one of the pods is deployed.
Now I know I can achieve this functionality using replicas and just one image, but in this case I want to do it using 2 images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation, Ingress will forward traffic to your services under the same URL, depending on which path you request: URL for the first Pod and URL/v2 for the second Pod. Of course you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it using the command below. You can read more about it here:
minikube addons enable ingress
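After enabling the addon, it is worth confirming the controller pod is running (the namespace varies by minikube version; older releases placed it in kube-system):
kubectl get pods -n ingress-nginx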
As the next step, you need to create an Ingress using a YAML file. Here is an example of how to do it step by step.
The Ingress YAML file looks as below.
As you can see in this configuration, you can access the first Pod using URL, and it will forward traffic to the first service, attached to the first Pod. For the second Pod, URL/v2 will forward traffic to the second service, attached to the second Pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
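For completeness, here is a sketch of the two backing Services this Ingress expects. It assumes each Deployment keeps its own distinct pod label (e.g. app: dep1 and app: dep2) instead of both sharing app: deployment1:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: dep1          # pods of the first Deployment
  ports:
  - port: 8080
    targetPort: 8081   # the containerPort both pods expose
---
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  selector:
    app: dep2          # pods of the second Deployment
  ports:
  - port: 8080
    targetPort: 8081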

Cannot access a LoadBalancer service at Kubernetes

I managed to deploy a Python app to the Kubernetes cluster. The Python app image is stored in AWS ECR (Elastic Container Registry).
My deployment is:
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                                                               SELECTOR
charting-rest-server   1/1     1            1           33m   charting-rest-server   *****.dkr.ecr.eu-west-2.amazonaws.com/charting-rest-server:latest   app=charting-rest-server
And my service is:
NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP                           PORT(S)          AGE    SELECTOR
charting-rest-server-service   LoadBalancer   10.100.4.207   *******.eu-west-2.elb.amazonaws.com   8765:32735/TCP   124m   app=charting-rest-server
According to this AWS guide, when I do curl *****.us-west-2.elb.amazonaws.com:80 I should be able to externally access the load balancer, which will route me to my pod's IP.
But all I get is
(6) Could not resolve host: *******.eu-west-2.elb.amazonaws.com
And come to think of it, if I want to have access to my pod and send some requests, I should have an external IP like 111.111.111.111 (obviously an example).
EDIT
the deployment's yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: charting-rest-server
spec:
  selector:
    matchLabels:
      app: charting-rest-server
  replicas: 1
  template:
    metadata:
      labels:
        app: charting-rest-server
    spec:
      containers:
      - name: charting-rest-server
        image: *****.eu-west-2.amazonaws.com/charting-rest-server:latest
        ports:
        - containerPort: 5000
the service's yaml:
apiVersion: v1
kind: Service
metadata:
  name: charting-rest-server-service
spec:
  type: LoadBalancer
  selector:
    app: charting-rest-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
I already tried the suggestions from the comments, using an Ingress, but I only ended up spending a huge amount of time trying to understand how they work and whether I was doing something wrong.
I will put the YAML file I used here, but it made no difference, since my ADDRESS field was empty: no IP to use.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: charting-rest-server-ingress
spec:
  rules:
  - host: charting-rest-server-service
    http:
      paths:
      - path: /
        backend:
          serviceName: charting-rest-server-service
          servicePort: 80
I have been stuck on this problem for so long that I would appreciate some help.
You already created a Service with type LoadBalancer, but it looks like you have incorrect ports configured.
Your Deployment is created with containerPort: 5000 and your Service is pointing to targetPort: 9376. Those need to match for the Deployment to be exposed.
If you are having a hard time writing YAML for the Service, you can expose the Deployment using the following kubectl command:
kubectl expose --namespace=tick deployment charting-rest-server --type=LoadBalancer --port=8765 --target-port=5000 --name=charting-rest-server-service
Once you fix those ports, you will be able to access the service from outside using its hostname:
status:
  loadBalancer:
    ingress:
    - hostname: aba02b223436111ea85ea06a051f04d8-1294697222.eu-west-2.elb.amazonaws.com
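Once the hostname shows up in status, you could fetch it and test directly (a sketch; the Service name is taken from the question, the port from its YAML):
HOST=$(kubectl get service charting-rest-server-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl "http://$HOST/"   # ELB port 80 forwards to targetPort 5000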
I also recommend this guide: Tutorial: Expose Services on your AWS Quick Start Kubernetes cluster.
If you need more control over the HTTP rules, please consider using an Ingress; you can read more about the ALB Ingress Controller on Amazon EKS, and also Using a Network Load Balancer with the NGINX Ingress Controller on Amazon EKS.

How to expose a Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud) and I created a Service to be able to access it. However, I need to be able to access it from outside the cluster, and for this I created an Ingress and added ingress-nginx to the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
#  selector:
#    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
I applied the YAML like this: kubectl create -f file.yaml
In the /etc/hosts file I added k8s.local pointing to the IP of the master server.
When trying the command on or off the master server, a "Connection refused" message appears:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it out of the cluster!
Do I need to change anything in the configuration to allow this access?
YAML file edited:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
#    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /teste
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
You can deploy the ingress controller as a DaemonSet with host port 80. The Service of the controller will not matter then; you can point your domain to every node in your cluster.
You can do a NodePort-type Service, but that will force you to use some port in the 30k vicinity; you will not be able to use port 80.
Of course, the best solution is to use a cloud provider with a load balancer.
You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; in your case you are using nginx, so you can install an nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort but you will have to manually point a load balancer to the port on your Kubernetes nodes.
And yes, the selector on the Service needs to be:
selector:
  app: nginx
In this case NodePort would work. It will open a high port number on every node (the same port on every node), so you can use any of these nodes. Place a load balancer in front if you want, and point its backend pool to those instances. Do not use ClusterIP; it is just for internal usage.
If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added in the template/spec part of mandatory.yaml.
That way the pod running the ingress controller will listen on ports 80 and 443 of the host node.
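If you prefer not to edit mandatory.yaml by hand, a patch along these lines should work (the Deployment name and namespace here are the upstream defaults of that era; adjust them to your install):
kubectl -n ingress-nginx patch deployment nginx-ingress-controller \
  --type merge -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'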
https://github.com/alexellis/inlets is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use Inlets as a Layer 4 LB, you should use Inlets Pro; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS, non-Kubernetes-operator version of Inlets with encryption/wss (WebSockets Secure), using the open-source version of Inlets as a Layer 7 LB. (It just took some manual configuration; it wasn't fully automated like the Pro version.)
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public-internet HTTPS plus an nginx ingress controller in front of minikube, and tested 2 sites routed using Ingress objects, in about 3-4 hours with no good guide, being new to Caddy/WebSockets but experienced with Kubernetes Ingress.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on Digital Ocean with a public IP
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install Inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com

proxy / 127.0.0.1:8080 {
    transparent
}

proxy /tunnel 127.0.0.1:8080 {
    transparent
    websocket
}

tls {
    max_certs 10
}
Step 4.) Deploy 1 inlets Deployment on the Kubernetes cluster, use wss to your VPS, and point the inlets Deployment at an ingress controller Service of type ClusterIP.
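Roughly, the two ends of the tunnel look like this (flag names follow the OSS inlets README of that period; treat them as assumptions and check inlets --help, and the upstream URL is a hypothetical ClusterIP service for the ingress controller):
# on the VPS (public side), behind Caddy:
inlets server --port 8080 --token "$TOKEN"
# in the cluster, as the inlets Deployment's container command:
inlets client --remote "wss://mysite1.com" \
  --upstream "http://ingress-nginx.ingress-nginx.svc.cluster.local:80" \
  --token "$TOKEN"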
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt Free to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your inlets Deployment starts a bidirectional VPN tunnel over WebSockets to the VPS that has a public IP. (Warning: the VPN tunnel is only encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from Let's Encrypt Free.)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over an encrypted WebSockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS -(resolves IP)-> (HTTPS) VPS/L7 reverse proxy -encrypted VPN tunnel-> inlets pod from the inlets Deployment -L7 cleartext in-cluster redirect to-> ingress controller Service -> ingress controller pod -L7 redirect to-> ClusterIP Services/sites defined by Ingress objects.

Kubernetes external-IP doesn't work on all nodes

I have created a Kubernetes cluster with kubeadm, with 3 nodes. Their IP addresses and hostnames are 10.10.10.146/24 (k8s1, master), 10.10.10.135/24 (k8s2), and 10.10.10.170/24 (k8s3).
Now I create an nginx service, which contains 3 pods, with this YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-app
        image: nginx:1.14.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-srv
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 10.10.100.1
Finally, one of the pods was scheduled to k8s2 and two of them to k8s3.
Then, if I route 10.10.100.0/24 to k8s2, only one pod works. If to k8s3, only two pods work. And if to k8s1, no pods work.
How can I make all pods work through the external IP from outside, just like through the cluster IP from inside, no matter which node I route the external-IP subnet to? Or is that not possible, and do I need something else, such as a Kubernetes Ingress?
There are several options to expose your service outside the cluster:
The first option is to use NodePort type of Kubernetes Service. This way Service will open a port on the network interface of each node in the cluster, and all traffic coming to that port will be forwarded to the service.
By default, the port range for this kind of service is limited to 30000–32767.
Here is an example of Service NodePort configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
    nodePort: 30080
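With the question's nodes, and assuming a Service shaped like the one above but with selector app: nginx pointing at the nginx pods, any node IP would then serve the app on the fixed port:
curl http://10.10.10.135:30080/   # k8s2
curl http://10.10.10.170:30080/   # k8s3; same Service, any node works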
The second option is available if you are running a Kubernetes cluster in a cloud like AWS, GCP, or Azure. If you create a LoadBalancer type of service, it will create a new load balancer in the cloud and forward all traffic from that load balancer to the service.
The downside of this approach is that each service creates a separate load balancer, which will cost you additional money.
Here is an example of Service LoadBalancer configuration:
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: MyApp
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
The third option is to use an Ingress object. An Ingress controller should be running in the cluster to configure the cloud networking according to the content of the Ingress objects you create. Ingress can provide routing of requests to different services depending on DNS name and URI path. You can also use it on a bare-metal Kubernetes cluster, but in that case you are responsible for routing network traffic to the Ingress controller, for example via firewall rules.
Here are a couple of examples of Ingress configuration:
# redirect all traffic to a service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80

# redirect traffic to a service for specified URI path
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
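For the bare-metal case, once the ingress controller itself is reachable (for example through a NodePort Service), testing the second example is a matter of hitting any node on that port (the IP below is one of the question's nodes; 30080 is an assumed controller NodePort):
curl http://10.10.10.135:30080/testpath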