Is it possible to configure k8s nginx-ingress as an LB so that a K8s Service actively connects to an external backend hosted on external hosts/ports (where one will be enabled at a time, connecting back to the cluster service)?
Similar to Envoy proxy? This is on vanilla K8s, on-prem.
So rather than balancing load from
client -> cluster -> service,
I am looking for
service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector. Then define an Endpoints object, where you put the external IP and port. Normally you do not define Endpoints for Services, but because this Service has no selector, you must provide an Endpoints object with the same name as the Service.
Then you point the Ingress to the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.88.1
      - ip: 192.168.88.2 # As per question below
    ports:
      - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: my-router.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: router
              servicePort: 80
While defining the Ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation, and also enable the proxy protocol with use-proxy-protocol: "true" in the controller's ConfigMap.
Using this annotation you can add additional configuration to the NGINX location block.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
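For illustration, a minimal sketch of both pieces, assuming the controller reads its ConfigMap from nginx-configuration in the ingress-nginx namespace (adjust to your install); the injected directive is purely hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration  # assumed ConfigMap name; match your controller's --configmap flag
  namespace: ingress-nginx   # assumed namespace
data:
  use-proxy-protocol: "true" # enable PROXY protocol handling in the controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # extra NGINX directives injected into this Ingress's location block
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Example "demo";  # hypothetical directive, for illustration only
spec:
  rules:
    - host: my-router.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: router
              servicePort: 80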
Related
I have the following topology in my kubernetes cluster: 2 nodes, 1 master and 1 worker node.
Now I created an application with my deployment.yml and my service.yml, using a NodePort configuration:
apiVersion: v1
kind: Service
metadata:
  name: administrativo-service
spec:
  type: NodePort
  selector:
    app: administrativo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Now I need to access this API using my DNS, something like myapi.localdns, so I followed these steps to install an ingress controller based on nginx:
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
After an hour I checked the pod status in the ingress-nginx namespace, and finally, this is my Ingress yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: administrativo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myapi.localdns
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: administrativo-service
                port:
                  number: 80
Well, my idea is to create an entry in my company's DNS pointing to myapi.localdns, but to do that I need the Ingress address, which doesn't show up in my ingress resource.
I solved the problem using these steps:
First, create in my company's DNS the CNAMEs pointing to my kubernetes worker node IP.
Reinstall the ingress-nginx controller using the bare-metal configuration: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/baremetal/deploy.yaml.
Change the deploy.yaml to use NodePort before running kubectl apply.
Use externalIPs to expose my service on port 80, as sketched below.
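For illustration, a minimal sketch of that last step, editing the controller Service from the bare-metal deploy.yaml above; the node IP is hypothetical and should be whatever your DNS CNAMEs resolve to:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller  # Service name used by the bare-metal deploy.yaml
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
    - 10.0.0.20  # hypothetical worker node IP targeted by the DNS entries
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx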
I have the following setup in a minikube cluster:
A Spring Boot app deployed in the minikube cluster
name: opaapp and containerPort: 9999
A Service used to expose the app, as below:
apiVersion: v1
kind: Service
metadata:
  name: opaapp
  namespace: default
  labels:
    app: opaapp
spec:
  selector:
    app: opaapp
  ports:
    - name: http
      port: 9999
      targetPort: 9999
  type: NodePort
Created an ingress controller and an ingress resource as below:
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: opaapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: opaapp.info
      http:
        paths:
          - path: /
            backend:
              serviceName: opaapp
              servicePort: 9999
I have set up the hosts file as below:
172.17.0.2 opaapp.info
Now, if I access the service as
http://opaapp.info:32746/api/ping, I get the response back.
But if I try to access
http://opaapp.info/api/ping, I get a 404 error.
I am not able to find the error in the configuration.
The nginx ingress controller has been exposed via NodePort 32746, which means nginx is not listening on ports 80/443 in the host's (172.17.0.2) network; rather, nginx is listening on ports 80/443 on the Kubernetes pod network, which is different from the host network. Hence accessing it via http://opaapp.info/api/ping does not work. To make it work the way you are expecting, the nginx ingress controller needs to be deployed with the hostNetwork: true option so that it can listen on ports 80/443 directly in the host (172.17.0.2) network, which can be done as discussed here and sketched below.
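For illustration, a minimal sketch of a controller Deployment with host networking enabled; the Deployment name, labels, and image tag are assumptions based on the stock ingress-nginx manifests, and the remaining fields are elided:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller  # assumed Deployment name from the stock manifest
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true  # bind nginx to ports 80/443 of the node's own network
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working with hostNetwork
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.5  # assumed image/tag
          # ...args, env, probes and ports as in the stock manifest...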
I am new to k8s.
I have a deployment file, shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: jenkins
          image: jenkins
          ports:
            - containerPort: 8080
            - containerPort: 50000
My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      name: http
  selector:
    component: web
My ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: jenkins.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 80
I am using the nginx ingress project, and my cluster was created using kubeadm with 3 nodes.
I first ran the mandatory command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com, it didn't work.
When I ran the command
kubectl get ing
the ing resource didn't get an IP address assigned to it.
The Ingress resource is nothing but the configuration of a reverse proxy (the ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you claim you used to get the ingress controller running, there is no sign of exposure to the outside network.
You need at least to define a Service to expose your controller: it might be a LoadBalancer if the provider where you run your cluster supports it, or you can use hostNetwork: true or a NodePort.
To use the last option (NodePort) you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
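In case the link moves, here is a minimal sketch of such a NodePort Service, assuming the labels used by the ingress-nginx mandatory manifest of that era:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx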
I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.
https://kubernetes.io/docs/concepts/services-networking/ingress/
In order to access your local Kubernetes cluster's pods, a NodePort needs to be created. The NodePort will publish your service on every node using its public IP and a port. Then you can access the service using any of the node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/, http://192.168.50.12:30000/, http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture, to:
Load balance traffic, external or internal
Control failures, retries, routing
Apply limits and monitor network traffic between services
Secure communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).
I am new to GKE and kubernetes. I installed Elasticsearch on GKE using Google Click to Deploy. I also installed nginx-ingress and secured the elasticsearch service with HTTP basic authentication (through the ingress). I created an external static IP and assigned it to the ingress controller using the loadBalancerIP field in the ingress-controller service configuration.
Questions:
I have appengine services running in GCP which need to access this elasticsearch setup. Can I avoid exposing my elasticsearch service outside - with some kind of an "internal" IP which only my appengine services can access? Is using VPC one of the ways of doing this?
I see that my ingress was also assigned an external IP address (the static IP I created was assigned to the nginx-ingress-controller service). However, when I hit this IP on port 80, I get connection refused, and on port 9200 it times out. Can I avoid having two external IPs? How secure is this ingress IP address? What are its open ports?
Here is my ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required - ok
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  name: basic-ingress
  namespace: default
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: elasticsearch-1-elasticsearch-svc
              servicePort: 9200
            path: /
Here is the ingress controller service configuration:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.15
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
spec:
  clusterIP: <Some IP>
  externalTrafficPolicy: Cluster
  loadBalancerIP: <External IP>
  ports:
    - name: http
      nodePort: 30290
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 30119
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
My suggestion is to use 2 load balancers, 1 public and 1 private. To create a private (internal) load balancer you just need to add the following annotation in the Service's metadata section, as sketched below:
cloud.google.com/load-balancer-type: "Internal"
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
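For illustration, a minimal sketch of a second, internal controller Service carrying that annotation; the Service name is made up, and the ports/selector mirror the LoadBalancer Service shown above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-internal  # hypothetical name for the private entry point
  namespace: default
  annotations:
    cloud.google.com/load-balancer-type: "Internal"  # GCP provisions an internal load balancer
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress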
I have created a Kubernetes cluster with kubeadm, with 3 nodes. Their IP addresses and hostnames are 10.10.10.146/24 (k8s1, master), 10.10.10.135/24 (k8s2), and 10.10.10.170/24 (k8s3).
Now I create an nginx service which contains 3 pods, with this yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-app
          image: nginx:1.14.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-srv
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - 10.10.100.1
Finally, one of the pods was scheduled to k8s2 and two of them to k8s3.
Then if I route 10.10.100.0/24 to k8s2, only one pod works. If to k8s3, only two pods work. And if to k8s1, no pods work.
How can I make all pods work through the external IP from outside, just like through the cluster IP from inside, no matter which node I route the external-IP subnet to? Or is that not possible, and do I need something else, such as a Kubernetes Ingress?
There are several options to expose your service outside the cluster:
The first option is to use the NodePort type of Kubernetes Service. This way the Service will open a port on the network interface of each node in the cluster, and all traffic coming to that port will be forwarded to the service.
By default, the port range for this kind of service is limited to 30000–32767.
Here is an example of Service NodePort configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 30080
The second option is available if you are running your Kubernetes cluster in a cloud like AWS, GCP, or Azure. If you create a LoadBalancer type of Service, it will create a new load balancer in the cloud and forward all traffic from that load balancer to the service.
The downside of this approach is that each service creates a separate load balancer, which will cost you additional money.
Here is an example of Service LoadBalancer configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
The third option is to use an Ingress object. An ingress controller should be running in the cluster to configure the networking according to the content of the Ingress objects you create. An Ingress can route requests to different services depending on DNS name and URI path. You can also use it on a bare-metal Kubernetes cluster, but in that case you are responsible for routing network traffic to the ingress controller, for example by firewall rules.
Here are a couple of examples of Ingress configuration:
# redirect all traffic to a service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
# redirect traffic to a service for a specified URI path
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            backend:
              serviceName: test
              servicePort: 80