Hi, I have a React Docker image that uses nginx, with this Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    appcluster: ethernial
    app: clientweb
    visibility: external
  name: clientweb-service-ext
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: clientweb
  type: ClusterIP
I want to expose it. I have only one node, which is also the master, but port 80 is already in use by Apache running on the master node (I cannot shut it down yet).
I want to expose my React app so I can reach it at http://<node-ip>:30000, for example.
(I also need to expose other REST APIs externally and internally; each one is hosted in a pod and uses port 80.)
So how do I set up my Ingress?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientweb-ingress
spec:
  defaultBackend:
    service:
      name: clientweb
      port:
        number: 8080
thanks!
You need to expose the ingress controller itself using a NodePort service on port 30000. Once you do that, you can reach the backend pods exposed via Ingress resources on port 30000. If you are using the nginx ingress controller, follow this doc; the NodePort service (taken from the nginx installation docs) would look like the one below, with your desired ports 30000 and 30001.
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.13.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 30000 # Specified nodePort
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 30001 # Specified nodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
In this case you can still keep Apache on port 80 on the host system.
curl http://NODEIP:30000/<path-in-ingress>
curl https://NODEIP:30001/<path-in-ingress>
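To answer the Ingress part of your question: below is a minimal sketch of an Ingress that routes the root path to your existing clientweb-service-ext Service (port 80, rather than the clientweb/8080 used in your draft) and a second path to one of your REST APIs. The api-service name and the /api path are assumptions for illustration; replace them with your actual Service names and paths.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientweb-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: clientweb-service-ext   # your existing ClusterIP Service, port 80
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service             # hypothetical name for one of your REST APIs
            port:
              number: 80
With the controller exposed on NodePort 30000 as above, http://NODEIP:30000/ reaches the React app and http://NODEIP:30000/api reaches the API.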
First, you need to understand the relationship between an Ingress and an Ingress Controller. An Ingress is just a kind of resource; it does nothing by itself except declare ingress rules, and every Ingress rule needs an Ingress Controller to implement it.
Then you need to deploy an Ingress Controller, typically a Deployment (for the controller pods) and a Service (for external access). Have a look at the NGINX Ingress Controller at https://kubernetes.github.io/ingress-nginx/ and use kubectl or helm to deploy it. Do not forget the ingress-class annotation, as it will be used later.
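For example, a sketch of the Helm install based on the ingress-nginx quick start (check the linked docs for the current command and chart version):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace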
After this, you can bind any Ingress to this particular Ingress Controller by adding the kubernetes.io/ingress.class: "nginx" annotation to your Ingress. The ingress controller (the nginx server) will then add your rules to its configuration, which means your Ingress rules have been applied.
Finally, since your ingress controller Service exposes itself (via a LoadBalancer IP or a NodePort such as 30000), all traffic hitting that endpoint goes through your Ingress rules and is routed to the desired Service.
I have the following template, with a Deployment, a Service, and an Ingress. Beforehand, I ran minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
        - name: fastapi
          image: datamastery/fastapi
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 3000
      nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: datamastery.com
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: fastapi-service
                port:
                  number: 3000
When I run kubectl get services I get:
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
I would now expect to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN. Only members of your SDN can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider that offers LoadBalancer capabilities. When in doubt, use ClusterIP as your Service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is also not required (it would make sense with a NodePort Service, which is likewise only useful in a few cases and should not be used otherwise).
You did create an Ingress. If you have an Ingress Controller, that is the IP/port you want to connect to. The Host header tells your ingress controller where to route the request within your SDN.
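As a quick check (a sketch assuming the minikube ingress addon publishes the controller on ports 80/443 of the minikube node), send a request to the node IP with the expected Host header:
curl -H "Host: datamastery.com" http://$(minikube ip)/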
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have a single node OR you really control where your service pods get deployed. Otherwise it is not suitable to use the node IP to access services.
To overcome this we usually use an ingress as a service proxy: incoming traffic is routed to the correct service pods depending on the URL and port, and the ingress also handles SSL termination. So basically this is your first "load balancer", as the ingress distributes traffic to services across nodes and pods.
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace. Example for nginx-ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This spins up a cloud load balancer from your provider and links it to the ingress service in your cluster. So now you have a real load balancer in place, balancing traffic between your nodes, while the ingress routes it to your services and the services to your pods.
Back to your question:
In your config you create a Service with type: LoadBalancer. This skips the ingress part and spins up a second cloud load balancer from your provider, dedicated to this single service.
You have to remove the type (and the nodePort) so that your Service uses the default ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
In addition, you had mismatched ports: your Ingress object points at port 3000 while your Service object used port 5000, so we change that as well.
With this config, traffic to the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and finally to your pods.
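A sketch of how to verify this on minikube (assuming the ingress addon exposes the controller on the node's ports 80/443): point the hostname at the minikube node IP instead of the Service ClusterIP, then curl the FQDN.
# get the node IP
minikube ip
# /etc/hosts entry (replace <minikube-ip> with the output above)
<minikube-ip> datamastery.com
# request flows: node -> ingress controller -> fastapi-service:3000 -> pod:3000
curl http://datamastery.com/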
Is there any way to limit access to a Kubernetes Service of type LoadBalancer from outside the cluster?
I would like to expose my database pod to the Internet using a LoadBalancer service that is accessible only from my external IP address.
My Kubernetes cluster runs on GKE.
You can use loadBalancerSourceRanges to filter load balanced traffic as mentioned here.
Here is a simple example of a Service in front of the NGINX Ingress controller:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: ingress-nginx
  name: external-ingress-nginx-controller
  namespace: kube-ingress
spec:
  loadBalancerSourceRanges:
  - <YOUR_IP_1>
  - <YOUR_IP_2>
  - <YOUR_IP_3>
  ports:
  - name: https
    nodePort: 32293
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
Yes, you can achieve that at the Kubernetes level with a native Kubernetes NetworkPolicy. There you can limit ingress traffic to your Kubernetes Service by specifying policies of the Ingress type.
An example could be:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
      ports:
        - protocol: TCP
          port: 6379
More information can be found in the official documentation.
If you want to block traffic from unwanted IP addresses already at the load balancer level, you have to define firewall rules and apply them to your GCP load balancer.
More information regarding the GCP firewall rules can also be found in the documentation.
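As a sketch of what such a rule could look like with the gcloud CLI (the rule name, port, and source range below are placeholders, and the exact targeting options depend on your load balancer type, so check the documentation):
gcloud compute firewall-rules create allow-db-from-my-ip \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5432 \
  --source-ranges=<YOUR_EXTERNAL_IP>/32 \
  --network=default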
I have seen some production Kubernetes setups on public clouds that put a LoadBalancer type of Service in front of the NGINX Ingress (you can see an example in the YAML below).
As far as I know, an ingress can be used to expose internal services to the public, so what's the point of putting a load balancer in front of the ingress? Can I delete that Service?
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.27.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
    kubernetes.io/elb.class: union
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
...so what's the point to put a loadbalancer in front of the ingress?
This lets you take advantage of the cloud provider's load-balancer facilities (e.g. multi-AZ), while the Ingress gives you further control over routing, using paths or name-based virtual hosts, for services in the cluster.
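For example, a single Ingress behind that one load balancer can serve several hostnames (a sketch; the hostnames and service names below are made up):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vhost-routing
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service   # hypothetical service
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service   # hypothetical service
            port:
              number: 80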
Can I delete that service?
An Ingress doesn't do port mapping or pod selection, and you can't resolve an Ingress name with DNS.
The Ingress Controller itself is, in this case, running inside a Pod, so it needs to be exposed to the internet like anything else running in a Pod. Some Ingress Controllers have the actual proxy running externally, like the AWS ALB one, but NGINX just runs inside a container as normal.
I don't mean being able to route to a specific port, I mean to actually change the port the ingress listens on.
Is this possible? How? Where is this documented?
No. From the kubernetes documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
It may be possible to customize a LoadBalancer on a cloud provider like AWS to listen on other ports.
I assume you are using the NGINX Ingress Controller. In this case, during installation, instead of doing a kubectl apply of the official YAML like this one, you can download the YAML and change the ports. That file, which is used for an L4 AWS ELB, would become something like this:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - port: {custom port 1}
      targetPort: http
    - port: {custom port 2}
      targetPort: https
An alternative is to use a more powerful ingress controller.
Here is a list of different controllers.
My personal choice is Ambassador. If you follow the getting-started page, you just need to change the service definition for the port of your choice:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: {custom port}
      targetPort: 8080
  selector:
    service: ambassador
An Ingress definition is backed by an ingress controller. The ingress controller is deployed with normal Kubernetes objects, so it will have a Service associated with it that exposes the ingress controller's ports.
The kubernetes/ingress-nginx static deploys have a deploy.yaml with a Service of type LoadBalancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
Modify the ports the load balancer is configured with, in spec.ports[*].port in the external service, however that is deployed.
If you're using Helm to deploy the Kubernetes ingress-nginx controller, you can do this to change the ports from their defaults of 80 and 443 in a Helm values override file:
ingress-nginx:
  enabled: true
  ...
  controller:
    service:
      ports:
        http: 8123
        https: 9456
See here: https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx#values
In this example, any services exposed through the ingress are now available on ports 8123 (http) and 9456 (https).
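A sketch of applying such an override and checking the result (the release name, chart path, and values file name are examples, assuming ingress-nginx is a dependency of your own chart as the values layout above implies):
# apply the override
helm upgrade --install my-release ./my-chart -f values-override.yaml
# services behind the ingress are then reachable on the new ports
curl http://<load-balancer-address>:8123/<path-in-ingress>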