I'm running a bare metal Kubernetes cluster with one master node and three worker nodes. I have a bunch of services deployed inside, with Istio as an ingress gateway.
Everything works fine since I can access my services from outside using the ingress-gateway NodePort.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
In our case, that's port 31106.
The issue is, I don't want my customers to access my service on port 31106; that's not user friendly. So is there a way to expose port 80 to the outside?
In other words, instead of typing http://example.com:31106/, I want them to be able to type http://example.com/.
Any solution could help.
Based on official documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
This is in line with what David Maze wrote in the comment:
A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up metallb for an implementation of that. The NodePort port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.
In your situation you need to access the gateway using the NodePort, and then configure Istio. Everything is described step by step in this doc. You need to choose the instructions corresponding to NodePort and then set the ingress IP depending on the cluster provider. You can also find sample yaml files in the documentation.
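For reference, the NodePort variant of that doc determines the ingress host and ports roughly like this (a sketch assuming the default port names on the istio-ingressgateway service):

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')

With your output, INGRESS_PORT resolves to 31106; to serve plain port 80 you would put something in front of the nodes (MetalLB, an external proxy, or DNS pointing at a load balancer) that forwards to that NodePort.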
Related
I have created an nginx pod and an nginx ClusterIP service, and assigned an external IP to that service, like below:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-nginx ClusterIP 10.110.93.251 192.168.0.10 443/TCP,80/TCP,8000/TCP,5443/TCP 79m
In one of my application pods, I run the commands below to get its FQDN.
>>> import socket
>>> socket.getfqdn('192.168.0.10')
'test-nginx.test.svc.cluster.local'
It returns the nginx service FQDN instead of my host machine's FQDN. Is there a way to block DNS resolution only for the external IP? Or is there any other workaround for this problem?
You assigned an external IP to a ClusterIP service in Kubernetes so you can access your application from outside the cluster, but you are concerned about the pods having access to that external IP and want to block its DNS resolution.
This is not the best approach to your issue. Kubernetes has several ways to expose services without compromising security; for what you want, a better option may be to implement an Ingress instead.
An Ingress routes the incoming traffic to the desired service based on configured rules, isolating the outside world from your service and only allowing specific traffic in. You can also implement features such as TLS termination for your HTTPS traffic, and it performs load balancing by default.
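For illustration, a minimal Ingress for your test-nginx service could look like the sketch below (the host name is hypothetical, and it assumes an ingress controller such as ingress-nginx is already installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx
  namespace: test
spec:
  rules:
  - host: nginx.example.com   # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-nginx
            port:
              number: 80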
Even further, if your main concern is security within your Cluster, you can take a look at the Istio Service mesh.
I have a Google Cloud Project with:
Internal network.
I also deployed Kubernetes using this internal network.
I deployed a deployment with a service (no external IP).
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master ClusterIP 10.0.0.213 <none> 6379/TCP 27s
Now, I also deployed another VM instance within the same internal network. I want this VM to access the IP 10.0.0.213 on port 6379, but it's not working.
I read here that I need to port-forward in order to make it possible, but I don't want to expose my Kubernetes cluster credentials in this VM.
A LoadBalancer will give me an external IP, which will work within the internal network but will also be reachable from the internet.
So, how to expose it just to the Google internal network?
I guess what you need is an Internal Load Balancer. You can simply annotate the Service with cloud.google.com/load-balancer-type: "Internal". See the internal-load-balancing.
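A minimal sketch of such a Service (the selector is an assumption; match it to your deployment's pod labels):

apiVersion: v1
kind: Service
metadata:
  name: redis-master-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: redis-master   # assumption: adjust to your pod labels
  ports:
  - port: 6379
    targetPort: 6379

The assigned EXTERNAL-IP then comes from your internal network, so the other VM can reach it but the internet cannot.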
Recently I started building my very own Kubernetes cluster using a few Raspberry Pis.
I have gotten to the point where I have a cluster up and running!
Some background info on how I set up the cluster: I used this guide.
But now, when I want to deploy and expose an application, I encounter some issues...
Following the Kubernetes tutorials, I have made a deployment of nginx, and this is running fine. When I do a port-forward I can see the default nginx page on my localhost.
Now the tricky part: creating a service and routing the traffic from the internet through an ingress to the service.
I have executed the following commands:
kubectl expose deployment/nginx --type="NodePort" --port 80
kubectl expose deployment/nginx --type="LoadBalancer" --port 80
And these result in the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx NodePort 10.103.77.5 <none> 80:30106/TCP 7m50s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx LoadBalancer 10.107.233.191 <pending> 80:31332/TCP 4s
The external IP address never shows up, which makes it quite impossible for me to access the application from outside of the cluster by doing curl some-ip:80, which in the end is the whole reason for me to set up this cluster.
If any of you have some clear guides or advice I can work with, it would be really appreciated!
Note:
I have read things about LoadBalancer; this is supposed to be provided by the cloud host. Since I run on RPi, I don't think this will work for me, but I believe NodePort should be just fine to route with an ingress.
Also, I am aware of the fact that I should have an ingress controller of some sort for ingress to work.
Edit
So I now have the following for the NodePort - 30168:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.96.125.112 <none> 80:30168/TCP 6m20s
And for the IP address I have either 192.168.178.102 or 10.44.0.1:
$ kubectl describe pod nginx-688b66fb9c-jtc98
Node: k8s-worker-2/192.168.178.102
IP: 10.44.0.1
But when I enter either of these IP addresses in the browser with the NodePort, I still don't see the nginx page. Am I doing something wrong?
Any of your worker nodes' IP addresses will work for a NodePort (or LoadBalancer) service. From the description of NodePort services:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service.
If you don't know those IP addresses kubectl get nodes can tell you; if you're planning on calling them routinely then setting up a load balancer in front of the cluster or configuring DNS (or both!) can be helpful.
In your example, say some node has the IP address 10.20.30.40 (you log into the Raspberry Pi directly, run ifconfig, and that's the host's address); you can reach the nginx from the second example at http://10.20.30.40:31332.
The EXTERNAL-IP field will never fill in for a NodePort service, or when you're not in a cloud environment that can provide an external load balancer for you. That doesn't affect this case; for either of these service types you can still call the port on the node directly.
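Concretely, with the values from your edit: 192.168.178.102 is the node's address, while 10.44.0.1 is the pod's overlay-network IP, which is only routable inside the cluster. Assuming nothing on your network filters the node port range, this should work:

curl http://192.168.178.102:30168/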
Since you are not in a cloud provider, you need to use MetalLB to have the LoadBalancer features working.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.
The MetalLB setup is very easy:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
This will deploy MetalLB to your cluster, under the metallb-system namespace.
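You can check that the controller and speaker pods are running before continuing:

kubectl get pods -n metallb-system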
You need to create a ConfigMap with the IP range you want to use. Create a file named metallb-cf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # select the range you want
kubectl apply -f metallb-cf.yaml
That's all.
To use it with your services, just create them with type LoadBalancer and MetalLB will do the rest. If you want to customize the configuration, see here.
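For example, a minimal LoadBalancer Service for the nginx deployment from the question (a sketch; the selector is an assumption, match it to your pod labels):

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx   # assumption: use your deployment's pod labels
  ports:
  - port: 80
    targetPort: 80

Once applied, MetalLB fills in EXTERNAL-IP with an address from the pool configured above.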
MetalLB will assign an IP for your service/ingress, but if you are in a NAT network you need to configure your router to forward the requests for your ingress/service IP.
EDIT:
If you have problems getting an external IP with MetalLB running on a Raspberry Pi, try changing iptables to the legacy version:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
Reference: https://www.shogan.co.uk/kubernetes/building-a-raspberry-pi-kubernetes-cluster-part-2-master-node/
I hope that helps.
My organization offers Containers as a Service through Rancher. I start a rabbitmq service using some web interface. The service started OK. I'm having trouble accessing this service through an external IP.
Using kubectl, I tried to get the list of the running services:
$ kubectl get services -n flash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-ha ClusterIP XX.XX.X.XXX <none> 15672/TCP,5672/TCP,4369/TCP 46m
rabbitmq-ha-discovery ClusterIP None <none> 15672/TCP,5672/TCP,4369/TCP 46m
How do I expose the 'rabbitmq-ha' service to the external world so I can access it via IP address:15672, etc.? Right now, the external IP is none. I'm not sure how to get Kubernetes to assign one.
If you are in a supported cloud environment (AWS, GCP, Azure, etc.) then you can create a service of type LoadBalancer, and an external load balancer will be provisioned and an external IP or DNS name will be assigned by your cloud provider. Here are the docs on this.
If you are on bare metal on-prem then you can use MetalLB, which provides an implementation of LoadBalancer.
Apart from the above, you can also use a NodePort type service to expose a service outside your Kubernetes cluster. Here is a guide on how to do that.
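As a sketch, a hypothetical NodePort Service for rabbitmq-ha (the selector is an assumption; it must match the labels on your rabbitmq pods):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-ha-nodeport
  namespace: flash
spec:
  type: NodePort
  selector:
    app: rabbitmq-ha   # assumption: match your pods' labels
  ports:
  - name: management
    port: 15672
    nodePort: 31672    # must be within 30000-32767
  - name: amqp
    port: 5672
    nodePort: 30672

You could then reach the management UI at http://<any-node-ip>:31672.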
One disadvantage of using a LoadBalancer type service is that an external load balancer is provisioned for every service, which is costly; as an alternative you can use the Ingress abstraction. Ingress is implemented by many pieces of software such as nginx, HAProxy, and Traefik.
I have an api service with type ClusterIP which is working fine and is accessible on the node via the cluster IP. I want to access it externally. It's a bare metal installation with kubeadm. I cannot use LoadBalancer or NodePort.
If I use nginx-ingress, that too will be ClusterIP, so how do I make the service externally accessible, in either the api service or the nginx-ingress case?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api ClusterIP 10.97.48.17 <none> 80/TCP 41s
ingress-nginx ClusterIP 10.107.76.178 <none> 80/TCP 3h49m
Changes to solve the issue:
nginx configuration on the node, in /etc/nginx/sites-available:
upstream backend {
    server node1:8001;
    server node2:8001;
    server node3:8001;
}

server {
    listen 80 default_server;
    server_name _;

    location / {
        # hand every request to the upstream pool;
        # note: no try_files here, it would answer 404 before proxy_pass runs
        proxy_pass http://backend;
    }
}
I ran my two services as a DaemonSet.
ClusterIP services are accessible only within the cluster.
For bare metal clusters, you can use any of the following approaches to make a service available externally. Suggestions are in most-recommended to least-recommended order:
Use MetalLB to implement LoadBalancer service type support - https://metallb.universe.tf/. You will need a pool of IP addresses for MetalLB to hand out. It also supports an IP sharing mode where you can use the same IP for multiple LoadBalancer services.
Use a NodePort service. You can access your service at any node's IP:node_port address. A NodePort service selects a random port in the node port range by default. You can choose a custom port in the node port range using the spec.ports.nodePort field in the service specification.
Disadvantage: the node port range is 30000-32767 by default, so you cannot bind to an arbitrary custom port like 8080. Although you can change the node port range with the --service-node-port-range flag of kube-apiserver, using it with low port ranges is not recommended.
Use hostPort to bind a port on the node (see the DaemonSet sketch after this list).
Disadvantage: you don't have a fixed IP address, because you don't know which node your pod gets scheduled to unless you use nodeAffinity. You can make your pod a DaemonSet if you want it to be accessible from all nodes on the given port.
If you are dealing with HTTP traffic, another option is installing an IngressController like nginx or Traefik and using an Ingress resource. As part of their installation, they use one of the approaches mentioned above to make themselves available externally.
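As referenced in the hostPort item above, a minimal DaemonSet sketch that binds port 8001 on every node (the image name is hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: my-api:latest   # hypothetical image
        ports:
        - containerPort: 8001
          hostPort: 8001       # reachable at <node-ip>:8001 on every node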
Well, as you can guess from the name, ClusterIP is only accessible from inside the cluster.
To make a service accessible from outside the cluster, you have three options:
NodePort Service type
LoadBalancer Service type (you still have to manage your load balancer manually, though)
Ingress
There is a fourth option, hostPort (which is not a service type), but I'd rather reserve it for special cases where you're absolutely sure your pod will always be located on the same node (or for debugging).
Having said this, that leaves us with only one solution offered by Kubernetes: Ingress.
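As a sketch, an Ingress routing a hypothetical host to your api service (outside traffic still needs a path to the nginx-ingress pods, e.g. the hostPort DaemonSet or the node-level nginx shown above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx   # assumption: matches your nginx-ingress controller
  rules:
  - host: api.example.com   # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 80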