Access services on k8s on prem - kubernetes

I have 3 virtual machines (Ubuntu 18 LTS) on my local PC: 1 is the master and 2 are nodes. I was able to install Kubernetes and also to set up my application.
My application consists of 3 parts: database, backend and frontend. For each of these parts I've created and deployed services. I want to expose the FE service outside the cluster to be able to access it from one of the nodes.
The service description looks like this:
apiVersion: v1
kind: Service
metadata:
  name: fe-deployment
  labels:
    run: fe-srv
spec:
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 80
  selector:
    app: fe
  type: NodePort
The output of
kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8node1 Ready <none> 2d22h v1.16.0 172.17.199.105 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
k8node2 Ready <none> 2d22h v1.16.0 172.17.199.110 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
be-deployment ClusterIP 10.96.169.225 <none> 8080/TCP 2d22h app=be
db-deployment ClusterIP 10.110.14.88 <none> 3306/TCP 2d22h app=db
fe-deployment NodePort 10.104.211.32 <none> 8085:32476/TCP 2d21h app=fe
I would have expected to be able to access my FE from a browser using one of the node IPs and the node port, but it doesn't work.
What am I missing? How can I access my FE from outside the cluster?
Edit
Based on the documentation, a NodePort service:
Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort
I understand that I should be able to access my service from outside the cluster using a node IP and the static port. By node IP I understand the machine IP (the VM IP in my case).
Later Edit
I've checked the firewall and it seems that it is disabled on all my machines:
sudo ufw status
Status: inactive
Later later edit
As I mentioned in a comment, trying to telnet to the IPv4 address didn't work. Trying with IPv6 does work, both on localhost and using the Ethernet interface's IPv6 address.
The netstat output is:
netstat -6 -a | grep 324
tcp6 1 0 [::]:32476 [::]:* LISTEN
Despite the fact that it should work (based on the information I read on the internet), it doesn't work with IPv4. Is there a way to change this?
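For completeness, here is a minimal check of whether kube-proxy created the NodePort forwarding rules (assuming it runs in the default iptables mode):
# should list entries in the KUBE-NODEPORTS chain for port 32476
sudo iptables-save | grep 32476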
Later later later edit
It seems that this is a bug

You can assign an EXTERNAL-IP to the fe service, using the IP address of a node.
Then you can check it with: curl -k http://EXTERNAL-IP:PORT
Here EXTERNAL-IP is the IP address of the node.

In your case, because you didn't define a nodePort, Kubernetes randomly assigned port 32476 to your service. To access that service, go to <NODE-IP>:32476 (kubernetes-docs).
If you want to assign a specific port, you need to define nodePort in the service definition (example for an nginx-based ingress):
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
spec:
  ports:
  - name: http
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
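With a fixed nodePort like the 30080 above, a quick check from outside the cluster could look like this (a sketch, using one of the node IPs from the question):
curl http://172.17.199.105:30080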

You would not get an external IP when exposing a service as a NodePort.
Exposing a service as a NodePort means that your service will be available externally via the node IP of any node in the cluster, at a random port in the 30000-32767 range (default behaviour).
Each node in the cluster proxies that port (the same port number on every node) into the pods backing your service.
From your kubectl get service -o wide output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
fe-deployment NodePort 10.104.211.32 <none> 8085:32476/TCP 2d21h app=fe
We can see that the port on which your service is exposed is 32476.
From your kubectl get node -o wide output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8node1 Ready <none> 2d22h v1.16.0 172.17.199.105 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
k8node2 Ready <none> 2d22h v1.16.0 172.17.199.110 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
We can see that your node IPs are 172.17.199.105 and 172.17.199.110.
You can now access your service externally using <Node-IP>:<Node-Port>.
So in your case these are 172.17.199.105:32476 and 172.17.199.110:32476, depending on which node you use to access your service.
Additionally, if you want a fixed node port, you can specify that in the YAML.
You need to make sure you add a security rule on your nodes to allow traffic on the particular port.
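For example, if ufw were active on the nodes, a rule like the following would open the NodePort from the question (a sketch; adapt it to whatever firewall you actually use):
# allow external traffic to the NodePort on every node
sudo ufw allow 32476/tcp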

Related

Expose a server on port 80 via kubernetes

I'm attempting to expose a server on port 80 via kubernetes.
Start minikube:
minikube start
Create a deployment by running the command
"kubectl create deployment apache --image=httpd:2.4"
Create a service by running the command
"kubectl create service nodeport apache --tcp=80:80"
kubectl get svc
returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache NodePort 10.105.48.77 <none> 80:31619/TCP 5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 43s
I've attempted to open 10.105.48.77 & 10.96.0.1 on port 80 but the service is not running.
How can I start a simple HTTP server on port 80 via Kubernetes that will serve requests on that same port?
NodePort has a range of 30000-32767. Your output shows that 31619 is assigned, so you may try that. If you really want port 80 you will need another type of service, for example LoadBalancer. You can also use port-forward to forward your local port 80 to the apache pod.
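A minimal sketch of both options (binding the local port 80 requires root, so a high local port is used for the port-forward here):
# forward local port 8080 to port 80 of the apache service
kubectl port-forward svc/apache 8080:80
# or let minikube print the NodePort URL for the service
minikube service apache --url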

Ingress Nginx on Multi-Node VirtualBox Driver Minikube

I am following this tutorial for setting up Ingress with Ingress-Nginx on Minikube. But I can't seem to get it to work. I get a connection refused when I try to connect to port 80 on the VM IP address returned by minikube ip
My setup is this:
Minikube version: v1.25.1
VirtualBox version: 6.1
Kubernetes version: v1.22.5
The ingress-nginx namespace has the below resources:
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-controller-85f4c5b458-2dhqh 1/1 Running 0 49m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.102.88.109 <none> 80:30551/TCP,443:31918/TCP 20h
service/ingress-nginx-controller-admission ClusterIP 10.103.134.39 <none> 443/TCP 20h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 20h
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-85f4c5b458 1 1 1 20h
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 6s 20h
job.batch/ingress-nginx-admission-patch 1/1 6s 20h
The default namespace has the below resources
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/web-79d88c97d6-rvp2r 1/1 Running 0 47m 10.244.1.4 minikube-m02 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h <none>
service/web NodePort 10.104.20.14 <none> 8080:31613/TCP 20h app=web
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/example-ingress nginx hello-world.info localhost 80 20h
Minikube is exposing these services:
|---------------|------------------------------------|--------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|------------------------------------|--------------|-----------------------------|
| default | kubernetes | No node port |
| default | web | 8080 | http://192.168.59.106:31613 |
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.59.106:30551 |
| | | https/443 | http://192.168.59.106:31918 |
| ingress-nginx | ingress-nginx-controller-admission | No node port |
| kube-system | kube-dns | No node port |
| kube-system | registry | No node port |
|---------------|------------------------------------|--------------|-----------------------------|
In step 4 of the Create an Ingress section, the tutorial mentions this:
Add the following line to the bottom of the /etc/hosts file on your computer (you will need administrator access):
172.17.0.15 hello-world.info
Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP.
It's a three-node cluster using VirtualBox. I've tried adding the Minikube ingress-nginx-controller service's IP (192.168.59.106, which is also the result of minikube ip) to my hosts file, but it doesn't work. And as far as I know, I can't include the service's node port 30551 in the hosts file to test that.
Some guidance on how to get this working would be much appreciated.
You are correct. You cannot include the port in the /etc/hosts file. To get there, you would need to specify the host together with the port in your browser or some other application, as follows (assuming no connectivity issues):
hello-world.info:30551
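Alternatively, for a quick test you can skip the hosts-file entry entirely and send the Host header yourself (a sketch, using the node IP and NodePort from the question):
curl -H "Host: hello-world.info" http://192.168.59.106:30551/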
I'd recommend you to state specifically what type of issue you have. There can be multiple issues, and each one will have a different solution.
For example, there is a difference between being unable to access the Service and getting a 404 message.
I'm not sure if it's related, but I had connectivity issues when I created a cluster in the following way:
minikube start --driver="virtualbox"
minikube node add
minikube node add
However, when I ran the command below, I encountered none of those issues:
minikube start --driver="virtualbox" --nodes=3
Assuming that you would like to expose your Nginx Ingress controller on ports 80 and 443 instead of NodePorts, you can do the following:
Spawn your cluster
Deploy MetalLB (metallb.universe.tf)
Configure your address pool similar to the following:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.59.200-192.168.59.210
Change the type of your ingress-nginx-controller Service from NodePort to LoadBalancer (kubectl edit svc -n ingress-nginx ingress-nginx-controller)
Check on the Service:
kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.63.253 192.168.59.201 80:30092/TCP,443:30915/TCP 23m
Put the EXTERNAL-IP of your Ingress controller into your /etc/hosts file.
Create an Ingress resource that matches the name you've put in /etc/hosts and points to some backend, as sketched below.
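A minimal Ingress matching that hosts-file entry could look like this (a sketch based on the web service from the question, which listens on port 8080):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080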
Additional resources:
Kubernetes.github.io: Ingress nginx
Kubernetes.io: Docs: Concepts: Services networking: Service
When following the tutorial, I enabled the ingress addon after creating my cluster by running minikube addons enable ingress
This appeared to succeed, but when trying to connect to port 80 on the IP address returned by minikube ip (which is also the ingress-nginx-controller minikube service address), I got a connection refused. This can be validated by running:
nc -zv $(minikube ip) 80
However, when I enabled ingress at the time of initial cluster creation with this command:
minikube start --driver=virtualbox \
--kubernetes-version=v1.22.5 --nodes 3 \
--addons=ingress
and then ran nc -zv $(minikube ip) 80, the connection was accepted. I'm not sure if this is an issue with Minikube or with VirtualBox, but enabling ingress at initial cluster creation time rather than afterwards worked for me.
I was then able to update my hosts file with just the IP of the minikube node and the hello-world.info host.
One thing that might also be worth noting if you create and delete your clusters a lot: I found that, when updating the hosts file on a Mac, old IPs were sometimes being cached. Running sudo dscacheutil -flushcache may help with this.

ClusterIP not reachable within the Cluster

I'm struggling with Kubernetes configuration. What I want is simply to reach a deployment within the cluster. The cluster is on my dedicated server and I'm deploying it using kubeadm.
My nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9d v1.19.3
k8s-worker1 Ready <none> 9d v1.19.3
I have a deployment running (the basic nginx example):
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 29m
I've created a service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
my-service ClusterIP 10.106.109.94 <none> 80/TCP 20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
  - protocol: TCP
    port: 80
Now I would expect that, if I run curl 10.106.109.94:80 on my k8s-master, I get the HTTP answer, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've tried with NodePort as well and with targetPort and nodePort but the result is the same.
The cluster IP is not reachable from outside of your cluster. That means you will not get any response from the host machine that hosts your k8s cluster, because this IP is not part of your machine or of any other machine; it is a cluster IP used by your cluster's CNI network, such as Flannel or Weave.
So, to make your services accessible from the outside, or at least from the host machine, you have to change the type of your service (NodePort, LoadBalancer) or use kubectl port-forward.
If you change the service type to NodePort, you will get a response using any of your host machine IPs and the allocated nodePort.
For example, if your k8s-master is 192.168.x.x and the nodePort is 33303, then you can get a response with
curl http://192.168.x.x:33303
or
curl http://worker_node_ip:33303
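For reference, your service converted to NodePort could look something like this (a sketch; the nodePort value 30080 is only illustrative and must fall in the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080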
If your cluster is installed locally, then you can install MetalLB to get LoadBalancer functionality.
You can also use port-forward to make your service accessible from any host that has a kubectl client with access to the k8s cluster.
kubectl port-forward svc/my-service 80:80
kubectl -n namespace port-forward svc/service_name Port:Port
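Note that binding the local port 80 requires root privileges, so it is often easier to pick a high local port and test against that, for example:
kubectl port-forward svc/my-service 8080:80
curl http://localhost:8080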

How to assign an IP to istio-ingressgateway on localhost?

I am using kubespray to run a Kubernetes cluster on my laptop. The cluster is running on 7 VMs and the roles of the VMs are spread as follows:
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
I've installed Istio (https://istio.io/) to build a microservices environment.
I have 2 services running that I would like to access from outside:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
and the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
The problem is, I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
As you can see, the value in the EXTERNAL-IP column is <pending>.
The question is: how do I assign an EXTERNAL-IP to the istio-ingressgateway?
First of all, you can't make k8s assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; it is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; then you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to port 80 of the istio-ingressgateway service.
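To verify that the forward works, a plain request is enough (a minimal check); whether you get your application or a 404 from the gateway depends on the Host header matching your Gateway/VirtualService configuration:
curl -v http://localhost:8080/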
By default, there is no way Kubernetes can assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which is available in cloud offerings like GKE, AKS, EKS, etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
It's not possible, as Suresh explained.
But if you want access from your laptop, you can use type: NodePort in your service, which gives you access from outside the cluster.
You should first obtain the IP of one of your cluster nodes, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop with: http://<node-ip>:30000
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.111.187.167 127.0.0.1 15021:31949/TCP,80:32215/TCP,443:30585/TCP 9m48s

ingress-nginx No IP Address

I've created a test k8s cluster using kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-58c9df5856-v6hml 1/1 Running 0 28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-794dc89f5-f2vlx 1/1 Running 0 27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc LoadBalancer 10.233.25.131 <pending> 80:30055/TCP 27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the External IP for the service still appears to be pending, so something isn't working, and I'm at a loss as to where to start diagnosing what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use the type LoadBalancer if you are creating your cluster in a cloud environment such as AWS or GKE. In AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with your current settings and environment, change your service type from LoadBalancer to NodePort.
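One way to do that without editing the manifest is kubectl patch (a sketch; after the change, the service should remain reachable on the node port already allocated, 30055 in your output):
kubectl patch svc http-svc -p '{"spec": {"type": "NodePort"}}'
curl http://<node-ip>:30055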