I'm struggling with Kubernetes configuration. All I want is to reach a deployment within the cluster. The cluster runs on my dedicated server and I deployed it using kubeadm.
My nodes:
$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    master   9d    v1.19.3
k8s-worker1   Ready    <none>   9d    v1.19.3
I've a deployment running (nginx basic example)
$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           29m
I've created a service
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   9d
my-service   ClusterIP   10.106.109.94   <none>        80/TCP    20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80
Now I would expect that running curl 10.106.109.94:80 on k8s-master returns the HTTP response, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've also tried with type NodePort, and with targetPort and nodePort set, but the result is the same.
The ClusterIP is not reachable from outside the cluster, which means you will not get any response from the host machine that hosts your k8s cluster. That IP does not belong to your machine or any other machine; it is a cluster IP handled by your cluster's CNI network (Flannel, Weave, etc.).
So, to make your services accessible from outside, or at least from the host machine, you have to either change the service type (NodePort, LoadBalancer) or use kubectl port-forward.
If you change the service type to NodePort, then you will get a response from any of your host machines' IPs on the allocated node port.
For example, if your k8s-master is 192.168.x.x and the nodePort is 33303, then you can get a response with
curl http://192.168.x.x:33303
or
curl http://worker_node_ip:33303
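If it helps, a rough sketch of your Service switched to NodePort would look like this (keeping the selector from your manifest; nodePort 33303 is just the example value above, and if you omit it Kubernetes picks a free port in the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 33303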
If your cluster is installed locally (bare metal), you can install MetalLB to get LoadBalancer functionality.
You can also use port-forward to make your service accessible from any host that has a kubectl client with access to the cluster.
kubectl port-forward svc/my-service 80:80
kubectl -n namespace port-forward svc/service_name Port:Port
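Note that binding local port 80 usually requires root privileges, so forwarding to an unprivileged local port is often easier; for example (8080 here is just an arbitrary free local port):

kubectl port-forward svc/my-service 8080:80
curl http://localhost:8080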
Related
I run a local Kubernetes cluster (Minikube) and I'm trying to connect pgAdmin to PostgreSQL, both running in Kubernetes.
What would the connection string be? Should I connect using the service IP address or the service name?
kubectl get service
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
dbpostgresql      NodePort    10.103.252.31   <none>        5432:30201/TCP   19m
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP          3d21h
pgadmin-service   NodePort    10.109.58.168   <none>        80:30200/TCP     40h
kubectl get ingress:
NAME              CLASS    HOSTS   ADDRESS        PORTS   AGE
pgadmin-ingress   <none>   *       192.168.49.2   80      40h
kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
pgadmin-5569ddf4dd-49r8f    1/1     Running   1          40h
postgres-78f4b5db97-2ngck   1/1     Running   0          23m
I have tried with 10.103.252.31:30201 but without success.
Inside the cluster, services can refer to each other by DNS based on Service object names. So in this case you would use dbpostgresql or dbpostgresql.default.svc.cluster.local as the hostname.
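So in pgAdmin's connection settings (since pgAdmin itself runs inside the cluster) that would look roughly like the following; the credentials are whatever you configured in your postgres deployment:

Host name/address: dbpostgresql   (or dbpostgresql.default.svc.cluster.local)
Port:              5432
Username/Password: as configured in the postgres deployment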
Remember that minikube runs inside its own container/VM, so the NodePorts and ClusterIPs you're getting back are only open inside of minikube. To get minikube's own resolution of IP and port, run: minikube service <your-service-name> --url
This will return something like http://127.0.0.1:50946 which you can use to create an external DB connection.
Another option would be to use kubectl to forward a local port to the service, e.g. kubectl port-forward service/django-service 8080:80, and then connect via localhost.
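For this cluster, a rough equivalent for the database service would be the following (then point a client running on your machine, e.g. a local pgAdmin rather than the in-cluster one, at localhost:5432):

kubectl port-forward service/dbpostgresql 5432:5432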
I installed MicroK8s on my Ubuntu machine based on the steps here: https://ubuntu.com/kubernetes/install#single-node
Then I followed the official Kubernetes tutorial and created and exposed a deployment like this:
microk8s.kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
microk8s.kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port 8083
This is my kubectl get services output
akila#ubuntu:~$ microk8s.kubectl get services
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP   10.152.183.1    <none>        443/TCP          25h
kubernetes-bootcamp   NodePort    10.152.183.11   <none>        8083:31695/TCP   17s
This is my kubectl get pods output
akila#ubuntu:~$ microk8s.kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-6f6656d949-rllgt   1/1     Running   0          51m
But I can't access the service from my browser using http://localhost:8083 OR using http://10.152.183.11:31695
When I tried http://localhost:31695 I'm getting ERR_CONNECTION_REFUSED.
How can I access this "kubernetes-bootcamp" service from my browser ?
Am I missing anything?
The IP 10.152.183.11 is the CLUSTER-IP and is not accessible from outside the cluster, i.e. from a browser. You should be using http://localhost:31695, where 31695 is the NodePort opened on the host system.
The container of the gcr.io/google-samples/kubernetes-bootcamp:v1 image needs to listen on port 8083, because you are exposing it on that port (with no explicit targetPort, the Service's targetPort defaults to the value of port). Double-check that, because otherwise you will get the ERR_CONNECTION_REFUSED error.
If the container is listening on port 8080, then use the command below to expose that port instead:
microk8s.kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port 8080
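Note that the existing service has to be removed before re-exposing on the new port; a rough sequence would be (the NodePort below is whatever the cluster assigns):

microk8s.kubectl delete service kubernetes-bootcamp        # remove the old service first
microk8s.kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port 8080
microk8s.kubectl get service kubernetes-bootcamp           # note the newly assigned NodePort
curl http://localhost:<new-nodeport>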
Try this
kubectl port-forward <pod_name> <local_port>:<pod_port>
then access http://localhost:<local_port>
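For this particular pod, that would be roughly (assuming the bootcamp container listens on 8080):

microk8s.kubectl port-forward kubernetes-bootcamp-6f6656d949-rllgt 8080:8080

and then open http://localhost:8080 in the browser.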
I have 3 virtual machines (Ubuntu 18 LTS) on my local PC: 1 master and 2 worker nodes. I was able to install Kubernetes and also to set up my application.
My application consists of 3 parts: database, backend, and frontend. For each of these parts I've created and deployed a service. I want to expose the FE service outside the cluster so I can access it from one of the nodes.
The service description looks like this:
apiVersion: v1
kind: Service
metadata:
  name: fe-deployment
  labels:
    run: fe-srv
spec:
  ports:
    - protocol: TCP
      port: 8085
      targetPort: 80
  selector:
    app: fe
  type: NodePort
The output of
kubectl get node -o wide
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8node1   Ready    <none>   2d22h   v1.16.0   172.17.199.105   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
k8node2   Ready    <none>   2d22h   v1.16.0   172.17.199.110   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
kubectl get service -o wide
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
be-deployment   ClusterIP   10.96.169.225   <none>        8080/TCP         2d22h   app=be
db-deployment   ClusterIP   10.110.14.88    <none>        3306/TCP         2d22h   app=db
fe-deployment   NodePort    10.104.211.32   <none>        8085:32476/TCP   2d21h   app=fe
I would have expected to be able to access my FE from a browser using a node IP and the node port, but it doesn't work.
What am I missing? How to access my FE from outside the cluster?
Edit
Based on the documentation, NodePort service type should:
Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort
I understand that I should be able to access my service from outside the cluster using a node IP and the static port. By "node IP" I understand the machine's (in my case, the VM's) IP.
Later Edit
I've checked the firewall and it seems that it is disabled on all my machines:
sudo ufw status
Status: inactive
Later later edit
As I said in a comment, trying to telnet to the IPv4 address didn't work. Trying with IPv6 does work on localhost and also using the Ethernet interface's IPv6 address.
The netstat output is:
netstat -6 -a | grep 324
tcp6 1 0 [::]:32476 [::]:* LISTEN
Despite the fact that it should work (based on the information I've read on the internet), it doesn't work with IPv4. Is there a way to change this?
Later later later edit
It seems that this is a bug
You can assign an EXTERNAL-IP to the fe service, using the IP address of a node.
Then you can check: curl -k http://EXTERNAL-IP:PORT
Here EXTERNAL-IP is the node's (server's) IP address.
In your case, because you didn't define nodePort, Kubernetes randomly assigned port 32476 to your service. To access that service, go to <EXTERNAL-NODE-IP>:32476 (kubernetes-docs).
If you want to assign a specific port, you need to define nodePort in the service definition (example for an nginx-based ingress service):
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
spec:
  ports:
    - name: http
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
You will not get an external IP when exposing a service as a NodePort.
Exposing a Service as a NodePort means that your service is available externally via the node IP of any node in the cluster, at a random port between 30000-32767 (default behaviour).
Each node in the cluster proxies that port (the same port number on every node) into the pods backing your service.
From your kubectl get service -o wide output:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
fe-deployment   NodePort   10.104.211.32   <none>        8085:32476/TCP   2d21h   app=fe
We can see that the port on which your service is exposed is 32476.
From your kubectl get node -o wide output:
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8node1   Ready    <none>   2d22h   v1.16.0   172.17.199.105   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
k8node2   Ready    <none>   2d22h   v1.16.0   172.17.199.110   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
We can see that your node IPs are 172.17.199.105 and 172.17.199.110.
You can now access your service externally using <Node-IP>:<Node-Port>.
So in your case these are 172.17.199.105:32476 and 172.17.199.110:32476, depending on which node you want to use to reach your service.
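For example (32476 is the node port taken from your kubectl get service output):

curl http://172.17.199.105:32476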
Additionally, if you want a fixed Node port, you can specify that in the yaml.
You need to make sure you add a security rule on your nodes to allow traffic on the particular port.
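With ufw, for instance, that would look roughly like the following (yours is reported inactive, so this only matters if you enable it later):

sudo ufw allow 32476/tcp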
I deployed my service with the command kubectl expose deployment hotspot-deployment --type=LoadBalancer --port=8080
and the external-ip was pending:
hotspot-deployment   LoadBalancer   10.104.104.81   <pending>   8080:31904/TCP   1m
kubernetes           ClusterIP      10.96.0.1       <none>      443/TCP          18m
After searching for a solution, I assigned the external IP to my service with:
kubectl patch svc hotspot-deployment -p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.98.103"]}}'
and the IP got assigned:
hotspot-deployment   LoadBalancer   10.106.137.71   192.168.98.103   8080:31354/TCP   12m
kubernetes           ClusterIP      10.96.0.1       <none>           443/TCP          35m
Now, when I try to hit my service using the URL: http://192.168.98.103:8080
The page doesn't open. I tried debugging with minikube tunnel, and it shows:
machine: minikube
pid: 33058
route: 10.96.0.0/12 -> 192.168.99.105
minikube: Running
services: [hotspot-deployment]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Please help!
First of all, you cannot expose your service in minikube with the LoadBalancer type, because that type is intended to use a cloud provider's load balancer.
You should instead use the minikube service command:
minikube service hotspot-deployment --url
Then open the URL in your browser.
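With the NodePort 31354 and the minikube IP 192.168.99.105 from your own output, the printed URL would likely look something like this (the exact IP and port depend on your minikube instance):

$ minikube service hotspot-deployment --url
http://192.168.99.105:31354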
I've created a test k8s cluster using Kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-58c9df5856-v6hml   1/1     Running   0          28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
http-svc-794dc89f5-f2vlx   1/1     Running   0          27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http-svc   LoadBalancer   10.233.25.131   <pending>     80:30055/TCP   27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the external IP for the service still appears to be pending, so something isn't working, and I'm at a loss as to where to start diagnosing what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use the LoadBalancer type if you are creating your cluster in a cloud environment such as AWS or GKE. In AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with your current settings and environment, change the service type from LoadBalancer to NodePort.
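A minimal sketch of that change, reusing the http-svc service and the NodePort 30055 already visible in your describe output (Kubernetes may keep or reassign the node port when the type changes):

kubectl patch svc http-svc -p '{"spec": {"type": "NodePort"}}'
curl http://<node-ip>:30055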