I have a Service (LoadBalancer) definition in a k8s cluster that exposes ports 80 and 443.
In the k8s dashboard, it indicates that these are the external endpoints:
(the cluster has been deployed using Rancher, for what that's worth)
<some_rancher_agent_public_ip>:80
<some_rancher_agent_public_ip>:443
Here comes the weird (?) part:
From a busybox pod spawned within the cluster:
wget <some_rancher_agent_public_ip>:80
wget <some_rancher_agent_public_ip>:443
both succeed (i.e. they fetch the index.html file)
From outside the cluster:
Connecting to <some_rancher_agent_public_ip>:80... connected.
HTTP request sent, awaiting response...
2018-01-05 17:42:51 ERROR 502: Bad Gateway.
I am assuming this is not a security groups issue given that:
it does connect to <some_rancher_agent_public_ip>:80
I have also tested this by allowing all traffic from all sources in the security group the instance with <some_rancher_agent_public_ip> belongs to
In addition, nmap-ing the above public IP shows ports 80 and 443 as open.
Any suggestions?
update:
$ kubectl describe svc ui
Name: ui
Namespace: default
Labels: <none>
Annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:eu-west-1:somecertid
Selector: els-pod=ui
Type: LoadBalancer
IP: 10.43.74.106
LoadBalancer Ingress: <some_rancher_agent_public_ip>, <some_rancher_agent_public_ip>
Port: http 80/TCP
TargetPort: %!d(string=ui-port)/TCP
NodePort: http 30854/TCP
Endpoints: 10.42.179.14:80
Port: https 443/TCP
TargetPort: %!d(string=ui-port)/TCP
NodePort: https 31404/TCP
Endpoints: 10.42.179.14:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and here is the respective pod description:
kubectl describe pod <the_pod_id>
Name: <pod_id>
Namespace: default
Node: ran-agnt-02/<some_rancher_agent_public_ip>
Start Time: Fri, 29 Dec 2017 16:48:42 +0200
Labels: els-pod=ui
pod-template-hash=375086521
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ui-deployment-7c94db965","uid":"5cea65ea-eca7-11e7-b8e0-0203f78b...
Status: Running
IP: 10.42.179.14
Created By: ReplicaSet/ui-deployment-7c94db965
Controlled By: ReplicaSet/ui-deployment-7c94db965
Containers:
ui:
Container ID: docker://some-container-id
Image: docker-registry/imagename
Image ID: docker-pullable://docker-registry/imagename#sha256:some-sha
Port: 80/TCP
State: Running
Started: Fri, 05 Jan 2018 16:24:56 +0200
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 05 Jan 2018 16:23:21 +0200
Finished: Fri, 05 Jan 2018 16:23:31 +0200
Ready: True
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8g7bv (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-8g7bv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8g7bv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Kubernetes provides different ways of exposing pods outside the cluster, mainly Services and Ingress. I'll focus on Services, since that's where you are having issues.
There are different Services types, among those:
ClusterIP: the default type. Choosing this type means that your service gets a stable IP which is reachable only from inside the cluster. Not relevant here.
NodePort: besides having a cluster-internal IP, exposes the service on a random port on each node of the cluster (the same port on each node). You'll be able to contact the service on any NodeIP:NodePort address. That's why you can contact your rancher_agent_public_ip:NodePort from outside the cluster.
LoadBalancer: besides having a cluster-internal IP and exposing the service on a NodePort, asks the cloud provider for a load balancer that exposes the service externally.
Creating a Service of type LoadBalancer makes it NodePort as well. That's why you can reach rancher_agent_public_ip:30854.
I have no experience with Rancher, but it seems that creating a LoadBalancer Service deploys an HAProxy to act as the load balancer. That HAProxy created by Rancher needs a public IP that's reachable from outside the cluster, and a port that will redirect requests to the NodePort.
But in your service, the IP looks like an internal one (10.43.74.106). That IP won't be reachable from outside the cluster; you need a public IP.
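For reference, a minimal Service manifest of type LoadBalancer, sketched with placeholder values based on the labels shown in the question, would look like this:

```yaml
# Sketch only: a LoadBalancer Service matching the question's els-pod=ui label.
# On a cloud provider this provisions an external load balancer with a public IP;
# on Rancher/bare metal, something (e.g. HAProxy) must supply that public IP.
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: LoadBalancer
  selector:
    els-pod: ui
  ports:
  - name: http
    port: 80
    targetPort: ui-port   # named container port, as in the question's output
```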
I have run a basic example project and can confirm it is running, but I cannot identify its URL.
kubectl describe service gives me:
NAME READY STATUS RESTARTS AGE
frontend-6c8b5cc5b-v9jlb 1/1 Running 0 26s
PS D:\git\helm3\lab1_kubectl_version1\yaml> kubectl describe service
Name: frontend
Namespace: default
Labels: name=frontend
Annotations: <none>
Selector: app=frontend
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.59.44
IPs: 10.108.59.44
Port: <unset> 80/TCP
TargetPort: 4200/TCP
Endpoints: 10.1.0.38:4200
Session Affinity: None
Events: <none>
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Should I be able to hit this locally or not? The demo suggests yes but no URL is given and anything I attempt fails.
From outside, you do not have any way to connect to your service, since its type is set to ClusterIP. If you want to expose your service directly, you should set it to either LoadBalancer or NodePort. For more info about these types, check this link.
However, your service has an internal URL which works within the cluster (for example, if you exec into a pod and curl that URL, you will get a response): <your service>.<your namespace>.svc.cluster.local
Replace <your service> with the name of the service and <your namespace> with the namespace in which that service resides. The rest of the URL is the same for all services.
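As an illustration, the in-cluster URL for the frontend service from the question can be assembled like this (plain shell string handling; no cluster is needed to build the name):

```shell
# Build the cluster-internal DNS name for a Service.
SERVICE=frontend     # service name from the question
NAMESPACE=default    # namespace it lives in
URL="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$URL"
```

From inside a pod in the cluster, `wget -qO- http://frontend.default.svc.cluster.local` would then reach the service on port 80.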
Hi Kubernetes Experts,
I have an application cluster running in an Azure Kubernetes cluster. There are 3 pods inside the application cluster. The app is designed in such a way that each pod listens on a different port; for example, pod 1 listens on 31090, pod 2 on 31091 and pod 3 on 31092.
This application needs to be reachable from outside the network. At this point, I need to create a separate load balancer service for each of the pods.
In the service, I cannot use the app name/label as a selector, as it tries to distribute traffic between all 3 pods in a round-robin way. As you can see above, each port (say 31090) is served by only one pod, so external connections to the load balancer IP fail 2/3 of the time.
So I am trying to create 3 different load balancer services, one per pod, without specifying a selector, and later assigning an endpoint to each of them individually.
The approach is explained here:
In Kubernetes, how does one select a pod by name in a service selector?
But after the Endpoints object is created, the service shows its endpoints as blank. See below.
First I created only the service
---
apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 31090
targetPort: 31090
name: b0
type: LoadBalancer
After this, the service shows endpoint as "none". So far, so good.
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints: <none>**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3s service-controller Ensuring load balancer
Then I create the Endpoints object. I have made sure the names match between the service and the Endpoints, including any spaces or tabs. But the endpoints field in the service description shows "" (blank), and this is why I am unable to reach the app from outside the network. Telnet to the external IP and port just keeps trying.
---
apiVersion: v1
kind: Endpoints
metadata:
name: myservice
subsets:
- addresses:
- ip: 10.240.1.32
ports:
- port: 31090
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
LoadBalancer Ingress: 20.124.49.192
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints:**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3m22s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 3m10s service-controller Ensured load balancer
Only this service (the one with no selector) is failing. All my other external load balancer services are working fine; they all reach the pods, and they all use the app label as a selector.
Here is the pod IP. I have ensured port 31090 is listening inside the pod.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ck21-cp-kafka-0 2/2 Running 2 (76m ago) 78m **10.240.1.32** aks-agentpool-26199219-vmss000013 <none> <none>
Can someone please help me here?
Thanks !
I am having a networking issue in Kubernetes.
I am trying to preserve the source IP of incoming requests to a clusterIP service, but I find that the requests appear to be source NAT'd. That is, they carry the IP address of the node as the source IP rather than the IP of the pod making the request. I am following the example for cluster IPs here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-clusterip
but I find that the behavior of Kubernetes is totally different for me.
The above example has me deploy an echo server which reports the source IP. This is deployed behind a clusterIP service which I request from a separate pod running busybox. The response from the echo server is below:
CLIENT VALUES:
client_address=10.1.36.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.152.183.99:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
connection=close
host=10.152.183.99
user-agent=Wget
BODY
The source IP 10.1.36.1 belongs to the node. I expected to see the address of busybox which is 10.1.36.168.
Does anyone know why SNAT would be enabled for a ClusterIP service? It's really strange to me that this directly contradicts the official documentation.
All of this is running on the same node. kube-proxy is running in iptables mode. I am using microk8s.
My microk8s version:
Client:
Version: v1.2.5
Revision: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
Server:
Version: v1.2.5
Revision: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
Output of kubectl describe service clusterip:
Name: clusterip
Namespace: default
Labels: app=source-ip-app
Annotations: <none>
Selector: app=source-ip-app
Type: ClusterIP
IP: 10.152.183.106
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.1.36.225:8080
Session Affinity: None
Events: <none>
Output of kubectl describe pod source-ip-app-7c79c78698-xgd5w:
Name: source-ip-app-7c79c78698-xgd5w
Namespace: default
Priority: 0
Node: riley-virtualbox/10.0.2.15
Start Time: Wed, 12 Feb 2020 09:19:18 -0600
Labels: app=source-ip-app
pod-template-hash=7c79c78698
Annotations: <none>
Status: Running
IP: 10.1.36.225
IPs:
IP: 10.1.36.225
Controlled By: ReplicaSet/source-ip-app-7c79c78698
Containers:
echoserver:
Container ID: containerd://6775c010145d3951d067e3bb062bea9b70d305f96f84aa870963a8b385a4a118
Image: k8s.gcr.io/echoserver:1.4
Image ID: sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 12 Feb 2020 09:19:23 -0600
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7pszf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-7pszf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7pszf
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/source-ip-app-7c79c78698-xgd5w to riley-virtualbox
Normal Pulled 2m58s kubelet, riley-virtualbox Container image "k8s.gcr.io/echoserver:1.4" already present on machine
Normal Created 2m55s kubelet, riley-virtualbox Created container echoserver
Normal Started 2m54s kubelet, riley-virtualbox Started container echoserver
I am getting the following error when forwarding port. Can anyone help?
mjafary$ sudo kubectl port-forward sa-frontend 88:82
Forwarding from 127.0.0.1:88 -> 82
Forwarding from [::1]:88 -> 82
The error log :
Handling connection for 88
Handling connection for 88
E1214 01:25:48.704335 51463 portforward.go:331] an error occurred forwarding 88 -> 82: error forwarding port 82 to pod a017a46573bbc065902b600f0767d3b366c5dcfe6782c3c31d2652b4c2b76941, uid : exit status 1: 2018/12/14 08:25:48 socat[19382] E connect(5, AF=2 127.0.0.1:82, 16): Connection refused
Here is the description of the pod. My expectation is that when I hit localhost:88 in the browser, the request should be forwarded to the jafary/sentiment-analysis-frontend container and the application page should load.
mjafary$ kubectl describe pods sa-frontend
Name: sa-frontend
Namespace: default
Node: minikube/192.168.64.2
Start Time: Fri, 14 Dec 2018 00:51:28 -0700
Labels: app=sa-frontend
Annotations: <none>
Status: Running
IP: 172.17.0.23
Containers:
sa-frontend:
Container ID: docker://a87e614545e617be104061e88493b337d71d07109b0244b2b40002b2f5230967
Image: jafary/sentiment-analysis-frontend
Image ID: docker-pullable://jafary/sentiment-analysis-frontend#sha256:5ac784b51eb5507e88d8e2c11e5e064060871464e2c6d467c5b61692577aeeb1
Port: 82/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 14 Dec 2018 00:51:30 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mc5cn (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-mc5cn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mc5cn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
The reason the connection is refused is that there is no process listening on port 82. The Dockerfile used to create the nginx image exposes port 80, and in your pod spec you have exposed port 82 as well. However, nginx is configured to listen on port 80.
What this means is your pod has two ports that have been exposed: 80 and 82. The nginx application, however, is actively listening on port 80 so only requests to port 80 work.
To make your setup work using port 82, you need to change the nginx config file so that it listens on port 82 instead of 80. You can either do this by building your own Docker image with the change baked in, or use a ConfigMap to replace the default config file with the settings you want.
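A minimal sketch of the ConfigMap approach, assuming the image is a stock nginx that reads extra server config from /etc/nginx/conf.d (the names here are illustrative, not taken from the question):

```yaml
# Sketch only: override the default nginx server config so it listens on 82.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 82;
      location / {
        root /usr/share/nginx/html;
      }
    }
---
# Mount the ConfigMap over the default config directory in the pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: sa-frontend
spec:
  containers:
  - name: sa-frontend
    image: jafary/sentiment-analysis-frontend
    ports:
    - containerPort: 82
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf
```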
As @Patric W said, the connection is refused because there is no process listening on port 82; nothing inside the container is bound to that port.
Now, to get the port on which your pod is listening, you can run the following commands.
NB: be sure to replace any value in <> with real values.
First, get the name of the pods in the specified namespace kubectl get po -n <namespace>
Now check the exposed port of the pod you'd like to forward to:
kubectl get pod <pod-name> -n <namespace> --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
Now use the resulting exposed port above to run port-forward with the command
kubectl port-forward pod/<pod-name> <local-port>:<exposed-port>
where <local-port> is the port from which the container will be accessed in the browser (localhost:<local-port>), while <exposed-port> is the port on which the container listens, usually defined with the EXPOSE instruction in the Dockerfile.
Get more information here
As Patrick correctly pointed out above, I had this same issue, which plagued me for two days. The steps are:
Ensure your Dockerfile is using your preferred port (EXPOSE 5000)
In your pod.yml file ensure containerPort is 5000 (containerPort: 5000)
Apply the kubectl command to reflect the above:
kubectl port-forward pod/my-name-of-pod 8080:5000
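The first two steps map onto a pod spec like the following sketch (the pod name and image are placeholders, not from the original question):

```yaml
# Sketch only: a pod whose containerPort matches the Dockerfile's EXPOSE 5000.
apiVersion: v1
kind: Pod
metadata:
  name: my-name-of-pod
spec:
  containers:
  - name: app
    image: my-registry/my-image:latest   # built from a Dockerfile with EXPOSE 5000
    ports:
    - containerPort: 5000
```

With that applied, kubectl port-forward pod/my-name-of-pod 8080:5000 makes the app reachable at localhost:8080.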
So I've been playing around with Minikube.
I've managed to deploy a simple python flask container:
PS C:\Users\Will> kubectl run test-flask-deploy --image 192.168.1.201:5000/test_flask:1
deployment "test-flask-deploy" created
I've also then managed to expose the deployment as a service:
PS C:\Users\Will> kubectl expose deployment/test-flask-deploy --type="NodePort" --port 8080
service "test-flask-deploy" exposed
In the dashboard I can see that the service has a Cluster IP:
10.0.0.132.
I access the dashboard on a 192.168.xxx.xxx address, so I'm hoping I can expose the service on that external IP.
Any idea how I go about this?
A separate and slightly less important question: I've got minikube talking to a Docker registry on my network. If I deploy an image that has not yet been pulled locally to minikube, the deployment fails, yet when I run the docker pull command on minikube locally, the deployment then succeeds. So minikube is able to pull Docker images, but deploying an image that is accessible via the registry, yet not pulled locally, fails. Any thoughts?
EDIT: More detail in response to comment:
PS C:\Users\Will> kubectl describe pod test-flask-deploy
Name: test-flask-deploy-1049547027-rgf7d
Namespace: default
Node: minikube/192.168.99.100
Start Time: Sat, 07 Oct 2017 10:19:58 +0100
Labels: pod-template-hash=1049547027
run=test-flask-deploy
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"test-flask-deploy-1049547027","uid":"b06a14b8-ab40-11e7-9714-080...
Status: Running
IP: 172.17.0.4
Created By: ReplicaSet/test-flask-deploy-1049547027
Controlled By: ReplicaSet/test-flask-deploy-1049547027
Containers:
test-flask-deploy:
Container ID: docker://577e339ce680bc5dd9388293f1f1ea62be59a6acc25be22889310761222c760f
Image: 192.168.1.201:5000/test_flask:1
Image ID: docker-pullable://192.168.1.201:5000/test_flask#sha256:d303ed635888394f69223cc0a66c5778444fd3636dfcde42295fd512be948898
Port: <none>
State: Running
Started: Sat, 07 Oct 2017 10:19:59 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5rrpm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-5rrpm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5rrpm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
First, check the NodePort that is assigned to your service:
$ kubectl get svc test-flask-deploy
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-flask-deploy 10.0.0.76 <nodes> 8080:30341/TCP 4m
Now you should be able to access it on 192.168.xx.xx:30341, or whatever your minikube IP and NodePort are.
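The NodePort is the second number in the PORT(S) column (8080:30341/TCP). As a small illustration, it can be pulled out of that string with plain shell parameter expansion (no cluster required):

```shell
# Extract the NodePort from a PORT(S) value such as "8080:30341/TCP".
PORTS="8080:30341/TCP"
NODEPORT="${PORTS#*:}"     # strip the service port prefix -> "30341/TCP"
NODEPORT="${NODEPORT%/*}"  # strip the protocol suffix     -> "30341"
echo "$NODEPORT"
```

In practice, `kubectl get svc test-flask-deploy -o jsonpath='{.spec.ports[0].nodePort}'` yields the same number directly.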