Connection Refused on Kubernetes Pod (Plex)

Kubernetes setup on a bare-metal, three-node local cluster.
Plex deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: plex
  labels:
    app: plex
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: plex
  template:
    metadata:
      labels:
        name: plex
    spec:
      containers:
      - name: plex
        image: plexinc/pms-docker:plexpass
        imagePullPolicy: Always
        ports:
        - containerPort: 32400
          hostPort: 32400
        volumeMounts:
        - name: nfs-plex-meta
          mountPath: "/data"
        - name: nfs-plex
          mountPath: "/config"
      volumes:
      - name: nfs-plex-meta
        persistentVolumeClaim:
          claimName: nfs-plex-meta
      - name: nfs-plex
        persistentVolumeClaim:
          claimName: nfs-plex
Deployment is happy. Pod is happy.
I've tried exposing the Pod via NodePort, ClusterIP, HostPort, and LoadBalancer (MetalLB), and in every permutation I get a connection refused error in the browser or via curl.
NodePort Example:
$ kubectl expose deployment plex --type=NodePort --name=plex
service/plex exposed
$ kubectl describe svc plex
Name:                     plex
Namespace:                default
Labels:                   app=plex
Annotations:              <none>
Selector:                 name=plex
Type:                     NodePort
IP:                       10.111.13.7
Port:                     <unset>  32400/TCP
TargetPort:               32400/TCP
NodePort:                 <unset>  30275/TCP
Endpoints:                10.38.0.0:32400
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
$ curl 10.111.13.7:32400
curl: (7) Failed to connect to 10.111.13.7 port 32400: Connection refused
$ curl 10.38.0.0:32400
curl: (7) Failed to connect to 10.38.0.0 port 32400: Connection refused
$ curl 192.168.1.110:32400
curl: (7) Failed to connect to 192.168.1.110 port 32400: Connection refused
$ curl 192.168.1.110:30275
curl: (7) Failed to connect to 192.168.1.110 port 30275: Connection refused
What am I missing here?

Of those, only the last might be right. The IP in that first output is a cluster IP, which usually (though not always; it's up to your CNI plugin and config) is only accessible inside the cluster, from other pods. NodePort means that the service is also reachable on that port on any node IP. That traffic might be getting blocked by a firewall on your node, though, so check that. Also make sure that's a valid node IP.
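A quick way to check both of those things, as a sketch (30275 is the NodePort from the output above; the firewall command depends on what your nodes actually run):

  # list the nodes and their IPs; the NodePort is open on every node's INTERNAL-IP
  $ kubectl get nodes -o wide
  # then try the NodePort against one of those node IPs
  $ curl http://<node-internal-ip>:30275
  # if that is still refused, check the node's firewall, e.g. on a node running firewalld:
  $ sudo firewall-cmd --list-ports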

Related

Kubernetes cannot access service through clusterIP

I have created a deployment (I am using minikube):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 1
  selector:            # tells the controller which pods to watch / belong to
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod1
      labels:
        name: deployment
    spec:
      containers:
      - name: c00
        image: httpd
        ports:
        - containerPort: 80
and one service
kind: Service          # Defines a Service object
apiVersion: v1
metadata:
  name: demoservice
spec:
  ports:
  - port: 80           # Port exposed by the service
    targetPort: 80     # Pod's port
  selector:
    name: deployment   # Apply this service to any pods with this label; it should match the labels in the deployment's pod template
  type: ClusterIP      # Service type: ClusterIP (the default, cluster-internal), NodePort, or LoadBalancer
Here is the pod IP address and its port:
IP:             172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/mydeployments-84c5754d58
Containers:
  c00:
    Container ID:  docker://bda97868c71993b12a5087a45aed7fe2217850e6f2ad5fb2830be9e4fae8b7fb
    Image:         httpd
    Image ID:      docker-pullable://httpd@sha256:71e882df50adc606c57e46e5deb3c933288e2c7775472a639326d9e4e40a47c2
    Port:          80/TCP
When I exec into the pod and run curl, it works:
root@mydeployments-84c5754d58-l7g9q:/usr/local/apache2# curl 172.17.0.5:80
<html><body><h1>It works!</h1></body></html>
And here is the service's ClusterIP:
demoservice ClusterIP 10.99.55.212 <none> 80/TCP 20h
But when I run curl against it, it does nothing, and even pasting the address along with the port (10.99.55.212:80) into the browser does not work.
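A ClusterIP is normally reachable only from inside the cluster, so a quick sanity check is to curl the service from a throwaway pod rather than from the host; a sketch (curlimages/curl is just an example image, any image with curl will do):

  # run a one-off pod in the cluster and curl the service by name and port
  $ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
      curl -s http://demoservice.default.svc.cluster.local:80
  # if the selector matches, this prints the same "It works!" page;
  # if not, check the endpoints with: kubectl get endpoints demoservice

On minikube specifically, reaching a ClusterIP from the host generally doesn't work; minikube service or kubectl port-forward is the usual way in.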

GCP health check failing for kubernetes pod

I'm trying to launch an application on GKE and the health checks made by the Ingress always fail.
Here's my full k8s yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tripvector
  labels:
    app: tripvector
spec:
  replicas: 1
  minReadySeconds: 60
  selector:
    matchLabels:
      app: tripvector
  template:
    metadata:
      labels:
        app: tripvector
    spec:
      containers:
      - name: tripvector
        readinessProbe:
          httpGet:
            port: 3000
            path: /healthz
          initialDelaySeconds: 30
          timeoutSeconds: 10
          periodSeconds: 11
        image: us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector:healthz2
        env:
        - name: ROOT_URL
          value: https://paymahn.tripvector.io/
        - name: MAIL_URL
          valueFrom:
            secretKeyRef:
              key: MAIL_URL
              name: startup
        - name: METEOR_SETTINGS
          valueFrom:
            secretKeyRef:
              key: METEOR_SETTINGS
              name: startup
        - name: MONGO_URL
          valueFrom:
            secretKeyRef:
              key: MONGO_URL
              name: startup
        ports:
        - containerPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tripvector
spec:
  defaultBackend:
    service:
      name: tripvector-np
      port:
        number: 60000
---
apiVersion: v1
kind: Service
metadata:
  name: tripvector-np
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: tripvector
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 3000
This yaml should do the following:
make a deployment with my healthz2 image along with a readiness check at /healthz on port 3000 which is exposed by the image
launch a cluster IP service
launch an ingress
When I check the status of the backend service, I see it's unhealthy:
❯❯❯ gcloud compute backend-services get-health k8s1-07274a01-default-tripvector-np-60000-a912870e --global
---
backend: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/networkEndpointGroups/k8s1-07274a01-default-tripvector-np-60000-a912870e
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/instances/gke-tripvector2-default-pool-78cf58d9-5dgs
    ipAddress: 10.12.0.29
    port: 3000
  kind: compute#backendServiceGroupHealth
It seems that the health check is hitting the right port, but this output doesn't confirm whether it's hitting the right path. Looking up the health check object in the console (screenshot not reproduced here) confirms the GKE health check is hitting the /healthz path.
I've verified in the following ways that the health check endpoint I'm using for the readiness probe works but something still isn't working properly:
exec into the pod and run wget
port forward the pod and check /healthz in my browser
port forward the service and check /healthz in my browser
In all three instances above, I can see the /healthz endpoint working. I'll outline each one below.
Here's evidence of running wget from within the pod:
❯❯❯ k exec -it tripvector-65ff4c4dbb-vwvtr /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/tripvector # ls
bundle
/tripvector # wget localhost:3000/healthz
Connecting to localhost:3000 (127.0.0.1:3000)
saving to 'healthz'
healthz 100% |************************************************************************************************************************************************************| 25 0:00:00 ETA
'healthz' saved
/tripvector # cat healthz
[200] Healthcheck passed./tripvector #
Here's what happens when I perform a port forward from the pod to my local machine:
❯❯❯ k port-forward tripvector-65ff4c4dbb-vwvtr 8081:3000
Forwarding from 127.0.0.1:8081 -> 3000
Forwarding from [::1]:8081 -> 3000
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
And here's what happens when I port forward from the Service object:
2:53PM /Users/paymahn/code/tripvector/tripvector ✘ 1 docker ⬆ ✱
❯❯❯ k port-forward svc/tripvector-np 8082:60000
Forwarding from 127.0.0.1:8082 -> 3000
Forwarding from [::1]:8082 -> 3000
Handling connection for 8082
How can I get the healthcheck for the ingress and network endpoint group to succeed so that I can access my pod from the internet?
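One thing worth checking, since the output above doesn't show the path the load balancer actually probes, is pinning the health check explicitly with a BackendConfig. This is a sketch, not verified against this setup, and the BackendConfig name here is made up:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: tripvector-hc   # hypothetical name
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 3000          # the serving port behind the NEG

It would then be referenced from the Service with an annotation such as cloud.google.com/backend-config: '{"default": "tripvector-hc"}' alongside the existing NEG annotation.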

pointing selenium tests through nodePort in a service

I have this in a selenium-hub-service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
  sessionAffinity: None
When I run kubectl describe service in the terminal, I get the endpoint of the kubernetes service as 192.168.49.2:8443. I then take that and point the browser to 192.168.49.2:30001, but the browser is not able to reach that endpoint. I was expecting to reach the Selenium hub.
When I run minikube service selenium-srv --url, which gives me http://127.0.0.1:56498, and point the browser to it, I can reach the hub.
My question is: why am I not able to reach it through the nodePort?
I would like to do it the nodePort way because I know the port beforehand, and if the kubernetes service endpoint remains constant it is easy to point my tests at a known endpoint when I integrate them with an Azure pipeline.
EDIT: output of kubectl get service:
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          4d
selenium-srv   NodePort    10.96.34.117   <none>        4444:30001/TCP   2d2h
Posted community wiki based on this Github topic. Feel free to expand it.
The information below assumes that you are using the default driver docker.
Minikube on macOS behaves a bit differently than on Linux. On Linux, you have special interfaces used for Docker and for connecting to the minikube node port, like this one:
3: docker0:
...
inet 172.17.0.1/16
And this one:
4: br-42319e616ec5:
...
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-42319e616ec5
There is no such solution implemented on macOS. Check this:
This is a known issue, Docker Desktop networking doesn't support ports. You will have to use minikube tunnel.
Also:
there is no bridge0 on Macos, and it makes container IP unreachable from host.
That means you can't connect to your service using IP address 192.168.49.2.
Check also this article: Known limitations, use cases, and workarounds - Docker Desktop for Mac:
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Per-container IP addressing is not possible
The docker (Linux) bridge network is not reachable from the macOS host.
There are a few ways to set up minikube to use NodePort at the localhost address on Mac, like this one:
minikube start --driver=docker --extra-config=apiserver.service-node-port-range=32760-32767 --ports=127.0.0.1:32760-32767:32760-32767
You can also use the minikube service command, which will return a URL to connect to a service.
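For example, using the command and URL already shown in the question (the port is whatever minikube happens to assign):

  $ minikube service selenium-srv --url
  http://127.0.0.1:56498

That printed URL is what the tests should point at, e.g. http://127.0.0.1:56498/wd/hub for the Selenium hub endpoint.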
Is your deployment running on port 4444?
Try this:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:3.141
        ports:
        - containerPort: 4444
        resources:
          limits:
            memory: "1000Mi"
            cpu: ".5"
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: port0
  selector:
    app: selenium-hub
  type: NodePort
  sessionAffinity: None
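If you want the fixed port from the original question instead of a randomly assigned one, you can pin it by adding a nodePort to the port entry; a sketch (30001 is just the value from the question, and it has to fall inside the cluster's node-port range, 30000-32767 by default):

  ports:
  - port: 4444
    targetPort: 4444
    name: port0
    nodePort: 30001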
If you want to use Chrome:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome-debug:3.141
        ports:
        - containerPort: 5555
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        env:
        - name: HUB_HOST
          value: "selenium-hub"
        - name: HUB_PORT
          value: "4444"
Testing Python code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def check_browser(browser):
    driver = webdriver.Remote(
        command_executor='http://<IP>:<PORT>/wd/hub',
        desired_capabilities=getattr(DesiredCapabilities, browser)
    )
    driver.get("http://google.com")
    assert "google" in driver.page_source
    driver.quit()
    print("Browser %s checks out!" % browser)

check_browser("CHROME")

Kubernetes NFS server pod mount works with pod ip but not with kubernetes service

I created an NFS server in a pod to use it as a volume. When creating another pod with a volume, the volume mount does work with the IP of the NFS pod. Since this IP is not guaranteed to stay the same, I added a service for my NFS pod with a fixed ClusterIP. When starting the container with the volume mount, it always fails with the following error:
Unable to mount volumes for pod "nginx_default(35ecd8ec-a077-11e8-b7bc-0cc47a9aec96)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx". list of unmounted volumes=[nfs-demo]. list of unattached volumes=[nfs-demo nginx-test-account-token-2dpgg]
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    name: nfs-server
spec:
  containers:
  - name: nfs-server
    image: my-nfs-server:v1
    args: ["/exports"]
    securityContext:
      privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    name: nfs-server
  clusterIP: "10.96.0.3"
  ports:
  - name: nfs
    port: 2049
    protocol: UDP
  - name: mountd
    port: 20048
    protocol: UDP
  - name: rpcbind
    port: 111
    protocol: UDP
  - name: nfs-tcp
    port: 2049
    protocol: TCP
  - name: mountd-tcp
    port: 20048
    protocol: TCP
  - name: rpcbind-tcp
    port: 111
    protocol: TCP
My pod trying to mount the server:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/exports"
      name: nfs-demo
    securityContext:
      privileged: true
  securityContext:
    supplementalGroups: [100003]
  serviceAccountName: nginx-test-account
  volumes:
  - name: nfs-demo
    nfs:
      server: 10.96.0.3
      path: "/exports"
      readOnly: false
I used this as a base for my nfs server image:
https://github.com/cpuguy83/docker-nfs-server
https://medium.com/@aronasorman/creating-an-nfs-server-within-kubernetes-e6d4d542bbb9
Does anyone have an idea why the mount is working with the pod IP but not with the service IP?
I found a new way to solve this problem: you can set the nfs-server ports to be fixed, then mount the NFS server via the service. You can refer to https://wiki.debian.org/SecuringNFS
Try removing the clusterIP address (let kube assign an IP to the nfs service) and use the name 'nfs-service' in your volume mount definition. Make sure that the nginx pod and the nfs service are in the same namespace.
As mentioned by Bal Chua, you probably didn't export the NFS port in the nfs-server pod definition.
nfs-server-pod.yaml
apiVersion: v1beta1
kind: Pod
id: nfs-server
desiredState:
  manifest:
    version: v1beta1
    id: nfs-server
    containers:
    - name: nfs-server
      image: jsafrane/nfs-data
      privileged: true
      ports:
      - name: nfs
        containerPort: 2049
        protocol: tcp
labels:
  name: nfs-server
nfs-server-service.yaml
id: nfs-server
kind: Service
apiVersion: v1beta1
port: 2049
protocol: tcp
selector:
  name: nfs-server
Taken from the NFS volume example page.
I found the solution to my problem:
There were ports missing in my service, not the pod. To find the ports I needed, I opened a console to my pod (kubectl exec) and used the "rpcinfo -p" command to list the ports needed by the server.
It does fix the connection problem, but only temporarily. These ports are not static, so it is no better than using the pod IP itself.
I do think it is possible to configure static ports though.
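A rough sketch of that check, assuming the pod and service names from the question (the exact program and port list will differ per NFS server):

  # list the RPC programs and ports the NFS server registered with rpcbind
  $ kubectl exec -it nfs-server -- rpcinfo -p
  # typical entries are portmapper (111), nfs (2049), mountd, nlockmgr and status;
  # every port listed (TCP and UDP) needs a matching entry in the nfs-service Service,
  # otherwise the mount can hang or be refused.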
If anyone with a similar problem needs further reading:
http://tldp.org/HOWTO/NFS-HOWTO/security.html
https://wiki.debian.org/SecuringNFS
The second problem I encountered: the mount only worked if the nfs-server pod and the pod mounting it were on the same node. I could fix that by updating to Kubernetes version 1.11.
Since my original problem is solved, I consider my question answered though.

Connection refused to GCP LoadBalancer in Kubernetes

When I create a deployment and a service in Kubernetes Engine on GCP, I get connection refused for no apparent reason.
The service creates a Load Balancer in GCP and all corresponding firewall rules are in place (allows traffic to port 80 from 0.0.0.0/0). The underlying service is running fine, when I kubectl exec into the pod and curl localhost:8000/ I get the correct response.
This deployment setting used to work just fine for other images, but yesterday and today I keep getting
curl: (7) Failed to connect to 35.x.x.x port 80: Connection refused
What could be the issue? I tried deleting and recreating the service multiple times, with no luck.
kind: Service
apiVersion: v1
metadata:
  name: my-app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: my-app
        image: gcr.io/myproject/my-app:0.0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
This turned out to be a dumb mistake on my part. The gunicorn server was binding to 127.0.0.1 instead of 0.0.0.0, so it wasn't accessible from outside the pod, but it worked when I exec-ed into the pod.
The fix in my case was changing the entrypoint of the Dockerfile to
CMD [ "gunicorn", "server:app", "-b", "0.0.0.0:8000", "-w", "3" ]
then rebuilding the image and updating the deployment.
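A quick way to spot this class of problem from inside the pod, as a sketch (the pod name is whatever kubectl get pods shows, and the image needs ss or netstat available):

  # check which address the server process is listening on
  $ kubectl exec -it <my-app-pod> -- ss -lnt
  # a listener on 127.0.0.1:8000 is only reachable from inside the pod;
  # it needs to be 0.0.0.0:8000 (or *:8000) to accept traffic from the Service.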
Is the service binding to your pod? What does kubectl describe svc my-app say?
Make sure it forwards through to your pod on the correct port. You can also try, assuming you're using an instance on GCP, to curl the IP and port of the pod and make sure it's responding as it should.
i.e. kubectl get pods -o wide will tell you the IP of the pod;
does curl ipofpod:8000 work?