kubernetes connection refused during deployment - kubernetes

I'm trying to achieve a zero-downtime deployment using kubernetes, and during my tests the service doesn't load balance well.
My kubernetes manifest is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
        version: "0.2"
    spec:
      containers:
      - name: myapp-container
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
  labels:
    app: myapp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp
If I loop over the service with the external IP, let's say:
$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.35.240.1    <none>           443/TCP        1h
myapp-lb     LoadBalancer   10.35.252.91   35.205.100.174   80:30549/TCP   22m
using the bash script:
while true
do
  curl 35.205.100.174
  sleep 0.2s
done
I receive some connection refused errors during the deployment:
curl: (7) Failed to connect to 35.205.100.174 port 80: Connection refused
The application is the default hello-app provided by Google Cloud Platform, running on port 8080.
Cluster information:
Kubernetes version: 1.8.8
Google cloud platform
Machine type: g1-small

It looks like your requests are going to a pod that has not started yet. I avoided this by adding a few parameters (sketched below):
A liveness probe, to be sure the app has actually started
maxUnavailable: 1, to deploy the pods one by one
I still have some errors, but they are acceptable because they rarely happen. During a deployment an error may occur once or twice, so with increasing load the proportion of errors is negligible: roughly one or two errors per 2000 requests during the deployment.
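A minimal sketch of those two changes applied to the deployment above (the liveness probe path and timings are assumptions; tune them for the real app):
# fragment to merge into myapp-deployment above, not a complete manifest
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace pods one at a time
      maxSurge: 1
  template:
    spec:
      containers:
      - name: myapp-container
        livenessProbe:          # assumed path and timings
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5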

Related

Kubernetes service routes traffic to only one of 5 pods

I'm playing around with k8s services. I have created a simple Spring Boot app that displays its version number and pod name when curling an endpoint:
curl localhost:9000/version
1.3_car-registry-deployment-66684dd8c4-r274b
Then I dockerized it, pushed it into my local Kind cluster and deployed it with 5 replicas. Next I created a service targeting all 5 pods. Lastly, I exposed the service like so:
kubectl port-forward svc/car-registry-service 9000:9000
Now when curling my endpoint I expected to see randomly picked pod names, but instead I only get responses from a single pod. Moreover, if I kill that one pod, my service stops working, i.e. I get ERR_EMPTY_RESPONSE, even though there are 4 more pods available. What am I missing? Here are my deployment and service yamls:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: car-registry-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: car-registry
  template:
    metadata:
      name: car-registry
      labels:
        app: car-registry
    spec:
      containers:
      - name: car-registry
        image: car-registry-database:v1.3
        ports:
        - containerPort: 9000
          protocol: TCP
          name: rest
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - curl http://localhost:9000/healthz | grep "OK"
          initialDelaySeconds: 15
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: car-registry-service
spec:
  type: ClusterIP
  selector:
    app: car-registry
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
You're using TCP, so you're probably using keep-alive. Try to hit it with your browser or from a new tty.
Try:
curl -H "Connection: close" http://your-service:port/path
Otherwise, check the kube-proxy logs to see if there's any additional info. Your initial question doesn't provide much detail.
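If you go the kube-proxy route, a minimal sketch for a kubeadm/Kind-style cluster (the label selector is the conventional one for kube-proxy; verify it in your cluster):
# find the kube-proxy pods and tail their logs
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=100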

pointing selenium tests through nodePort in a service

I have this in a selenium-hub-service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
  sessionAffinity: None
When I do kubectl describe service on the terminal, I get the endpoint of the kubernetes service as 192.168.49.2:8443. I then take that and point the browser to 192.168.49.2:30001, but the browser is not able to reach that endpoint. I was expecting to reach the selenium hub.
When I do minikube service selenium-srv --url, which gives me http://127.0.0.1:56498, and point the browser to it, I can reach the hub.
My question is: why am I not able to reach it through the nodePort?
I would like to do it the nodePort way because I know the port beforehand, and if the kubernetes service endpoint remains constant it may be easy to point my tests to a known endpoint when I integrate them with an Azure pipeline.
EDIT: output of kubectl get service:
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          4d
selenium-srv   NodePort    10.96.34.117   <none>        4444:30001/TCP   2d2h
Posted community wiki based on this Github topic. Feel free to expand it.
The information below assumes that you are using the default driver docker.
Minikube on macOS behaves a bit differently than on Linux. On Linux you have special interfaces used for docker and for connecting to the minikube node port, like this one:
3: docker0:
...
inet 172.17.0.1/16
And this one:
4: br-42319e616ec5:
...
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-42319e616ec5
There is no such solution implemented on macOS. Check this:
This is a known issue, Docker Desktop networking doesn't support ports. You will have to use minikube tunnel.
Also:
there is no bridge0 on Macos, and it makes container IP unreachable from host.
That means you can't connect to your service using IP address 192.168.49.2.
Check also this article: Known limitations, use cases, and workarounds - Docker Desktop for Mac:
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Per-container IP addressing is not possible
The docker (Linux) bridge network is not reachable from the macOS host.
There are a few ways to set up minikube to use NodePort at the localhost address on Mac, like this one:
minikube start --driver=docker --extra-config=apiserver.service-node-port-range=32760-32767 --ports=127.0.0.1:32760-32767:32760-32767
You can also use the minikube service command, which will return a URL to connect to a service.
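A minimal sketch of that approach for the service in the question (assuming the tests can take the hub URL as a parameter instead of a fixed node port):
# terminal 1: keep this running; with the docker driver it prints a locally
# reachable URL and holds the tunnel open
minikube service selenium-srv --url
# terminal 2: point the tests at the printed URL, e.g.
# http://127.0.0.1:56498/wd/hub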
Is your deployment running on port 4444?
Try this:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:3.141
        ports:
        - containerPort: 4444
        resources:
          limits:
            memory: "1000Mi"
            cpu: ".5"
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: port0
  selector:
    app: selenium-hub
  type: NodePort
  sessionAffinity: None
If you want to use Chrome nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome-debug:3.141
        ports:
        - containerPort: 5555
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        env:
        - name: HUB_HOST
          value: "selenium-hub"
        - name: HUB_PORT
          value: "4444"
Testing Python code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def check_browser(browser):
    driver = webdriver.Remote(
        command_executor='http://<IP>:<PORT>/wd/hub',
        desired_capabilities=getattr(DesiredCapabilities, browser)
    )
    driver.get("http://google.com")
    assert "google" in driver.page_source
    driver.quit()
    print("Browser %s checks out!" % browser)

check_browser("CHROME")

GCP GKE load balancer connection refused

I'm doing a deployment on GKE and I find that when I try to access the page I get the message:
ERR_CONNECTION_REFUSED
I have defined a load balancing service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-8586b9b699-flhbn   1/1     Running   0          3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9   1/1     Running   0          3h23m

NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes      ClusterIP      XX.xx.yy.YY   <none>        443/TCP          29d
service/lb-onboarding   LoadBalancer   XX.xx.yy.YY   XX.xx.yy.YY   3000:32618/TCP   3h
Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.
I think it is about the network, because I did the following tests from my local machine:
Ping [load balancer IP] ---> Correct
Telnet [load balancer IP] 3000 ---> Correct
From cloud shell I forwarded port 3000 to 8080, and from another cloud shell a curl to http://localhost:8080 works fine.
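That last test was roughly the following (a sketch; the deployment name is taken from the manifests above):
# forward local port 8080 to port 3000 of one of the deployment's pods
kubectl port-forward deployment/bonsai-onboarding 8080:3000
# in a second cloud shell
curl http://localhost:8080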
Any idea about the problem?
Thanks in advance
I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-7bdf584499-j2nv7   1/1     Running   0          6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh   1/1     Running   0          6m58s

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
service/kubernetes      ClusterIP      10.XXX.XXX.1     <none>           443/TCP          8m35s
service/lb-onboarding   LoadBalancer   10.XXX.XXX.230   35.XXX.XXX.235   3000:31637/TCP   67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and the application is listening on that port. Also ensure your firewall rules are not blocking the nodeport.
gcloud compute firewall-rules create myservice --allow tcp:3000
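A minimal sketch of how to check the "listening on that port" part from inside a pod (this assumes a shell and curl are available in the image; the pod name comes from the output above):
# open a shell in one of the pods and hit the container port locally
kubectl exec -it bonsai-onboarding-8586b9b699-flhbn -- sh -c 'curl -sS http://localhost:3000 | head'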

Docker for Mac Kubernetes load balancer NoHttpResponseException

I'm trying to run a simple load balancing server connecting to a Deployment pod.
I've installed Docker for Mac edge version.
The problem is that when I try to make a GET request to the exposed load balancer url http://localhost:8081/api/v1/posts/health, the error that appears is:
org.apache.http.NoHttpResponseException: localhost:8081 failed to respond
When doing:
k get services
I get:
Clearly, the service is running, but localhost:8081 fails to respond. No idea why; I keep struggling with this.
My service resource:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api-svc
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: posts-api
    rel: beta
    env: dev
  ports:
  - protocol: TCP
    port: 8081
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-api-deployment
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts-api
      env: dev
      rel: beta
  template:
    metadata:
      labels:
        app: posts-api
        env: dev
        rel: beta
    spec:
      containers:
      - name: posts-api
        image: kimgysen/posts-api:latest
        ports:
        - containerPort: 8083
        livenessProbe:
          httpGet:
            path: /api/v1/posts/health
            port: 8083
          initialDelaySeconds: 120
          timeoutSeconds: 1
Should be a basic setup!
My deployment pod does not show any restarts, everything looks good:
Any advice welcome!
Note:
Edit
When using port 31082, I get the error:
org.apache.http.conn.HttpHostConnectException: Connect to
localhost:31082 [localhost/127.0.0.1] failed: Connection refused
(Connection refused)
There is no specific reason why I used port 8083. It's because I tried NodePort first (with multiple services), and now LoadBalancer. The next step will be ingress, but it didn't really work out for me the first time, so I'm trying to go step by step.
I used port 8081 instead of port 80 because I read somewhere that on Mac OSX port 80 is only to be used by the root user.
The Service port had to correspond to the Deployment containerPort.
I can now access the api on localhost:8083.
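A minimal sketch of what that fix looks like in the service above (assuming the whole chain is aligned on 8083, which matches accessing the API on localhost:8083):
spec:
  type: LoadBalancer
  selector:
    app: posts-api
    rel: beta
    env: dev
  ports:
  - protocol: TCP
    port: 8083        # Docker for Mac exposes this on localhost:8083
    targetPort: 8083  # matches the deployment's containerPort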

Kubernetes endpoints empty, can I restart the pods?

I have a situation where I have zero endpoints available for one service. To test this, I specially crafted a yaml descriptor that uses a simple node server to set and retrieve the ready/live status for a pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-deployment
  labels:
    app: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodejs
        image: nodejs_server
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /is_alive
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /is_ready
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  labels:
    app: nodejs
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nodejs
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  backend:
    serviceName: nodejs-service
    servicePort: 80
The node server has methods to set and retrieve the liveness and readiness status.
When the app starts I can see that 3 replicas are created and their status is ready. OK, now I manually set the readiness status of one pod to false [from outside the ingress]. That pod is correctly removed from the endpoints, so no traffic is routed to it [that's OK, this is the expected behavior]. When I set the ready status to false for all pods, the endpoints list is empty [still the expected behavior].
At that point I cannot set ready=true from outside the ingress, as traffic is not routed to any pod. Is there a way here, for example, of triggering a restart of the pod when readiness is not achieved after n tries or n seconds? Or when the endpoints list is empty?
Well, that is perfectly normal and expected behaviour. What you can do, on the side, is to forward traffic from localhost to a particular pod with kubectl port-forward. That way you can access the pod directly, without ingresses etc., and set its readiness back to OK. If you want a restart when the pod is not ready for too long, just use the same endpoint for the liveness probe, but trigger it after more tries.
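A minimal sketch of both suggestions (the pod name is a placeholder; the failureThreshold value is an assumption to tune). First the direct access to one pod:
kubectl port-forward pod/nodejs-deployment-<pod-id> 8080:8080
# the pod is now reachable on localhost:8080, bypassing the service and ingress,
# so its readiness can be flipped back to true via the server's own endpoint
And the restart-on-prolonged-unreadiness idea, as a liveness probe pointed at the readiness endpoint:
livenessProbe:
  httpGet:
    path: /is_ready        # same endpoint as the readiness probe
    port: 8080
  periodSeconds: 10
  failureThreshold: 6      # assumed: ~60s of failures before kubelet restarts the container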