Connection refused when connecting to svc from another pod - kubernetes

I have a simple Python application:
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn", "main:api", "-c", "/app/gunicorn_conf.py" ,"--reload"]
where my gunicorn conf file is:
import multiprocessing
bind = '0.0.0.0:8000'
workers = multiprocessing.cpu_count() * 2 + 1
timeout = 30
worker_connections = 1000
and my deployment YAML file:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-container
        image: localhost:5000/test123:v4
        ports:
        - name: http
          containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: http
  selector:
    app: test-pod
When I run a curl command from inside test-pod, it works fine:
curl -X POST localhost:8000/test
but when I curl from another pod (a busybox pod), I get a connection refused error:
curl -X POST test-svc:5000/test
Output of kubectl describe svc test-svc:
Name: test-svc
Namespace: default
Labels: <none>
Annotations:
Selector: app=test-pod
Type: ClusterIP
IP: 10.99.11.154
Port: <unset> 5000/TCP
TargetPort: http/TCP
Endpoints: 10.1.0.11:8000
Session Affinity: None
Events: <none>
nslookup works fine
Name: test-svc.default.svc.cluster.local
Address: 10.99.11.154
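One way to narrow this down (my own diagnostic suggestion, not part of the original post) is to bypass DNS and the Service one hop at a time from the busybox pod, using the addresses shown in the describe output above:
# hit the backing pod directly via the endpoint address
curl -X POST 10.1.0.11:8000/test
# hit the Service via its ClusterIP and service port
curl -X POST 10.99.11.154:5000/test
If the pod IP responds but the ClusterIP is refused, the problem sits in the Service/kube-proxy layer rather than in gunicorn; if even the pod IP is refused from another pod, the container is only reachable from its own network namespace.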

Related

Kubernetes ingress nginx "not found" (les jackson tutorial)

I'm following the tutorial from Les Jackson about Kubernetes but I'm stuck around 04:40:00. I always get a 404 returned from my Ingress Nginx Controller. I followed everything he does, but I can't get it to work.
I also read that this could have something to do with IIS, so I stopped the default website which also runs on port 80.
The apps running in the containers are .NET Core.
Commands-depl & cluster ip
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Platforms-depl & cluster ip
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
Ingress-srv
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-srv
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: commands-clusterip-srv
            port:
              number: 80
I also added this to my hosts file:
127.0.0.1 acme.com
And I applied this from the nginx documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
kubectl get ingress
kubectl describe ing ingress-srv
Dockerfile CommandService
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "PlatformService.dll" ]
kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx
Am I missing something?
I found my solution. There was a process (PID 4) listening on 0.0.0.0:80. I could stop it by running NET stop HTTP in an admin cmd.
I noticed that running kubectl get services -n ingress-nginx showed the ingress-nginx-controller, which is fine, but with an EXTERNAL-IP of <pending>. Running kubectl get ingress also didn't show an ADDRESS. Now they both show "localhost" as the value for EXTERNAL-IP and ADDRESS.
Reference: Port 80 is being used by SYSTEM (PID 4), what is that?
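For anyone who wants to check what is occupying the port before stopping it (my own addition, not from the answer above), roughly this works in an elevated cmd:
:: show the listener on port 80 and the PID that owns it
netstat -ano | findstr :80
:: map that PID (4 = SYSTEM here) to the services it hosts
tasklist /svc /fi "PID eq 4"
In this case PID 4 is the SYSTEM process hosting the built-in HTTP service, which is why NET stop HTTP freed the port.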
This can occur for several reasons:
Pods or containers are not working - try kubectl get pods -n <your namespace> to see if any are not in Running status.
Assuming they are running, try kubectl describe pod <pod name> -n <your namespace> to see the events on your pod, just to make sure it's running properly.
I have noticed you are not exposing ports in your deployments. Please update your deployments like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Hope this helps!
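As a quick follow-up check (my suggestion, not part of the answer above), you can confirm the Services actually resolve to pod endpoints after applying the updated manifests:
kubectl get endpoints platforms-clusterip-srv commands-clusterip-srv
If ENDPOINTS is empty, the Service selector still doesn't match the pod labels.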

Unable to resolve Kubernetes service names inside containers with the hostNetwork

Unable to use Kubernetes internal DNS when hostNetwork is used:
/ test# nslookup echo
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'echo'
Without hostNetwork:
/ test# nslookup echo
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: echo
Address 1: 10.98.232.198 echo.default.svc.cluster.local
Kubernetes 1.18.5 on bare metal, not upgraded (a fresh install).
Full config:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - image: busybox:1.28
        command:
        - sleep
        - "3600"
        imagePullPolicy: IfNotPresent
        name: busybox
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: jmalloc/echo-server
        ports:
        - name: http-port
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  ports:
  - name: http-port
    port: 80
    targetPort: http-port
    protocol: TCP
  selector:
    app: echo
A fresh install of Kubernetes 1.19.0 solved this problem.
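If anyone hits the same symptom, a check worth doing first (my own suggestion, not part of the original answer) is to see which resolver the hostNetwork pod actually picked up, since dnsPolicy: ClusterFirstWithHostNet should point it at the cluster DNS (10.96.0.10 here) instead of the node's own resolv.conf:
# <test-pod> is a placeholder for one of the pods created by the DaemonSet above
kubectl exec -it <test-pod> -- cat /etc/resolv.conf
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide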

Cannot access Kubernetes Service - Connection Refused

I'm trying to get a very basic app running on my Kubernetes cluster.
It consists of an ASP.NET Core WebAPI and a .NET Core console application that is accessing the WebAPI.
In the console application, I receive the error: "Connection Refused":
[rro-a@dh-lnx-01 ~]$ sudo kubectl exec -it synchronizer-54b47f496b-lkb67 -n pv2-test -- /bin/bash
root@synchronizer-54b47f496b-lkb67:/app# curl http://svc-coreapi/api/Synchronizations
curl: (7) Failed to connect to svc-coreapi port 80: Connection refused
root@synchronizer-54b47f496b-lkb67:/app#
Below is my YAML for the service:
apiVersion: v1
kind: Service
metadata:
  name: svc-coreapi
  namespace: pv2-test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
---
The YAML for the WebAPI:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  creationTimestamp: null
  labels:
    app: pv2
  name: coreapi
  namespace: pv2-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pv2
  strategy: {}
  template:
    metadata:
      annotations:
      creationTimestamp: null
      labels:
        app: pv2
    spec:
      containers:
      - env:
        - name: DBNAME
          value: <DBNAME>
        - name: DBPASS
          value: <PASSWORD>
        - name: DBSERVER
          value: <SQLSERVER>
        - name: DBUSER
          value: <DBUSER>
        image: myrepoapi:latest
        name: coreapi
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      imagePullSecrets:
      - name: pv2-repo-cred
status: {}
The funniest thing is: when I execute kubectl expose deployment coreapi --type=NodePort --name=svc-coreapi it works, but I do not want the WebAPI exposed to the outside.
Omitting --type=NodePort reverts the type back to ClusterIP, and I get the Connection Refused error again.
Can anyone tell me what I can do to resolve this issue?
As @David Maze suggested, your ClusterIP service definition lacks the selector field, which is responsible for selecting the set of Pods labelled with key app and value pv2, as in your example:
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pv2
  strategy: {}
  template:
    metadata:
      annotations:
      creationTimestamp: null
      labels:
        app: pv2
...
Your service definition may look like this and it should work just fine:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: pv2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
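A quick way to verify the fix (my own addition, not from the answer above): once the selector is in place, the Service should list the pod's IP under Endpoints, and an empty list means the selector still doesn't match the pod labels.
kubectl describe svc svc-coreapi -n pv2-test | grep -i endpoints
kubectl get pods -n pv2-test -l app=pv2 -o wide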

Kubernetes's LoadBalancer yaml not working even though CLI `expose` function works

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
      - name: node-hello-world
        image: node-hello-world:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?
You have configured the service selector incorrectly:
selector:
  name: node-hello-world
It should be:
selector:
  app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the service and seeing that the endpoints list is empty, which means no pods are mapped to your service:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
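Putting it together, the corrected Service (the question's YAML with only the selector fix from this answer applied) would look like:
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    app: node-hello-world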

unable to access the application deployed on kubernetes cluster using kubernetes playground

I have a 3-node cluster created on the Kubernetes playground.
The 3 nodes as seen on the UI are:
192.168.0.13 : Master
192.168.0.12 : worker
192.168.0.11 : worker
I have a front-end app connected to a backend MySQL database.
The deployment and service definitions for the front end are as below.
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort
  ports:
  - port: 8080
  selector:
    app: springboot-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
      - image: chinmayeepdas/springbootapp:1.0
        name: springboot-app
        env:
        - name: DATABASE_HOST
          value: demo-mysql
        - name: DATABASE_NAME
          value: chinmayee
        - name: DATABASE_USER
          value: root
        - name: DATABASE_PASSWORD
          value: root
        - name: DATABASE_PORT
          value: "3306"
        ports:
        - containerPort: 8080
          name: app-port
My pods for the UI and backend are up and running.
[node1 ~]$ kubectl describe service springboot-app
Name: springboot-app
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=springboot-app
Type: NodePort
IP: 10.96.187.226
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30373/TCP
Endpoints: 10.32.0.2:8080,10.32.0.3:8080,10.40.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I do
http://192.168.0.12:30373/employee/getAll
I don't see any result. I get "This site can't be reached".
What IP address do I have to give in the URL?
Try this solution:
kubectl proxy --address 0.0.0.0
Then access it as http://localhost:30373/employee/getAll
or maybe:
http://localhost:8080/employee/getAll
Let me know if this fixes the access issue and which one works.
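Another option worth trying (my own suggestion, not part of the answer above) is to port-forward the Service on the node you are logged into and hit it over localhost, which avoids guessing node IPs entirely:
kubectl port-forward svc/springboot-app 8080:8080 --address 0.0.0.0
# then, from that node:
curl http://localhost:8080/employee/getAll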