I'm doing a deployment on GKE, and when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a LoadBalancer service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-8586b9b699-flhbn 1/1 Running 0 3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9 1/1 Running 0 3h23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.xx.yy.YY <none> 443/TCP 29d
service/lb-onboarding LoadBalancer XX.xx.yy.YY XX.xx.yy.YY 3000:32618/TCP 3h
Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.
I think it is network-related, because I ran the following tests from my local machine:
ping [load balancer IP] ---> correct
telnet [load balancer IP] 3000 ---> correct
From Cloud Shell I forwarded port 3000 to 8080, and in another Cloud Shell a curl http://localhost:8080 worked fine.
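For reference, a sketch of that port-forward test (the service name is taken from the YAML above; the local port 8080 is arbitrary):
# first Cloud Shell session: forward local port 8080 to the service's port 3000
kubectl port-forward svc/lb-onboarding 8080:3000
# second Cloud Shell session:
curl http://localhost:8080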
Any idea about the problem?
Thanks in advance
I've changed your deployment a little to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-7bdf584499-j2nv7 1/1 Running 0 6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh 1/1 Running 0 6m58s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.XXX.XXX.1 <none> 443/TCP 8m35s
service/lb-onboarding LoadBalancer 10.XXX.XXX.230 35.XXX.XXX.235 3000:31637/TCP 67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
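You can also confirm that the Service has pod endpoints behind it; an empty list here would point to a selector/label mismatch rather than a network problem (the service name is taken from above):
kubectl get endpoints lb-onboarding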
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? For example, if the application binds only to 127.0.0.1 inside the container, kubectl port-forward can still reach it while connections arriving through the load balancer are refused. I found no problem with your deployment and load balancer configuration.
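One way to check is to look at the application's startup logs for the address it binds to (a sketch; the deployment name is taken from your YAML, and the exact log line depends on your app):
kubectl logs deploy/bonsai-onboarding
# look for something like "Listening on 127.0.0.1:3000" vs "Listening on 0.0.0.0:3000"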
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is listening on that port. Also ensure your firewall rules are not blocking the node port.
gcloud compute firewall-rules create myservice --allow tcp:3000
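To verify the application is really listening inside the container, you can exec into it (a sketch, assuming the image ships ss or curl, which may not hold for minimal images):
kubectl exec -it deploy/bonsai-onboarding -- ss -tlnp
# or hit the port from inside the pod:
kubectl exec -it deploy/bonsai-onboarding -- curl -s http://localhost:3000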
Related
I am new to Kubernetes and I am trying to host a testing site. I have pods running as below:
NAME READY STATUS RESTARTS AGE
sasank-website-78864ff54b-656ld 1/1 Running 0 30m
sasank-website-78864ff54b-qdn65 1/1 Running 0 30m
Deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sasank-website
  labels:
    app: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: webtesting
        image: 9110727495/userdetails:latest
        ports:
        - containerPort: 80
Service file used:
apiVersion: v1
kind: Service
metadata:
  name: testingsite
  labels:
    app: website
spec:
  type: NodePort
  externalIPs:
  - 192.168.1.10
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: website
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 102m
testingsite NodePort 10.96.246.110 192.168.1.10 80:31438/TCP 5m9s
When I try to access the IP with port 31438, it refuses to connect, even though the service maps it to port 80 in the cluster. When I try the same IP from outside the cluster, it refuses to connect even on port 80. I am not sure how to understand this. Please help. Thank you.
I tried to do a simple deployment of nextcloud on a k8s cluster hosted using minikube on my local machine, for learning purposes. This deployment doesn't have any database/storage attached to it. I'm simply looking to open the nextcloud homepage on my local machine. However, I am unable to do so. Here are my YAMLs.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-deployment
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:latest
        ports:
        - containerPort: 80
Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 80
    nodePort: 30000
I can see that it is up and running, however when I navigate to localhost:30000, I see that the page is unavailable. How do I begin to diagnose the issue?
This was the output of kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d5h
nextcloud-service LoadBalancer 10.104.40.61 <pending> 8000:30000/TCP 23m
On minikube the EXTERNAL-IP of a LoadBalancer service stays <pending>, because there is no cloud provider to provision one. Run minikube service nextcloud-service and it will open the service for you.
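A sketch of the non-interactive variant, which just prints the URL instead of opening a browser (the exact IP depends on your minikube VM):
minikube service nextcloud-service --url
# prints something like http://192.168.49.2:30000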
I applied a deployment of 2 HTTP pods, and a service for it, and it worked fine. I could curl the service IP or service name, and the service did round-robin well.
But after I deleted one pod, k8s created a new one to replace it. When I curl the service, the new pod doesn't respond; only the other, older one is OK.
The question is: why does k8s not add the new pod to the service, so that I can curl the service IP or service name as before?
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
spec:
  selector:
    matchLabels:
      run: mvn-demo
  replicas: 2
  template:
    metadata:
      labels:
        run: mvn-demo
    spec:
      containers:
      - name: mvndemo
        image: 192.168.0.193:59999/mvndemo
        ports:
        - containerPort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: mvn-svc
  labels:
    run: mvn-demo
spec:
  ports:
  - port: 8080
    protocol: TCP
  #type: NodePort
  selector:
    run: mvn-demo
kubectl describe svc mvn-svc
Name: mvn-svc
Namespace: default
Labels: run=mvn-demo
Annotations: <none>
Selector: run=mvn-demo
Type: ClusterIP
IP: 10.97.21.218
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 100.101.153.220:8080,100.79.233.220:8080
Session Affinity: None
Events: <none>
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mvn-dp-8f59c694f-2mwq8 1/1 Running 0 81m 100.79.233.220 worker2 <none> <none>
mvn-dp-8f59c694f-xmt6m 1/1 Running 0 87m 100.101.153.220 worker3 <none> <none>
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
Hello Docker World, from: mvn-dp-8f59c694f-xmt6m
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
curl: (7) Failed connect to 10.97.21.218:8080; Connection timed out
As you can see, the age of mvn-dp-8f59c694f-2mwq8 is newer than the other one, because I deleted one pod and k8s replaced it with this new one.
Set a label on your deployment metadata:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
  labels:
    run: mvn-demo
and it will work.
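To confirm the Service is tracking the replacement pod, you can check its endpoints; both pod IPs should be listed once the new pod is Ready (the service name and IPs are from the describe output above):
kubectl get endpoints mvn-svc
# NAME      ENDPOINTS                                   AGE
# mvn-svc   100.101.153.220:8080,100.79.233.220:8080   ...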
I have created a k8s deployment and service YAML for a static website. The external IP address is also resolved by the Kubernetes service. But when I try to access the website through curl or the browser, it returns connection timed out.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
K8s deployment yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ohno-website
  labels:
    app: ohno-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ohno-website
  template:
    metadata:
      labels:
        app: ohno-website
    spec:
      containers:
      - name: ohno-website
        image: gkganeshr/ohno-website:v0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
k8s service yml:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  selector:
    app: ohno-website
ohno_fooserver#cloudshell:~ (fourth-webbing-279817)$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.16.0.1 <none> 443/TCP 8h
ohno-website LoadBalancer 10.16.12.162 34.70.213.174 80:31977/TCP 7h4m
The target port defined in the service definition YAML is incorrect. It should match the container port from the pod definition in the deployment YAML:
targetPort: 9376
should be changed to
targetPort: 80
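For completeness, a sketch of the corrected service (only targetPort changes; the nginx container from the deployment listens on 80, matching containerPort):
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: ohno-website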
I am trying to learn how to use Kubernetes with Minikube and have the following deployment and service:
---
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 8080
    # Port accessible outside cluster
    nodePort: 30002
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: tutum/hello-world
        ports:
        - containerPort: 8080
I expect to be able to hit this service from my local machine at
http://192.168.64.2:30002
as per the command minikube service exampleservice --url, but when I try to access this from the browser I get a "site cannot be reached" error.
Some information that may help debugging:
kubectl get services --all-namespaces:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default exampleservice LoadBalancer 10.104.248.158 <pending> 8081:30002/TCP 26m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
default user-service-service LoadBalancer 10.110.181.202 <pending> 8080:30001/TCP 42m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2h
kube-system kubernetes-dashboard ClusterIP 10.110.65.24 <none> 80/TCP 2h
I am running minikube on OSX.
This is expected.
Do note that LoadBalancer is meant for clouds, where it creates an external load balancer such as an ALB/NLB in AWS, and something similar in GCP/Azure etc.
Update the service as shown below. Here I assume 192.168.64.2 is your minikube IP; if not, replace it with the output of minikube ip to make it work.
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside cluster
    nodePort: 30002
  type: LoadBalancer
  externalIPs:
  - 192.168.64.2
Now you can access your application at http://192.168.64.2:8081/.
If you need to access the application at 30002, you can use it like this:
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside cluster
    nodePort: 30002
  type: NodePort
Your deployment file does not look correct to me. Delete it:
kubectl delete deploy/myappdeployment
and use this to create it again:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: tutum/hello-world
        name: myapp
        ports:
        - containerPort: 80
NOTE: Minikube supports LoadBalancer services (via minikube tunnel). You can get the IP and port through which you can access the service by running:
minikube service exampleservice #=> To open a browser with the IP and port
OR
minikube service exampleservice --url #=> To get the IP and port in the terminal
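If you prefer the LoadBalancer route, a sketch of the tunnel approach (minikube tunnel keeps running until interrupted and may prompt for sudo):
minikube tunnel
# in another terminal, EXTERNAL-IP should now be populated:
kubectl get svc exampleservice
curl http://<EXTERNAL-IP>:8081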