I tried to do a simple deployment of Nextcloud on a k8s cluster hosted with minikube on my local machine, for learning purposes. The deployment doesn't have any database/storage attached to it; I'm simply looking to open the Nextcloud homepage on my local machine. However, I am unable to do so. Here are my YAMLs.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-deployment
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:latest
        ports:
        - containerPort: 80
Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 80
    nodePort: 30000
I can see that it is up and running; however, when I navigate to localhost:30000, I see that the page is unavailable. How do I begin to diagnose the issue?
This was the output of kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d5h
nextcloud-service LoadBalancer 10.104.40.61 <pending> 8000:30000/TCP 23m
Run minikube service nextcloud-service and it will open it for you.
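That works because, on minikube, the LoadBalancer external IP stays <pending> (there is no cloud controller to provision one), and NodePort 30000 is opened on the minikube node's IP, not on localhost. A minimal set of checks, assuming a default minikube setup:

minikube service nextcloud-service --url    # prints the reachable URL, e.g. http://<minikube-ip>:30000
minikube ip                                 # the node IP the NodePort is bound to
kubectl get endpoints nextcloud-service     # should list the pod IP on port 80; empty means the selector matches no pods
minikube tunnel                             # optional: run in a separate terminal to give the LoadBalancer a real external IP

With minikube tunnel running, the EXTERNAL-IP column changes from <pending> and the service also becomes reachable on port 8000.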
Related
I am new to Kubernetes and I am trying to host a testing site. I have pods running as below:
NAME READY STATUS RESTARTS AGE
sasank-website-78864ff54b-656ld 1/1 Running 0 30m
sasank-website-78864ff54b-qdn65 1/1 Running 0 30m
Deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sasank-website
  labels:
    app: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: webtesting
        image: 9110727495/userdetails:latest
        ports:
        - containerPort: 80
Service file used:
apiVersion: v1
kind: Service
metadata:
  name: testingsite
  labels:
    app: website
spec:
  type: NodePort
  externalIPs:
  - 192.168.1.10
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: website
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 102m
testingsite NodePort 10.96.246.110 192.168.1.10 80:31438/TCP 5m9s
When I try to access the IP on port 31438 it refuses to connect, even though the service is exposing port 80 inside the cluster. When I try the same IP from outside the cluster, it refuses to connect even on port 80. I am not sure how to interpret this. Please help. Thank you.
I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried following the suggestions in these SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local computer (Linux). I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output where it would show the port forwards. Any help would be appreciated. Thanks.
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-svc NodePort 10.101.19.230 <none> 80:32749/TCP 168m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 3/3 3 3 168m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-54485b444f 3 3 3 168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - name: nginxport
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your Service nginx-svc, where you have used two selectors.
Remove the part below:
selector:
  app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
Then try port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The general template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then try 127.0.0.1:8080 or localhost:8080 in the browser.
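To confirm the selector fix took effect, one quick check (not from the original answer, file name assumed) is to look at the Service's endpoints; with the stray app: backend selector in place, the Service matches no pods and the endpoint list stays empty:

kubectl apply -f service.yaml
kubectl get endpoints -n nginx-ns nginx-svc   # should now list the three pod IPs on port 80

Once endpoints are populated, NODE_IP:32749 should work from outside as well, since the NodePort finally has pods behind it.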
I have created k8s deployment and service YAMLs for a static website. The external IP address is also resolved for the Kubernetes service. But when I try to access the website through curl or a browser, the connection times out.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
K8s deployment yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ohno-website
  labels:
    app: ohno-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ohno-website
  template:
    metadata:
      labels:
        app: ohno-website
    spec:
      containers:
      - name: ohno-website
        image: gkganeshr/ohno-website:v0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
k8s service yml:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  selector:
    app: ohno-website
ohno_fooserver#cloudshell:~ (fourth-webbing-279817)$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.16.0.1 <none> 443/TCP 8h
ohno-website LoadBalancer 10.16.12.162 34.70.213.174 80:31977/TCP 7h4m
The targetPort defined in the Service definition YAML is incorrect. It should match the containerPort from the pod spec in the Deployment YAML.
targetPort: 9376
should be changed to
targetPort: 80
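For reference, the full corrected Service could look like this (a sketch of the manifest above with only targetPort changed):

apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: ohno-website

After applying it, kubectl get endpoints ohno-website should show the pod IP on port 80, and the external IP should start serving the site.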
I'm doing a deployment on GKE, and when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a load-balancing service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and everything is green in GKE :)
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-8586b9b699-flhbn 1/1 Running 0 3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9 1/1 Running 0 3h23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.xx.yy.YY <none> 443/TCP 29d
service/lb-onboarding LoadBalancer XX.xx.yy.YY XX.xx.yy.YY 3000:32618/TCP 3h
Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.
I think it is about the network, because I did the following tests from my local machine:
Ping [load balancer IP] ---> Correct
Telnet [Load Balancer IP] 3000 ---> Correct
From Cloud Shell I forwarded port 3000 to 8080, and from another Cloud Shell a curl to http://localhost:8080 worked fine.
Any idea about the problem?
Thanks in advance
I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-7bdf584499-j2nv7 1/1 Running 0 6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh 1/1 Running 0 6m58s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.XXX.XXX.1 <none> 443/TCP 8m35s
service/lb-onboarding LoadBalancer 10.XXX.XXX.230 35.XXX.XXX.235 3000:31637/TCP 67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is actually listening on that port. Also ensure your firewall rules are not blocking the NodePort.
gcloud compute firewall-rules create myservice --allow tcp:3000
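A couple of quick checks, assuming the service and firewall rule names used above:

gcloud compute firewall-rules list                                        # confirm a rule allowing tcp:3000 (or the NodePort) exists
kubectl get svc lb-onboarding -o jsonpath='{.spec.ports[0].nodePort}'     # the NodePort GKE allocated for this service
kubectl get endpoints lb-onboarding                                       # should list both pod IPs on port 3000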
I am trying to learn how to use Kubernetes with Minikube and have the following deployment and service:
---
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 8080
    # Port accessible outside cluster
    nodePort: 30002
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: tutum/hello-world
        ports:
        - containerPort: 8080
I expect to be able to hit this service from my local machine at
http://192.168.64.2:30002
as reported by the command minikube service exampleservice --url, but when I try to access it from the browser I get a "site cannot be reached" error.
Some information that may help debugging:
kubectl get services --all-namespaces:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default exampleservice LoadBalancer 10.104.248.158 <pending> 8081:30002/TCP 26m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
default user-service-service LoadBalancer 10.110.181.202 <pending> 8080:30001/TCP 42m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2h
kube-system kubernetes-dashboard ClusterIP 10.110.65.24 <none> 80/TCP 2h
I am running minikube on OSX.
This is expected.
Do note that type LoadBalancer is meant for cloud providers to create an external load balancer, like an ALB/NLB in AWS and the equivalents in GCP/Azure etc.
Update the service as shown below. Here I assume 192.168.64.2 is your minikube IP; if not, replace it with the output of minikube ip to make it work.
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside cluster
    nodePort: 30002
  type: LoadBalancer
  externalIPs:
  - 192.168.64.2
Now you can access your application at http://192.168.64.2:8081/
If you need to access the application on port 30002, you can use a NodePort service like this:
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside cluster
    nodePort: 30002
  type: NodePort
Your deployment file does not look correct to me. Delete it:
kubectl delete deploy/myappdeployment
and use this to create it again:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: tutum/hello-world
        name: myapp
        ports:
        - containerPort: 80
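Applying and verifying it could look like this (file name assumed):

kubectl apply -f myappdeployment.yaml
kubectl get deploy myappdeployment     # expect 5/5 ready
kubectl get endpoints exampleservice   # should list the pod IPs once the Service selector matches them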
NOTE: Minikube supports LoadBalancer services (via minikube tunnel).
You can get the IP and port through which you
can access the service by running
minikube service exampleservice #=> to open a browser with the IP and port
OR
minikube service exampleservice --url #=> to get the IP and port in the terminal
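As a footnote on minikube tunnel (not part of the original answer): running it in a separate terminal makes the LoadBalancer leave the <pending> state, so the service can also be reached on its service port 8081:

minikube tunnel                   # keep this running in its own terminal; it may ask for sudo
kubectl get svc exampleservice    # EXTERNAL-IP should now show an address instead of <pending>
curl http://<EXTERNAL-IP>:8081/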