Can't access my Pod locally using Minikube - kubernetes

Sorry for the noobish question, but I'm having an issue reaching my pod and I have no idea why (I'm using Minikube locally).
So I've created this basic pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
And this basic service:
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: myapp
    type: front-end
However, when I try reaching nginx through the browser, I fail to do so.
I enter http://NodeIP:30008.
However, when I type minikube service service --url, I am able to access it.
So basically I have 2 questions:
1. Why does my attempt at entering the node IP and port fail? I've seen that when I enter minikube ssh and curl http://NodeIP:30008 from there, it works. So while using Minikube, won't I be able to browse to my apps at all, only curl through minikube ssh or the command below?
2. Why does the minikube service --url command work? What's the difference?
Thanks a lot!

Use the external IP address (LoadBalancer Ingress) to access your application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address (LoadBalancer Ingress) of your Service, and <port> is the value of Port in your Service description. If you are using minikube, typing minikube service my-service will automatically open your application in a browser.
You can find more details here
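For this question's manifests, a minimal sketch of checking both paths (the service is literally named service in the YAML above). With the Docker driver the minikube node IP is often not routable from the host, which is why the tunnel opened by minikube service --url works while browsing to http://NodeIP:30008 may not:

# The node IP and the service's node port
minikube ip
kubectl get service service

# A URL that is reachable from the host, via minikube's tunnel
minikube service service --url

# From inside the node the node port answers directly
minikube ssh
curl http://<NodeIP>:30008    # works from inside the node, as observed in the question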

Related

ERR_CONNECTION_TIMED_OUT kubernetes minikube service

I am getting ERR_CONNECTION_TIMED_OUT when trying to access a minikube service on localhost.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityserver
spec:
  selector:
    matchLabels:
      app: identityserver
  template:
    metadata:
      labels:
        app: identityserver
    spec:
      containers:
      - name: identityserver
        image: identityserver:0
        ports:
        - containerPort: 5001
        imagePullPolicy: "Never"
I have created the service as follows.
apiVersion: v1
kind: Service
metadata:
  name: identityserver
spec:
  type: NodePort
  selector:
    app: identityserver
  ports:
  - port: 5001
    nodePort: 30002
I am trying to load it in my local browser using the following command, but it is not accessible on localhost. Internal Kubernetes apps can communicate with the service, but external access fails.
minikube service identityserver
I tried changing the type to ClusterIP, and then it worked with port forwarding; only NodePort has the access issue.
kubectl port-forward service/identityserver 18080:5001 --address 0.0.0.0
This seems to be an issue with the Docker driver. I was able to run this with the VirtualBox driver.
So I just had to start using the VirtualBox driver. (Even though virtualization was enabled on my machine, it was giving an error, so I had to append the --no-vtx-check flag; you can skip that if you don't get an error without it.)
minikube start --driver=virtualbox --no-vtx-check
There are several ways of trying minikube on Windows + docker:
Docker Desktop app (with Enable Kubernetes option)
Docker Desktop app (without enabling Kubernetes option) and installing minikube to wsl2
No Docker Desktop at all, installing docker and minikube in wsl2
Let's test it with the link you gave in comments - Set up Ingress on Minikube with the NGINX Ingress Controller.
Docker Desktop v.20.10.12 (with Enable Kubernetes option v.1.22.5), Win10, wsl2 backend.
Enable Kubernetes in Docker Desktop.
Check if the ingress controller is installed:
$ kubectl get pods -n ingress-nginx
The output should be similar to:
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-g9g49        0/1     Completed   0          11m
ingress-nginx-admission-patch-rqp78         0/1     Completed   1          11m
ingress-nginx-controller-59b45fb494-26npt   1/1     Running     0          11m
Create a Deployment using the following command:
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
Expose the Deployment:
kubectl expose deployment web --type=NodePort --port=8080
Create example-ingress.yaml from the following file:
$ kubectl apply -f example-ingress.yaml
$ cat example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx # this line is essential!
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Verify the IP address is set:
$ kubectl get ingress
NAME              CLASS    HOSTS              ADDRESS     PORTS   AGE
example-ingress   <none>   hello-world.info   localhost   80      38s
Add the following line to the bottom of the C:\Windows\System32\drivers\etc\hosts file on your computer (you will need administrator access):
127.0.0.1 hello-world.info
DONE. Open hello-world.info in a browser.
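You can also verify from the command line; the --resolve variant is a standard curl flag that avoids touching the hosts file:

# With the hosts entry in place
curl http://hello-world.info

# Without editing the hosts file, map the name for this one request
curl --resolve hello-world.info:80:127.0.0.1 http://hello-world.info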
How to access the NodePort service? In C:\Windows\System32\drivers\etc\hosts find these lines:
# Added by Docker Desktop
192.168.1.179 host.docker.internal
192.168.1.179 gateway.docker.internal
Use this IP and node port: curl 192.168.1.179:portNumber

NodePort type service not accessible outside cluster

I am trying to set up a local cluster using minikube on a Windows machine. Following some tutorials on kubernetes.io, I came up with the following manifest for the cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-nginx-deployment
  labels:
    app: external-nginx
spec:
  selector:
    matchLabels:
      app: external-nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: external-nginx
    spec:
      containers:
      - name: external-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: expose-nginx
  labels:
    service: expose-nginx
spec:
  type: NodePort
  selector:
    app: external-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32000
If I got things right, this should create a pod with an nginx instance and expose it to the host machine at port 32000.
However, when I run curl http://$(minikube ip):32000, I get a connection refused error.
I ran bash inside the service expose-nginx via kubectl exec svc/expose-nginx -it bash, and from there I was able to access the external-nginx pods normally, which led me to believe it is not a problem within the cluster.
I also tried to change the type of the service to LoadBalancer and enable the minikube tunnel, but got the same result.
Is there something I am missing?
By default, minikube almost always uses the Docker driver for creating the minikube VM. On the host system it looks like one big Docker container, inside which the other Kubernetes components run as containers as well. Based on tests, NodePort services often don't work as they are supposed to: accessing a service exposed via NodePort should work at the minikube_IP:NodePort address, but with the Docker driver it frequently does not.
Solutions are:
for local testing, use kubectl port-forward to expose the service to the local machine (which the OP did)
use the minikube service command, which exposes the service to the host machine; it works in a very similar way to kubectl port-forward
instead of the docker driver, use a proper virtual machine, which will get its own IP address (VirtualBox or Hyper-V drivers, depending on the system). Reference.
(Not related to minikube) Use the built-in Kubernetes feature in Docker Desktop for Windows. I've already tested it; the service type should be LoadBalancer, and it will be exposed to the host machine on localhost.
A command sketch for the first three options is shown below.
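A minimal sketch of those options, using this question's expose-nginx service (the local port 8080 is arbitrary):

# Option 1: port-forward the service to the local machine
kubectl port-forward service/expose-nginx 8080:80

# Option 2: let minikube open a tunnel and print a reachable URL
minikube service expose-nginx --url

# Option 3: recreate the cluster on a VM driver that gets its own routable IP
minikube delete
minikube start --driver=virtualbox    # or --driver=hyperv
curl http://$(minikube ip):32000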

How to Make Google Kubernetes Engine LoadBalancer Service Receive External Traffic When Using Google Cloud Code IntelliJ?

I have a working GKE cluster serving content at port 80. How do I get the load balancer service to deliver the content on the external (regional reserved) static IP 111.222.333.123?
I see that kubectl get service shows that the external static IP is successfully registered. The external IP does respond to ping requests.
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE
kubernetes      ClusterIP      10.16.0.1     <none>            443/TCP        17h
myapp-cluster   NodePort       10.16.5.168   <none>            80:30849/TCP   84m
myapp-service   LoadBalancer   10.16.9.255   111.222.333.123   80:30879/TCP   6m20s
Additionally, the Google Cloud Platform console shows that the forwarding rule is established and correctly referencing the GKE target pool.
The deployment and service manifest I am using is shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      environment: sandbox
  template:
    metadata:
      labels:
        app: myapp
        environment: sandbox
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    environment: sandbox
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "111.222.333.123"
The associated skaffold configuration file for reference:
apiVersion: skaffold/v2beta18
kind: Config
metadata:
  name: myapp
build:
  artifacts:
  - image: myapp
    context: .
    docker: {}
deploy:
  kubectl:
    manifests:
    - gcloud_k8_staticip_deployment.yaml
What am I missing to allow traffic to reach the GKE cluster when running this configuration using Google Cloud Code?
Apologies if this has been asked before. Happy to take a pointer if I missed the right solution reviewing questions.
I replicated your setup and faced the same issue (I was able to ping the service IP but couldn't connect to it from the browser).
Then I changed the Deployment container port to 80, the service target port to 80, and the service port to 8080, and it worked: I was then able to connect to the deployment from the browser using the service IP.
The modified Deployment and Service manifest files are sketched below.
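A sketch of those manifests based on that description; everything except the swapped ports is copied unchanged from the question's YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      environment: sandbox
  template:
    metadata:
      labels:
        app: myapp
        environment: sandbox
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: Always
        ports:
        - containerPort: 80   # was 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    environment: sandbox
  ports:
  - port: 8080       # was 80
    targetPort: 80   # was 8080
  type: LoadBalancer
  loadBalancerIP: "111.222.333.123"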
For all I know, the quoted configuration in this question should actually work, as long as the image points to an accessible location. I have confirmed this configuration to be working using a toy setup entirely without the IDE, just using gcloud shell, and everything worked well.
The problem originates from Google Cloud Code changing the kubectl context without any additional warning when a context switch is configured in the run configuration.
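A quick way to check whether Cloud Code silently switched the context (standard kubectl commands; the context name is a placeholder):

# Which cluster is kubectl currently pointed at?
kubectl config current-context

# List all contexts and switch back to the intended one
kubectl config get-contexts
kubectl config use-context <your-gke-context>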

Kubernetes - Ingress with Minikube

I am learning kubernetes by playing with minikube.
This is my Deployment file, which is fine.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: myapp
        image: myid/myimage
I am exposing the above pods using a NodePort service. I am able to access them using the minikube IP at port 30002.
apiVersion: v1
kind: Service
metadata:
  name: my-ip-service
spec:
  type: NodePort
  externalIPs:
  - 192.168.99.100
  selector:
    component: web
  ports:
  - port: 3000
    nodePort: 30002
    targetPort: 8080
Now I would like to use an Ingress to access the application at port 80, which will forward the request to my-ip-service at port 3000. It does NOT work:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: my-ip-service
    servicePort: 3000
If I try to access the ingress, the address is blank:
NAME           HOSTS   ADDRESS   PORTS   AGE
test-ingress   *                 80      41m
How do I use ingress with minikube? Or how do I bind the minikube IP to the ingress service, so that the app can be exposed externally without using NodePort?
You can get your minikube node's IP address with:
minikube ip
The ingress' IP address will not populate in minikube because minikube lacks a load balancer. If you'd like something that behaves like a load balancer for your minikube cluster, https://github.com/knative/serving/blob/master/docs/creating-a-kubernetes-cluster.md#loadbalancer-support-in-minikube suggests running the following commands to patch your cluster:
sudo ip route add $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") via $(minikube ip)
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
I think you are missing the ingress controller resource on minikube itself. There are many possible ways to create an ingress-controller resource on K8s, but I think for you the best way to start on minikube is to follow this documentation.
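On minikube, the simplest route that documentation describes is the bundled addon; a minimal sketch (on older minikube versions the controller pods may land in kube-system rather than ingress-nginx):

# Enable the NGINX ingress controller shipped with minikube
minikube addons enable ingress

# Confirm the controller is running
kubectl get pods -n ingress-nginx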
Don't forget to read about Ingress in general once you get this working.

Connection refused to GCP LoadBalancer in Kubernetes

When I create a deployment and a service in Kubernetes Engine in GCP, I get connection refused for no apparent reason.
The service creates a Load Balancer in GCP, and all corresponding firewall rules are in place (traffic to port 80 is allowed from 0.0.0.0/0). The underlying service is running fine: when I kubectl exec into the pod and curl localhost:8000/, I get the correct response.
This deployment setting used to work just fine for other images, but yesterday and today I keep getting
curl: (7) Failed to connect to 35.x.x.x port 80: Connection refused
What could be the issue? I tried deleting and recreating the service multiple times, with no luck.
kind: Service
apiVersion: v1
metadata:
  name: my-app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: my-app
        image: gcr.io/myproject/my-app:0.0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
This turned out to be a dumb mistake on my part. The gunicorn server was binding to 127.0.0.1 instead of 0.0.0.0, so it wasn't accessible from outside the pod, but it worked when I exec-ed into the pod.
The fix in my case was changing the entrypoint of the Dockerfile to
CMD [ "gunicorn", "server:app", "-b", "0.0.0.0:8000", "-w", "3" ]
rebuilding the image and updating the deployment.
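The rebuild-and-redeploy step might look like this (the 0.0.2 tag is illustrative):

# Rebuild and push the image under a new tag
docker build -t gcr.io/myproject/my-app:0.0.2 .
docker push gcr.io/myproject/my-app:0.0.2

# Point the deployment at the new image
kubectl set image deployment/my-app my-app=gcr.io/myproject/my-app:0.0.2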
Is the service binding to your pod? What does "kubectl describe svc my-app" say?
Make sure it passes traffic through to your pod on the correct port. You can also try, assuming you're using an instance on GCP, to curl the IP and port of the pod directly and make sure it responds as it should.
kubectl get pods -o wide will tell you the IP of the pod.
Does curl ipofpod:8000 work?