I am trying to run the Bookinfo example locally with WSL2 and Docker Desktop. I am having issues accessing the productpage service via the gateway: I get connection refused. I am not sure whether I missed anything. Here is what I have done, after googling a lot on the internet:
Deployed all services from the Bookinfo example; all are up and running, and I can curl productpage from another service using kubectl exec.
Deployed bookinfo-gateway, using the file from the example without any change, in the default namespace:
Name:         bookinfo-gateway
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2021-06-06T20:47:18Z
  Generation:          1
  Managed Fields:
    API Version:  networking.istio.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:selector:
          .:
          f:istio:
        f:servers:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2021-06-06T20:47:18Z
  Resource Version:  2053564
  Self Link:         /apis/networking.istio.io/v1beta1/namespaces/default/gateways/bookinfo-gateway
  UID:               aa390a1d-2e34-4599-a1ec-50ad7aa9bdc6
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:  <none>
The istio-ingressgateway service is exposed to the outside via localhost (not sure how this is configured, as it is deployed during Istio installation) on port 80, which, as I understand it, will be used by bookinfo-gateway:
kubectl get svc istio-ingressgateway -n istio-system
Following the "Determining the ingress IP and ports" section in the instructions, my INGRESS_HOST is 127.0.0.1 and my INGRESS_PORT is 80.
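For reference, these are the commands from that section that set the two variables:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')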
curl -v -s http://127.0.0.1:80/productpage | grep -o ".*"
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
Trying http://127.0.0.1/productpage in a browser returns 404. Does this 404 mean the gateway is up, but the virtual service is not working?
A further question, if it is relevant: I am a bit confused about how WSL2 works now. It looks like localhost in a Windows browser and localhost in the WSL2 terminal are not the same thing, though I know there is some forwarding from Windows to the WSL2 VM (whose IP I can get from /etc/resolv.conf). If they were the same, why would one return connection refused and the other return 404?
On Windows I have tried to disable IIS and anything else running on port 80 (net stop http). Somehow, I can still see something listening on port 80:
netstat -aon | findstr :80
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4
tasklist /svc /FI "PID eq 4"
Image Name PID Services
========================= ======== ============================================
System 4 N/A
I am wondering whether this is what causes the difference between curl in WSL2 and the Windows browser above. Is Windows running another HTTP server on port 80?
I know this is a lot of questions. I believe many of us who are new to Istio and WSL2 may have similar questions. Hopefully this helps others as well. Please advise.
There seems to be a problem with WSL2 itself, probably connected with "Local sites running in WSL2 not accessible in browser" (WSL issue #5298).
You can work around that by issuing
ip addr show
in your WSL distro, and replacing 127.0.0.1/localhost with the eth0 address. In my case it is 172.21.29.254, so the URL is http://172.21.29.254/productpage.
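If you want to grab that address in one step (assuming the interface is eth0, the WSL2 default, and GNU grep is available, as on Ubuntu):
WSL_IP=$(ip -4 addr show eth0 | grep -oP '(?<=inet )[0-9.]+' | head -n1)
curl -s http://$WSL_IP/productpage | head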
This workaround worked for me.
I managed to get this working. This is what I did:
Shell into the distro (mine was Ubuntu 20.04 LTS)
Run:
sudo apt-get -y install socat
sudo apt update
sudo apt upgrade
exit
The above installs socat (which I was getting connection refused errors about when looking at the Istio logs) and brings the distro fully up to date.
Now run a port-forward so that you can hit the Istio gateway on localhost:<port>:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
If port 8080 is already in use, just remove it from the command (use :80 on its own) and the port-forward will select a free local port.
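For example, letting kubectl choose a free local port (it prints the one it picked in its "Forwarding from ..." line):
kubectl port-forward svc/istio-ingressgateway :80 -n istio-system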
Now go to
http://localhost:8080/productpage
You should hit the page and the port-forward should output
Handling connection for 8080
Hope that helps...
The good thing is that now I don't have to use Hyper-V or another cluster installer like minikube/microk8s; I can use the built-in Kubernetes in Docker Desktop, and my laptop doesn't seem under load for what I'm doing either.
Related
I'm new to microk8s, and I'm trying things out by deploying a simple apache2 pod to see things working on my Mac M1:
◼ ~ $ microk8s kubectl run apache --image=ubuntu/apache2:2.4-22.04_beta --port=80
pod/apache created
◼ ~ $ microk8s kubectl get pods
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 5m37s
◼ ~ $ microk8s kubectl port-forward pod/apache 3000:80
Forwarding from 127.0.0.1:3000 -> 80
but:
◼ ~ $ curl http://localhost:3000
curl: (7) Failed to connect to localhost port 3000 after 5 ms: Connection refused
I've also tried to use a service:
◼ ~ $ microk8s kubectl expose pod apache --type=NodePort --port=4000 --target-port=80
service/apache exposed
◼ ~ $ curl http://localhost:4000
curl: (7) Failed to connect to localhost port 4000 after 3 ms: Connection refused
I guess I'm doing something wrong?
For some reason I haven't figured out, if I port-forward from within the VM, by opening a shell via multipass, it does work. Then you simply have to point at the VM's IP.
Within the VM's shell:
ubuntu@microk8s-vm:~$ sudo microk8s kubectl port-forward service/hellopg 8080:8080 --address="0.0.0.0"
Forwarding from 0.0.0.0:8080 -> 8080
Handling connection for 8080
ubuntu@microk8s-vm:~$ ifconfig enp0s1
enp0s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.64.2 netmask 255.255.255.0 broadcast 192.168.64.255
inet6 fde3:1a04:ba31:1209:5054:ff:fea9:9cf4 prefixlen 64 scopeid 0x0<global>
from the host:
curl http://192.168.64.2:8080/hello
{"status": "how you doing?", "env_var":"¡hola mundo!"}
It works. I guess the command run via microk8s is not executed properly within the machine? If anybody can explain this, I'll update the question.
MicroK8s acts the same as Kubernetes, so it's better to create a Service of type NodePort. This will expose your apache pod:
apiVersion: v1
kind: Service
metadata:
  name: my-apache
spec:
  type: NodePort
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004
Change the selector as per your requirements. For more detailed information on creating a NodePort service, refer to this official document.
You can use an Ingress as well, but in your case, just for testing, you can go with NodePort and hit the node directly, as shown below.
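To reach it, combine the node's IP with the fixed nodePort. On a Mac M1 the microk8s node is the multipass VM, so something like this (the VM IP here is reused from the question; yours will differ):
microk8s kubectl get nodes -o wide   # the INTERNAL-IP column shows the node address
curl http://192.168.64.2:30004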
I think the easiest way for you to test it would be to add externalIPs to your service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - 192.168.56.100 # your cluster IP
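Assuming 192.168.56.100 is actually reachable from your machine (a node IP, for example), you can then test it with:
curl http://192.168.56.100/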
Happy coding!
I am trying to run a local cluster on a Mac with the M1 chip using Minikube (Docker driver). I enabled the ingress addon in Minikube, I have a separate terminal in which I'm running minikube tunnel, and I enabled the Minikube dashboard, which I want to expose using Ingress.
This is my configuration file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
    - host: dashboard.com
      http:
        paths:
          - backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 80
            pathType: Prefix
            path: /
I also put "dashboard.com" in my /etc/hosts file, and it actually resolves to the right IP, but it doesn't respond when I open http://dashboard.com in a browser or try to ping it; I always get a timeout.
NOTE: when I run minikube tunnel I get
❗ The service/ingress dashboard-ingress requires privileged ports to be exposed: [80 443]
🔑 sudo permission will be asked for it.
I enter my sudo password, and then nothing gets printed afterwards. Not sure if this is an issue or the expected behavior.
What am I doing wrong?
I had the same behavior, and apparently what's needed for minikube tunnel to work is to map the hostname to 127.0.0.1 in /etc/hosts, instead of to the output of minikube ip or the address in the ingress description.
That fixed it for me.
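For reference, the /etc/hosts entry then looks like this:
127.0.0.1 dashboard.com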
I had a similar issue on Mac M1. I initially tried the ingress-dns addon, but then realised that while it can be enabled, it is not currently working or supported with the Docker driver: https://github.com/kubernetes/minikube/issues/7332#issuecomment-608133325
Some Mac Intel users have got it working using the hyperkit driver, but that's not available for Mac M1 yet.
My answer for now is to use minikube tunnel (https://minikube.sigs.k8s.io/docs/handbook/accessing/) and add an entry to /etc/hosts for the ingress. You also have to pass in the cluster name using the -p parameter, e.g.: minikube tunnel --cleanup -p <CLUSTER_NAME>
How do I expose an ingress when running Kubernetes with minikube on Windows 10?
I have enabled the minikube ingress add on.
My ingress is running here...
NAME CLASS HOSTS ADDRESS PORTS AGE
helmtest-ingress nginx helmtest.info 192.168.49.2 80 37m
I have added my hosts entry...
192.168.49.2 helmtest.info
I just get nothing when attempting to browse or ping either 192.168.49.2 or helmtest.info
My ingress looks like the following
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helmtest-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: helmtest.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helmtest-service
                port:
                  number: 80
My service looks like the following...
apiVersion: v1
kind: Service
metadata:
  name: helmtest-service
  labels:
    app: helmtest-service
spec:
  type: ClusterIP
  selector:
    app: helmtest
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
I can access my service successfully in the browser after running minikube service helmtest-service --url
If I run minikube tunnel it just hangs here....
minikube tunnel
❗ Access to ports below 1024 may fail on Windows with OpenSSH clients older than v8.1. For more information, see: https://minikube.sigs.k8s.io/docs/handbook/accessing/#access-to-ports-1024-on-windows-requires-root-permission
🏃 Starting tunnel for service helmtest-ingress.
Where am I going wrong here?
The OP didn't provide further information, so I will provide an answer based on what is available.
You can run Ingress on Minikube using the minikube addons enable ingress command. However, there are more ingress-related addons, like Ingress DNS, enabled with minikube addons enable ingress-dns. In the Minikube documentation you can find more details about this addon and when you should use it.
Minikube also has quite a well-described section about tunnel. An important fact about the tunnel is that it must run in a separate terminal window to keep the LoadBalancer running.
Services of type LoadBalancer can be exposed via the minikube tunnel command. It must be run in a separate terminal window to keep the LoadBalancer running. Ctrl-C in the terminal can be used to terminate the process at which time the network routes will be cleaned up.
This part is described in Accessing apps documentation.
As the OP mentions:
I can access my service successfully in the browser after running minikube service helmtest-service --url
If I run minikube tunnel it just hangs here....
Possible Solution
You might be using an old version of OpenSSH; update it.
You are using ports <1024. This situation is described in the known issues section linked above. Try using a higher port, like 5000, as in this example.
It might look like the tunnel just hangs, but that may be the expected behavior: the tunnel keeps running, so leave it open and use another terminal window.
Useful links
How do I expose ingress to my local machine? (minikube on windows)
Cannot export a IP in minikube and haproxy loadBalancer - using minikube tunnel
It might be that the hosts file is missing the minikube IP address mapped to your hostname. If the ingress cannot resolve the hostname you set in the YAML file, it just stays in the scheduled-to-sync phase.
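One way to add the entry this answer suggests (hostname taken from the ingress above):
echo "$(minikube ip) helmtest.info" | sudo tee -a /etc/hosts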
Similar answer
I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local # Cluster or Local
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully ( curl 10.0.0.20:30080 ), and I get "Welcome to nginx!..." However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case:
sudo semanage port --add --type http_port_t --proto tcp 30080
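You can verify which ports http_port_t currently allows with:
sudo semanage port -l | grep http_port_t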
I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the service running has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue, rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
So in fact you're querying the cluster DNS about the FQDN of the Service named testpod, not the FQDN of the Pod. Judging by the fact that it resolves successfully, such a Service already exists in your cluster, but it is most probably misconfigured. Getting the error message connection refused can mean the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved
(otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've successfully reached your testpod Service
(otherwise, i.e. if it existed but wasn't listening on the 8080 port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the Service)
but once you've reached the Pod, you're trying to connect to an incorrect port, and that's why the connection is refused by the server
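A quick way to check how the Service maps its ports, and whether it selects any Pods at all (names as used in the question):
kubectl describe svc testpod -n mynamespace
kubectl get endpoints testpod -n mynamespace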
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via a ClusterIP Service by specifying only the --port value, e.g.:
kubectl expose pod testpod --port=8080
In such a case, both --port (port of the Service) and --targetPort (port of the Pod) get the same value. In other words, you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
And you probably should've exposed it either this way:
kubectl expose pod testpod --port=8080 --targetPort=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Of course your targetPort may be different from 80, but connection refused in such a case can mean only one thing: the target HTTP server (running in a Pod) refuses the connection on port 8080, most probably because it isn't listening on it. You didn't specify what image you're using, whether it's a standard nginx webserver or something based on your custom image. But if it's nginx and wasn't configured differently, it listens on port 80.
For further debugging, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario), run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container is listening on.
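If you can't install packages in the container, ss from the iproute2 package is often already present and shows the same information:
ss -tlnp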
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.