Kubernetes Dashboard & Ingress on Docker Desktop - kubernetes

I am trying to access the Kubernetes dashboard on my local PC through Ingress. The steps I've done so far are:
Install the NGINX Ingress controller with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
PS D:\dev\kubernetes-dashboard-ingress> kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-7rzdl 0/1 Completed 0 148m
pod/ingress-nginx-admission-patch-295pf 0/1 Completed 0 148m
pod/ingress-nginx-controller-7fc74cf778-jz6ts 1/1 Running 0 148m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.106.183.115 localhost 80:30673/TCP,443:32591/TCP 148m
service/ingress-nginx-controller-admission ClusterIP 10.103.188.122 <none> 443/TCP 148m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 148m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-7fc74cf778 1 1 1 148m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 16s 148m
job.batch/ingress-nginx-admission-patch 1/1 16s 148m
Install the Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
When I inspect the kubernetes-dashboard namespace, I notice that the dashboard service is exposed on port 443:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.110.109.6 <none> 8000/TCP 135m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard ClusterIP 10.110.230.166 <none> 443/TCP 135m k8s-app=kubernetes-dashboard
So I created this Ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my-dashboard.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
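I saved this as dashboard-ingress.yaml (the file name is just my choice) and applied it with:
kubectl apply -f .\dashboard-ingress.yaml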
After applying it:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get ingress -n kubernetes-dashboard -o wide
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard-ingress <none> my-dashboard.com localhost 80 121m
I then added the following entry to my Windows hosts file:
127.0.0.1 my-dashboard.com
However, I get nothing when I try to access the dashboard through my browser (http://my-dashboard.com). Have I missed anything?
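One way I can test this from the command line, bypassing the browser entirely (just a sketch; it assumes the ingress controller is reachable on 127.0.0.1:443, as the service output above suggests):
curl.exe -vk --resolve my-dashboard.com:443:127.0.0.1 https://my-dashboard.com/
The -k is there because the dashboard serves a self-signed certificate.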
I was following the tutorial here: https://www.youtube.com/watch?v=X48VuDVv0do. That tutorial uses minikube, where the dashboard service is exposed on port 80, whereas the one I installed directly from GitHub above is exposed on port 443. Do I need to configure some certificate / secret? I noticed that a few things were created in the Secrets by kubernetes-dashboard:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get secret -n kubernetes-dashboard -o wide
NAME TYPE DATA AGE
default-token-97skl kubernetes.io/service-account-token 3 140m
kubernetes-dashboard-certs Opaque 0 140m
kubernetes-dashboard-csrf Opaque 1 140m
kubernetes-dashboard-key-holder Opaque 2 140m
kubernetes-dashboard-token-rwgs4 kubernetes.io/service-account-token 3 140m
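Side note: the *-token-* entries above are service account tokens. If needed for the dashboard login screen, a token can be printed with, for example:
kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-token-rwgs4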
And if I describe the Ingress:
PS D:\dev\kubernetes-dashboard-ingress> kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: dashboard-ingress
Namespace: kubernetes-dashboard
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
my-dashboard.com
/ kubernetes-dashboard:443 (10.1.0.106:8443)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 7m4s (x10 over 144m) nginx-ingress-controller Scheduled for sync
I know I can access the dashboard using kubectl proxy - but I would like to test out Ingress (learning it). Thank you in advance!
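For reference, the kubectl proxy route that does work for me looks roughly like this:
kubectl proxy
# then open in the browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/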
I'm running the following:
Docker Desktop 3.2.2 (61853)
Engine: 20.10.5
Compose: 1.28.5
Kubernetes: v1.19.7

Your service name seems to be wrong:
You listed your services:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.110.109.6 <none> 8000/TCP 135m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard ClusterIP 10.110.230.166 <none> 443/TCP 135m k8s-app=kubernetes-dashboard
In your ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my-dashboard.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-dashboard # <<< This line should be kubernetes-dashboard
            port:
              number: 443

OK, figured out the issue. My request (in Chrome) went through the corporate proxy, which did not forward the request on to my Kubernetes cluster. After adding my-dashboard.com to the proxy bypass (no-proxy) list, I can access it through the browser.
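For anyone hitting the same thing: the exact setting depends on how the proxy is configured, but for command-line tools a per-session bypass can look something like this in PowerShell (Chrome itself follows the Windows system proxy settings, where the bypass list is configured separately):
$env:NO_PROXY = "my-dashboard.com,localhost,127.0.0.1"
curl.exe -vk https://my-dashboard.com/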
Thank you, thomas, for the pointer!

Related

Access kubernetes-dashboard using ingress (404 Not Found)

I'm relatively new to k8s and was following a tutorial to get familiar with it. There was an example on exposing kubernetes-dashboard via ingress, and I tried it out.
I configured kubernetes-dashboard by running the following, as per its documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
But unlike in the tutorial, kubernetes-dashboard was exposed on port 443:
service/dashboard-metrics-scraper ClusterIP 10.108.119.138 <none> 8000/TCP 50m
service/kubernetes-dashboard ClusterIP 10.100.58.17 <none> 443/TCP 50m
So I changed the ingress configuration yaml accordingly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: ingress-dashboard
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: k8s-dashboard.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
Then I described the ingress, got the IP, and added an entry in /etc/hosts for it:
kubectl describe ingress ingress-dashboard -n kubernetes-dashboard
Name: ingress-dashboard
Labels: <none>
Namespace: kubernetes-dashboard
Address: 192.168.49.2
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
k8s-dashboard.com
/ kubernetes-dashboard:443 (172.17.0.6:8443)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 24m (x2 over 25m) nginx-ingress-controller Scheduled for sync
/etc/hosts change
192.168.49.2 k8s-dashbaord.com
When I tried to access k8s-dashbaord.com, I got a 404 Not Found from nginx. So it seems like the ingress is running but it cannot reach the service.
The IP mapped to the ingress rule seems wrong, though (172.17.0.6:8443), because that is not the IP of the service.
What am I doing wrong here?
P.S.
If I just do a proxy (kubectl proxy) and access the dashboard, it works fine.
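For what it's worth, the addresses nginx routes to can be compared against the service's endpoints and pods (sketch; this shows whether 172.17.0.6:8443 is actually the dashboard pod rather than a stale address):
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard -o wide
kubectl get pods -n kubernetes-dashboard -o wide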

404 Not Found error after configuring the Nginx Ingress Controller

UPDATE:
The issue persists, but I used another approach (a sub-domain name instead of a path) to 'bypass' it:
ubuntu@df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dashboard.XXXX
    secretName: df1-tls
  rules:
  - host: dashboard.XXXX
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
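For completeness, a TLS secret like the df1-tls referenced above can be created along these lines (the certificate and key file names are placeholders):
kubectl create secret tls df1-tls -n kubernetes-dashboard --cert=tls.crt --key=tls.key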
This error has been bothering me for some time, and I hope that with your help I can get to the bottom of it.
I have one K8s cluster (single node so far, to avoid any network-related issues). I installed Grafana on it.
All pods are running fine:
ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
As you can see, I installed the nginx ingress controller.
Here is the ingress:
ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
Here is the service that is referenced in the above ingress:
ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
If I do a curl to the cluster ip of the service, it goes through without an issue:
ubuntu:~$ curl 10.96.148.1
Found.
If I do a curl to the hostname with the path to the service, I get the 404 error:
ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
The hostname resolves to the cluster IP of the nginx ingress service (NodePort):
ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
This is the nginx ingress service definition:
ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I missing? Thanks for your help!
This is happening because you are using /grafana as the path, and that path does not exist in the Grafana application itself, hence the 404. You first need to configure Grafana to use this context path before you can forward your traffic to /grafana.
If you use / as the path, it will work. That's why curl 10.96.148.1 works: there you are not adding the /grafana route. But most likely / is already taken by some other service, which is why you were using /grafana to begin with.
Therefore, you need to update your grafana.ini file to set the context root explicitly, as shown below.
You can put your grafana.ini in a ConfigMap, mount it over the original grafana.ini location, and recreate the deployment.
[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
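A minimal sketch of that ConfigMap approach, assuming the stock Grafana image with its config at /etc/grafana/grafana.ini (the ConfigMap name and the mount fragment below are illustrative, not taken from the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
  namespace: default
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/
Then, in the grafana Deployment (fragment of spec.template.spec only):
  volumes:
  - name: grafana-ini
    configMap:
      name: grafana-ini
  containers:
  - name: grafana
    volumeMounts:
    - name: grafana-ini
      mountPath: /etc/grafana/grafana.ini
      subPath: grafana.ini
Since the labels show this Grafana was installed with Helm, the same settings can usually also be supplied through the chart's grafana.ini values instead of a hand-made ConfigMap.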
I can see there is no ingressClassName specified for your ingress. With one, the ingress would look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - kalepa.k8s.io
    secretName: secret_name
  rules:
  - host: kalepa.k8s.io
    http:
      paths:
      ...

GKE - exposing Grafana externally not working using GCP Ingress

I have Prometheus/Grafana enabled on GKE (in the monitoring namespace):
Karans-MacBook-Pro:ingress-ns karanalang$ kc get svc -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.80.9.14 <none> 3000/TCP 7d4h
prometheus-operated ClusterIP None <none> 9090/TCP 7d4h
prometheus-operator ClusterIP None <none> 8080/TCP 7d4h
I'm trying to expose Grafana using Ingress; below is the Ingress description.
The path is /grafana.
Karans-MacBook-Pro:ingress-ns karanalang$ kc describe ingress ingress-grafana -n monitoring
Name: ingress-grafana
Namespace: monitoring
Address: 34.117.119.113
Default backend: default-http-backend:80 (10.76.0.8:8080)
Rules:
Host Path Backends
---- ---- --------
*
/grafana grafana:3000 (10.76.0.5:3000)
Annotations: ingress.kubernetes.io/backends: {"k8s-be-31823--45a575f79c8f25d8":"HEALTHY","k8s1-45a575f7-monitoring-grafana-3000-2fa5518a":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-tkept4vc-monitoring-ingress-grafana-y11p7u0i
ingress.kubernetes.io/target-proxy: k8s2-tp-tkept4vc-monitoring-ingress-grafana-y11p7u0i
ingress.kubernetes.io/url-map: k8s2-um-tkept4vc-monitoring-ingress-grafana-y11p7u0i
kubernetes.io/ingress.class: gce
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 23m loadbalancer-controller UrlMap "k8s2-um-tkept4vc-monitoring-ingress-grafana-y11p7u0i" created
Normal Sync 23m loadbalancer-controller TargetProxy "k8s2-tp-tkept4vc-monitoring-ingress-grafana-y11p7u0i" created
Normal Sync 23m loadbalancer-controller ForwardingRule "k8s2-fr-tkept4vc-monitoring-ingress-grafana-y11p7u0i" created
Normal IPChanged 23m loadbalancer-controller IP is now 34.117.119.113
Normal Sync 5m12s (x7 over 23m) loadbalancer-controller Scheduled for sync
When I do a curl to /grafana, it shows that the login page is found.
However, when I use the same URL in the browser, it gives '404 Not Found'.
Karans-MacBook-Pro:ingress-ns karanalang$ curl 34.117.119.113/grafana
Found.
What needs to be done to debug/fix this?
TIA!
Based on the description provided for your ingress-grafana Ingress resource, this is normal behavior. You only use the /grafana path in the rules, but the response to that request is a redirect to the /login page, and since you don't have any other rules, that follow-up request gets the 404.
To solve this problem, you can change the path in the ingress-grafana Ingress resource to /* as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: grafana
            port:
              number: 3000
This will allow the redirects issued by the Grafana service to be matched as well.

Kubernetes ingress redirects to 504

I'm trying to learn Kubernetes with a couple of RPis at home. I'm running Pi-hole in the cluster, which has worked; the issue I'm now facing is a redirect issue with Ingress.
My ingress.yaml file:
## pihole.ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: pihole
  name: pihole-ingress
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: pihole.192.168.1.230.nip.io
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: pihole-web
            port:
              number: 80
output of kubectl describe ingress:
Name: pihole-ingress
Namespace: pihole
Address: 192.168.1.230
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
pihole.192.168.1.230.nip.io
/ pihole-web:80 (10.42.2.7:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 18s (x12 over 11h) nginx-ingress-controller Scheduled for sync
Output of get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.43.240.186 <none> 443/TCP 22h
ingress-nginx-controller LoadBalancer 10.43.64.54 192.168.1.230 80:31093/TCP,443:30179/TCP 22h
I'm able to get into the pod and curl the cluster IP to get the output I expect, but when I try to visit pihole.192.168.1.230.nip.io, I get a 504 error. Hoping someone can help me get my ingress to route to the pihole-web service. Please let me know if there's any additional information I can provide.
EDIT:
kubectl get po -n pihole -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pihole-7d4dc6b8d8-vclxz 1/1 Running 0 9h 10.42.2.8 node02.iad <none> <none>
kubectl get svc -n pihole
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pihole-web ClusterIP 10.43.102.198 <none> 80/TCP,443/TCP 9h
pihole-dhcp NodePort 10.43.191.110 <none> 67:32021/UDP 9h
pihole-dns-udp NodePort 10.43.214.15 <none> 53:31153/UDP 9h
pihole-dns-tcp NodePort 10.43.168.6 <none> 53:32754/TCP 9h
Another edit: since this question was originally posted and the above edit was made, the pihole pod IP changed from 10.42.2.7 to 10.42.2.8.
I checked the logs for the ingress controller and saw the following. Hoping someone can help me decipher this:
2021/09/03 17:52:35 [error] 1938#1938: *3132346 upstream timed out (110: Operation timed out) while connecting to upstream, client: 10.42.1.1, server: pihole.192.168.1.230.nip.io, request: "GET / HTTP/1.1", upstream: "http://10.42.2.8:80/", host: "pihole.192.168.1.230.nip.io", referrer: "http://pihole.192.168.1.230.nip.io/"
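A way to narrow down an upstream connect timeout like this is to test the pod IP directly from a throwaway pod (sketch; the IP is the one from the log line above):
kubectl run tmp-debug --rm -it --image=busybox --restart=Never -- wget -T 5 -qO- http://10.42.2.8:80/
If that also times out from a pod on a different node than the Pi-hole pod, the problem is more likely node-to-node pod networking than the ingress rule itself.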

Can't get product page from outside using browser via Istio

Hey, it's been quite a few days of struggling to get the sample Bookinfo app running. I am new to Istio and trying to understand it. I followed this demo of another way of setting up Bookinfo. I am using minikube in a VirtualBox machine with Docker as the driver. I set up MetalLB as a load balancer for the ingress gateway; here is the ConfigMap I used for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: custom-ip-space
      protocol: layer2
      addresses:
      - 192.168.49.2/28
The address 192.168.49.2 is the result of the command minikube ip.
The ingress gateway YAML file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - route:
    - destination:
        host: productpage
        port:
          number: 9080
And the output of kubectl get svc -n istio-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.111.105.179 <none> 3000/TCP 34m
istio-citadel ClusterIP 10.100.38.218 <none> 8060/TCP,15014/TCP 34m
istio-egressgateway ClusterIP 10.101.66.207 <none> 80/TCP,443/TCP,15443/TCP 34m
istio-galley ClusterIP 10.103.112.155 <none> 443/TCP,15014/TCP,9901/TCP 34m
istio-ingressgateway LoadBalancer 10.97.23.39 192.168.49.0 15020:32717/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32199/TCP,15030:30010/TCP,15031:30189/TCP,15032:31134/TCP,15443:30748/TCP 34m
istio-pilot ClusterIP 10.108.133.31 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 34m
istio-policy ClusterIP 10.100.74.207 <none> 9091/TCP,15004/TCP,15014/TCP 34m
istio-sidecar-injector ClusterIP 10.97.224.99 <none> 443/TCP,15014/TCP 34m
istio-telemetry ClusterIP 10.101.165.139 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 34m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 34m
jaeger-collector ClusterIP 10.111.188.83 <none> 14267/TCP,14268/TCP,14250/TCP 34m
jaeger-query ClusterIP 10.103.148.144 <none> 16686/TCP 34m
kiali ClusterIP 10.111.57.222 <none> 20001/TCP 34m
prometheus ClusterIP 10.107.204.95 <none> 9090/TCP 34m
tracing ClusterIP 10.104.88.173 <none> 80/TCP 34m
zipkin ClusterIP 10.111.162.93 <none> 9411/TCP 34m
And when trying to curl 192.168.49.0:80/productpage I get:
* Trying 192.168.49.0...
* TCP_NODELAY set
* Immediate connect fail for 192.168.49.0: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server
myhost#k8s:~$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
Before setting up MetalLB, I was getting connection refused!
Any solution for this, please? It's been 5 days of struggling to fix it.
I followed the steps here and all the steps went OK!
In my opinion, this is a problem with the MetalLB configuration.
You are trying to give MetalLB control over IPs from the 192.168.49.2/28 network.
For the 192.168.49.2/28 network we can calculate: HostMin=192.168.49.1 and HostMax=192.168.49.14.
As we can see, your istio-ingressgateway LoadBalancer Service was assigned the address 192.168.49.0, which is the network address of that range rather than a usable host address, and I think that is the cause of the problem.
I recommend changing from 192.168.49.2/28 to a range, such as 192.168.49.10-192.168.49.20.
I've created an example to illustrate how your configuration can be changed.
As you can see, at the beginning I had the same configuration as you (I also couldn't connect to the server using the curl command):
$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingressgateway LoadBalancer 10.109.75.19 192.168.49.0
$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
First, I modified the config ConfigMap:
NOTE: I changed 192.168.49.2/28 to 192.168.49.10-192.168.49.20
$ kubectl edit cm config -n metallb-system
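After the edit, the ConfigMap looks roughly like this (same structure as the original; only the addresses entry changed):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: custom-ip-space
      protocol: layer2
      addresses:
      - 192.168.49.10-192.168.49.20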
Then I restarted all the controller and speaker Pods to force MetalLB to use the new config (see: MetalLB ConfigMap update).
$ kubectl delete pod -n metallb-system --all
pod "controller-65db86ddc6-gf49h" deleted
pod "speaker-7l66v" deleted
After some time, we should see a new EXTERNAL-IP assigned to the istio-ingressgateway Service:
kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP AGE
istio-ingressgateway LoadBalancer 10.106.170.227 192.168.49.10
Finally, we can check if it works as expected:
$ curl 192.168.49.10:80/productpage
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
...