In a Kubernetes kubeadm cluster I deployed my app with:
kubectl run gmt-dpl --image=192.168.56.33:5000/img:gmt --port 8181
I exposed the app with:
kubectl expose deployment gmt-dpl --name=gmt-svc --type=NodePort --port=443 --target-port=8181
My gmt-svc:
Name: gmt-svc
Namespace: default
Labels: run=gmt-dpl
Annotations: <none>
Selector: run=gmt-dpl
Type: NodePort
IP: 10.96.74.133
Port: <unset> 443/TCP
TargetPort: 8181/TCP
NodePort: <unset> 30723/TCP
Endpoints: 10.44.0.17:8181
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I'm able to access my app at https://nodeip:30723
Following the GitHub doc I managed to install the nginx ingress controller:
NAMESPACE NAME READY STATUS RESTARTS AGE
default gmt-dpl-84fb9cfd8d-vz2bw 1/1 Running 0 1h
ingress-nginx default-http-backend-55c6c69b88-c4k6c 1/1 Running 5 2d
ingress-nginx nginx-ingress-controller-9c7b694-szgjd 1/1 Running 8 2d
kube-system etcd-k8s-master 1/1 Running 5 2d
kube-system kube-apiserver-k8s-master 1/1 Running 1 4h
kube-system kube-controller-manager-k8s-master 1/1 Running 7 2d
kube-system kube-dns-6f4fd4bdf-px5f8 3/3 Running 12 2d
kube-system kube-proxy-jswfx 1/1 Running 8 2d
kube-system kube-proxy-n2chh 1/1 Running 4 2d
kube-system kube-scheduler-k8s-master 1/1 Running 7 2d
kube-system weave-net-4k8sp 2/2 Running 10 1d
kube-system weave-net-bqjzb 2/2 Running 19 1d
I exposed the nginx controller with the following Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-svc
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: http
  - port: 443
    nodePort: 30000
    name: https
  - port: 18080
    nodePort: 30002
    name: status
  selector:
    app: ingress-nginx
And I created an Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  name: test-ingress
spec:
  tls:
  - hosts:
    - kube.gmt.dz
    secretName: foo-secret
  rules:
  - host: kube.gmt.dz
    http:
      paths:
      - path: /*
        backend:
          serviceName: gmt-svc
          servicePort: 443
Now, when trying to reach my app through the nginx controller at https://ip_master:30000, I get "default backend - 404", and I get the same error at http://ip_master:30001, keeping in mind that my app is only accessible over https.
In the Ingress resource I also tried servicePort: 8181 but I got the same error.
Any ideas?
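For reference, here is a minimal sketch of how the rule could be written so that nginx forwards to the pod's HTTPS backend; the backend-protocol annotation (or secure-backends on older controller versions) and the plain "/" path are my assumptions for the community ingress-nginx controller, not something confirmed above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # assumption: tell ingress-nginx to speak HTTPS to the upstream (gmt-svc:443 -> pod:8181)
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - kube.gmt.dz
    secretName: foo-secret
  rules:
  - host: kube.gmt.dz
    http:
      paths:
      - path: /
        backend:
          serviceName: gmt-svc
          servicePort: 443
Note that the request must then carry the host kube.gmt.dz (for example via an /etc/hosts entry pointing at the node IP); a request to https://ip_master:30000 without that host header does not match the rule and falls through to the default backend, which answers 404.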
Related
I have Keycloak installed on my Kubernetes cluster.
The default Ingress which Keycloak creates looks like this.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    route.openshift.io/termination: passthrough
  creationTimestamp: "2022-11-09T13:08:00Z"
  generation: 1
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
  name: keycloak-kc-ingress
  namespace: default
  ownerReferences:
  - apiVersion: k8s.keycloak.org/v2alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Keycloak
    name: keycloak-kc
    uid: 67a18d00-4bee-4587-b330-cdaf21b39084
  resourceVersion: "155002"
  uid: 87c2aff4-1489-4ba9-bdf6-9fe1a288c800
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: keycloak.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.3
After installing ingress-nginx and adding the kubernetes.io/ingress.class=nginx annotation, everything works.
For some reason, however, I need to use nginx-ingress.
My new ingress looks like this.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # route.openshift.io/termination: passthrough
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
    # target: keycloak-kc-service
  name: keycloak-kc-ingress
  namespace: default
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: accounts.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - accounts.example.com
    secretName: keycloak-tls-secret
Unfortunately, this Ingress returns the error "502 Bad Gateway".
We can't figure it out. Please help.
Information for debugging
kubectl get deployments -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default keycloak-operator 2/2 2 2 141m
kube-system cilium-operator 1/1 1 1 148m
kube-system coredns 2/2 2 2 148m
kube-system konnectivity-agent 2/2 2 2 148m
kube-system metrics-server 2/2 2 2 148m
kubernetes-dashboard dashboard-metrics-scraper 2/2 2 2 148m
nginx-ingress nginx-ingress-nginx-ingress-nginx-ingress 1/1 1 1 127m
olm catalog-operator 1/1 1 1 142m
olm olm-operator 1/1 1 1 142m
olm packageserver 2/2 2 2 142m
kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default keycloak-kc-discovery ClusterIP None <none> 7800/TCP 114m
default keycloak-kc-service ClusterIP 10.240.18.67 <none> 8443/TCP 114m
default keycloak-operator ClusterIP 10.240.24.103 <none> 80/TCP 141m
default kubernetes ClusterIP 10.240.16.1 <none> 443/TCP 149m
default postgres-db ClusterIP 10.240.18.157 <none> 5432/TCP 140m
kube-system hcloud-csi-controller-metrics ClusterIP 10.240.30.190 <none> 9189/TCP 149m
kube-system hcloud-csi-node-metrics ClusterIP 10.240.26.123 <none> 9189/TCP 149m
kube-system kube-dns ClusterIP 10.240.16.10 <none> 53/TCP,53/UDP 149m
kube-system metrics-server ClusterIP 10.240.31.184 <none> 443/TCP 149m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.240.25.29 <none> 8000/TCP 149m
nginx-ingress nginx-ingress-nginx-ingress-nginx-ingress LoadBalancer 10.240.26.173 10.0.0.3,167.235.123.123,2a01:4f8:1c1f:6484::1 80:31670/TCP,443:30557/TCP 128m
olm operatorhubio-catalog ClusterIP 10.240.22.30 <none> 50051/TCP 142m
olm packageserver-service ClusterIP 10.240.23.246 <none>
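One thing that stands out when comparing the two manifests: the working ingress-nginx setup used nginx.ingress.kubernetes.io/backend-protocol: HTTPS, which is commented out in the new one even though keycloak-kc-service only listens on HTTPS (8443), and a proxy speaking plain HTTP to a TLS-only backend is a classic source of 502s. A sketch of the annotation block with it restored, under the assumption that the controller in use honours the ingress-nginx annotation set (the NGINX Inc. nginx-ingress controller ignores those and uses its own nginx.org/* annotations, e.g. nginx.org/ssl-services, so that would need to be checked against its docs):
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    # restored: the backend only speaks TLS on 8443
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS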
UPDATE:
The issue persists, but I used another approach (a sub-domain name instead of a path) to bypass it:
ubuntu@df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dashboard.XXXX
    secretName: df1-tls
  rules:
  - host: dashboard.XXXX
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
This error has been bothering me for some time and I hope with your help I can get to the bottom of it.
I have one K8s cluster (a single node so far, to avoid any network-related issues). I installed Grafana on it.
All pods are running fine:
ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
As you can see, I installed the nginx ingress controller.
Here is the Ingress:
ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
Here is the service that is defined in above ingress:
ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
If I do a curl to the cluster ip of the service, it goes through without an issue:
ubuntu:~$ curl 10.96.148.1
Found.
If I do a curl to the hostname with the path to the service, I get the 404 error:
ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
The hostname resolves to the cluster IP of the nginx ingress service (NodePort):
ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
This is the nginx ingress service definition:
ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I missing? Thanks for your help!
This is happening because you are using /grafana and this path does not exist in the Grafana application, hence the 404. You need to first configure Grafana to use this context path before you can forward your traffic to /grafana.
If you use / as the path, it will work. That's why curl 10.96.148.1 works: you are not adding the /grafana route. But most likely that path is already used by some other service, which is why you were using /grafana to begin with.
Therefore, you need to update your grafana.ini file to set the context root explicitly, as shown below.
You may put your grafana.ini in a ConfigMap, mount it to the original grafana.ini location, and recreate the deployment.
[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
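As a rough sketch of that ConfigMap approach (the ConfigMap name and the /etc/grafana/grafana.ini mount path are assumptions based on the stock Grafana image, not taken from the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini            # hypothetical name
  namespace: default
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/
and, inside the Grafana Deployment's pod spec, a volume plus volumeMount along these lines (fragment only):
  volumes:
  - name: grafana-ini
    configMap:
      name: grafana-ini
  containers:
  - name: grafana
    volumeMounts:
    - name: grafana-ini
      mountPath: /etc/grafana/grafana.ini   # default config location in the stock image
      subPath: grafana.ini
Depending on the Grafana version, serving from a sub-path may also require serve_from_sub_path = true in the [server] section; worth checking the docs for the version in use (8.3.3 here).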
I can see there is no ingressClassName specified for your Ingress. It should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - kalepa.k8s.io
    secretName: secret_name
  rules:
  - host: kalepa.k8s.io
    http:
      paths:
      ...
I'm trying to expose my backend API service using the nginx ingress controller. Here is the Ingress that I have defined:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: plant-simulator-ingress
  namespace: plant-simulator-ns
  annotations:
    ingress.kubernetes.io/enable-cors: "true"
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/rewrite-target: /
    prometheus.io/scrape: 'true'
    prometheus.io/path: /metrics
    prometheus.io/port: '80'
spec:
  rules:
  - host: grafana.local
    http:
      paths:
      - backend:
          serviceName: grafana-ip-service
          servicePort: 8080
  - host: prometheus.local
    http:
      paths:
      - backend:
          serviceName: prometheus-ip-service
          servicePort: 8080
  - host: plant-simulator.local
    http:
      paths:
      - backend:
          serviceName: plant-simulator-service
          servicePort: 9000
The plant-simulator-service is defined as follows:
apiVersion: v1
kind: Service
metadata:
  name: plant-simulator-service
  namespace: plant-simulator-ns
  labels:
    name: plant-simulator-service
spec:
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
    name: plant-simulator-service-port
  selector:
    app: plant-simulator
  type: LoadBalancer
I successfully deployed this on my Minikube and here is the set of pods running:
Joes-MacBook-Pro:~ joesan$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-cvblh 1/1 Running 0 39m
kube-system coredns-6955765f44-xh2wg 1/1 Running 0 39m
kube-system etcd-minikube 1/1 Running 0 39m
kube-system kube-apiserver-minikube 1/1 Running 0 39m
kube-system kube-controller-manager-minikube 1/1 Running 0 39m
kube-system kube-proxy-n6scg 1/1 Running 0 39m
kube-system kube-scheduler-minikube 1/1 Running 0 39m
kube-system storage-provisioner 1/1 Running 0 39m
plant-simulator-ns flux-5476b788b9-g7xtn 1/1 Running 0 20m
plant-simulator-ns memcached-86bdf9f56b-zgshx 1/1 Running 0 20m
plant-simulator-ns plant-simulator-6d46dc89cb-xsjgv 1/1 Running 0 65s
Here is the list of services:
Joes-MacBook-Pro:~ joesan$ minikube service list
|--------------------|-------------------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|--------------------|-------------------------|-----------------------------|-----|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| plant-simulator-ns | memcached | No node port |
| plant-simulator-ns | plant-simulator-service | http://192.168.99.103:32638 |
|--------------------|-------------------------|-----------------------------|-----|
What I want to achieve is for my application backend to be reachable via the DNS entry that I have configured in my Ingress:
plant-simulator.local
Any ideas as to what I'm missing?
The OP reported that the issue was solved by adding the IP and hostname to /etc/hosts:
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
192.168.99.103 plant-simulator.local
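A convenient way to produce that entry on the machine running Minikube, assuming minikube ip prints the same node address as above (the command is only a convenience, not part of the original report):
echo "$(minikube ip) plant-simulator.local" | sudo tee -a /etc/hosts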
I have a Minikube cluster with Calico running and I am trying to make NetworkPolicies work. Here are my Pods and Services:
First pod (team-a):
apiVersion: v1
kind: Pod
metadata:
  name: team-a
  namespace: orga-1
  labels:
    run: nginx
    app: team-a
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-a
  namespace: orga-1
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-a
Second pod (team-b):
apiVersion: v1
kind: Pod
metadata:
  name: team-b
  namespace: orga-2
  labels:
    run: nginx
    app: team-b
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-b
  namespace: orga-2
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-b
When I execute a bash in team-a, I cannot curl orga-2.team-b:
dev@ubuntu:~$ kubectl exec -it -n orga-1 team-a /bin/bash
root@team-a:/# curl google.de
//Body removed...
root@team-a:/# curl orga-2.team-b
curl: (6) Could not resolve host: orga-2.team-b
Now I applied a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-base-rule
  namespace: orga-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
When I now curl google from team-a, it still works. Here are my pods:
kube-system calico-etcd-hbpqc 1/1 Running 0 27m
kube-system calico-kube-controllers-6b86746955-5mk9v 1/1 Running 0 27m
kube-system calico-node-72rcl 2/2 Running 0 27m
kube-system coredns-fb8b8dccf-6j64x 1/1 Running 1 29m
kube-system coredns-fb8b8dccf-vjwl7 1/1 Running 1 29m
kube-system default-http-backend-6864bbb7db-5c25r 1/1 Running 0 29m
kube-system etcd-minikube 1/1 Running 0 28m
kube-system kube-addon-manager-minikube 1/1 Running 0 28m
kube-system kube-apiserver-minikube 1/1 Running 0 28m
kube-system kube-controller-manager-minikube 1/1 Running 0 28m
kube-system kube-proxy-p48xv 1/1 Running 0 29m
kube-system kube-scheduler-minikube 1/1 Running 0 28m
kube-system nginx-ingress-controller-586cdc477c-6rh6w 1/1 Running 0 29m
kube-system storage-provisioner 1/1 Running 0 29m
orga-1 team-a 1/1 Running 0 20m
orga-2 team-b 1/1 Running 0 7m20s
and my services:
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
kube-system calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 27m
kube-system default-http-backend NodePort 10.105.84.105 <none> 80:30001/TCP 29m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 29m
orga-1 team-a ClusterIP 10.101.4.159 <none> 80/TCP 8m37s
orga-2 team-b ClusterIP 10.105.79.255 <none> 80/TCP 7m54s
The kube-dns endpoint is available, and so is the service.
Why is my network policy not working, and why does the curl to the other pod fail?
Please run:
curl team-a.orga-1.svc.cluster.local
curl team-b.orga-2.svc.cluster.local
and verify the entries in /etc/resolv.conf (cat /etc/resolv.conf).
If you can reach your pods, then please follow this tutorial.
Deny all ingress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orga-1
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
and Allow ingress traffic to Nginx:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
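A possible way to check the effect of these two policies, using the pod and service names from the question; the expected outcomes are my reading of the manifests (the allow rule's empty podSelector only matches pods in the same namespace, so the cross-namespace request should time out):
# from orga-1 (same namespace) - expected to succeed
kubectl exec -n orga-1 team-a -- curl -s --max-time 5 http://team-a.orga-1.svc.cluster.local
# from orga-2 (other namespace) - expected to be blocked by the deny/allow pair
kubectl exec -n orga-2 team-b -- curl -s --max-time 5 http://team-a.orga-1.svc.cluster.local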
Below you can find more information about:
Pod’s DNS Policy,
Network Policies
Hope this helps.
I'm trying to inject Istio into my Kubernetes (Minikube) environment on my local Ubuntu 16.04 system. This is my deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-master
  labels:
    run: nodejs-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-master
    spec:
      containers:
      - name: nodejs-master
        image: hegdemahendra9/nodejs-master:v1
        ports:
        - containerPort: 8080
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: port1
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-slave
  labels:
    run: nodejs-slave
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-slave
    spec:
      containers:
      - name: nodejs-slave
        image: hegdemahendra9/nodejs-slave:v1
        ports:
        - containerPort: 8081
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-slave
spec:
  selector:
    run: nodejs-slave
  ports:
  - name: port1
    protocol: TCP
    port: 8081
    targetPort: 8081
  type: NodePort
I've enabled automatic sidecar injection and ran $ kubectl apply -f deployment.yaml
I've installed Istio via this method.
Here are my Istio installation details:
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-citadel-6d7f9c545b-r665q 1/1 Running 0 2h
istio-cleanup-secrets-qg4zh 0/1 Completed 0 2h
istio-egressgateway-866885bb49-9l5rx 1/1 Running 0 2h
istio-galley-6d74549bb9-jslss 1/1 Running 0 2h
istio-ingressgateway-6c6ffb7dc8-rzvxb 1/1 Running 0 2h
istio-pilot-685fc95d96-6296x 0/2 Pending 0 2h
istio-policy-688f99c9c4-trg2j 2/2 Running 0 2h
istio-security-post-install-gs6vk 0/1 Completed 0 2h
istio-sidecar-injector-74855c54b9-j94qr 1/1 Running 0 2h
istio-telemetry-69b794ff59-rqbzw 2/2 Running 0 2h
prometheus-f556886b8-kj5ks 1/1 Running 0 2h
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-citadel ClusterIP 10.108.144.211 <none> 8060/TCP,9093/TCP 2h
istio-egressgateway NodePort 10.99.160.138 <none> 80:32415/TCP,443:32480/TCP 2h
istio-galley ClusterIP 10.97.0.188 <none> 443/TCP,9093/TCP 2h
istio-ingressgateway NodePort 10.97.75.20 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32188/TCP,8060:31372/TCP,853:31197/TCP,15030:30606/TCP,15031:31026/TCP 2h
istio-pilot ClusterIP 10.106.145.225 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 2h
istio-policy ClusterIP 10.110.104.100 <none> 9091/TCP,15004/TCP,9093/TCP 2h
istio-sidecar-injector ClusterIP 10.99.236.121 <none> 443/TCP 2h
istio-telemetry ClusterIP 10.103.92.170 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 2h
prometheus ClusterIP 10.105.31.126 <none> 9090/TCP
Here are my deployment details:
$kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-master-6494d9dd66-pdbd6 2/2 Running 0 2h
nodejs-slave-599cd5d676-6w4s8 2/2 Running 0 2h
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
nodejs-master ClusterIP 10.104.99.240 <none> 8080/TCP 2h
nodejs-slave NodePort 10.101.120.229 <none> 8081:31263/TCP 2h
Here is my gateway YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ms-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mater-slave
spec:
  hosts:
  - "*"
  gateways:
  - ms-gateway
  http:
  - match:
    - uri:
        prefix: /master
    route:
    - destination:
        host: nodejs-master
        port:
          number: 8080
I've applied my gateway using the kubectl apply command, and I'm trying to access it using
http://$(minikube ip):$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')/master
i.e. http://192.168.99.100:31380/master
but I'm getting a connection refused error. Someone please help.
Thanks in advance.
Maybe it's the name of the service port. According to Istio's port-naming requirements it should be protocol-prefixed, e.g. "tcp-*": https://istio.io/docs/setup/kubernetes/additional-setup/requirements/
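For example, a sketch of the nodejs-master Service with the port renamed to follow that convention (whether http- or tcp- is the appropriate prefix depends on the protocol Istio should assume for this traffic; the name below is only illustrative):
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: http-port1      # protocol-prefixed port name, per Istio's port-naming requirements
    protocol: TCP
    port: 8080
    targetPort: 8080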