NetworkPolicy can not restrict Ingress from UI - kubernetes

I have a Flask service (6 replicas) and a UI (3 replicas) deployed using kind: Deployment, but when I add a Calico NetworkPolicy like this:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: application-network-policy
  namespace: team-prod-xyz
  labels:
    app: application-network-policy
spec:
  podSelector:
    matchLabels:
      app: xyz-svc
      run: xyz-svc
  ingress:
  - ports:
    - port: 8000
    from:
    - podSelector:
        matchLabels:
          app: xyz-ui
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
My Flask service returns this if I access it directly:
504 Gateway Time-out
nginx/1.15.3
which is probably expected, but my UI cannot hit the endpoints either.
Why is that?
EDIT 2: Kubernetes and Ingress Information
Kubernetes Version -
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:02:12Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
NAME READY STATUS RESTARTS AGE
pod/xyz-mongodb-replicaset-0 1/1 Running 0 10d
pod/xyz-mongodb-replicaset-1 1/1 Running 0 7d
pod/xyz-mongodb-replicaset-2 1/1 Running 0 6d23h
pod/xyz-svc-7b589fbd4-25qd6 1/1 Running 0 20h
pod/xyz-svc-7b589fbd4-9n8jh 1/1 Running 0 20h
pod/xyz-svc-7b589fbd4-r5q9g 1/1 Running 0 20h
pod/xyz-ui-7d6f44b57b-8s4mq 1/1 Running 0 3d20h
pod/xyz-ui-7d6f44b57b-bl8r6 1/1 Running 0 3d20h
pod/xyz-ui-7d6f44b57b-jwhc2 1/1 Running 0 3d20h
pod/mongodb-backup-check 1/1 Running 0 20h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/xyz-mongodb-replicaset ClusterIP None <none> 27017/TCP 10d
service/xyz-prod-service ClusterIP 10.3.92.123 <none> 8000/TCP 20h
service/xyz-prod-ui ClusterIP 10.3.49.132 <none> 80/TCP 10d
--Deployment--
--ReplicaSet--
--StatefulSet--
My ingress looks like -
Name: xyz-prod-svc
Namespace: prod-xyz
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
prod terminates xyz.prod.domain.com
Rules:
Host Path Backends
---- ---- --------
xyz.prod.domain.com
/ xyz-prod-u:80 (10.7.2.4:80,10.7.4.22:80,10.7.5.24:80)
/project xyz-prod-servic:8000 (10.7.2.15:8000,10.7.5.10:8000,10.7.5.10:8000 + 3 more...)
/trigger xyz-prod-servic:8000 (10.7.2.15:8000,10.7.5.10:8000,10.7.5.10:8000 + 3 more...)
/kpi xyz-prod-servic:8000 (10.7.2.15:8000,10.7.5.10:8000,10.7.5.10:8000 + 3 more...)
/feedback xyz-prod-servic:8000 (10.7.2.15:8000,10.7.5.10:8000,10.7.5.10:8000 + 3 more...)
Do I have to specify my Ingress in the podSelector option of my Network Policy?
So far my Network Policy looks like this -
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: application-network-policy
  namespace: app-prod-xyz
  labels:
    app: application-network-policy
spec:
  podSelector:
    matchLabels:
      run: xyz-svc
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: xyz-ui
    - podSelector:
        matchLabels:
          app: application-health-check
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: xyz-ui
    - podSelector:
        matchLabels:
          app: xyz-mongodb-replicaset
    - podSelector:
        matchLabels:
          app: mongodb-replicaset
EDIT 1: I learned that we need to expose port 8000 using a config map before the network policy.
EDIT 3: By UI I mean the deployment built from the Node image. I still have to check whether the request is sent through the UI pod or goes directly to the svc pod.
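For reference, here is roughly what a rule that also admits traffic arriving through the nginx ingress controller pods could look like. This is only a sketch: the controller namespace label (name: ingress-nginx) and pod label (app.kubernetes.io/name: ingress-nginx) below are assumptions and must be checked against the actual controller deployment. The spec.podSelector chooses which pods the policy applies to; the from block is where peers such as the UI pods or the controller pods are allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-svc-from-ui-and-ingress
  namespace: team-prod-xyz
spec:
  podSelector:
    matchLabels:
      run: xyz-svc
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: xyz-ui
    # assumed labels below: adjust to the namespace/labels your ingress controller really carries
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
    ports:
    - port: 8000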

Related

Can't curl AKS Load Balancer service

I have an AKS cluster with default settings. I'm trying to create a very simple Deployment/Service. The Service is of type LoadBalancer. I see the service is created, however I cannot curl the service's public IP. I don't even get an error, curl just hangs.
$ kubectl get all --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod/myapp-79579b5b68-npb2g 1/1 Running 0 104m app=myapp,pod-template-hash=79579b5b68
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 26h component=apiserver,provider=kubernetes
service/myapp-service LoadBalancer 10.0.223.167 $PUBLIC_IP 8080:31000/TCP 104m <none>
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/myapp 1/1 1 1 104m app=myapp
NAME DESIRED CURRENT READY AGE LABELS
replicaset.apps/myapp-79579b5b68 1 1 1 104m app=myapp,pod-template-hash=79579b5b68
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080 # container port of Deployment; kubectl describe pod <podname> | grep Port
    nodePort: 31000 # http://external-ip:nodePort
Depending on your requirements, you can create an internal or a public load balancer attached to the application Service. After that, you can access the service from outside the Kubernetes cluster.
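For example, if an internal load balancer is what you need on AKS, the documented annotation goes on the Service metadata. A sketch based on the Service above:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"  # ask AKS for an internal LB instead of a public one
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31000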

Nginx minikube ingress : 503 Server error

I am trying to use minikube to deploy a sample Flask app, but I am getting a 503 nginx error. Please note I am able to access the app using the NodePort service config.
I checked with the minikube IP, which is mapped to localhost, and tried to access the app, but I am getting the 503 error. Not sure if I missed anything. I enabled the minikube ingress addon for nginx.
Here are my files -
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-deployment
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
      - name: flaskapp
        image: <repo>/sample-flask-app:1.0
        ports:
        - containerPort: 5000
        env:
        - name: APPLICATION_SETTINGS
          value: prd_config.py
      imagePullSecrets:
      - name: jfrog-secret
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-service
  labels:
    app: flaskapp
spec:
  selector:
    app: flaskapp
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flaskapp-ingress
  labels:
    app: flaskapp
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
  - host: mydashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flaskapp-service
            port:
              number: 5000
Ingress status :
minikube kubectl -- get ingress flaskapp-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
flaskapp-ingress nginx mydashboard.com localhost 80 18m
Cluster status:
minikube kubectl -- get all
NAME READY STATUS RESTARTS AGE
pod/flaskapp-deployment-7f59f96fd5-j9mv9 1/1 Running 1 (103m ago) 15h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/flaskapp-deployment ClusterIP 10.103.143.58 <none> 5000/TCP 34m
service/flaskapp-service ClusterIP 10.111.242.99 <none> 5000/TCP 15h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapp-deployment 1/1 1 1 15h
NAME DESIRED CURRENT READY AGE
replicaset.apps/flaskapp-deployment-7f59f96fd5 1 1 1 15h
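For what it's worth, a host-based rule like the one above is usually exercised by sending the Ingress host to the minikube address; a sketch, assuming the mydashboard.com host from the Ingress above and plain HTTP on port 80:
curl --resolve mydashboard.com:80:$(minikube ip) http://mydashboard.com/
# or add "$(minikube ip) mydashboard.com" to /etc/hosts and open http://mydashboard.com/ in a browser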

Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.
Then I installed MetalLB as a LoadBalancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my yaml files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the browser request is directed to one pod; if I refresh or open a new tab (on the same URL), the request is redirected to another pod.
I need each user who opens the URL in a browser to stay connected to one specific pod for the whole session, not switch to other pods.
How can I solve this problem, if it is possible?
On my VMs I solved this with sticky sessions; how can I enable them on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file the "sessionAffinity" is set to "None".
You should try to set it to "ClientIP".
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

Kubernetes Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"

Playing around with K8s and Ingress in a local minikube setup. Creating an Ingress from a yaml file with the networking.k8s.io/v1 API version fails. See the output below.
Executing
> kubectl apply -f ingress.yaml
returns
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
in local minikube environment with hyperkit as vm driver.
Here is the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongodb-express-ingress
  namespace: hello-world
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: mongodb-express.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mongodb-express-service-internal
            port:
              number: 8081
Here is the mongodb-express deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-express
  namespace: hello-world
  labels:
    app: mongodb-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-express
  template:
    metadata:
      labels:
        app: mongodb-express
    spec:
      containers:
      - name: mongodb-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongodb-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongodb-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: mongodb_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-external
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-internal
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
Some more information:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
> minikube version
minikube version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5
> kubectl get all -n hello-world
NAME READY STATUS RESTARTS AGE
pod/mongodb-68d675ddd7-p4fh7 1/1 Running 0 3h29m
pod/mongodb-express-6586846c4c-5nfg7 1/1 Running 6 3h29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongodb-express-service-external LoadBalancer 10.106.185.132 <pending> 8081:30000/TCP 3h29m
service/mongodb-express-service-internal ClusterIP 10.103.122.120 <none> 8081/TCP 3h3m
service/mongodb-service ClusterIP 10.96.197.136 <none> 27017/TCP 3h29m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongodb 1/1 1 1 3h29m
deployment.apps/mongodb-express 1/1 1 1 3h29m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongodb-68d675ddd7 1 1 1 3h29m
replicaset.apps/mongodb-express-6586846c4c 1 1 1 3h29m
> minikube addons enable ingress
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
> kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-2bn8h 0/1 Completed 0 4h4m
pod/ingress-nginx-admission-patch-vsdqn 0/1 Completed 0 4h4m
pod/ingress-nginx-controller-5d88495688-n6f67 1/1 Running 0 4h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.111.176.223 <none> 80:32740/TCP,443:30636/TCP 4h4m
service/ingress-nginx-controller-admission ClusterIP 10.97.107.77 <none> 443/TCP 4h4m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 4h4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-5d88495688 1 1 1 4h4m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 7s 4h4m
job.batch/ingress-nginx-admission-patch 1/1 9s 4h4m
However, it works for the beta api version, i.e.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mongodb-express-ingress-deprecated
  namespace: hello-world
spec:
  rules:
  - host: mongodb-express.local
    http:
      paths:
      - path: /
        backend:
          serviceName: mongodb-express-service-internal
          servicePort: 8081
Any help very much appreciated.
I had the same issue. I successfully fixed it using:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
then apply the yaml files:
kubectl apply -f ingress_file.yaml
I had the same problem as you; see this issue: https://github.com/kubernetes/minikube/issues/11121.
Two ways you can try:
1. Download the new version, or go back to the old version.
2. Do the strange thing balnbibarbi described, shown below.
2. The Strange Thing
# Run without --addons=ingress
sudo minikube start --vm-driver=none #--addons=ingress
# install external ingress-nginx
sudo helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
sudo helm repo update
sudo helm install ingress-nginx ingress-nginx/ingress-nginx
# expose your services
Then you will find that your Ingress lacks Endpoints. Next:
sudo minikube addons enable ingress
After a few minutes, the Endpoints appear.
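A quick way to see what this answer describes is to check whether the admission service has endpoints (the namespace and service name below match the output shown earlier in this question):
kubectl -n ingress-nginx get endpoints ingress-nginx-controller-admission
# an empty ENDPOINTS column means the API server has nothing to call, hence the "failed calling webhook" error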
Problem
If you search for examples that use the ingress addon, you will notice that the output below lacks an ingress controller:
root@ubuntu:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-xnmx2 1/1 Running 1 4h40m
etcd-ubuntu 1/1 Running 1 4h40m
kube-apiserver-ubuntu 1/1 Running 1 4h40m
kube-controller-manager-ubuntu 1/1 Running 1 4h40m
kube-proxy-k9lnl 1/1 Running 1 4h40m
kube-scheduler-ubuntu 1/1 Running 2 4h40m
storage-provisioner 1/1 Running 3 4h40m
Ref: Expecting apiVersion - networking.k8s.io/v1 instead of extensions/v1beta1
TL;DR
kubectl explain predated a lot of the generic resource parsing logic, so it has a dedicated --api-version flag. This should do what you want.
kubectl explain ingresses --api-version=networking.k8s.io/v1
This should solve your doubt!
In my case, it was a previous deployment of NGINX. Check with:
kubectl get ValidatingWebhookConfiguration -A
If there is more than one NGINX, then delete the older one.
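For reference, deletion uses the same command as the earlier answer; the name below is a placeholder for whichever older configuration shows up in the list:
kubectl delete validatingwebhookconfiguration <old-ingress-nginx-admission>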
You can also get this error on GKE private clusters as a firewall rule is not configured automatically.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules
https://github.com/kubernetes/kubernetes/issues/79739
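As a sketch of that GKE fix (the network name, master CIDR, and node tag below are placeholders; 8443 is the port the ingress-nginx admission webhook listens on):
# placeholders: replace the network, master CIDR, and node tag with your cluster's values
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
  --network=my-vpc --source-ranges=172.16.0.0/28 \
  --target-tags=my-gke-node-tag --allow=tcp:8443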

how to access istio mesh from browser

I'm trying to inject Istio into my Kubernetes minikube environment on my local Ubuntu 16.04 system. This is my deployment yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-master
  labels:
    run: nodejs-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-master
    spec:
      containers:
      - name: nodejs-master
        image: hegdemahendra9/nodejs-master:v1
        ports:
        - containerPort: 8080
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: port1
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-slave
  labels:
    run: nodejs-slave
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-slave
    spec:
      containers:
      - name: nodejs-slave
        image: hegdemahendra9/nodejs-slave:v1
        ports:
        - containerPort: 8081
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-slave
spec:
  selector:
    run: nodejs-slave
  ports:
  - name: port1
    protocol: TCP
    port: 8081
    targetPort: 8081
  type: NodePort
I've enabled automatic sidecar injection and ran kubectl apply -f deployment.yaml.
I've installed Istio via this method.
Here are my Istio installation details:
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-citadel-6d7f9c545b-r665q 1/1 Running 0 2h
istio-cleanup-secrets-qg4zh 0/1 Completed 0 2h
istio-egressgateway-866885bb49-9l5rx 1/1 Running 0 2h
istio-galley-6d74549bb9-jslss 1/1 Running 0 2h
istio-ingressgateway-6c6ffb7dc8-rzvxb 1/1 Running 0 2h
istio-pilot-685fc95d96-6296x 0/2 Pending 0 2h
istio-policy-688f99c9c4-trg2j 2/2 Running 0 2h
istio-security-post-install-gs6vk 0/1 Completed 0 2h
istio-sidecar-injector-74855c54b9-j94qr 1/1 Running 0 2h
istio-telemetry-69b794ff59-rqbzw 2/2 Running 0 2h
prometheus-f556886b8-kj5ks 1/1 Running 0 2h
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-citadel ClusterIP 10.108.144.211 <none> 8060/TCP,9093/TCP 2h
istio-egressgateway NodePort 10.99.160.138 <none> 80:32415/TCP,443:32480/TCP 2h
istio-galley ClusterIP 10.97.0.188 <none> 443/TCP,9093/TCP 2h
istio-ingressgateway NodePort 10.97.75.20 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32188/TCP,8060:31372/TCP,853:31197/TCP,15030:30606/TCP,15031:31026/TCP 2h
istio-pilot ClusterIP 10.106.145.225 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 2h
istio-policy ClusterIP 10.110.104.100 <none> 9091/TCP,15004/TCP,9093/TCP 2h
istio-sidecar-injector ClusterIP 10.99.236.121 <none> 443/TCP 2h
istio-telemetry ClusterIP 10.103.92.170 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 2h
prometheus ClusterIP 10.105.31.126 <none> 9090/TCP
Here are my deployment details:
$kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-master-6494d9dd66-pdbd6 2/2 Running 0 2h
nodejs-slave-599cd5d676-6w4s8 2/2 Running 0 2h
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
nodejs-master ClusterIP 10.104.99.240 <none> 8080/TCP 2h
nodejs-slave NodePort 10.101.120.229 <none> 8081:31263/TCP 2h
Here's my gateway yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ms-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mater-slave
spec:
  hosts:
  - "*"
  gateways:
  - ms-gateway
  http:
  - match:
    - uri:
        prefix: /master
    route:
    - destination:
        host: nodejs-master
        port:
          number: 8080
I've applied my gateway using the kubectl apply command, and I'm trying to access it using
http://$(minikube ip):$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')/master
i.e. http://192.168.99.100:31380/master
but I'm getting a connection refused error. Someone please help.
Thanks in advance.
Maybe it's the name of the service port. Istio requires named service ports of the form <protocol>[-<suffix>], e.g. "tcp-*" or "http-*". See https://istio.io/docs/setup/kubernetes/additional-setup/requirements/
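A sketch of that rename on the nodejs-master Service above; an http- prefix fits plain HTTP traffic, while tcp- also satisfies the naming rule:
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: http-master   # renamed from "port1" to follow Istio's <protocol>[-<suffix>] convention
    protocol: TCP
    port: 8080
    targetPort: 8080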