How to access Istio mesh from browser - Kubernetes

I'm trying to inject Istio into my Kubernetes cluster in a minikube environment on my local Ubuntu 16.04 system. This is my deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-master
  labels:
    run: nodejs-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-master
    spec:
      containers:
      - name: nodejs-master
        image: hegdemahendra9/nodejs-master:v1
        ports:
        - containerPort: 8080
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: port1
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-slave
  labels:
    run: nodejs-slave
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-slave
    spec:
      containers:
      - name: nodejs-slave
        image: hegdemahendra9/nodejs-slave:v1
        ports:
        - containerPort: 8081
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-slave
spec:
  selector:
    run: nodejs-slave
  ports:
  - name: port1
    protocol: TCP
    port: 8081
    targetPort: 8081
  type: NodePort
I've enabled automatic sidecar injection and ran $ kubectl apply -f deployment.yaml
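(For reference, automatic injection is usually turned on by labeling the target namespace, here presumably default; a quick sanity check that the label is actually set:)
kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection   # the ISTIO-INJECTION column should read "enabled"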
I've installed Istio via this method.
Here are my Istio installation details:
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-citadel-6d7f9c545b-r665q 1/1 Running 0 2h
istio-cleanup-secrets-qg4zh 0/1 Completed 0 2h
istio-egressgateway-866885bb49-9l5rx 1/1 Running 0 2h
istio-galley-6d74549bb9-jslss 1/1 Running 0 2h
istio-ingressgateway-6c6ffb7dc8-rzvxb 1/1 Running 0 2h
istio-pilot-685fc95d96-6296x 0/2 Pending 0 2h
istio-policy-688f99c9c4-trg2j 2/2 Running 0 2h
istio-security-post-install-gs6vk 0/1 Completed 0 2h
istio-sidecar-injector-74855c54b9-j94qr 1/1 Running 0 2h
istio-telemetry-69b794ff59-rqbzw 2/2 Running 0 2h
prometheus-f556886b8-kj5ks 1/1 Running 0 2h
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-citadel ClusterIP 10.108.144.211 <none> 8060/TCP,9093/TCP 2h
istio-egressgateway NodePort 10.99.160.138 <none> 80:32415/TCP,443:32480/TCP 2h
istio-galley ClusterIP 10.97.0.188 <none> 443/TCP,9093/TCP 2h
istio-ingressgateway NodePort 10.97.75.20 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32188/TCP,8060:31372/TCP,853:31197/TCP,15030:30606/TCP,15031:31026/TCP 2h
istio-pilot ClusterIP 10.106.145.225 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 2h
istio-policy ClusterIP 10.110.104.100 <none> 9091/TCP,15004/TCP,9093/TCP 2h
istio-sidecar-injector ClusterIP 10.99.236.121 <none> 443/TCP 2h
istio-telemetry ClusterIP 10.103.92.170 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 2h
prometheus ClusterIP 10.105.31.126 <none> 9090/TCP
Here are my deployment details:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-master-6494d9dd66-pdbd6 2/2 Running 0 2h
nodejs-slave-599cd5d676-6w4s8 2/2 Running 0 2h
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
nodejs-master ClusterIP 10.104.99.240 <none> 8080/TCP 2h
nodejs-slave NodePort 10.101.120.229 <none> 8081:31263/TCP 2h
Here's my gateway YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ms-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mater-slave
spec:
  hosts:
  - "*"
  gateways:
  - ms-gateway
  http:
  - match:
    - uri:
        prefix: /master
    route:
    - destination:
        host: nodejs-master
        port:
          number: 8080
I've applied my gateway using the kubectl apply command, and I'm trying to access it using
http://$(minikube ip):$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')/master
i.e. http://192.168.99.100:31380/master
but I'm getting a connection refused error. Can someone please help?
Thanks in advance.
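Two hedged checks based on the details above: the jsonpath filter needs @ rather than #, and istio-pilot shows 0/2 Pending in the output, which by itself can explain the gateway never receiving its route configuration (assuming the istio=pilot label that the standard charts apply):
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY_URL=$(minikube ip):$INGRESS_PORT
curl -v "http://$GATEWAY_URL/master"
# Why is Pilot Pending? The Events section usually says - on a default-sized
# minikube VM it is often "Insufficient cpu" or "Insufficient memory":
kubectl -n istio-system describe pods -l istio=pilot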

Maybe it's the name of the service port. It should follow Istio's "<protocol>[-<suffix>]" convention, e.g. "http-*" here, since the traffic is routed with an HTTP rule. https://istio.io/docs/setup/kubernetes/additional-setup/requirements/
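For illustration, a sketch of the master service with an Istio-compliant port name (http-web is a hypothetical suffix; any <protocol>[-<suffix>] form works, and the http prefix is what lets Istio treat the port as HTTP):
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: http-web   # was "port1"; Istio infers the protocol from this prefix
    protocol: TCP
    port: 8080
    targetPort: 8080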

Related

ingress-nginx working but nginx-ingress not

I have Keycloak installed on my Kubernetes cluster.
The default ingress which Keycloak creates looks like this:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    route.openshift.io/termination: passthrough
  creationTimestamp: "2022-11-09T13:08:00Z"
  generation: 1
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
  name: keycloak-kc-ingress
  namespace: default
  ownerReferences:
  - apiVersion: k8s.keycloak.org/v2alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Keycloak
    name: keycloak-kc
    uid: 67a18d00-4bee-4587-b330-cdaf21b39084
  resourceVersion: "155002"
  uid: 87c2aff4-1489-4ba9-bdf6-9fe1a288c800
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: keycloak.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.3
After installing ingress-nginx and adding kubernetes.io/ingress.class=nginx annotation, everything works.
For various reasons, however, I need to use nginx-ingress.
My new ingress looks like this.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # route.openshift.io/termination: passthrough
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
    # target: keycloak-kc-service
  name: keycloak-kc-ingress
  namespace: default
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: accounts.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - accounts.example.com
    secretName: keycloak-tls-secret
Unfortunately, this ingress returns the error "502 Bad Gateway".
We can't work out why. Please help.
Information for debugging
kubectl get deployments -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default keycloak-operator 2/2 2 2 141m
kube-system cilium-operator 1/1 1 1 148m
kube-system coredns 2/2 2 2 148m
kube-system konnectivity-agent 2/2 2 2 148m
kube-system metrics-server 2/2 2 2 148m
kubernetes-dashboard dashboard-metrics-scraper 2/2 2 2 148m
nginx-ingress nginx-ingress-nginx-ingress-nginx-ingress 1/1 1 1 127m
olm catalog-operator 1/1 1 1 142m
olm olm-operator 1/1 1 1 142m
olm packageserver 2/2 2 2 142m
kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default keycloak-kc-discovery ClusterIP None <none> 7800/TCP 114m
default keycloak-kc-service ClusterIP 10.240.18.67 <none> 8443/TCP 114m
default keycloak-operator ClusterIP 10.240.24.103 <none> 80/TCP 141m
default kubernetes ClusterIP 10.240.16.1 <none> 443/TCP 149m
default postgres-db ClusterIP 10.240.18.157 <none> 5432/TCP 140m
kube-system hcloud-csi-controller-metrics ClusterIP 10.240.30.190 <none> 9189/TCP 149m
kube-system hcloud-csi-node-metrics ClusterIP 10.240.26.123 <none> 9189/TCP 149m
kube-system kube-dns ClusterIP 10.240.16.10 <none> 53/TCP,53/UDP 149m
kube-system metrics-server ClusterIP 10.240.31.184 <none> 443/TCP 149m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.240.25.29 <none> 8000/TCP 149m
nginx-ingress nginx-ingress-nginx-ingress-nginx-ingress LoadBalancer 10.240.26.173 10.0.0.3,167.235.123.123,2a01:4f8:1c1f:6484::1 80:31670/TCP,443:30557/TCP 128m
olm operatorhubio-catalog ClusterIP 10.240.22.30 <none> 50051/TCP 142m
olm packageserver-service ClusterIP 10.240.23.246 <none>
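One hedged observation from the output above: the controller runs in the nginx-ingress namespace with nginx-ingress-style names, which suggests the NGINX Inc. controller rather than the community ingress-nginx one. That controller ignores nginx.ingress.kubernetes.io/* annotations; its equivalent for a TLS backend such as keycloak-kc-service (serving HTTPS on 8443) is nginx.org/ssl-services, and proxying plain HTTP to an HTTPS port is a classic cause of 502. A sketch of the annotations under that assumption:
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.org/ssl-services: "keycloak-kc-service"   # NGINX Inc. equivalent of backend-protocol: HTTPS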

Cluster Kubernetes - Deploy httpd and access from external

I created my Kubernetes cluster and I'm trying to deploy this YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 1
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30020
    name: httpd-port
  type: NodePort
This is the configuration:
[root@BCA-TST-K8S01 httpd-deploy]# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/httpd-deployment-57fc687dcc-rggx9 1/1 Running 0 8m51s 10.44.0.1 bcc-tst-docker02 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/httpd-service NodePort 10.102.138.175 <none> 8080:30020/TCP 8m51s app=httpd-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 134m <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/httpd-deployment 1/1 1 1 8m51s httpd httpd app=httpd
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/httpd-deployment-57fc687dcc 1 1 1 8m51s httpd httpd app=httpd,pod-template-hash=57fc687dcc
But I can't connect to the worker node or via the cluster IP:
curl http://bcc-tst-docker02:30020
curl: (7) Failed to connect to bcc-tst-docker02 port 30020: Connection refused
How can I fix the problem?
How can I expose the cluster using the internal master IP (for example, I need to access httpd-deploy from the master IP 10.100.170.150, opening a browser on the same network)?
UPDATE:
I modified my yaml file as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd-app
  replicas: 2
  template:
    metadata:
      labels:
        app: httpd-app
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  externalIPs:
  - 10.100.170.150   # --> K8S master IP
  externalTrafficPolicy: Cluster
  ports:
  - name: httpd-port
    protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30020
  selector:
    app: httpd-app
  sessionAffinity: None
  type: LoadBalancer
And these are the results after I ran the apply command:
[root@K8S01 LoadBalancer]# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/httpd-deployment-65d64d47c5-72xp4 1/1 Running 0 60s 10.44.0.2 bcc-tst-docker02 <none> <none>
pod/httpd-deployment-65d64d47c5-fc645 1/1 Running 0 60s 10.36.0.1 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/http-service LoadBalancer 10.100.236.203 10.100.170.150 8080:30020/TCP 60s app=httpd-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13d <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/httpd-deployment 2/2 2 2 60s httpd httpd app=httpd-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/httpd-deployment-65d64d47c5 2 2 2 60s httpd httpd app=httpd-app,pod-template-hash=65d64d47c5
but now when I try to connect to httpd using the K8S IP I receive this error:
[root@K8S01 LoadBalancer]# curl http://10.100.170.150:8080
curl: (7) Failed to connect to 10.100.170.150 port 8080: No route to host
[root@K8S01 LoadBalancer]# curl http://10.100.236.203:8080
curl: (7) Failed to connect to 10.100.236.203 port 8080: No route to host
If I try to connect directly to the node I can connect:
[root@K8S01 LoadBalancer]# curl http://bca-tst-docker01:30020
<html><body><h1>It works!</h1></body></html>
[root@K8S01 LoadBalancer]# curl http://bcc-tst-docker02:30020
<html><body><h1>It works!</h1></body></html>
You're getting the connection refused error because the service does not have any endpoints behind it: the label selector in your service differs from the labels on the deployment's pods.
The deployment's pods carry the label app: httpd, while the service tries to select pods labeled app: httpd-app. Below you can find the corrected selector:
kind: Service
apiVersion: v1
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd   # <------- must match the pod label from the deployment
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30020
    name: httpd-port
  type: NodePort
You can always verify whether the service has endpoints. Kubernetes has a great section about debugging services, and one of its steps is called: Does the Service have any Endpoints?
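For example, a quick check with the names from this question:
kubectl get endpoints httpd-service
# With the corrected selector, the ENDPOINTS column should list the pod IP,
# e.g. 10.44.0.1:80; "<none>" means the selector still matches no pods.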

k3s on arch linux ARM worker service not responding

Current setup:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cl01mtr01 Ready master 104m v1.18.2+k3s1 10.1.1.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 containerd://1.3.3-k3s2
cl01wkr01 Ready <none> 9m20s v1.18.2+k3s1 10.1.1.101 <none> Arch Linux ARM 5.4.40-1-ARCH containerd://1.3.3-k3s2
Master installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--cluster-cidr 172.20.0.0/16 \
--service-cidr 172.21.0.0/16 \
--cluster-dns 172.21.0.10 \
--disable traefik
Worker installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - agent \
--server https://10.1.1.1:6443 \
--token <token from master>
I also tried with a Raspberry Pi as master running Arch Linux and Raspbian, and a Rock Pi 64 with Armbian.
I tried with k3s versions:
v1.17.4+k3s1
v1.17.5+k3s1
v1.18.2+k3s1
I also tested with docker and the --docker install option in k3s.
The nodes get discovered (as shown above), but I cannot access the service on my worker node(s) (Raspberry Pi 3 with Arch Linux ARM) via http://10.1.1.1:30001, although it can be accessed via kubectl exec.
I always get a connection timeout:
This site can’t be reached
10.1.1.1 took too long to respond.
When the pod runs on the master node, or if the worker is an amd64 node, it can be accessed via http://10.1.1.1:30001.
This is the resource I try to load and access:
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-default-configmap
  namespace: nginx
data:
  default.conf: |
    server {
        listen 80;
        listen [::]:80;
        #server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
    nodePort: 30001
  - name: https
    targetPort: 443
    port: 443
    nodePort: 30002
  selector:
    app: nginx
  type: NodePort
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: NotIn
                values:
                - "true"
      containers:
      - name: nginx
        image: nginx:stable
        imagePullPolicy: Always
        env:
        - name: TZ
          value: "Europe/Brussels"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        volumeMounts:
        - name: default-conf
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: default-conf
        configMap:
          name: nginx-default-configmap
Some extra info:
> kubectl get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/local-path-provisioner-6d59f47c7-d477m 1/1 Running 0 116m 172.20.0.4 cl01mtr01 <none> <none>
kube-system pod/metrics-server-7566d596c8-fbb7b 1/1 Running 0 116m 172.20.0.2 cl01mtr01 <none> <none>
kube-system pod/coredns-8655855d6-gnbsm 1/1 Running 0 116m 172.20.0.3 cl01mtr01 <none> <none>
nginx pod/nginx-daemonset-l4j7s 1/1 Running 0 52s 172.20.1.3 cl01wkr01 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116m <none>
kube-system service/kube-dns ClusterIP 172.21.0.10 <none> 53/UDP,53/TCP,9153/TCP 116m k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 172.21.152.234 <none> 443/TCP 116m k8s-app=metrics-server
nginx service/nginx-service NodePort 172.21.14.185 <none> 80:30001/TCP,443:30002/TCP 52s app=nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
nginx daemonset.apps/nginx-daemonset 1 1 1 1 1 <none> 52s nginx nginx:stable app=nginx
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/local-path-provisioner 1/1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner
kube-system deployment.apps/metrics-server 1/1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server
kube-system deployment.apps/coredns 1/1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/local-path-provisioner-6d59f47c7 1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner,pod-template-hash=6d59f47c7
kube-system replicaset.apps/metrics-server-7566d596c8 1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server,pod-template-hash=7566d596c8
kube-system replicaset.apps/coredns-8655855d6 1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns,pod-template-hash=8655855d6
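A few hedged checks that might narrow down where the NodePort path breaks, using the IPs and names from the output above (assuming curl is available on the master):
# Can the master reach the pod IP on the worker at all? (tests the CNI overlay)
curl -m 5 http://172.20.1.3:80
# Is the NodePort open on the worker itself, not only on the master?
curl -m 5 http://10.1.1.101:30001
# Does the service actually have the pod registered as an endpoint?
kubectl -n nginx get endpoints nginx-service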

Kubernetes DNS and NetworkPolicy with Calico not working

I have a Minikube cluster with Calico running and I am trying to make NetworkPolicies working. Here are my Pods and Services:
First pod (team-a):
apiVersion: v1
kind: Pod
metadata:
  name: team-a
  namespace: orga-1
  labels:
    run: nginx
    app: team-a
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-a
  namespace: orga-1
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-a
Second pod (team-b):
apiVersion: v1
kind: Pod
metadata:
  name: team-b
  namespace: orga-2
  labels:
    run: nginx
    app: team-b
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-b
  namespace: orga-2
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-b
When I execute a bash in team-a, I cannot curl orga-2.team-b:
dev@ubuntu:~$ kubectl exec -it -n orga-1 team-a /bin/bash
root@team-a:/# curl google.de
//Body removed...
root@team-a:/# curl orga-2.team-b
curl: (6) Could not resolve host: orga-2.team-b
Now I applied a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-base-rule
  namespace: orga-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
When I now curl google in team-a, it still works. Here are my pods:
kube-system calico-etcd-hbpqc 1/1 Running 0 27m
kube-system calico-kube-controllers-6b86746955-5mk9v 1/1 Running 0 27m
kube-system calico-node-72rcl 2/2 Running 0 27m
kube-system coredns-fb8b8dccf-6j64x 1/1 Running 1 29m
kube-system coredns-fb8b8dccf-vjwl7 1/1 Running 1 29m
kube-system default-http-backend-6864bbb7db-5c25r 1/1 Running 0 29m
kube-system etcd-minikube 1/1 Running 0 28m
kube-system kube-addon-manager-minikube 1/1 Running 0 28m
kube-system kube-apiserver-minikube 1/1 Running 0 28m
kube-system kube-controller-manager-minikube 1/1 Running 0 28m
kube-system kube-proxy-p48xv 1/1 Running 0 29m
kube-system kube-scheduler-minikube 1/1 Running 0 28m
kube-system nginx-ingress-controller-586cdc477c-6rh6w 1/1 Running 0 29m
kube-system storage-provisioner 1/1 Running 0 29m
orga-1 team-a 1/1 Running 0 20m
orga-2 team-b 1/1 Running 0 7m20s
and my services:
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
kube-system calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 27m
kube-system default-http-backend NodePort 10.105.84.105 <none> 80:30001/TCP 29m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 29m
orga-1 team-a ClusterIP 10.101.4.159 <none> 80/TCP 8m37s
orga-2 team-b ClusterIP 10.105.79.255 <none> 80/TCP 7m54s
The kube-dns endpoint is available, and so is the service.
Why is my network policy not working, and why does the curl to the other pod fail?
Please run
curl team-a.orga-1.svc.cluster.local
curl team-b.orga-2.svc.cluster.local
(the cluster DNS name is <service>.<namespace>.svc.cluster.local, not <namespace>.<service>, which is why orga-2.team-b did not resolve) and verify the entries in 'cat /etc/resolv.conf'.
Note that your deny-all policy lists only Ingress in policyTypes, so outbound traffic such as curl google.de is not affected by it.
If you can reach your pods, then please follow this tutorial.
Deny all ingress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orga-1
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
and Allow ingress traffic to Nginx:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
Below you can find more information about:
Pod’s DNS Policy,
Network Policies
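As a quick verification once the policies are applied (a minimal check, using the pod and service names from the question; the nginx-curl image ships curl):
kubectl exec -it -n orga-1 team-a -- curl -m 5 team-b.orga-2.svc.cluster.local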
Hope this helps.

kubernetes ingress nginx controller

In my kubeadm Kubernetes cluster I deployed my app with:
kubectl run gmt-dpl --image=192.168.56.33:5000/img:gmt --port 8181
Expose the app with:
kubectl expose deployment gmt-dpl --name=gmt-svc --type=NodePort --port=443 --target-port=8181
My gmt-svc:
Name: gmt-svc
Namespace: default
Labels: run=gmt-dpl
Annotations: <none>
Selector: run=gmt-dpl
Type: NodePort
IP: 10.96.74.133
Port: <unset> 443/TCP
TargetPort: 8181/TCP
NodePort: <unset> 30723/TCP
Endpoints: 10.44.0.17:8181
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I'm able to access my app on https://nodeip:30723
Following the GitHub doc I managed to install the nginx controller:
NAMESPACE NAME READY STATUS RESTARTS AGE
default gmt-dpl-84fb9cfd8d-vz2bw 1/1 Running 0 1h
ingress-nginx default-http-backend-55c6c69b88-c4k6c 1/1 Running 5 2d
ingress-nginx nginx-ingress-controller-9c7b694-szgjd 1/1 Running 8 2d
kube-system etcd-k8s-master 1/1 Running 5 2d
kube-system kube-apiserver-k8s-master 1/1 Running 1 4h
kube-system kube-controller-manager-k8s-master 1/1 Running 7 2d
kube-system kube-dns-6f4fd4bdf-px5f8 3/3 Running 12 2d
kube-system kube-proxy-jswfx 1/1 Running 8 2d
kube-system kube-proxy-n2chh 1/1 Running 4 2d
kube-system kube-scheduler-k8s-master 1/1 Running 7 2d
kube-system weave-net-4k8sp 2/2 Running 10 1d
kube-system weave-net-bqjzb 2/2 Running 19 1d
I exposed the nginx-controller:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-svc
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: http
  - port: 443
    nodePort: 30000
    name: https
  - port: 18080
    nodePort: 30002
    name: status
  selector:
    app: ingress-nginx
And I created an ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  name: test-ingress
spec:
  tls:
  - hosts:
    - kube.gmt.dz
    secretName: foo-secret
  rules:
  - host: kube.gmt.dz
    http:
      paths:
      - path: /*
        backend:
          serviceName: gmt-svc
          servicePort: 443
Now, while trying to access my app through the nginx controller with https://ip_master:30000, I get "default backend - 404", and the same error with http://ip_master:30001, knowing that my app is only accessible over HTTPS.
In the ingress resource I tried servicePort: 8181 but I got the same error.
Any ideas?
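Two hedged observations that may help here: nginx ingress paths are not glob patterns, so path: /* will likely not match as intended (plain / is the usual catch-all), and the rule is host-based, so a request to https://ip_master:30000 without a Host: kube.gmt.dz header falls through to the default backend, which is exactly the "default backend - 404" seen. The controller also proxies to backends over plain HTTP unless told otherwise; since gmt-svc only speaks HTTPS, the secure-backends annotation (the mechanism in nginx-ingress releases contemporary with extensions/v1beta1) would be needed. A sketch under those assumptions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/secure-backends: "true"   # backend speaks HTTPS
  name: test-ingress
spec:
  tls:
  - hosts:
    - kube.gmt.dz
    secretName: foo-secret
  rules:
  - host: kube.gmt.dz
    http:
      paths:
      - path: /
        backend:
          serviceName: gmt-svc
          servicePort: 443
And to test against the NodePort while still matching the host rule:
curl -kv -H 'Host: kube.gmt.dz' https://ip_master:30000/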