How to get logs of deployment from Kubernetes? - kubernetes

I am creating an InfluxDB deployment in a Kubernetes cluster (v1.15.2), this is my yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
And this is the deployment status:
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 163d
kubernetes-dashboard 1/1 1 1 164d
monitoring-grafana 0/1 0 0 12m
monitoring-influxdb 0/1 0 0 11m
Now I've been waiting 30 minutes and there is still no pod available. How do I check the deployment logs from the command line? I cannot access the Kubernetes dashboard right now. I am looking for a command to get the pod logs, but there is no pod available yet. I already tried adding a label to the node:
kubectl label nodes azshara-k8s03 k8s-app=influxdb
This is my deployment describe content:
$ kubectl describe deployments monitoring-influxdb -n kube-system
Name:                   monitoring-influxdb
Namespace:              kube-system
CreationTimestamp:      Wed, 04 Mar 2020 11:15:52 +0800
Labels:                 k8s-app=influxdb
                        task=monitoring
Annotations:            kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"monitoring-influxdb","namespace":"kube-system"...
Selector:               k8s-app=influxdb,task=monitoring
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  k8s-app=influxdb
           task=monitoring
  Containers:
   influxdb:
    Image:        registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /data from influxdb-storage (rw)
  Volumes:
   influxdb-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
OldReplicaSets:  <none>
NewReplicaSet:   <none>
Events:          <none>
This is another way to get logs:
$ kubectl -n kube-system logs -f deployment/monitoring-influxdb
error: timed out waiting for the condition
There is no output for this command:
kubectl logs --selector k8s-app=influxdb
These are all my pods in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-569fd64d84-5q5pj 1/1 Running 0 46h
kubernetes-dashboard-6466b68b-z6z78 1/1 Running 0 11h
traefik-ingress-controller-hx4xd 1/1 Running 0 11h

kubectl logs deployment/<name-of-deployment> # logs of deployment
kubectl logs -f deployment/<name-of-deployment> # follow logs

You can try kubectl describe deploy monitoring-influxdb to get a high-level view of the deployment; maybe there is some useful information there.
For more detailed logs, first get the pods: kubectl get po
Then, request the pod logs: kubectl logs <pod-name>
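If the Deployment shows 0/0 pods, as in the question, there are no pod logs to read yet, so the controller side is the place to look. A minimal sketch, assuming the labels and namespace from the question's manifest:
# No pods means `kubectl logs` has nothing to read; check the ReplicaSet and events instead.
kubectl get replicaset -n kube-system -l k8s-app=influxdb
kubectl describe replicaset -n kube-system -l k8s-app=influxdb
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp
If no ReplicaSet exists at all, these events usually name the reason the controller refused to create one.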

Adding references of two great tools that might help you view cluster logs:
If you wish to view logs from your terminal without using a "heavy" 3rd-party logging solution, I would consider using K9s, which is a great CLI tool that helps you get control over your cluster.
If you are not bound to the CLI and still want to run something locally, I would recommend Lens.

Related

minikube service URL gives ECONNREFUSED on mac os Monterey

I have a spring-boot postgres setup that I am trying to containerize and deploy in minikube. My pods and services show that they are up.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-deployment-5bc57dcd4f-zrwzs 1/1 Running 0 14m
postgres-7f887f4d7d-5b8v5 1/1 Running 0 25m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
server-service NodePort 10.100.21.232 <none> 8080:31457/TCP 15m
postgres ClusterIP 10.97.19.125 <none> 5432/TCP 26m
$ minikube service list
|-------------|------------------|--------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|------------------|--------------|-----------------------------|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| custom | server-service | http/8080 | http://192.168.59.106:31457 |
| custom | postgres | No node port |
|-------------|------------------|--------------|-----------------------------|
But when I try to hit any of my endpoints using postman, I get:
Could not send request. Error: connect ECONNREFUSED 192.168.59.106:31457
I don't know where I am going wrong. I tried deploying the individual containers directly in docker (I had to modify some of the application.properties to get the rest server talking to the db container) and that works without a problem so clearly my server side code should not be a problem.
Here is the yml for the rest-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  namespace: custom
spec:
  selector:
    matchLabels:
      app: server-deployment
  template:
    metadata:
      name: server-deployment
      labels:
        app: server-deployment
    spec:
      containers:
      - name: server-deployment
        image: aruns1494/rest-server-k8s:latest
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: POSTGRES_SERVICE
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_service
---
apiVersion: v1
kind: Service
metadata:
  name: server-service
  namespace: custom
spec:
  selector:
    name: server-deployment
  ports:
  - name: http
    port: 8080
  type: NodePort
I have not changed the spring boot's default port so I expect it to work on 8080. I tried connecting to that URL through chrome and Firefox and I get the same error message. I expect it to fall back to a default error message page when I try to hit the / endpoint.
I did look up several online articles but none of them seem to help. I am also attaching my kube-system pods if that helps:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-x6mv6 1/1 Running 0 39m
etcd-minikube 1/1 Running 0 40m
kube-apiserver-minikube 1/1 Running 0 40m
kube-controller-manager-minikube 1/1 Running 0 40m
kube-proxy-dnr8p 1/1 Running 0 39m
kube-scheduler-minikube 1/1 Running 0 40m
storage-provisioner 1/1 Running 1 (39m ago) 40m
My suggestion is to check that the provided Deployment and Service have the same labels and selectors, because right now the pod label in the Deployment config is app: server-deployment, while the selector in the Service config is name: server-deployment.
If we want to use the name: server-deployment selector for the Service, then we need to update the Deployment as shown below (the matchLabels and labels fields):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  namespace: custom
spec:
  selector:
    matchLabels:
      name: server-deployment
  template:
    metadata:
      name: server-deployment
      labels:
        name: server-deployment
    spec:
      containers:
      ...
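To confirm that the labels and selector now agree, one quick check (a sketch using the names from the question) is whether the Service actually has endpoints:
kubectl get pods -n custom --show-labels          # labels actually carried by the pod
kubectl get endpoints server-service -n custom    # an empty ENDPOINTS column means the selector matches nothing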
Possibly the macOS firewall is blocking the connection. Could you try navigating to System Preferences > Security & Privacy and see if the port is being blocked in the General tab? You can also disable the firewall in the Firewall tab.

How to identify unhealthy pods in a statefulset

I have a StatefulSet with 6 replicas.
All of a sudden the StatefulSet thinks there are 5 ready replicas out of 6. When I look at the pod status, all 6 pods are ready with all the readiness checks passed (1/1).
Now I am trying to find logs or status that shows which pod is unhealthy as per the StatefulSet, so I could debug further.
Where can I find information or logs for the StatefulSet that could tell me which pod is unhealthy? I have already checked the output of describe pods and describe statefulset but none of them show which pod is unhealthy.
So let's say you created the following StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    user: anurag
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      user: anurag # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        user: anurag # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
Result is:
kubectl get StatefulSet web -o wide
NAME READY AGE CONTAINERS IMAGES
web 6/6 8m31s nginx k8s.gcr.io/nginx-slim:0.8
We can also check the StatefulSet's status with:
kubectl get statefulset web -o yaml
status:
  collisionCount: 0
  currentReplicas: 6
  currentRevision: web-599978b754
  observedGeneration: 1
  readyReplicas: 6
  replicas: 6
  updateRevision: web-599978b754
  updatedReplicas: 6
As per Debugging a StatefulSet, you can list all the pods which belong to a current StatefulSet using labels.
$ kubectl get pods -l user=anurag
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 13m
web-1 1/1 Running 0 12m
web-2 1/1 Running 0 12m
web-3 1/1 Running 0 12m
web-4 1/1 Running 0 12m
web-5 1/1 Running 0 11m
Here, at this point, if any of your pods aren't available, you will definitely see that. The next step is Debug Pods and ReplicationControllers, including checking whether you have sufficient resources to start all these pods, and so on.
Describing the problematic pod (kubectl describe pod web-0) should give you the answer as to why that happened, at the very end in the Events section.
For example, if you use the original yaml as it is for this example from statefulset components, you will get an error and none of your pods will be up and running. (The reason is storageClassName: "my-storage-class".)
The exact error, and an understanding of what is happening, comes from describing the problematic pod; that's how it works.
kubectl describe pod web-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 31s (x2 over 31s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
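In that unbound-PVC case, a hedged follow-up check is to look at the claims the StatefulSet created from its volumeClaimTemplates and at the available StorageClasses (claim names follow the <template>-<pod> pattern, e.g. www-web-0):
kubectl get pvc                   # claims created from volumeClaimTemplates, e.g. www-web-0
kubectl describe pvc www-web-0    # the Events here explain why the claim stays Pending
kubectl get storageclass          # does the requested storageClassName actually exist?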

Kubernetes Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"

Playing around with K8s and Ingress in a local minikube setup. Creating an Ingress from a yaml file with the networking.k8s.io/v1 API version fails. See the output below.
Executing
> kubectl apply -f ingress.yaml
returns
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
in local minikube environment with hyperkit as vm driver.
Here is the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongodb-express-ingress
  namespace: hello-world
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: mongodb-express.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mongodb-express-service-internal
            port:
              number: 8081
Here is the mongodb-express deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-express
  namespace: hello-world
  labels:
    app: mongodb-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-express
  template:
    metadata:
      labels:
        app: mongodb-express
    spec:
      containers:
      - name: mongodb-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongodb-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongodb-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: mongodb_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-external
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-internal
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
Some more information:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
> minikube version
minikube version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5
> kubectl get all -n hello-world
NAME READY STATUS RESTARTS AGE
pod/mongodb-68d675ddd7-p4fh7 1/1 Running 0 3h29m
pod/mongodb-express-6586846c4c-5nfg7 1/1 Running 6 3h29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongodb-express-service-external LoadBalancer 10.106.185.132 <pending> 8081:30000/TCP 3h29m
service/mongodb-express-service-internal ClusterIP 10.103.122.120 <none> 8081/TCP 3h3m
service/mongodb-service ClusterIP 10.96.197.136 <none> 27017/TCP 3h29m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongodb 1/1 1 1 3h29m
deployment.apps/mongodb-express 1/1 1 1 3h29m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongodb-68d675ddd7 1 1 1 3h29m
replicaset.apps/mongodb-express-6586846c4c 1 1 1 3h29m
> minikube addons enable ingress
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
> kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-2bn8h 0/1 Completed 0 4h4m
pod/ingress-nginx-admission-patch-vsdqn 0/1 Completed 0 4h4m
pod/ingress-nginx-controller-5d88495688-n6f67 1/1 Running 0 4h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.111.176.223 <none> 80:32740/TCP,443:30636/TCP 4h4m
service/ingress-nginx-controller-admission ClusterIP 10.97.107.77 <none> 443/TCP 4h4m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 4h4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-5d88495688 1 1 1 4h4m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 7s 4h4m
job.batch/ingress-nginx-admission-patch 1/1 9s 4h4m
However, it works for the beta api version, i.e.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mongodb-express-ingress-deprecated
  namespace: hello-world
spec:
  rules:
  - host: mongodb-express.local
    http:
      paths:
      - path: /
        backend:
          serviceName: mongodb-express-service-internal
          servicePort: 8081
Any help very much appreciated.
I had the same issue. I successfully fixed it using:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
then apply the yaml files:
kubectl apply -f ingress_file.yaml
I had the same problem as you; see this issue: https://github.com/kubernetes/minikube/issues/11121.
There are two ways you can try:
1. Download the new version, or go back to the old version.
2. Do a strange thing like what balnbibarbi said, described below.
2. The Strange Thing
# Run without --addons=ingress
sudo minikube start --vm-driver=none #--addons=ingress
# install external ingress-nginx
sudo helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
sudo helm repo update
sudo helm install ingress-nginx ingress-nginx/ingress-nginx
# expose your services
And then you will find that your Ingress lacks Endpoints. Then run:
sudo minikube addons enable ingress
After a few minutes, the Endpoints appear.
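One way to confirm that, assuming a hypothetical Ingress name, is to describe the Ingress and check that its backends are now resolved:
kubectl get ingress
kubectl describe ingress <your-ingress>   # the backends should now list pod endpoints instead of <none>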
Problem
If you search Google for examples with the ingress addon, you will find that what the output below lacks is the ingress controller.
root@ubuntu:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-xnmx2 1/1 Running 1 4h40m
etcd-ubuntu 1/1 Running 1 4h40m
kube-apiserver-ubuntu 1/1 Running 1 4h40m
kube-controller-manager-ubuntu 1/1 Running 1 4h40m
kube-proxy-k9lnl 1/1 Running 1 4h40m
kube-scheduler-ubuntu 1/1 Running 2 4h40m
storage-provisioner 1/1 Running 3 4h40m
Ref: Expecting apiVersion - networking.k8s.io/v1 instead of extensions/v1beta1
TL;DR
kubectl explain predated a lot of the generic resource parsing logic, so it has a dedicated --api-version flag. This should do what you want.
kubectl explain ingresses --api-version=networking.k8s.io/v1
This should solve your doubt!
In my case, it was a previous deployment of NGINX. Check with:
kubectl get ValidatingWebhookConfiguration -A
If there is more than one NGINX, then delete the older one.
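A sketch of that cleanup, assuming the leftover configuration is the one named ingress-nginx-admission as in the answer above:
kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration ingress-nginx-admission   # delete only the stale one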
You can also get this error on GKE private clusters as a firewall rule is not configured automatically.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules
https://github.com/kubernetes/kubernetes/issues/79739
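For the GKE private-cluster case, the linked docs boil down to letting the control plane reach the admission webhook on the nodes. A sketch only; the network, CIDR and tag below are placeholders you would take from your own cluster:
# Allow the GKE master to reach the ingress-nginx admission webhook (TCP 8443) on the nodes.
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
  --network <cluster-network> \
  --source-ranges <master-ipv4-cidr> \
  --target-tags <node-tag> \
  --allow tcp:8443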

Why are pods created by the Deployment still Running on a NotReady node all the time?

I have three nodes. When I shut down cdh-k8s-3.novalocal, the pods on it keep Running all the time:
# kubectl get node
NAME STATUS ROLES AGE VERSION
cdh-k8s-1.novalocal Ready control-plane,master 15d v1.20.0
cdh-k8s-2.novalocal Ready <none> 9d v1.20.0
cdh-k8s-3.novalocal NotReady <none> 9d v1.20.0
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-5jtqv 1/1 Running 0 3h28m 10.244.26.110 cdh-k8s-3.novalocal <none> <none>
nginx-deployment-66b6c48dd5-fntn4 1/1 Running 0 3h28m 10.244.26.108 cdh-k8s-3.novalocal <none> <none>
nginx-deployment-66b6c48dd5-vz7hr 1/1 Running 0 3h28m 10.244.26.109 cdh-k8s-3.novalocal <none> <none>
my yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 3h28m
I found this in the docs:
DaemonSet pods are created with NoExecute tolerations for the following taints with no tolerationSeconds:
node.kubernetes.io/unreachable
node.kubernetes.io/not-ready
This ensures that DaemonSet pods are never evicted due to these problems.
But that applies to DaemonSets, not Deployments.
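Deployment pods also get NoExecute tolerations for those taints, but by default with tolerationSeconds set (usually 300), so eviction is only supposed to start after that delay. One way to see what tolerations the pods actually carry (pod name taken from the output above):
kubectl get pod nginx-deployment-66b6c48dd5-5jtqv \
  -o jsonpath='{.spec.tolerations}'   # typically shows not-ready/unreachable with tolerationSeconds: 300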

Kubernetes DNS and NetworkPolicy with Calico not working

I have a Minikube cluster with Calico running and I am trying to get NetworkPolicies working. Here are my Pods and Services:
First pod (team-a):
apiVersion: v1
kind: Pod
metadata:
  name: team-a
  namespace: orga-1
  labels:
    run: nginx
    app: team-a
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-a
  namespace: orga-1
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-a
Second pod (team-b):
apiVersion: v1
kind: Pod
metadata:
  name: team-b
  namespace: orga-2
  labels:
    run: nginx
    app: team-b
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-b
  namespace: orga-2
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-b
When I execute a bash in team-a, I cannot curl orga-2.team-b:
dev@ubuntu:~$ kubectl exec -it -n orga-1 team-a /bin/bash
root@team-a:/# curl google.de
//Body removed...
root@team-a:/# curl orga-2.team-b
curl: (6) Could not resolve host: orga-2.team-b
Now I applied a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-base-rule
  namespace: orga-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
When I now curl google in team-a, it still works. Here are my pods:
kube-system calico-etcd-hbpqc 1/1 Running 0 27m
kube-system calico-kube-controllers-6b86746955-5mk9v 1/1 Running 0 27m
kube-system calico-node-72rcl 2/2 Running 0 27m
kube-system coredns-fb8b8dccf-6j64x 1/1 Running 1 29m
kube-system coredns-fb8b8dccf-vjwl7 1/1 Running 1 29m
kube-system default-http-backend-6864bbb7db-5c25r 1/1 Running 0 29m
kube-system etcd-minikube 1/1 Running 0 28m
kube-system kube-addon-manager-minikube 1/1 Running 0 28m
kube-system kube-apiserver-minikube 1/1 Running 0 28m
kube-system kube-controller-manager-minikube 1/1 Running 0 28m
kube-system kube-proxy-p48xv 1/1 Running 0 29m
kube-system kube-scheduler-minikube 1/1 Running 0 28m
kube-system nginx-ingress-controller-586cdc477c-6rh6w 1/1 Running 0 29m
kube-system storage-provisioner 1/1 Running 0 29m
orga-1 team-a 1/1 Running 0 20m
orga-2 team-b 1/1 Running 0 7m20s
and my services:
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
kube-system calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 27m
kube-system default-http-backend NodePort 10.105.84.105 <none> 80:30001/TCP 29m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 29m
orga-1 team-a ClusterIP 10.101.4.159 <none> 80/TCP 8m37s
orga-2 team-b ClusterIP 10.105.79.255 <none> 80/TCP 7m54s
The kube-dns endpoint is available, also the service.
Why is my network policy not working and why is the curl to the other pod not working?
Please run
curl team-a.orga-1.svc.cluster.local
curl team-b.orga-2.svc.cluster.local
and verify the entries in /etc/resolv.conf (a quick check is sketched below).
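A hedged check from inside the pod (assuming the image ships the usual DNS tools):
kubectl exec -n orga-1 team-a -- cat /etc/resolv.conf
kubectl exec -n orga-1 team-a -- nslookup team-b.orga-2.svc.cluster.local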
If you can reach your pods, then please follow this tutorial:
Deny all ingress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orga-1
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
and Allow ingress traffic to Nginx:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
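To check that the two policies behave as intended, a short test sketch (the file names here are hypothetical) could be:
kubectl apply -f default-deny-ingress.yaml
kubectl apply -f access-nginx.yaml
kubectl describe networkpolicy -n orga-1      # both policies should be listed
# With only a podSelector in the allow rule, traffic from other namespaces (e.g. team-b in orga-2)
# should still time out, while pods inside orga-1 can reach the nginx pod:
kubectl exec -n orga-2 team-b -- curl -s --max-time 5 team-a.orga-1.svc.cluster.local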
Below you can find more information about:
Pod’s DNS Policy,
Network Policies
Hope this helps.