404 page not found when exposing a service via Ingress in a k8s cluster - kubernetes-helm

I have a RESTful service running on a k8s cluster (1 master, 2 nodes). It is written in Go, has a single GET method, and returns nothing in the body. I want to expose it via Ingress.
After installing it with Helm and seeing the 2 pods come up, I sent a request with curl from a client, but it returned a 404 error. When I curl the RESTful service from inside the nginx-ingress-controller pod, the service works fine.
RESTful & nginx-ingress services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
stee-webservice-svc NodePort 10.109.22.37 <none> 8080:30009/TCP 47m
nginx-ingress-controller LoadBalancer 10.106.34.249 <pending> 80:31368/TCP,443:31860/TCP 30h
Ingress (kubectl describe output)
Name: nonexistent-raccoon-stee-ws
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/steews stee-webservice-svc:8080 (<none>)
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 52m nginx-ingress-controller Ingress default/nonexistent-raccoon-stee-ws
Normal UPDATE 2m27s (x101 over 52m) nginx-ingress-controller Ingress default/nonexistent-raccoon-stee-ws
curl from client
curl http://10.106.34.249:80/steews/get -kL
404: Page Not Found
The ingress-controller log shows the request was received and that a 404 was returned to the client. So the problem is here: why does the Ingress not match the configured path "/steews" and route it correctly?
10.244.0.0 - [10.244.0.0] - - [06/Mar/2019:09:30:45 +0000] "GET /steews/get HTTP/1.1" 308 171 "-" "curl/7.58.0" 87 0.000 [default-stee-webservice-svc-8080] - - - - 61974b67eb85845faf3177979b851166
10.244.0.0 - [10.244.0.0] - - [06/Mar/2019:09:30:45 +0000] "GET /steews/get HTTP/2.0" 404 19 "-" "curl/7.58.0" 39 0.003 [default-stee-webservice-svc-8080] 10.244.1.38:8080 19 0.004 404 d29b5922d485c36cf0cf6f76b894770b*
curl from inside the nginx-ingress-controller pod works fine.
kc exec -it nginx-ingress-controller-9cf6cf578-qhtl6 -- bash
www-data@nginx-ingress-controller-9cf6cf578-qhtl6:/etc/nginx$ curl stee-webservice-svc:8080/get -kL -vv
* Trying 10.109.22.37...
* TCP_NODELAY set
* Connected to stee-webservice-svc (10.109.22.37) port 8080 (#0)
> GET /get HTTP/1.1
> Host: stee-webservice-svc:8080
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 06 Mar 2019 09:24:23 GMT
< Content-Length: 0
<
* Connection #0 to host stee-webservice-svc left intact
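Judging from the access log above, the controller does forward the request to the backend pod (10.244.1.38:8080) and the 404 comes back from the pod itself; the 308 is most likely just the controller's default HTTPS redirect. The backend receives the full path /steews/get, while the Go service presumably only serves /get. A common way to handle this with ingress-nginx is a rewrite-target annotation. This is only a sketch, not something confirmed in the thread; the apiVersion and the capture-group rewrite syntax assume an ingress-nginx 0.22+ controller:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nonexistent-raccoon-stee-ws
  annotations:
    # Strip the /steews prefix before proxying, so the pod sees /get.
    # On ingress-nginx 0.22+ the rewrite needs a capture group:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /steews(/|$)(.*)
        backend:
          serviceName: stee-webservice-svc
          servicePort: 8080

On controllers older than 0.22, the equivalent is path: /steews together with nginx.ingress.kubernetes.io/rewrite-target: /.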

Related

Nginx Ingress Controller on Bare Metal expose problem

I am trying to deploy nginx-ingress-controller on bare metal. I have:
4 Node
10.0.76.201 - Node 1
10.0.76.202 - Node 2
10.0.76.203 - Node 3
10.0.76.204 - Node 4
4 Worker
10.0.76.205 - Worker 1
10.0.76.206 - Worker 2
10.0.76.207 - Worker 3
10.0.76.214 - Worker 4
2 LB
10.0.76.208 - LB 1
10.0.76.209 - Virtual IP (keepalived)
10.0.76.210 - LB 2
Everything is on bare metal, and the load balancer is located outside the cluster.
This is a simple HAProxy config; it just checks port 80 on the worker IPs:
frontend kubernetes-frontends
bind *:80
mode tcp
option tcplog
default_backend kube
backend kube
mode http
balance roundrobin
cookie lsn insert indirect nocache
option http-server-close
option forwardfor
server node-1 10.0.76.205:80 maxconn 1000 check
server node-2 10.0.76.206:80 maxconn 1000 check
server node-3 10.0.76.207:80 maxconn 1000 check
server node-4 10.0.76.214:80 maxconn 1000 check
I installed nginx-ingress-controller using Helm and everything works fine:
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-xb5rw 0/1 Completed 0 18m
pod/ingress-nginx-admission-patch-skt7t 0/1 Completed 2 18m
pod/ingress-nginx-controller-6dc865cd86-htrhs 1/1 Running 0 18m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.106.233.186 <none> 80:30659/TCP,443:32160/TCP 18m
service/ingress-nginx-controller-admission ClusterIP 10.102.132.131 <none> 443/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-6dc865cd86 1 1 1 18m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 24s 18m
job.batch/ingress-nginx-admission-patch 1/1 34s 18m
I deployed nginx the simple way and it works fine:
kubectl create deploy nginx --image=nginx:1.18
kubectl scale deploy/nginx --replicas=6
kubectl expose deploy/nginx --type=NodePort --port=80
After that, I created ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tektutor-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "tektutor.training.org"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx
            port:
              number: 80
works fine
kubectl describe ingress tektutor-ingress
Name: tektutor-ingress
Labels: <none>
Namespace: default
Address: 10.0.76.214
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
tektutor.training.org
/nginx nginx:80 (192.168.133.241:80,192.168.226.104:80,192.168.226.105:80 + 3 more...)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 18m nginx-ingress-controller Configuration for default/tektutor-ingress was added or updated
Normal Sync 18m (x2 over 18m) nginx-ingress-controller Scheduled for sync
Everything works fine; when I curl any of the endpoint IPs directly (192.168.133.241:80, 192.168.226.104:80, 192.168.226.105:80 + 3 more...), it works.
Now I try to add a hosts entry:
10.0.76.201 tektutor.training.org
This is my master IP. Is it correct to add the master IP here? When I curl tektutor.training.org it does not work.
Can you please explain what the problem is with this last step?
Did I set the IP wrong, or what? Thanks!
I hope I have written everything exhaustively.
I followed this tutorial: Medium - Install nginx Ingress Controller
TL;DR
In your HAProxy backend config, use the values shown below instead of the ones you've provided:
30659 instead of 80
32160 instead of 443 (if needed)
More explanation:
NodePort works on a certain range of ports (default: 30000-32767), and in this scenario it allocated:
30659 for your ingress-nginx-controller port 80.
32160 for your ingress-nginx-controller port 443.
This means that every request trying to reach your cluster from outside will need to contact these ports (30659/32160).
You can read more about it by following official documentation:
Kubernetes.io: Docs: Concepts: Services
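Applied to the HAProxy config from the question, the backend stanza would look roughly like this. It is only a sketch: just the target ports change, assuming the allocated NodePorts stay 30659 (HTTP) and 32160 (HTTPS):

backend kube
mode http
balance roundrobin
cookie lsn insert indirect nocache
option http-server-close
option forwardfor
# 30659 is the NodePort that ingress-nginx-controller allocated for port 80
server node-1 10.0.76.205:30659 maxconn 1000 check
server node-2 10.0.76.206:30659 maxconn 1000 check
server node-3 10.0.76.207:30659 maxconn 1000 check
server node-4 10.0.76.214:30659 maxconn 1000 check

A separate frontend/backend pair pointing at 32160 would cover HTTPS, if needed.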
A funny story that took 2 days :) In the Ingress I had used the path /nginx, but I was not hitting it with something like:
http://tektutor.training.org/nginx
Thanks @Dawid Kruk who tried to help me :)!

service via ingress not reachable, only via nodeport:ip

Good afternoon,
I'd like to ask: I'm a "little" bit upset regarding Ingress and its traffic flow.
I created a test nginx deployment with a service and an ingress (in Titanium cloud).
I have no direct connection via browser, so I'm using tunneling to get access via a browser and a SOCKS5 proxy in Firefox.
deployment:
k describe deployments.apps dpl-nginx
Name: dpl-nginx
Namespace: xxx
CreationTimestamp: Thu, 09 Jun 2022 07:20:48 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
field.cattle.io/publicEndpoints:
[{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true},{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.x...
Selector: app=xxx-nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=xxx-nginx
Containers:
nginx:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html/ from nginx-index-file (rw)
Volumes:
nginx-index-file:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: index-html-configmap
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: xxx-dpl-nginx-6ff8bcd665 (2/2 replicas created)
Events: <none>
service:
Name: xxx-svc
Namespace: xxx
Labels: <none>
Annotations: field.cattle.io/publicEndpoints: [{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true}]
Selector: app=xxx-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.95.33
IPs: 10.43.95.33
Port: http-internal 888/TCP
TargetPort: 80/TCP
NodePort: http-internal 32506/TCP
Endpoints: 10.42.0.178:80,10.42.0.179:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
ingress:
Name: test-ingress
Namespace: xxx
Address: 172.xx.xx.117,172.xx.xx.131,172.xx.xx.132
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
test.xxx.io
/ xxx-svc:888 (10.42.0.178:80,10.42.0.179:80)
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.xx.132"],"port":80,"protocol":"HTTP","serviceName":"xxx:xxx-svc","ingressName...
nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m34s (x37 over 3d21h) nginx-ingress-controller Scheduled for sync
When I try curl/wget against the host or the node IP directly from within the cluster, both options work; I can get my custom index:
wget test.xxx.io --no-proxy --no-check-certificate
--2022-06-13 10:35:12-- http://test.xxx.io/
Resolving test.xxx.io (test.xxx.io)... 172.xx.xx.132, 172.xx.xx.131, 172.xx.xx.117
Connecting to test.xxx.io (test.xxx.io)|172.xx.xx.132|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 197 [text/html]
Saving to: ‘index.html.1’
index.html.1 100%[===========================================================================================>] 197 --.-KB/s in 0s
curl:
curl test.xxx.io --noproxy '*' -I
HTTP/1.1 200 OK
Date: Mon, 13 Jun 2022 10:36:31 GMT
Content-Type: text/html
Content-Length: 197
Connection: keep-alive
Last-Modified: Thu, 09 Jun 2022 07:20:49 GMT
ETag: "62a19f51-c5"
Accept-Ranges: bytes
nslookup
nslookup, dig, and ping from the cluster work as well:
nslookup test.xxx.io
Server: 127.0.0.53
Address: 127.0.0.53#53
Name: test.xxx.io
Address: 172.xx.xx.131
Name: test.xxx.io
Address: 172.xx.xx.132
Name: test.xxx.io
Address: 172.xx.xx.117
dig
dig test.xxx.io +noall +answer
test.xxx.io. 22 IN A 172.xx.xx.117
test.xxx.io. 22 IN A 172.xx.xx.132
test.xxx.io. 22 IN A 172.xx.xx.131
ping
ping test.xxx.io
PING test.xxx.io (172.xx.xx.132) 56(84) bytes of data.
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=2 ttl=64 time=0.042 ms
Curl also works fine from the ingress-nginx pod...
In Firefox, via nodeIP:port I can get the index, but via the host name it's not possible.
It seems that the ingress is forwarding traffic to the pod, but is this issue only something to do with the browser?
Thanks for any advice.
So, for clarification: I'm using tunneling to reach the ingress from my local PC via a browser with a SOCKS5 proxy:
ssh xxxx@100.xx.xx.xx -D 1090
The solution is trivial: add
172.xx.xx.117 test.xxx.io
into /etc/hosts on the jump server.
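In practice the change on the jump server is just the following (a sketch using the placeholder IP and host name from above):

# On the jump server: make the Ingress host name resolve to one of the ingress node IPs
echo "172.xx.xx.117 test.xxx.io" | sudo tee -a /etc/hosts

# Quick check that the host-based Ingress rule matches (equivalent without touching /etc/hosts):
curl -H "Host: test.xxx.io" http://172.xx.xx.117/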

Not able to access statefulset pod via headless service using fqdn

I have a k8s setup that looks like this:
ingress -> headless service (k8s service with clusterIP: None) -> statefulset (2 pods)
FQDN resolution looks like this:
nslookup my-service
Server: 100.4.0.10
Address: 100.4.0.10#53
Name: my-service.my-namespace.svc.cluster.local
Address: 100.2.2.8
Name: my-service.my-namespace.svc.cluster.local
Address: 100.1.4.2
I am trying to reach one of the pods directly via the service using the following FQDN, but I am not able to do so:
curl -I my-pod-0.my-service.my-namespace.svc.cluster.local:8222
curl: (6) Could not resolve host: my-pod-0.my-service.my-namespace.svc.cluster.local
If I hit the service directly, it works correctly (load-balancing across the pods):
curl -I my-service.my-namespace.svc.cluster.local:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:24:42 GMT
Content-Length: 656
If I hit the pod directly using its IP, it also works fine:
curl -I 100.2.2.8:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:29:22 GMT
Content-Length: 656
Content-Type: text/html; charset=utf-8
But my use case requires hitting the StatefulSet pod by FQDN, i.e. my-pod-0.my-service.my-namespace.svc.cluster.local. What am I missing here?
Example: a StatefulSet called foo with image nginx:
k get statefulsets.apps
NAME READY AGE
foo 3/3 8m55s
This StatefulSet created the following pods (foo-0, foo-1, foo-2):
k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 1 3h47m 10.1.198.71 ps-master <none> <none>
foo-0 1/1 Running 0 12m 10.1.198.121 ps-master <none> <none>
foo-1 1/1 Running 0 12m 10.1.198.77 ps-master <none> <none>
foo-2 1/1 Running 0 12m 10.1.198.111 ps-master <none> <none>
Now create a headless service (clusterIP: None) as follows (make sure to use the correct selector, the same one as your StatefulSet):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: foo
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: 80
    name: web
  selector:
    app: foo
Now do an nslookup to see that DNS resolution is working for the service (optional step):
k exec -it busybox -- nslookup nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
Address 2: 10.1.198.111 foo-2.nginx.default.svc.cluster.local
Address 3: 10.1.198.121 foo-0.nginx.default.svc.cluster.local
Now validate that individual per-pod resolution is working:
k exec -it busybox -- nslookup foo-1.nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
More info: Here
Note: In this case the OP had an incorrect mapping between the headless service and the StatefulSet; this can be verified with the command below:
k get statefulsets.apps foo -o jsonpath="{.spec.serviceName}{'\n'}"
nignx
Ensure that this mapping is correct: spec.serviceName must match the headless service's name (here, nginx).
The original answer didn't clarify how the OP fixed the issue; the problem was in the serviceName property of the StatefulSet definition.
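For reference, a minimal StatefulSet sketch that matches the headless service above (the container spec is kept to the essentials; the important field is serviceName):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  serviceName: nginx    # must match the headless service's metadata.name
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web

With serviceName pointing at the right headless service, foo-0.nginx.default.svc.cluster.local resolves exactly as in the nslookup output above.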

GKE basic-ingress intermittently returns 502 when backend returns 404/422

I have an ingress providing routing for two microservices running on GKE, and intermittently when the microservice returns a 404/422, the ingress returns a 502.
Here is my ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: develop-static-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: dev-ssl-cert
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: srv
          servicePort: 80
      - path: /c/*
        backend:
          serviceName: collection
          servicePort: 80
      - path: /w/*
        backend:
          serviceName: collection
          servicePort: 80
I run tests that hit the srv back-end where I expect a 404 or 422 response. I have verified when I hit the srv back-end directly (bypassing the ingress) that the service responds correctly with the 404/422.
When I issue the same requests through the ingress, the ingress will intermittently respond with a 502 instead of the 404/422 coming from the back-end.
How can I have the ingress just return the 404/422 response from the back-end?
Here's some example code to demonstrate the behavior I'm seeing (the expected status is 404):
>>> for i in range(10):
...     resp = requests.get('https://<server>/a/v0.11/accounts/junk', cookies=<token>)
...     print(resp.status_code)
502
502
404
502
502
404
404
502
404
404
And here are the same requests issued from a Python prompt within the pod, i.e. bypassing the ingress:
>>> for i in range(10):
... resp = requests.get('http://0.0.0.0/a/v0.11/accounts/junk', cookies=<token>)
... print(resp.status_code)
...
404
404
404
404
404
404
404
404
404
404
Here's the output of the kubectl commands to demonstrate that the loadbalancer is set up correctly (I never get a 502 for a 2xx/3xx response from the microservice):
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
srv-799976fbcb-4dxs7 2/2 Running 0 19m 10.24.3.8 gke-develop-default-pool-ea507abc-43h7 <none> <none>
srv-799976fbcb-5lh9m 2/2 Running 0 19m 10.24.1.7 gke-develop-default-pool-ea507abc-q0j3 <none> <none>
srv-799976fbcb-5zvmv 2/2 Running 0 19m 10.24.2.9 gke-develop-default-pool-ea507abc-jjzg <none> <none>
collection-5d9f8586d8-4zngz 2/2 Running 0 19m 10.24.1.6 gke-develop-default-pool-ea507abc-q0j3 <none> <none>
collection-5d9f8586d8-cxvgb 2/2 Running 0 19m 10.24.2.7 gke-develop-default-pool-ea507abc-jjzg <none> <none>
collection-5d9f8586d8-tzwjc 2/2 Running 0 19m 10.24.2.8 gke-develop-default-pool-ea507abc-jjzg <none> <none>
parser-7df86f57bb-9qzpn 1/1 Running 0 19m 10.24.0.8 gke-develop-parser-pool-5931b06f-6mcq <none> <none>
parser-7df86f57bb-g6d4q 1/1 Running 0 19m 10.24.5.5 gke-develop-parser-pool-5931b06f-9xd5 <none> <none>
parser-7df86f57bb-jchjv 1/1 Running 0 19m 10.24.0.9 gke-develop-parser-pool-5931b06f-6mcq <none> <none>
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
srv NodePort 10.0.2.110 <none> 80:30141/TCP 129d
collection NodePort 10.0.4.237 <none> 80:30270/TCP 129d
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 130d
$ kubectl get endpoints
NAME ENDPOINTS AGE
srv 10.24.1.7:80,10.24.2.9:80,10.24.3.8:80 129d
collection 10.24.1.6:80,10.24.2.7:80,10.24.2.8:80 129d
kubernetes 35.237.239.186:443 130d
tl;dr: GCP LoadBalancer/GKE Ingress will 502 if 404/422s from the back-ends don't have response bodies.
Looking at the LoadBalancer logs, I would see the following errors:
502: backend_connection_closed_before_data_sent_to_client
404: backend_connection_closed_after_partial_response_sent
Since everything was configured correctly (even the LoadBalancer said the backends were healthy)--backend was working as expected and no failed health checks--I experimented with a few things and noticed that all of my 404 responses had empty bodies.
Sooo, I added a body to my 404 and 422 responses and lo and behold no more 502s!
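As a sketch of what that change looks like: the thread doesn't say which framework srv uses, so this is a hypothetical Flask-style handler, only to show that the error responses carry a body (a 422 handler would be analogous):

# Hypothetical example; the real service's framework/handlers are not shown in the thread.
from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(404)
def not_found(error):
    # An empty 404/422 response made the GCP load balancer report 502;
    # returning a small JSON body avoids that.
    return jsonify(error="not found"), 404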
502 is a tricky status code: it can mean a context cancelled by the client or simply a bad gateway from the server you are trying to reach. In Kubernetes, a 502 usually means you cannot reach the service. Thus, I would start with the debugging services and deployments docs.
Use kubectl get pods -o wide to find your srv pods and note their IPs. Then make sure the service is load-balancing the srv deployment: run kubectl get svc and look for the srv service. Finally, run kubectl get endpoints, get the IPs assigned to the srv endpoints, and match them against the pod IPs you obtained. If this is all OK, then you are correctly load-balancing to your backend.
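Those checks gathered in one place (nothing new here, just the commands from the paragraph above):

kubectl get pods -o wide          # note the pod IPs of the srv pods
kubectl get svc srv               # the Service selecting the srv deployment
kubectl get endpoints srv         # these endpoint IPs should match the pod IPs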
502 errors are expected when your backend service is returning 4xx errors. If the backend is returning 4xx, the health checks will fail. If all backends are failing, the Load Balancer will not have an available backend to send the traffic to and will return 502.
For any 502 error returned from the Load Balancer, I strongly recommend checking the Stackdriver logs for the HTTP Load Balancer. Any 502 error will include a message output along with the 502 response. The message should clarify why the 502 was returned (there are a number of reasons).
In your current case, the 502 error log should mention "failed_to_pick_backend" or "failed_to_connect_to_backend", or something along those lines. If you are using an nginx ingress, similar behavior can be seen, but the 502 error message may say something different.

Getting 403 Forbidden from envoy when attempting to curl between sidecar enabled pods

I'm using a Kubernetes/Istio setup, and my list of pods and services is as below:
NAME READY STATUS RESTARTS AGE
hr--debug-deployment-86575cffb6-wl6rx 2/2 Running 0 33m
hr--hr-deployment-596946948d-jrd7g 2/2 Running 0 33m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hr--debug-service ClusterIP 10.104.160.61 <none> 80/TCP 33m
hr--hr-service ClusterIP 10.102.117.177 <none> 80/TCP 33m
I'm attempting to curl into hr--hr-service from hr--debug-deployment-86575cffb6-wl6rx
pasan@ubuntu:~/product-vick$ kubectl exec -it hr--debug-deployment-86575cffb6-wl6rx /bin/bash
Defaulting container name to debug.
Use 'kubectl describe pod/hr--debug-deployment-86575cffb6-wl6rx -n default' to see all of the containers in this pod.
root@hr--debug-deployment-86575cffb6-wl6rx:/# curl hr--hr-service -v
* Rebuilt URL to: hr--hr-service/
* Trying 10.102.117.177...
* Connected to hr--hr-service (10.102.117.177) port 80 (#0)
> GET / HTTP/1.1
> Host: hr--hr-service
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< date: Thu, 03 Jan 2019 04:06:17 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host hr--hr-service left intact
Can you please explain why I'm getting a 403 forbidden by envoy and how I can troubleshoot it?
If you have the Envoy sidecar injected, it really depends on what type of authentication policy you have between your services. Are you using a MeshPolicy or a Policy?
You can also try disabling authentication between your services to debug, with something like this (if your policy is defined like this):
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "hr--hr-service"
spec:
targets:
- name: hr--hr-service
peers:
- mTLS:
mode: PERMISSIVE
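To see which authentication policies are actually in play before changing anything, a quick check (assuming Istio 1.x resource names; adjust if your CRD names differ):

kubectl get meshpolicies.authentication.istio.io                  # mesh-wide policy, if any
kubectl get policies.authentication.istio.io --all-namespaces     # namespace- or service-scoped policies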