Kubernetes on SuSE: NodePort Service Problems - kubernetes

For days I have been trying to track down a weird behaviour concerning NodePort Services when running Kubernetes on openSUSE Leap 15.3.
For testing purposes I installed 3 VMs with openSUSE 15.3 on my own server. Following the article "How to install kubernetes in Suse Linux enterprize server 15 virtual machines?" I set up this Kubernetes cluster:
kubix01:~ # k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubix01 Ready control-plane,master 25h v1.22.2 192.168.42.51 <none> openSUSE Leap 15.3 5.3.18-59.27-default docker://20.10.6-ce
kubix02 Ready <none> 25h v1.22.2 192.168.42.52 <none> openSUSE Leap 15.3 5.3.18-59.27-default docker://20.10.6-ce
kubix03 Ready <none> 25h v1.22.2 192.168.42.53 <none> openSUSE Leap 15.3 5.3.18-59.27-default docker://20.10.6-ce
To test things out I created a new 3-replica Deployment of the traefik/whoami image with this YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - containerPort: 80
This results in three Pods spread over the 2 worker nodes as expected:
kubix01:~/k8s/whoami # k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-8557b59f65-2qkvq 1/1 Running 2 (24h ago) 25h 10.244.2.7 kubix03 <none> <none>
whoami-8557b59f65-4wnmd 1/1 Running 2 (24h ago) 25h 10.244.1.6 kubix02 <none> <none>
whoami-8557b59f65-xhx5x 1/1 Running 2 (24h ago) 25h 10.244.1.7 kubix02 <none> <none>
After that I created a NodePort Service to make things available to the outside world, with this YAML:
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30080
This is the result:
kubix01:~/k8s/whoami # k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
whoami NodePort 10.105.214.86 <none> 8080:30080/TCP 25h
kubix01:~/k8s/whoami # k describe svc whoami
Name: whoami
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=whoami
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.214.86
IPs: 10.105.214.86
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 10.244.1.6:80,10.244.1.7:80,10.244.2.7:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So everything looks fine and I tested things out with curl:
curl on one Cluster Node to PodIP:PodPort
kubix01:~/k8s/whoami # curl 10.244.1.6
Hostname: whoami-8557b59f65-4wnmd
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 10.244.0.0:50380
GET / HTTP/1.1
Host: 10.244.1.6
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.244.1.7
Hostname: whoami-8557b59f65-xhx5x
IP: 127.0.0.1
IP: 10.244.1.7
RemoteAddr: 10.244.0.0:36062
GET / HTTP/1.1
Host: 10.244.1.7
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.244.2.7
Hostname: whoami-8557b59f65-2qkvq
IP: 127.0.0.1
IP: 10.244.2.7
RemoteAddr: 10.244.0.0:43924
GET / HTTP/1.1
Host: 10.244.2.7
User-Agent: curl/7.66.0
Accept: */*
==> Everything works as expected
curl on Cluster Node to services ClusterIP:ClusterPort:
kubix01:~/k8s/whoami # curl 10.105.214.86:8080
Hostname: whoami-8557b59f65-xhx5x
IP: 127.0.0.1
IP: 10.244.1.7
RemoteAddr: 10.244.0.0:1106
GET / HTTP/1.1
Host: 10.105.214.86:8080
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.105.214.86:8080
Hostname: whoami-8557b59f65-4wnmd
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 10.244.0.0:9707
GET / HTTP/1.1
Host: 10.105.214.86:8080
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.105.214.86:8080
Hostname: whoami-8557b59f65-2qkvq
IP: 127.0.0.1
IP: 10.244.2.7
RemoteAddr: 10.244.0.0:25577
GET / HTTP/1.1
Host: 10.105.214.86:8080
User-Agent: curl/7.66.0
Accept: */*
==> Everything is fine; traffic is load-balanced to the different pods.
curl on Cluster Node to NodeIP:NodePort
kubix01:~/k8s/whoami # curl 192.168.42.51:30080
Hostname: whoami-8557b59f65-2qkvq
IP: 127.0.0.1
IP: 10.244.2.7
RemoteAddr: 10.244.0.0:5463
GET / HTTP/1.1
Host: 192.168.42.51:30080
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 192.168.42.52:30080
^C [NoAnswer]
kubix01:~/k8s/whoami # curl 192.168.42.53:30080
^C [NoAnswer]
==> The NodePort Service only works on the local node; there is no answer from the other nodes.
curl from another Network Host to NodeIP:NodePort
user@otherhost:~$ curl 192.168.42.51:30080
^C [NoAnswer]
user@otherhost:~$ curl 192.168.42.52:30080
^C [NoAnswer]
user@otherhost:~$ curl 192.168.42.53:30080
^C [NoAnswer]
==> The Service is not reachable from the outside at all; no answer from any of the nodes.
Does anybody have an idea what is going wrong here?
Thanks in advance
T0mcat
PS:
Additionally, here is a little image to clarify things a bit more. Red curved arrows are the non-working connections, green curved arrows the working ones:

FINALLY found the solution:
It is a weird behaviour of SUSE concerning the ip_forward sysctl setting. During OS installation the option "IPv4 forwarding" was activated in the installer. After the OS was installed, I additionally added "net.ipv4.ip_forward = 1" and
"net.ipv4.conf.all.forwarding = 1" to /etc/sysctl.conf.
Checked with "sysctl -l|grep ip_forward", both options were activated, but this obviously wasn't the truth.
Then I found the file "/etc/sysctl.d/70-yast.conf"... here both options were set to 0, which obviously was the truth. But manually setting these options to 1 in this file still wasn't enough...
Using SUSE's YaST tool and entering "System/Network Settings" -> Routing, one can see the option "Enable IPv4 Forwarding", which was checked (remember: in 70-yast.conf these options were set to 0). After playing around a bit with the options in YaST, so that 70-yast.conf got rewritten and both options were set to 1 there, NodePort services are now working as expected.
Conclusion:
IMHO this seems to be a bug in SUSE... Activating "IPv4 Forwarding" during installation leads to a 70-yast.conf with disabled forwarding options. On top of that, "sysctl -l" displays the wrong settings, as does YaST...
The solution is to play around in YaST's network settings, so that 70-yast.conf gets rewritten with the ip_forward options activated.
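For anyone who wants to verify and fix this without going through YaST, here is a minimal sketch of the checks and commands involved (paths as found on my openSUSE 15.3 systems; in my case only rewriting the file via YaST finally made it stick):
# What the kernel is actually using right now (0 = forwarding off)
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/conf/all/forwarding
# Which configuration files set these keys (here /etc/sysctl.d/70-yast.conf was the culprit)
grep -r "ip_forward\|forwarding" /etc/sysctl.conf /etc/sysctl.d/
# Enable forwarding immediately for the running kernel...
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.forwarding=1
# ...and, after setting both keys to 1 in 70-yast.conf, make sure a reload keeps them enabled
sysctl --system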

Related

API connection refused on https pod kubernetes

I've deployed a Wazuh API on my Kubernetes cluster and I cannot reach port 55000 from outside the pod; it says "curl: (7) Failed to connect to wazuh-master port 55000: Connection refused".
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wazuh-manager-worker-0 1/1 Running 0 4m47s 172.16.0.231 10.0.0.10 <none> <none>
wazuh-master-0 1/1 Running 0 4m47s 172.16.0.230 10.0.0.10 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wazuh-cluster ClusterIP None <none> 1516/TCP 53s
wazuh-master ClusterIP 10.247.82.29 <none> 1515/TCP,55000/TCP 53s
wazuh-workers LoadBalancer 10.247.70.29 <pending> 1514:32371/TCP 53s
I have a couple of curl test scenarios:
1. curl the name of the pod "wazuh-master-0" - Could not resolve host: wazuh-master-0
curl -u user:pass -k -X GET "https://wazuh-master-0:55000/security/user/authenticate?raw=true"
curl: (6) Could not resolve host: wazuh-master-0
2. curl the name of the service of the pod "wazuh-master" - Failed to connect to wazuh-master port 55000: Connection refused
curl -u user:pass -k -X GET "https://wazuh-master:55000/security/user/authenticate?raw=true"
curl: (7) Failed to connect to wazuh-master port 55000: Connection refused
3. curl the IP of the pod "wazuh-master-0" (172.16.0.230) - successful
curl -u user:pass -k -X GET "https://172.16.0.230:55000/security/user/authenticate?raw=true"
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUasfsdfgfhgsdoIEFQSSBSRVhgfhfghfghgfasasdasNUIiwibmJmIjoxNjYzMTQ3ODcxLCJleHAiOjE2NjMxNDg3NzEsInN1YiI6IndhenV
wazuh-master-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-master
  labels:
    app: wazuh-manager
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager
    node-type: LoadBalancer
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000
What am I doing wrong?
I've fixed it. I removed the line below from the spec.selector:
node-type: LoadBalancer
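For reference, a sketch of what the corrected Service looks like with that selector line removed (all other fields as in the manifest above):
apiVersion: v1
kind: Service
metadata:
  name: wazuh-master
  labels:
    app: wazuh-manager
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000
With the extra node-type key in the selector the Service presumably matched no pods, so it had no endpoints and connections to it were refused.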

Not able to access statefulset pod via headless service using fqdn

I have a k8s setup that looks like this:
ingress -> headless service (k8s service with clusterIP: None) -> statefulset (2 pods)
The FQDN resolves like this:
nslookup my-service
Server: 100.4.0.10
Address: 100.4.0.10#53
Name: my-service.my-namespace.svc.cluster.local
Address: 100.2.2.8
Name: my-service.my-namespace.svc.cluster.local
Address: 100.1.4.2
I am trying to reach one of the pods directly via the service using the following FQDN, but am not able to do so.
curl -I my-pod-0.my-service.my-namespace.svc.cluster.local:8222
curl: (6) Could not resolve host: my-pod-0.my-service.my-namespace.svc.cluster.local
If I try to hit the service directly then it works correctly (as a loadbalancer)
curl -I my-service.my-namespace.svc.cluster.local:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:24:42 GMT
Content-Length: 656
If I try to hit the pod directly using its IP, it also works fine
curl -I 100.2.2.8:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:29:22 GMT
Content-Length: 656
Content-Type: text/html; charset=utf-8
But my use case requires me to be able to hit the statefulset pod using the FQDN, i.e. my-pod-0.my-service.my-namespace.svc.cluster.local. What am I missing here?
Example: a statefulset called foo with the image nginx:
k get statefulsets.apps
NAME READY AGE
foo 3/3 8m55s
This statefulset created the following pods (foo-0, foo-1, foo-2):
k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 1 3h47m 10.1.198.71 ps-master <none> <none>
foo-0 1/1 Running 0 12m 10.1.198.121 ps-master <none> <none>
foo-1 1/1 Running 0 12m 10.1.198.77 ps-master <none> <none>
foo-2 1/1 Running 0 12m 10.1.198.111 ps-master <none> <none>
Now create a headless service (clusterIP is None) as follows (make sure to use the correct selector, the same as in your statefulset):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: foo
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: 80
    name: web
  selector:
    app: foo
Now, do an nslookup to see that DNS resolution works for the service (optional step):
k exec -it busybox -- nslookup nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
Address 2: 10.1.198.111 foo-2.nginx.default.svc.cluster.local
Address 3: 10.1.198.121 foo-0.nginx.default.svc.cluster.local
Now validate that individual per-pod resolution is working:
k exec -it busybox -- nslookup foo-1.nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
More info: Here
Note: In this case the OP had an incorrect mapping between the headless service and the statefulset; this can be verified with the command below:
k get statefulsets.apps foo -o jsonpath="{.spec.serviceName}{'\n'}"
nignx
Ensure that this serviceName matches the name of the headless service (here the output "nignx" does not match the service called nginx).
The original answer didn't clarify how the OP fixed the issue; the problem was in the serviceName property of the statefulset definition.
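As a sketch of the correct mapping (using the example names from above; the container spec is only illustrative), the statefulset must reference the headless service by name via spec.serviceName:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  serviceName: nginx   # must exactly match the headless service name
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
Only when spec.serviceName matches the headless service name are the per-pod DNS records such as foo-0.nginx.default.svc.cluster.local created.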

Kubernetes pod Troubleshoot

I deployed my container in a Kubernetes pod, and the pod and related services are up and running.
Please find below the status of the pods, services and deployment:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-5d5fbf967c-zvzvl 1/1 Running 0 70m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.16.68 <none> 80:31000/TCP 79m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
angular-deployment 1/1 1 1 70m
Please find the curl access below:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.0.0.1:31000
curl: (7) Failed to connect to 10.0.0.1 port 31000: Connection refused
command terminated with exit code 7
I am also not able to access my application service in the browser, like below:
https://10.0.0.1:31000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
spec:
  selector:
    matchLabels:
      app: frontend-app
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: frontend-app
        image: ${IMAGE_NAME}:${IMAGE_TAG}
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: angular-service
spec:
  selector:
    app: frontend-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
root@jenkins-linux-vm:/home/admin# kubectl describe pod angular-deployment-556c47f666-9d2x4
Name: angular-deployment-556c47f666-9d2x4
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.2
Start Time: Sat, 18 Jan 2020 08:47:35 +0000
Labels: app=frontend-app
pod-template-hash=556c47f666
Annotations: <none>
Status: Running
IP: 10.32.0.8
IPs:
IP: 10.32.0.8
Controlled By: ReplicaSet/angular-deployment-556c47f666
Containers:
frontend-app:
Container ID: docker://43fea22e4c1d49e0c94fc8aca3a4b41df44b5f91f45ea29ede263c5a6bcf6503
Image: frontend-app:future-master-fix-f2d2a8bd
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 18 Jan 2020 08:47:40 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned pre-release/angular-deployment-556c47f666-9d2x4 to poc-worker2
Normal Pulled 17s kubelet, poc-worker2 Container image "frontend-app:future-master-fix-f2d2a8bd" already present on machine
Normal Created 16s kubelet, poc-worker2 Created container frontend-app
Normal Started 16s kubelet, poc-worker2 Started container frontend-app
Please find below the svc for the deployment:
root@jenkins-linux-vm:/home/admin# kubectl describe svc angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.227.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.4:80,10.32.0.8:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Please find the docker file here
FROM node:12.2.0
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Can someone please help me fix this issue?
Here I can see the issue is with your Angular Dockerfile: you are using ng serve. If you look at your dependencies in package.json you will see "@angular/cli": "*"; in order to have that inside your Docker image you need to add RUN npm install, which will install all your dependencies inside your Docker container, and then you can do ng serve. But ng serve is for local development; it's not a good approach, I would say.
To identify these kinds of issues, it's advisable to build and run the image on your local machine first (see the commands sketched below) to find out whether your Docker container is working fine or not, before you deploy it onto the k8s cluster; Kubernetes is a very big universe and it will take time to identify the actual problem there.
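A minimal local smoke test might look like the following; the image tag is just a placeholder, and the container port to publish depends on your image (80 for the nginx-based image suggested below, 4200 for a default ng serve):
# Build the image locally with a throwaway tag
docker build -t frontend-app:local-test .
# Run it and publish the container port on a free local port
docker run --rm -p 8080:80 frontend-app:local-test
# In another shell, check that something actually answers
curl -I http://localhost:8080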
OK, coming to the issue (I could simply add a single command to your Dockerfile and post my answer, but I wouldn't suggest that approach, so I'm adding the complete answer, which looks better): when you are deploying a frontend-related application, your Docker image needs to be able to serve the index.html page; it is the end product after you build your Angular or React application.
There are several ways this could be done, and there are several tutorials explaining them. Here is something I would suggest; your Dockerfile should look like this (with comments explaining what the steps do):
#Stage 0: builder, based on a Node alpine image, to build and compile your angular code
FROM node:10-alpine as builder
WORKDIR /app
COPY package*.json /app/
# This is one thing you forgot in your dockerfile, if you add this it might work
RUN npm install
COPY . .
# This is normal build, it does ng build
RUN npm run build
#Stage 1, based on an Nginx image, to have only the compiled app inside the nginx folders to serve
FROM nginx:1.15
COPY --from=builder /app/dist/ /usr/share/nginx/html
# This one copies the local nginx.conf file as default.conf for nginx to let it serve
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Please make sure you have the nginx.conf file inside your code, at the same level as package.json:
server {
  listen 80;
  sendfile on;
  default_type application/octet-stream;
  gzip on;
  gzip_http_version 1.1;
  gzip_disable "MSIE [1-6]\.";
  gzip_min_length 1100;
  gzip_vary on;
  gzip_proxied expired no-cache no-store private auth;
  gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
  gzip_comp_level 9;
  root /usr/share/nginx/html;
  location / {
    #try_files $uri $uri/ /index.html =404;
    expires -1d;
    alias /usr/share/nginx/html/;
    try_files $uri$args $uri$args/ /index.html =404;
    location ~* \.(?:ico|css|js|gif|jpe?g|png|svg|woff|woff2|ttf|eot)$ {
      add_header Access-Control-Allow-Origin *;
    }
  }
}
Make sure you run docker run on your local machine before you deploy it onto the cluster.
Hope this helps.
Accessing the pod via nodeip:nodeport will only work if you are accessing the pod from outside the cluster, e.g. using a browser.
Here is a guide on how to expose an application via NodePort. In this case the nodes of your Kubernetes cluster need to be accessible, i.e. they should have a public IP.
You should be using the cluster IP to access the pod from within the cluster, i.e. via exec from another pod, as shown in the command below.
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.96.16.68:80
I have a feeling that your docker container is not listening on port 80.
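A quick way to check that from inside the pod (pod name taken from the question above; curl is evidently available there, since the exec in the question already used it) would be:
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl -- curl -sI http://localhost:80
If nothing answers on localhost:80 inside the container, the Service and the NodePort cannot work either.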

Introducing ingress to istio mesh

I had an Istio mesh with mTLS disabled, with the following pods and services. I'm using kubeadm.
pasan@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default debug-tools 2/2 Running 0 2h
default employee--debug-deployment-57947cf67-gwpjq 2/2 Running 0 2h
default employee--employee-deployment-5f4d7c9d78-sfmtx 2/2 Running 0 2h
default employee--gateway-deployment-bc646bd84-wnqwq 2/2 Running 0 2h
default employee--salary-deployment-d4969d6c8-lz7n7 2/2 Running 0 2h
default employee--sts-deployment-7bb9b44bf7-lthc8 1/1 Running 0 2h
default hr--debug-deployment-86575cffb6-6wrlf 2/2 Running 0 2h
default hr--gateway-deployment-8c488ff6-827pf 2/2 Running 0 2h
default hr--hr-deployment-596946948d-rzc7z 2/2 Running 0 2h
default hr--sts-deployment-694d7cff97-4nz29 1/1 Running 0 2h
default stock-options--debug-deployment-68b8fccb97-4znlc 2/2 Running 0 2h
default stock-options--gateway-deployment-64974b5fbb-rjrwq 2/2 Running 0 2h
default stock-options--stock-deployment-d5c9d4bc8-dqtrr 2/2 Running 0 2h
default stock-options--sts-deployment-66c4799599-xx9d4 1/1 Running 0 2h
pasan@ubuntu:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
employee--debug-service ClusterIP 10.104.23.141 <none> 80/TCP 2h
employee--employee-service ClusterIP 10.96.203.80 <none> 80/TCP 2h
employee--gateway-service ClusterIP 10.97.145.188 <none> 80/TCP 2h
employee--salary-service ClusterIP 10.110.167.162 <none> 80/TCP 2h
employee--sts-service ClusterIP 10.100.145.102 <none> 8080/TCP,8081/TCP 2h
hr--debug-service ClusterIP 10.103.81.158 <none> 80/TCP 2h
hr--gateway-service ClusterIP 10.106.183.101 <none> 80/TCP 2h
hr--hr-service ClusterIP 10.107.136.178 <none> 80/TCP 2h
hr--sts-service ClusterIP 10.105.184.100 <none> 8080/TCP,8081/TCP 2h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
stock-options--debug-service ClusterIP 10.111.51.88 <none> 80/TCP 2h
stock-options--gateway-service ClusterIP 10.100.81.254 <none> 80/TCP 2h
stock-options--stock-service ClusterIP 10.96.189.100 <none> 80/TCP 2h
stock-options--sts-service ClusterIP 10.108.59.68 <none> 8080/TCP,8081/TCP 2h
I accessed this service using a debug pod using the following command:
curl -X GET http://hr--gateway-service.default:80/info -H "Authorization: Bearer $token" -v
As the next step, I enabled mtls in the mesh. As expected the above curl command failed.
Now I want to set up an ingress controller so I can access the service mesh as I did before.
So I set up Gateway and VirtualService as below:
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr-ingress-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hr--gateway-service.default"
EOF
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hr-ingress-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - hr-ingress-gateway
  http:
  - match:
    - uri:
        prefix: /info/
    route:
    - destination:
        port:
          number: 80
        host: hr--gateway-service
EOF
But I'm still getting the following output:
wso2carbon@gateway-5bd88fd679-l8jn5:~$ curl -X GET http://hr--gateway-service.default:80/info -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.106.183.101...
* Connected to hr--gateway-service.default (10.106.183.101) port 80 (#0)
> GET /info HTTP/1.1
> Host: hr--gateway-service.default
> User-Agent: curl/7.47.0
> Accept: */*
...
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Can you please let me know if my ingress setup is correct and how I can access the service using curl after the setup?
My Ingress services are listed as below:
ingress-nginx default-http-backend ClusterIP 10.105.46.168 <none> 80/TCP 3h
ingress-nginx ingress-nginx NodePort 10.110.75.131 172.17.17.100 80:30770/TCP,443:32478/TCP
istio-ingressgateway NodePort 10.98.243.205 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31775/TCP,8060:32436/TCP,853:31351/TCP,15030:32149/TCP,15031:32653/TCP 3h
@Pasan, to apply Istio CRDs (VirtualServices) to incoming traffic you need to use Istio's Ingress Gateway as the point of ingress, as seen here: https://istio.io/docs/tasks/traffic-management/ingress/
The ingressgateway is a wrapper around Envoy which is configurable using Istio's CRDs.
Basically, you don't need a second ingress controller; during Istio installation the default one is installed. Find it by executing:
kubectl get services -n istio-system -l app=istio-ingressgateway
and with the Ingress Gateway IP execute:
curl -X GET http://{INGRESSGATEWAY_IP}/info -H "Authorization: Bearer $token" -H "Host: hr--gateway-service.default"
I added the host as a header because it is defined in the Gateway, meaning that ingress is only allowed for this host.
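Since the istio-ingressgateway Service in your listing is of type NodePort (80:31380/TCP), a sketch of reaching it from outside the cluster would be the following, where <node-ip> is a placeholder for the IP of any cluster node:
curl -X GET http://<node-ip>:31380/info -H "Authorization: Bearer $token" -H "Host: hr--gateway-service.default"
One more thing worth checking: the VirtualService above matches the prefix /info/ (with a trailing slash) while the request path is /info, so the prefix may need to be relaxed to /info for this request to be routed.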

Getting 403 Forbidden from envoy when attempting to curl between sidecar enabled pods

I'm using a Kubernetes/Istio setup and my list of pods and services are as below:
NAME READY STATUS RESTARTS AGE
hr--debug-deployment-86575cffb6-wl6rx 2/2 Running 0 33m
hr--hr-deployment-596946948d-jrd7g 2/2 Running 0 33m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hr--debug-service ClusterIP 10.104.160.61 <none> 80/TCP 33m
hr--hr-service ClusterIP 10.102.117.177 <none> 80/TCP 33m
I'm attempting to curl into hr--hr-service from hr--debug-deployment-86575cffb6-wl6rx
pasan@ubuntu:~/product-vick$ kubectl exec -it hr--debug-deployment-86575cffb6-wl6rx /bin/bash
Defaulting container name to debug.
Use 'kubectl describe pod/hr--debug-deployment-86575cffb6-wl6rx -n default' to see all of the containers in this pod.
root@hr--debug-deployment-86575cffb6-wl6rx:/# curl hr--hr-service -v
* Rebuilt URL to: hr--hr-service/
* Trying 10.102.117.177...
* Connected to hr--hr-service (10.102.117.177) port 80 (#0)
> GET / HTTP/1.1
> Host: hr--hr-service
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< date: Thu, 03 Jan 2019 04:06:17 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host hr--hr-service left intact
Can you please explain why I'm getting a 403 forbidden by envoy and how I can troubleshoot it?
If you have the Envoy sidecar injected, it really depends on what type of authentication policy you have between your services. Are you using a MeshPolicy or a Policy?
You can also try disabling authentication between your services to debug. Something like this (if your policy is defined like this):
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "hr--hr-service"
spec:
targets:
- name: hr--hr-service
peers:
- mTLS:
mode: PERMISSIVE
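To try this out, the policy can be applied and the request retried from the debug pod; the file name below is just a placeholder for wherever you save the manifest:
kubectl apply -f hr-hr-service-policy.yaml
kubectl exec -it hr--debug-deployment-86575cffb6-wl6rx -c debug -- curl -sI http://hr--hr-service
With mode: PERMISSIVE the target accepts both plain-text and mTLS traffic, which helps narrow down whether the 403 is caused by the authentication policy.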