API connection refused on https pod kubernetes - kubernetes

I've deployed a Wazuh API on my Kubernetes cluster and I cannot reach port 55000 from outside the pod; it says "curl: (7) Failed to connect to wazuh-master port 55000: Connection refused"
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wazuh-manager-worker-0 1/1 Running 0 4m47s 172.16.0.231 10.0.0.10 <none> <none>
wazuh-master-0 1/1 Running 0 4m47s 172.16.0.230 10.0.0.10 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wazuh-cluster ClusterIP None <none> 1516/TCP 53s
wazuh-master ClusterIP 10.247.82.29 <none> 1515/TCP,55000/TCP 53s
wazuh-workers LoadBalancer 10.247.70.29 <pending> 1514:32371/TCP 53s
I have a couple of curl test scenarios:
1. curl the name of the pod "wazuh-master-0" - Could not resolve host: wazuh-master-0
curl -u user:pass -k -X GET "https://wazuh-master-0:55000/security/user/authenticate?raw=true"
curl: (6) Could not resolve host: wazuh-master-0
2. curl the name of the service of the pod "wazuh-master" - Failed to connect to wazuh-master port 55000: Connection refused
curl -u user:pass -k -X GET "https://wazuh-master:55000/security/user/authenticate?raw=true"
curl: (7) Failed to connect to wazuh-master port 55000: Connection refused
3. curl the IP of the pod wazuh-master-0 (172.16.0.230) - successful
curl -u user:pass -k -X GET "https://172.16.0.230:55000/security/user/authenticate?raw=true"
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUasfsdfgfhgsdoIEFQSSBSRVhgfhfghfghgfasasdasNUIiwibmJmIjoxNjYzMTQ3ODcxLCJleHAiOjE2NjMxNDg3NzEsInN1YiI6IndhenV
wazuh-master-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-master
  labels:
    app: wazuh-manager
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager
    node-type: LoadBalancer
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000
What am I doing wrong?

I've fixed it: I removed the line below from the selector in the spec:
node-type: LoadBalancer
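For context: a Service only forwards to pods whose labels match every key in its selector, and the wazuh-master pod is labelled app: wazuh-manager but (apparently) carries no node-type: LoadBalancer label, so the selector matched nothing and the Service had no endpoints, hence the connection refused. A sketch of the check and the corrected manifest (assuming the pod is labelled only with app: wazuh-manager):
# With the bad selector this should show no endpoints behind the Service
kubectl get endpoints wazuh-master

# Corrected wazuh-master-svc.yaml with the extra selector line removed
apiVersion: v1
kind: Service
metadata:
  name: wazuh-master
  labels:
    app: wazuh-manager
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000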

Related

Why can I not use port 80 when using K3s Kubernetes

I have a simple NodeJS project running on a K3s cluster on a Raspberry Pi 4. The cluster has a service to expose it. The code is as follows...
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
I want to try and use port 80 instead of 3000 so I try...
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
But it can't use the port.
Warning FailedScheduling 5m 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
Why am I having issues?
Update
Per the answer I tried...
pi@raspberrypi:~ $ sudo netstat -tulpn | grep :80
pi@raspberrypi:~ $ sudo ss -tulpn | grep :80
pi@raspberrypi:~ $
My guess is this is a K3s or Pi limitation.
Update 2
When I run kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 24d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 24d
kube-system metrics-server ClusterIP 10.43.48.200 <none> 443/TCP 24d
kube-system traefik-prometheus ClusterIP 10.43.89.96 <none> 9100/TCP 24d
kube-system traefik LoadBalancer 10.43.65.154 192.168.x.xxx 80:31065/TCP,443:32574/TCP 24d
test-namespace app-tier LoadBalancer 10.43.190.179 192.168.x.xxx 3000:31500/TCP 4d
k3s comes with a pre-installed Traefik ingress controller which binds to ports 80, 443 and 8080 on the host, although you should have seen that with ss or netstat.
You should see this service if you run:
kubectl get service --all-namespaces
Although you should have seen it with netstat or ss if something were using the port. But maybe this service also failed to deploy yet somehow still blocks k3s from taking the port.
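If Traefik turns out to be holding the port, a sketch of how you might confirm that and free port 80 (the --disable flag is the current k3s install syntax; older releases used --no-deploy traefik):
# The bundled Traefik LoadBalancer service and its svclb pods claim 80/443 on the node
kubectl -n kube-system get svc traefik
kubectl -n kube-system get pods -o wide | grep svclb

# Reinstall k3s without Traefik so your own LoadBalancer service can bind port 80
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -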
Another thing I can think of: Are you running the experimental rootless setup?

Kubernetes DNS Troubleshooting

I am trying to troubleshoot a DNS issue in our K8s cluster, v1.19. There are 3 nodes (1 controller, 2 workers), all running vanilla Ubuntu 20.04, using the Calico network plugin with MetalLB for inbound load balancing. This is all hosted on premises and has full access to the internet. There is also a proxy server (Traefik) in front of it that handles SSL for the K8s cluster and other services in the infrastructure.
This issue appeared when I upgraded the Helm chart for the pod that connects to the Redis pod; everything had otherwise been running happily for the past 36 days.
The log of one of the pods shows an error that it cannot determine where the Redis pod(s) are:
2020-11-09 00:00:00 [1] [verbose]: [Cache] Attempting connection to redis.
2020-11-09 00:00:00 [1] [verbose]: [Cache] Successfully connected to redis.
2020-11-09 00:00:00 [1] [verbose]: [PubSub] Attempting connection to redis.
2020-11-09 00:00:00 [1] [verbose]: [PubSub] Successfully connected to redis.
2020-11-09 00:00:00 [1] [warn]: Secret key is weak. Please consider lengthening it for better security.
2020-11-09 00:00:00 [1] [verbose]: [Database] Connecting to database...
2020-11-09 00:00:00 [1] [info]: [Database] Successfully connected .
2020-11-09 00:00:00 [1] [verbose]: [Database] Ran 0 migration(s).
2020-11-09 00:00:00 [1] [verbose]: Sending request for public key.
Error: getaddrinfo EAI_AGAIN oct-2020-redis-master
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3001,
code: 'EAI_AGAIN',
syscall: 'getaddrinfo',
hostname: 'oct-2020-redis-master'
}
[ioredis] Unhandled error event: Error: getaddrinfo EAI_AGAIN oct-2020-redis-master
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
Error: connect ETIMEDOUT
at Socket.<anonymous> (/app/node_modules/ioredis/built/redis/index.js:307:37)
at Object.onceWrapper (events.js:421:28)
at Socket.emit (events.js:315:20)
at Socket.EventEmitter.emit (domain.js:486:12)
at Socket._onTimeout (net.js:483:8)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect'
}
I have gone through the steps outlined in https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
ubuntu@k8-01:~$ kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
ubuntu@k8-01:~$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-lfm5t 1/1 Running 17 37d
coredns-f9fd979d6-sw2qp 1/1 Running 18 37d
ubuntu@k8-01:~$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 3d3f6363f05ccd60e0f885f0eca6c5ff
[INFO] Reloading complete
[INFO] 10.244.210.238:34288 - 28733 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.001300712s
[INFO] 10.244.210.238:44532 - 12032 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.001279312s
[INFO] 10.244.210.235:44595 - 65094 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.000163001s
[INFO] 10.244.210.235:55945 - 20758 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.000141202s
ubuntu@k8-01:~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default oct-2020-api ClusterIP 10.107.89.213 <none> 80/TCP 37d
default oct-2020-nginx-ingress-controller LoadBalancer 10.110.235.175 192.168.2.150 80:30194/TCP,443:31514/TCP 37d
default oct-2020-nginx-ingress-default-backend ClusterIP 10.98.147.246 <none> 80/TCP 37d
default oct-2020-redis-headless ClusterIP None <none> 6379/TCP 37d
default oct-2020-redis-master ClusterIP 10.109.58.236 <none> 6379/TCP 37d
default oct-2020-webclient ClusterIP 10.111.204.251 <none> 80/TCP 37d
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37d
kube-system coredns NodePort 10.101.104.114 <none> 53:31245/UDP 15h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 37d
When I enter the pod:
/app # grep "nameserver" /etc/resolv.conf
nameserver 10.96.0.10
/app # nslookup
BusyBox v1.31.1 () multi-call binary.
Usage: nslookup [-type=QUERY_TYPE] [-debug] HOST [DNS_SERVER]
Query DNS about HOST
QUERY_TYPE: soa,ns,a,aaaa,cname,mx,txt,ptr,any
/app # ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10): 56 data bytes
^C
--- 10.96.0.10 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/app # nslookup oct-20-redis-master
;; connection timed out; no servers could be reached
Any ideas on troubleshooting would be greatly appreciated.
To answer my own question, I deleted the DNS pods and then it worked again. The command was the following:
kubectl delete pod coredns-f9fd979d6-sw2qp --namespace=kube-system
This doesn't get to the underlying problem of why this is happening, or why K8 isn't detecting that something is wrong with those pods and recreating them. I am going to keep digging into this and put some more instrumenting on the DNS pods to see what it actually is that is causing this problem.
If anyone has any ideas on instrumenting to hook up or look at specifically, that would be appreciated.
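If this recurs, a gentler option than deleting individual pods by name is to restart the whole CoreDNS deployment and then re-run the resolution test; a sketch using standard kubectl commands (the deployment name coredns is the kubeadm default):
# Roll all CoreDNS pods instead of deleting them one by one
kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system rollout status deployment coredns

# Re-check resolution from the dnsutils pod afterwards
kubectl exec -i -t dnsutils -- nslookup kubernetes.default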
This is how we test DNS.
Create the deployment below:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          emptyDir: {}
Run the tests below:
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx ClusterIP None <none> 80/TCP 2m
master $ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
Address 2: 10.40.0.2 web-1.nginx.default.svc.cluster.local
/ #
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
/ # nslookup web-0.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx.default.svc.cluster.local
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local

Not able to connect to kafka brokers

I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on prem k8s cluster.
I'm trying to expose it by using a TCP controller with nginx.
My TCP nginx ConfigMap looks like:
data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entry in my nginx ingress controller:
- name: <zookeper-tcp-port>-tcp
  port: <zookeper-tcp-port>
  protocol: TCP
  targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
  port: <kafka-tcp-port>
  protocol: TCP
  targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance.
When I just try to connect to the IP and port using kafka tools, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:14 +0000]TCP200000.001
From the pod kafka-zookeeper-0 I'm getting loads of
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong?
Thanks in advance.
TL;DR:
Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints.
These Endpoints are then used to generate instance-specific DNS records in the form of:
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
It creates a DNS name for each pod, e.g:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
This is what lets these services connect to each other inside the cluster.
I've gone through a lot of trial and error until I realized how it was supposed to be working. Based on your TCP Nginx ConfigMap, I believe you faced the same issue.
The Nginx ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
I realized that you don't need to expose Zookeeper, since it's an internal service handled by the Kafka brokers.
I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
In order to get outside access you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
It adds one NodePort service for each kafka-N pod during chart deployment.
Then you change your configmap to map to one of them:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
Note that the service created has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to.
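To make that concrete, here is a rough sketch of what one of those generated per-broker NodePort Services could look like; this is an illustration modelled on the Zookeeper service shown further down, not the chart's exact output, and the app: cp-kafka label is an assumption:
apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: cp-kafka                                      # assumed chart label
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0
  ports:
  - name: external-broker
    port: 19092        # servicePort from values.yaml
    targetPort: 31090  # firstListenerPort from values.yaml
    nodePort: 31090
    protocol: TCP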
Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
Set your kafka tools to <Cluster_External_IP>:31090
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
Create the TCP configmap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
My ingress is on IP 35.226.189.123, now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to access the kafka from outside.
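For completeness, the producing side of that test could be run the same way; a sketch (kafka-console-producer ships in the same Confluent client image, and --broker-list was the flag name in that Kafka CLI generation):
root@kafka-client:/# kafka-console-producer --broker-list 35.226.189.123:31090 --topic demo-topic
> hello from outside the cluster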
If you need external access to Zookeeper as well, I'll leave a service model for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
It will create a service for it:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
Patch your configmap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
Add the Ingress rule:
ports:
- containerPort: 31181
  hostPort: 31181
  protocol: TCP
Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!

Introducing ingress to istio mesh

I had an Istio mesh with mTLS disabled, with the following pods and services. I'm using kubeadm.
pasan@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default debug-tools 2/2 Running 0 2h
default employee--debug-deployment-57947cf67-gwpjq 2/2 Running 0 2h
default employee--employee-deployment-5f4d7c9d78-sfmtx 2/2 Running 0 2h
default employee--gateway-deployment-bc646bd84-wnqwq 2/2 Running 0 2h
default employee--salary-deployment-d4969d6c8-lz7n7 2/2 Running 0 2h
default employee--sts-deployment-7bb9b44bf7-lthc8 1/1 Running 0 2h
default hr--debug-deployment-86575cffb6-6wrlf 2/2 Running 0 2h
default hr--gateway-deployment-8c488ff6-827pf 2/2 Running 0 2h
default hr--hr-deployment-596946948d-rzc7z 2/2 Running 0 2h
default hr--sts-deployment-694d7cff97-4nz29 1/1 Running 0 2h
default stock-options--debug-deployment-68b8fccb97-4znlc 2/2 Running 0 2h
default stock-options--gateway-deployment-64974b5fbb-rjrwq 2/2 Running 0 2h
default stock-options--stock-deployment-d5c9d4bc8-dqtrr 2/2 Running 0 2h
default stock-options--sts-deployment-66c4799599-xx9d4 1/1 Running 0 2h
pasan@ubuntu:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
employee--debug-service ClusterIP 10.104.23.141 <none> 80/TCP 2h
employee--employee-service ClusterIP 10.96.203.80 <none> 80/TCP 2h
employee--gateway-service ClusterIP 10.97.145.188 <none> 80/TCP 2h
employee--salary-service ClusterIP 10.110.167.162 <none> 80/TCP 2h
employee--sts-service ClusterIP 10.100.145.102 <none> 8080/TCP,8081/TCP 2h
hr--debug-service ClusterIP 10.103.81.158 <none> 80/TCP 2h
hr--gateway-service ClusterIP 10.106.183.101 <none> 80/TCP 2h
hr--hr-service ClusterIP 10.107.136.178 <none> 80/TCP 2h
hr--sts-service ClusterIP 10.105.184.100 <none> 8080/TCP,8081/TCP 2h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
stock-options--debug-service ClusterIP 10.111.51.88 <none> 80/TCP 2h
stock-options--gateway-service ClusterIP 10.100.81.254 <none> 80/TCP 2h
stock-options--stock-service ClusterIP 10.96.189.100 <none> 80/TCP 2h
stock-options--sts-service ClusterIP 10.108.59.68 <none> 8080/TCP,8081/TCP 2h
I accessed this service using a debug pod using the following command:
curl -X GET http://hr--gateway-service.default:80/info -H "Authorization: Bearer $token" -v
As the next step, I enabled mtls in the mesh. As expected the above curl command failed.
Now I want to set up an ingress controller so I can access the service mesh as I did before.
So I set up Gateway and VirtualService as below:
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr-ingress-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hr--gateway-service.default"
EOF
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hr-ingress-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - hr-ingress-gateway
  http:
  - match:
    - uri:
        prefix: /info/
    route:
    - destination:
        port:
          number: 80
        host: hr--gateway-service
EOF
But still I'm getting the following output
wso2carbon@gateway-5bd88fd679-l8jn5:~$ curl -X GET http://hr--gateway-service.default:80/info -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.106.183.101...
* Connected to hr--gateway-service.default (10.106.183.101) port 80 (#0)
> GET /info HTTP/1.1
> Host: hr--gateway-service.default
> User-Agent: curl/7.47.0
> Accept: */*
...
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Can you please let me know if my ingress setup is correct and how I can access the service using curl after the setup?
My Ingress services are listed as below:
ingress-nginx default-http-backend ClusterIP 10.105.46.168 <none> 80/TCP 3h
ingress-nginx ingress-nginx NodePort 10.110.75.131 172.17.17.100 80:30770/TCP,443:32478/TCP
istio-ingressgateway NodePort 10.98.243.205 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31775/TCP,8060:32436/TCP,853:31351/TCP,15030:32149/TCP,15031:32653/TCP 3h
@Pasan, to apply Istio CRDs (VirtualServices) to incoming traffic you need to use Istio's Ingress Gateway as the point of ingress, as seen here: https://istio.io/docs/tasks/traffic-management/ingress/
The ingressgateway is a wrapper around Envoy that is configurable using Istio's CRDs.
Basically, you don't need a second ingress controller; the default one is installed during the Istio installation. Find it by executing:
kubectl get services -n istio-system -l app=istio-ingressgateway
and with the Ingress Gateway IP, execute:
curl -X GET http://{INGRESSGATEWAY_IP}/info -H "Authorization: Bearer $token" -H "Host: hr--gateway-service.default"
I added the host as a header because it is defined in the Gateway, meaning that ingress is allowed only for this host.
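Since the istio-ingressgateway service in your listing is a NodePort service with no external IP, here is a sketch of one way to derive the address to curl (standard kubectl/jsonpath; the http2 port name is the usual Istio default, but verify it against your service):
# IP of a node running an ingress gateway pod
INGRESS_HOST=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].status.hostIP}')

# NodePort mapped to port 80 (31380 in your output)
INGRESS_PORT=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

curl -v http://$INGRESS_HOST:$INGRESS_PORT/info -H "Authorization: Bearer $token" -H "Host: hr--gateway-service.default"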

Kubernetes ExternalName service not visible in DNS

I'm trying to expose a single database instance as a service in two Kubernetes namespaces. Kubernetes version 1.11.3 running on Ubuntu 16.04.1. The database service is visible and working in the default namespace. I created an ExternalName service in a non-default namespace referencing the fully qualified domain name in the default namespace as follows:
kind: Service
apiVersion: v1
metadata:
  name: ws-mysql
  namespace: wittlesouth
spec:
  type: ExternalName
  externalName: mysql.default.svc.cluster.local
  ports:
  - port: 3306
The service is running:
eric$ kubectl describe service ws-mysql --namespace=wittlesouth
Name: ws-mysql
Namespace: wittlesouth
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP:
External Name: mysql.default.svc.cluster.local
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
If I check whether the service can be found by name from a pod running in the wittlesouth namespace, this service name does not resolve, but other services in that namespace (e.g. Jira) do:
root@rs-ws-diags-8mgqq:/# nslookup mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 10.99.120.208
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth.svc.cluster.local: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup jira.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: jira.wittlesouth.svc.cluster.local
Address: 10.105.30.239
Any thoughts on what might be the issue here? For the moment I've worked around it by updating the applications that need the database to reference the fully qualified domain name of the service running in the default namespace, but I'd prefer to avoid that. My intent is eventually to give each namespace its own database instance, and I'd like to deploy apps configured to work that way now, in advance of actually standing up the second instance.
This doesn't work for me with Kubernetes 1.11.2 with CoreDNS and Calico either. It works only if you reference the external service directly, in the namespace in which it runs:
$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
mysql-0 2/2 Running 0 17m
mysql-1 2/2 Running 0 16m
$ kubectl get pods -n wittlesouth
NAME READY STATUS RESTARTS AGE
ricos-dummy-pod 1/1 Running 0 14s
kubectl exec -it ricos-dummy-pod -n wittlesouth bash
root@ricos-dummy-pod:/# ping mysql.default.svc.cluster.local
PING mysql.default.svc.cluster.local (192.168.1.40): 56 data bytes
64 bytes from 192.168.1.40: icmp_seq=0 ttl=62 time=0.578 ms
64 bytes from 192.168.1.40: icmp_seq=1 ttl=62 time=0.632 ms
64 bytes from 192.168.1.40: icmp_seq=2 ttl=62 time=0.628 ms
^C--- mysql.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.578/0.613/0.632/0.025 ms
root@ricos-dummy-pod:/# ping ws-mysql
ping: unknown host
root@ricos-dummy-pod:/# exit
$ kubectl get svc mysql
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 45d
$ kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: app=mysql
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.1.40:3306,192.168.2.25:3306
Session Affinity: None
Events: <none>
The ExternalName service feature is only supported using kube-dns, as per the docs, and Kubernetes 1.11.x defaults to CoreDNS. You might want to try changing from CoreDNS to kube-dns, or possibly changing the configs for your CoreDNS deployment. I expect this to be available at some point using CoreDNS.
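As a starting point, here is a sketch of how you could confirm which DNS implementation the cluster is actually running and where its configuration lives (standard kubeadm object names; yours may differ):
# kubeadm labels both the kube-dns and CoreDNS deployments with k8s-app=kube-dns
kubectl -n kube-system get deployments -l k8s-app=kube-dns

# If CoreDNS is running, its behaviour (including ExternalName handling) is driven by the Corefile
kubectl -n kube-system get configmap coredns -o yaml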