Kubernetes ClusterIP Service ENV set incorrectly

I created a ClusterIP service that has 3 pods in the deployment:
tornado-service ClusterIP 10.107.117.119 <none> 8085/TCP 2m
When I ssh into one of the pods, it has these env variables:
TORNADO_SERVICE_PORT_8085_TCP_PROTO=tcp
TORNADO_SERVICE_PORT_8085_TCP=tcp://10.99.116.50:8085
TORNADO_SERVICE_SERVICE_HOST=10.99.116.50
This doesn't match what kubectl gave me. When I curl the service from another pod using the env IP address, it hangs:
curl -XPOST 10.99.116.50:8085
But when I use the IP that kubectl reports, I get a 200 HTTP response:
curl -XPOST 10.107.117.119:8085
Why is Kubernetes setting the service IP env incorrectly in my pods?
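For what it's worth, service environment variables are injected only when a container starts, so they can point at a stale ClusterIP if the Service was deleted and recreated after the pods came up; cluster DNS, by contrast, always resolves to the current IP. A quick way to compare the two from inside a pod (a sketch; the pod name is a placeholder and it assumes nslookup is available in the image):
kubectl exec -it <tornado-pod> -- sh -c '
  # value baked in when the container started
  echo "env says: $TORNADO_SERVICE_SERVICE_HOST"
  # value resolved right now by cluster DNS (should match the kubectl output, 10.107.117.119)
  nslookup tornado-service
'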

Related

Kafka Connect API endpoint and problem with nginx ingress in second Kubernetes namespace

In our kube cluster we have 2 namespaces, kafka1 and kafka2:
kubectl get endpoints -n kafka2
NAME ENDPOINTS AGE
kafka2-akhq 10.233.76.160:8080 26d
kafka2-connect-api 10.233.67.2:8083,10.233.76.173:8083,10.233.90.52:8083 28m
and
kubectl get endpoints -n kafka1
NAME ENDPOINTS AGE
kafka1-akhq 10.233.67.157:8080 9d
kafka1-connect-api 10.233.67.3:8083,10.233.76.127:8083,10.233.90.4:8083 15h
Everything is the same except that 1 is replaced by 2 in the YAML.
When I exec into the ingress-nginx pod:
kubectl exec -n ingress-nginx ingress-nginx-controller -it -- bash
I can only curl the connector in the second namespace:
bash-5.1$ curl 10.233.67.2:8083
{"version":"3.2.0","commit":"38103ffaa962ef50","kafka_cluster_id":"KAEfPEuHSR-8znSgxlzJyQ"}bash-5.1$
When I try an IP from the first namespace, I get no response:
bash-5.1$ curl 10.233.67.3:8083
^C
Any idea how I can gather more debugging information, or how to fix this issue?
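A few generic checks that might narrow it down (a sketch; the namespace and IP come from the output above, the throwaway test image is an assumption):
# is the pod behind 10.233.67.3 actually Ready, and on which node is it scheduled?
kubectl get pods -n kafka1 -o wide
# do any NetworkPolicies in kafka1 restrict ingress to the connect pods?
kubectl get networkpolicy -n kafka1
# does the same curl work from a pod inside kafka1 (rules out cross-namespace/CNI paths)?
kubectl run curl-test -n kafka1 --rm -it --restart=Never --image=curlimages/curl -- curl -s 10.233.67.3:8083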

ClusterIP not reachable within the Cluster

I'm struggling with Kubernetes configuration. All I want is to reach a deployment from within the cluster. The cluster is on my dedicated server and I deployed it using kubeadm.
My nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9d v1.19.3
k8s-worker1 Ready <none> 9d v1.19.3
I have a deployment running (a basic nginx example):
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 29m
I've created a service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
my-service ClusterIP 10.106.109.94 <none> 80/TCP 20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80
Now I would expect that running curl 10.106.109.94:80 on my k8s-master returns the HTTP answer, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've tried with NodePort as well, and with targetPort and nodePort set, but the result is the same.
The cluster IP is not reachable from outside of your cluster. That means you will not get any response from the host machine that hosts your k8s cluster, because this IP does not belong to that machine or any other machine; it is a virtual cluster IP handled by your cluster's CNI network, such as Flannel or Weave.
So to make your services accessible from the outside, or at least from the host machine, you have to expose them differently: change the service type to NodePort or LoadBalancer, or use kubectl port-forward.
If you change the service type to NodePort, you will get a response using any of your nodes' IPs and the allocated node port (which by default comes from the 30000-32767 range).
For example, if your k8s-master is 192.168.x.x and the nodePort is 30303, then you can get a response with
curl http://192.168.x.x:30303
or
curl http://worker_node_ip:30303
If your cluster is installed locally, you can install MetalLB to get LoadBalancer support.
You can also use port-forward to make your service accessible from any host that has the kubectl client and access to the cluster:
kubectl port-forward svc/my-service 80:80
kubectl -n namespace port-forward svc/service_name Port:Port
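For reference, a minimal sketch of the Service from the question switched to NodePort (the explicit nodePort is optional; by default Kubernetes allocates one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80          # Service port inside the cluster
      targetPort: 80    # container port the nginx pods listen on
      nodePort: 30303   # optional; must fall inside the node-port range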

Kubernetes - resolve hostname of a service

I would like to make a call to my echo-server, but I cannot figure out the hostname of my service:
orion:webanalytics papaburger$ kubectl get services -n web-analytics
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-server ClusterIP 10.100.92.251 <none> 80/TCP 87m
web-api ClusterIP 10.100.92.250 <none> 8080/TCP 87m
I have tried to reach it using kubectl exec -it curl-curl0 -- curl http://web-analytics.echo-server.svc.cluster.local/heythere but it fails:
curl: (6) Couldn't resolve host 'web-analytics.echo-server.svc.cluster.local'
If I change web-analytics.echo-server.svc.cluster.local to the cluster IP, it works.
How can I make my pods (web-api) reach the echo server?
edit:
orion:webanalytics papaburger$ kubectl get ep -n web-analytics
NAME ENDPOINTS AGE
echo-server 172.16.187.247:80 95m
web-api 172.16.184.217:8080 95m
It should be like this. The service DNS name always follows this format:
<service-name>.<namespace-name>.svc.cluster.local
kubectl exec -it curl-curl0 -- curl http://echo-server.web-analytics.svc.cluster.local/heythere
Or, as an alternative, you can curl POD_IP:80 directly.
The DNS name is referenced incorrectly; it follows this format:
my-svc.my-namespace.svc.cluster-domain.example
Based on the kubectl output, the DNS should be
echo-server.web-analytics.svc.cluster.local
The corresponding curl will be:
kubectl exec -it curl-curl0 -- curl http://echo-server.web-analytics.svc.cluster.local/heythere
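As a side note, pods in the same namespace can skip the suffix entirely and use just the bare service name, so from web-api (which also lives in web-analytics) a call like this should resolve as well (assuming the calling pod really is in the web-analytics namespace):
curl http://echo-server/heythere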

k8s: Get access to pods

A newbie question related to k8s: I've just installed a k3d cluster.
I've deployed this helm chart:
$ helm install stable/docker-registry
It's been installed and the pod is running correctly.
Nevertheless, I can't quite figure out how to access this newly deployed service.
According to the documentation, it listens on port 5000 and uses a ClusterIP. A service is also deployed.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 42h
docker-registry-1580212517 ClusterIP 10.43.80.185 <none> 5000/TCP 19m
EDIT
I've been able to tell the chart to create an ingress:
$ kubectl get ingresses.networking.k8s.io -n default
NAME HOSTS ADDRESS PORTS AGE
docker-registry-1580214408 chart-example.local 172.20.0.4 80 10m
Nevertheless, I'm still unable to push images to the registry:
$ docker push 172.20.0.4/feedly:v1
The push refers to repository [172.20.0.4/feedly]
Get https://172.20.0.4/v2/: x509: certificate has expired or is not yet valid
Since the service type is ClusterIP, you can't access the service from the host system directly. You can run the command below to access the service from your host system:
kubectl port-forward --address 0.0.0.0 svc/docker-registry-1580212517 5000:5000 &
curl <host IP/name>:5000
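With that port-forward running, pushing the image from the question via localhost might look like this (a sketch; Docker treats localhost registries as insecure by default, which typically sidesteps the x509 error above):
docker tag feedly:v1 localhost:5000/feedly:v1
docker push localhost:5000/feedly:v1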

Can't Connect to Kubernetes Service from Inside Service Pod?

I created a one-replica zookeeper + kafka cluster with the kafka chart from the official incubator repo:
helm install --name mykafka -f kafka.yaml incubator/kafka
This gives me two pods:
kubectl get pods
NAME READY STATUS
mykafka-kafka-0 1/1 Running
mykafka-zookeeper-0 1/1 Running
And four services (in addition to the default kubernetes service)
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
mykafka-kafka ClusterIP 10.108.143.59 <none> 9092/TCP
mykafka-kafka-headless ClusterIP None <none> 9092/TCP
mykafka-zookeeper ClusterIP 10.109.43.48 <none> 2181/TCP
mykafka-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP
If I shell into the zookeeper pod:
> kubectl exec -it mykafka-zookeeper-0 -- /bin/bash
I use the curl tool to test TCP connectivity. I expect a communications error as the server isn't using HTTP, but if curl can't even connect and I have to ctrl-C out, then the TCP connection isn't working.
I can access the local pod through curl localhost:2181:
root@mykafka-zookeeper-0:/# curl localhost:2181
curl: (52) Empty reply from server
I can access the other pod through curl mykafka-kafka:9092:
root@mykafka-zookeeper-0:/# curl mykafka-kafka:9092
curl: (56) Recv failure: Connection reset by peer
But I can't access mykafka-zookeeper:2181. That name resolves to the cluster IP, but the attempt to TCP connect hangs until I ctrl-C:
root@mykafka-zookeeper-0:/# curl -v mykafka-zookeeper:2181
* Rebuilt URL to: mykafka-zookeeper:2181/
* Trying 10.109.43.48...
^C
Similarly, I can shell into the kafka pod:
> kubectl exec -it mykafka-kafka-0 -- /bin/bash
Connecting to the Zookeeper pod by the service name works fine:
root@mykafka-kafka-0:/# curl mykafka-zookeeper:2181
curl: (52) Empty reply from server
Connecting to kafka on localhost works fine:
root@mykafka-kafka-0:/# curl localhost:9092
curl: (56) Recv failure: Connection reset by peer
But connecting to the Kafka pod by the service name doesn't work and I must ctrl-C the curl attempt:
curl -v mykafka-kafka:9092
* Rebuilt URL to: mykafka-kafka:9092/
* Hostname was NOT found in DNS cache
* Trying 10.108.143.59...
^C
Can anyone explain why I can only connect to a Kubernetes service from outside the service and not from within it?
I believe what you're experiencing can be resolved by looking at how your kubelet is set up to run. There is a setting you can toggle when starting the kubelet called --hairpin-mode. By default this is set to promiscuous-bridge, where a pod can't connect to its own service, but you can change it to hairpin-veth, which would allow a pod to connect to itself through its service.
There are a few issues on the topic, but this seems to be referenced the most:
https://github.com/kubernetes/kubernetes/issues/45790
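For reference, a minimal sketch of changing the mode on a kubeadm/systemd-based node (the file path is an assumption about that setup; adjust for your distribution):
# /etc/default/kubelet (or /etc/sysconfig/kubelet on some distros)
KUBELET_EXTRA_ARGS=--hairpin-mode=hairpin-veth
# restart the kubelet so the new flag takes effect
sudo systemctl restart kubelet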