Call a service from any POD - kubernetes

I would like to know how to call a service from any pod, inside or outside the node.
I have 3 nodes with deployments and services, and kube-proxy is already running.
I exec into another pod:
kubectl exec --namespace=develop myotherdpod-78c6bfd876-6zvh2 -i -t -- /bin/bash
Inside that pod I have tried to curl the service:
curl -v http://myservice.develop.svc.cluster.local/user
This is the service I created:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myservice",
    "namespace": "develop",
    "selfLink": "/api/v1/namespaces/develop/services/mydeployment-svc",
    "uid": "1b5fb4ae-ecd1-11e7-8599-02cc6a4bf8be",
    "resourceVersion": "10660278",
    "creationTimestamp": "2017-12-29T19:47:30Z",
    "labels": {
      "app": "mydeployment-deployment"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "mydeployment-deployment"
    },
    "clusterIP": "100.99.99.140",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}

It looks to me like something may be wrong with the network overlay you deployed. First of all, I would double-check that the pod can reach kube-dns and resolve the proper IP of the service:
nslookup myservice.develop.svc.cluster.local
nslookup myservice # If they are in the same namespace it should work as well
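If DNS is working, both lookups should resolve to the Service's ClusterIP, which is 100.99.99.140 in the manifest above. If they fail, it is also worth checking that the pod's resolver actually points at the cluster DNS service, for example:
cat /etc/resolv.conf # the nameserver entry should be the kube-dns/CoreDNS ClusterIP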
If you are able to confirm that, then I would also check whether kube-proxy is working correctly. You can do that with:
systemctl status kube-proxy
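Note that on many clusters kube-proxy runs as a DaemonSet rather than a systemd unit; in that case (assuming the usual k8s-app=kube-proxy label used by kubeadm-style clusters) the equivalent check would be:
kubectl get pods --namespace=kube-system -l k8s-app=kube-proxy
kubectl logs --namespace=kube-system -l k8s-app=kube-proxy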
If that does not work, I would also check the pods of the overlay network by executing:
kubectl get pods --namespace=kube-system
If they are all OK, I would try a different network overlay: https://kubernetes.io/docs/concepts/cluster-administration/networking/
If that does not work either, I would check whether firewall rules are blocking communication between the nodes.
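One more check worth adding to this list: verify that the Service actually has endpoints, i.e. that its selector (app: mydeployment-deployment above) matches the labels on the running pods. If the endpoint list is empty, curl will fail even though DNS resolves:
kubectl get endpoints myservice --namespace=develop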

Related

Unable to open service via kubectl proxy

➜ kubectl get svc
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
airflow-flower-service   ClusterIP   172.20.119.107   <none>        5555/TCP   54d
airflow-service          ClusterIP   172.20.76.63     <none>        80/TCP     54d
backend-service          ClusterIP   172.20.39.154    <none>        80/TCP     54d
➜ kubectl proxy
xdg-open http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy/#q=ip-192-168-114-35
and it fails with
Error trying to reach service: 'dial tcp 10.0.102.174:80: i/o timeout'
However, if I expose the service via kubectl port-forward, I can open it in the browser:
kubectl port-forward service/backend-service 8080:80 -n edna
xdg-open http://localhost:8080
So how do I open the service via that long URL (similar to how we open the Kubernetes dashboard)?
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
If I query the API with curl, I see this output:
➜ curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service/
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "backend-service",
    "namespace": "edna",
    "selfLink": "/api/v1/namespaces/edna/services/backend-service",
    "uid": "7163dd4e-e76d-4517-b0fe-d2d516b5dc16",
    "resourceVersion": "6433582",
    "creationTimestamp": "2020-08-14T05:58:45Z",
    "labels": {
      "app.kubernetes.io/instance": "backend-etl"
    },
    "annotations": {
      "argocd.argoproj.io/sync-wave": "10",
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{\"argocd.argoproj.io/sync-wave\":\"10\"},\"labels\":{\"app.kubernetes.io/instance\":\"backend-etl\"},\"name\":\"backend-service\",\"namespace\":\"edna\"},\"spec\":{\"ports\":[{\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"edna-backend\"},\"type\":\"ClusterIP\"}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 80
      }
    ],
    "selector": {
      "app": "edna-backend"
    },
    "clusterIP": "172.20.39.154",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
Instead of your URL:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy
Try without 'http:'
http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy
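For reference, the general shape of the apiserver proxy path is /api/v1/namespaces/<namespace>/services/[<scheme>:]<service-name>[:<port-name-or-number>]/proxy/, so for the backend-service above either of the following should work while kubectl proxy is running on port 8001 (a sketch, not verified against this cluster):
curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service/proxy/
curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service:80/proxy/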

Why does the kubernetes dashboard service return JSON?

I am accessing my Kubernetes dashboard using this URL:
https://kubernetes.example.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
What confuses me is that the returned content is just a JSON string, not the login page. The JSON content is:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-dashboard",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
    "uid": "884240d7-8f3f-41a4-a3a0-a89649545c82",
    "resourceVersion": "133822",
    "creationTimestamp": "2019-09-21T16:21:19Z",
    "labels": {
      "addonmanager.kubernetes.io/mode": "Reconcile",
      "k8s-app": "kubernetes-dashboard",
      "kubernetes.io/cluster-service": "true"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"},\"type\":\"NodePort\"}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 443,
        "targetPort": 8443,
        "nodePort": 31085
      }
    ],
    "selector": {
      "k8s-app": "kubernetes-dashboard"
    },
    "clusterIP": "10.254.75.193",
    "type": "NodePort",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}
This is my nginx forwarding config:
upstream kubernetes {
    server 172.19.104.231:8001;
}
And this is the kubectl proxy command I run on the cluster:
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^*$'
You are accessing the Kubernetes API to get the kubernetes-dashboard Service resource manifest. This is the JSON that you get back.
If you want to access the Service, you need to access the Service itself, not the Kubernetes API. You can do this for example with port forwarding:
kubectl port-forward svc/kubernetes-dashboard 8443:443
And then access the Service (it serves HTTPS on that port) with:
curl -k https://localhost:8443/#/workload?namespace=default
Kubernetes documentation has clearer instructions now on how to deploy and access their dashboard:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
I went to 127.0.0.1:8001 at first, and got the API JSON (as per the original question) before noticing the URL given under the kubectl proxy instruction:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
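Applying the same proxy-path pattern to the setup in the question, where the dashboard Service lives in kube-system and serves HTTPS, the URL behind the nginx/kubectl proxy chain would presumably be:
https://kubernetes.example.com/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/workload?namespace=default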

Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly, I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable tls verify).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext " modified."
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information in the Kubernetes documentation: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-seefile"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}
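In the setup described in the question, which uses a bearer token and skips TLS verification rather than client certificates, a rough equivalent might look like this (cluster and user names are placeholders):
kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
kubectl config set-credentials myuser --token=mytoken
kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
kubectl config use-context mycontext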

Access pod localhost from Service

New to Kubernetes.
I have a private Docker Hub image deployed on a Kubernetes instance. When I exec into the pod I can run the following, so I know my image is running:
root@private-reg:/# curl 127.0.0.1:8085
Hello world!root@private-reg:/#
From the dashboard I can see my service has an external endpoint which ends with port 8085. When I try to load this I get a 404. My service definition is below:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "test",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/test",
    "uid": "a1a2ae23-339b-11e9-a3db-ae0f8069b739",
    "resourceVersion": "3297377",
    "creationTimestamp": "2019-02-18T16:38:33Z",
    "labels": {
      "k8s-app": "test"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "tcp-8085-8085-7vzsb",
        "protocol": "TCP",
        "port": 8085,
        "targetPort": 8085,
        "nodePort": 31859
      }
    ],
    "selector": {
      "k8s-app": "test"
    },
    "clusterIP": "******",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "******"
        }
      ]
    }
  }
}
Can anyone point me in the right direction?
What is the output of the command below?
curl <clusterIP>:8085
If you get the Hello world message, it means the service is routing traffic correctly to the backend pod.
curl <HostIP>:<NodePort> should also work.
Most likely the service is not bound to the backend pod. Did you define the label below on the pod?
"labels": {
  "k8s-app": "test"
}
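One quick way to verify the binding, assuming the pod runs in the default namespace, is to compare the pod's labels with the Service's endpoints:
kubectl get pods --show-labels
kubectl get endpoints test # an empty ENDPOINTS column means the selector matches no pod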
You didn't mention what type of load balancer or cloud provider you are using, but if your load balancer provisioned correctly (which you should be able to see in your kube-controller-manager logs), then you should be able to access your service at the address shown here:
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "******"
}
]
}
Then you could check by running:
$ curl <ip>:<whatever external port your lb is fronting>
It's likely that the load balancer didn't provision correctly if, as described in the other answers, these work:
$ curl <clusterIP for svc>:8085
and
$ curl <NodeIP>:31859 # NodePort
Take a look at the Service types in Kubernetes; there are a few:
https://kubernetes.io/docs/concepts/services-networking/service/
ClusterIP: exposes the service only inside the cluster.
NodePort: exposes the service through a given port on each node.
LoadBalancer: makes the service externally accessible through a load balancer.
I am assuming you are running on GKE.
What kind of Service is the one you launched?
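To answer that for the manifest above, a describe of the Service shows the Type, NodePort, Endpoints, and LoadBalancer Ingress fields in one place (namespace assumed to be default):
kubectl describe svc test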

Kubernetes 1.5.2 ping can't find ClusterIP of a Service from inside Cluster

I have set up a cluster with some deployments and services.
I can log into any of my pods and ping other pods via their pod network IPs (172.x.x.x) successfully.
But when I try to ping a Service's ClusterIP address from any of my pods, it never responds, so I can't access my services.
Below is my Kibana Service; 10.254.77.135 is the IP I am trying to reach from my other services. I also can't use the NodePort, it never responds.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kibana",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/kibana",
    "uid": "21498caf-569c-11e7-a801-0050568fc023",
    "resourceVersion": "3282683",
    "creationTimestamp": "2017-06-21T16:10:23Z",
    "labels": {
      "component": "elk",
      "role": "kibana"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 5601,
        "targetPort": 5601,
        "nodePort": 31671
      }
    ],
    "selector": {
      "k8s-app": "kibana"
    },
    "clusterIP": "10.254.77.135",
    "type": "NodePort",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
Not sure if this is your problem, but ping doesn't work against Service ClusterIP addresses: they are virtual IPs implemented by iptables rules that only redirect packets addressed to the Service's ports toward the endpoints (pods), so ICMP echo requests to the ClusterIP never get a reply.
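A better reachability test is to open a TCP connection to the Service's actual port, which is 5601 for the Kibana Service above, for example:
curl http://10.254.77.135:5601
nc -zv 10.254.77.135 5601 # if curl is not available in the pod image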