Why does the Kubernetes dashboard service return JSON?

I am accessing my Kubernetes dashboard using this URL:
https://kubernetes.example.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
What confuses me is that the returned content is just a JSON string, not the login page. The JSON content is:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-dashboard",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
    "uid": "884240d7-8f3f-41a4-a3a0-a89649545c82",
    "resourceVersion": "133822",
    "creationTimestamp": "2019-09-21T16:21:19Z",
    "labels": {
      "addonmanager.kubernetes.io/mode": "Reconcile",
      "k8s-app": "kubernetes-dashboard",
      "kubernetes.io/cluster-service": "true"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"},\"type\":\"NodePort\"}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 443,
        "targetPort": 8443,
        "nodePort": 31085
      }
    ],
    "selector": {
      "k8s-app": "kubernetes-dashboard"
    },
    "clusterIP": "10.254.75.193",
    "type": "NodePort",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}
This is my nginx forwarding config:
upstream kubernetes {
    server 172.19.104.231:8001;
}
and this is the kubectl proxy command running on my cluster:
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^*$'

You are accessing the Kubernetes API to get the kubernetes-dashboard Service resource manifest. This is the JSON that you get back.
If you want to reach the dashboard, you need to access the Service itself, not the Kubernetes API object describing it. You can do this, for example, with port forwarding:
kubectl port-forward -n kube-system svc/kubernetes-dashboard 8443:443
And then access the Service (it serves HTTPS on that port) with:
curl -k 'https://localhost:8443/#/workload?namespace=default'

The Kubernetes documentation now has clearer instructions on how to deploy and access the dashboard:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
I went to 127.0.0.1:8001 at first, and got the API JSON (as per the original question) before noticing the URL given under the kubectl proxy instruction:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
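For completeness, a minimal sketch of that flow (a sketch only; it assumes the dashboard is deployed into the kubernetes-dashboard namespace as in the current docs, so adjust the namespace/service name if yours still lives in kube-system):
# Start a local proxy to the API server (binds to 127.0.0.1:8001 by default)
kubectl proxy --port=8001
# In another shell, reach the dashboard through the API server's service proxy path
curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/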

Related

Access pod localhost from Service

New to Kubernetes.
I have a private dockerhub image deployed on a Kubernetes instance. When I exec into the pod I can run the following so I know my docker image is running:
root@private-reg:/# curl 127.0.0.1:8085
Hello world!root@private-reg:/#
From the dashboard I can see my service has an external endpoint which ends with port 8085. When I try to load this I get a 404. My Service manifest is below:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "test",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/test",
    "uid": "a1a2ae23-339b-11e9-a3db-ae0f8069b739",
    "resourceVersion": "3297377",
    "creationTimestamp": "2019-02-18T16:38:33Z",
    "labels": {
      "k8s-app": "test"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "tcp-8085-8085-7vzsb",
        "protocol": "TCP",
        "port": 8085,
        "targetPort": 8085,
        "nodePort": 31859
      }
    ],
    "selector": {
      "k8s-app": "test"
    },
    "clusterIP": "******",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "******"
        }
      ]
    }
  }
}
Can anyone point me in the right direction?
What is the output of the command below?
curl clusterIP:8085
If you get the Hello world message, it means the Service is routing traffic correctly to the backend pod.
curl HostIP:NodePort should also work.
Most likely the Service is not bound to the backend pod. Did you define the label below on the pod? You can check this as shown after the snippet.
labels: {
    "k8s-app": "test"
}
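A quick way to confirm whether the selector actually matches a pod is to look at the Service's endpoints; a minimal sketch, assuming the Service is named test in the default namespace as in the manifest above:
# If ENDPOINTS is empty, no running pod carries the k8s-app=test label
kubectl get endpoints test -n default
# List the pods the selector would match, together with their labels
kubectl get pods -n default -l k8s-app=test --show-labels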
You didn't mention what type of load balancer or cloud provider you are using, but if your load balancer provisioned correctly (which you should be able to see in the kube-controller-manager logs), then you should be able to access your service with what you see here:
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "******"
}
]
}
Then you could check by running:
$ curl <ip>:<whatever external port your lb is fronting>
It's likely that the load balancer didn't provision if, as described in the other answers, the following work:
$ curl <clusterIP for svc>:8085
and
$ curl <NodeIP>:31859 # NodePort
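A hedged sketch of pulling that external address out with kubectl instead of reading the JSON by hand (the field paths match the Service shown in the question; swap .ip for .hostname if your cloud provider returns a hostname):
# External IP the cloud load balancer exposed, if provisioning succeeded
kubectl get svc test -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# The load balancer fronts the Service port itself, 8085 here
curl http://$(kubectl get svc test -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):8085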
Give the Service types in Kubernetes a check; there are a few:
https://kubernetes.io/docs/concepts/services-networking/service/
ClusterIP: exposes the Service only inside the cluster.
NodePort: exposes the Service on a given port on each node.
LoadBalancer: makes the Service externally accessible through a load balancer.
I am assuming you are running on GKE.
What kind of Service is the one you launched?
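A quick way to answer that, assuming the test Service from the question:
# The TYPE column shows ClusterIP, NodePort or LoadBalancer
kubectl get svc test -n default
# Or print only the type field
kubectl get svc test -n default -o jsonpath='{.spec.type}'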

Kubernetes nginx ingress 0.22 not respecting cookie affinity annotation?

We recently upgraded to nginx-ingress 0.22. Before this upgrade, my service was using the old annotation prefix ingress.kubernetes.io/affinity: cookie and everything was working as I expected. However, after the upgrade to 0.22, affinity stopped being applied to my service (I don't see sticky anywhere in the nginx.conf).
I looked at the docs and changed the prefix to nginx.ingress.kubernetes.io as shown in this example, but it didn't help.
Is there some debug log I can look at that will show the configuration parsing/building process? My guess is that some other setting is preventing this from working (I can't imagine the k8s team shipped a release with this feature completely broken), but I'm not sure what that could be.
My ingress config as shown by the k8s dashboard follows:
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "example-ingress",
"namespace": "master",
"selfLink": "/apis/extensions/v1beta1/namespaces/master/ingresses/example-ingress",
"uid": "01e81627-3b90-11e9-bb5a-f6bc944a4132",
"resourceVersion": "23345275",
"generation": 1,
"creationTimestamp": "2019-02-28T19:35:30Z",
"labels": {
},
"annotations": {
"ingress.kubernetes.io/backend-protocol": "HTTPS",
"ingress.kubernetes.io/limit-rps": "100",
"ingress.kubernetes.io/proxy-body-size": "100m",
"ingress.kubernetes.io/proxy-read-timeout": "60",
"ingress.kubernetes.io/proxy-send-timeout": "60",
"ingress.kubernetes.io/secure-backends": "true",
"ingress.kubernetes.io/secure-verify-ca-secret": "example-ingress-ssl",
"kubernetes.io/ingress.class": "nginx",
"nginx.ingress.kubernetes.io/affinity": "cookie",
"nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
"nginx.ingress.kubernetes.io/limit-rps": "100",
"nginx.ingress.kubernetes.io/proxy-body-size": "100m",
"nginx.ingress.kubernetes.io/proxy-buffer-size": "8k",
"nginx.ingress.kubernetes.io/proxy-read-timeout": "60",
"nginx.ingress.kubernetes.io/proxy-send-timeout": "60",
"nginx.ingress.kubernetes.io/secure-verify-ca-secret": "example-ingress-ssl",
"nginx.ingress.kubernetes.io/session-cookie-expires": "172800",
"nginx.ingress.kubernetes.io/session-cookie-max-age": "172800",
"nginx.ingress.kubernetes.io/session-cookie-name": "route",
"nginx.org/websocket-services": "example"
}
},
"spec": {
"tls": [
{
"hosts": [
"*.example.net"
],
"secretName": "example-ingress-ssl"
}
],
"rules": [
{
"host": "*.example.net",
"http": {
"paths": [
{
"path": "/",
"backend": {
"serviceName": "example",
"servicePort": 443
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}
I tested sticky session affinity with NGINX Ingress version 0.22 and I can assure you that it works just fine. When I was looking at your configuration, I replaced the wildcard host host: "*.example.net" with e.g. host: "stickyingress.example.net" just to rule out the wildcard, and it worked fine again.
So after some searching I found out from this issue that:
Wildcard hostnames are not supported by the Ingress spec (only SSL
wildcard certificates are)
even though that issue was opened for NGINX Ingress controller version 0.21.0.
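If you want to check the same thing on your own cluster, here is a hedged sketch; the namespace and label below are typical for an ingress-nginx install and are not taken from the question, so adjust them to your deployment:
# Grab one controller pod
POD=$(kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -o name | head -n 1)
# After switching the rule to a concrete host, the sticky/affinity blocks should show up in nginx.conf
kubectl -n ingress-nginx exec "$POD" -- cat /etc/nginx/nginx.conf | grep -i -A 3 sticky
# The controller logs also show how each Ingress and its annotations were parsed
kubectl -n ingress-nginx logs "$POD" | grep example-ingress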

Extract LoadBalancer name from kubectl output with go-template

I'm trying to write a go-template that extracts the value of the load balancer. Using --go-template={{.status.loadBalancer.ingress}} returns [map[hostname:GUID.us-west-2.elb.amazonaws.com]]. When I add .hostname to the template I get an error saying "can't evaluate field hostname in type interface {}". I've tried using the range keyword, but I can't seem to get the syntax right.
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "creationTimestamp": "2018-07-30T17:22:12Z",
    "labels": {
      "run": "nginx"
    },
    "name": "nginx-http",
    "namespace": "jx",
    "resourceVersion": "495789",
    "selfLink": "/api/v1/namespaces/jx/services/nginx-http",
    "uid": "18aea6e2-941d-11e8-9c8a-0aae2cf24842"
  },
  "spec": {
    "clusterIP": "10.100.92.49",
    "externalTrafficPolicy": "Cluster",
    "ports": [
      {
        "nodePort": 31032,
        "port": 80,
        "protocol": "TCP",
        "targetPort": 8080
      }
    ],
    "selector": {
      "run": "nginx"
    },
    "sessionAffinity": "None",
    "type": "LoadBalancer"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "hostname": "GUID.us-west-2.elb.amazonaws.com"
        }
      ]
    }
  }
}
As you can see from the JSON, the ingress element is an array. You can use the template function index to grab this array element.
Try:
kubectl get svc <name> -o=go-template --template='{{(index .status.loadBalancer.ingress 0).hostname}}'
This assumes, of course, that you're only provisioning a single load balancer; if you have multiple, you'll have to use range.
Or, iterating over all Services in the namespace, try this:
kubectl get svc -o go-template='{{range .items}}{{range .status.loadBalancer.ingress}}{{.hostname}}{{printf "\n"}}{{end}}{{end}}'
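If the go-template syntax keeps fighting you, a jsonpath equivalent is worth knowing as well; a small sketch against the Service from the question:
# Same field via jsonpath; use .ip instead of .hostname on providers that return an IP
kubectl get svc nginx-http -n jx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'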

Call a service from any POD

I would like to know how to call a Service from any pod, inside or outside the node.
I have 3 nodes with deployments and services. I already have kube-proxy running.
I exec bash in another pod:
kubectl exec --namespace=develop myotherdpod-78c6bfd876-6zvh2 -i -t -- /bin/bash
And inside that pod I have tried to run curl:
curl -v http://myservice.develop.svc.cluster.local/user
This is my created service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myservice",
    "namespace": "develop",
    "selfLink": "/api/v1/namespaces/develop/services/mydeployment-svc",
    "uid": "1b5fb4ae-ecd1-11e7-8599-02cc6a4bf8be",
    "resourceVersion": "10660278",
    "creationTimestamp": "2017-12-29T19:47:30Z",
    "labels": {
      "app": "mydeployment-deployment"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "mydeployment-deployment"
    },
    "clusterIP": "100.99.99.140",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
It looks to me like something may be incorrect with the network overlay you deployed. First of all, I would double-check that the pod can access kube-dns and obtain the proper IP of the service.
nslookup myservice.develop.svc.cluster.local
nslookup myservice # If they are in the same namespace it should work as well
If you are able to confirm that, then I would also check if services like kube-proxy are working correctly. You can do it by using
systemctl status kube-proxy
If that does not work, I would also check the pods of the overlay network by executing:
kubectl get pods --namespace=kube-system
If they are all ok, I would try using a different network overlay: https://kubernetes.io/docs/concepts/cluster-administration/networking/
If that does not work either, I would check whether there are firewall rules preventing communication between the nodes.
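To run the DNS and connectivity checks without touching an application pod, a throwaway debug pod works too. A minimal sketch; busybox:1.28 is only an illustrative image choice whose nslookup plays well with cluster DNS:
# One-off pod that is deleted again when the command exits
kubectl run -it --rm dns-debug --image=busybox:1.28 --restart=Never -- nslookup myservice.develop.svc.cluster.local
# And hit the same HTTP path the question curls
kubectl run -it --rm http-debug --image=busybox:1.28 --restart=Never -- wget -qO- http://myservice.develop.svc.cluster.local/user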

Kubernetes 1.5.2 ping can't find ClusterIP of a Service from inside Cluster

I have set up a cluster with some deployments and services.
I can log into any of my pods and ping the other pods on their pod network IPs (172.x.x.x) successfully.
But when I try to ping the Services' ClusterIP addresses from any of my pods, they never respond, so I can't access my services.
Below is my Kibana Service, and 10.254.77.135 is the IP I am trying to connect to from my other services. I also can't use the node port; it never responds.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kibana",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/kibana",
    "uid": "21498caf-569c-11e7-a801-0050568fc023",
    "resourceVersion": "3282683",
    "creationTimestamp": "2017-06-21T16:10:23Z",
    "labels": {
      "component": "elk",
      "role": "kibana"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 5601,
        "targetPort": 5601,
        "nodePort": 31671
      }
    ],
    "selector": {
      "k8s-app": "kibana"
    },
    "clusterIP": "10.254.77.135",
    "type": "NodePort",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
Not sure if this is your problem, but ping doesn't work on Service ClusterIP addresses because they are virtual addresses created by iptables rules that just redirect packets to the endpoints (pods). The rules only match the Service's protocol and port, so ICMP echo requests to the ClusterIP are never answered.
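So instead of ping, test the Service on its actual port, and if that also fails, look at the rules kube-proxy wrote. A hedged sketch using the Kibana Service above; run the second command on a node:
# From inside any pod: a TCP test on the service port instead of ICMP
curl -sv http://10.254.77.135:5601/
# On a node: the ClusterIP should appear in the KUBE-SERVICES rules written by kube-proxy
sudo iptables-save | grep 10.254.77.135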