How to resolve a DNS problem in Kubernetes?

I have a very strange error in my Kubernetes configuration.
When I try to connect to KubeDNS (I'm using minikube), I get the following error:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"kube-dns:dns\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
"reason": "Forbidden",
"details": {
"name": "kube-dns:dns",
"kind": "services"
},
"code": 403
}
I read something about RBAC, but I can't find anything that clearly explains what I need to do.
Can anyone help me?
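The error above is an RBAC denial: the anonymous user (system:anonymous) is not allowed to read the services/proxy subresource in the kube-system namespace. Purely as an illustration of what such an RBAC grant looks like (the names below are hypothetical, and granting rights to system:anonymous is generally discouraged; authenticating, for example by going through kubectl proxy, is the safer route), a Role and RoleBinding permitting that request might look roughly like this:
# Hypothetical sketch: grants read access to the services/proxy subresource in kube-system
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-proxy-reader          # hypothetical name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-proxy-reader-binding  # hypothetical name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-proxy-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous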

Related

How can I filter events for the cluster autoscaler in Kubernetes?

I see the following event from kubectl get events:
{
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2019-12-04T19:52:51Z",
"involvedObject": {
"apiVersion": "v1",
"kind": "Pod",
"name": "example-deployment-55f789d54c-tlwnz",
"namespace": "default",
"resourceVersion": "82663",
"uid": "2fdbd034-16cf-11ea-bc4a-42010a800186"
},
"kind": "Event",
"lastTimestamp": "2019-12-04T19:52:51Z",
"message": "Unable to mount volumes for pod \"example-deployment-55f789d54c-tlwnz_default(2fdbd034-16cf-11ea-bc4a-42010a800186)\": timeout expired waiting for volumes to attach or mount for pod \"default\"/\"example-deployment-55f789d54c-tlwnz\". list of unmounted volumes=[nfs-volume]. list of unattached volumes=[nfs-volume default-token-kc7ks]",
"metadata": {
"creationTimestamp": "2019-12-04T19:52:51Z",
"name": "example-deployment-55f789d54c-tlwnz.15dd430deb31e8fd",
"namespace": "default",
"resourceVersion": "1529",
"selfLink": "/api/v1/namespaces/default/events/example-deployment-55f789d54c-tlwnz.15dd430deb31e8fd",
"uid": "a7c80266-16cf-11ea-bc4a-42010a800186"
},
"reason": "FailedMount",
"reportingComponent": "",
"reportingInstance": "",
"source": {
"component": "kubelet",
"host": "gke-test-a2e50ea5b9f1dd9-my-node-pool-5a20b1ac-vk9q"
},
"type": "Warning"
}
....
I've tried filtering by: kubectl get events --all-namespaces -o json --field-selector source.component=cluster-autoscaler but that fails with:
{
"apiVersion": "v1",
"items": [],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
Error from server (BadRequest): Unable to find "/v1, Resource=events" that match label selector "", field selector "source.component=cluster-autoscaler": field label not supported: source.component
How can I filter this?
This can be done using jq (though it does not return a JSON array, but individual JSON objects separated by newlines):
kubectl get events --all-namespaces -o json | jq '.items[]|select(.source.component=="cluster-autoscaler")'
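If a single JSON array is preferred over newline-separated objects, wrapping the jq filter in brackets should do it; and for fields that events do support server-side (for example reason or type, unlike source.component), --field-selector can be used directly:
# Same filter, collected into one JSON array
kubectl get events --all-namespaces -o json | jq '[.items[] | select(.source.component=="cluster-autoscaler")]'
# Server-side filtering works for supported event fields such as reason or type
kubectl get events --all-namespaces --field-selector type=Warning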

Why does the Kubernetes dashboard service return JSON?

I am accessing my Kubernetes dashboard using this URL:
https://kubernetes.example.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
What confuses me is that the returned content is just a JSON string, not the login page. The JSON content is:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
"uid": "884240d7-8f3f-41a4-a3a0-a89649545c82",
"resourceVersion": "133822",
"creationTimestamp": "2019-09-21T16:21:19Z",
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"},\"type\":\"NodePort\"}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443,
"nodePort": 31085
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.254.75.193",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
}
}
}
This is my nginx forwarding config:
upstream kubernetes {
server 172.19.104.231:8001;
}
This is my Kubernetes cluster proxy command:
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^*$'
You are accessing the Kubernetes API to get the kubernetes-dashboard Service resource manifest. This is the JSON that you get back.
If you want to reach the Dashboard, you need to access the Service itself, not its API object. You can do this, for example, with port forwarding (note that the Service is in the kube-system namespace):
kubectl port-forward -n kube-system svc/kubernetes-dashboard 8443:443
And then access the Dashboard with (it serves HTTPS on that port, so -k skips certificate verification):
curl -k https://localhost:8443/#/workload?namespace=default
Kubernetes documentation has clearer instructions now on how to deploy and access their dashboard:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
I went to 127.0.0.1:8001 at first, and got the API JSON (as per the original question) before noticing the URL given under the kubectl proxy instruction:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
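For the setup in this question, where the Dashboard Service lives in kube-system, the same proxy URL pattern (/api/v1/namespaces/<namespace>/services/[https:]<service-name>[:port-name]/proxy/) would presumably be:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
The URL in the question is missing the trailing /proxy/ segment, which is why the API server returned the Service object itself rather than proxying to it.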

What is my Custom Resource Definition URL in Kubernetes?

I am trying to hit my custom resource endpoint in Kubernetes, but I cannot find an exact example of how Kubernetes exposes my custom resource definition in its API. If I hit the CustomResourceDefinitions API with this:
https://localhost:6443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions
I get back this response:
{
"items": [
{
"metadata": {
"name": "accounts.stable.ibm.com",
"selfLink": "/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/accounts.stable.ibm.com",
"uid": "eda9d695-d3d4-11e9-900f-025000000001",
"resourceVersion": "167252",
"generation": 1,
"creationTimestamp": "2019-09-10T14:11:48Z",
"deletionTimestamp": "2019-09-12T22:26:20Z",
"finalizers": [
"customresourcecleanup.apiextensions.k8s.io"
]
},
"spec": {
"group": "stable.ibm.com",
"version": "v1",
"names": {
"plural": "accounts",
"singular": "account",
"shortNames": [
"acc"
],
"kind": "Account",
"listKind": "AccountList"
},
"scope": "Namespaced",
"versions": [
{
"name": "v1",
"served": true,
"storage": true
}
],
"conversion": {
"strategy": "None"
}
},
"status": {
"conditions": [
{
"type": "NamesAccepted",
"status": "True",
"lastTransitionTime": "2019-09-10T14:11:48Z",
"reason": "NoConflicts",
"message": "no conflicts found"
},
{
"type": "Established",
"status": "True",
"lastTransitionTime": null,
"reason": "InitialNamesAccepted",
"message": "the initial names have been accepted"
},
{
"type": "Terminating",
"status": "True",
"lastTransitionTime": "2019-09-12T22:26:20Z",
"reason": "InstanceDeletionCheck",
"message": "could not confirm zero CustomResources remaining: timed out waiting for the condition"
}
],
"acceptedNames": {
"plural": "accounts",
"singular": "account",
"shortNames": [
"acc"
],
"kind": "Account",
"listKind": "AccountList"
},
"storedVersions": [
"v1"
]
}
}
]
}
This leads me to believe I have correctly created the custom resource accounts. There are a number of examples that don't seem to be quite right, and I cannot find my resource in the Kubernetes REST API. I can work with my custom resource from kubectl, but I need to expose it with RESTful APIs.
https://localhost:6443/apis/stable.example.com/v1/namespaces/default/accounts
returns
404 page not found
Whereas:
https://localhost:6443/apis/apiextensions.k8s.io/v1beta1/apis/stable.ibm.com/namespaces/default/accounts
returns
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {},
"code": 404
}
I have looked at https://docs.okd.io/latest/admin_guide/custom_resource_definitions.html and https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
The exact URL would be appreciated.
A quite decent way of retrieving the REST API path for a K8s resource is to execute the kubectl get command at a high debugging level, as @Suresh Vishnoi mentioned in the comment:
kubectl get <api-resource> -v=8
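For example, with the CRD from the question, something like this should reveal the exact URLs the client calls (verbose output goes to stderr, hence the redirect):
kubectl get accounts.stable.ibm.com -n default -v=8 2>&1 | grep 'GET https'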
Apparently, as eventually checked by @Amit Kumar Gupta, the correct URL for accessing the custom resource, as per your CRD JSON output, is the following:
https://<API_server>:port/apis/stable.ibm.com/v1/namespaces/default/accounts
Depending on the authentication method, you may choose X509 client certs, a static token file, bearer tokens, or an HTTP API proxy in order to authenticate user requests against the Kubernetes API.
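As a sketch, using bearer-token authentication ($TOKEN and <API_server> below are placeholders for your cluster):
# -k skips TLS verification; replace $TOKEN and <API_server> with real values
curl -k -H "Authorization: Bearer $TOKEN" https://<API_server>:6443/apis/stable.ibm.com/v1/namespaces/default/accounts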

Resource Not Found for Creating CronJob

I am running Kubernetes 1.6.2 and am hitting the /apis/batch/v2alpha1/namespaces/<namespace>/cronjobs endpoint with a valid namespace and the following request body:
{
"body": {
"apiVersion": "batch/v2alpha1",
"kind": "CronJob",
"metadata": {
"name": "hello"
},
"spec": {
"schedule": "*/1 * * * *",
"jobTemplate": {
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "hello",
"image": "busybox",
"args": [
"/bin/sh",
"-c",
"date; echo Hello from the Kubernetes cluster"
]
}
],
"restartPolicy": "OnFailure"
}
}
}
}
}
}
}
I receive a response of
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {},
"code": 404
}
According to the documentation, this endpoint should exist. I figure I probably have some setting set incorrectly, but I'm not sure which one and how to correct it. Any help is appreciated.
The v2alpha1 features are not enabled by default. Make sure you are starting your kube-apiserver with this switch to enable the CronJob resource: --runtime-config=batch/v2alpha1=true.
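Where exactly that switch goes depends on how the control plane is run; as a rough sketch, on a kubeadm-style cluster it is added to the kube-apiserver static pod manifest, and you can then verify that the group/version is served:
# e.g. in /etc/kubernetes/manifests/kube-apiserver.yaml (path may differ in your setup):
#   - --runtime-config=batch/v2alpha1=true
# afterwards, confirm the API version is available:
kubectl api-versions | grep batch/v2alpha1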

How can I set a node to unschedulable status via the Kubernetes api?

I am attempting to emulate the behavior of kubectl patch. I'm sending an HTTP PATCH with the following JSON payload:
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"name": "my-node-hostname"
},
"spec": {
"unschedulable": true
}
}
However, no matter how I seem to tweak this JSON, I keep getting a 415 and the following JSON status back:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server responded with the status code 415 but did not return more information",
"details": {},
"code": 415
}
Even with debug on kube-apiserver set to 1000, I get no feedback about why the payload is wrong!
Is there a particular format that one should use in the JSON payload sent via PATCH to enable this to work?
After a helpful member of the Kubernetes Slack channel mentioned that I could see the payload kubectl patch sends by running it with a higher verbosity level (the -v flag), it turned out that Kubernetes expects the Content-Type: application/strategic-merge-patch+json header when you send the PATCH payload.
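Putting that together, a minimal sketch of the request (bearer-token authentication assumed; $TOKEN and <API_server> are placeholders) would be:
curl -k -X PATCH \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -H "Authorization: Bearer $TOKEN" \
  --data '{"spec": {"unschedulable": true}}' \
  https://<API_server>:6443/api/v1/nodes/my-node-hostname
For reference, kubectl cordon my-node-hostname achieves the same result by sending an equivalent patch.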