Kubernetes - kubectl or istioctl list all internal dns - kubernetes

In Kubernetes (kubectl) 1.10 or Istio (istioctl) 1.0:
I know I can list services with kubectl get svc, and I can figure out the namespace and from there append .svc.cluster.local.
Is it possible to list the internal DNS names directly?
Thanks

Unfortunately, you would have to list the DNS entries manually, as there is no such command available in either Istio or Kubernetes. You can, however, look at the following link, which guides you through opening a shell in a running container so that you can inspect the internal DNS manually.
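As a workaround, here is a minimal sketch (assuming the default cluster domain cluster.local) that derives the in-cluster DNS name of every Service from kubectl get svc:

# Print <service>.<namespace>.svc.cluster.local for every Service in the cluster.
kubectl get svc --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}.{.metadata.namespace}.svc.cluster.local{"\n"}{end}'

# Or open a shell in a running pod and query DNS directly (requires nslookup in the image):
kubectl exec -it <pod-name> -- nslookup <service>.<namespace>.svc.cluster.local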

Related

Assign FQDN for Internal Services in a Private Kubernetes Cluster

I set up a private K8S cluster with RKE 1.2.2, so my K8S version is 1.19. We have some internal services that need to reach each other using custom FQDNs instead of simple service names. Searching the web, the only solution I found is adding rewrite records to the CoreDNS ConfigMap, as described in this REF. However, that solution requires manual configuration, and I want to define a record automatically during service setup. Is there any solution for this automation? Does CoreDNS have an API to add or delete rewrite records?
Note 1: I also tried to mount CoreDNS's ConfigMap and update it from another pod, but the content is mounted read-only.
Note 2: Someone proposed calling kubectl get cm -n kube-system coredns -o yaml | sed ... | kubectl apply .... However, I want to automate it during service setup, in a pod, or in an initContainer.
Note 3: I wish there were something like hostAliases for Services, say a serviceAliases for internal (ClusterIP) Services.
Currently, there is no ready solution for this.
The only thing that comes to my mind is to use a MutatingAdmissionWebhook. It would need to catch the moment when a new Kubernetes Service is created and then modify the ConfigMap for CoreDNS, as described in the CoreDNS documentation.
After that, you would need to reload the CoreDNS configuration to apply the new settings from the ConfigMap. To achieve that, you can use the reload plugin for CoreDNS. More details about this plugin can be found here.
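As an illustrative sketch (the custom FQDN and target Service below are assumptions, not from the original post), a Corefile in the coredns ConfigMap combining the rewrite and reload plugins could look roughly like this:

.:53 {
    errors
    health
    reload 30s    # re-read the Corefile periodically so ConfigMap edits take effect
    rewrite name db.internal.example.com postgres.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    loadbalance
}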
Instead of the above, you can consider using a sidecar container for CoreDNS that sends a SIGUSR1 signal to the CoreDNS container.
An example of this method can be found in this GitHub thread.
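For illustration only, a rough sketch of such a reloader sidecar for the CoreDNS Deployment; it assumes shareProcessNamespace: true on the pod and that the CoreDNS ConfigMap is mounted at /etc/coredns (the container name, image, and paths are placeholders, not the manifest from that thread):

- name: corefile-reloader
  image: busybox:1.36
  command:
  - sh
  - -c
  - |
    # Poll the mounted Corefile and signal CoreDNS whenever its checksum changes.
    last=""
    while true; do
      cur=$(md5sum /etc/coredns/Corefile | cut -d " " -f1)
      if [ -n "$last" ] && [ "$cur" != "$last" ]; then
        kill -USR1 "$(pidof coredns)"
      fi
      last="$cur"
      sleep 10
    done
  volumeMounts:
  - name: config-volume
    mountPath: /etc/coredns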

Nginx Ingress Controller Installation Error, "dial tcp 10.96.0.1:443: i/o timeout"

I'm trying to set up a Kubernetes cluster with kubeadm and Vagrant. During the NGINX ingress controller installation I hit a timeout when the pod tried to retrieve a ConfigMap through the Kubernetes API. I have looked around and tried the suggested solutions, still with no luck, which is why I'm writing this post.
Environment:
I'm using Vagrant to set up 2 nodes with the ubuntu/xenial image.
kmaster
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.71
kworker1
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.72
I followed the kubeadm guide to set up the cluster:
[Set up Kubernetes with kubeadm]
My cluster init command is as below:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.2.71
and applied the Calico network plugin policy:
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
(Calico is the plugin I successfully installed; I will write another post about the flannel plugin, with which pods were unable to access the service.)
I'm using Helm to install the ingress controller, following the tutorial:
https://kubernetes.github.io/ingress-nginx/deploy/
This is the error that occurred once I ran the Helm deploy command, as shown when I describe the pod.
I'd appreciate some help. I know the cause is that the pod is unable to reach the Kubernetes API, but shouldn't that already be enabled by Kubernetes by default?
My kube-system pods status is as below:
Other solutions suggested on the Kubernetes official website:
1) Install kube-proxy with a sidecar. I'm still new to Kubernetes and I'm looking for an example of how to install kube-proxy with a sidecar; I'd appreciate it if someone could provide one.
2) Use client-go. I'm very confused by this one; it seems to use the go command to pull a Go script, and I have no clue how it works with Kubernetes pods.
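For anyone debugging the same symptom, here is a minimal diagnostic sketch (the pod name and image are placeholders) to confirm whether pods can actually reach the API server at 10.96.0.1:443:

# Check that kube-proxy is running on every node; it programs the routing to the 10.96.0.1 service IP.
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

# Run a throwaway pod and try the API service directly; any HTTP response (even 401/403)
# proves connectivity, while a hang reproduces the "i/o timeout".
kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://10.96.0.1:443/version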
You guys are right. I have tested with a DigitalOcean droplet and it works as expected; I now hit another error, "forbidden, user service account not permitted", so it looks like the pods are able to access the Kubernetes API already. I also tested installing Istio, with which I had encountered the same issue before, and now it works on the DigitalOcean droplet.
Thank you guys.

Kubernetes 1.11 could not find heapster for metrics

I'm using Kubernetes 1.11 on Digital Ocean, when I try to use kubectl top node I get this error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
but as stated in the docs, Heapster is deprecated and no longer required as of Kubernetes 1.10.
If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.
Please note that to install the metrics server on Kubernetes, you should first clone it by typing:
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
then you should install it, without going into the created folder and without specifying a particular YAML file, only via:
kubectl create -f kubernetes-metrics-server/
In this way all services and components are installed correctly and you can run:
kubectl top nodes
or
kubectl top pods
and get the correct result.
For kubectl top node/pod to work, you need either Heapster or the metrics server installed on your cluster.
As the warning says, Heapster is being deprecated, so the recommended choice now is the metrics server.
So follow the directions here to install the metrics server.
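On a current cluster, a minimal sketch of installing the metrics server straight from the upstream release manifest (URL as published by the kubernetes-sigs project at the time of writing) and verifying it:

# Install the metrics server from the official release manifest.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Wait for the deployment to become available, then query node metrics.
kubectl -n kube-system rollout status deployment/metrics-server
kubectl top nodes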

K8S dashboard not accessible after first cluster in GKE - GCP using console

Newbie setup :
Created First project in GCP
Created a cluster with defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname, using the external IP and the port.
On the GCP/GKE page of my cloud console, I clicked "Discovery and load balancing" and could see the "kubernetes-dashboard" process with a green tick, but I cannot access it through the IP listed. I tried ports 8001 and 9090, and /ui, and nothing worked.
I'm not using Cloud Shell or gcloud commands on my local laptop; everything is done in the console.
Questions:
How can anyone access the kubernetes-dashboard of a cluster created in the console?
The docs are unclear: are the dashboard components incorporated into the console itself? Are the docs out of sync with the GCP/GKE screens?
The tutorial says to run "kubectl proxy" and then open "http://localhost:8001/ui", but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access it using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI using 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in: kubeconfig and token.
Select token and paste the secret copied earlier.
Hope this helps.
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
The dashboard in running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to via an ssh tunnel. It's possible that it isn't working because the ssh tunnels are not running; you should verify that your project has newly created ssh rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
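As a rough way to check that point (the cluster name and zone below are placeholders), you can compare the firewall rules GKE created with the cluster's master endpoint IP:

# List project firewall rules; GKE-created rules are usually prefixed with "gke-".
gcloud compute firewall-rules list --filter="name~^gke-"

# Show the cluster endpoint (master) IP address to compare against.
gcloud container clusters describe <cluster-name> --zone <zone> --format="value(endpoint)"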
First:
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your Kubernetes dashboard endpoint by running:
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install the Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Kubernetes NFS: Using service name instead of hardcoded server IP address

I was able to get it working by following the NFS example in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
However, when I want to automate all the steps, I need to find the IP and update the nfs-pv.yaml PV file with a hard-coded IP address, as mentioned on the example page:
"Replace the invalid IP in the nfs PV. (In the future, we'll be able to tie these together using the service names, but for now, you have to hardcode the IP.)"
Now, I wonder: how can we tie these together using the service names?
Or is it not possible in the latest version of Kubernetes (as of today, the latest stable version is v1.6.2)?
I got it working after I added the kube-dns address to each minion/node where Kubernetes is running. After logging into each minion, update the resolv.conf file as follows:
cat /etc/resolv.conf
# Generated by NetworkManager
search openstacklocal localdomain
nameserver 10.0.0.10 # I added this line
nameserver 159.107.164.10
nameserver 153.88.112.200
....
I am not sure if it is the best way, but it works.
Any better solution is welcome.
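Once the nodes themselves can resolve cluster DNS (as above), a sketch of the PV using the Service name instead of an IP could look like this (the Service name and export path are assumptions; the mount is performed by the node, so this only works if the node's resolv.conf points at kube-dns):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    # DNS name of the in-cluster NFS server Service; resolvable by the node
    # only after kube-dns has been added as a nameserver on that node.
    server: nfs-server.default.svc.cluster.local
    path: "/"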
You can do this with the help of kube-dns.
Check whether its service is running or not:
kubectl get svc --namespace=kube-system
and check the kube-dns pods as well:
kubectl get pods --namespace=kube-system
You have to add the corresponding nameserver (the kube-dns ClusterIP) on each node in the cluster, as sketched below.
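A small sketch of that step (assuming the DNS service is named kube-dns in the kube-system namespace, which is the default):

# Find the ClusterIP of the kube-dns service.
DNS_IP=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')

# On each node, add it as a nameserver so the host can resolve *.svc.cluster.local.
echo "nameserver ${DNS_IP}" | sudo tee -a /etc/resolv.conf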
For more troubleshooting, follow this document:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/