I was able to get it working by following the NFS example in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
However, when I want to automate all the steps, I need to find the IP and update the nfs-pv.yaml PV file with a hard-coded IP address, as the example page itself notes:
Replace the invalid IP in the nfs PV. (In the future, we'll be able to tie these together using the service names, but for now, you have to hardcode the IP.)
Now I wonder: how can we tie these together using the service names?
Or is it not possible in the latest version of Kubernetes (as of today, the latest stable version is v1.6.2)?
I got it working after adding the kube-dns address to each minion/node where Kubernetes is running. After logging in to each minion, update the resolv.conf file as follows:
cat /etc/resolv.conf
# Generated by NetworkManager
search openstacklocal localdomain
nameserver 10.0.0.10 # I added this line
nameserver 159.107.164.10
nameserver 153.88.112.200
....
I am not sure if this is the best way, but it works.
Any better solution is welcome.
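With the nodes able to resolve cluster DNS this way, the nfs-pv.yaml from the question can then point at the NFS server by its service name instead of a hard-coded IP; a minimal sketch, assuming an NFS service named nfs-server in the default namespace exporting /:
# nfs-pv.yaml -- the service name and export path are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # resolvable only because the node's resolv.conf now points at kube-dns
    server: nfs-server.default.svc.cluster.local
    path: "/"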
You can do this with the help of kube-dns.
Check whether its service is running:
kubectl get svc --namespace=kube-system
and check the kube-dns pod as well:
kubectl get pods --namespace=kube-system
You have to add the corresponding nameserver (the kube-dns service IP) on each node in the cluster; a sketch of how to find it follows the link below.
For more troubleshooting, follow this document:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
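For example, to find the IP you need to add as a nameserver on each node, you can read the kube-dns Service's cluster IP (a sketch; the service is normally named kube-dns in the kube-system namespace):
kubectl get svc kube-dns --namespace=kube-system -o jsonpath='{.spec.clusterIP}'
# e.g. 10.0.0.10 -- add this as a nameserver in each node's /etc/resolv.conf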
Let's suppose I have pods deployed in a Kubernetes cluster and I have exposed them via a NodePort service. Is there a way to get the pod's external endpoint in one command?
For example:
kubectl <cmd>
Response: <IP_OF_NODE_HOSTING_THE_POD>:30120 (30120 being the NodePort external port)
The requirement is a complex one and requires querying multiple objects. I am going to explain with assumptions. Additionally, if you need the internal address, you can use the Endpoints object (ep), because target resolution is done at the endpoint level.
Assumption: 1 Pod and 1 NodePort service (32320->80), both named nginx.
The following command works under the stated assumptions, and I hope it gives you an idea of the best approach to follow for your requirement.
Note: This answer is valid based on the assumptions stated above. However, for a more generalized solution, I recommend using -o jsonpath='{range.. for this type of complex query. For now, the following command will work.
command:
kubectl get pods,ep,svc nginx -o jsonpath=' External:http://{..status.hostIP}{":"}{..nodePort}{"\n Internal:http://"}{..subsets..ip}{":"}{..spec.ports..port}{"\n"}'
Output:
External:http://192.168.5.21:32320
Internal:http://10.44.0.21:80
If the NodePort service name is known, then something like kubectl get svc <svc_name> -o=jsonpath='{.spec.clusterIP}:{.spec.ports[0].nodePort}' should work.
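If you do not mind two lookups, a plain shell composition works as well; a sketch assuming the Pod and the NodePort Service are both named nginx:
NODE_IP=$(kubectl get pod nginx -o jsonpath='{.status.hostIP}')
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://${NODE_IP}:${NODE_PORT}"   # e.g. http://192.168.5.21:32320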
I set up a private K8s cluster with RKE 1.2.2, so my K8s version is 1.19. We have some internal services, and they need to access each other using custom FQDNs instead of plain service names. As I searched the web, the only solution I found is adding rewrite records to the CoreDNS ConfigMap, as described in this REF. However, that solution requires manual configuration, and I want to define a record automatically during service setup. Is there any solution for this automation? Does CoreDNS have an API to add or delete rewrite records?
Note 1: I also tried mounting the CoreDNS ConfigMap and updating it via another pod, but the content is mounted read-only.
Note 2: Someone proposed running kubectl get cm -n kube-system coredns -o yaml | sed ... | kubectl apply .... However, I want to automate it during service setup, in a pod, or in an initContainer.
Note 3: I wish there were something like hostAliases for services, something called serviceAliases for internal services (ClusterIP).
Currently, there is no ready solution for this.
The only thing that comes to my mind is to use a MutatingAdmissionWebhook. It would need to catch the moment when a new Kubernetes Service is created and then modify the ConfigMap for CoreDNS, as described in the CoreDNS documentation.
After that, you would need to reload the CoreDNS configuration to apply the new settings from the ConfigMap. To achieve that, you can use the reload plugin for CoreDNS. More details about this plugin can be found here.
Instead of the above, you can consider using a sidecar container for CoreDNS, which will send a SIGUSR1 signal to the CoreDNS container.
An example of this method can be found in this GitHub thread.
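For reference, the piece such a webhook or sidecar would have to manage is the rewrite line in the coredns ConfigMap; a sketch under assumed names (custom.example.com and a service named myservice in the default namespace), with the reload plugin enabled:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        # the record your automation would add or remove (names are assumptions)
        rewrite name custom.example.com myservice.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        # re-reads the Corefile when the ConfigMap changes, no restart needed
        reload
        cache 30
        loadbalance
    }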
When I exec into a container I see an /etc/resolv.conf file that looks like this:
$ cat /etc/resolv.conf
search namespace.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
How can I append to the search domains for all containers that get deployed, such that the search domains will include extra domains? E.g. if I wanted to add foo.com and bar.com by default for any pod, how can I update the search line to look like the one below?
search namespace.svc.cluster.local svc.cluster.local cluster.local foo.com bar.com
Notes:
This is a self-managed k8s cluster. I am able to update the DNS/cluster configuration however I need to. I have already updated the coredns component to resolve my nameservers correctly, but this setting needs to be applied to each pod, I would imagine.
I have looked at the pod spec, but this wouldn't be a great solution, as it would need to be added to every pod's (or Deployment/Job/ReplicaSet/etc.) manifest in the system. I need this to be applied to all pods by default.
Due to the way hostnames are returned in numerous existing services, I cannot reasonably expect hostnames to be fully qualified domain names. This is an effort to maintain backwards compatibility with many services we already have (e.g. an LDAP request might return host fizz, but the lookup will need to fully resolve to fizz.foo.com). This is the way bare metal machines and VMs are normally configured here.
I found a possible solution, but I won't mark this as correct myself, because it is not directly specific to k8s, but rather to k3s. I might come back later and provide more details.
In my case my test cluster was running k3s, which I was assuming would act mostly the same as k8s. The way my environment was set up, my normal /etc/resolv.conf was being replaced by a new file on the node. I was able to circumvent this issue by supplying --resolv-conf, where the file looks like this:
$ cat /somedir/resolv.conf
search foo.com bar.com
nameserver 8.8.8.8
Then start the server with /bin/k3s server --resolv-conf=/somedir/resolv.conf
Now when pods are spawned, k3s will parse this file for the search line and automatically append the search domains to whatever pod is created.
I'm not sure if I'm going to run into this issue again when I try this on actual k8s, but at least this gets me back up and running!
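For what it's worth, the regular kubelet exposes the same setting, so a comparable setup should be possible on plain k8s as well; a sketch of the relevant KubeletConfiguration field (the file paths are assumptions):
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /somedir/resolv.conf
# equivalent to starting the kubelet with --resolv-conf=/somedir/resolv.conf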
I am going to deploy Keycloak on my K8S cluster and as a database I have chosen PostgreSQL.
To meet the business requirements, we have to add additional features to Keycloak, for example a custom theme. That means that for every change to Keycloak we are going to trigger the CI/CD pipeline. We use Drone for CI and ArgoCD for CD.
In the pipeline, before it hits the CD part, we would like to ensure that PostgreSQL is up and running.
The question is: is there a tool for K8s with which we can validate whether particular services are up and running?
"Up and running" != "Exists"
1: To check if a service exists, just do a kubectl get service <svc>
2: To check if it has active endpoints do kubectl get endpoints <svc>
3: You can also check if the backing pods are in a Ready state.
2 & 3 require a readiness probe to be properly configured on the pod/deployment; a scripted example follows below.
Radek is right in his answer, but I would like to expand on it with the help of the official docs. To make sure that the service exists and is working properly, you need to:
Make sure that Pods are actually running and serving: kubectl get pods -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
Check if Service exists: kubectl get svc
Check if Endpoints exist: kubectl get endpoints
If needed, check if the Service is working by DNS name: nslookup hostnames (from a Pod in the same Namespace) or nslookup hostnames.<namespace> (if it is in a different one)
If needed, check if the Service is working by IP: for i in $(seq 1 3); do wget -qO- <IP:port>; done
Make sure that the Service is defined correctly: kubectl get service <service name> -o json
Check if kube-proxy is working: ps auxw | grep kube-proxy
If any of the above is causing a problem, you can find the troubleshooting steps in the link above.
Regarding your question in the comments: I don't think there is an easier way, considering that you need to make sure that everything is working fine. You can skip some of the steps, but that would depend on your use case.
I hope it helps.
In Kubernetes (kubectl) 1.10 or Istio (istioctl) 1.0:
I know I can list services with kubectl get svc, and I can figure out the namespace and from there add .svc.cluster.local.
Is it possible to list the internal DNS names?
Thanks
Unfortunately, you would have to list the DNS of each pod manually, as there is no such command available for either Istio or Kubernetes. You can, however, look at the following link, which will guide you through creating a shell in a running container so that you can list the internal DNS manually.
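That said, since every Service gets a predictable name of the form <service>.<namespace>.svc.cluster.local, you can approximate the list from the API instead of checking pod by pod; a sketch assuming the default cluster.local domain:
kubectl get svc --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}.{.metadata.namespace}.svc.cluster.local{"\n"}{end}'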