I am trying to create a deployment of 3 replicas of containers of the same image.
Each container needs to include its unique hostname in two places:
/etc/hosts (or it must be resolvable by DNS)
an application properties file and the command line
However, the hostname is not known until after the container has started, and even then I found that I can ping the IP of another container from one container, but NOT the container name.
I understand that I can write a script to update these, but is there a more generic way to get this done? This is not an uncommon task/requirement.
Thanks.
Related
We have to use two name servers in a pod deployed in a cluster managed by another team:
CoreDNS, for service name resolution of other pods inside the cluster
A custom DNS server, for some external DNS queries made by the application inside the pod
We have set dnsPolicy to ClusterFirst and also specified the IP of the custom DNS server as a nameserver in dnsConfig, with the option ndots = 1.
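A minimal sketch of the pod spec we use (the nameserver IP and names below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                    # placeholder name
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    nameservers:
      - 10.1.2.3                   # placeholder IP of the custom DNS server
    options:
      - name: ndots
        value: "1"
  containers:
    - name: app
      image: app-image:latest      # placeholder image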
After deploying the pod we got the correct entries in /etc/resolv.conf, with the CoreDNS entry first. But when the application tries to resolve some domain, it first queries with the absolute name (because of the option ndots=1 in /etc/resolv.conf) against the first name server listed in /etc/resolv.conf. After that fails, it appends the search string that was automatically inserted when we set dnsPolicy to ClusterFirst, queries the first name server again, and then tries the second name server.
Why does it not try the absolute-name query with the second name server after the failure from the first name server, while when it appends the search string it queries both name servers in sequential order?
Is there any way we can insert the custom DNS entry at the top?
Note: we cannot use the forward functionality of CoreDNS, as this DNS server is used by other pods/services in the cluster.
Is it possible to add a custom DNS entry (type A) inside Kubernetes 1.19? I'd like to be able to do:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
host custom-dns-entry.example.com
custom-dns-entry.example.com has address 10.0.0.72
with custom-dns-entry.example.com not being registered inside my upstream DNS server (and also not having a corresponding k8s service at all).
The following example, https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/, seems to provide a solution, but it is a bit complex and may be deprecated. Is there a simpler way to do it, for example with a few kubectl commands?
The reason I need this is that I run my workload on kind, so my ingress DNS record is not registered in the upstream DNS, and some pods require access to this ingress DNS record from inside (for example, to configure a JavaScript client served by the pods, which will effectively access the ingress DNS record from outside...). However, I cannot modify the workload code as I am not maintaining it, so adding this custom DNS entry seems to be a reasonable solution.
CoreDNS would be the place to do this. You can also do similar-ish things using ExternalName-type Services but that wouldn't give you full control over the hostname (it would be a Service name like anything else).
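For example, a minimal sketch using the CoreDNS hosts plugin, assuming a standard coredns ConfigMap in kube-system (the IP and hostname are the ones from the question; the rest of the Corefile is left untouched):
kubectl -n kube-system edit configmap coredns
Then, inside the .:53 { ... } block of the Corefile, add a hosts stanza:
    hosts {
        10.0.0.72 custom-dns-entry.example.com
        fallthrough
    }
If the reload plugin is not enabled in your Corefile, restart CoreDNS so it picks up the change:
kubectl -n kube-system rollout restart deployment coredns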
In the CKAD exam I have been asked to SSH to another node in the cluster to do some kubectl operations like kubectl get all, but when I do that I get the error below:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I tried sudo, but it did not work, and I checked kubectl config view (I can see an empty config on the client node).
How do I do this?
You need to list the available nodes in the cluster, but first, make sure you're using the correct context:
k get nodes
You will get the available nodes, like:
node-0 node-1 (check which one is the worker node, or if you were asked to SSH to a specific node then copy-paste its name), then:
ssh node-0
This is usually to create some files or a directory (e.g. to persist data); once you finish, return to the master node to complete your task.
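A hypothetical sketch of that flow (the node name and path are placeholders for whatever the task specifies):
ssh node-0
sudo mkdir -p /opt/exam-data       # create whatever files/directories the task asks for (placeholder path)
exit                               # back on the node where kubectl is configured
kubectl get all                    # kubectl works again from here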
I am new to Kubernetes and I have some functionality that I need to implement.
I need to set an env variable for only one Docker container in a service.
For example: if I have 3 user containers, then 1 of them needs to have an env variable named master.
I did it with Nomad. Nomad sets an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container; this way I checked that if the container index was 0 then it is the master.
I tried to find out whether Kubernetes has a similar variable but didn't find one anywhere.
I also tried to find an alternative solution on Google but ended up with nothing.
Any ideas of how I can achieve this?
If you want sequential indexes, StatefulSet is your solution. Otherwise look up Kubernetes leader election; there are ways to solve it with e.g. a sidecar container performing leader election and exposing its status via an HTTP call, so you can curl localhost:port and see whether the pod is the master or not.
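A minimal sketch of the StatefulSet route (the names and image are placeholders): each pod gets a stable, ordered name (users-0, users-1, users-2), which you can expose to the container via the downward API and check at startup.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: users                        # placeholder; pods become users-0, users-1, users-2
spec:
  serviceName: users                 # assumes a headless Service with this name exists
  replicas: 3
  selector:
    matchLabels:
      app: users
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
        - name: users
          image: users-image:latest  # placeholder image
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
At startup the container can then check the ordinal suffix, e.g. [ "${POD_NAME##*-}" = "0" ], and treat that replica as the master.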
Background:
Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling update with no downtime, great! But we want this port to be publicly accessible as well.
Question:
Is there a good way to update a service without downtime (for example by adding an additional port to expose)? If I just do:
kubectl replace -f my-service-with-an-additional-port.json
I get the following error message:
Replace failed: spec.clusterIP: invalid value '': field is immutable
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can direct traffic to pods using different port numbers.
Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
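A minimal sketch of the named-port approach (the labels, names and image are placeholders). In the pod template, name the ports:
    containers:
      - name: web
        image: web-image:latest      # placeholder image
        ports:
          - name: http               # the Service refers to this name
            containerPort: 8080
          - name: extra              # hypothetical name for the new port
            containerPort: 8081
Then the Service targets the ports by name instead of by number:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web                         # placeholder label
  ports:
    - name: http
      port: 8080
      targetPort: http
    - name: extra
      port: 8081
      targetPort: extra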
In such a case you can create a second service to expose the second port; it won't conflict with the other one and you'll have no downtime.
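A sketch of such a second Service, assuming the pods carry the (placeholder) label app: web and already expose 8081:
apiVersion: v1
kind: Service
metadata:
  name: web-extra                    # hypothetical name for the additional Service
spec:
  type: LoadBalancer
  selector:
    app: web                         # same selector as the existing Service
  ports:
    - port: 8081
      targetPort: 8081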
If you have more than one pod running for the same service, you may use Kubernetes Engine within the Google Cloud Console as follows:
Under "Workloads", select your Replication Controller. Within that screen, click "EDIT" then update and save your replication controller details.
Under "Discover & Load Balancing", select your Service. Within that screen, click "EDIT" then update and save your service details. If you changed ports you should see those reflecting under the column "Endpoints" when you've finished editing the details.
Assuming you have at least two pods running on a machine (and a restart policy of Always), if you wanted to update the pods with the new configuration or container image:
Under "Workloads", select your Replication Controller. Within that screen, scroll down to "Managed pods". Select a pod, then in that screen click "KUBECTL" -> "Delete". Note, you can do the same with the command line: kubectl delete pod <podname>. This would delete and restart it with the newly downloaded configuration and container image. Delete each pod one at a time, making sure to wait until a pod has fully restarted and working (i.e. check logs, debug) etc, before deleting the next.