I am trying to set up MongoDB and MongoDB monitoring agent on a kubernetes cluster.
The monitoring agent first queries the service endpoint for the MongoDB instance and receives the instance's hostname in the response. It then stops using the service endpoint and connects using that hostname instead, which fails because the container's hostname cannot be resolved.
I think a headless service could achieve this, but using a headless service is not an option in my case.
Is there any way to enable hostname resolution for containers/pods in Kubernetes or inject custom DNS records in kube-dns?
You should create a StatefulSet for your use case, because you need each pod to have a stable, unique identity. To quote the docs:
StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage. The identity sticks to the Pod, regardless of which node it’s (re)scheduled on.
So if you are currently using a Deployment object for MongoDB, change it to a StatefulSet.
Your pods will then be resolvable by name as well.
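As a rough sketch (names, labels, replica count, and image tag here are placeholders, not your actual setup), a MongoDB StatefulSet could look like this; note that spec.serviceName must reference the Service that governs the set:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb                # placeholder name
spec:
  serviceName: mongodb         # the governing Service for the pods' network identity
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:4.4       # example image tag
        ports:
        - containerPort: 27017

Each pod then gets a stable name of the form mongodb-0, mongodb-1, and so on, and that identity sticks across rescheduling.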
Docs:
StatefulSet: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
StatefulSet Basics: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
Related
I am testing StatefulSets with replicas. Is there a way to force a service on each replica? For example, referring to the following article:
https://itnext.io/introduction-to-stateful-services-kubernetes-6018fd99338d
It shows a headless service created on top of the pods, but I don't see a way to force a connection to a specific pod, e.g. the first pod (pod-0) or the second pod (pod-1).
You can access the pods directly, or you can create a headless service as you describe. This headless service is not created automatically; it is up to you to create it.
you are responsible for creating the Headless Service responsible for the network identity of the pods.
From StatefulSet - Stable Network Identity
Also see StatefulSet Basics - Headless Services for how to create a headless service by setting clusterIP: "None".
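A minimal sketch of such a headless Service (name and port are placeholders; the name must match the StatefulSet's spec.serviceName):

apiVersion: v1
kind: Service
metadata:
  name: mongodb            # must match spec.serviceName in the StatefulSet
spec:
  clusterIP: None          # headless: DNS returns the individual pod IPs
  selector:
    app: mongodb           # must match the pod labels
  ports:
  - port: 27017

With this in place each replica is addressable individually as <pod-name>.<service-name>.<namespace>.svc.cluster.local, e.g. mongodb-0.mongodb.default.svc.cluster.local, so you can deliberately target pod-0 or pod-1.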
I have a pod running in a StatefulSet, but it needs to know the hostnames or addresses of all pods running in another StatefulSet in order to communicate with them. The second StatefulSet is created by a separate Helm chart. Can the pod work this out dynamically? Can I inject this information into the pod through an env var, similar to exposing .Status.ip?
Edit: Each statefulSet has its own headless service
As discussed in the comments, the way to go here is to use a Service resource, as this gives you a stable DNS name within the cluster for reaching all the pods targeted by that service.
The DNS name for the service is:
the service's name, if you access it from within the same namespace
<my-service-name>.<namespace-name>.svc.cluster.local if you access it from another namespace, where cluster.local is the cluster domain, which can differ from cluster to cluster depending on the cluster's configuration
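For illustration (all names here are made up: other-db stands for the other chart's headless Service and other-ns for its namespace), you could hand those DNS names to your pod as a plain environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: client                  # placeholder
spec:
  containers:
  - name: app
    image: busybox:1.36         # example image
    command: ["sh", "-c", "echo $PEER_HOSTS; sleep 3600"]
    env:
    - name: PEER_HOSTS          # hypothetical variable name
      value: "other-db-0.other-db.other-ns.svc.cluster.local,other-db-1.other-db.other-ns.svc.cluster.local"

Alternatively, because the Service is headless, resolving other-db.other-ns.svc.cluster.local itself returns the IPs of all ready pods behind it, which lets the pod discover the members dynamically.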
If you need further configuration options, e.g. when you want to deploy your chart into different cloud environments where the cluster domain actually differs, you can use kustomize.io to adjust your configuration at apply time.
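One rough way to do that with kustomize (the file layout and names are assumptions) is to keep the environment-specific value in a generated ConfigMap per overlay:

# overlays/gke/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base                      # the shared manifests
configMapGenerator:
- name: cluster-config            # hypothetical ConfigMap your pods read the domain from
  literals:
  - CLUSTER_DOMAIN=cluster.local  # overridden per environment/cluster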
I am trying to connect to the MongoDB replica set that is hosted in another Kubernetes cluster of the same GCP project. I want to use DNS names in the connection string.
I was able to connect to mongodb hosted in the same cluster using this connection string:
mongodb://<pod-name>.<service-name>.<namespace>.svc.cluster.local:27017,<pod-name>.<service-name>.<namespace>.svc.cluster.local:27017/?replicaSet=<rs-name>
So my question is:
Is it possible to use the DNS name to reference the pod in another cluster? I looked through this document and it states:
Any pods created by a Deployment or DaemonSet have the following DNS resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.
But I am not sure what the format of the cluster-domain.example part is.
You cannot use Kubernetes Service DNS (CoreDNS) to access a pod from outside the Kubernetes cluster, even from another Kubernetes cluster. You need to expose the MongoDB pods via a LoadBalancer (recommended) or NodePort type Service and access them using the LoadBalancer endpoint or NodeIP:NodePort from the other cluster.
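A rough sketch of such a LoadBalancer Service (the name and labels are assumptions; on GKE this provisions an external TCP load balancer):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-external       # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: mongodb               # must match your MongoDB pod labels
  ports:
  - port: 27017
    targetPort: 27017

Once an EXTERNAL-IP has been assigned, you use that address in the connection string from the other cluster; for a replica set you typically expose each member through its own Service so every member stays individually reachable.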
I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node are they stored?
Kind regards,
Charles
If you define a Service for your app, you can access it from outside the cluster using that Service.
Services come in several types, including NodePort, where you can access that port on any cluster node and reach the Service regardless of the actual location of the pod (a sample NodePort manifest is sketched below).
You can access the endpoints, or the actual pod ports, from inside the cluster as well, but not from outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery, though:
Internal service discovery
External service discovery
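A rough sketch of a NodePort Service (name, labels, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport       # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app               # must match the pod labels
  ports:
  - port: 80                  # cluster-internal Service port
    targetPort: 8080          # container port
    nodePort: 30080           # optional; auto-assigned from 30000-32767 if omitted

The app is then reachable on <any-node-ip>:30080 from outside the cluster, regardless of which node the pod actually runs on.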
You cannot "access" a pods container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptable rule(s).
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could, however, manually create an iptables rule forwarding traffic to the local container port that Docker has exposed.
Hope this helps! If you still have any questions drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
If you need to access it from anywhere, you should use a Service, but for that you need to have a Deployment created.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods selected by a label selector. An individual pod can still be accessed via its IP irrespective of any service. Further, a service can be created to access the pods from outside the cluster (a NodePort service).
I am trying to understand k8s and helm.
When I create a helm chart, there are 2 files: service.yaml and deployment.yaml. Both of them have a name field.
If I understand correctly, the deployment is responsible for managing the pods, ReplicaSets, and so on, which the service then exposes.
Basically, why am I allowed to use a separate name for the service and for the deployment? In what scenario would we want these two names to differ? Can a deployment have more than one service?
The "service" creates a persistent IP address in your cluster which is how everything else connects it. The Deployment creates a ReplicaSet, which creates a Pod, and this Pod is the backend for that service. There can be more than 1 pod, in which case the service load balances, and these pods can change over time, change IP's, but your service remains constant.
Think of the service as a load balancer which points to your pods. It's analogous to interfaces and implementations. The service is like an interface, which is backed by the pods, the impementations.
The mapping is m:n. You can have multiple services backed by a single pod, or multiple pods backing a single service.
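A minimal sketch (all names and labels are made up) showing that the linkage is via labels rather than names, so the Service and Deployment names are free to differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # the Deployment's name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web              # the label the Service selects on
    spec:
      containers:
      - name: nginx
        image: nginx:1.25     # example image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service           # a different name is perfectly fine
spec:
  selector:
    app: web                  # matches the pod labels, not the Deployment name
  ports:
  - port: 80
    targetPort: 80

Nothing stops you from adding a second Service with yet another name and the same or an overlapping selector, which is the m:n mapping described above.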