Why use Rancher to create a virtual cluster in Kubernetes?

A manifest (YAML) with the Kubernetes resource type kind: Namespace can be applied through kubectl to create a virtual cluster.
In our environment, manifest YAMLs are applied using kubectl to create Kubernetes resource types (Deployment, Service, autoscaling, Ingress) under a given namespace.
But Rancher is used to create the Kubernetes resource type kind: Namespace (the virtual cluster).
What is the advantage of creating the Namespace resource using Rancher instead of applying a manifest YAML through kubectl?

Rancher uses the concept of a "Project", which is not present in vanilla Kubernetes and which allows you to assign RBAC roles, PodSecurityPolicies, etc. to a group of namespaces in an easy way.
If you are not using Rancher to create projects and namespaces, you have to assign all these Roles and PSPs yourself. For example, if you have a default restricted policy on your cluster, a namespace created by kubectl create namespace foo won't be able to run any pods by default; see https://rancher.com/docs/rancher/v2.5/en/admin-settings/pod-security-policies/
Namespaces that are not assigned to projects do not inherit PSPs, regardless of whether the PSP is assigned to a cluster or project. Because these namespaces have no PSPs, workload deployments to these namespaces will fail, which is the default Kubernetes behavior.
To sum it up, namespaces can be created using kubectl create namespace or manifests, but it might be cumbersome to make it all work well. Using Rancher to provision namespaces is easier to maintain and troubleshoot.
As for advantages, the ability to group namespaces under a "Project" and assign resources, PSPs, and roles to a group of namespaces with Rancher UI support is one of the main selling points of having Rancher in the first place. The Namespace objects themselves are basically the same as anywhere else.
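To illustrate what Rancher automates per project, here is a minimal sketch of the manual equivalent: create the namespace, then bind a PSP-granting ClusterRole inside it so workloads can actually start. The names (foo, restricted-psp-role) are placeholders, not Rancher's actual object names.

apiVersion: v1
kind: Namespace
metadata:
  name: foo
---
# Allow all service accounts in "foo" to use a restricted PSP; Rancher
# performs a similar binding for you when the namespace joins a project.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-restricted
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: restricted-psp-role   # placeholder: a ClusterRole granting "use" on the PSP
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:foo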

Related

Is it possible to create a Kubernetes service and pod in different namespaces

Is it possible to create a Kubernetes service and pod in different namespaces, for example, having myweb-svc pointing to the actual running myweb-pod, while myweb-svc and myweb-pod are in different namespaces?
You can use a YAML manifest to create both the pod and the service in their respective namespaces. You need to set the namespace field in the metadata section of both the Pod and the Service objects to specify the namespace in which each should be created.
Also, if you want to point your Service to a Service in a different namespace or on another cluster, you can use a Service without a pod selector.
Refer to the Kubernetes documentation on Understanding Kubernetes Objects for more information.
Kubernetes API objects that are connected together at the API layer generally need to be in the same namespace. So a Service can only select Pods in its own namespace; if a Pod references a ConfigMap, a Secret, or a PersistentVolumeClaim, those need to be in the same namespace as well.
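One common way to bridge namespaces is an ExternalName Service, one of the selector-less Service variants mentioned above. A hypothetical sketch, where frontend, backend, and myweb-svc are placeholder names and cluster.local is the default cluster domain:

apiVersion: v1
kind: Service
metadata:
  name: myweb-svc          # alias visible inside the "frontend" namespace
  namespace: frontend
spec:
  type: ExternalName
  # DNS CNAME pointing at the real Service in the "backend" namespace
  externalName: myweb-svc.backend.svc.cluster.local

Pods in frontend can then reach the backend Service under the plain name myweb-svc.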

Is it possible for a pod running in a statefulset to get the hostnames of all the pods running in a different statefulset?

I have a pod running in a statefulset, but it needs to know the hostnames or addresses of all pods running in another statefulset in order to communicate with them. The second statefulset is created by a separate Helm chart. Can the pod work this out dynamically? Can I inject this information into the pod through an env var, similar to setting .Status.ip?
Edit: Each statefulSet has its own headless service
As discussed in the comments, the way to go here is to use a Service resource, as this will give you a stable DNS name within the cluster to reach all the pods that are targeted by that service.
The DNS name for the service is:
- the service's name, if you access it from within the same namespace
- <my-service-name>.<namespace-name>.svc.cluster.local, if you access it from another namespace, where cluster.local is the cluster domain and may differ from cluster to cluster depending on the cluster's configuration
If you further need more configuration options, e.g. when you want to deploy your chart into different cloud environments where the cluster domain might actually differ, you can use kustomize.io to adjust your configuration at apply time.
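Since each statefulset already has a headless service (per the edit), the individual pods also get stable per-pod DNS entries. A minimal sketch of such a headless service, with all names (other-app, default) as placeholders:

apiVersion: v1
kind: Service
metadata:
  name: other-app
  namespace: default
spec:
  clusterIP: None        # headless: DNS resolves directly to the pod IPs
  selector:
    app: other-app       # must match the other StatefulSet's pod labels
  ports:
  - port: 8080

Each pod in the other statefulset is then reachable as <pod-name>.other-app.default.svc.cluster.local, and because StatefulSet pod names are predictable (<statefulset-name>-0, -1, ...), the consuming pod can derive all hostnames from the replica count.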

Is there a way to apply a different configmap for each pod generated by a daemonset?

I am using Filebeat as a daemonset, and I would like each generated pod to export to a single port for Logstash.
Is there an approach that can be used for this?
No, you cannot provide a different ConfigMap to the pods of the same daemonset or deployment. If you want each of your daemonset's pods to have a different configuration, you can mount a local volume (using hostPath) so that each pod takes its configuration from that path, which can differ on each node. Alternatively, you can deploy different daemonsets with different ConfigMaps and select different nodes for each of them.
As the DaemonSet documentation puts it:
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
...that is, a copy of a Pod based on a single template, and this is the reason why you cannot specify different ConfigMaps to be used by different Pods managed by the DaemonSet controller.
As an alternative, you can configure several different DaemonSets, where each one is responsible for running a copy of the Pod specified in its template only on specific nodes, as sketched below.
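A hedged sketch of that approach: one DaemonSet per node group, each with its own ConfigMap and a nodeSelector. All names and labels (filebeat-group-a, config-group: a) are placeholders, and the Filebeat image tag is an assumption.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-group-a
spec:
  selector:
    matchLabels:
      app: filebeat
      group: a
  template:
    metadata:
      labels:
        app: filebeat
        group: a
    spec:
      nodeSelector:
        config-group: a                  # runs only on nodes labeled config-group=a
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.13.0   # tag is an assumption
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          subPath: filebeat.yml
      volumes:
      - name: config
        configMap:
          name: filebeat-config-group-a  # a different ConfigMap per DaemonSet

Label the nodes accordingly (kubectl label nodes <node-name> config-group=a) and repeat the DaemonSet/ConfigMap pair for each group.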
Another alternative is using static pods:
It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These are called static pods. Unlike DaemonSet, static Pods cannot be managed with kubectl or other Kubernetes API clients. Static Pods do not depend on the apiserver, making them useful in cluster bootstrapping cases. Also, static Pods may be deprecated in the future.
The whole procedure for creating a static Pod is described in the Kubernetes documentation.
I hope it helps.
You can also keep the per-node configuration in a single ConfigMap and expose spec.nodeName to your pods as an environment variable. Each pod then knows which node it is running on and can decide which part of the config to load.
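A minimal sketch of the Downward API part of such a pod template; the container name and image are illustrative:

containers:
- name: filebeat
  image: docker.elastic.co/beats/filebeat:8.13.0   # tag is an assumption
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName   # injected by the Downward API

The entrypoint (or the config itself) can then key off $NODE_NAME to pick the right section of the shared ConfigMap.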

Kubeconfig for deploying to all namespaces in a k8s cluster

I am looking for instructions on how to generate a kubeconfig file that can deploy and delete my k8s deployments in all namespaces, and that also has permissions to create, delete, and view secrets in all namespaces.
The use case for this kubeconfig is to use it in Jenkins for performing deployments to a kube cluster.
I am aware of k8s service accounts with roles and rolebindings; however, it appears they can be scoped only to specific namespace(s).
Thanks
You should create a ClusterRole and a ClusterRoleBinding to grant access at the cluster level. Then, using a service account that has that cluster-level access, you should be able to do this across all namespaces.
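A minimal sketch, assuming a jenkins ServiceAccount in a ci namespace; all names, and the resource/verb lists, are illustrative and should be trimmed to what your pipeline actually needs:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-deployer
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: ci

The service account's token can then be embedded in the kubeconfig that Jenkins uses.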

Setting up monitoring within a namespace in Kubernetes

We have one Kubernetes cluster for the entire organisation, with namespaces for individual teams to host their projects. The problem is monitoring an individual namespace: since we only have access to our own namespace, we can't set up any monitoring for the pods, the containers, or the nodes on which our pods reside. Is there any way to monitor only the things within our namespace without touching the rest of the Kubernetes cluster?
Monitoring should be provided at a central level for all the basics of the cluster and more, but sure, you may just need limited scope. If you deploy Prometheus and configure only targets in your namespace, you should be OK. The same goes for any other solution.
For reference see https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Ckubernetes_sd_config%3E
# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
  names:
    [ - <string> ]
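For example, a scrape job restricted to pods in a single namespace could look like the following sketch, where the job name and my-team are placeholders:

scrape_configs:
- job_name: my-team-pods
  kubernetes_sd_configs:
  - role: pod          # discover pods...
    namespaces:
      names:
      - my-team        # ...but only in this namespace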