Kubernetes Operator to read watch namespace from a ConfigMap

I have an operator built with Kubebuilder that reads a "WATCH_NAMESPACE" environment variable to determine which namespaces to watch. This is how the current setup works:
namespaces := os.Getenv("WATCH_NAMESPACE")
if strings.Contains(namespaces, ",") {
    setupLog.Info("Operator will listen to the specific namespaces: " + namespaces)
    options.NewCache = cache.MultiNamespacedCacheBuilder(strings.Split(namespaces, ","))
} else {
    log.Info("Operator will listen only one namespace or all namespace: " + namespaces)
    options.Namespace = namespaces
}

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), options)
if err != nil {
    log.Error(err, "unable to start manager")
    os.Exit(1)
}

if err = (&dataplatformcontroller.DruidReconciler{
    ...
}
This works fine.
But the problem is that every time we need to add a new namespace to watch, we have to restart the operator. I believe the best option here is to watch a ConfigMap and read the namespace list from it instead.
But I am not sure how to proceed with this. Any suggestions, documentation, or links would be helpful.

Related

How to use the Kubernetes client-go server side apply functionality properly?

When I run the code below using the client-go library, I get an inscrutable error. What am I doing wrong?
ctx := context.TODO()
ns := applycorev1.NamespaceApplyConfiguration{
    ObjectMetaApplyConfiguration: &applymetav1.ObjectMetaApplyConfiguration{
        Name: to.StringPtr("foobar"),
    },
}
if _, err := kubeClient.CoreV1().Namespaces().Apply(ctx, &ns, v1.ApplyOptions{}); err != nil {
    panic(err)
}
Yields the very unhelpful error:
panic: PatchOptions.meta.k8s.io "" is invalid: fieldManager: Required value: is required for apply patch
What is the correct way to send an Apply operation to the API server in Kube using client-go?
At a minimum, you should set FieldManager in your ApplyOptions.
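For illustration, a minimal sketch of that fix, reusing kubeClient and the applycorev1 import from the question (metav1 here is k8s.io/apimachinery/pkg/apis/meta/v1, i.e. the v1 in the question; the field-manager name "my-controller" is arbitrary):

ctx := context.TODO()
// applycorev1.Namespace is a generated helper equivalent to building the
// NamespaceApplyConfiguration by hand as above.
ns := applycorev1.Namespace("foobar")
if _, err := kubeClient.CoreV1().Namespaces().Apply(ctx, ns, metav1.ApplyOptions{
    FieldManager: "my-controller", // required for server-side apply
    Force:        true,            // optional: take ownership of conflicting fields
}); err != nil {
    panic(err)
}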
I am also trying this out, for now I am referring to https://ymmt2005.hatenablog.com/entry/2020/04/14/An_example_of_using_dynamic_client_of_k8s.io/client-go

OPA/Rego: Ensure that every Service in Helm chart has exactly one matching Pod

I would like to check that every Service in a rendered Helm chart has exactly one matching Pod.
A Pod-to-Service association exists when every entry specified in a Service's spec.selector object is reflected in a Pod's metadata.labels object (which can have additional keys).
The following policy is tested using Conftest by running conftest test --combine {YAML_FILE} and checks that every Service has at least one matching Pod. I'm completely unsure how to transform this so that it checks for exactly one matching Pod.
package main

import future.keywords.every

in_set(e, s) { s[e] }

get_pod(resource) := pod {
    in_set(resource.kind, {"Deployment", "StatefulSet", "Job"})
    pod := resource.spec.template
}

# ensure that every service has at least one matching pod
# TODO: ensure that every service has exactly one matching pod
deny_service_without_matching_pod[msg] {
    service := input[_].contents
    service.kind == "Service"
    selector := object.get(service, ["spec", "selector"], {})
    pods := { p | p := get_pod(input[_].contents) }
    every pod in pods {
        labels := object.get(pod, ["metadata", "labels"], {})
        matches := { key | some key; labels[key] == selector[key] }
        count(matches) != count(selector)
    }
    msg := sprintf("service %s has no matching pod", [service.metadata.name])
}
Marginal note: The get_pod function doesn't retrieve all PodTemplates that can possibly occur in a Helm chart. Other checks are in place to keep the Kubernetes API-surface of the Helm chart small - so in this case, Pods can only occur in Deployment, StatefulSet and Job.
Maybe there are rego experts here that can chime in and help. That would be very appreciated! 😀
Since there's no sample data provided, this is untested code. It should work though :)
package main

import future.keywords.in

pods := { pod |
    resource := input[_].contents
    resource.kind in {"Deployment", "StatefulSet", "Job"}
    pod := resource.spec.template
}

services := { service |
    service := input[_].contents
    service.kind == "Service"
}

pods_matching_selector(selector) := { pod |
    selector != {}
    some pod in pods
    labels := pod.metadata.labels
    some key
    labels[key] == selector[key]
}

deny_service_without_one_matching_pod[msg] {
    some service in services
    selector := object.get(service, ["spec", "selector"], {})
    matching_pods := count(pods_matching_selector(selector))
    matching_pods != 1
    msg := sprintf(
        "service %s has %d matching pods, must have exactly one",
        [service.metadata.name, matching_pods]
    )
}

How to trigger a rollout restart on deployment resource from controller-runtime

I have been using kubebuilder to write a custom controller, and I am aware of the Get(), Update(), and Delete() methods that it provides. But now I am looking for a method that mimics the behaviour of kubectl rollout restart deployment. If there is no such direct method, then I am looking for the correct way to mimic it.
type CustomReconciler struct {
    client.Client
    Log    logr.Logger
    Scheme *runtime.Scheme
}

func (r *CustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    configMap := &v1.ConfigMap{}
    err := r.Get(ctx, req.NamespacedName, configMap)
    if err != nil {
        logger.Error(err, "Failed to GET configMap")
        return ctrl.Result{}, err
    }
Say in the above code I read a Deployment name from the ConfigMap and want to rollout-restart that Deployment, as follows:
    val := configMap.Data["config.yml"]
    config := Config{}
    if err := yaml.Unmarshal([]byte(val), &config); err != nil {
        logger.Error(err, "failed to unmarshal config data")
        return ctrl.Result{}, err
    }

    // Need equivalent of following
    // r.RolloutRestart(config.DeploymentName)
In all cases where you wish to replicate kubectl behavior, the answer is always to increase its verbosity and it'll show you exactly -- sometimes down to the wire payloads -- what it is doing.
For rollout restart, one will find that it just bumps an annotation on the Deployment/StatefulSet/whatever, which makes the object "different" and triggers a new rollout.
You can squat on their annotation, or you can make up your own, or you can use a label change -- practically any "meaningless" change will do.
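A minimal sketch of that approach with the controller-runtime client, assuming the imports appsv1 "k8s.io/api/apps/v1", "k8s.io/apimachinery/pkg/types", "sigs.k8s.io/controller-runtime/pkg/client", and "time", and assuming the Deployment named in config.DeploymentName lives in req.Namespace:

    // Fetch the target Deployment named in the ConfigMap.
    deploy := &appsv1.Deployment{}
    key := types.NamespacedName{Namespace: req.Namespace, Name: config.DeploymentName}
    if err := r.Get(ctx, key, deploy); err != nil {
        return ctrl.Result{}, err
    }

    // Bump the same pod-template annotation kubectl uses; any "meaningless"
    // change to the template would work just as well.
    patch := client.MergeFrom(deploy.DeepCopy())
    if deploy.Spec.Template.Annotations == nil {
        deploy.Spec.Template.Annotations = map[string]string{}
    }
    deploy.Spec.Template.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)

    if err := r.Patch(ctx, deploy, patch); err != nil {
        return ctrl.Result{}, err
    }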

Why does client.Update(...) ignore non-primitive values?

I'm trying to modify the Spec of non-owned objects as part of the Reconcile of my Custom Resource, but it seems like it ignores any fields that are not primitives. I am using controller-runtime.
I figured since it was only working on primitives, maybe it's an issue related to DeepCopy. However, removing it did not solve the issue, and I read that any Updates on objects have to be on deep copies to avoid messing up the cache.
I also tried setting client.FieldOwner(...) since it says that that's required for Updates that are done server-side. I wasn't sure what to set it to, so I made it req.NamespacedName.String(). That did not work either.
Here is the Reconcile loop for my controller:
func (r *MyCustomObjectReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    // ...
    var myCustomObject customv1.MyCustomObject
    if err := r.Get(ctx, req.NamespacedName, &myCustomObject); err != nil {
        log.Error(err, "unable to fetch ReleaseDefinition")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    // ...
    deployList := &kappsv1.DeploymentList{}
    labels := map[string]string{
        "mylabel": myCustomObject.Name,
    }
    if err := r.List(ctx, deployList, client.MatchingLabels(labels)); err != nil {
        log.Error(err, "unable to fetch Deployments")
        return ctrl.Result{}, err
    }

    // make a deep copy to avoid messing up the cache (used by other controllers)
    myCustomObjectSpec := myCustomObject.Spec.DeepCopy()

    // the two fields of my CRD that affect the Deployments
    port := myCustomObjectSpec.Port           // type: *int32
    customenv := myCustomObjectSpec.CustomEnv // type: map[string]string

    for _, dep := range deployList.Items {
        newDeploy := dep.DeepCopy() // already returns a pointer

        // Do these things:
        // 1. replace first container's containerPort with myCustomObjectSpec.Port
        // 2. replace first container's Env with values from myCustomObjectSpec.CustomEnv
        // 3. Update the Deployment
        container := newDeploy.Spec.Template.Spec.Containers[0]

        // 1. Replace container's port
        container.Ports[0].ContainerPort = *port

        envVars := make([]kcorev1.EnvVar, 0, len(customenv))
        for key, val := range customenv {
            envVars = append(envVars, kcorev1.EnvVar{
                Name:  key,
                Value: val,
            })
        }
        // 2. Replace container's Env variables
        container.Env = envVars

        // 3. Perform update for deployment (port works, env gets ignored)
        if err := r.Update(ctx, newDeploy); err != nil {
            log.Error(err, "unable to update deployment", "deployment", dep.Name)
            return ctrl.Result{}, err
        }
    }
    return ctrl.Result{}, nil
}
The Spec for my CRD looks like:
// MyCustomObjectSpec defines the desired state of MyCustomObject
type MyCustomObjectSpec struct {
    // CustomEnv is a list of environment variables to set in the containers.
    // +optional
    CustomEnv map[string]string `json:"customEnv,omitempty"`

    // Port is the port that the backend container is listening on.
    // +optional
    Port *int32 `json:"port,omitempty"`
}
I expected that when I kubectl apply a new CR with changes to the Port and CustomEnv fields, it would modify the deployments as described in Reconcile. However, only the Port is updated, and the changes to the container's Env are ignored.
The problem was that I needed a pointer to the Container I was modifying.
Doing this instead worked:
container := &newDeploy.Spec.Template.Spec.Containers[0]
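For anyone wondering why Port appeared to update while Env did not: indexing a slice of structs in Go yields a copy, so writes to the copy's own fields are lost, but the copy's Ports field is a slice header that still points at the same backing array. A small self-contained sketch of those semantics (hypothetical values, only corev1 from k8s.io/api/core/v1 is needed):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    containers := []corev1.Container{
        {Name: "app", Ports: []corev1.ContainerPort{{ContainerPort: 80}}},
    }

    c := containers[0]                                   // a copy of the struct
    c.Env = []corev1.EnvVar{{Name: "FOO", Value: "bar"}} // lost: only the copy changes
    c.Ports[0].ContainerPort = 8080                      // visible: shared backing array

    fmt.Println(len(containers[0].Env))               // 0    -> Env change was ignored
    fmt.Println(containers[0].Ports[0].ContainerPort) // 8080 -> port change "worked"

    p := &containers[0] // pointer into the slice: changes stick
    p.Env = []corev1.EnvVar{{Name: "FOO", Value: "bar"}}
    fmt.Println(len(containers[0].Env)) // 1
}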

Get cluster API url with Kubernetes client-go

I'm using client-go for Kubernetes and trying to get the API url of the current cluster, i.e. something similar to the output of kubectl cluster-info.
I found a function called getCluster:
func (config *DirectClientConfig) ClientConfig() (*restclient.Config, error) {
    // check that getAuthInfo, getContext, and getCluster do not return an error.
    // Do this before checking if the current config is usable in the event that an
    // AuthInfo, Context, or Cluster config with user-defined names are not found.
    // This provides a user with the immediate cause for error if one is found
    configAuthInfo, err := config.getAuthInfo()
    if err != nil {
        return nil, err
    }

    _, err = config.getContext()
    if err != nil {
        return nil, err
    }

    configClusterInfo, err := config.getCluster()
    if err != nil {
        return nil, err
    }
    ...
}
When I write the following in my code
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
clusterInfo, err := config.getCluster()
I get the error config.getCluster undefined (type *rest.Config has no field or method getCluster)
How can I use this function? Is there any other way to get this URL?
From the link you provided it seems like you need to use clusterInfo, err := config.getCluster() instead of configAuthInfo.getCluster()
$ kubectl cluster-info
Kubernetes master is running at https://10.156.0.3:6443
KubeDNS is running at https://10.156.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
If you run kubectl cluster-info -v8 you can see the following:
The first line is basically taken from your ~/.kube/config file; kubectl just checks that it works with a simple GET request, searching for something that should definitely be present in the cluster:
I0222 11:21:18.015482 18150 round_trippers.go:416] GET https://10.156.0.3:6443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue
You can get the result of this command by starting kubectl proxy and then, in another console:
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue
As you can see in the response, there is no line with the master URL there.
So, to get the value shown in the first line of the output of kubectl cluster-info, you just need to read the correct part of the kubeconfig file, because it can contain several cluster configurations.
To read and deserialize a kubeconfig file, there is a function in loader.go:
// LoadFromFile takes a filename and deserializes the contents into Config object
func LoadFromFile(filename string) (*clientcmdapi.Config, error)
or another function in config.go:
// getConfigFromFile tries to read a kubeconfig file and if it can't, returns an error. One exception, missing files result in empty configs, not an error.
func getConfigFromFile(filename string) (*clientcmdapi.Config, error)
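For example, a minimal sketch of both routes, assuming kubeconfigPath points at a kubeconfig file and the usual fmt and k8s.io/client-go/tools/clientcmd imports (rest.Config already exposes the API server URL in its Host field, so often no kubeconfig parsing is needed at all):

// Route 1: the URL is already on the rest.Config you build anyway.
config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
    panic(err)
}
fmt.Println("API server:", config.Host)

// Route 2: load and deserialize the kubeconfig to inspect every cluster entry.
raw, err := clientcmd.LoadFromFile(kubeconfigPath)
if err != nil {
    panic(err)
}
for name, cluster := range raw.Clusters {
    fmt.Printf("cluster %q: %s\n", name, cluster.Server)
}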