Controller gets wrong namespace for multiple operator instances in different namespaces - kubernetes

I developed a k8s Operator. After I deploy the first Operator instance in the first namespace, it works well. Then I deploy a second instance in a second namespace, but the second controller still receives requests whose namespace is the first namespace, while the expected namespace should be the second one.
Please see the following code: when I work with the second Operator in the second namespace, the request's namespace is still the first namespace.
func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Anexample", request.NamespacedName)
    instance := &v1alpha1.Anexample{}
    err := r.Get(context.TODO(), request.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Anexample resource not found. Ignoring since object must be deleted.")
            return reconcile.Result{}, nil
        }
        log.Error(err, "Failed to get Anexample.")
        return reconcile.Result{}, err
    }
I suspect it might be related to leader election, but I don't understand how it works.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
What happens in the controller? How can I fix it?

We are seeing a similar issue: request.NamespacedName from controller-runtime returns the wrong namespace. It might be a bug in controller-runtime.

request.NamespacedName depends on the namespace of the custom resource you are deploying.
Operators are deployed into a namespace, but they can still be configured to watch custom resources in all namespaces.
This is not related to leader election but to the way you set up your manager.
You didn't specify a namespace in the ctrl.Options for the manager, so it will watch for CR changes in all namespaces.
If you want your operator to watch only a single namespace, pass that namespace to the manager:
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    Namespace:          "<namespace-of-operator-two>",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
See also here: https://developers.redhat.com/blog/2020/06/26/migrating-a-namespace-scoped-operator-to-a-cluster-scoped-operator#migration_guide__namespace_scoped_to_cluster_scoped
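If you instead need to watch a fixed set of namespaces (more than one, but not all), controller-runtime versions of that era also offer cache.MultiNamespacedCacheBuilder together with the NewCache manager option. A minimal sketch, assuming such a controller-runtime version and reusing the options from above; the namespace names are placeholders:

import (
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/cache"
)

// Sketch: restrict the manager's cache (and therefore the reconcile requests)
// to two namespaces; "namespace-one" and "namespace-two" are placeholders.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    NewCache:           cache.MultiNamespacedCacheBuilder([]string{"namespace-one", "namespace-two"}),
})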

Related

What is the difference between dockerService and DockerServer in the kubelet source code

When I read the k8s source code, I found that both dockerService located in pkg/kubelet/dockershim/docker_service.go and DockerServer located in pkg/kubelet/dockershim/remote/docker_server.go seem to implement the interface of the CRI shim server.
But I don't understand the difference between the two. Why does the distinction exist?
The k8s version is tag v1.23.1.
DockerServer simply creates the dockershim gRPC server:
// DockerServer is the grpc server of dockershim.
type DockerServer struct {
    // endpoint is the endpoint to serve on.
    endpoint string
    // service is the docker service which implements runtime and image services.
    service DockerService
    // server is the grpc server.
    server *grpc.Server
}

...

// Start starts the dockershim grpc server.
func (s *DockerServer) Start() error {
    glog.V(2).Infof("Start dockershim grpc server")
    l, err := util.CreateListener(s.endpoint)
    if err != nil {
        return fmt.Errorf("failed to listen on %q: %v", s.endpoint, err)
    }
    // Create the grpc server and register runtime and image services.
    s.server = grpc.NewServer()
    runtimeapi.RegisterRuntimeServiceServer(s.server, s.service)
    runtimeapi.RegisterImageServiceServer(s.server, s.service)
    go func() {
        // Use interrupt handler to make sure the server to be stopped properly.
        h := interrupt.New(nil, s.Stop)
        err := h.Run(func() error { return s.server.Serve(l) })
        if err != nil {
            glog.Errorf("Failed to serve connections: %v", err)
        }
    }()
    return nil
}
DockerService is the interface that implements the CRI remote service server:
// DockerService is the interface implement CRI remote service server.
type DockerService interface {
    runtimeapi.RuntimeServiceServer
    runtimeapi.ImageServiceServer
}
The unexported dockerService type (in docker_service.go) uses the dockershim service to implement DockerService.
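In other words, dockerService is the actual implementation of the CRI runtime and image services, while DockerServer is only the gRPC transport that exposes that implementation on the CRI endpoint. Roughly, the kubelet wires them together like this; the snippet is a simplified, hand-abbreviated sketch of the dockershim startup path, not the literal source:

// Build the dockershim implementation of the CRI services; this is the
// concrete dockerService value sitting behind the DockerService interface.
// (The real constructor takes many more arguments, elided here.)
ds, err := dockershim.NewDockerService(/* docker client config, sandbox image, ... */)
if err != nil {
    return err
}

// Wrap the implementation in the gRPC server and serve it on the CRI
// endpoint, so the kubelet's generic CRI client can reach it over gRPC.
server := dockerremote.NewDockerServer(remoteRuntimeEndpoint, ds)
if err := server.Start(); err != nil {
    return err
}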
BTW, are you sure you will still be using it in the future? From the latest news (5 days ago):
Kubernetes is Moving on From Dockershim: Commitments and Next Steps:
Kubernetes is removing dockershim in the upcoming v1.24 release.
If you use Docker Engine as a container runtime for your Kubernetes cluster, get ready to migrate in 1.24
Full removal is targeted in Kubernetes 1.24, in April 2022.
We'll support Kubernetes version 1.23, which includes dockershim, for another year in the Kubernetes project.

Why does controller-runtime say resource 'not found' while updating a resource that exists?

I have written a k8s controller with kubebuilder which reconciles my CustomResource object (MyResource).
During update, controller-runtime gives me an error 'not found' even though my resource exists on the cluster.
func (r *MyResourceReconciler) updateStatus(ctx context.Context, myResource *myResourcev1.MyResource, neoStatus *myResourcev1.MyResourceStatus) error {
    if !reflect.DeepEqual(&myResource.Status, neoStatus) {
        myResource.Status = *neoStatus
        err := r.Status().Update(ctx, myResource)
        return err
    }
    return nil
}
Can someone please help me troubleshoot this error? I'm stuck because I can do a GET on the resource with kubectl on the cluster, and yet controller-runtime says 'not found'.
I was able to resolve this issue myself using:
r.Update(ctx, myResource) instead of r.Status().Update(ctx, myResource)
I had exactly the same issue, while another type worked perfectly. Finally I found the root cause:
you need to add this marker above your struct to enable the status subresource.
//+kubebuilder:subresource:status
https://book-v1.book.kubebuilder.io/basics/status_subresource.html
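For illustration, the marker sits directly above the Go type of the custom resource; the names below simply mirror the ones from the question and are not a definitive layout:

// MyResource is the Schema for the myresources API.
// The subresource marker below makes controller-gen emit the CRD with the
// /status subresource enabled, which is what r.Status().Update() talks to.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
type MyResource struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MyResourceSpec   `json:"spec,omitempty"`
    Status MyResourceStatus `json:"status,omitempty"`
}

After adding the marker, regenerate and reapply the CRD manifests (in a kubebuilder project typically make manifests and make install) so the API server actually serves the status subresource.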

Verifying that a kubernetes pod is deleted using client-go

I am trying to ensure that a pod is deleted before proceeding with another Kubernetes operation. So the idea I have is to call the pod Delete function and then call the pod Get function.
// Delete Pod
err := kubeClient.CoreV1().Pods(tr.namespace).Delete(podName, &metav1.DeleteOptions{})
if err != nil {
    ....
}

pod, err := kubeClient.CoreV1().Pods(tr.namespace).Get(podName, metav1.GetOptions{})
// What do I look for to confirm that the pod has been deleted?
err != nil && errors.IsNotFound(err)
Also this is silly and you shouldn't do it.
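If you do want to block until the pod is actually gone, a minimal sketch of that check is below. It assumes a client-go version whose methods take a context, and it uses wait.PollImmediate from apimachinery; the namespace and pod name are whatever you pass in:

package podwait

import (
    "context"
    "time"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPodDeletion polls until a Get on the pod returns NotFound,
// or the timeout expires.
func waitForPodDeletion(ctx context.Context, kubeClient kubernetes.Interface, namespace, podName string) error {
    return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
        _, err := kubeClient.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
        if err == nil {
            return false, nil // pod still exists, keep polling
        }
        if apierrors.IsNotFound(err) {
            return true, nil // pod is gone
        }
        return false, err // unexpected error, stop polling
    })
}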

How to invoke the Pod proxy verb using the Kubernetes Go client?

The Kubernetes remote API allows HTTP access to arbitrary pod ports using the proxy verb, that is, using an API path of /api/v1/namespaces/{namespace}/pods/{name}/proxy.
The Python client offers corev1.connect_get_namespaced_pod_proxy_with_path() to invoke the above proxy verb.
Despite reading, browsing, and searching the Kubernetes client-go sources for some time, I'm still lost as to how to do with the Go client what I'm able to do with the Python client. My impression is that I may need to drop down to the REST client of client-go, since there seems to be no ready-made corev1 call available.
How do I correctly construct the GET call using the REST client and the path mentioned above?
As it turned out after an involved dive into the Kubernetes client sources, accessing the proxy verb is only possible by going down to the level of the RESTClient and building the GET/... request by hand. The following code shows this in the form of a fully working example:
package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    clcfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    if err != nil {
        panic(err.Error())
    }
    restcfg, err := clientcmd.NewNonInteractiveClientConfig(
        *clcfg, "", &clientcmd.ConfigOverrides{}, nil).ClientConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(restcfg)
    if err != nil {
        panic(err.Error())
    }
    res := clientset.CoreV1().RESTClient().Get().
        Namespace("default").
        Resource("pods").
        Name("hello-world:8000").
        SubResource("proxy").
        // The server URL path, without leading "/" goes here...
        Suffix("index.html").
        Do() // note: newer client-go releases take a context here, e.g. Do(context.TODO())
    rawbody, err := res.Raw()
    if err != nil {
        panic(err.Error())
    }
    fmt.Print(string(rawbody))
}
You can test this, for instance, on a local kind cluster (Kubernetes in Docker). The following commands spin up a kind cluster, prime the only node with the required hello-world webserver, and then tell Kubernetes to start the pod with said hello-world webserver.
kind create cluster
docker pull crccheck/hello-world
docker tag crccheck/hello-world crccheck/hello-world:current
kind load docker-image crccheck/hello-world:current
kubectl run hello-world --image=crccheck/hello-world:current --port=8000 --restart=Never --image-pull-policy=Never
Now run the example:
export KUBECONFIG=~/.kube/kind-config-kind; go run .
It then should show this ASCII art:
<xmp>
Hello World


                                       ##         .
                                 ## ## ##        ==
                              ## ## ## ## ##    ===
                           /""""""""""""""""\___/ ===
                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
                           \______ o          _,/
                            \      \       _,'
                             `'--.._\..--''
</xmp>

Kubernetes CRD fails to be created using the go-client interface

I created a Kubernetes CRD following the example at https://github.com/kubernetes/sample-controller.
My controller works fine, and I can listen for the create/update/delete events of my CRD. But then I tried to create an object using the go-client interface.
This is my CRD.
type MyEndpoint struct {
    metav1.TypeMeta `json:",inline"`
    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
I can create the CRD definition and create objects using kubectl without any problems, but the creation fails when I use the following code:
myepDeploy := &crdv1.MyEndpoint{
    TypeMeta: metav1.TypeMeta{
        Kind:       "MyEndpoint",
        APIVersion: "mydom.k8s.io/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: podName,
        Labels: map[string]string{
            "serviceName": serviceName,
            "nodeIP":      nodeName,
            "port":        "5000",
        },
    },
}
epClient := myclientset.MycontrollerV1().MyEndpoints("default")
epClient.Create(myepDeploy)
But I got the following error:
object *v1.MyEndpoint does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
I took a look at other standard types, and I don't see that they implement such an interface. I searched on Google, but had no luck.
Any ideas? Please help. BTW, I am running on minikube.
For most common types and for simple types, marshalling works out of the box. In the case of a more complex structure, you may need to implement the marshalling interface manually.
You can try commenting out parts of the MyEndpoint structure to find out what exactly causes the problem.
This error occurs when your client epClient tries to marshal the MyEndpoint object to protobuf. It is caused by your REST client config: try setting the content type to "application/json".
If you are using the code below to generate the config, change the content type like this:
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    glog.Fatalf("Error building kubeconfig: %s", err.Error())
}

cfg.ContentType = "application/json"

kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}
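The important part is that the same cfg, with ContentType set to "application/json", is also used to build the clientset generated for your CRD, because that is the client which actually sends the MyEndpoint object. A sketch, assuming a sample-controller-style layout where the generated clientset lives in a versioned package; the import path is a placeholder you must adjust to your project:

import (
    // NOTE: placeholder import path for your generated CRD clientset package.
    myversioned "mydom.io/mycontroller/pkg/generated/clientset/versioned"
)

// cfg is the *rest.Config from above, already patched to use JSON
// instead of protobuf for request bodies.
myclientset, err := myversioned.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building MyEndpoint clientset: %s", err.Error())
}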