Why does controller-runtime say resource 'not found' while updating a resource that exists? - kubernetes

I have written a k8s controller with kubebuilder which reconciles my CustomResource object (MyResource).
During the status update, controller-runtime gives me a 'not found' error even though my resource exists on the cluster.
func (r *MyResourceReconciler) updateStatus(ctx context.Context, myResource *myResourcev1.MyResource, neoStatus *myResourcev1.MyResourceStatus) error {
    // Only write to the API server when the status actually changed.
    if !reflect.DeepEqual(&myResource.Status, neoStatus) {
        myResource.Status = *neoStatus
        err := r.Status().Update(ctx, myResource)
        return err
    }
    return nil
}
Can someone please help me troubleshoot this error? I'm stuck because I can GET the resource with kubectl on the cluster, and yet controller-runtime says 'not found'.

I was able to resolve this issue myself by using:
r.Update(ctx, myResource) instead of r.Status().Update(ctx, myResource)
This works because, without the status subresource enabled, status is just part of the main resource, while r.Status().Update targets the /status endpoint, which does not exist until the subresource is enabled; hence the 'not found'.

I had exactly the same issue while another type worked perfectly. Finally I found the root cause:
you need this marker above your struct to enable the status subresource.
//+kubebuilder:subresource:status
https://book-v1.book.kubebuilder.io/basics/status_subresource.html
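For illustration, a minimal sketch of where the marker goes; the MyResourceSpec and MyResourceStatus field types follow the question's naming and are assumptions:

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
type MyResource struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    // Spec and Status types are hypothetical, named after the question.
    Spec   MyResourceSpec   `json:"spec,omitempty"`
    Status MyResourceStatus `json:"status,omitempty"`
}

After adding the marker, regenerate and reapply the CRD manifests (make manifests && make install in a kubebuilder project) so the API server actually serves the /status endpoint.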

Related

rego opa policy to check if resources are provided for a deployment in kubernetes

I'm checking whether the resources.limits key is provided in a Kubernetes deployment using OPA Rego code. Below is the code. I'm trying to fetch the resources.limits key, and it always returns TRUE, regardless of whether resources are provided or not.
package resourcelimits

violation[{"msg": msg}] {
    some container; input.request.object.spec.template.spec.containers[container]
    not container.resources.limits.memory
    msg := "Resources for the pod needs to be provided"
}
The problem is that some container; input.request.object.spec.template.spec.containers[container] binds container to the array index, not to the container object, so not container.resources.limits.memory is always true. Iterate over the elements instead; you can try something like this:
import future.keywords.in

violation[{"msg": msg}] {
    input.request.kind.kind == "Deployment"
    some container in input.request.object.spec.template.spec.containers
    not container.resources.limits.memory
    msg := sprintf("Container '%v/%v' does not have memory limits", [input.request.object.metadata.name, container.name])
}

What Condition Causes the Pod Log Reader to Return EOF?

I am using client-go to continuously pull log streams from Kubernetes pods. Most of the time everything works as expected, until the job has been running for a couple of hours.
The code looks like this:
podLogOpts := corev1.PodLogOptions{Follow: true}
kubeJob, err := l.k8sclient.GetKubeJob(l.job.GetNamespace(), l.job.GetJobId())
...
podName := l.k8sclient.GetKubeJobPodNameByJobId(l.job.GetNamespace(), l.job.GetJobId())
req := l.k8sclient.GetKubeClient().CoreV1().Pods(l.job.GetNamespace()).GetLogs(podName, &podLogOpts)
podLogStream, err := req.Stream(context.TODO())
...
for {
    copied, err := podLogStream.Read(buf)
    if err == io.EOF {
        // Here is where the error happens: usually after many hours,
        // podLogStream returns EOF, even though the pod is still running
        // and keeps printing data to stdout. Why would this happen?
        break
    }
    ...
}
The podLogStream returns EOF about 3-4 hours later. But I checked the pod status and found the pod is still running, and the service inside keeps printing data to stdout. So why would this happen? How do I fix it?
UPDATE
I found that every 4 hours the pod stream API read would return EOF, so I have to make the goroutine sleep and retry a second later, recreating the podLogStream and reading logs from the new stream object. It works. But why does this happen?
When you contact the logs endpoint, the apiserver forwards your request to the kubelet hosting your pod. The kubelet server then streams the content of the log file to the apiserver, and on to your client. Since it streams logs from the file, and not from stdout directly, it can happen that the log file is rotated by the container log manager; as a consequence you receive EOF and need to reinitialize the stream.
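A minimal sketch of that reopen-on-EOF approach, reusing the req variable from the question and assuming the usual context, io, os, and time imports; the enclosing function, error handling, and exit condition are illustrative:

for {
    podLogStream, err := req.Stream(context.TODO())
    if err != nil {
        return err
    }
    // io.Copy reads until EOF; a nil error here means the stream ended
    // cleanly (e.g. the log file was rotated), not that the pod stopped.
    _, err = io.Copy(os.Stdout, podLogStream)
    podLogStream.Close()
    if err != nil {
        return err // a real read error, not EOF
    }
    time.Sleep(time.Second) // brief pause, then reopen the stream
}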

Controller gets wrong namespace name for multiple operator instances in different namespaces

I developed a k8s Operator. After I deploy the first Operator in the first namespace, it works well. Then I deploy the 2nd Operator in the second namespace, and I see the 2nd controller receiving requests whose namespace is still the first one, while the expected namespace is the second.
See the following code: when I exercise the second operator in the second namespace, the request's namespace is still the first namespace.
func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Anexample", request.NamespacedName)
    instance := &v1alpha1.Anexample{}
    err := r.Get(context.TODO(), request.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Anexample resource not found. Ignoring since object must be deleted.")
            return reconcile.Result{}, nil
        }
        log.Error(err, "Failed to get Anexample.")
        return reconcile.Result{}, err
    }
I suspect it might be related to election, but I don't understand them.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
What is happening in the controller? How do I fix it?
We are seeing a similar issue: request.NamespacedName from controller-runtime returns the wrong namespace. It might be a bug in controller-runtime.
request.NamespacedName depends on the namespace of the custom resource you are deploying.
Operators are deployed in a namespace, but can still be configured to listen to custom resources in all namespaces.
This should not be related to leader election but to the way you set up your manager: you didn't specify a namespace in the ctrl.Options for the manager, so it will listen to CR changes in all namespaces.
If you want your operator to only listen to one single namespace, pass the namespace to the manager.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    Namespace:          "<namespace-of-operator-two>",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
See also here: https://developers.redhat.com/blog/2020/06/26/migrating-a-namespace-scoped-operator-to-a-cluster-scoped-operator#migration_guide__namespace_scoped_to_cluster_scoped
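Note that in newer controller-runtime releases (v0.15 and later) the Namespace field was removed from ctrl.Options. Under that assumption, a rough equivalent restricts the manager's cache instead; a sketch, not verified against your version:

import "sigs.k8s.io/controller-runtime/pkg/cache"

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme: scheme,
    Cache: cache.Options{
        // Restrict the manager's cache (and thus reconcile requests)
        // to a single namespace.
        DefaultNamespaces: map[string]cache.Config{
            "<namespace-of-operator-two>": {},
        },
    },
})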

Kubernetes CRD fails to be created using the go-client interface

I created a Kubernetes CRD following the example at https://github.com/kubernetes/sample-controller.
My controller works fine, and I can listen for the create/update/delete events of my CRD. That is, until I tried to create an object using the go-client interface.
This is my CRD.
type MyEndpoint struct {
    metav1.TypeMeta `json:",inline"`

    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
I can create the CRD definition and create objects using kubectl without any problems. But I get a failure when I use the following code to create the object.
myepDeploy := &crdv1.MyEndpoint{
    TypeMeta: metav1.TypeMeta{
        Kind:       "MyEndpoint",
        APIVersion: "mydom.k8s.io/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: podName,
        Labels: map[string]string{
            "serviceName": serviceName,
            "nodeIP":      nodeName,
            "port":        "5000",
        },
    },
}
epClient := myclientset.MycontrollerV1().MyEndpoints("default")
epClient.Create(myepDeploy)
But I got the following error:
object *v1.MyEndpoint does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
I took a look at other standard types, and I don't see that they implement such an interface. I searched on Google but had no luck.
Any ideas? Please help. BTW, I am running on minikube.
For most common types and for simple types, marshalling works out of the box. In the case of a more complex structure, you may need to implement the marshalling interface manually.
You may try commenting out parts of the MyEndpoint structure to find out what exactly causes the problem.
This error occurs when your client epClient tries to marshal the MyEndpoint object to protobuf. It is caused by your REST client config: try setting the content type to "application/json".
If you are using the code below to generate the config, change the content type like this:
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    glog.Fatalf("Error building kubeconfig: %s", err.Error())
}
cfg.ContentType = "application/json"

kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}
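Since the protobuf error comes from the generated clientset for the CRD, the same JSON-configured rest.Config should also be used to build that clientset. A minimal sketch, assuming the myclientset package from the question's generated code:

// Build the CRD clientset from the same config so it also encodes as JSON.
myClient, err := myclientset.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building MyEndpoint clientset: %s", err.Error())
}
epClient := myClient.MycontrollerV1().MyEndpoints("default")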

Message Hub service going down every day

I am using the Bluemix Message Hub service in my Node app, which is in production. Now we have run into an issue: the Message Hub service goes down every day and the app then needs to be restarted. This can't go on.
We are getting the following logs
2016-10-16T17:41:42.66+0100 [App/0] OUT Unable to consume topic: Error: Request returned
status code 404 but it was not in the accepted list. The REST API responded with the
following message: Consumer instance not found.
2016-10-16T17:41:46.66+0100 [App/0] OUT got error: { [Error: Request returned status code
404 but it was not in the accepted list. The REST API responded with the following message:
Consumer instance not found.] statusCode: 404, errorCode: 40403 }
Is there any way we can handle this? It is failing here:
run: function(callback) {
    var that = this;
    consumerInstance.get(topic)
        .then(function(data) {
            that.consume(data);
            return callback();
        })
        .fail(function(error) {
            console.log("got error: ", error);
            return callback(error);
        })
}
This is the code we are using, for reference:
https://github.com/ibm-cds-labs/Spark-Twitter-Watson-Dashboard/blob/master/server/messageHubBridge.js?s_tact=C43301PW
Any thoughts on how to resolve this issue?
Thanks,
Harish.
Hi, the REST endpoint for Message Hub gets recycled every 24 hours.
Clients are expected to handle this by creating a new consumer instance.
HTH,
Edo
As per the Message Hub documentation, the REST service restarts daily. After the REST API has restarted, you will have to recreate your Kafka consumer instances.
Thanks,
Simon.