Verifying that a Kubernetes pod is deleted using client-go

I am trying to ensure that a pod is deleted before proceeding with another Kubernetes operation. The idea I have is to call the pod Delete function and then call the pod Get function.
// Delete Pod
err := kubeClient.CoreV1().Pods(tr.namespace).Delete(podName, &metav1.DeleteOptions{})
if err != nil {
    ....
}
// Note: Get takes metav1.GetOptions, not DeleteOptions
pod, err := kubeClient.CoreV1().Pods(tr.namespace).Get(podName, metav1.GetOptions{})
// What do I look for to confirm that the pod has been deleted?

err != nil && errors.IsNotFound(err)
Also, this check-once approach is racy and you shouldn't do it: deletion is asynchronous, so the pod will often still exist (terminating) right after Delete returns. Watch for the deletion instead.
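For reference, a minimal sketch of waiting via a watch instead of polling, matching the pre-context client-go signatures used in the question (the package and function names are illustrative, not from the original):

```go
package podwait

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// waitForPodDeletion blocks until the named pod is gone or the watch ends.
func waitForPodDeletion(kubeClient kubernetes.Interface, namespace, podName string) error {
	// Watch only the single pod we care about.
	w, err := kubeClient.CoreV1().Pods(namespace).Watch(metav1.ListOptions{
		FieldSelector: fields.OneTermEqualSelector("metadata.name", podName).String(),
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		if event.Type == watch.Deleted {
			return nil // the API server confirmed the pod is gone
		}
	}
	return fmt.Errorf("watch closed before pod %s was deleted", podName)
}
```

Call this right after Delete; it returns once the API server emits the Deleted event rather than guessing with repeated Gets.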

Related

How to watch crd install & uninstall events?

When a user installs an app like Istio, a lot of CRDs are installed.
I mean new CRDs defined by users, not instances (custom resources) of a CRD.
It's easy to watch pod creation events, but I don't know how to watch CRD installation events.
Help me please!
cliSet, err := dynamic.NewForConfig(config)
if err != nil {
    return
}
fac := dynamicinformer.NewFilteredDynamicSharedInformerFactory(cliSet, time.Minute, metav1.NamespaceAll, nil)
informer := fac.ForResource(schema.GroupVersionResource{
    Group:    "apiextensions.k8s.io",
    Version:  "v1",
    Resource: "crds",
}).Informer()
I tried it like this, but it doesn't work:
W0126 21:20:53.634640   45331 reflector.go:424] CrdWatcher/main.go:60: failed to list *unstructured.Unstructured: Unauthorized
E0126 21:20:53.634716   45331 reflector.go:140] CrdWatcher/main.go:60: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: Unauthorized
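Two things stand out here: the Unauthorized errors point at the credentials in config (the reflector is rejected before it lists anything, so check your kubeconfig or ServiceAccount RBAC), and the canonical resource plural for CRDs is customresourcedefinitions; "crds" is only a kubectl shortname. A sketch of the informer with the corrected GVR, assuming credentials that may list/watch customresourcedefinitions:

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the identity used must be allowed to
	// list/watch customresourcedefinitions, or you get Unauthorized/Forbidden.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cliSet, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fac := dynamicinformer.NewFilteredDynamicSharedInformerFactory(cliSet, time.Minute, metav1.NamespaceAll, nil)
	informer := fac.ForResource(schema.GroupVersionResource{
		Group:    "apiextensions.k8s.io",
		Version:  "v1",
		Resource: "customresourcedefinitions", // "crds" is only a kubectl shortname
	}).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if crd, ok := obj.(*unstructured.Unstructured); ok {
				fmt.Println("CRD installed:", crd.GetName())
			}
		},
		DeleteFunc: func(obj interface{}) {
			if crd, ok := obj.(*unstructured.Unstructured); ok {
				fmt.Println("CRD uninstalled:", crd.GetName())
			}
		},
	})
	stop := make(chan struct{})
	fac.Start(stop)
	fac.WaitForCacheSync(stop)
	select {} // block forever; a real program would wire this to a signal handler
}
```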

why does controller-runtime say resource 'not found' while updating resource that exists?

I have written a k8s controller with kubebuilder which reconciles my CustomResource object (MyResource).
During update, controller-runtime gives me an error 'not found' even though my resource exists on the cluster.
func (r *MyResourceReconciler) updateStatus(ctx context.Context, myResource *myResourcev1.MyResource, neoStatus *myResourcev1.MyResourceStatus) error {
    if !reflect.DeepEqual(&myResource.Status, neoStatus) {
        myResource.Status = *neoStatus
        err := r.Status().Update(ctx, myResource)
        return err
    }
    return nil
}
Can someone please help me troubleshoot this error? I'm stuck because I can do a GET on the resource using kubectl on the cluster & yet controller-runtime says 'not found'.
I was able to resolve this issue myself using:
r.Update(ctx, myResource) instead of r.Status().Update(ctx, myResource)
I had exactly the same issue while another type worked perfectly. Finally I found the root cause: you need the following marker above your struct to enable the status subresource.
//+kubebuilder:subresource:status
https://book-v1.book.kubebuilder.io/basics/status_subresource.html
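For illustration, the marker sits directly above the root type of the CustomResource so that controller-gen emits the CRD with the /status subresource enabled (a fragment with hypothetical type names, not the asker's actual types):

```go
// MyResource is the Schema for the myresources API.
// The marker below makes controller-gen generate the CRD with the
// /status subresource, giving r.Status().Update() an endpoint to hit.
//+kubebuilder:subresource:status
type MyResource struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyResourceSpec   `json:"spec,omitempty"`
	Status MyResourceStatus `json:"status,omitempty"`
}
```

After adding the marker, regenerate and reapply the CRD manifests, or the API server will keep rejecting status updates with 'not found'.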

What Condition Causes the Pod Log Reader to Return EOF

I am using client-go to continuously pull log streams from Kubernetes pods. Most of the time everything works as expected, until the job has run for a couple of hours.
The code is like below:
podLogOpts := corev1.PodLogOptions{
    Follow: true,
}
kubeJob, err := l.k8sclient.GetKubeJob(l.job.GetNamespace(), l.job.GetJobId())
...
podName := l.k8sclient.GetKubeJobPodNameByJobId(l.job.GetNamespace(), l.job.GetJobId())
req := l.k8sclient.GetKubeClient().CoreV1().Pods(l.job.GetNamespace()).GetLogs(podName, &podLogOpts)
podLogStream, err := req.Stream(context.TODO())
...
for {
    copied, err := podLogStream.Read(buf)
    if err == io.EOF {
        // Here is where the error happens:
        // usually after many hours, podLogStream returns EOF.
        // I checked the pod status: it is still running and keeps printing to stdout. Why would this happen???
        break
    }
    ...
}
The podLogStream returns EOF about 3-4 hours later. But I checked the pod status and found the pod is still running, and the service inside keeps printing data to stdout. So why would this happen? How can I fix it?
UPDATE
I found that every 4 hours the pod stream API read would return EOF, so I make the goroutine sleep and retry a second later, recreating the podLogStream and reading logs from the new stream object. It works. But why does this happen??
When you contact the logs endpoint, the apiserver forwards your request to the kubelet hosting your pod. The kubelet server then streams the content of the log file to the apiserver and on to your client. Since it streams from the log file and not from stdout directly, it can happen that the log file is rotated by the container log manager; as a consequence you receive EOF and need to reinitialize the stream.
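A sketch of the reinitialize-on-EOF loop the asker describes, assuming an existing clientset and pod name (the package and helper names are illustrative):

```go
package logfollow

import (
	"context"
	"fmt"
	"io"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// followPodLogs keeps re-opening the log stream whenever the kubelet ends it,
// e.g. because the container log file was rotated.
func followPodLogs(ctx context.Context, cs kubernetes.Interface, namespace, podName string, out io.Writer) error {
	for {
		opts := &corev1.PodLogOptions{
			Follow: true,
			// Only replay the last couple of seconds after a reconnect,
			// to limit duplicated lines.
			SinceSeconds: int64Ptr(2),
		}
		stream, err := cs.CoreV1().Pods(namespace).GetLogs(podName, opts).Stream(ctx)
		if err != nil {
			return fmt.Errorf("opening log stream: %w", err)
		}
		_, err = io.Copy(out, stream) // returns when the stream hits EOF
		stream.Close()
		if err != nil {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second): // back off briefly, then reattach
		}
	}
}

func int64Ptr(i int64) *int64 { return &i }
```

Note that the logs endpoint gives no exactly-once guarantee across rotation: a reconnect can replay or drop a few lines around the rotation point.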

Controller gets wrong namespace name for multiple operator instances in different namespaces

I developed a k8s Operator. After I deploy the first Operator in the first namespace, it works well. Then I deploy the second Operator in a second namespace, and I see the second controller receive requests whose namespace is still the first one, but the expected namespace should be the second.
Please see the following code: when I interact with the second operator in the second namespace, the request's namespace is still the first namespace.
func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Anexample", request.NamespacedName)
    instance := &v1alpha1.Anexample{}
    err := r.Get(context.TODO(), request.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Anexample resource not found. Ignoring since object must be deleted.")
            return reconcile.Result{}, nil
        }
        log.Error(err, "Failed to get Anexample.")
        return reconcile.Result{}, err
    }
I suspect it might be related to leader election, but I don't understand it.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
What is happening in the controller? How do I fix it?
We are seeing a similar issue: request.NamespacedName from controller-runtime returns the wrong namespace. Might be a bug in controller-runtime.
request.NamespacedName depends on the namespace of the custom resource you deployed. Operators are deployed into a namespace but can still be configured to watch custom resources in all namespaces.
This is not related to leader election but to the way you set up your manager. You didn't specify a namespace in the ctrl.Options for the manager, so it listens to CR changes in all namespaces. If you want your operator to listen to one single namespace only, pass that namespace to the manager.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    Namespace:          "<namespace-of-operator-two>",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
See also here: https://developers.redhat.com/blog/2020/06/26/migrating-a-namespace-scoped-operator-to-a-cluster-scoped-operator#migration_guide__namespace_scoped_to_cluster_scoped

How to invoke the Pod proxy verb using the Kubernetes Go client?

The Kubernetes remote API allows HTTP access to arbitrary pod ports using the proxy verb, that is, using an API path of /api/v1/namespaces/{namespace}/pods/{name}/proxy.
The Python client offers corev1.connect_get_namespaced_pod_proxy_with_path() to invoke the above proxy verb.
Despite reading, browsing, and searching the Kubernetes client-go for some time, I'm still at a loss how to do with the Go client what I can do with the Python client. My impression is that I may need to drop down to the REST client if there is no ready-made CoreV1 call available.
How do I correctly construct the GET call using the REST client and the path mentioned above?
As it turned out after an involved dive into the Kubernetes client sources, accessing the proxy verb is only possible by going down to the level of the RESTClient and building the GET request by hand. The following code shows this in the form of a fully working example:
package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    clcfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    if err != nil {
        panic(err.Error())
    }
    restcfg, err := clientcmd.NewNonInteractiveClientConfig(
        *clcfg, "", &clientcmd.ConfigOverrides{}, nil).ClientConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(restcfg)
    if err != nil {
        panic(err.Error())
    }
    res := clientset.CoreV1().RESTClient().Get().
        Namespace("default").
        Resource("pods").
        Name("hello-world:8000"). // "name:port" selects the pod port to proxy to
        SubResource("proxy").
        // The server URL path, without the leading "/", goes here...
        Suffix("index.html").
        Do()
    rawbody, err := res.Raw()
    if err != nil {
        panic(err.Error())
    }
    fmt.Print(string(rawbody))
}
You can test this, for instance, on a local kind cluster (Kubernetes in Docker). The following commands spin up a kind cluster, prime the only node with the required hello-world webserver, and then tell Kubernetes to start the pod with said hello-world webserver.
kind create cluster
docker pull crccheck/hello-world
docker tag crccheck/hello-world crccheck/hello-world:current
kind load docker-image crccheck/hello-world:current
kubectl run hello-world --image=crccheck/hello-world:current --port=8000 --restart=Never --image-pull-policy=Never
Now run the example:
export KUBECONFIG=~/.kube/kind-config-kind; go run .
It then should show this ASCII art:
<xmp>
Hello World


                                       ##         .
                                 ## ## ##        ==
                              ## ## ## ## ##    ===
                           /""""""""""""""""\___/ ===
                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
                           \______ o          _,/
                            \      \       _,'
                             `'--.._\..--''
</xmp>