How to watch CRD install & uninstall events? - kubernetes

When a user installs an app like Istio, a lot of CRDs get installed.
I mean new CRDs defined by users, not instances (custom resources) of a CRD.
It's easy to watch pod creation events,
but I don't know how to watch CRD installation events.
Help me please!
cliSet, err := dynamic.NewForConfig(config)
if err != nil {
    return
}
fac := dynamicinformer.NewFilteredDynamicSharedInformerFactory(cliSet, time.Minute, metav1.NamespaceAll, nil)
informer := fac.ForResource(schema.GroupVersionResource{
    Group:    "apiextensions.k8s.io",
    Version:  "v1",
    Resource: "crds",
}).Informer()
I tried it like this, but it doesn't work:
W0126 21:20:53.634640 45331 reflector.go:424] CrdWatcher/main.go:60: failed to list *unstructured.Unstructured: Unauthorized
E0126 21:20:53.634716 45331 reflector.go:140] CrdWatcher/main.go:60: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: Unauthorized
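For what it's worth, the Unauthorized errors point at the credentials in the rest config (an invalid or expired token) rather than at the informer itself, and the canonical resource name for CRDs is customresourcedefinitions rather than crds. A minimal sketch of watching CRD install/uninstall events with the dynamic informer, assuming a kubeconfig whose user may list and watch customresourcedefinitions, could look like this:
package main

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/dynamic/dynamicinformer"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load credentials from the default kubeconfig; an Unauthorized error here
    // usually means the credentials in this config are invalid or expired.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cliSet, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    fac := dynamicinformer.NewFilteredDynamicSharedInformerFactory(cliSet, time.Minute, metav1.NamespaceAll, nil)
    informer := fac.ForResource(schema.GroupVersionResource{
        Group:    "apiextensions.k8s.io",
        Version:  "v1",
        Resource: "customresourcedefinitions", // not "crds"
    }).Informer()

    informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            crd := obj.(*unstructured.Unstructured)
            fmt.Println("CRD installed:", crd.GetName())
        },
        DeleteFunc: func(obj interface{}) {
            if crd, ok := obj.(*unstructured.Unstructured); ok {
                fmt.Println("CRD uninstalled:", crd.GetName())
            }
        },
    })

    stop := make(chan struct{})
    defer close(stop)
    fac.Start(stop)
    fac.WaitForCacheSync(stop)
    select {}
}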

Related

Rego OPA policy to check if resources are provided for a deployment in Kubernetes

I'm checking whether the resources.limits key is provided in a Kubernetes Deployment using OPA Rego code. Below is the code; I'm trying to check the resources.limits key, but it always returns TRUE, regardless of whether resources are provided or not.
package resourcelimits

violation[{"msg": msg}] {
    some container; input.request.object.spec.template.spec.containers[container]
    not container.resources.limits.memory
    msg := "Resources for the pod needs to be provided"
}
You can try something like this. In your original rule, some container; containers[container] binds container to the array index rather than to the container object, so container.resources.limits.memory is never defined and the negation always succeeds:
import future.keywords.in

violation[{"msg": msg}] {
    input.request.kind.kind == "Deployment"
    some container in input.request.object.spec.template.spec.containers
    not container.resources.limits.memory
    msg := sprintf("Container '%v/%v' does not have memory limits", [input.request.object.metadata.name, container.name])
}

Verifying that a Kubernetes pod is deleted using client-go

I am trying to ensure that a pod is deleted before proceeding with another Kubernetes operation. The idea I have is to call the pod Delete function and then call the pod Get function.
// Delete Pod
err := kubeClient.CoreV1().Pods(tr.namespace).Delete(podName, &metav1.DeleteOptions{})
if err != nil {
    ....
}
// Get Pod (note: GetOptions, not DeleteOptions)
pod, err := kubeClient.CoreV1().Pods(tr.namespace).Get(podName, metav1.GetOptions{})
// What do I look for to confirm that the pod has been deleted?
err != nil && errors.IsNotFound(err)
Also this is silly and you shouldn't do it.
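If the goal really is to block until the pod is gone, the usual pattern is to poll (or watch) until Get returns NotFound. A minimal sketch, assuming a recent client-go where the calls take a context (the helper name waitForPodDeletion is just a placeholder):
package podutil

import (
    "context"
    "time"

    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPodDeletion deletes a pod and polls until the API server reports NotFound.
func waitForPodDeletion(ctx context.Context, client kubernetes.Interface, namespace, podName string) error {
    if err := client.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{}); err != nil && !errors.IsNotFound(err) {
        return err
    }
    // Poll every second, for up to two minutes, until Get returns NotFound.
    return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
        _, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
        if errors.IsNotFound(err) {
            return true, nil // pod is gone
        }
        return false, err // keep polling while err == nil; abort on any other error
    })
}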

How to invoke the Pod proxy verb using the Kubernetes Go client?

The Kubernetes remote API allows HTTP access to arbitrary pod ports using the proxy verb, that is, using an API path of /api/v1/namespaces/{namespace}/pods/{name}/proxy.
The Python client offers corev1.connect_get_namespaced_pod_proxy_with_path() to invoke the above proxy verb.
Despite reading, browsing, and searching the Kubernetes client-go sources for some time, I'm still lost as to how to do with the Go client what I'm able to do with the Python client. My impression is that I may need to drop down to the REST client of the clientset, if there's no ready-made CoreV1 call available.
How do I correctly construct the GET call using the rest client and the path mentioned above?
As it turned out after an involved dive into the Kubernetes client sources, accessing the proxy verb is only possible by going down to the level of the RESTClient and then building the GET/... request by hand. The following code shows this in the form of a fully working example:
package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    clcfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    if err != nil {
        panic(err.Error())
    }
    restcfg, err := clientcmd.NewNonInteractiveClientConfig(
        *clcfg, "", &clientcmd.ConfigOverrides{}, nil).ClientConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(restcfg)
    if err != nil {
        panic(err.Error())
    }
    res := clientset.CoreV1().RESTClient().Get().
        Namespace("default").
        Resource("pods").
        Name("hello-world:8000").
        SubResource("proxy").
        // The server URL path, without leading "/", goes here...
        Suffix("index.html").
        Do()
    rawbody, err := res.Raw()
    if err != nil {
        panic(err.Error())
    }
    fmt.Print(string(rawbody))
}
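One caveat: the example above targets an older client-go, where Do() takes no arguments. With more recent client-go releases the request's Do method requires a context, so the end of the call chain would look roughly like this instead (everything else unchanged):
// import "context"
res := clientset.CoreV1().RESTClient().Get().
    Namespace("default").
    Resource("pods").
    Name("hello-world:8000").
    SubResource("proxy").
    Suffix("index.html").
    Do(context.TODO()) // newer client-go: Do requires a context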
You can test this, for instance, on a local kind cluster (Kubernetes in Docker). The following commands spin up a kind cluster, prime the only node with the required hello-world webserver, and then tell Kubernetes to start the pod with said hello-world webserver.
kind create cluster
docker pull crccheck/hello-world
docker tag crccheck/hello-world crccheck/hello-world:current
kind load docker-image crccheck/hello-world:current
kubectl run hello-world --image=crccheck/hello-world:current --port=8000 --restart=Never --image-pull-policy=Never
Now run the example:
export KUBECONFIG=~/.kube/kind-config-kind; go run .
It then should show this ASCII art:
<xmp>
Hello World


                                       ##         .
                                 ## ## ##        ==
                              ## ## ## ## ##    ===
                           /""""""""""""""""\___/ ===
                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
                           \______ o          _,/
                            \      \       _,'
                             `'--.._\..--''
</xmp>

Kubernetes kubelet error updating node status

I'm running a Kubernetes cluster in AWS via EKS. Everything appears to be working as expected, but I'm just checking through all the logs to verify. I hopped onto one of the worker nodes and noticed a bunch of errors when looking at the kubelet service:
Oct 09 09:42:52 ip-172-26-0-213.ec2.internal kubelet[4226]: E1009 09:42:52.335445 4226 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-26-0-213.ec2.internal": Unauthorized
Oct 09 10:03:54 ip-172-26-0-213.ec2.internal kubelet[4226]: E1009 10:03:54.831820 4226 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-26-0-213.ec2.internal": Unauthorized
Nodes are all showing as Ready, but I'm not sure why those errors are appearing. I have 3 worker nodes and all 3 have the same kubelet errors (hostnames are different, obviously).
Additional information: it would appear that the error is coming from this line in kubelet_node_status.go:
node, err := kl.heartbeatClient.CoreV1().Nodes().Get(string(kl.nodeName), opts)
if err != nil {
    return fmt.Errorf("error getting node %q: %v", kl.nodeName, err)
}
From the workers I can execute get nodes using kubectl just fine:
kubectl get --kubeconfig=/var/lib/kubelet/kubeconfig nodes
NAME STATUS ROLES AGE VERSION
ip-172-26-0-58.ec2.internal Ready <none> 1h v1.10.3
ip-172-26-1-193.ec2.internal Ready <none> 1h v1.10.3
Turns out this is not an issue. Official reply from AWS regarding these errors:
The kubelet will regularly report node status to the Kubernetes API. When it does so it needs an authentication token generated by the aws-iam-authenticator. The kubelet will invoke the aws-iam-authenticator and store the token in its global cache. In EKS this authentication token expires after 21 minutes.
The kubelet doesn't understand token expiry times, so it will attempt to reach the API using the token in its cache. When the API returns the Unauthorized response, there is a retry mechanism to fetch a new token from aws-iam-authenticator and retry the request.

Kubernetes CRD failed to be created using go-client interface

I created a Kubernetes CRD following the example at https://github.com/kubernetes/sample-controller.
My controller works fine, and I can listen for the create/update/delete events of my CRD. But then I tried to create an object using the go-client interface.
This is my CRD:
type MyEndpoint struct {
    metav1.TypeMeta `json:",inline"`

    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
I can create the CRD definition and create objects using kubectl without any problems, but I get a failure when I use the following code to create an object.
myepDeploy := &crdv1.MyEndpoint{
    TypeMeta: metav1.TypeMeta{
        Kind:       "MyEndpoint",
        APIVersion: "mydom.k8s.io/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: podName,
        Labels: map[string]string{
            "serviceName": serviceName,
            "nodeIP":      nodeName,
            "port":        "5000",
        },
    },
}
epClient := myclientset.MycontrollerV1().MyEndpoints("default")
epClient.Create(myepDeploy)
But I got the following error:
object *v1.MyEndpoint does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
I took a look at other standard types and I don't see that they implement such an interface. I searched on Google, but had no luck.
Any ideas? Please help. BTW, I am running on minikube.
For most common and simple types, marshalling works out of the box. In the case of a more complex structure, you may need to implement the marshalling interface manually.
You may try commenting out parts of the MyEndpoint structure to find out what exactly causes the problem.
This error occurs when your client epClient tries to marshal the MyEndpoint object to protobuf. This is because of your REST client config. Try setting the content type to "application/json".
If you are using the code below to generate the config, then change the content type:
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    glog.Fatalf("Error building kubeconfig: %s", err.Error())
}
cfg.ContentType = "application/json"
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}