I want to create a controller that listens to pod events, and when a new pod is created (by a deployment), adds all of the deployment's labels to the created pod. Is this possible at scale with client-go?
In order to observe pod events, you need to use informers. Informers keep a watch-based local cache of the objects you care about and have built-in optimizations to avoid overloading the API server.
There is a Patch method available on the PodInterface that allows you to add a label to a pod.
Here is some sample code for reference. The informer is set up in the main function, and the handleAdd function implements the labelling logic.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// patchStringValue describes a single JSON Patch operation.
type patchStringValue struct {
	Op    string `json:"op"`
	Path  string `json:"path"`
	Value string `json:"value"`
}

func main() {
	clientSet := GetK8sClient()

	// Only watch pods that match the deployment's selector.
	labelOptions := informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
		opts.LabelSelector = GetLabelSelectorForDeployment("deployment-name", "namespace-name")
	})
	factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 10*time.Second, informers.WithNamespace("namespace-name"), labelOptions)

	podInformer := factory.Core().V1().Pods()
	podInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc: handleAdd,
		},
	)

	factory.Start(wait.NeverStop)
	factory.WaitForCacheSync(wait.NeverStop)

	// Block forever so the informer keeps processing events.
	<-wait.NeverStop
}

// GetLabelSelectorForDeployment returns the label selector of the given deployment as a string.
func GetLabelSelectorForDeployment(name string, namespace string) string {
	clientSet := GetK8sClient()
	deployment, err := clientSet.AppsV1().Deployments(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		panic(err.Error())
	}
	labelSet := labels.Set(deployment.Spec.Selector.MatchLabels)
	return labelSet.AsSelector().String()
}

func handleAdd(obj interface{}) {
	k8sClient := GetK8sClient().CoreV1()
	pod := obj.(*v1.Pod)
	fmt.Println("Pod", pod.GetName(), pod.Spec.NodeName, pod.Spec.Containers)

	// "add" creates the label if it does not exist yet and overwrites it if it does.
	payload := []patchStringValue{{
		Op:    "add",
		Path:  "/metadata/labels/testLabel",
		Value: "testValue",
	}}
	payloadBytes, err := json.Marshal(payload)
	if err != nil {
		fmt.Println(err)
		return
	}

	_, updateErr := k8sClient.Pods(pod.GetNamespace()).Patch(context.Background(), pod.GetName(), types.JSONPatchType, payloadBytes, metav1.PatchOptions{})
	if updateErr == nil {
		fmt.Printf("Pod %s labelled successfully.\n", pod.GetName())
	} else {
		fmt.Println(updateErr)
	}
}

func GetK8sClient() *kubernetes.Clientset {
	// Assumes the controller runs inside the cluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
	// creates the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}
	return clientset
}
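The sample above applies a fixed testLabel. To copy every label of the owning Deployment onto the pod, as the question asks, one option is a strategic merge patch that merges the Deployment's labels into the pod's metadata. Here is a minimal sketch under that assumption; the Deployment is looked up by name (a simplification), whereas a real controller would resolve it from the pod's ownerReferences via the ReplicaSet:

// labelPodWithDeploymentLabels merges all labels of the given Deployment into the pod.
// Sketch only: deploymentName is assumed to be known, and error handling is minimal.
func labelPodWithDeploymentLabels(clientSet *kubernetes.Clientset, pod *v1.Pod, deploymentName string) error {
	deployment, err := clientSet.AppsV1().Deployments(pod.GetNamespace()).Get(context.Background(), deploymentName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// A strategic merge patch on metadata.labels merges keys instead of replacing the whole map.
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"labels": deployment.Labels,
		},
	}
	patchBytes, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	_, err = clientSet.CoreV1().Pods(pod.GetNamespace()).Patch(context.Background(), pod.GetName(),
		types.StrategicMergePatchType, patchBytes, metav1.PatchOptions{})
	return err
}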
What about this??
Here are the pods before they are labeled manually:
❯ kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 13h app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 13h app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 13h app=nginx,pod-template-hash=6bdc4445fd
I, here, label the pod "nginx-deploy-6bdc4445fd-5qlhg" with the label "test=2":
❯ kubectl label pod nginx-deploy-6bdc4445fd-5qlhg test=2
pod/nginx-deploy-6bdc4445fd-5qlhg labeled
Here are the pods after they are labeled manually:
❯ kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 13h app=nginx,pod-template-hash=6bdc4445fd,test=2
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 13h app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 13h app=nginx,pod-template-hash=6bdc4445fd
I cannot find the appropriate method to do this.
Does anyone know how to do this with client-go, or which API resources kubectl describe pod uses?
Here is example code for getting a pod with client-go:
/*
A demonstration of getting a pod using client-go.
Based on the client-go examples: https://github.com/kubernetes/client-go/tree/master/examples
To demonstrate, run this file with `go run <filename> --help` to see usage.
*/
package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"path/filepath"

	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	podName := flag.String("pod-name", "", "name of the required pod")
	namespaceName := flag.String("namespace", "", "namespace of the required pod")

	// Resolve the kubeconfig path: $KUBECONFIG, then ~/.kube/config, then an explicit flag.
	var kubeconfig *string
	if config, exist := os.LookupEnv("KUBECONFIG"); exist {
		kubeconfig = &config
	} else if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	podClient := clientset.CoreV1().Pods(*namespaceName)

	fmt.Println("Getting pod...")
	result, err := podClient.Get(context.TODO(), *podName, v1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Example fields
	fmt.Printf("%+v\n", result.Name)
	fmt.Printf("%+v\n", result.Namespace)
	fmt.Printf("%+v\n", result.Spec.ServiceAccountName)
}
You can see which API calls it makes in the source code of the kubectl describe command.
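For example, the Events section that kubectl describe pod prints comes from the Events API, filtered by the involved object. Below is a minimal sketch with client-go, reusing clientset, namespaceName and podName from the example above; the exact field-selector string is an assumption based on how pod events are commonly filtered:

// List the events related to the pod, similar to the Events section of `kubectl describe pod`.
events, err := clientset.CoreV1().Events(*namespaceName).List(context.TODO(), v1.ListOptions{
	FieldSelector: fmt.Sprintf("involvedObject.kind=Pod,involvedObject.name=%s", *podName),
})
if err != nil {
	panic(err)
}
for _, e := range events.Items {
	fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
}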
We have setup a GKE cluster using Terraform with private and shared networking:
Network configuration:
resource "google_compute_subnetwork" "int_kube02" {
name = "int-kube02"
region = var.region
project = "infrastructure"
network = "projects/infrastructure/global/networks/net-10-23-0-0-16"
ip_cidr_range = "10.23.5.0/24"
secondary_ip_range {
range_name = "pods"
ip_cidr_range = "10.60.0.0/14" # 10.60 - 10.63
}
secondary_ip_range {
range_name = "services"
ip_cidr_range = "10.56.0.0/16"
}
}
Cluster configuration:
resource "google_container_cluster" "gke_kube02" {
name = "kube02"
location = var.region
initial_node_count = var.gke_kube02_num_nodes
network = "projects/ninfrastructure/global/networks/net-10-23-0-0-16"
subnetwork = "projects/infrastructure/regions/europe-west3/subnetworks/int-kube02"
master_authorized_networks_config {
cidr_blocks {
display_name = "admin vpn"
cidr_block = "10.42.255.0/24"
}
cidr_blocks {
display_name = "monitoring server"
cidr_block = "10.42.4.33/32"
}
cidr_blocks {
display_name = "cluster nodes"
cidr_block = "10.23.5.0/24"
}
}
ip_allocation_policy {
cluster_secondary_range_name = "pods"
services_secondary_range_name = "services"
}
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = true
master_ipv4_cidr_block = "192.168.23.0/28"
}
node_config {
machine_type = "e2-highcpu-2"
tags = ["kube-no-external-ip"]
metadata = {
disable-legacy-endpoints = true
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
The cluster is online and running fine. If I connect to one of the worker nodes, I can reach the API using curl:
curl -k https://192.168.23.2
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
I also see a healthy cluster when using an SSH port forward:
❯ k get pods --all-namespaces --insecure-skip-tls-verify=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system event-exporter-gke-5479fd58c8-mv24r 2/2 Running 0 4h44m
kube-system fluentbit-gke-ckkwh 2/2 Running 0 4h44m
kube-system fluentbit-gke-lblkz 2/2 Running 0 4h44m
kube-system fluentbit-gke-zglv2 2/2 Running 4 4h44m
kube-system gke-metrics-agent-j72d9 1/1 Running 0 4h44m
kube-system gke-metrics-agent-ttrzk 1/1 Running 0 4h44m
kube-system gke-metrics-agent-wbqgc 1/1 Running 0 4h44m
kube-system kube-dns-697dc8fc8b-rbf5b 4/4 Running 5 4h44m
kube-system kube-dns-697dc8fc8b-vnqb4 4/4 Running 1 4h44m
kube-system kube-dns-autoscaler-844c9d9448-f6sqw 1/1 Running 0 4h44m
kube-system kube-proxy-gke-kube02-default-pool-2bf58182-xgp7 1/1 Running 0 4h43m
kube-system kube-proxy-gke-kube02-default-pool-707f5d51-s4xw 1/1 Running 0 4h43m
kube-system kube-proxy-gke-kube02-default-pool-bd2c130d-c67h 1/1 Running 0 4h43m
kube-system l7-default-backend-6654b9bccb-mw6bp 1/1 Running 0 4h44m
kube-system metrics-server-v0.4.4-857776bc9c-sq9kd 2/2 Running 0 4h43m
kube-system pdcsi-node-5zlb7 2/2 Running 0 4h44m
kube-system pdcsi-node-kn2zb 2/2 Running 0 4h44m
kube-system pdcsi-node-swhp9 2/2 Running 0 4h44m
So far so good. I then set up the Cloud Router to announce the 192.168.23.0/28 network. This was successful and replicated to our local site via BGP. Running show route 192.168.23.2 shows that the correct route is advertised and installed.
When trying to reach the API from the monitoring server 10.42.4.33, I just run into timeouts. All three, the Cloud VPN, the Cloud Router and the Kubernetes cluster, run in europe-west3.
When I try to ping one of the workers it works completely fine, so networking in general works:
[me#monitoring ~]$ ping 10.23.5.216
PING 10.23.5.216 (10.23.5.216) 56(84) bytes of data.
64 bytes from 10.23.5.216: icmp_seq=1 ttl=63 time=8.21 ms
64 bytes from 10.23.5.216: icmp_seq=2 ttl=63 time=7.70 ms
64 bytes from 10.23.5.216: icmp_seq=3 ttl=63 time=5.41 ms
64 bytes from 10.23.5.216: icmp_seq=4 ttl=63 time=7.98 ms
Google's documentation gives no hint about what could be missing. From what I understand, the cluster API should be reachable by now.
What could be missing, and why is the API not reachable via VPN?
I was missing the peering routes configuration documented here:
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cp-on-prem-routing
resource "google_compute_network_peering_routes_config" "peer_kube02" {
peering = google_container_cluster.gke_kube02.private_cluster_config[0].peering_name
project = "infrastructure"
network = "net-10-13-0-0-16"
export_custom_routes = true
import_custom_routes = false
}
I'm using the Terraform helm_release resource to install a bitnami/redis instance in my K8s cluster.
The code looks like this:
resource "helm_release" "redis-chart" {
name = "redis-${var.env}"
repository = "https://charts.bitnami.com/bitnami"
chart = "redis"
namespace = "redis"
create_namespace = true
set {
name = "auth.enabled"
value = "false"
}
set {
name = "master.containerPort"
value = "6379"
}
set {
name = "replica.replicaCount"
value = "2"
}
}
It completes successfully, and in a separate directory I have my app's Terraform configuration.
In the app configuration, I want to get the redis host from the above helm_release.
I'm doing it like this:
data "kubernetes_service" "redis-master" {
metadata {
name = "redis-${var.env}-master"
namespace = "redis"
}
}
And then in the kubernetes_secret resource I'm passing the data to my app deployment:
resource "kubernetes_secret" "questo-server-secrets" {
metadata {
name = "questo-server-secrets-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
}
data = {
REDIS_HOST = data.kubernetes_service.redis-master.metadata.0.name
REDIS_PORT = data.kubernetes_service.redis-master.spec.0.port.0.port
}
}
But unfortunately, when I run the app deployment I'm getting the following logs:
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND
redis-dev-master
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND
Which suggests that redis-dev-master is not a correct host for the redis instance.
How do I get the redis host from helm_release or any underlying services the release creates?
I've tried debugging and pinging a specific pod from my app deployment.
For reference, these are my redis resources:
NAME READY STATUS RESTARTS AGE
pod/redis-dev-master-0 1/1 Running 0 34m
pod/redis-dev-replicas-0 1/1 Running 1 34m
pod/redis-dev-replicas-1 1/1 Running 0 32m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis-dev-headless ClusterIP None <none> 6379/TCP 34m
service/redis-dev-master ClusterIP 172.20.215.60 <none> 6379/TCP 34m
service/redis-dev-replicas ClusterIP 172.20.117.134 <none> 6379/TCP 34m
NAME READY AGE
statefulset.apps/redis-dev-master 1/1 34m
statefulset.apps/redis-dev-replicas 2/2 34m
Maybe you were just using the wrong attribute to get that information. Checking the documentation on the Terraform Registry, we can use the cluster_ip attribute described under the spec block of the kubernetes_service data source. (The ENOTFOUND error occurs because the short name redis-dev-master only resolves from inside the redis namespace; from another namespace you would need the full DNS name redis-dev-master.redis.svc.cluster.local, or simply the ClusterIP.)
Since spec is exposed as a list, you should end up with something like:
data.kubernetes_service.redis-master.spec.0.cluster_ip
And finally ending up with the following:
resource "kubernetes_secret" "questo-server-secrets" {
metadata {
name = "questo-server-secrets-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
}
data = {
REDIS_HOST = data.kubernetes_service.redis-master.spec.cluster_ip
REDIS_PORT = data.kubernetes_service.redis-master.spec.port
}
}
So Kubernetes events seem to have options to watch for all kinds of things, like pods going up/down or being created/deleted, but I want to watch for namespace creation/deletion itself, and I can't find an option to do that.
I want to know if someone created a namespace or deleted one. Is this possible?
Rgds,
Gopa.
So I got it working. I'm pasting the entire code below for anyone's future reference.
package main

import (
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/golang/glog"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func newNamespace(obj interface{}) {
	ns := obj.(v1.Object)
	glog.Error("New Namespace ", ns.GetName())
}

func modNamespace(objOld interface{}, objNew interface{}) {
}

func delNamespace(obj interface{}) {
	ns := obj.(v1.Object)
	glog.Error("Del Namespace ", ns.GetName())
}

func watchNamespace(k8s *kubernetes.Clientset) {
	// Add a watcher for Namespaces.
	factory := informers.NewSharedInformerFactory(k8s, 5*time.Second)
	nsInformer := factory.Core().V1().Namespaces().Informer()
	nsInformerChan := make(chan struct{})
	defer runtime.HandleCrash()

	// Namespace informer state change handlers
	nsInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		// When a new namespace gets created
		AddFunc: func(obj interface{}) {
			newNamespace(obj)
		},
		// When a namespace gets updated
		UpdateFunc: func(oldObj interface{}, newObj interface{}) {
			modNamespace(oldObj, newObj)
		},
		// When a namespace gets deleted
		DeleteFunc: func(obj interface{}) {
			delNamespace(obj)
		},
	})

	// Starting the factory runs the namespace informer in the background;
	// calling nsInformer.Run on top of this would start the same informer twice.
	factory.Start(nsInformerChan)
}

func main() {
	kconfig := os.Getenv("KUBECONFIG")
	glog.Error("KCONFIG ", kconfig)

	var config *rest.Config
	var clientset *kubernetes.Clientset
	var err error

	// Retry until a Kubernetes client can be built.
	for {
		if config == nil {
			config, err = clientcmd.BuildConfigFromFlags("", kconfig)
			if err != nil {
				glog.Error("Cannot create kubernetes config")
				time.Sleep(time.Second)
				continue
			}
		}
		// creates the clientset
		clientset, err = kubernetes.NewForConfig(config)
		if err != nil {
			glog.Error("Cannot create kubernetes client")
			time.Sleep(time.Second)
			continue
		}
		break
	}

	watchNamespace(clientset)
	glog.Error("Watch started")

	// Block until the process is interrupted or terminated.
	term := make(chan os.Signal, 1)
	signal.Notify(term, os.Interrupt)
	signal.Notify(term, syscall.SIGTERM)
	<-term
}
Events are namespaced: they show what is happening in a particular namespace, and -A|--all-namespaces shows events from all namespaces.
If you want to track the lifecycle (creation, deletion, etc.) of the namespaces themselves, then audit logs are the simplest option, if not the only one.
Example: Creation of namespace called my-ns:
kubectl create ns my-ns
kubectl get events -n my-ns
No resources found in my-ns namespace.
kubectl describe ns my-ns
Name: my-ns
Labels: <none>
Annotations: <none>
Status: Active
No resource quota.
No LimitRange resource.
Now, here is the output of audit.log at the Metadata level. It tells you:
who created it
what was created
when it was created
and a lot more.
Example output:
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "d28619be-0cb7-4d3e-b195-51fb93ae6de4",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces?fieldManager=kubectl-create",
  "verb": "create",                  #<------operation type
  "user": {
    "username": "kubernetes-admin",  #<--------who created
    "groups": [
      "system:masters",
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "1.2.3.4"
  ],
  "userAgent": "kubectl/v1.20.0 (linux/amd64) kubernetes/af46c47",
  "objectRef": {
    "resource": "namespaces",        #<---what created
    "name": "my-ns",                 #<---name of resource
    "apiVersion": "v1"
  },
  "responseStatus": {
    "metadata": {},
    "code": 201
  },
  "requestReceivedTimestamp": "2021-09-24T16:44:28.094213Z",  #<---when created.
  "stageTimestamp": "2021-09-24T16:44:28.270294Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": ""
  }
}
$ kubectl get ns --watch-only
# run "kubectl create ns test" from another terminal
test Active 0s
# run "kubectl delete ns test"
test Terminating 23s
test Terminating 28s
test Terminating 28s
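If you prefer client-go over kubectl, the same watch-only behaviour can also be sketched with the raw Watch API instead of an informer. This is a minimal example, assuming a clientset built as in the earlier answers and the usual import aliases corev1 ("k8s.io/api/core/v1") and metav1 ("k8s.io/apimachinery/pkg/apis/meta/v1"):

// Equivalent of `kubectl get ns --watch-only`: stream namespace events as they happen.
watcher, err := clientset.CoreV1().Namespaces().Watch(context.Background(), metav1.ListOptions{})
if err != nil {
	panic(err.Error())
}
for event := range watcher.ResultChan() {
	ns, ok := event.Object.(*corev1.Namespace)
	if !ok {
		continue
	}
	fmt.Println(event.Type, ns.GetName(), ns.Status.Phase)
}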
Is there a way to restart a Kubernetes deployment using client-go? I have no idea how to achieve this, help me!
If you run kubectl rollout restart deployment/my-deploy -v=10 you see that kubectl actually sends a PATCH request to the APIServer and sets the .spec.template.metadata.annotations with something like this:
kubectl.kubernetes.io/restartedAt: '2022-11-29T16:33:08+03:30'
So you can do the same thing with client-go:
// Assumes config, ctx, namespace and deploymentName are already defined, with
// k8stypes = "k8s.io/apimachinery/pkg/types" and v1 = "k8s.io/apimachinery/pkg/apis/meta/v1".
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
	// Do something with err
}
deploymentsClient := clientset.AppsV1().Deployments(namespace)

// Same trick as kubectl: bump the restartedAt annotation so the pod template changes.
data := fmt.Sprintf(`{"spec": {"template": {"metadata": {"annotations": {"kubectl.kubernetes.io/restartedAt": "%s"}}}}}`, time.Now().Format(time.RFC3339))
deployment, err := deploymentsClient.Patch(ctx, deploymentName, k8stypes.StrategicMergePatchType, []byte(data), v1.PatchOptions{})
if err != nil {
	// Do something with err
}
_ = deployment // the patched Deployment object, if you need it