I'm trying to update a Deployment from a Go application running in the cluster, but it fails with an authorization error.
GKE Master version 1.9.4-gke.1
package main
import (
"fmt"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
func updateReplicas(namespace string, name string, replicas int32) error {
config, err := rest.InClusterConfig()
if err != nil {
return errors.Wrap(err, "failed rest.InClusterConfig")
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return errors.Wrap(err, "failed kubernetes.NewForConfig")
}
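	// Read the current Deployment, adjust its replica count, and write it
	// back with Update.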
deployment, err := clientset.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
if err != nil {
fmt.Printf("failed get Deployment %+v\n", err)
return errors.Wrap(err, "failed get deployment")
}
deployment.Spec.Replicas = &replicas
fmt.Printf("Deployment %v\n", deployment)
ug, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment)
if err != nil {
fmt.Printf("failed update Deployment %+v", err)
return errors.Wrap(err, "failed update Deployment")
}
fmt.Printf("done update deployment %v\n", ug)
return nil
}
The resulting error message:
failed get Deployment deployments.apps "land-node" is forbidden: User "system:serviceaccount:default:default" cannot get deployments.apps in the namespace "default": Unknown user "system:serviceaccount:default:default"
I have set up the permissions as follows, but is it not enough?
deployment-editor.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: deployment-editor
rules:
- apiGroups: [""]
resources: ["deployments"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
editor-deployment.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: editor-deployment
namespace: default
subjects:
- kind: ServiceAccount
name: default
namespace: default
roleRef:
kind: ClusterRole
name: deployment-editor
apiGroup: rbac.authorization.k8s.io
From Unable to list deployments resources using RBAC:
replicasets and deployments exist in the "extensions" and "apps" API groups, not in the legacy "" group
- apiGroups:
- extensions
- apps
resources:
- deployments
- replicasets
verbs:
- get
- list
- watch
- update
- create
- patch
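Once the rules use the apps (or extensions) API group, a quick way to confirm from inside the pod that the service account can now read Deployments is to send a SelfSubjectAccessReview. A minimal sketch, assuming a client-go version contemporary with the question (newer releases also take a context.Context and metav1.CreateOptions in the Create call):

import (
	authorizationv1 "k8s.io/api/authorization/v1"
	"k8s.io/client-go/kubernetes"
)

// canGetDeployments asks the API server whether the pod's own service account
// may "get" Deployments (API group "apps") in the given namespace.
func canGetDeployments(clientset *kubernetes.Clientset, namespace string) (bool, error) {
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "get",
				Group:     "apps",
				Resource:  "deployments",
			},
		},
	}
	result, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(review)
	if err != nil {
		return false, err
	}
	return result.Status.Allowed, nil
}

If Status.Allowed comes back false, the ClusterRole/ClusterRoleBinding have not yet taken effect for system:serviceaccount:default:default.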
Related
I would like to permit a Kubernetes pod in namespace my-namespace to access configmap/config in the same namespace. For this purpose I have defined the following Role and RoleBinding:
apiVersion: v1
kind: List
items:
- kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: config
namespace: my-namespace
rules:
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["config"]
verbs: ["get"]
- kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: config
namespace: my-namespace
subjects:
- kind: ServiceAccount
name: default
namespace: my-namespace
roleRef:
kind: Role
name: config
apiGroup: rbac.authorization.k8s.io
Yet still, the pod runs into the following error:
configmaps \"config\" is forbidden: User \"system:serviceaccount:my-namespace:default\"
cannot get resource \"configmaps\" in API group \"\" in the namespace \"my-namespace\"
What am I missing? I guess it must be a simple thing, which a second pair of eyes may spot immediately.
UPDATE: Here is a relevant fragment of my client code, which uses client-go:
cfg, err := rest.InClusterConfig()
if err != nil {
logger.Fatalf("cannot obtain Kubernetes config: %v", err)
}
k8sClient, err := k8s.NewForConfig(cfg)
if err != nil {
logger.Fatalf("cannot create Clientset")
}
configMapClient := k8sClient.CoreV1().ConfigMaps(Namespace)
configMap, err := configMapClient.Get(ctx, "config", metav1.GetOptions{})
if err != nil {
logger.Fatalf("cannot obtain configmap: %v", err) // error occurs here
}
I don't see anything in particular wrong with your Role or RoleBinding, and in fact when I deploy them into my environment they seem to work as intended. You haven't provided a complete reproducer in your question, so here's how I'm testing things out:
I started by creating a namespace my-namespace
I have the following in kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
commonLabels:
app: rbactest
resources:
- rbac.yaml
- deployment.yaml
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: config
literals:
- foo=bar
- this=that
In rbac.yaml I have the Role and RoleBinding from your question (without modification).
In deployment.yaml I have:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cli
spec:
replicas: 1
template:
spec:
containers:
- name: cli
image: quay.io/openshift/origin-cli
command:
- sleep
- inf
With this in place, I deploy everything by running:
kubectl apply -k .
And then once the Pod is up and running, this works:
$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm config
NAME DATA AGE
config 2 3m50s
Attempts to access other ConfigMaps will not work, as expected:
$ kubectl exec deploy/cli -- kubectl get cm foo
Error from server (Forbidden): configmaps "foo" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get resource "configmaps" in API group "" in the namespace "my-namespace"
command terminated with exit code 1
If you're seeing different behavior, it would be interesting to figure out where your process differs from what I've done.
Your Go code looks fine also; I'm able to run this in the "cli" container:
package main
import (
"context"
"fmt"
"log"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
func main() {
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
namespace := "my-namespace"
configMapClient := clientset.CoreV1().ConfigMaps(namespace)
configMap, err := configMapClient.Get(context.TODO(), "config", metav1.GetOptions{})
if err != nil {
log.Fatalf("cannot obtain configmap: %v", err)
}
fmt.Printf("%+v\n", configMap)
}
If I compile the above, kubectl cp it into the container and run it, I get as output:
&ConfigMap{ObjectMeta:{config my-namespace 2ef6f031-7870-41f1-b091-49ab360b98da 2926 0 2022-10-15 03:22:34 +0000 UTC <nil> <nil> map[app:rbactest] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"foo":"bar","this":"that"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"rbactest"},"name":"config","namespace":"my-namespace"}}
] [] [] [{kubectl-client-side-apply Update v1 2022-10-15 03:22:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:foo":{},"f:this":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}}} }]},Data:map[string]string{foo: bar,this: that,},BinaryData:map[string][]byte{},Immutable:nil,}
I am trying to set up a custom admission webhook. I do not want to use TLS for it, but I do understand that the client (which is the kube-apiserver) makes an "https" request to the webhook server, and hence we require a TLS server in the webhook.
The webhook server code is below. I define a dummy but valid cert & key as constants. The server below works fine in the webhook service:
package main
import (
"crypto/tls"
"fmt"
"html"
"log"
"net/http"
)
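// cert and key are the dummy but valid PEM-encoded certificate and private
// key mentioned above, defined as package-level constants (values omitted).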
func handleRoot(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "hello test.. %q", html.EscapeString(r.URL.Path))
}
type Config struct {
CertFile string
KeyFile string
}
func main() {
log.Print("Starting server ...2.2")
http.HandleFunc("/test", handleRoot)
config := Config{
CertFile: cert,
KeyFile: key,
}
server := &http.Server{
Addr: ":9543",
TLSConfig: configTLS(config),
}
err := server.ListenAndServeTLS("", "")
if err != nil {
panic(err)
}
}
func configTLS(config Config) *tls.Config {
sCert, err := tls.X509KeyPair([]byte(config.CertFile), []byte(config.KeyFile))
if err != nil {
log.Fatal(err)
}
return &tls.Config{
Certificates: []tls.Certificate{sCert},
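		// NoClientCert: the server does not request or verify client
		// certificates, i.e. it does not authenticate the identity of its
		// clients (the API servers), as the k8s docs quoted below describe.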
ClientAuth: tls.NoClientCert,
InsecureSkipVerify: true,
}
}
Also the MutatingWebhookConfiguration yaml looks like below:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
creationTimestamp: null
labels:
test-webhook-service.io/system: "true"
name: test-webhook-service-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: test-webhook-service-service
namespace: test-webhook-service-system
path: /test
failurePolicy: Ignore
matchPolicy: Equivalent
name: mutation.test-webhook-service.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- pods
sideEffects: None
Now, in order to test whether the admission controller works, I created a new Pod. The admission controller logs this error:
2022/09/30 05:42:56 Starting server ...2.2
2022/09/30 05:43:23 http: TLS handshake error from 10.100.0.1:37976: remote error: tls: bad certificate
What does this mean? Does this mean I have to put a valid caBundle in the MutatingWebhookConfiguration, and thus TLS is required? If that is the case, I am not sure what the following means in the official k8s documentation (source):
The example admission webhook server leaves the ClientAuth field
empty, which defaults to NoClientCert. This means that the webhook
server does not authenticate the identity of the clients, supposedly
API servers.
I am trying to install the Crunchy Data postgres-operator.
My pgo-deploy pod is failing with an error.
I have set up default NFS storage by running the following commands.
# kubectl create -f rbac.yaml
the content is:
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
# kubectl create -f class.yaml
the content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "false"
# kubectl create -f deployment.yaml
the content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 192.168.10.114
- name: NFS_PATH
value: /var/nfs/general
volumes:
- name: nfs-client-root
nfs:
server: 192.168.10.114
path: /var/nfs/general
Now when I apply # kubectl apply -f postgres-operator.yml with my configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
name: pgo-deployer-sa
namespace: pgo
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pgo-deployer-cr
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- list
- create
- patch
- delete
- apiGroups:
- ''
resources:
- pods
verbs:
- list
- apiGroups:
- ''
resources:
- secrets
verbs:
- list
- get
- create
- delete
- apiGroups:
- ''
resources:
- configmaps
- services
- persistentvolumeclaims
verbs:
- get
- create
- delete
- list
- apiGroups:
- ''
resources:
- serviceaccounts
verbs:
- get
- create
- delete
- patch
- list
- apiGroups:
- apps
- extensions
resources:
- deployments
- replicasets
verbs:
- get
- list
- watch
- create
- delete
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- create
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
- clusterrolebindings
- roles
- rolebindings
verbs:
- get
- create
- delete
- bind
- escalate
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- apiGroups:
- batch
resources:
- jobs
verbs:
- delete
- list
- apiGroups:
- crunchydata.com
resources:
- pgclusters
- pgreplicas
- pgpolicies
- pgtasks
verbs:
- delete
- list
---
apiVersion: v1
kind: ConfigMap
metadata:
name: pgo-deployer-cm
namespace: pgo
data:
values.yaml: |-
# =====================
# Configuration Options
# More info for these options can be found in the docs
# https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/
# =====================
archive_mode: "true"
archive_timeout: "60"
backrest_aws_s3_bucket: ""
backrest_aws_s3_endpoint: ""
backrest_aws_s3_key: ""
backrest_aws_s3_region: ""
backrest_aws_s3_secret: ""
backrest_aws_s3_uri_style: ""
backrest_aws_s3_verify_tls: "true"
backrest_gcs_bucket: ""
backrest_gcs_endpoint: ""
backrest_gcs_key_type: ""
backrest_port: "2022"
badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
ccp_image_tag: "centos8-13.3-4.7.0"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
db_password_age_days: "0"
db_password_length: "24"
db_port: "5432"
db_replicas: "0"
db_user: "testuser"
default_instance_memory: "128Mi"
default_pgbackrest_memory: "48Mi"
default_pgbouncer_memory: "24Mi"
default_exporter_memory: "24Mi"
delete_operator_namespace: "false"
delete_watched_namespaces: "false"
disable_auto_failover: "false"
disable_fsgroup: "false"
reconcile_rbac: "true"
exporterport: "9187"
metrics: "false"
namespace: "pgo"
namespace_mode: "dynamic"
pgbadgerport: "10000"
pgo_add_os_ca_store: "false"
pgo_admin_password: "examplepassword"
pgo_admin_perms: "*"
pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_apiserver_port: "8443"
pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
pgo_client_version: "4.7.0"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
pgo_image_tag: "centos8-4.7.0"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
pgo_tls_ca_store: ""
pgo_tls_no_verify: "false"
pod_anti_affinity: "preferred"
pod_anti_affinity_pgbackrest: ""
pod_anti_affinity_pgbouncer: ""
scheduler_timeout: "3600"
service_type: "ClusterIP"
sync_replication: "false"
backrest_storage: "nfsstorage"
backup_storage: "nfsstorage"
primary_storage: "nfsstorage"
replica_storage: "nfsstorage"
pgadmin_storage: "nfsstorage"
wal_storage: ""
storage1_name: "default"
storage1_access_mode: "ReadWriteOnce"
storage1_size: "1G"
storage1_type: "dynamic"
storage2_name: "hostpathstorage"
storage2_access_mode: "ReadWriteMany"
storage2_size: "1G"
storage2_type: "create"
storage3_name: "nfsstorage"
storage3_access_mode: "ReadWriteMany"
storage3_size: "10Gi"
storage3_type: "create"
storage3_supplemental_groups: "65534"
storage4_name: "nfsstoragered"
storage4_access_mode: "ReadWriteMany"
storage4_size: "1G"
storage4_match_labels: "crunchyzone=red"
storage4_type: "create"
storage4_supplemental_groups: "65534"
storage5_name: "storageos"
storage5_access_mode: "ReadWriteOnce"
storage5_size: "5Gi"
storage5_type: "dynamic"
storage5_class: "fast"
storage6_name: "primarysite"
storage6_access_mode: "ReadWriteOnce"
storage6_size: "4G"
storage6_type: "dynamic"
storage6_class: "primarysite"
storage7_name: "alternatesite"
storage7_access_mode: "ReadWriteOnce"
storage7_size: "4G"
storage7_type: "dynamic"
storage7_class: "alternatesite"
storage8_name: "gce"
storage8_access_mode: "ReadWriteOnce"
storage8_size: "300M"
storage8_type: "dynamic"
storage8_class: "standard"
storage9_name: "rook"
storage9_access_mode: "ReadWriteOnce"
storage9_size: "1Gi"
storage9_type: "dynamic"
storage9_class: "rook-ceph-block"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pgo-deployer-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pgo-deployer-cr
subjects:
- kind: ServiceAccount
name: pgo-deployer-sa
namespace: pgo
---
apiVersion: batch/v1
kind: Job
metadata:
name: pgo-deploy
namespace: pgo
spec:
backoffLimit: 0
template:
metadata:
name: pgo-deploy
spec:
serviceAccountName: pgo-deployer-sa
restartPolicy: Never
containers:
- name: pgo-deploy
image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.7.0
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
value: install
volumeMounts:
- name: deployer-conf
mountPath: "/conf"
volumes:
- name: deployer-conf
configMap:
name: pgo-deployer-cm
I get the following error:
# kubectl get pods -n pgo
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-7d485f5b8d-cnt57 1/1 Running 0 28m
pgo-deploy--1-ppzkw 0/1 Error 0 10m
# kubectl describe pod -n pgo pgo-deploy--1-ppzkw returns the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m13s default-scheduler Successfully assigned pgo/pgo-deploy--1-ppzkw to dfsworker1
Normal Pulled 9m11s kubelet Container image "registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.7.1" already present on machine
Normal Created 9m10s kubelet Created container pgo-deploy
Normal Started 9m10s kubelet Started container pgo-deploy
Warning FailedMount 8m58s (x3 over 9m) kubelet MountVolume.SetUp failed for volume "deployer-conf" : object "pgo"/"pgo-deployer-cm" not registered
I even tried # kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.7.1/installers/kubectl/postgres-operator.yml
and it gives the same error. # kubectl -n pgo logs -f pgo-deploy--1-ppzkw gives the following error:
TASK [pgo-operator : Create PGClusters CRD] ************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "create", "-f", "/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml"], "delta": "0:00:02.599141", "end": "2021-08-09 08:24:50.295545", "msg": "non-zero return code", "rc": 1, "start": "2021-08-09 08:24:47.696404", "stderr": "error: unable to recognize \"/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml\": no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\"", "stderr_lines": ["error: unable to recognize \"/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml\": no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\""], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=21 changed=5 unreachable=0 failed=1 skipped=17 rescued=0 ignored=0
Can anyone help me solve this? All my machines are Ubuntu 20.04. It was all working with the same configuration and steps a few days ago, until I deleted the pgo namespace and re-ran all my previous steps.
My Kubernetes version: v1.22.0.
The error you provided says what is wrong:
error: unable to recognize \"/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml\": no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\"
CustomResourceDefinition is no longer in the beta API:
kubectl explain CustomResourceDefinition
KIND: CustomResourceDefinition
VERSION: apiextensions.k8s.io/v1
Ideally, the vendor of that operator already ships up-to-date CustomResourceDefinitions. In your case, the latest copy seems to be available over here.
Though if your CRD is outdated, there may be other changes you would want to pull from Crunchy's latest release.
Otherwise, you may consider rewriting those objects yourself:
change apiVersion to apiextensions.k8s.io/v1
fix the spec so it complies with the new schema
spec.additionalPrinterColumns, spec.subresources and spec.validation need to move into the spec.versions array, and you no longer have to define a schema for your resource's metadata (if you had configured one in your CRD).
The new layout would look something like this:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: crname.api-group
spec:
group: api-group
names:
kind: CrName
listKind: CrNameList
plural: crnames
singular: crname
scope: Namespaced
versions:
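  # In apiextensions.k8s.io/v1, additionalPrinterColumns, schema (formerly
  # validation) and subresources are defined per entry in this versions list
  # rather than at the top level of spec.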
- name: v1
additionalPrinterColumns:
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
spec:
properties:
[...]
type: object
served: true
storage: true
subresources:
status: {}
- name: v1beta1
[...]
I want to run a pod that listens for updates to endpoint lists (I'm not yet ready to adopt the alpha-level feature of endpoint sets, but I'll expand to that eventually.)
I have this code:
package main
import (
"fmt"
"os"
"os/signal"
"sync"
"syscall"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
)
func ReadKubeConfig() (*rest.Config, *kubernetes.Clientset, error) {
config, err := rest.InClusterConfig()
if err != nil {
return nil, nil, err
}
clients, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, nil, err
}
return config, clients, nil
}
func main() {
_, cs, err := ReadKubeConfig()
if err != nil {
fmt.Printf("could not create Clientset: %s\n", err)
os.Exit(1)
}
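	// A resync period of 0 disables periodic resyncs. Note that the plain
	// NewSharedInformerFactory lists and watches across all namespaces,
	// i.e. at cluster scope.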
factory := informers.NewSharedInformerFactory(cs, 0)
ifmr := factory.Core().V1().Endpoints().Informer()
stop := make(chan struct{})
ifmr.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(next interface{}) {
fmt.Printf("AddFunc(%v)\n", next)
},
UpdateFunc: func(prev, next interface{}) {
fmt.Printf("UpdateFunc(%v, %v)\n", prev, next)
},
DeleteFunc: func(prev interface{}) {
fmt.Printf("DeleteFunc(%v)\n", prev)
},
})
wg := &sync.WaitGroup{}
wg.Add(1)
go func() {
defer runtime.HandleCrash()
ifmr.Run(stop)
wg.Done()
}()
ch := make(chan os.Signal, 1)
signal.Notify(ch, os.Interrupt)
signal.Notify(ch, os.Signal(syscall.SIGTERM))
signal.Notify(ch, os.Signal(syscall.SIGHUP))
sig := <-ch
fmt.Printf("Received signal %s\n", sig)
close(stop)
wg.Wait()
}
I get this error when deploying and running:
kubeendpointwatcher.go:55: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:eng:default" cannot list resource "endpoints" in API group "" at the cluster scope
I have the following role and role binding defined and deployed to the "eng" namespace:
watch_endpoints$ kubectl -n eng get role mesh-endpoint-read -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: "2021-07-08T19:59:20Z"
name: mesh-endpoint-read
namespace: eng
resourceVersion: "182975428"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/eng/roles/mesh-endpoint-read
uid: fcadcc2a-19d0-4d6e-bee1-78413f51b91b
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- list
- watch
I have the following rolebinding:
watch_endpoints$ kubectl -n eng get rolebinding mesh-endpoint-read -o yaml | sed -e 's/^/ /g'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2021-07-08T19:59:20Z"
name: mesh-endpoint-read
namespace: eng
resourceVersion: "182977845"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/eng/rolebindings/mesh-endpoint-read
uid: 705a3e50-2a73-47ed-aa62-0ea48f3493ee
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: mesh-endpoint-read
subjects:
- kind: ServiceAccount
name: default
namespace: default
You will note that I apply it to the serviceaccount named default in both the default namespace and the eng namespace, although the error message seems to indicate that it is indeed running as the default serviceaccount in the eng namespace.
I have previously used Role and RoleBinding and ServiceAccount objects that work as expected, so I don't understand why this doesn't work. What am I missing?
For testing/reproduction purposes, I run this program by doing kubectl cp of a built binary (cgo off) into a container created with kubectl -n eng create deploy with a vanilla ubuntu image running /bin/sh -c sleep 999999999, and then executing a /bin/bash shell in that pod-container.
You have created the Role and RoleBinding in the eng namespace. However, as per the error message:
kubeendpointwatcher.go:55: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:eng:default" cannot list resource "endpoints" in API group "" at the cluster scope
you are querying endpoints at the cluster scope. Either limit your query to the eng namespace (a sketch follows the curl example below) or use a ClusterRole/ClusterRoleBinding.
The error message provides a hint (system:serviceaccount:eng:default) that the serviceaccount named default, running in the eng namespace, does not have permission to query endpoints at the cluster scope.
To validate this, you can run a couple of curl calls: first exec into a pod using the same service account, then run the following against the eng namespace, and later try it against other namespaces.
curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/api/v1/namespaces/default/pods
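If you go the first route and keep the namespaced Role, the informer's list/watch calls need to be scoped to that namespace as well. A minimal sketch, reusing the packages the question already imports; the factory option restricts every informer built from it to the given namespace:

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newNamespacedEndpointsInformer builds an Endpoints informer whose list and
// watch requests are restricted to a single namespace, so a namespaced Role
// and RoleBinding are sufficient.
func newNamespacedEndpointsInformer(cs *kubernetes.Clientset, namespace string) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs,
		0, // resync period of 0: no periodic resyncs
		informers.WithNamespace(namespace),
	)
	return factory.Core().V1().Endpoints().Informer()
}

The rest of the question's main (the event handlers, Run(stop) and the signal handling) can stay unchanged.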
Below is my Python script to update a secret so I can deploy to Kubernetes using kubectl. It works fine. But I want to create a Kubernetes CronJob that will run a Docker container to update the secret from within the Kubernetes cluster. How do I do that? The AWS token lasts only 12 hours, so I have to regenerate it from within the cluster so that images can still be pulled if a pod crashes, etc.
Is there an internal API I have access to within Kubernetes?
cmd = """aws ecr get-login --no-include-email --region us-east-1 > aws_token.txt"""
run_bash(cmd)
f = open('aws_token.txt').readlines()
TOKEN = f[0].split(' ')[5]
SECRET_NAME = "%s-ecr-registry" % (self.region)
cmd = """kubectl delete secret --ignore-not-found %s -n %s""" % (SECRET_NAME,namespace)
print (cmd)
run_bash(cmd)
cmd = """kubectl create secret docker-registry %s --docker-server=https://%s.dkr.ecr.%s.amazonaws.com --docker-username=AWS --docker-password="%s" --docker-email="david.montgomery#gmail.com" -n %s """ % (SECRET_NAME,self.aws_account_id,self.region,TOKEN,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl describe secrets/%s-ecr-registry -n %s" % (self.region,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl get secret %s-ecr-registry -o yaml -n %s" % (self.region,namespace)
print (cmd)
As it happens I literally just got done doing this.
Below is everything you need to set up a cronjob to roll your AWS docker login token, and then re-login to ECR, every 6 hours. Just replace the {{ variables }} with your own actual values.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: {{ namespace }}
name: ecr-cred-helper
rules:
- apiGroups: [""]
resources:
- secrets
- serviceaccounts
- serviceaccounts/token
verbs:
- 'delete'
- 'create'
- 'patch'
- 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: ecr-cred-helper
namespace: {{ namespace }}
subjects:
- kind: ServiceAccount
name: sa-ecr-cred-helper
namespace: {{ namespace }}
roleRef:
kind: Role
name: ecr-cred-helper
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-ecr-cred-helper
namespace: {{ namespace }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
annotations:
name: ecr-cred-helper
namespace: {{ namespace }}
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
serviceAccountName: sa-ecr-cred-helper
containers:
- command:
- /bin/sh
- -c
- |-
TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6`
echo "ENV variables setup done."
kubectl delete secret -n {{ namespace }} --ignore-not-found $SECRET_NAME
kubectl create secret -n {{ namespace }} docker-registry $SECRET_NAME \
--docker-server=https://{{ ECR_REPOSITORY_URL }} \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
echo "Secret created by name. $SECRET_NAME"
kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n {{ namespace }}
echo "All done."
env:
- name: AWS_DEFAULT_REGION
value: eu-west-1
- name: AWS_SECRET_ACCESS_KEY
value: '{{ AWS_SECRET_ACCESS_KEY }}'
- name: AWS_ACCESS_KEY_ID
value: '{{ AWS_ACCESS_KEY_ID }}'
- name: ACCOUNT
value: '{{ AWS_ACCOUNT_ID }}'
- name: SECRET_NAME
value: '{{ imagePullSecret }}'
- name: REGION
value: 'eu-west-1'
- name: EMAIL
value: '{{ ANY_EMAIL }}'
image: odaniait/aws-kubectl:latest
imagePullPolicy: IfNotPresent
name: ecr-cred-helper
resources: {}
securityContext:
capabilities: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: Default
hostNetwork: true
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
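As for the internal-API part of the question: a pod (or CronJob) can also talk to the API server directly through its service account instead of shelling out to kubectl. A hedged sketch of the delete-and-recreate step with client-go; the kubernetes.io/dockerconfigjson layout is the same one kubectl create secret docker-registry produces, but the function name and token handling here are illustrative only, and older client-go versions omit the context/options arguments:

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recreateRegistrySecret deletes (if present) and re-creates an ECR pull
// secret of type kubernetes.io/dockerconfigjson in the given namespace.
func recreateRegistrySecret(ctx context.Context, cs *kubernetes.Clientset, namespace, name, server, token, email string) error {
	// Build the same auths structure that `kubectl create secret docker-registry` writes.
	auth := base64.StdEncoding.EncodeToString([]byte("AWS:" + token))
	cfg := map[string]interface{}{
		"auths": map[string]interface{}{
			server: map[string]string{
				"username": "AWS",
				"password": token,
				"email":    email,
				"auth":     auth,
			},
		},
	}
	data, err := json.Marshal(cfg)
	if err != nil {
		return fmt.Errorf("marshal dockerconfigjson: %w", err)
	}
	// Equivalent of `kubectl delete secret --ignore-not-found`.
	if err := cs.CoreV1().Secrets(namespace).Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data:       map[string][]byte{corev1.DockerConfigJsonKey: data},
	}
	_, err = cs.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
	return err
}

Fetching the ECR token itself still requires AWS credentials and the AWS SDK or CLI in the image, exactly as in the CronJob above.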
I am adding my solution for copying secrets between namespaces using a CronJob, because this was the Stack Overflow answer that came up when I was searching for secret copying using a CronJob.
In the source namespace, you need to define a Role, RoleBinding and ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-user-user-secret-service-account
namespace: source-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: demo-user-role
namespace: source-namespace
rules:
- apiGroups: [""]
resources: ["secrets"]
# Secrets you want to have access in your namespace
resourceNames: ["demo-user" ]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-user-cron-role-binding
namespace: source-namespace
subjects:
- kind: ServiceAccount
name: demo-user-user-secret-service-account
namespace: source-namespace
roleRef:
kind: Role
name: demo-user-role
apiGroup: ""
and the CronJob definition will look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
name: demo-user-user-secret-copy-cronjob
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 5
successfulJobsHistoryLimit: 3
startingDeadlineSeconds: 10
jobTemplate:
spec:
template:
spec:
containers:
- name: demo-user-user-secret-copy-cronjob
image: bitnami/kubectl:1.25.4-debian-11-r6
imagePullPolicy: IfNotPresent
command:
- "/bin/bash"
- "-c"
- "kubectl -n source-namespace get secret demo-user -o json | \
jq 'del(.metadata.creationTimestamp, .metadata.uid, .metadata.resourceVersion, .metadata.ownerReferences, .metadata.namespace)' > /tmp/demo-user-secret.json && \
kubectl apply --namespace target-namespace -f /tmp/demo-user-secret.json"
restartPolicy: Never
securityContext:
privileged: false
allowPrivilegeEscalation: true
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop: [ "all" ]
serviceAccountName: demo-user-user-secret-service-account
In the target namespace you also need a Role and RoleBinding so that the CronJob in the source namespace can copy over the secrets.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: target-namespace
name: demo-user-role
rules:
- apiGroups: [""]
resources:
- secrets
verbs:
- 'list'
- 'delete'
- 'create'
- 'patch'
- 'get'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-user-role-binding
namespace: target-namespace
subjects:
- kind: ServiceAccount
name: demo-user-user-secret-service-account
namespace: source-namespace
roleRef:
kind: Role
name: demo-user-role
apiGroup: ""
In your target namespace deployment you can read in the secrets as regular files.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
...
spec:
containers:
- name: my-app
image: [image-name]
volumeMounts:
- name: your-secret
mountPath: /opt/your-secret
readOnly: true
volumes:
- name: your-secret
secret:
secretName: demo-user
items:
- key: ca.crt
path: ca.crt
- key: user.crt
path: user.crt
- key: user.key
path: user.key
- key: user.p12
path: user.p12
- key: user.password
path: user.password