I am trying to set up a custom admission webhook. I do not want TLS for it, but I do understand that the client (which is the kube-apiserver) makes an "https" request to the webhook server, and hence we require a TLS server in the webhook.
The webhook server code is below. I define a dummy valid cert & key as constants. The server below runs fine in the webhook service:
package main

import (
    "crypto/tls"
    "fmt"
    "html"
    "log"
    "net/http"
)

func handleRoot(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "hello test.. %q", html.EscapeString(r.URL.Path))
}

type Config struct {
    CertFile string
    KeyFile  string
}

func main() {
    log.Print("Starting server ...2.2")
    http.HandleFunc("/test", handleRoot)
    // cert and key are the dummy PEM-encoded certificate and private key,
    // defined elsewhere as string constants (omitted here).
    config := Config{
        CertFile: cert,
        KeyFile:  key,
    }
    server := &http.Server{
        Addr:      ":9543",
        TLSConfig: configTLS(config),
    }
    err := server.ListenAndServeTLS("", "")
    if err != nil {
        panic(err)
    }
}

func configTLS(config Config) *tls.Config {
    sCert, err := tls.X509KeyPair([]byte(config.CertFile), []byte(config.KeyFile))
    if err != nil {
        log.Fatal(err)
    }
    return &tls.Config{
        Certificates:       []tls.Certificate{sCert},
        ClientAuth:         tls.NoClientCert,
        InsecureSkipVerify: true, // note: this is a client-side setting; it has no effect on a server
    }
}
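For reference, the serving certificate has to be valid for the DNS name the API server dials, i.e. <service>.<namespace>.svc. A self-signed pair for the names used in the configuration below could be generated roughly like this (a sketch, assuming OpenSSL 1.1.1+ for -addext):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=test-webhook-service-service.test-webhook-service-system.svc" \
  -addext "subjectAltName=DNS:test-webhook-service-service.test-webhook-service-system.svc"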
Also, the MutatingWebhookConfiguration YAML looks like this:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: null
  labels:
    test-webhook-service.io/system: "true"
  name: test-webhook-service-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: test-webhook-service-service
      namespace: test-webhook-service-system
      path: /test
  failurePolicy: Ignore
  matchPolicy: Equivalent
  name: mutation.test-webhook-service.io
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
  sideEffects: None
Now, in order to test whether the admission controller works, I created a new Pod. The webhook server logs this error:
2022/09/30 05:42:56 Starting server ...2.2
2022/09/30 05:43:23 http: TLS handshake error from 10.100.0.1:37976: remote error: tls: bad certificate
What does this mean? Does it mean I have to put a valid caBundle in the MutatingWebhookConfiguration, and thus that TLS is required after all? If that is the case, I am not sure what the following from the official k8s documentation (source) means:
The example admission webhook server leaves the ClientAuth field empty, which defaults to NoClientCert. This means that the webhook server does not authenticate the identity of the clients, supposedly API servers.
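For what it's worth, the caBundle field (the base64-encoded CA certificate that signed the webhook's serving certificate) sits under clientConfig next to service; a sketch, assuming a ca.crt produced alongside the serving pair:

clientConfig:
  caBundle: <output of: base64 -w0 ca.crt>
  service:
    name: test-webhook-service-service
    namespace: test-webhook-service-system
    path: /test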
Related
I would like to permit a Kubernetes pod in namespace my-namespace to access configmap/config in the same namespace. For this purpose I have defined the following role and rolebinding:
apiVersion: v1
kind: List
items:
- kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["config"]
    verbs: ["get"]
- kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  subjects:
  - kind: ServiceAccount
    name: default
    namespace: my-namespace
  roleRef:
    kind: Role
    name: config
    apiGroup: rbac.authorization.k8s.io
Yet still, the pod runs into the following error:
configmaps \"config\" is forbidden: User \"system:serviceaccount:my-namespace:default\"
cannot get resource \"configmaps\" in API group \"\" in the namespace \"my-namespace\"
What am I missing? I guess it must be a simple thing, which a second pair of eyes may spot immediately.
UPDATE: Here is the relevant fragment of my client code, which uses client-go:
cfg, err := rest.InClusterConfig()
if err != nil {
    logger.Fatalf("cannot obtain Kubernetes config: %v", err)
}
k8sClient, err := k8s.NewForConfig(cfg)
if err != nil {
    logger.Fatalf("cannot create Clientset")
}
configMapClient := k8sClient.CoreV1().ConfigMaps(Namespace)
configMap, err := configMapClient.Get(ctx, "config", metav1.GetOptions{})
if err != nil {
    logger.Fatalf("cannot obtain configmap: %v", err) // error occurs here
}
I don't see anything in particular wrong with your Role or RoleBinding, and in fact when I deploy them into my environment they seem to work as intended. You haven't provided a complete reproducer in your question, so here's how I'm testing things out:
I started by creating a namespace my-namespace
I have the following in kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
commonLabels:
  app: rbactest
resources:
- rbac.yaml
- deployment.yaml
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: config
  literals:
  - foo=bar
  - this=that
In rbac.yaml I have the Role and RoleBinding from your question (without modification).
In deployment.yaml I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cli
spec:
  # the selector and pod labels are injected by kustomize's commonLabels
  replicas: 1
  template:
    spec:
      containers:
      - name: cli
        image: quay.io/openshift/origin-cli
        command:
        - sleep
        - inf
With this in place, I deploy everything by running:
kubectl apply -k .
And then once the Pod is up and running, this works:
$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm config
NAME     DATA   AGE
config   2      3m50s
Attempts to access other ConfigMaps will not work, as expected:
$ kubectl exec deploy/cli -- kubectl get cm foo
Error from server (Forbidden): configmaps "foo" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get resource "configmaps" in API group "" in the namespace "my-namespace"
command terminated with exit code 1
If you're seeing different behavior, it would be interesting to figure out where your process differs from what I've done.
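One quick check worth running (a sketch, assuming your own credentials may impersonate service accounts) is to ask the API server directly whether the ServiceAccount can perform the read:

$ kubectl auth can-i get configmaps/config -n my-namespace \
    --as=system:serviceaccount:my-namespace:default

If that prints "no", the RBAC objects aren't being picked up; if "yes", the problem is elsewhere (for example, the pod running under a different ServiceAccount).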
Your Go code looks fine also; I'm able to run this in the "cli" container:
package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }
    namespace := "my-namespace"
    configMapClient := clientset.CoreV1().ConfigMaps(namespace)
    configMap, err := configMapClient.Get(context.TODO(), "config", metav1.GetOptions{})
    if err != nil {
        log.Fatalf("cannot obtain configmap: %v", err)
    }
    fmt.Printf("%+v\n", configMap)
}
If I compile the above, kubectl cp it into the container, and run it, I get this output:
&ConfigMap{ObjectMeta:{config my-namespace 2ef6f031-7870-41f1-b091-49ab360b98da 2926 0 2022-10-15 03:22:34 +0000 UTC <nil> <nil> map[app:rbactest] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"foo":"bar","this":"that"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"rbactest"},"name":"config","namespace":"my-namespace"}}
] [] [] [{kubectl-client-side-apply Update v1 2022-10-15 03:22:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:foo":{},"f:this":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}}} }]},Data:map[string]string{foo: bar,this: that,},BinaryData:map[string][]byte{},Immutable:nil,}
I want to run a pod that listens for updates to endpoint lists. (I'm not yet ready to adopt the alpha-level EndpointSlice feature, but I'll expand to that eventually.)
I have this code:
package main

import (
    "fmt"
    "os"
    "os/signal"
    "sync"
    "syscall"

    "k8s.io/apimachinery/pkg/util/runtime"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
)

func ReadKubeConfig() (*rest.Config, *kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, nil, err
    }
    clients, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, nil, err
    }
    return config, clients, nil
}

func main() {
    _, cs, err := ReadKubeConfig()
    if err != nil {
        fmt.Printf("could not create Clientset: %s\n", err)
        os.Exit(1)
    }
    factory := informers.NewSharedInformerFactory(cs, 0)
    ifmr := factory.Core().V1().Endpoints().Informer()
    stop := make(chan struct{})
    ifmr.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(next interface{}) {
            fmt.Printf("AddFunc(%v)\n", next)
        },
        UpdateFunc: func(prev, next interface{}) {
            fmt.Printf("UpdateFunc(%v, %v)\n", prev, next)
        },
        DeleteFunc: func(prev interface{}) {
            fmt.Printf("DeleteFunc(%v)\n", prev)
        },
    })
    wg := &sync.WaitGroup{}
    wg.Add(1)
    go func() {
        defer runtime.HandleCrash()
        ifmr.Run(stop)
        wg.Done()
    }()
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, os.Interrupt)
    signal.Notify(ch, os.Signal(syscall.SIGTERM))
    signal.Notify(ch, os.Signal(syscall.SIGHUP))
    sig := <-ch
    fmt.Printf("Received signal %s\n", sig)
    close(stop)
    wg.Wait()
}
I get this error when deploying and running:
kubeendpointwatcher.go:55: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:eng:default" cannot list resource "endpoints" in API group "" at the cluster scope
I have the following role and role binding defined and deployed to the "eng" namespace:
watch_endpoints$ kubectl -n eng get role mesh-endpoint-read -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2021-07-08T19:59:20Z"
  name: mesh-endpoint-read
  namespace: eng
  resourceVersion: "182975428"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/eng/roles/mesh-endpoint-read
  uid: fcadcc2a-19d0-4d6e-bee1-78413f51b91b
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - list
  - watch
I have the following rolebinding:
watch_endpoints$ kubectl -n eng get rolebinding mesh-endpoint-read -o yaml | sed -e 's/^/ /g'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2021-07-08T19:59:20Z"
  name: mesh-endpoint-read
  namespace: eng
  resourceVersion: "182977845"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/eng/rolebindings/mesh-endpoint-read
  uid: 705a3e50-2a73-47ed-aa62-0ea48f3493ee
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mesh-endpoint-read
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
You will note that I bind it to the serviceaccount named default in both the default and the eng namespaces, although the error message seems to indicate that the program is indeed running as the default serviceaccount in the eng namespace.
I have previously used Role and RoleBinding and ServiceAccount objects that work as expected, so I don't understand why this doesn't work. What am I missing?
For testing/reproduction purposes, I run this program by doing kubectl cp of a built binary (CGO disabled) into a container created with kubectl -n eng create deploy, using a vanilla ubuntu image running /bin/sh -c "sleep 999999999", and then executing a /bin/bash shell in that pod's container.
You have created the Role and RoleBinding in the eng namespace. However, as per the error message:
kubeendpointwatcher.go:55: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:eng:default" cannot list resource "endpoints" in API group "" at the cluster scope
you are querying endpoints at the cluster scope. Either limit your query to the eng namespace, or use a ClusterRole and ClusterRoleBinding.
The error message provides the hint (system:serviceaccount:eng:default) that the serviceaccount named default, running in the eng namespace, does not have permission to query endpoints at the cluster scope.
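If you want to keep the namespaced Role, the shared informer factory in your program can be scoped to one namespace; a minimal sketch using client-go's factory options:

// Scope the factory to the eng namespace so the informer's list/watch
// calls are namespaced, matching the namespaced Role above.
factory := informers.NewSharedInformerFactoryWithOptions(
    cs, 0, informers.WithNamespace("eng"),
)
ifmr := factory.Core().V1().Endpoints().Informer()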
To validate this, you can run two curl calls: exec into a pod that uses the same serviceaccount, run the following against the eng namespace, and later try it against other namespaces (where it should be forbidden):
curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc/api/v1/namespaces/eng/endpoints
With Kubernetes, in a multi-tenant environment controlled by RBAC, when creating a new Ingress CNAME, I would like to force the CNAME to follow the format:
${service}.${namespace}.${cluster}.kube.infra
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${service}
spec:
  tls:
  - hosts:
    - ${service}.${namespace}.${cluster}.kube.infra
    secretName: conso-elasticsearch-ssl
  rules:
  - host: ${service}.${namespace}.${cluster}.kube.infra
    http:
      paths:
      - path: /
        backend:
          serviceName: ${service}
          servicePort: 9200
Is it possible?
You can do it by writing a validating admission webhook which validates the Ingress YAML and rejects it if the CNAME format is not the one you want. A better way is to use Open Policy Agent (OPA) and write a Rego policy. Here is a guide on how to perform policy-driven validation of Ingress using OPA.
package kubernetes.admission

import data.kubernetes.namespaces

operations = {"CREATE", "UPDATE"}

deny[msg] {
    input.request.kind.kind == "Ingress"
    operations[input.request.operation]
    host := input.request.object.spec.rules[_].host
    not fqdn_matches_any(host, valid_ingress_hosts)
    msg := sprintf("invalid ingress host %q", [host])
}

valid_ingress_hosts = {
    # valid hosts
}

fqdn_matches_any(str, patterns) {
    fqdn_matches(str, patterns[_])
}

fqdn_matches(str, pattern) {
    # validation logic (wildcard case)
}

fqdn_matches(str, pattern) {
    not contains(pattern, "*")
    str == pattern
}
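The two commented bodies above are placeholders. A hypothetical completion for the wildcard case, assuming a whitelist entry like "*.kube.infra" (the linked guide instead derives the set from a namespace annotation), might look like:

# Hypothetical whitelist; in practice, build this set per namespace.
valid_ingress_hosts = {"*.kube.infra"}

# Wildcard case: "*.kube.infra" matches "svc.ns.cluster.kube.infra".
fqdn_matches(str, pattern) {
    contains(pattern, "*")
    suffix := trim_left(pattern, "*")
    endswith(str, suffix)
}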
I have a cluster with 3 control-plane nodes. Like any cluster, mine has a default kubernetes service, and like any service it has a list of endpoints:
apiVersion: v1
items:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: 2017-12-12T17:08:34Z
    name: kubernetes
    namespace: default
    resourceVersion: "6242123"
    selfLink: /api/v1/namespaces/default/endpoints/kubernetes
    uid: 161edaa7-df5f-11e7-a311-d09466092927
  subsets:
  - addresses:
    - ip: 10.9.22.25
    - ip: 10.9.22.26
    - ip: 10.9.22.27
    ports:
    - name: https
      port: 8443
      protocol: TCP
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Everything is OK, but I completely fail to understand where these endpoints come from. It would be logical to assume they come from the Service's label selector, but there is no label selector:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-12-12T17:08:34Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "6"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: 161e4f00-df5f-11e7-a311-d09466092927
spec:
  clusterIP: 10.100.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
So, could anybody explain how k8s Services and Endpoints work in the case of the built-in default kubernetes service?
It's not clear how you created your multi-master cluster, but here is some research for you:
Set up High-Availability Kubernetes Masters describes HA k8s creation. It has notes about the default kubernetes service:
Instead of trying to keep an up-to-date list of Kubernetes apiservers in the Kubernetes service, the system directs all traffic to the external IP:
- in a one-master cluster the IP points to the single master,
- in a multi-master cluster the IP points to the load balancer in front of the masters.
Similarly, the external IP will be used by kubelets to communicate with the master.
So I would rather expect an LB IP there instead of the 3 master IPs.
Service creation: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L46-L83
const kubernetesServiceName = "kubernetes"

// Controller is the controller manager for the core bootstrap Kubernetes
// controller loops, which manage creating the "kubernetes" service, the
// "default", "kube-system" and "kube-public" namespaces, and provide the IP
// repair check on service IPs
type Controller struct {
    ServiceClient   corev1client.ServicesGetter
    NamespaceClient corev1client.NamespacesGetter
    EventClient     corev1client.EventsGetter
    healthClient    rest.Interface

    ServiceClusterIPRegistry rangeallocation.RangeRegistry
    ServiceClusterIPInterval time.Duration
    ServiceClusterIPRange    net.IPNet

    ServiceNodePortRegistry rangeallocation.RangeRegistry
    ServiceNodePortInterval time.Duration
    ServiceNodePortRange    utilnet.PortRange

    EndpointReconciler reconcilers.EndpointReconciler
    EndpointInterval   time.Duration

    SystemNamespaces         []string
    SystemNamespacesInterval time.Duration

    PublicIP net.IP

    // ServiceIP indicates where the kubernetes service will live. It may not be nil.
    ServiceIP                 net.IP
    ServicePort               int
    ExtraServicePorts         []corev1.ServicePort
    ExtraEndpointPorts        []corev1.EndpointPort
    PublicServicePort         int
    KubernetesServiceNodePort int

    runner *async.Runner
}
Service periodically updates: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L204-L242
// RunKubernetesService periodically updates the kubernetes service
func (c *Controller) RunKubernetesService(ch chan struct{}) {
    // wait until process is ready
    wait.PollImmediateUntil(100*time.Millisecond, func() (bool, error) {
        var code int
        c.healthClient.Get().AbsPath("/healthz").Do().StatusCode(&code)
        return code == http.StatusOK, nil
    }, ch)

    wait.NonSlidingUntil(func() {
        // Service definition is not reconciled after first
        // run, ports and type will be corrected only during
        // start.
        if err := c.UpdateKubernetesService(false); err != nil {
            runtime.HandleError(fmt.Errorf("unable to sync kubernetes service: %v", err))
        }
    }, c.EndpointInterval, ch)
}

// UpdateKubernetesService attempts to update the default Kube service.
func (c *Controller) UpdateKubernetesService(reconcile bool) error {
    // Update service & endpoint records.
    // TODO: when it becomes possible to change this stuff,
    // stop polling and start watching.
    // TODO: add endpoints of all replicas, not just the elected master.
    if err := createNamespaceIfNeeded(c.NamespaceClient, metav1.NamespaceDefault); err != nil {
        return err
    }

    servicePorts, serviceType := createPortAndServiceSpec(c.ServicePort, c.PublicServicePort, c.KubernetesServiceNodePort, "https", c.ExtraServicePorts)
    if err := c.CreateOrUpdateMasterServiceIfNeeded(kubernetesServiceName, c.ServiceIP, servicePorts, serviceType, reconcile); err != nil {
        return err
    }
    endpointPorts := createEndpointPortSpec(c.PublicServicePort, "https", c.ExtraEndpointPorts)
    if err := c.EndpointReconciler.ReconcileEndpoints(kubernetesServiceName, c.PublicIP, endpointPorts, reconcile); err != nil {
        return err
    }
    return nil
}
Endpoint update place: https://github.com/kubernetes/kubernetes/blob/72f69546142a84590550e37d70260639f8fa3e88/pkg/master/reconcilers/lease.go#L163
Endpoints can also be created manually. Visit Services without selectors for more info.
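A minimal sketch of that pattern (hypothetical names and IP), mirroring how the kubernetes Service pairs with a manually managed Endpoints object:

apiVersion: v1
kind: Service
metadata:
  name: external-db   # hypothetical
spec:
  ports:              # no selector, so no endpoints are managed automatically
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # must match the Service name
subsets:
- addresses:
  - ip: 10.9.22.99    # hypothetical external backend
  ports:
  - port: 5432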
I'm trying to update a deployment from a Go application running in the cluster, but it fails with an authorization error.
GKE Master version 1.9.4-gke.1
package main

import (
    "fmt"

    "github.com/pkg/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func updateReplicas(namespace string, name string, replicas int32) error {
    config, err := rest.InClusterConfig()
    if err != nil {
        return errors.Wrap(err, "failed rest.InClusterConfig")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return errors.Wrap(err, "failed kubernetes.NewForConfig")
    }
    deployment, err := clientset.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
    if err != nil {
        fmt.Printf("failed get Deployment %+v\n", err)
        return errors.Wrap(err, "failed get deployment")
    }
    deployment.Spec.Replicas = &replicas
    fmt.Printf("Deployment %v\n", deployment)
    ug, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment)
    if err != nil {
        fmt.Printf("failed update Deployment %+v", err)
        return errors.Wrap(err, "failed update Deployment")
    }
    fmt.Printf("done update deployment %v\n", ug)
    return nil
}
Result message:
failed get Deployment deployments.apps "land-node" is forbidden: User "system:serviceaccount:default:default" cannot get deployments.apps in the namespace "default": Unknown user "system:serviceaccount:default:default"
I have set up the permissions as follows, but is it not enough?
deployment-editor.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: deployment-editor
rules:
- apiGroups: [""]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
editor-deployment.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: editor-deployment
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io
From Unable to list deployments resources using RBAC.
replicasets and deployments exist in the "extensions" and "apps" API groups, not in the legacy "" group
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
  - update
  - create
  - patch
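After updating the rule, a quick way to verify (a sketch, assuming your own credentials may impersonate service accounts):

kubectl auth can-i get deployments.apps -n default \
  --as=system:serviceaccount:default:default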