I would like to permit a Kubernetes pod in the namespace my-namespace to access configmap/config in the same namespace. For this purpose I have defined the following Role and RoleBinding:
apiVersion: v1
kind: List
items:
- kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["config"]
    verbs: ["get"]
- kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  subjects:
  - kind: ServiceAccount
    name: default
    namespace: my-namespace
  roleRef:
    kind: Role
    name: config
    apiGroup: rbac.authorization.k8s.io
Yet still, the pod runs into the following error:
configmaps \"config\" is forbidden: User \"system:serviceaccount:my-namespace:default\"
cannot get resource \"configmaps\" in API group \"\" in the namespace \"my-namespace\"
What am I missing? I guess it must be something simple that a second pair of eyes will spot immediately.
UPDATE: Here is a relevant fragment of my client code, which uses client-go:
cfg, err := rest.InClusterConfig()
if err != nil {
    logger.Fatalf("cannot obtain Kubernetes config: %v", err)
}
k8sClient, err := k8s.NewForConfig(cfg)
if err != nil {
    logger.Fatalf("cannot create Clientset")
}
configMapClient := k8sClient.CoreV1().ConfigMaps(Namespace)
configMap, err := configMapClient.Get(ctx, "config", metav1.GetOptions{})
if err != nil {
    logger.Fatalf("cannot obtain configmap: %v", err) // error occurs here
}
I don't see anything in particular wrong with your Role or
RoleBinding, and in fact when I deploy them into my environment they
seem to work as intended. You haven't provided a complete reproducer in your question, so here's how I'm testing things out:
I started by creating a namespace my-namespace
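For example:
kubectl create namespace my-namespace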
I have the following in kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
commonLabels:
  app: rbactest
resources:
- rbac.yaml
- deployment.yaml
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: config
  literals:
  - foo=bar
  - this=that
In rbac.yaml I have the Role and RoleBinding from your question (without modification).
In deployment.yaml I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cli
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: cli
        image: quay.io/openshift/origin-cli
        command:
        - sleep
        - inf
With this in place, I deploy everything by running:
kubectl apply -k .
And then once the Pod is up and running, this works:
$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm config
NAME     DATA   AGE
config   2      3m50s
Attempts to access other ConfigMaps will not work, as expected:
$ kubectl exec deploy/cli -- kubectl get cm foo
Error from server (Forbidden): configmaps "foo" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get resource "configmaps" in API group "" in the namespace "my-namespace"
command terminated with exit code 1
If you're seeing different behavior, it would be interesting to figure out where your process differs from what I've done.
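One way to narrow it down is to check the RBAC rules independently of the pod by impersonating the service account from your own (admin) kubeconfig; assuming the names from your manifests, something like:

kubectl auth can-i get configmaps/config --as=system:serviceaccount:my-namespace:default -n my-namespace

should print "yes" if the Role and RoleBinding are in effect, and "no" otherwise (impersonation requires that your own user is allowed to impersonate service accounts).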
Your Go code looks fine also; I'm able to run this in the "cli" container:
package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    namespace := "my-namespace"
    configMapClient := clientset.CoreV1().ConfigMaps(namespace)
    configMap, err := configMapClient.Get(context.TODO(), "config", metav1.GetOptions{})
    if err != nil {
        log.Fatalf("cannot obtain configmap: %v", err)
    }
    fmt.Printf("%+v\n", configMap)
}
If I compile the above, kubectl cp it into the container and run it, I get as output:
&ConfigMap{ObjectMeta:{config my-namespace 2ef6f031-7870-41f1-b091-49ab360b98da 2926 0 2022-10-15 03:22:34 +0000 UTC <nil> <nil> map[app:rbactest] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"foo":"bar","this":"that"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"rbactest"},"name":"config","namespace":"my-namespace"}}
] [] [] [{kubectl-client-side-apply Update v1 2022-10-15 03:22:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:foo":{},"f:this":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}}} }]},Data:map[string]string{foo: bar,this: that,},BinaryData:map[string][]byte{},Immutable:nil,}
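One last thing worth double-checking on your side is that the pod actually runs as the default service account the RoleBinding targets; a quick check (substitute your pod name) is:

kubectl get pod <your-pod> -n my-namespace -o jsonpath='{.spec.serviceAccountName}'

If that prints a different service account, bind the Role to that one instead.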
I am trying to set up a custom admission webhook. I do not want to use TLS for it. I do understand that the client (which is the kube-apiserver) makes an "https" request to the webhook server, and hence we require a TLS server in the webhook.
The webhook server code is below. I define a dummy but valid cert & key as constants (not shown here). The server below works fine in the webhook service:
package main

import (
    "crypto/tls"
    "fmt"
    "html"
    "log"
    "net/http"
)

func handleRoot(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "hello test.. %q", html.EscapeString(r.URL.Path))
}

type Config struct {
    CertFile string
    KeyFile  string
}

func main() {
    log.Print("Starting server ...2.2")
    http.HandleFunc("/test", handleRoot)
    config := Config{
        CertFile: cert,
        KeyFile:  key,
    }
    server := &http.Server{
        Addr:      ":9543",
        TLSConfig: configTLS(config),
    }
    err := server.ListenAndServeTLS("", "")
    if err != nil {
        panic(err)
    }
}

func configTLS(config Config) *tls.Config {
    sCert, err := tls.X509KeyPair([]byte(config.CertFile), []byte(config.KeyFile))
    if err != nil {
        log.Fatal(err)
    }
    return &tls.Config{
        Certificates:       []tls.Certificate{sCert},
        ClientAuth:         tls.NoClientCert,
        InsecureSkipVerify: true,
    }
}
Also, the MutatingWebhookConfiguration YAML looks like this:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: null
  labels:
    test-webhook-service.io/system: "true"
  name: test-webhook-service-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: test-webhook-service-service
      namespace: test-webhook-service-system
      path: /test
  failurePolicy: Ignore
  matchPolicy: Equivalent
  name: mutation.test-webhook-service.io
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
  sideEffects: None
Now, in order to test whether the admission controller works, I created a new Pod. The webhook server gives this error:
2022/09/30 05:42:56 Starting server ...2.2
2022/09/30 05:43:23 http: TLS handshake error from 10.100.0.1:37976: remote error: tls: bad certificate
What does this mean? Does it mean I have to put a valid caBundle in the MutatingWebhookConfiguration, and thus TLS is required? If that is the case, I am not sure what the following from the official Kubernetes documentation (source) means:
The example admission webhook server leaves the ClientAuth field
empty, which defaults to NoClientCert. This means that the webhook
server does not authenticate the identity of the clients, supposedly
API servers.
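For reference, if a CA bundle did have to be supplied, it would sit next to the service reference under clientConfig in the same manifest (this only shows where the field goes; it is not part of my current configuration):

clientConfig:
  caBundle: <base64-encoded CA certificate>
  service:
    name: test-webhook-service-service
    namespace: test-webhook-service-system
    path: /test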
Exec Summary
Jenkins is running in a Kubernetes cluster that was just upgraded to 1.19.7, but now the Jenkins build scripts are failing when running
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
with the error:
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
What permissions or roles should I change?
MORE DETAIL HERE
Jenkins is running within a Kubernetes cluster as a master, and it picks up Git jobs and then creates slave pods which are also supposed to run in the same cluster. We have a namespace in the cluster called "jenkins". We use Jenkins to create builds of the microservice applications, which run in their own containers, and it then prompts to have these deployed through the test, demo, and production pipeline.
The cluster has been updated to Kubernetes 1.19.7 using kops. Everything still deploys, runs, and is accessible as normal. As a user you would not think that there is a problem with the applications running on the cluster; all are accessible via the browser and the pods show no significant issues.
Jenkins is still accessible (running version 2.278, with Kubernetes plugin 1.29.1, Kubernetes credential 0.8.0, Kubernetes Client API Plugin 4.13.2-1)
I can log into Jenkins, see everything I would normally expect to see
I can use LENS to connect to the cluster and see all the nodes, pods etc as normal.
However, and this is where our problem now lies after upgrading to 1.19.7, when a Jenkins job starts it now always fails at the point where it tries to set the kubectl context.
We get this error in every build pipeline at the same place...
[Pipeline] load
[Pipeline] { (JenkinsUtil.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] stage
[Pipeline] { (Set-Up and checks)
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG or $user or $password
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] sh
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [KUBECONFIG, user]
See https://****.io/redirect/groovy-string-interpolation for details.
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] echo
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
Now I presume this is about security, but I'm unsure what to change.
I can see that it's using system:anonymous and this may have been restricted in later Kubernetes versions, but I'm unsure how to either supply another user or allow this to work from the Jenkins master node in this namespace.
As we run Jenkins and also have Jenkins deploy, I can see the following service accounts:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: jenkins
  namespace: jenkins
  selfLink: /api/v1/namespaces/jenkins/serviceaccounts/jenkins
  uid: a81a479a-b525-4b01-be39-4445796c6eb1
  resourceVersion: '94146677'
  creationTimestamp: '2020-08-20T13:32:35Z'
  labels:
    app: jenkins-master
    app.kubernetes.io/managed-by: Helm
    chart: jenkins-acme-2.278.102
    heritage: Helm
    release: jenkins-acme-v2
  annotations:
    meta.helm.sh/release-name: jenkins-acme-v2
    meta.helm.sh/release-namespace: jenkins
secrets:
- name: jenkins-token-lqgk5
and also
kind: ServiceAccount
apiVersion: v1
metadata:
  name: jenkins-deployer
  namespace: jenkins
  selfLink: /api/v1/namespaces/jenkins/serviceaccounts/jenkins-deployer
  uid: 4442ec9b-9cbd-11e9-a350-06cfb66a82f6
  resourceVersion: '2157387'
  creationTimestamp: '2019-07-02T11:33:51Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"jenkins-deployer","namespace":"jenkins"}}
secrets:
- name: jenkins-deployer-token-mdfq9
And the following roles
jenkins-role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{"meta.helm.sh/release-name":"jenkins-acme-v2","meta.helm.sh/release-namespace":"jenkins"},"creationTimestamp":"2020-08-20T13:32:35Z","labels":{"app":"jenkins-master","app.kubernetes.io/managed-by":"Helm","chart":"jenkins-acme-2.278.102","heritage":"Helm","release":"jenkins-acme-v2"},"name":"jenkins-role","namespace":"jenkins","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-role","uid":"de5431f6-d576-4804-b132-6562d0ba7a94"},"rules":[{"apiGroups":["","extensions"],"resources":["*"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
    meta.helm.sh/release-name: jenkins-acme-v2
    meta.helm.sh/release-namespace: jenkins
  creationTimestamp: '2020-08-20T13:32:35Z'
  labels:
    app: jenkins-master
    app.kubernetes.io/managed-by: Helm
    chart: jenkins-acme-2.278.102
    heritage: Helm
    release: jenkins-acme-v2
  name: jenkins-role
  namespace: jenkins
  resourceVersion: '94734324'
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-role
  uid: de5431f6-d576-4804-b132-6562d0ba7a94
rules:
- apiGroups:
  - ''
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
  - update
jenkins-deployer-role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-deployer-role
  namespace: jenkins
  selfLink: >-
    /apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-deployer-role
  uid: 87b6486e-6576-11e8-92a9-06bdf97be268
  resourceVersion: '94731699'
  creationTimestamp: '2018-06-01T08:33:59Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"creationTimestamp":"2018-06-01T08:33:59Z","name":"jenkins-deployer-role","namespace":"jenkins","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-deployer-role","uid":"87b6486e-6576-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["*"]},{"apiGroups":[""],"resources":["deployments","services"],"verbs":["*"]}]}
rules:
- verbs:
  - '*'
  apiGroups:
  - ''
  resources:
  - pods
- verbs:
  - '*'
  apiGroups:
  - ''
  resources:
  - deployments
  - services
and jenkins-namespace-manager
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-namespace-manager
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-namespace-manager
  uid: 93e80d54-6346-11e8-92a9-06bdf97be268
  resourceVersion: '94733699'
  creationTimestamp: '2018-05-29T13:45:41Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":"2018-05-29T13:45:41Z","name":"jenkins-namespace-manager","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-namespace-manager","uid":"93e80d54-6346-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":[""],"resources":["namespaces"],"verbs":["get","watch","list","create"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
rules:
- verbs:
  - get
  - watch
  - list
  - create
  apiGroups:
  - ''
  resources:
  - namespaces
- verbs:
  - get
  - list
  - watch
  - update
  apiGroups:
  - ''
  resources:
  - nodes
and finally jenkins-deployer-role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":"2018-05-29T13:29:43Z","name":"jenkins-deployer-role","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-deployer-role","uid":"58e1912e-6344-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":["","extensions","apps","rbac.authorization.k8s.io"],"resources":["*"],"verbs":["*"]},{"apiGroups":["policy"],"resources":["poddisruptionbudgets","podsecuritypolicies"],"verbs":["create","delete","deletecollection","patch","update","use","get"]},{"apiGroups":["","extensions","apps","rbac.authorization.k8s.io"],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
  creationTimestamp: '2018-05-29T13:29:43Z'
  name: jenkins-deployer-role
  resourceVersion: '94736572'
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-deployer-role
  uid: 58e1912e-6344-11e8-92a9-06bdf97be268
rules:
- apiGroups:
  - ''
  - extensions
  - apps
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  - podsecuritypolicies
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update
  - use
  - get
- apiGroups:
  - ''
  - extensions
  - apps
  - rbac.authorization.k8s.io
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
  - update
And the following bindings:
Kubernetes bindings
I'm really stuck with this one. I don't want to give system:anonymous access to everything, although I guess that could be an option.
The Jenkins files which help build this are:
JenkinsFile
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException
def label = "worker-${UUID.randomUUID().toString()}"
def dockerRegistry = "id.dkr.ecr.eu-west-1.amazonaws.com"
def localHelmRepository = "acme-helm"
def artifactoryHelmRepository = "https://acme.jfrog.io/acme/$localHelmRepository"
def jenkinsContext = "jenkins-staging"
def MAJOR = 2 // Change HERE
def MINOR = 278 // Change HERE
def PATCH = BUILD_NUMBER
def chartVersion = "X.X.X"
def name = "jenkins-acme"
def projectName = "$name"
def helmPackageName = "$projectName"
def helmReleaseName = "$name-v$MAJOR"
def fullVersion = "$MAJOR.$MINOR.$PATCH"
def jenkinsVersion = "${MAJOR}.${MINOR}" // Gets passed to Dockerfile for getting image from Docker hub
podTemplate(label: label, containers: [
containerTemplate(name: 'docker', image: 'docker:18.05-dind', ttyEnabled: true, privileged: true),
containerTemplate(name: 'perl', image: 'perl', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.18.8', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'helm', image: 'id.dkr.ecr.eu-west-1.amazonaws.com/k8s-helm:3.2.0', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'clair-local-scan', image: '738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-local-scan:latest', ttyEnabled: true, envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
containerTemplate(name: 'clair-scanner', image: '738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-scanner:latest', command: 'cat', ttyEnabled: true, envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
containerTemplate(name: 'clair-db', image: "738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-db:latest", ttyEnabled: true),
containerTemplate(name: 'aws-cli', image: 'mesosphere/aws-cli', command: 'cat', ttyEnabled: true)
], volumes: [
emptyDirVolume(mountPath: '/var/lib/docker')
]) {
try {
node(label) {
def myRepo = checkout scm
jenkinsUtils = load 'JenkinsUtil.groovy'
stage('Set-Up and checks') {
jenkinsContext = 'jenkins-staging'
withCredentials([
file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG'),
usernamePassword(credentialsId: 'jenkins_artifactory', usernameVariable: 'user', passwordVariable: 'password')]) {
jenkinsUtils.initKubectl(jenkinsUtils.appendToParams("kubectl", [
namespaces: ["jenkins"],
context : jenkinsContext,
config : KUBECONFIG])
)
jenkinsUtils.initHelm(jenkinsUtils.appendToParams("helm", [
namespace : "jenkins",
helmRepo : artifactoryHelmRepository,
username : user,
password : password,
])
)
}
}
stage('docker build and push') {
container('perl'){
def JENKINS_HOST = "jenkins_api:1Ft38erDFjjfM6q3a6y7#jenkins.acme.com"
sh "curl -sSL \"https://${JENKINS_HOST}/pluginManager/api/xml?depth=1&xpath=/*/*/shortName|/*/*/version&wrapper=plugins\" | perl -pe 's/.*?<shortName>([\\w-]+).*?<version>([^<]+)()(<\\/\\w+>)+/\\1 \\2\\n/g'|sed 's/ /:/' > plugins.txt"
sh "cat plugins.txt"
}
container('docker'){
sh "ls -la"
sh "docker version"
// This is because of this annoying "feature" where the command ran from docker contains a \r character which must be removed
sh 'eval $(docker run --rm -t $(tty &>/dev/null && echo "-n") -v "$(pwd):/project" mesosphere/aws-cli ecr get-login --no-include-email --region eu-west-1 | tr \'\\r\' \' \')'
sh "sed \"s/JENKINS_VERSION/${jenkinsVersion}/g\" Dockerfile > Dockerfile.modified"
sh "cat Dockerfile.modified"
sh "docker build -t $name:$fullVersion -f Dockerfile.modified ."
sh "docker tag $name:$fullVersion $dockerRegistry/$name:$fullVersion"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:latest"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}.$MINOR"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}.${MINOR}.$PATCH"
sh "docker push $dockerRegistry/$name:$fullVersion"
sh "docker push $dockerRegistry/$name:latest"
sh "docker push $dockerRegistry/$name:${MAJOR}"
sh "docker push $dockerRegistry/$name:${MAJOR}.$MINOR"
sh "docker push $dockerRegistry/$name:${MAJOR}.${MINOR}.$PATCH"
}
}
stage('helm build') {
namespace = 'jenkins'
jenkinsContext = 'jenkins-staging'
withCredentials([
file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG'),
usernamePassword(credentialsId: 'jenkins_artifactory', usernameVariable: 'user', passwordVariable: 'password')]) {
jenkinsUtils.setContext(jenkinsUtils.appendToParams("kubectl", [
context: jenkinsContext,
config : KUBECONFIG])
)
jenkinsUtils.helmDeploy(jenkinsUtils.appendToParams("helm", [
namespace : namespace,
credentials: true,
release : helmReleaseName,
args : [replicaCount : 1,
imageTag : fullVersion,
namespace : namespace,
"MajorVersion" : MAJOR]])
)
jenkinsUtils.helmPush(jenkinsUtils.appendToParams("helm", [
helmRepo : artifactoryHelmRepository,
username : user,
password : password,
BuildInfo : BRANCH_NAME,
Commit : "${myRepo.GIT_COMMIT}"[0..6],
fullVersion: fullVersion
]))
}
}
stage('Deployment') {
namespace = 'jenkins'
jenkinsContext = 'jenkins-staging'
withCredentials([
file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG')]) {
jenkinsUtils.setContext(jenkinsUtils.appendToParams("kubectl", [
context: jenkinsContext,
config : KUBECONFIG])
)
jenkinsUtils.helmDeploy(jenkinsUtils.appendToParams("helm", [
dryRun : false,
namespace : namespace,
package : "${localHelmRepository}/${helmPackageName}",
credentials: true,
release : helmReleaseName,
args : [replicaCount : 1,
imageTag : fullVersion,
namespace : namespace,
"MajorVersion" : MAJOR
]
])
)
}
}
}
} catch (FlowInterruptedException e) {
def reasons = e.getCauses().collect { it.getShortDescription() }.join(",")
println "Interupted. Reason: $reasons"
currentBuild.result = 'SUCCESS'
return
} catch (error) {
println error
throw error
}
}
And the groovy file
templateMap = [
    "helm" : [
        containerName: "helm",
        dryRun : true,
        namespace : "test",
        tag : "xx",
        package : "jenkins-acme",
        credentials : false,
        ca_cert : null,
        helm_cert : null,
        helm_key : null,
        args : [
            majorVersion : 0,
            replicaCount : 1
        ]
    ],
    "kubectl": [
        containerName: "kubectl",
        context : null,
        config : null,
    ]
]

def appendToParams(String templateName, Map newArgs) {
    def copyTemplate = templateMap[templateName].clone()
    newArgs.each { paramName, paramValue ->
        if (paramName.equalsIgnoreCase("args"))
            newArgs[paramName].each {
                name, value -> copyTemplate[paramName][name] = value
            }
        else
            copyTemplate[paramName] = paramValue
    }
    return copyTemplate
}

def setContext(Map args) {
    container(args.containerName) {
        sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
    }
}

def initKubectl(Map args) {
    container(args.containerName) {
        sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
        for (namespace in args.namespaces)
            sh "kubectl -n $namespace get pods"
    }
}

def initHelm(Map args) {
    container(args.containerName) {
        // sh "helm init --client-only"
        def command = "helm version --short"
        // if (args.credentials)
        //     command = "$command --tls --tls-ca-cert ${args.ca_cert} --tls-cert ${args.helm_cert} --tls-key ${args.helm_key}"
        //
        // sh "$command --tiller-connection-timeout 5 --tiller-namespace tiller-${args.namespace}"
        sh "helm repo add acme-helm ${args.helmRepo} --username ${args.username} --password ${args.password}"
        sh "helm repo update"
    }
}

def helmDeploy(Map args) {
    container(args.containerName) {
        sh "helm repo update"
        def command = "helm upgrade"
        // if (args.credentials)
        //     command = "$command --tls --tls-ca-cert ${args.ca_cert} --tls-cert ${args.helm_cert} --tls-key ${args.helm_key}"
        if (args.dryRun) {
            sh "helm lint ${args.package}"
            command = "$command --dry-run --debug"
        }
        // command = "$command --install --tiller-namespace tiller-${args.namespace} --namespace ${args.namespace}"
        command = "$command --install --namespace ${args.namespace}"
        def setVar = "--set "
        args.args.each { key, value -> setVar = "$setVar$key=\"${value.toString().replace(",", "\\,")}\"," }
        setVar = setVar[0..-1]
        sh "$command $setVar --devel ${args.release} ${args.package}"
    }
}

def helmPush(Map args) {
    container(args.containerName) {
        sh "helm package ${args.package} --version ${args.fullVersion} --app-version ${args.fullVersion}+${args.BuildInfo}-${args.Commit}"
        sh "curl -u${args.username}:${args.password} -T ${args.package}-${args.fullVersion}.tgz \"${args.helmRepo}/${args.package}-${args.fullVersion}.tgz\""
    }
}

return this
And from the log it seems that it is when it runs
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
that it throws the error
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
but what permissions or roles should I change?
Many thanks,
Nick
Take a look at this section of the official kubernetes documentation and this answer provided by Prafull Ladha:
The above error means your apiserver doesn't have the credentials
(kubelet cert and key) to authenticate the kubelet's log/exec
commands and hence the Forbidden error message.
You need to provide --kubelet-client-certificate=<path_to_cert>
and --kubelet-client-key=<path_to_key> to your apiserver; this way
the apiserver authenticates to the kubelet with that certificate and key pair.
A very similar issue was also reported on GitHub in this thread, where you can find the following explanation:
That means the api server has not been given a credential to use to
authenticate to kubelets when proxying log/exec requests.
See apiserver configuration as described in
https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication
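In practice this means the kube-apiserver must be started with a client certificate/key pair that it can present to the kubelets. A minimal sketch of the relevant flags (the paths are illustrative, matching the kubeadm defaults; adjust them for your kops-managed control plane):

kube-apiserver \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
  ...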
I have a cluster with 3 control planes. Like any cluster, mine has a default kubernetes service. Like any service, it has a list of endpoints:
apiVersion: v1
items:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: 2017-12-12T17:08:34Z
    name: kubernetes
    namespace: default
    resourceVersion: "6242123"
    selfLink: /api/v1/namespaces/default/endpoints/kubernetes
    uid: 161edaa7-df5f-11e7-a311-d09466092927
  subsets:
  - addresses:
    - ip: 10.9.22.25
    - ip: 10.9.22.26
    - ip: 10.9.22.27
    ports:
    - name: https
      port: 8443
      protocol: TCP
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Everything is OK, but I completely fail to understand where these endpoints come from. It would be logical to assume they come from the Service's label selector, but there is no label selector:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-12-12T17:08:34Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "6"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: 161e4f00-df5f-11e7-a311-d09466092927
spec:
  clusterIP: 10.100.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
So, could anybody explain how k8s Services and Endpoints work in the case of the built-in default kubernetes service?
It's not clear how you created your multi-node cluster, but here is some research for you:
Set up High-Availability Kubernetes Masters describes how HA k8s masters are created. It has a note about the default kubernetes service:
Instead of trying to keep an up-to-date list of Kubernetes apiserver
in the Kubernetes service, the system directs all traffic to the
external IP:
in one master cluster the IP points to the single master,
in multi-master cluster the IP points to the load balancer in-front of
the masters.
Similarly, the external IP will be used by kubelets to communicate
with master
So I would rather expect an LB IP there instead of the 3 master IPs.
Service creation: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L46-L83
const kubernetesServiceName = "kubernetes"

// Controller is the controller manager for the core bootstrap Kubernetes
// controller loops, which manage creating the "kubernetes" service, the
// "default", "kube-system" and "kube-public" namespaces, and provide the IP
// repair check on service IPs
type Controller struct {
    ServiceClient   corev1client.ServicesGetter
    NamespaceClient corev1client.NamespacesGetter
    EventClient     corev1client.EventsGetter
    healthClient    rest.Interface

    ServiceClusterIPRegistry rangeallocation.RangeRegistry
    ServiceClusterIPInterval time.Duration
    ServiceClusterIPRange    net.IPNet

    ServiceNodePortRegistry rangeallocation.RangeRegistry
    ServiceNodePortInterval time.Duration
    ServiceNodePortRange    utilnet.PortRange

    EndpointReconciler reconcilers.EndpointReconciler
    EndpointInterval   time.Duration

    SystemNamespaces         []string
    SystemNamespacesInterval time.Duration

    PublicIP net.IP

    // ServiceIP indicates where the kubernetes service will live. It may not be nil.
    ServiceIP                 net.IP
    ServicePort               int
    ExtraServicePorts         []corev1.ServicePort
    ExtraEndpointPorts        []corev1.EndpointPort
    PublicServicePort         int
    KubernetesServiceNodePort int

    runner *async.Runner
}
The service is periodically updated: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L204-L242
// RunKubernetesService periodically updates the kubernetes service
func (c *Controller) RunKubernetesService(ch chan struct{}) {
    // wait until process is ready
    wait.PollImmediateUntil(100*time.Millisecond, func() (bool, error) {
        var code int
        c.healthClient.Get().AbsPath("/healthz").Do().StatusCode(&code)
        return code == http.StatusOK, nil
    }, ch)

    wait.NonSlidingUntil(func() {
        // Service definition is not reconciled after first
        // run, ports and type will be corrected only during
        // start.
        if err := c.UpdateKubernetesService(false); err != nil {
            runtime.HandleError(fmt.Errorf("unable to sync kubernetes service: %v", err))
        }
    }, c.EndpointInterval, ch)
}

// UpdateKubernetesService attempts to update the default Kube service.
func (c *Controller) UpdateKubernetesService(reconcile bool) error {
    // Update service & endpoint records.
    // TODO: when it becomes possible to change this stuff,
    // stop polling and start watching.
    // TODO: add endpoints of all replicas, not just the elected master.
    if err := createNamespaceIfNeeded(c.NamespaceClient, metav1.NamespaceDefault); err != nil {
        return err
    }
    servicePorts, serviceType := createPortAndServiceSpec(c.ServicePort, c.PublicServicePort, c.KubernetesServiceNodePort, "https", c.ExtraServicePorts)
    if err := c.CreateOrUpdateMasterServiceIfNeeded(kubernetesServiceName, c.ServiceIP, servicePorts, serviceType, reconcile); err != nil {
        return err
    }
    endpointPorts := createEndpointPortSpec(c.PublicServicePort, "https", c.ExtraEndpointPorts)
    if err := c.EndpointReconciler.ReconcileEndpoints(kubernetesServiceName, c.PublicIP, endpointPorts, reconcile); err != nil {
        return err
    }
    return nil
}
The endpoints themselves are updated here: https://github.com/kubernetes/kubernetes/blob/72f69546142a84590550e37d70260639f8fa3e88/pkg/master/reconcilers/lease.go#L163
Endpoints can also be created manually; visit Services without selectors for more info.
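That pattern looks roughly like the sketch below (the names and address are made up for illustration): a Service without a selector plus an Endpoints object of the same name that you maintain yourself, which is essentially what the apiserver's reconciler does for the kubernetes service:

apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  ports:
  - port: 443
    targetPort: 8443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-service   # must match the Service name
subsets:
- addresses:
  - ip: 10.9.22.25            # backend managed outside of Kubernetes
  ports:
  - port: 8443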
I'm trying to update a Deployment from a Go application running in the cluster, but it fails with an authorization error.
GKE Master version 1.9.4-gke.1
package main

import (
    "fmt"

    "github.com/pkg/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func updateReplicas(namespace string, name string, replicas int32) error {
    config, err := rest.InClusterConfig()
    if err != nil {
        return errors.Wrap(err, "failed rest.InClusterConfig")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return errors.Wrap(err, "failed kubernetes.NewForConfig")
    }

    deployment, err := clientset.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
    if err != nil {
        fmt.Printf("failed get Deployment %+v\n", err)
        return errors.Wrap(err, "failed get deployment")
    }

    deployment.Spec.Replicas = &replicas
    fmt.Printf("Deployment %v\n", deployment)

    ug, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment)
    if err != nil {
        fmt.Printf("failed update Deployment %+v", err)
        return errors.Wrap(err, "failed update Deployment")
    }
    fmt.Printf("done update deployment %v\n", ug)
    return nil
}
The resulting message:
failed get Deployment deployments.apps "land-node" is forbidden: User "system:serviceaccount:default:default" cannot get deployments.apps in the namespace "default": Unknown user "system:serviceaccount:default:default"
I have set up the permissions as follows, but is that not enough?
deployment-editor.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: deployment-editor
rules:
- apiGroups: [""]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
editor-deployement.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: editor-deployment
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io
From Unable to list deployments resources using RBAC.
replicasets and deployments exist in the "extensions" and "apps" API groups, not in the legacy "" group
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
  - update
  - create
  - patch
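Applied to your deployment-editor role, that fix might look like this (ClusterRoles are not namespaced, so the namespace field can be dropped; the binding stays the same):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-editor
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]

You can then verify the permission without redeploying the application, for example:

kubectl auth can-i get deployments.apps --as=system:serviceaccount:default:default -n default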