I'm trying to trigger a pre-existing ClusterWorkflowTemplate from a POST request in Argo / Argo Events.
I've been following the example here, but I don't want to define the workflow in the sensor; I want to keep the two separate.
I can't get the sensor to pick up and trigger the workflow. What is the problem?
# kubectl apply -n argo-test -f templates/whalesay/workflow-template.yml
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
name: workflow-template-submittable
spec:
entrypoint: whalesay-template
arguments:
parameters:
- name: message
value: hello world
templates:
- name: whalesay-template
inputs:
parameters:
- name: message
container:
image: docker/whalesay
command: [cowsay]
args: ["{{inputs.parameters.message}}"]
# kubectl apply -n argo-events -f templates/whalesay/event-source.yml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
name: webhook
spec:
service:
ports:
- port: 12000
targetPort: 12000
webhook:
# event-source can run multiple HTTP servers. Simply define a unique port to start a new HTTP server
example:
# port to run HTTP server on
port: "12000"
# endpoint to listen to
endpoint: /example
# HTTP request method to allow. In this case, only POST requests are accepted
method: POST
# kubectl apply -n argo-events -f templates/whalesay/sensor.yml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: workflow
namespace: argo-events
finalizers:
- sensor-controller
spec:
template:
serviceAccountName: operate-workflow-sa
dependencies:
- name: http-post-trigger
eventSourceName: webhook
eventName: example
triggers:
- template:
name: workflow-trigger-1
argoWorkflow:
group: argoproj.io
version: v1alpha1
kind: Workflow
operation: submit
metadata:
generateName: cluster-workflow-template-hello-world-
spec:
entrypoint: whalesay-template
workflowTemplateRef:
name: cluster-workflow-template-submittable
clusterScope: true
# to launch
curl -d '{"message":"this is my first webhook"}' -H "Content-Type: application/json" -X POST http://argo-events-51-210-211-4.nip.io/example
# error
{
"level": "error",
"ts": 1627655074.716865,
"logger": "argo-events.sensor-controller",
"caller": "sensor/controller.go:69",
"msg": "reconcile error",
"namespace": "argo-events",
"sensor": "workflow",
"error": "temp ││ late workflow-trigger-1 is invalid: argoWorkflow trigger does not contain an absolute action",
}
References:
special-workflow-trigger.yaml
cluster workflow templates
spec
I had to:
Ensure that my service account operate-workflow-sa had cluster privileges
Correct the syntax of my sensor.yml spec
Cluster privileges:
# kubectl apply -f ./k8s/workflow-service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: argo-events
name: operate-workflow-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: operate-workflow-role
# namespace: argo-events
rules:
- apiGroups:
- argoproj.io
verbs:
- "*"
resources:
- workflows
- clusterworkflowtemplates
- workflowtemplates
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: operate-workflow-role-binding
namespace: argo-events
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: operate-workflow-role
subjects:
- kind: ServiceAccount
name: operate-workflow-sa
namespace: argo-events
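A quick way to verify that the binding took effect is to impersonate the service account with kubectl auth can-i (assuming kubectl points at the same cluster):
kubectl auth can-i create workflows.argoproj.io -n argo-events --as=system:serviceaccount:argo-events:operate-workflow-sa
# should print "yes" once the ClusterRole and ClusterRoleBinding above are applied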
sensor.yml (note the addition of the serviceAccountName for the workflow too):
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: workflow
namespace: argo-events
finalizers:
- sensor-controller
spec:
template:
serviceAccountName: operate-workflow-sa
dependencies:
- name: http-post-trigger
eventSourceName: webhook
eventName: example
triggers:
# https://github.com/argoproj/argo-events/blob/master/api/sensor.md#triggertemplate
- template:
name: workflow-trigger-1
argoWorkflow:
# https://github.com/argoproj/argo-events/blob/master/api/sensor.md#argoproj.io/v1alpha1.ArgoWorkflowTrigger
group: argoproj.io
version: v1alpha1
resource: Workflow
operation: submit
metadata:
generateName: cluster-workflow-template-hello-world-
source:
resource:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
name: special-trigger
spec:
serviceAccountName: operate-workflow-sa
entrypoint: whalesay-template
workflowTemplateRef:
name: whalesay-template
clusterScope: true
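With the corrected sensor applied, re-sending the webhook should now produce a Workflow in the sensor's namespace. A quick check, reusing the same curl call as above:
curl -d '{"message":"this is my first webhook"}' -H "Content-Type: application/json" -X POST http://argo-events-51-210-211-4.nip.io/example
kubectl get workflows -n argo-events --watch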
Related
We're working on integrating GitLab and Tekton / OpenShift Pipelines via webhooks and Tekton Triggers. We followed this example project and crafted our EventListener, which ships with the needed Interceptor, TriggerBinding and TriggerTemplate, as gitlab-push-listener.yml:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
name: gitlab-listener
spec:
serviceAccountName: tekton-triggers-example-sa
triggers:
- name: gitlab-push-events-trigger
interceptors:
- name: "verify-gitlab-payload"
ref:
name: "gitlab"
kind: ClusterInterceptor
params:
- name: secretRef
value:
secretName: "gitlab-secret"
secretKey: "secretToken"
- name: eventTypes
value:
- "Push Hook"
bindings:
- name: gitrevision
value: $(body.checkout_sha)
- name: gitrepositoryurl
value: $(body.repository.git_http_url)
template:
spec:
params:
- name: gitrevision
- name: gitrepositoryurl
- name: message
description: The message to print
default: This is the default message
- name: contenttype
description: The Content-Type of the event
resourcetemplates:
- apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: buildpacks-test-pipeline-run-
#name: buildpacks-test-pipeline-run
spec:
serviceAccountName: buildpacks-service-account-gitlab # Only needed if you set up authorization
pipelineRef:
name: buildpacks-test-pipeline
workspaces:
- name: source-workspace
subPath: source
persistentVolumeClaim:
claimName: buildpacks-source-pvc
- name: cache-workspace
subPath: cache
persistentVolumeClaim:
claimName: buildpacks-source-pvc
params:
- name: IMAGE
value: registry.gitlab.com/jonashackt/microservice-api-spring-boot # This defines the name of output image
- name: SOURCE_URL
value: https://gitlab.com/jonashackt/microservice-api-spring-boot
- name: SOURCE_REVISION
value: main
As stated in the example (and in the Tekton docs), we also created and applied via kubectl apply a ServiceAccount named tekton-triggers-example-sa, a RoleBinding and a ClusterRoleBinding:
apiVersion: v1
kind: ServiceAccount
metadata:
name: tekton-triggers-example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: triggers-example-eventlistener-binding
subjects:
- kind: ServiceAccount
name: tekton-triggers-example-sa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-triggers-eventlistener-roles
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: triggers-example-eventlistener-clusterbinding
subjects:
- kind: ServiceAccount
name: tekton-triggers-example-sa
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-triggers-eventlistener-clusterroles
After installing our EventListener via kubectl apply -f gitlab-push-listener.yml, neither a push from GitLab nor even a curl triggers a PipelineRun as intended. Looking into the logs of the el-gitlab-listener Deployment and Pod, we see the following error:
kubectl logs el-gitlab-listener-69f4c5c8f8-t4zdj
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:116","msg":"Successfully created the logger."}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:117","msg":"Logging level set to: info"}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:79","msg":"Fetch GitHub commit ID from kodata failed","error":"\"KO_DATA_PATH\" does not exist or is empty"}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","logger":"eventlistener","caller":"logging/logging.go:46","msg":"Starting the Configuration eventlistener","knative.dev/controller":"eventlistener"}
{"level":"info","ts":"2021-11-30T09:38:32.445Z","logger":"eventlistener","caller":"profiling/server.go:64","msg":"Profiling enabled: false","knative.dev/controller":"eventlistener"}
{"level":"fatal","ts":"2021-11-30T09:38:32.451Z","logger":"eventlistener","caller":"eventlistenersink/main.go:104","msg":"Error reading ConfigMap config-observability-triggers","knative.dev/controller":"eventlistener","error":"configmaps \"config-observability-triggers\" is forbidden: User \"system:serviceaccount:default:tekton-triggers-example-sa\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"tekton-triggers-eventlistener-clusterroles\" not found, clusterrole.rbac.authorization.k8s.io \"tekton-triggers-eventlistener-roles\" not found]","stacktrace":"main.main\n\t/opt/app-root/src/go/src/github.com/tektoncd/triggers/cmd/eventlistenersink/main.go:104\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:203"}
The OpenShift Pipelines documentation does not state this directly, but if you skim the docs, especially the Triggers section, you may notice that no ServiceAccount is ever created there, yet one is used by every Trigger component. It is called pipeline. Simply run kubectl get serviceaccount to see it:
$ kubectl get serviceaccount
NAME       SECRETS   AGE
default    2         49d
deployer   2         49d
pipeline   2         48d
This pipeline ServiceAccount is ready to use inside your Tekton Triggers & EventListeners. So your gitlab-push-listener.yml can directly reference it:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
name: gitlab-listener
spec:
serviceAccountName: pipeline
triggers:
- name: gitlab-push-events-trigger
interceptors:
...
You can simply delete your manually created ServiceAccount tekton-triggers-example-sa. It's not needed in OpenShift Pipelines! Now your Tekton Triggers EventListener should work and trigger your Tekton Pipelines as defined.
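To verify the EventListener end to end without pushing to GitLab, a Push Hook can also be simulated with curl. This is only a sketch: it assumes the listener Service follows the usual el-<name> convention (el-gitlab-listener) on port 8080 and is port-forwarded locally, and the token must match the secretToken key of the gitlab-secret Secret:
kubectl port-forward svc/el-gitlab-listener 8080:8080
curl -v -X POST http://localhost:8080 \
  -H 'Content-Type: application/json' \
  -H 'X-Gitlab-Event: Push Hook' \
  -H 'X-Gitlab-Token: <your-secret-token>' \
  -d '{"checkout_sha":"abc123","repository":{"git_http_url":"https://gitlab.com/jonashackt/microservice-api-spring-boot.git"}}'
A 2xx response and a new buildpacks-test-pipeline-run-* PipelineRun indicate that the trigger chain works.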
My design is: the EventController lives in the "default" namespace and starts Jobs in the "gamespace" namespace.
I'm getting this error trying to create a Job with the Node.js kubernetes client:
jobs.batch is forbidden: User "system:serviceaccount:default:event-manager" cannot create resource "jobs" in API group "batch" in the namespace "gamespace"
from this line of code:
const job = await batchV1Api.createNamespacedJob('gamespace', kubeSpec.job)
kubeSpec.job is:
{
apiVersion: 'batch/v1',
kind: 'Job',
metadata: {
name: 'event-60da4bee95e237001d65e355',
namespace: 'gamespace',
labels: {
tier: 'event-servers',
}
},
spec: {
backoffLimit: 1,
activeDeadlineSeconds: 14400,
ttlSecondsAfterFinished: 86400,
template: { spec: [Object] }
}
}
And here's my RBAC configuration:
apiVersion: v1
kind: Namespace
metadata:
name: gamespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: event-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: event-manager-role
rules:
- apiGroups: ['', 'batch'] # '' means "core"
resources: ['jobs', 'services']
verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: event-manager-clusterrole-binding
# Specifies the namespace the ClusterRole can be used in.
namespace: gamespace
subjects:
- kind: ServiceAccount
name: event-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: event-manager-role
The container making the function call is configured like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: eventcontroller-deployment
labels:
app: eventcontroller
spec:
selector:
matchLabels:
app: eventcontroller
replicas: 1
template:
metadata:
labels:
app: eventcontroller
spec:
# see accounts-namespaces.yaml
serviceAccountName: 'event-manager'
imagePullSecrets:
- name: registry-credentials
containers:
- name: eventcontroller
image: eventcontroller
imagePullPolicy: Always
envFrom:
- configMapRef:
name: eventcontroller-config
ports:
- containerPort: 3003
I'm not sure if I'm using the client incorrectly (why is namespace needed in the spec and the function call?), or if I've configured RBAC incorrectly.
Any suggestions?
Thank you!
I can't comment, so I'll share some ideas about what could be the issue and how I would debug the problem further.
Your Deployment has no namespace defined. Could it be that the Pod is running in a different namespace (!= gamespace), while your ServiceAccount's permissions only apply in gamespace?
A RoleBinding grants permissions within a specific namespace.
If that is not the cause, you might want to try a service account that already has all permissions, just to rule out other errors for a start.
Here is an example manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: all-permissions
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: service-account-all-permissions
namespace: gameplay
Set the ServiceAccount to 'service-account-all-permissions' in your Deployment and see if you still get a permission error from the Kubernetes API.
Source.
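Another way to narrow it down is to ask the API server directly what the original ServiceAccount may do; the error message shows it lives in the default namespace:
kubectl auth can-i create jobs.batch -n gamespace --as=system:serviceaccount:default:event-manager
# "no" here confirms that the RoleBinding/ClusterRole pair is not granting what you expect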
I want to periodically restart a deployment using a Kubernetes CronJob.
Please check what the problem with the YAML file is.
When I execute the command from my local command line, the deployment restarts normally, but the restart does not seem to work from the CronJob.
e.g. $ kubectl rollout restart deployment my-ingress -n my-app
my cronjob yaml file
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: deployment-restart
namespace: my-app
spec:
schedule: '0 8 */60 * *'
jobTemplate:
spec:
backoffLimit: 2
activeDeadlineSeconds: 600
template:
spec:
serviceAccountName: deployment-restart
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl:latest
command:
- 'kubectl'
- 'rollout'
- 'restart'
- 'deployment/my-ingress -n my-app'
As David suggested, running kubectl from a cron job works by executing the command through a shell:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-jp-runner
containers:
- name: hello
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- kubectl rollout restart deployment my-ingress -n my-app
restartPolicy: OnFailure
I would also suggest checking the role and service account permissions.
Example for reference:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: kubectl-cron
rules:
- apiGroups:
- extensions
- apps
resources:
- deployments
verbs:
# kubectl rollout restart fetches the Deployment before patching it, so it needs get as well as patch
- 'get'
- 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubectl-cron
namespace: default
subjects:
- kind: ServiceAccount
name: sa-kubectl-cron
namespace: default
roleRef:
kind: Role
name: kubectl-cron
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-kubectl-cron
namespace: default
---
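To test the setup without waiting for the schedule, the CronJob can be triggered manually (assuming the deployment-restart CronJob and the my-app namespace from the question):
kubectl create job deployment-restart-manual --from=cronjob/deployment-restart -n my-app
kubectl logs job/deployment-restart-manual -n my-app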
I am trying to set up a Traefik IngressRoute using the configuration provided at https://docs.traefik.io/routing/providers/kubernetes-crd/
I can see Traefik is up and running, and I can also see the dashboard, but I don't see the whoami service on the dashboard and cannot access it via its URL.
crd.yaml
# All resources definition must be declared
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutes.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRoute
plural: ingressroutes
singular: ingressroute
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: middlewares.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: Middleware
plural: middlewares
singular: middleware
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutetcps.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRouteTCP
plural: ingressroutetcps
singular: ingressroutetcp
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressrouteudps.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRouteUDP
plural: ingressrouteudps
singular: ingressrouteudp
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tlsoptions.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: TLSOption
plural: tlsoptions
singular: tlsoption
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tlsstores.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: TLSStore
plural: tlsstores
singular: tlsstore
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: traefikservices.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: TraefikService
plural: traefikservices
singular: traefikservice
scope: Namespaced
ingress.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: myingressroute
namespace: default
annotations:
kubernetes.io/ingress.class: traefik
spec:
entryPoints:
- web
routes:
- match: Host(`test`) && PathPrefix(`/bar`)
kind: Rule
services:
- name: whoami
port: 80
rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- traefik.containo.us
resources:
- middlewares
- ingressroutes
- traefikservices
- ingressroutetcps
- ingressrouteudps
- tlsoptions
- tlsstores
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: default
traefik.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
containers:
- name: traefik
image: traefik:v2.2
args:
- --log.level=DEBUG
- --api
- --api.insecure
- --entrypoints.web.address=:80
- --providers.kubernetescrd
ports:
- name: web
containerPort: 80
- name: admin
containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: traefik
spec:
type: LoadBalancer
selector:
app: traefik
ports:
- protocol: TCP
port: 80
name: web
targetPort: 80
- protocol: TCP
port: 8080
name: admin
targetPort: 8080
whoami.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: whoami
namespace: default
labels:
app: containous
name: whoami
spec:
replicas: 1
selector:
matchLabels:
app: containous
task: whoami
template:
metadata:
labels:
app: containous
task: whoami
spec:
containers:
- name: containouswhoami
image: containous/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: whoami
namespace: default
spec:
ports:
- name: http
port: 80
selector:
app: containous
task: whoami
UPDATE
After applying the resources from the link, I get the following errors:
E0708 21:34:10.222538 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.3/tools/cache/reflector.go:105: Failed to list *v1alpha1.IngressRouteUDP: ingressrouteudps.traefik.containo.us is forbidden: User "system:serviceaccount:default:traefik-ingress-controller" cannot list resource "ingressrouteudps" in API group "traefik.containo.us" at the cluster scope
E0708 21:34:10.223416 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.3/tools/cache/reflector.go:105: Failed to list *v1alpha1.TLSStore: tlsstores.traefik.containo.us is forbidden: User "system:serviceaccount:default:traefik-ingress-controller" cannot list resource "tlsstores" in API group "traefik.containo.us" at the cluster scope
E0708 21:34:11.225368 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.3/tools/cache/reflector.go:105: Failed to list *v1alpha1.TLSStore: tlsstores.traefik.containo.us is forbidden: User "system:serviceaccount:default:traefik-ingress-controller" cannot list resource "tlsstores" in API group "traefik.containo.us" at the cluster scope
It's working for me with Traefik 2.1 using the following set of resources:
link to example
Can you try these once and let me know if this helps?
I deployed the code below and whoami is now accessible without any issues. The things I changed: updated the CRD and RBAC to the latest available for Traefik, and changed the apiVersion of the Deployment to "apps/v1". Simply copy the code below all together and deploy it on Kubernetes. Once it is up, access http://localhost/whoami-app-api.
Deployment File:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutes.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRoute
plural: ingressroutes
singular: ingressroute
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: middlewares.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: Middleware
plural: middlewares
singular: middleware
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutetcps.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRouteTCP
plural: ingressroutetcps
singular: ingressroutetcp
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressrouteudps.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRouteUDP
plural: ingressrouteudps
singular: ingressrouteudp
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tlsoptions.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: TLSOption
plural: tlsoptions
singular: tlsoption
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tlsstores.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: TLSStore
plural: tlsstores
singular: tlsstore
scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: traefikservices.traefik.containo.us
spec:
group: traefik.containo.us
version: v1alpha1
names:
kind: TraefikService
plural: traefikservices
singular: traefikservice
scope: Namespaced
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- traefik.containo.us
resources:
- middlewares
- ingressroutes
- traefikservices
- ingressroutetcps
- ingressrouteudps
- tlsoptions
- tlsstores
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
containers:
- name: traefik
image: traefik:v2.1
args:
- --accesslog=true
- --api
- --api.insecure
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.kubernetescrd
- --configfile=/config/traefik.toml
ports:
- name: web
containerPort: 80
- name: admin
containerPort: 8080
- name: websecure
containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
name: traefik
spec:
type: LoadBalancer
selector:
app: traefik
ports:
- protocol: TCP
port: 80
name: web
targetPort: 80
- protocol: TCP
port: 443
name: websecure
targetPort: 80
- protocol: TCP
port: 8080
name: admin
targetPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: whoami
labels:
app: whoami
spec:
replicas: 1
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: containous/whoami
ports:
- name: web
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: whoami
spec:
ports:
- protocol: TCP
name: web
port: 80
selector:
app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: whoami-whoami
namespace: default
spec:
entryPoints:
- web
- websecure
routes:
- match: PathPrefix(`/whoami-app-api`)
kind: Rule
services:
- name: whoami
port: 80
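Once everything is applied, the route can be checked from outside the cluster. This assumes the traefik LoadBalancer Service is reachable on localhost (e.g. Docker Desktop); otherwise use kubectl port-forward svc/traefik 9000:80 and hit that port instead:
curl http://localhost/whoami-app-api
# the whoami container echoes the request details back if the IngressRoute matches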
I was trying this example and ran into the same silly issue as mentioned.
After hours of trying, I figured out that the problem is in the "match" rule:
- match: PathPrefix(`/whoami-app-api`)
- match: Host(`whoami.localhost`) && PathPrefix(`/notls`)
A match without a Host works via http://localhost/<prefix>.
A match with a Host that does not look like a real domain (no ".com", ".net", etc.) also works.
With a Host that looks like a real domain (abc.com, scx.net), the browser resolves it via DNS instead of the local host "0.0.0.0", so the request never reaches our Traefik.
A Host consisting of a single word also doesn't work; I don't know why. It seems the host needs at least two labels separated by a dot.
I'm having trouble connecting with the Python Kubernetes client. I'm getting a 404 Not Found error when I run curl -k https://ip-address:30000/pods, which is an endpoint I wrote to connect to Kubernetes and list pods. I'm running this locally through minikube; any suggestions on what to do?
The error:
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Content-Length': '129', 'Date': 'Wed, 18 Jul 2018 01:08:30 GMT'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
How I'm connecting to api:
from kubernetes import config,client
from kubernetes.client import Configuration, ApiClient
config = Configuration()
config.api_key = {'authorization': 'Bearer my-key-here'}
config.ssl_ca_cert = '/etc/secret-volume/ca-cert'
config.cert_file = 'ca-cert.crt'
config.key_file = '/etc/secret-volume/secret-key'
config.assert_hostname = False
api_client = ApiClient(configuration=config)
v1 = client.CoreV1Api(api_client)
#I've also tried using below, but it gives sslcertifificate max retry error
#so I opted for manual config above
try:
config.load_incluster_config()
except:
config.load_kube_config()
v1 = client.CoreV1Api()
The way I'm getting the API key is by decoding the token from the service account I created. However, according to the documentation here, it says:
on a server with token authentication configured, and anonymous access
enabled, a request providing an invalid bearer token would receive a
401 Unauthorized error. A request providing no bearer token would be
treated as an anonymous request.
So it seems like my API token is not valid somehow? However, I followed the steps to decode it and everything. I was pretty much following this.
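For comparison, inside a pod the token and CA certificate of the pod's ServiceAccount are mounted at a well-known path, which is what load_incluster_config() reads under the hood. A minimal sketch of building the client from those files by hand (note the Configuration object gets a distinct name so it does not shadow the config module):
from kubernetes import client
from kubernetes.client import ApiClient, Configuration

SA_DIR = '/var/run/secrets/kubernetes.io/serviceaccount'

conf = Configuration()
conf.host = 'https://kubernetes.default.svc'  # in-cluster API server address
conf.ssl_ca_cert = SA_DIR + '/ca.crt'         # CA bundle mounted by Kubernetes
with open(SA_DIR + '/token') as f:            # ServiceAccount bearer token
    conf.api_key = {'authorization': 'Bearer ' + f.read().strip()}

v1 = client.CoreV1Api(ApiClient(configuration=conf))
print(v1.list_namespaced_pod('test-ns'))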
My yaml files:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: first-app
namespace: test-ns
spec:
selector:
matchLabels:
app: first-app
replicas: 3
template:
metadata:
labels:
app: first-app
spec:
serviceAccountName: test-sa
containers:
- name: first-app
image: first-app
imagePullPolicy: IfNotPresent
volumeMounts:
- name: secret-volume
mountPath: /etc/secret-volume
ports:
- containerPort: 80
env:
- name: "SECRET_KEY"
value: /etc/secret-volume/secret-key
- name: "SECRET_CRT"
value: /etc/secret-volume/secret-crt
- name: "CA_CRT"
value: /etc/secret-volume/ca-cert
volumes:
- name: secret-volume
secret:
secretName: test-secret
---
apiVersion: v1
kind: Service
metadata:
name: first-app
namespace: test-ns
spec:
type: NodePort
selector:
app: first-app
ports:
- protocol: TCP
port: 443
nodePort: 30000
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
namespace: test-ns
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:test-app
namespace: test-ns
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
- services
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:test-app
namespace: test-ns
subjects:
- kind: ServiceAccount
name: test-sa
namespace: test-ns
roleRef:
kind: ClusterRole
name: system:test-app
apiGroup: rbac.authorization.k8s.io