Micronaut application on Kubernetes not able to pick up a property from YAML

I'm running a Micronaut application on Kubernetes, with its config loaded from a ConfigMap.
Firstly, my configmap.yml looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: data-loader-service-config
data:
  application-devcloud.yml: |-
    data.uploaded.event.queue: local-datauploaded-event-queue
    data.uploaded.event.consumer.concurrency: 1-3
    base.dir: basedir
    aws:
      region: XXX
    datasources:
      default:
        dialect: POSTGRES
        driverClassName: org.postgresql.Driver
    micronaut:
      config:
        sources:
          - file:/data-loader-service-config
      debug: true
      jms:
        sqs:
          enabled: true
My deployment yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/vcs-url: <<unknown>>
    app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
  labels:
    app.kubernetes.io/name: data-loader-service
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: data-loader-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: data-loader-service
      app.kubernetes.io/version: 0.1-SNAPSHOT
  template:
    metadata:
      annotations:
        app.dekorate.io/vcs-url: <<unknown>>
        app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
      labels:
        app.kubernetes.io/name: data-loader-service
        app.kubernetes.io/version: 0.1-SNAPSHOT
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MICRONAUT_ENVIRONMENTS
              value: "devcloud"
            - name: aws.region
              value: xxx
          image: mynamespace/data-loader-service:0.1-SNAPSHOT
          imagePullPolicy: Always
          name: data-loader-service
          volumeMounts:
            - name: data-loader-service-config
              mountPath: /data-loader-service-config
      volumes:
        - configMap:
            defaultMode: 384
            name: data-loader-service-config
            optional: false
          name: data-loader-service-config
When my micronaut app in the pod starts up, it is not able to resolve base.dir.
Not sure what's missing here.

Here is what I ended up doing. It works. I don't think it's the cleanest way though. Looking for a better way.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/vcs-url: <<unknown>>
    app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
  labels:
    app.kubernetes.io/name: data-loader-service
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: data-loader-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: data-loader-service
      app.kubernetes.io/version: 0.1-SNAPSHOT
  template:
    metadata:
      annotations:
        app.dekorate.io/vcs-url: <<unknown>>
        app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
      labels:
        app.kubernetes.io/name: data-loader-service
        app.kubernetes.io/version: 0.1-SNAPSHOT
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MICRONAUT_ENVIRONMENTS
              value: "devcloud"
            - name: MICRONAUT_CONFIG_FILES
              value: "/config/application-common.yml,/config/application-devcloud.yml"
            - name: aws.region
              value: xxx
          image: xxx/data-loader-service:0.1-SNAPSHOT
          imagePullPolicy: Always
          name: data-loader-service
          volumeMounts:
            - name: data-loader-service-config
              mountPath: /config
      volumes:
        - configMap:
            defaultMode: 384
            name: data-loader-service-config
            optional: false
          name: data-loader-service-config
I do not want to hard-code the values for MICRONAUT_ENVIRONMENTS and MICRONAUT_CONFIG_FILES in my deployment.yml. Is there a way to parameterise/externalise them so that I have a single deployment.yml for all environments and can decide dynamically at deploy time which environment I am deploying to? I do not want to create multiple yml files (one for each environment/profile).
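One option (a sketch, not the only way, assuming envsubst from GNU gettext is available in the deploy pipeline; DEPLOY_ENV and deployment.template.yml are hypothetical names) is to keep a single template with placeholders and substitute the environment name at deploy time:
# deployment.template.yml (env fragment) - ${DEPLOY_ENV} is a placeholder filled in before kubectl sees the file
            - name: MICRONAUT_ENVIRONMENTS
              value: "${DEPLOY_ENV}"
            - name: MICRONAUT_CONFIG_FILES
              value: "/config/application-common.yml,/config/application-${DEPLOY_ENV}.yml"
# render and apply for a given environment
DEPLOY_ENV=devcloud envsubst < deployment.template.yml | kubectl apply -f -
Tools like Helm or Kustomize overlays solve the same problem with a single manifest plus per-environment values, if adding a templating tool is acceptable.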

Related

Traefik returning 404 for local deployment

I'm following this tutorial and making changes as necessary to set up a self-hosted instance of the Ghost blog. I'm new to Kubernetes, and am self-hosting this locally on some Raspberry Pis. I applied all deployments, services, MySQL, secrets, PVCs etc., and added ghost to /etc/hosts. When I visit ghost/ in the browser, I get a 404 error, even though I'm targeting the service. Here are my YAMLs:
MySQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Ghost PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pv-claim
  labels:
    type: longhorn
    app: ghost
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
MySQL Password Secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: <base_64_encoded_pwd>
Ghost SQL deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  ports:
    - port: 3306
  selector:
    app: ghost
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: mysql
    spec:
      containers:
        - image: arm64v8/mysql:latest
          imagePullPolicy: Always
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: MYSQL_USER
              value: ghost
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-vol
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-vol
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ghost Blog Deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-svc
  labels:
    app: ghost
    tier: frontend
spec:
  selector:
    app: ghost
    tier: frontend
  ports:
    - protocol: TCP
      port: 2368
      targetPort: 2368
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      # securityContext:
      #   runAsUser: 1000
      #   runAsGroup: 50
      containers:
        - name: blog
          image: ghost
          imagePullPolicy: Always
          ports:
            - containerPort: 2368
          env:
            # - name: url
            #   value: https://www.myblog.com
            - name: database__client
              value: mysql
            - name: database__connection__host
              value: ghost-mysql
            - name: database__connection__user
              value: root
            - name: database__connection__password
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: database__connection__database
              value: ghost
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-vol
      volumes:
        - name: ghost-vol
          persistentVolumeClaim:
            claimName: ghost-pv-claim
Traefik Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          port: 80
I also added ghost to /etc/hosts (Mac).
Not sure what I'm doing wrong, but I imagine it's certs/ingress related. Any ideas?
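One mismatch that stands out in the manifests above (an observation, not a confirmed fix): ghost-svc exposes port 2368, but the IngressRoute points at port 80. A first thing to try would be aligning the route with the service port, e.g.:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          port: 2368   # must match a port actually exposed by ghost-svc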

How to set Kubernetes ConfigMap and Secret values as MongoDB environment variables

I am trying to set two environment variables of mongo, namely MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, using a Kubernetes ConfigMap and Secret as follows.
When I don't use the ConfigMap and Secret, i.e. I hardcode the username and password, it works; but when I try to replace them with the ConfigMap and Secret, it says
'Authentication failed.'
My username and password are the same: admin.
Here's the yaml definition for these objects; can someone tell me what is wrong?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  username: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-password
data:
  password: YWRtaW4K
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
        - name: mongodbtest
          image: mongo:3
          # env:
          #   - name: MONGO_INITDB_ROOT_USERNAME
          #     value: admin
          #   - name: MONGO_INITDB_ROOT_PASSWORD
          #     value: admin
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mongodb-username
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-password
                  key: password
Finally, after hours, I was able to find the solution. It was not something on the Kubernetes side; it was how I did the base64 encoding.
The correct way to encode is with the following command:
echo -n 'admin' | base64
This was the issue for me: without -n, echo appends a trailing newline, so the encoded value decodes to 'admin\n' instead of 'admin'.
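For comparison, the two encodings side by side:
echo 'admin' | base64      # -> YWRtaW4K  (encodes "admin\n", trailing newline included)
echo -n 'admin' | base64   # -> YWRtaW4=  (encodes exactly "admin")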
Your deployment yaml is fine, just change spec.containers[0].env to spec.containers[0].envFrom:
spec:
  containers:
    - name: mongodbtest
      image: mongo:3
      envFrom:
        - configMapRef:
            name: mongodb-username
        - secretRef:
            name: mongodb-password
That will put all the keys of your Secret and ConfigMap into the deployment as environment variables. Note that with envFrom the key names themselves become the variable names, so for MongoDB to pick them up the keys need to be named MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD.
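A quick way to check which variables actually ended up in the container (assuming kubectl access to the cluster; the deployment name is taken from the manifests above):
kubectl exec deploy/mongodbtest -- printenv | grep MONGO_INITDB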
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  name: mongo-cred
  namespace: default
Inject it into the deployment like this:
envFrom:
  - configMapRef:
      name: mongo-cred
The deployment will then look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
        - name: mongodbtest
          image: mongo:3
          envFrom:
            - configMapRef:
                name: mongo-cred
If you want to store sensitive data, a Secret is the better practice; a Secret is intended for sensitive values and stores them base64-encoded (note that base64 is encoding, not encryption):
envFrom:
  - secretRef:
      name: mongo-cred
you can create the secret with
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4=   # base64 encoded with: echo -n 'admin' | base64
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4=
kind: Secret
type: Opaque
metadata:
  name: mongo-cred
  namespace: default
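Alternatively, the same Secret can be created imperatively, which avoids doing the base64 encoding by hand:
kubectl create secret generic mongo-cred \
  --from-literal=MONGO_INITDB_ROOT_USERNAME=admin \
  --from-literal=MONGO_INITDB_ROOT_PASSWORD=admin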

Why doesn't Kubernetes' StatefulSet spread data evenly across 3 pods?

I deployed Logstash as a StatefulSet with 3 replicas in Kubernetes, and use Filebeat to send data to it.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-nginx
spec:
  serviceName: "logstash"
  selector:
    matchLabels:
      app: logstash
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.10.0
          resources:
            limits:
              memory: 2Gi
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
          command: ["/bin/sh", "-c"]
          args:
            - bin/logstash -f /usr/share/logstash/pipeline/logstash.conf;
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
Logstash's service
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
Filebeat's daemonset configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    ...
    output.logstash:
      hosts: ["logstash.default.svc.cluster.local:5044"]
      loadbalance: true
      bulk_max_size: 1024
When running real data, most of it went to the second Logstash pod. Sometimes data also went to the first and third pods, but only very rarely.
I then tried another way to set up load balancing from Filebeat:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    ...
    output.logstash:
      hosts: ["logstash-nginx-0.logstash.default.svc.cluster.local:5044", "logstash-nginx-1.logstash.default.svc.cluster.local:5044", "logstash-nginx-2.logstash.default.svc.cluster.local:5044"]
      loadbalance: true
      bulk_max_size: 1024
Logstash's configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
    xpack.monitoring.enabled: false
From the Filebeat pod, these URIs are reachable with curl:
logstash-nginx-0.logstash.default.svc.cluster.local:5044
logstash-nginx-1.logstash.default.svc.cluster.local:5044
logstash-nginx-2.logstash.default.svc.cluster.local:5044
But with this configuration Filebeat can't send data to the 3 Logstash pods at all; there is no traffic in Logstash's output logs. What is wrong?
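One detail worth checking (an observation based on the manifests above, not a guaranteed fix): per-pod DNS names such as logstash-nginx-0.logstash.default.svc.cluster.local are only published when the StatefulSet's governing Service is headless. The logstash Service shown earlier gets a normal ClusterIP; a headless variant would look like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  clusterIP: None   # headless: publishes one DNS record per StatefulSet pod
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
With a headless Service each of the three hostnames resolves directly to its pod IP, so Filebeat's loadbalance setting can spread connections across all three pods instead of going through a single ClusterIP.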

Adding websockets/port 6001 to Kubernetes Ingress deployed via Helm - Connection Refused

We currently have a multi-tenant backend Laravel application set up, with Pusher websockets enabled on the same app. This application is built into a Docker image, hosted on the DigitalOcean container registry, and deployed via Helm to our Kubernetes cluster.
We also have a front-end application built in Angular that tries to connect to the backend app via port 80 on the /ws/ path to establish a websocket connection.
When we try to access tenant1.example.com/ws/ we get a 502 gateway error, which suggests the ports aren't mapping correctly, but tenant1.example.com on port 80 works just fine.
Our Helm chart output is as follows:
NAME: tenant1
LAST DEPLOYED: Fri Dec 11 14:34:00 2020
NAMESPACE: tenants
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
subdomain: tenant1
COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: true
  maxReplicas: 1
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  enabled: true
  hosts:
    - host: example.com
      pathType: Prefix
  tls: []
migrate:
  enabled: true
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources:
  requests:
    cpu: 10m
rootDB: public
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
setup:
  enabled: true
subdomain: tenant1
tolerations: []
---
# Source: backend-api/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant1-backend-api
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      name: 'http'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-ws-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
    - port: 6001
      targetPort: 6001
      name: 'websocket'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant1-backend-api-deployment
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app: tenant1-backend-api-deployment
  template:
    metadata:
      labels:
        app: tenant1-backend-api-deployment
        namespace: tenants
    spec:
      containers:
        - name: backend-api
          image: "registry.digitalocean.com/rock/backend-api:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 6001
          resources:
            requests:
              cpu: 10m
          env:
            - name: CONTAINER_ROLE
              value: "backend-api"
            - name: DB_CONNECTION
              value: "pgsql"
            - name: DB_DATABASE
              value: tenant1
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_HOST
            - name: DB_PORT
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_PORT
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_USERNAME
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_PASSWORD
---
# Source: backend-api/templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: tenant1-backend-api-hpa
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tenant1-backend-api-deployment
  minReplicas: 1
  maxReplicas: 1
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
---
# Source: backend-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant1-backend-api-ingress
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: tenant1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant1-backend-api-service
                port:
                  number: 80
          - path: /ws/
            pathType: Prefix
            backend:
              service:
                name: tenant1-backend-api-ws-service
                port:
                  number: 6001
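Before changing the ingress, it can help to rule out the container itself by bypassing the ingress and services entirely (a diagnostic sketch, assuming kubectl access to the cluster; a plain GET at least shows whether anything is listening on 6001):
# forward local port 6001 straight to the pod behind the deployment
kubectl -n tenants port-forward deploy/tenant1-backend-api-deployment 6001:6001
# in another terminal, probe the websocket server directly
curl -i http://localhost:6001/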

OpenShift ImageChange trigger gets deleted in DeploymentConfig when applying template

I am currently working on a template for OpenShift, and my ImageChange trigger gets deleted when I initially instantiate the application. My template contains the following objects:
ImageStream
BuildConfig
Service
Route
Deploymentconfig
I guess the Route is irrelevant, but this is what it all looks like so far (for a better overview I will post the objects separately, but they are all items in my template).
ImageStream
- kind: ImageStream
  apiVersion: v1
  metadata:
    labels:
      app: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
BuildConfig
- kind: BuildConfig
  apiVersion: v1
  metadata:
    labels:
      app: my-app
      deploymentconfig: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
    selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/buildconfigs/my-app
  spec:
    runPolicy: Serial
    source:
      git:
        ref: pre-prod
        uri: 'ssh://git@git.myreopo.net:port/project/my-app.git'
      sourceSecret:
        name: git-secret
      type: Git
    strategy:
      type: Source
      sourceStrategy:
        env:
          - name: HTTP_PROXY
            value: 'http://user:password@proxy.com:8080'
          - name: HTTPS_PROXY
            value: 'http://user:password@proxy.com:8080'
          - name: NO_PROXY
            value: .something.net
        from:
          kind: ImageStreamTag
          name: 'nodejs:8'
          namespace: openshift
    output:
      to:
        kind: ImageStreamTag
        name: 'my-app:latest'
        namespace: ${IMAGE_NAMESPACE}
Service
- kind: Service
  apiVersion: v1
  metadata:
    name: my-app
    labels:
      app: my-app
  spec:
    selector:
      deploymentconfig: my-app
    ports:
      - name: 8080-tcp
        port: 8080
        protocol: TCP
        targetPort: 8080
    sessionAffinity: None
    type: ClusterIP
DeploymentConfig
Now, what is already weird in the DeploymentConfig is that under spec.template.spec.containers[0].image I have to specify the full path to the repository to make it work; otherwise I get an error pulling the image (even though the documentation says my-app:latest would be correct).
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    labels:
      app: my-app
      deploymentconfig: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
    selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/deploymentconfigs/my-app
  spec:
    selector:
      app: my-app
      deploymentconfig: my-app
    strategy:
      type: Rolling
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailability: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
    replicas: 1
    template:
      metadata:
        labels:
          app: my-app
          deploymentconfig: my-app
      spec:
        containers:
          - name: my-app-container
            image: "${REPOSITORY_IP}:${REPOSITORY_PORT}/${IMAGE_NAMESPACE}/my-app:latest"
            imagePullPolicy: Always
            ports:
              - containerPort: 8080
                protocol: TCP
              - containerPort: 8081
                protocol: TCP
            env:
              - name: MONGODB_USERNAME
                valueFrom:
                  secretKeyRef:
                    name: my-app-database
                    key: database-user
              - name: MONGODB_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: my-app-database
                    key: database-password
              - name: MONGODB_DATABASE
                value: "myapp"
              - name: ROUTE_PATH
                value: /my-app
              - name: MONGODB_AUTHDB
                value: "myapp"
              - name: MONGODB_PORT
                value: "27017"
              - name: HTTP_PORT
                value: "8080"
              - name: HTTPS_PORT
                value: "8082"
        restartPolicy: Always
        dnsPolicy: ClusterFirst
        triggers:
          - type: ImageChange
            imageChangeParams:
              automatic: true
              from:
                kind: ImageStreamTag
                name: 'my-app:latest'
                namespace: ${IMAGE_NAMESPACE}
              containerNames:
                - my-app-container
          - type: ConfigChange
I deploy the application using
oc process -f ./openshift/template.yaml ..Parameters... | oc apply -f -
But the outcome is the same when I use oc new-app.
The weird thing is: the application gets deployed and is running fine, but image changes have no effect. So I exported the DeploymentConfig and found that it was missing the ImageChange trigger, leaving the trigger section as
triggers:
  - type: ConfigChange
At first I thought this was because the build was not ready when I tried to apply the DeploymentConfig, so I created a build first and waited for it to finish. Afterwards I deployed the rest of the application (Service, Route, DeploymentConfig). The outcome was the same, however. If I use the web GUI, edit the DeploymentConfig there to add the image change trigger, fill out namespace, app and tag (latest) and hit apply, everything works as it should. I just can't figure out why the trigger is being ignored initially. It would be great if someone has an idea where I'm wrong.
Versions I am using are
oc: v3.9.0
kubernetes: v1.6.1
openshift v3.6.173.0.140
OK, the answer was pretty simple. It turned out to be just an indentation error in the YAML file for the DeploymentConfig. Instead of
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      triggers:
        - type: ImageChange
          imageChangeParams:
            automatic: true
            containerNames:
              - alpac-studio-container
            from:
              kind: ImageStreamTag
              name: alpac-studio:latest
        - type: ConfigChange
It has to be
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
  triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - alpac-studio-container
        from:
          kind: ImageStreamTag
          name: alpac-studio:latest
    - type: ConfigChange
So the triggers have to be at the same level as e.g. template and strategy, i.e. directly under spec rather than inside the pod template.
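As a quick sanity check after instantiating the template, the triggers that actually ended up on the DeploymentConfig can be listed with the oc CLI, and the image trigger can also be re-added imperatively if it went missing (names taken from the template above):
# list the triggers currently set on the DeploymentConfig
oc set triggers dc/my-app
# add the ImageChange trigger back for the given container, if it was dropped
oc set triggers dc/my-app --from-image=my-app:latest --containers=my-app-container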