Kubernetes VPA for CronJob

I need to run VPA for a CronJob, and I'm following this doc.
I think I followed it properly, but it doesn't work for me.
I'm using GKE, Kubernetes 1.17, and the VPA version is vpa-release-0.8.
I created the CronJob and the VPA with this file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "batch/v1beta1"
    kind: CronJob
    name: hello
  updatePolicy:
    updateMode: "Auto"
When I run this command:
kubectl describe vpa
I get this result:
Name:         my-vpa
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  autoscaling.k8s.io/v1
Kind:         VerticalPodAutoscaler
Metadata:
  Creation Timestamp:  2021-02-08T07:38:23Z
  Generation:          2
  Resource Version:    3762
  Self Link:           /apis/autoscaling.k8s.io/v1/namespaces/default/verticalpodautoscalers/my-vpa
  UID:                 07803254-c549-4568-a062-144c570a8d41
Spec:
  Target Ref:
    API Version:  batch/v1beta1
    Kind:         CronJob
    Name:         hello
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2021-02-08T07:39:14Z
    Status:                False
    Type:                  RecommendationProvided
  Recommendation:
Events:  <none>

@mario oh!! So there was not enough time to gather metrics to recommend resources.... – 변상현 Feb 10 at 2:36
Yes, exactly. If the only task of your CronJob is to echo Hello from the Kubernetes cluster and exit, you won't get any recommendations from VPA, as this is not a resource-intensive task.
However, if you modify your command so that it generates an artificial load in your CronJob-managed pods:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            args:
            - /bin/sh
            - -c
            - date; dd if=/dev/urandom | gzip -9 >> /dev/null
          restartPolicy: OnFailure
after a few minutes you'll get the expected result:
$ kubectl describe vpa my-vpa
Name:         my-vpa
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  autoscaling.k8s.io/v1
Kind:         VerticalPodAutoscaler
Metadata:
  Creation Timestamp:  2021-05-22T13:02:27Z
  Generation:          8
  ...
    Manager:         vpa-recommender
    Operation:       Update
    Time:            2021-05-22T13:29:40Z
  Resource Version:  5534471
  Self Link:         /apis/autoscaling.k8s.io/v1/namespaces/default/verticalpodautoscalers/my-vpa
  UID:               e37abd79-296d-4f72-8bd5-f2409457e9ff
Spec:
  Target Ref:
    API Version:  batch/v1beta1
    Kind:         CronJob
    Name:         hello
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2021-05-22T13:39:40Z
    Status:                False
    Type:                  LowConfidence
    Last Transition Time:  2021-05-22T13:29:40Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  hello
      Lower Bound:
        Cpu:     1185m
        Memory:  2097152
      Target:
        Cpu:     1375m
        Memory:  2097152
      Uncapped Target:
        Cpu:     1375m
        Memory:  2097152
      Upper Bound:
        Cpu:     96655m
        Memory:  115343360
Events:  <none>
❗Important: Just don't leave it running for too long as you might be quite surprised with your bill 😉
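If you just want to poll for the recommendation instead of reading the full describe output, a jsonpath query against the VPA object works as well (a minimal sketch, assuming the my-vpa name from the example above):
kubectl get vpa my-vpa -o jsonpath='{.status.recommendation.containerRecommendations[0].target}'
An empty result simply means the recommender has not produced a target yet.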

Related

openshift cronjob with imagestream

When I update my ImageStream and then trigger a run of the CronJob, the CronJob still uses the previous ImageStream instead of pulling the latest image, despite the fact that I have configured the CronJob to always pull.
The way I've been testing this is to:
push a new image to the stream, and verify the ImageStream is updated
check the CronJob object and verify the image associated with the container still has the old ImageStream hash
trigger a new run of the CronJob, which I would expect to pull a new image since the pull policy is Always -- but it does not; the CronJob starts a container using the old ImageStream. (The commands I use for these steps are sketched below.)
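For reference, this is roughly how I run those steps (a sketch; the manual job name is just a placeholder):
# confirm the ImageStream now points at the new SHA
oc get imagestream cool-cron-job -n cool-namespace -o jsonpath='{.status.tags[0].items[0].image}'
# check which image the CronJob template currently references
oc get cronjob cool-cron-job-cron-job -n cool-namespace -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].image}'
# trigger an ad-hoc run from the CronJob
oc create job manual-test-1 --from=cronjob/cool-cron-job-cron-job -n cool-namespace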
Here's the YAML:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: cool-cron-job-template
parameters:
- name: ENVIRONMENT
  displayName: Environment
  required: true
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: cool-cron-job
    namespace: cool-namespace
    labels:
      app: cool-cron-job
      owner: cool-owner
  spec:
    lookupPolicy:
      local: true
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: cool-cron-job-cron-job
    namespace: cool-namespace
    labels:
      app: cool-cron-job
      owner: cool-owner
  spec:
    schedule: "10 0 1 * *"
    concurrencyPolicy: "Forbid"
    startingDeadlineSeconds: 200
    suspend: false
    successfulJobsHistoryLimit: 1
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              app: cool-cron-job
              cronjob: "true"
            annotations:
              alpha.image.policy.openshift.io/resolve-names: '*'
          spec:
            dnsPolicy: ClusterFirst
            restartPolicy: OnFailure
            securityContext: { }
            terminationGracePeriodSeconds: 0
            containers:
            - command: [ "python", "-m", "cool_cron_job.handler" ]
              imagePullPolicy: Always
              name: cool-cron-job-container
              image: cool-cron-job:latest

k8s custom metric exporter json returned values

I've tried to make a custom metric exporter for my Kubernetes cluster,
and I'm still failing to return the right value so that I can use it in an HPA, with this simplistic "a" variable.
I don't really know what JSON to return on the different routes: if I return {"a":"1"}, the error message on the HPA tells me it's missing kind, then other fields.
So I've summed up the complete reproducible experiment below, to make it as simple as possible.
Any idea how to complete this task?
Thanks a lot for any clue, advice, enlightenment, comment, or notice.
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter
  namespace: test
data:
  os.php: |
    <?php // If anyone knows a simple replacement other than swoole .. he's welcome --- Somehow I only think I'll need to know what the json output is expected for theses routes
    $server = new Swoole\HTTP\Server("0.0.0.0",443,SWOOLE_PROCESS,SWOOLE_SOCK_TCP | SWOOLE_SSL);
    $server->set(['worker_num' => 1,'ssl_cert_file' => __DIR__ . '/example.com+5.pem','ssl_key_file' => __DIR__ . '/example.com+5-key.pem']);
    $server->on('Request', 'onMessage');
    $server->start();
    function onMessage($req, $res){
      $value=1;
      $url=$req->server['request_uri'];
      file_put_contents('monolog.log',"\n".$url,8);//Log
      if($url=='/'){
        $res->end('{"status":"healthy"}');return;
      } elseif($url=='/metrics'){
        $res->end('a '.$value);return;
      } elseif($url=='/apis/custom.metrics.k8s.io/v1beta1'){ // <-- This url is called lots of time in the logs
        $res->end('{"kind": "APIResourceList","apiVersion": "v1","groupVersion": "custom.metrics.k8s.io/v1beta1","resources": [{"name": "namespaces/a","singularName": "","namespaced": false,"kind": "MetricValueList","verbs": ["get"]}]}');return;
      } elseif($url=='/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a'){
        $res->end('{"kind": "MetricValueList","apiVersion": "custom.metrics.k8s.io/v1beta1","metadata": {"selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter-svc/a"},"items": [{"describedObject": {"kind": "Service","namespace": "test","name": "test-metrics-exporter-svc","apiVersion": "/v1"},"metricName": "a","timestamp": "2020-06-21T08:35:58Z","value": "'.$value.'","selector": null}]}');return;
      }
      $res->status(404);return;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: test-metrics-exporter
  namespace: test
  annotations:
    prometheus.io/port: '443'
    prometheus.io/scrape: 'true'
spec:
  ports:
  - port: 443
    protocol: TCP
  selector:
    app: test-metrics-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-metrics-exporter
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-metrics-exporter
  template:
    metadata:
      labels:
        app: test-metrics-exporter
    spec:
      terminationGracePeriodSeconds: 1
      volumes:
      - name: exporter
        configMap:
          name: exporter
          defaultMode: 0744
          items:
          - key: os.php
            path: os.php
      containers:
      - name: test-metrics-exporter
        image: openswoole/swoole:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: exporter
          mountPath: /var/www/os.php
          subPath: os.php
        command:
        - /bin/sh
        - -c
        - |
          touch monolog.log;
          apt update && apt install wget -y && wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
          cp mkcert-v1.4.3-linux-amd64 /usr/local/bin/mkcert && chmod +x /usr/local/bin/mkcert
          mkcert example.com "*.example.com" example.test localhost 127.0.0.1 ::1
          php os.php &
          tail -f monolog.log
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  namespace: test
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: alpine
        image: alpine
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - |
          tail -f /dev/null
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: alpine
  namespace: test
spec:
  scaleTargetRef:
    kind: Deployment
    name: alpine
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: test-metrics-exporter
      metricName: a
      targetValue: '1'
---
# This is the Api hook for custom metrics
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
  namespace: test
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  version: v1beta1
  service:
    name: test-metrics-exporter
    namespace: test
    port: 443
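To see what the aggregation layer actually receives from the exporter, you can also query the custom metrics API through the API server directly (a sketch; the paths mirror the routes handled in os.php above):
# the discovery document the API server fetches from the adapter
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
# the metric the HPA asks for
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a"
kubectl describe hpa alpine -n test then shows whether the HPA could read the metric, and any error it got back.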

Cannot add linkerd inject annotation in deployment file

I have a Kubernetes deployment file, user.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
  namespace: stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9022'
    spec:
      nodeSelector:
        env: stage
      containers:
      - name: user
        image: <docker image path>
        imagePullPolicy: Always
        resources:
          limits:
            memory: "512Mi"
            cpu: "250m"
          requests:
            memory: "256Mi"
            cpu: "200m"
        ports:
        - containerPort: 8080
        env:
        - name: MODE
          value: "local"
        - name: PORT
          value: ":8080"
        - name: REDIS_HOST
          value: "xxx"
        - name: KAFKA_ENABLED
          value: "true"
        - name: BROKERS
          value: "xxx"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: stage
  name: user
spec:
  selector:
    app: user
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: user
  namespace: stage
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 300Mi
This deployment is already running with Linkerd injected via cat user.yaml | linkerd inject - | kubectl apply -f -.
Now I want to add the Linkerd inject annotation (as mentioned here) and simply run kubectl apply -f user.yaml, just like I do for a deployment without Linkerd injected.
However, with the modified user.yaml (after adding the linkerd.io/inject annotation to the deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
  namespace: stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9022'
        linkerd.io/inject: enabled
    spec:
      nodeSelector:
        env: stage
      containers:
      - name: user
        image: <docker image path>
        imagePullPolicy: Always
        resources:
          limits:
            memory: "512Mi"
            cpu: "250m"
          requests:
            memory: "256Mi"
            cpu: "200m"
        ports:
        - containerPort: 8080
        env:
        - name: MODE
          value: "local"
        - name: PORT
          value: ":8080"
        - name: REDIS_HOST
          value: "xxx"
        - name: KAFKA_ENABLED
          value: "true"
        - name: BROKERS
          value: "xxx"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: stage
  name: user
spec:
  selector:
    app: user
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: user
  namespace: stage
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 300Mi
When I run kubectl apply -f user.yaml, it throws this error:
service/user unchanged
horizontalpodautoscaler.autoscaling/user configured
Error from server (BadRequest): error when creating "user.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.v1.EnvVar.Value: ReadString: expects " or n, but found 1, error found in #10 byte of ...|,"value":1},{"name":|..., bigger context ...|ue":":8080"}
Can anyone please point out where I have gone wrong in adding the annotation?
Thanks
Try with double quotes, like below:
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9022"
  linkerd.io/inject: enabled
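For context on why quoting matters here: in YAML an unquoted scalar like 9022 or true is parsed as a number or a boolean, while the Kubernetes API expects plain strings for annotation and env values, which is the kind of type mismatch the ReadString error above complains about. A minimal illustration (not your actual manifest):
env:
- name: PORT
  value: 8080    # parsed as an integer -> rejected, an EnvVar value must be a string
- name: PORT
  value: "8080"  # parsed as a string -> accepted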

Kubernetes failed job with no pods

I see a failed Job that created no pods, and there is also no information in the events. Since there are no pods, I could not check the logs.
Here is the description of the Job that failed:
kubectl describe job time-limited-rbac-1604010900 -n add-ons
Name:                      time-limited-rbac-1604010900
Namespace:                 add-ons
Selector:                  controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
Labels:                    controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
                           job-name=time-limited-rbac-1604010900
Annotations:               <none>
Controlled By:             CronJob/time-limited-rbac
Parallelism:               1
Completions:               <unset>
Start Time:                Thu, 29 Oct 2020 15:35:08 -0700
Active Deadline Seconds:   280s
Pods Statuses:             0 Running / 0 Succeeded / 1 Failed
Pod Template:
  Labels:           controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
                    job-name=time-limited-rbac-1604010900
  Service Account:  time-limited-rbac
  Containers:
   time-limited-rbac:
    Image:      bitnami/kubectl:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
    Args:
      /var/tmp/time-limited-rbac.sh
    Environment:  <none>
    Mounts:
      /var/tmp/ from script (rw)
  Volumes:
   script:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      time-limited-rbac-script
    Optional:  false
Events:  <none>
Here is the description of the CronJob:
apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    annotations:
      meta.helm.sh/release-name: time-limited-rbac
      meta.helm.sh/release-namespace: add-ons
    labels:
      app.kubernetes.io/name: time-limited-rbac
    name: time-limited-rbac
  spec:
    concurrencyPolicy: Replace
    failedJobsHistoryLimit: 1
    jobTemplate:
      metadata:
        creationTimestamp: null
      spec:
        activeDeadlineSeconds: 280
        backoffLimit: 3
        parallelism: 1
        template:
          metadata:
            creationTimestamp: null
          spec:
            containers:
            - args:
              - /var/tmp/time-limited-rbac.sh
              command:
              - /bin/bash
              image: bitnami/kubectl:latest
              imagePullPolicy: Always
              name: time-limited-rbac
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /var/tmp/
                name: script
            dnsPolicy: ClusterFirst
            restartPolicy: Never
            schedulerName: default-scheduler
            securityContext: {}
            serviceAccount: time-limited-rbac
            serviceAccountName: time-limited-rbac
            terminationGracePeriodSeconds: 0
            volumes:
            - configMap:
                defaultMode: 356
                name: time-limited-rbac-script
              name: script
    schedule: '*/5 * * * *'
    successfulJobsHistoryLimit: 3
    suspend: false
Is there any way to tune this CronJob to avoid such scenarios? We are hitting this issue at least once or twice every day.
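A note for anyone hitting the same thing: Job-level failures (for example admission webhook or quota rejections, or activeDeadlineSeconds expiring before a pod could be scheduled) are recorded as namespace events even when no pod exists, so listing events can give a clue. A rough sketch:
kubectl get events -n add-ons --sort-by=.lastTimestamp
kubectl describe cronjob time-limited-rbac -n add-ons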

Azure AKS backup using Velero

I noticed that Velero can only back up AKS PVCs if those PVCs are disks and not Azure file shares. To handle this, I tried to use restic to back up the file shares themselves, but it gives me a strange log.
This is what my actual pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    backup.velero.io/backup-volumes: grafana-data
    deployment.kubernetes.io/revision: "17"
And the log of my backup:
time="2020-05-26T13:51:54Z" level=info msg="Adding pvc grafana-data to additionalItems" backup=velero/grafana-test-volume cmd=/velero logSource="pkg/backup/pod_action.go:67" pluginName=velero
time="2020-05-26T13:51:54Z" level=info msg="Backing up item" backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:169" name=grafana-data namespace=grafana resource=persistentvolumeclaims
time="2020-05-26T13:51:54Z" level=info msg="Executing custom action" backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:330" name=grafana-data namespace=grafana resource=persistentvolumeclaims
time="2020-05-26T13:51:54Z" level=info msg="Skipping item because it's already been backed up." backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:163" name=grafana-data namespace=grafana resource=persistentvolumeclaims
As you can see, somehow it did not back up the grafana-data volume, since it says it is already in the backup (which it actually is not).
My azurefile storage class holds these contents:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"azurefile"},"parameters":{"skuName":"Standard_LRS"},"provisioner":"kubernetes.io/azure-file"}
  creationTimestamp: "2020-05-18T15:18:18Z"
  labels:
    kubernetes.io/cluster-service: "true"
  name: azurefile
  resourceVersion: "1421202"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/azurefile
  uid: e3cc4e52-c647-412a-bfad-81ab6eb222b1
mountOptions:
- nouser_xattr
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Delete
volumeBindingMode: Immediate
As you can see, I actually patched the storage class to include the nouser_xattr mount option, which was suggested earlier.
When I check the restic pod logs, I see the following info:
E0524 10:22:08.908190 1 reflector.go:156] github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1.PodVolumeBackup: Get https://10.0.0.1:443/apis/velero.io/v1/namespaces/velero/podvolumebackups?limit=500&resourceVersion=1212830: dial tcp 10.0.0.1:443: i/o timeout
I0524 10:22:08.909577 1 trace.go:116] Trace[1946538740]: "Reflector ListAndWatch" name:github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions/factory.go:117 (started: 2020-05-24 10:21:38.908988405 +0000 UTC m=+487217.942875118) (total time: 30.000554209s):
Trace[1946538740]: [30.000554209s] [30.000554209s] END
When I check the PodVolumeBackup resources, I see the contents below. I don't know what is expected here, though:
➜ ~ kubectl -n velero get podvolumebackups -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
To summarize, I installed Velero like this:
velero install \
  --provider azure \
  --plugins velero/velero-plugin-for-microsoft-azure:v1.0.1 \
  --bucket $BLOB_CONTAINER \
  --secret-file ./credentials-velero \
  --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
  --snapshot-location-config apiTimeout=5m,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP \
  --use-restic \
  --wait
The end result is the deployment described below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    backup.velero.io/backup-volumes: app-upload
    deployment.kubernetes.io/revision: "18"
  creationTimestamp: "2020-05-18T16:55:38Z"
  generation: 10
  labels:
    app: app
    velero.io/backup-name: mekompas-tenant-production-20200518020012
    velero.io/restore-name: mekompas-tenant-production-20200518020012-20200518185536
  name: app
  namespace: mekompas-tenant-production
  resourceVersion: "427893"
  selfLink: /apis/extensions/v1beta1/namespaces/mekompas-tenant-production/deployments/app
  uid: c1961ec3-b7b1-4f81-9aae-b609fa3d31fc
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2020-05-18T20:24:19+02:00"
      creationTimestamp: null
      labels:
        app: app
    spec:
      containers:
      - image: nginx:1.17-alpine
        imagePullPolicy: IfNotPresent
        name: app-nginx
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: app-files
        - mountPath: /etc/nginx/conf.d
          name: nginx-vhost
      - env:
        - name: CONF_DB_HOST
          value: db.mekompas-tenant-production
        - name: CONF_DB
          value: mekompas
        - name: CONF_DB_USER
          value: mekompas
        - name: CONF_DB_PASS
          valueFrom:
            secretKeyRef:
              key: DATABASE_PASSWORD
              name: secret
        - name: CONF_EMAIL_FROM_ADDRESS
          value: noreply#mekompas.nl
        - name: CONF_EMAIL_FROM_NAME
          value: mekompas
        - name: CONF_EMAIL_REPLYTO_ADDRESS
          value: slc#mekompas.nl
        - name: CONF_UPLOAD_PATH
          value: /uploads
        - name: CONF_SMTP_HOST
          value: smtp.sendgrid.net
        - name: CONF_SMTP_PORT
          value: "587"
        - name: CONF_SMTP_USER
          value: apikey
        - name: CONF_SMTP_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MAIL_PASSWORD
              name: secret
        image: me.azurecr.io/mekompas/php-fpm-alpine:1.12.0
        imagePullPolicy: Always
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - cp -r /app/. /var/www/html && chmod -R 777 /var/www/html/templates_c
                && chmod -R 777 /var/www/html/core/lib/htmlpurifier-4.9.3/library/HTMLPurifier/DefinitionCache
        name: app-php
        ports:
        - containerPort: 9000
          name: upstream-php
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: app-files
        - mountPath: /uploads
          name: app-upload
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registrypullsecret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: app-upload
        persistentVolumeClaim:
          claimName: upload
      - emptyDir: {}
        name: app-files
      - configMap:
          defaultMode: 420
          name: nginx-vhost
        name: nginx-vhost
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-05-18T18:12:20Z"
    lastUpdateTime: "2020-05-18T18:12:20Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-05-18T16:55:38Z"
    lastUpdateTime: "2020-05-20T16:03:48Z"
    message: ReplicaSet "app-688699c5fb" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 10
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Best,
Pim
Have you added nouser_xattr to your StorageClass mountOptions list?
This requirement is documented in GitHub issue 1800.
It is also mentioned on the restic integration page (check under the Azure section), where they provide this snippet to patch your StorageClass resource:
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
--type json \
--patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
If you have no existing mountOptions list, you can try:
kubectl patch storageclass azurefile \
--type merge \
--patch '{"mountOptions": ["nouser_xattr"]}'
Ensure the pod template of the Deployment resource includes the annotation backup.velero.io/backup-volumes. Annotations on Deployment resources will propagate to ReplicaSet resources, but not to Pod resources.
Specifically, in your example the annotation backup.velero.io/backup-volumes: app-upload should be a child of spec.template.metadata.annotations, rather than a child of metadata.annotations.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    # *** move velero annotation from here ***
  labels:
    app: app
  name: app
  namespace: mekompas-tenant-production
spec:
  template:
    metadata:
      annotations:
        # *** velero annotation goes here in order to end up on the pod ***
        backup.velero.io/backup-volumes: app-upload
      labels:
        app: app
    spec:
      containers:
      - image: nginx:1.17-alpine
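Once the annotation sits on the pod template and the deployment has rolled, you can verify that restic picked the volume up on the next backup (a sketch; the backup name is a placeholder):
kubectl -n velero get podvolumebackups -l velero.io/backup-name=<your-backup-name>
velero backup describe <your-backup-name> --details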