Converting docker-compose to Kubernetes using Kompose

I'm new to Kubernetes and I saw that there is a way of running kompose up, but I get this error:
root@master-node: kompose --file docker-compose.yml --volumes hostPath up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
FATA Error while deploying application: Error loading config file "/etc/kubernetes/admin.conf": open /etc/kubernetes/admin.conf: permission denied
ls -l /etc/kubernetes/admin.conf
-rw------- 1 root root 5593 Jun 14 08:59 /etc/kubernetes/admin.conf
How can I make this work?
System info:
Ubuntu 18
Client Version: version.Info: Major:"1", Minor:"21"
I also tried making it work like this:
kompose convert --volumes hostPath
INFO Kubernetes file "postgres-1-service.yaml" created
INFO Kubernetes file "postgres-2-service.yaml" created
INFO Kubernetes file "postgres-1-deployment.yaml" created
INFO Kubernetes file "postgres-2-deployment.yaml" created
kubectl create -f postgres-1-service.yaml,postgres-2-service.yaml,postgres-1-deployment.yaml,postgres-1-claim0-persistentvolumeclaim.yaml,postgres-1-claim1-persistentvolumeclaim.yaml,postgres-2-deployment.yaml,postgres-2-claim0-persistentvolumeclaim.yaml
I get this error only on postgres-1-deployment.yaml and postgres-2-deployment.yaml.
service/postgres-1 created
service/postgres-2 created
persistentvolumeclaim/postgres-1-claim0 created
persistentvolumeclaim/postgres-1-claim1 created
persistentvolumeclaim/postgres-2-claim0 created
Error from server (BadRequest): error when creating "postgres-1-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|sword"},{"name":"REPMGR_PGHBA_TRUST_ALL","value":true},{"name":"REPMGR_PRIMARY_HOST","value":"postgr|...
Error from server (BadRequest): error when creating "postgres-2-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|sword"},{"name":"REPMGR_PGHBA_TRUST_ALL","value":true},{"name":"REPMGR_PRIMARY_HOST","value":"postgr|...
Example of postgres-1-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: postgres-1
name: postgres-1
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: postgres-1
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: postgres-1
spec:
containers:
- env:
- name: BITNAMI_DEBUG
value: "true"
- name: POSTGRESQL_PASSWORD
value: password
- name: POSTGRESQL_POSTGRES_PASSWORD
value: adminpassword
- name: POSTGRESQL_USERNAME
value: user
- name: REPMGR_NODE_NAME
value: postgres-1
- name: REPMGR_NODE_NETWORK_NAME
value: postgres-1
- name: REPMGR_PARTNER_NODES
value: postgres-1,postgres-2:5432
- name: REPMGR_PASSWORD
value: repmgrpassword
- name: REPMGR_PGHBA_TRUST_ALL
value: yes
- name: REPMGR_PRIMARY_HOST
value: postgres-1
- name: REPMGR_PRIMARY_PORT
value: "5432"
image: bitnami/postgresql-repmgr:11
imagePullPolicy: ""
name: postgres-1
ports:
- containerPort: 5432
resources: {}
volumeMounts:
- mountPath: /bitnami/postgresql
name: postgres-1-hostpath0
- mountPath: /docker-entrypoint-initdb.d
name: postgres-1-hostpath1
restartPolicy: Always
serviceAccountName: ""
volumes:
- hostPath:
path: /db4_data
name: postgres-1-hostpath0
- hostPath:
path: /root/ansible/api/posrgres11/cluster
name: postgres-1-hostpath1
status: {}
Did Kompose translate the deployment YAML files the wrong way? I did everything as described in the Kompose guide.

Figured out the issue: Kompose translated the env value true without quotes ("true").
I used this validator: https://kubeyaml.com/
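A minimal sketch of the fix, assuming the generated manifests are edited by hand before running kubectl create: quote boolean-looking env values so they reach the API server as strings.

      - name: REPMGR_PGHBA_TRUST_ALL
        value: "yes"   # unquoted yes/true parses as a YAML boolean, which the API server rejects for an env var value

Quoting the value in the source docker-compose.yml (for example REPMGR_PGHBA_TRUST_ALL: "yes") should likewise keep kompose from emitting a bare boolean.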

Related

Open Telemetry python-FastAPI auto instrumentation in k8s not collecting data

So I am trying to instrument a FastAPI python server with Open Telemetry. I installed the dependencies needed through poetry:
opentelemetry-api = "^1.11.1"
opentelemetry-distro = {extras = ["otlp"], version = "^0.31b0"}
opentelemetry-instrumentation-fastapi = "^0.31b0"
When running the server locally, with opentelemetry-instrument --traces_exporter console uvicorn src.main:app --host 0.0.0.0 --port 5000 I can see the traces printed out to my console whenever I call any of my endpoints.
The main issue I face is that when running the app in k8s, I see no logs in the collector.
I have added cert-manager (needed by the OTel Operator) with kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml, and the OTel Operator itself with kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml.
Then, I added a collector with the following config:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
spec:
config: |
receivers:
otlp:
protocols:
grpc:
http:
processors:
exporters:
logging:
service:
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [logging]
And finally, an Instrumentation CR to enable auto-instrumentation:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
name: my-instrumentation
spec:
exporter:
endpoint: http://otel-collector:4317
propagators:
- tracecontext
- baggage
sampler:
type: parentbased_traceidratio
argument: "0.25"
My app's deployment contains:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
kompose.version: 1.26.1 (HEAD)
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: api
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
kompose.version: 1.26.1 (HEAD)
sidecar.opentelemetry.io/inject: "true"
instrumentation.opentelemetry.io/inject-python: "true"
creationTimestamp: null
labels:
io.kompose.network/backend: "true"
io.kompose.service: api
app: api
spec:
containers:
- env:
- name: APP_ENV
value: docker
- name: RELEASE_VERSION
value: "1.0.0"
- name: OTEL_RESOURCE_ATTRIBUTES
value: "service.name=fastapiApp"
- name: OTEL_LOG_LEVEL
value: "debug"
- name: OTEL_TRACES_EXPORTER
value: otlp_proto_http
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otel-collector:4317
image: my_org/api:1.0.0
name: api
ports:
- containerPort: 5000
resources: {}
imagePullPolicy: IfNotPresent
restartPolicy: Always
What am I missing? I have double-checked everything a thousand times and cannot figure out what might be wrong.
You have incorrectly configured the exporter endpoint setting OTEL_EXPORTER_OTLP_ENDPOINT. The endpoint value for an OTLP-over-HTTP exporter should use port 4318; port 4317 should be used for OTLP/gRPC exporters.
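For illustration, a sketch of the corrected entries in the Deployment's env block, assuming the operator exposes the collector through a Service named otel-collector (derived from the OpenTelemetryCollector named otel above):

- name: OTEL_TRACES_EXPORTER
  value: otlp_proto_http
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  value: http://otel-collector:4318   # OTLP/HTTP port; 4317 is only for OTLP/gRPC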

Kubernetes: accessing an API web service

I'm new to Kubernetes and I converted my environment from docker-compose.
I have a pod that runs Python code. If I use my Docker container on the same host, I can access it,
but when it runs in a pod, no traffic gets inside.
kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.10.10.130:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
api-service.yaml:
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
ports:
- name: "5001"
port: 5001
targetPort: 5001
selector:
io.kompose.service: api
status:
loadBalancer: {}
api-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: api
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: api
spec:
containers:
- image: 127.0.0.1:5000/api:latest
imagePullPolicy: "Never"
name: api
ports:
- containerPort: 5001
resources: {}
volumeMounts:
- mountPath: /base
name: api-hostpath0
restartPolicy: Always
serviceAccountName: ""
volumes:
- hostPath:
path: /root/ansible/api/base
name: api-hostpath0
status: {}
pod log:
* Serving Flask app 'server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://10.244.0.17:5001/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 553-272-086
I tried reaching the address that the config view shows and I get this:
https://10.10.10.130:6443/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
The path to reach the container is:
https://10.10.10.130:5001/
It does not reach the container; it behaves as if the site does not exist.
Again, this works with the Docker container, so what am I missing?
Thanks
--EDIT--
If I curl http://10.244.0.17:5001/ (the address of the api pod) from the host, I get in. Why can't I get in from outside?
I also tried adding this to the nginx and api pod deployments:
template:
spec:
hostNetwork: true
I still cannot reach it, please help.
Found the solution!
I needed to add externalIPs to my pods' service.yaml (api and nginx); see the fuller sketch after the snippet below:
spec:
ports:
- name: "8443"
port: 8443
targetPort: 80
externalIPs:
- 10.10.10.130
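For the api Service shown earlier, a sketch of the same change might look like this (assuming 10.10.10.130 is the node IP to expose and the Service keeps port 5001):

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    io.kompose.service: api
  ports:
    - name: "5001"
      port: 5001
      targetPort: 5001
  externalIPs:
    - 10.10.10.130   # traffic arriving on this node IP at port 5001 is forwarded to the api pods

With externalIPs set, kube-proxy accepts traffic that arrives on that address and Service port and routes it to the backing pods.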

Bad Gateway in Rails app with Kubernetes setup

I'm trying to set up a Rails app within a Kubernetes cluster (which is created with k3d on my local machine).
k3d cluster create --api-port 6550 -p "8081:80#loadbalancer" --agents 2
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
I can get an Ingress running which correctly exposes a running Nginx Deployment ("Welcome to nginx!")
(I took this example from here: https://k3d.io/usage/guides/exposing_services/)
So I know my setup is working (with nginx).
Now, I simply want to point that Ingress to my Rails app, but I always get a "Bad Gateway". (I also tried to point it to my other services (elasticsearch, kibana, pgadminer), but I always get a "Bad Gateway".)
I can see my Rails app running at http://localhost:62333/.
last lines of my Dockerfile:
EXPOSE 3001:3001
CMD rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0 -p 3001
Why does my API get the "Bad Gateway" but Nginx does not?
Does it have something to do with my selectors and labels which are created by kompose convert?
This is my complete Rails-API Deployment:
kubectl apply -f api-deployment.yml -f api.service.yml -f ingress.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
kompose.version: 1.22.0 (HEAD)
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: api
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
kompose.version: 1.22.0 (HEAD)
creationTimestamp: null
labels:
io.kompose.network/metashop-net: 'true'
io.kompose.service: api
spec:
containers:
- env:
- name: APPLICATION_URL
valueFrom:
configMapKeyRef:
key: APPLICATION_URL
name: env
- name: DEBUG
value: 'true'
- name: ELASTICSEARCH_URL
valueFrom:
configMapKeyRef:
key: ELASTICSEARCH_URL
name: env
image: metashop-backend-api:DockerfileJeanKlaas
name: api
ports:
- containerPort: 3001
resources: {}
# volumeMounts:
# - mountPath: /usr/src/app
# name: api-claim0
# restartPolicy: Always
# volumes:
# - name: api-claim0
# persistentVolumeClaim:
# claimName: api-claim0
status: {}
api-service.yml
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
kompose.version: 1.22.0 (HEAD)
creationTimestamp: null
labels:
app: api
io.kompose.service: api
name: api
spec:
type: ClusterIP
ports:
- name: '3001'
protocol: TCP
port: 3001
targetPort: 3001
selector:
io.kompose.service: api
status:
loadBalancer: {}
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: api
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api
port:
number: 3001
configmap.yml
apiVersion: v1
data:
APPLICATION_URL: localhost:3001
ELASTICSEARCH_URL: elasticsearch
RAILS_ENV: development
RAILS_MAX_THREADS: '5'
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
io.kompose.service: api-env
name: env
I hope I didn't miss anything.
Thank you in advance.
EDIT: added the Endpoints resource, as requested in a comment:
kind: Endpoints
apiVersion: v1
metadata:
name: api
namespace: default
labels:
app: api
io.kompose.service: api
selfLink: /api/v1/namespaces/default/endpoints/api
subsets:
- addresses:
- ip: 10.42.1.105
nodeName: k3d-metashop-cluster-server-0
targetRef:
kind: Pod
namespace: default
apiVersion: v1
ports:
- name: '3001'
port: 3001
protocol: TCP
The problem was within the Dockerfile:
I had not defined ENV RAILS_LOG_TO_STDOUT true, so I was not able to see any errors in the pod logs.
After I added ENV RAILS_LOG_TO_STDOUT true, I saw errors like "database xxxx does not exist".
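For reference, a sketch of the Dockerfile change, placed before the final CMD shown earlier:

# Send Rails logs to stdout so kubectl logs shows application errors
ENV RAILS_LOG_TO_STDOUT true

With this set, the errors (such as the missing database) appear directly in the pod logs.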

Four different errors when using kubectl apply command

My docker-compose file is as below:
version: '3'
services:
nginx:
build: .
container_name: nginx
ports:
- '80:80'
And my Dockerfile:
FROM nginx:alpine
I used kompose convert and it created two files called nginx-deployment.yml and nginx-service.yml with the contents below.
nginx-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: nginx
name: nginx
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: nginx
strategy: {}
template:
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: nginx
spec:
containers:
- image: nginx:alpine
name: nginx
ports:
- containerPort: 80
resources: {}
restartPolicy: Always
status: {}
And nginx-service.yml:
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: nginx
name: nginx
spec:
ports:
- name: "80"
port: 80
targetPort: 80
selector:
io.kompose.service: nginx
status:
loadBalancer: {}
My kustomization.yml:
resources:
- nginx-deployment.yml
- nginx-service.yml
All files are in the same path /home.
I ran the following commands, but for each of them I got a different error:
kubectl apply -f kustomization.yml:
error: error validating "kustomization.yml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl apply -k .:
error: accumulating resources: accumulation err='accumulating resources from 'nginx-deployment.yml': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory
kubectl apply -f kustomization.yml --validate=false:
error: unable to decode "kustomization.yml": Object 'Kind' is missing in '{"resources":["nginx-deployment.yml","nginx-service.yml"]}'
kubectl apply -k . --validate=false:
error: accumulating resources: accumulation err='accumulating resources from 'nginx-deployment.yml': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory
The Kubernetes cluster is a single node.
Why am I getting these errors, and how can I run my container in this environment?
Your kustomization.yml file has two errors: the files generated by kompose have .yaml extensions but you are referring to .yml, and you are missing the kind and apiVersion lines. A corrected kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- nginx-deployment.yaml
- nginx-service.yaml
kubectl apply -k .

Azure AKS backup using Velero

I noticed that Velero can only back up AKS PVCs if those PVCs are disks and not Azure file shares. To handle this, I tried to use restic to back up the file shares themselves, but it gives me a strange log.
This is what my actual Deployment looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
backup.velero.io/backup-volumes: grafana-data
deployment.kubernetes.io/revision: "17"
And the log of my backup:
time="2020-05-26T13:51:54Z" level=info msg="Adding pvc grafana-data to additionalItems" backup=velero/grafana-test-volume cmd=/velero logSource="pkg/backup/pod_action.go:67" pluginName=velero
time="2020-05-26T13:51:54Z" level=info msg="Backing up item" backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:169" name=grafana-data namespace=grafana resource=persistentvolumeclaims
time="2020-05-26T13:51:54Z" level=info msg="Executing custom action" backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:330" name=grafana-data namespace=grafana resource=persistentvolumeclaims
time="2020-05-26T13:51:54Z" level=info msg="Skipping item because it's already been backed up." backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:163" name=grafana-data namespace=grafana resource=persistentvolumeclaims
As you can see, somehow it did not back up the grafana-data volume, since it says it is already in the backup (which it actually is not).
My azurefile StorageClass holds these contents:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"azurefile"},"parameters":{"skuName":"Standard_LRS"},"provisioner":"kubernetes.io/azure-file"}
creationTimestamp: "2020-05-18T15:18:18Z"
labels:
kubernetes.io/cluster-service: "true"
name: azurefile
resourceVersion: "1421202"
selfLink: /apis/storage.k8s.io/v1/storageclasses/azurefile
uid: e3cc4e52-c647-412a-bfad-81ab6eb222b1
mountOptions:
- nouser_xattr
parameters:
skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Delete
volumeBindingMode: Immediate
As you can see, I actually patched the storage class to hold the nouser_xattr mount option, which was suggested earlier.
When I check the restic pod logs I see the following:
E0524 10:22:08.908190 1 reflector.go:156] github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1.PodVolumeBackup: Get https://10.0.0.1:443/apis/velero.io/v1/namespaces/velero/podvolumebackups?limit=500&resourceVersion=1212830: dial tcp 10.0.0.1:443: i/o timeout
I0524 10:22:08.909577 1 trace.go:116] Trace[1946538740]: "Reflector ListAndWatch" name:github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions/factory.go:117 (started: 2020-05-24 10:21:38.908988405 +0000 UTC m=+487217.942875118) (total time: 30.000554209s):
Trace[1946538740]: [30.000554209s] [30.000554209s] END
When I check the PodVolumeBackup resources I see the contents below. I don't know what is expected here, though:
➜ ~ kubectl -n velero get podvolumebackups -o yaml
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
To summarize, I installed Velero like this:
velero install \
--provider azure \
--plugins velero/velero-plugin-for-microsoft-azure:v1.0.1 \
--bucket $BLOB_CONTAINER \
--secret-file ./credentials-velero \
--backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
--snapshot-location-config apiTimeout=5m,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP \
--use-restic \
--wait
The end result is the Deployment described below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
backup.velero.io/backup-volumes: app-upload
deployment.kubernetes.io/revision: "18"
creationTimestamp: "2020-05-18T16:55:38Z"
generation: 10
labels:
app: app
velero.io/backup-name: mekompas-tenant-production-20200518020012
velero.io/restore-name: mekompas-tenant-production-20200518020012-20200518185536
name: app
namespace: mekompas-tenant-production
resourceVersion: "427893"
selfLink: /apis/extensions/v1beta1/namespaces/mekompas-tenant-production/deployments/app
uid: c1961ec3-b7b1-4f81-9aae-b609fa3d31fc
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
kubectl.kubernetes.io/restartedAt: "2020-05-18T20:24:19+02:00"
creationTimestamp: null
labels:
app: app
spec:
containers:
- image: nginx:1.17-alpine
imagePullPolicy: IfNotPresent
name: app-nginx
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/www/html
name: app-files
- mountPath: /etc/nginx/conf.d
name: nginx-vhost
- env:
- name: CONF_DB_HOST
value: db.mekompas-tenant-production
- name: CONF_DB
value: mekompas
- name: CONF_DB_USER
value: mekompas
- name: CONF_DB_PASS
valueFrom:
secretKeyRef:
key: DATABASE_PASSWORD
name: secret
- name: CONF_EMAIL_FROM_ADDRESS
value: noreply@mekompas.nl
- name: CONF_EMAIL_FROM_NAME
value: mekompas
- name: CONF_EMAIL_REPLYTO_ADDRESS
value: slc@mekompas.nl
- name: CONF_UPLOAD_PATH
value: /uploads
- name: CONF_SMTP_HOST
value: smtp.sendgrid.net
- name: CONF_SMTP_PORT
value: "587"
- name: CONF_SMTP_USER
value: apikey
- name: CONF_SMTP_PASSWORD
valueFrom:
secretKeyRef:
key: MAIL_PASSWORD
name: secret
image: me.azurecr.io/mekompas/php-fpm-alpine:1.12.0
imagePullPolicy: Always
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- cp -r /app/. /var/www/html && chmod -R 777 /var/www/html/templates_c
&& chmod -R 777 /var/www/html/core/lib/htmlpurifier-4.9.3/library/HTMLPurifier/DefinitionCache
name: app-php
ports:
- containerPort: 9000
name: upstream-php
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/www/html
name: app-files
- mountPath: /uploads
name: app-upload
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: registrypullsecret
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: app-upload
persistentVolumeClaim:
claimName: upload
- emptyDir: {}
name: app-files
- configMap:
defaultMode: 420
name: nginx-vhost
name: nginx-vhost
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-05-18T18:12:20Z"
lastUpdateTime: "2020-05-18T18:12:20Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-05-18T16:55:38Z"
lastUpdateTime: "2020-05-20T16:03:48Z"
message: ReplicaSet "app-688699c5fb" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 10
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Best,
Pim
Have you added nouser_xattr to your StorageClass mountOptions list?
This requirement is documented in GitHub issue 1800.
Also mentioned on the restic integration page (check under the Azure section), where they provide this snippet to patch your StorageClass resource:
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
--type json \
--patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
If you have no existing mountOptions list, you can try:
kubectl patch storageclass azurefile \
--type merge \
--patch '{"mountOptions": ["nouser_xattr"]}'
Ensure the pod template of the Deployment resource includes the annotation backup.velero.io/backup-volumes. Annotations on Deployment resources will propagate to ReplicaSet resources, but not to Pod resources.
Specifically, in your example the annotation backup.velero.io/backup-volumes: app-upload should be a child of spec.template.metadata.annotations, rather than a child of metadata.annotations.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
# *** move velero annotation from here ***
labels:
app: app
name: app
namespace: mekompas-tenant-production
spec:
template:
metadata:
annotations:
# *** velero annotation goes here in order to end up on the pod ***
backup.velero.io/backup-volumes: app-upload
labels:
app: app
spec:
containers:
- image: nginx:1.17-alpine