My K8s cluster needs persistent storage.
This is the pv-claim.yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Mi
This is the pv-volume.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "kpi-mon/configurations"
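For the claim above to bind to this PersistentVolume, the access modes must be compatible and the PV's capacity must be at least the requested size. A quick sanity check of the two quantities, assuming binary (Mi) units:

```python
# Parse Kubernetes binary quantities ("30Mi", "50Mi") and compare them.
def parse_mi(quantity: str) -> int:
    """Convert a quantity like '30Mi' to bytes (only Mi handled in this sketch)."""
    assert quantity.endswith("Mi"), "only Mi quantities handled here"
    return int(quantity[:-2]) * 1024 ** 2

pvc_request = parse_mi("30Mi")   # from pv-claim.yaml
pv_capacity = parse_mi("50Mi")   # from pv-volume.yaml

# The PVC can bind: the volume is large enough for the request.
print(pv_capacity >= pvc_request)  # True
```

Note also that `hostPath.path` is normally an absolute path on the node; the relative path above is worth double-checking.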
And this is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}-deployment
  labels:
    app: {{ .Values.name }}
    xappRelease: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
        xappRelease: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.repository -}}{{- .Values.image -}}:{{- .Values.tag }}
          imagePullPolicy: IfNotPresent
          ports:
            - name: rmr
              containerPort: {{ .Values.rmrPort }}
              protocol: TCP
            - name: rtg
              containerPort: {{ .Values.rtgPort }}
              protocol: TCP
          volumeMounts:
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
              subPath: {{ .Values.routingTableFile }}
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
              subPath: {{ .Values.vlevelFile }}
            - name: task-pv-storage
              mountPath: "kpi-mon/configurations"
          envFrom:
            - configMapRef:
                name: {{ .Values.name }}-configmap
      volumes:
        - name: app-cfg
          configMap:
            name: {{ .Values.name }}-configmap
            items:
              - key: {{ .Values.routingTableFile }}
                path: {{ .Values.routingTableFile }}
              - key: {{ .Values.vlevelFile }}
                path: {{ .Values.vlevelFile }}
        - name: task-pv-storage
          persistentVolumeClain:
            claimName: task-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}-rmr-service
  labels:
    xappRelease: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Values.name }}
  type: NodePort
  ports:
    - name: rmr
      protocol: TCP
      port: {{ .Values.rmrPort }}
      targetPort: {{ .Values.rmrPort }}
    - name: rtg
      protocol: TCP
      port: {{ .Values.rtgPort }}
      targetPort: {{ .Values.rtgPort }}
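The image line in the deployment uses Helm's whitespace-chomping syntax (`{{- ... -}}`) so the three values concatenate with no stray spaces. Roughly what the template engine produces, sketched with hypothetical values (the real ones live in the chart's values.yaml):

```python
# Hypothetical chart values for illustration only.
values = {"repository": "example.registry.io/", "image": "kpimon", "tag": "1.0.0"}

# "{{ .Values.repository -}}{{- .Values.image -}}:{{- .Values.tag }}"
# renders to repository + image + ":" + tag with no whitespace in between.
image = f'{values["repository"]}{values["image"]}:{values["tag"]}'
print(image)  # example.registry.io/kpimon:1.0.0
```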
I'm trying to map a local directory from the Pod's file system. I followed this documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
When I try to apply the deployment, an error occurs and I cannot understand what the problem is.
I have a Mongo deployment with a metrics exporter sidecar. When I load the associated dashboard in Grafana, the metrics are not showing; it appears the exporter is not scraping all metrics.
What is working
Only the Mongo UP metric, i.e. mongodb_up{env=~""}, and part of the Server Metrics are working.
What is not working
All of the following metrics in the dashboard are showing no data: opscounters, Replication Set Metrics, Cursor Metrics.
My configuration:
Deployment.yaml (using Percona MongoDB Exporter)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-prom
  namespace: "labs"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.12
    app.kubernetes.io/instance: mongodb-prom
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: mongodb
      app.kubernetes.io/instance: mongodb-prom
      app.kubernetes.io/component: mongodb
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongodb
        helm.sh/chart: mongodb-12.1.12
        app.kubernetes.io/instance: mongodb-prom
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: mongodb
    spec:
      serviceAccountName: mongodb-prom
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: mongodb
                    app.kubernetes.io/instance: mongodb-prom
                    app.kubernetes.io/component: mongodb
                namespaces:
                  - "labs"
                topologyKey: kubernetes.io/hostname
              weight: 100
      securityContext:
        fsGroup: 1001
        sysctls: []
      containers:
        - name: mongodb
          image: docker.io/bitnami/mongodb:5.0.9-debian-10-r15
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MONGODB_ROOT_USER
              value: "root"
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-prom
                  key: mongodb-root-password
            - name: ALLOW_EMPTY_PASSWORD
              value: "no"
            - name: MONGODB_SYSTEM_LOG_VERBOSITY
              value: "0"
            - name: MONGODB_DISABLE_SYSTEM_LOG
              value: "no"
            - name: MONGODB_DISABLE_JAVASCRIPT
              value: "no"
            - name: MONGODB_ENABLE_JOURNAL
              value: "yes"
            - name: MONGODB_PORT_NUMBER
              value: "27017"
            - name: MONGODB_ENABLE_IPV6
              value: "no"
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: "no"
          ports:
            - name: mongodb
              containerPort: 27017
          volumeMounts:
            - name: datadir
              mountPath: /bitnami/mongodb
            - name: datadir
              mountPath: /tmp
        - name: metrics
          image: percona/mongodb_exporter:0.35
          imagePullPolicy: "IfNotPresent"
          args:
            - "--mongodb.direct-connect=false"
            - "--mongodb.uri=mongodb://username:password@mongodb-prom/admin"
          ports:
            - name: metrics
              containerPort: 9216
          resources:
            requests:
              memory: 128Mi
              cpu: 250m
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: mongodb
metrics-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-metrics
  namespace: "labs"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.12
    app.kubernetes.io/instance: mongodb-prom
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: metrics
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '9216'
    prometheus.io/scrape: "true"
spec:
  type: ClusterIP
  ports:
    - port: 9216
      targetPort: metrics
      protocol: TCP
      name: http-metrics
  selector:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: mongodb-prom
    app.kubernetes.io/component: mongodb
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-prom
  namespace: "labs"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.12
    app.kubernetes.io/instance: mongodb-prom
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: "mongodb"
      port: 27017
      targetPort: mongodb
  selector:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: mongodb-prom
    app.kubernetes.io/component: mongodb
What I have tried
I have tried using the Bitnami MongoDB Exporter version for the sidecar. It yields exactly the same results:
- name: metrics
  image: docker.io/bitnami/mongodb-exporter:0.32.0-debian-11-r5
  imagePullPolicy: "IfNotPresent"
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
  command:
    - /bin/bash
    - -ec
  args:
    - |
      /bin/mongodb_exporter --web.listen-address ":9216" --mongodb.uri "mongodb://$MONGODB_ROOT_USER:$(echo $MONGODB_ROOT_PASSWORD | sed -r "s/@/%40/g;s/:/%3A/g")@localhost:27017/admin?"
  env:
    - name: MONGODB_ROOT_USER
      value: "root"
    - name: MONGODB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mongodb-prom
          key: mongodb-root-password
  ports:
    - name: metrics
      containerPort: 9216
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 15
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
    httpGet:
      path: /
      port: metrics
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 1
    httpGet:
      path: /
      port: metrics
  resources:
    limits: {}
    requests: {}
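The sed pipeline above percent-encodes characters that are reserved inside a MongoDB connection string ('@' becomes %40, ':' becomes %3A) so the password does not break URI parsing. The same escaping in Python, with a hypothetical password, purely for illustration:

```python
from urllib.parse import quote

password = "p@ss:word"  # hypothetical password containing reserved characters

# safe="" forces every reserved character to be percent-encoded.
encoded = quote(password, safe="")
print(encoded)  # p%40ss%3Aword

# The encoded form can then be embedded in the connection URI.
uri = f"mongodb://root:{encoded}@localhost:27017/admin"
```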
I am following the reference implementation given here, and I have added the clusterMonitor role to the db user as below:
db.grantRolesToUser("root",[{ role: "clusterMonitor", db: "admin" }, { role: "read", db: "local" }])
and when I execute db.getUsers(); I get
[
  {
    "_id" : "admin.root",
    "userId" : UUID("d8e181fc-6429-447e-bbcb-cec252f0792f"),
    "user" : "root",
    "db" : "admin",
    "roles" : [
      {
        "role" : "clusterMonitor",
        "db" : "admin"
      },
      {
        "role" : "root",
        "db" : "admin"
      },
      {
        "role" : "read",
        "db" : "local"
      }
    ],
    "mechanisms" : [
      "SCRAM-SHA-1",
      "SCRAM-SHA-256"
    ]
  }
]
Even after granting these roles, the dashboard is still not loading the missing metrics.
I have updated the Prometheus and Grafana versions in the dashboard's JSON to match my installed versions (could this affect anything?).
The default dashboard I am using is here.
What am I missing?
For anyone struggling with this: I had to add the --collect-all and --compatible-mode flags for the container to pull the metrics, as in the configuration below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-prom
  namespace: "labs"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.12
    app.kubernetes.io/instance: mongodb-prom
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: mongodb
      app.kubernetes.io/instance: mongodb-prom
      app.kubernetes.io/component: mongodb
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongodb
        helm.sh/chart: mongodb-12.1.12
        app.kubernetes.io/instance: mongodb-prom
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: mongodb
    spec:
      serviceAccountName: mongodb-prom
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: mongodb
                    app.kubernetes.io/instance: mongodb-prom
                    app.kubernetes.io/component: mongodb
                namespaces:
                  - "labs"
                topologyKey: kubernetes.io/hostname
              weight: 100
      securityContext:
        fsGroup: 1001
        sysctls: []
      containers:
        - name: mongodb
          image: docker.io/bitnami/mongodb:5.0.9-debian-10-r15
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MONGODB_ROOT_USER
              value: "root"
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-prom
                  key: mongodb-root-password
            - name: ALLOW_EMPTY_PASSWORD
              value: "no"
            - name: MONGODB_SYSTEM_LOG_VERBOSITY
              value: "0"
            - name: MONGODB_DISABLE_SYSTEM_LOG
              value: "no"
            - name: MONGODB_DISABLE_JAVASCRIPT
              value: "no"
            - name: MONGODB_ENABLE_JOURNAL
              value: "yes"
            - name: MONGODB_PORT_NUMBER
              value: "27017"
            - name: MONGODB_ENABLE_IPV6
              value: "no"
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: "no"
          ports:
            - name: mongodb
              containerPort: 27017
          volumeMounts:
            - name: datadir
              mountPath: /bitnami/mongodb
            - name: datadir
              mountPath: /tmp
        - name: metrics
          image: docker.io/bitnami/mongodb-exporter:0.32.0-debian-11-r5
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          command:
            - /bin/bash
            - -ec
          args:
            - |
              /bin/mongodb_exporter --web.listen-address ":9216" --mongodb.uri "mongodb://$MONGODB_ROOT_USER:$(echo $MONGODB_ROOT_PASSWORD | sed -r "s/@/%40/g;s/:/%3A/g")@localhost:27017/admin?" --collect-all --compatible-mode
          env:
            - name: MONGODB_ROOT_USER
              value: "root"
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-prom
                  key: mongodb-root-password
          ports:
            - name: metrics
              containerPort: 9216
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 5
            httpGet:
              path: /
              port: metrics
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 1
            httpGet:
              path: /
              port: metrics
          resources:
            limits: {}
            requests: {}
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: mongodb
The solution is from the following thread:
GithubIssue
I want to adapt the example here: https://www.enterprisedb.com/blog/how-deploy-pgadmin-kubernetes
I want to change the following:
the host should be localhost
the username/password for accessing the PostgreSQL server via pgAdmin 4 should be postgres / admin
But I don't know what should be changed in the files.
pgadmin-secret.yaml file :
(the password is SuperSecret)
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: pgadmin
data:
  pgadmin-password: U3VwZXJTZWNyZXQ=
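Secret data values are base64-encoded; the encoded password above can be reproduced (or replaced with a new one) like this:

```python
import base64

plaintext = "SuperSecret"
encoded = base64.b64encode(plaintext.encode()).decode()
print(encoded)  # U3VwZXJTZWNyZXQ=

# Decoding confirms the round trip.
print(base64.b64decode(encoded).decode())  # SuperSecret
```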
pgadmin-configmap.yaml file :
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgadmin-config
data:
  servers.json: |
    {
      "Servers": {
        "1": {
          "Name": "PostgreSQL DB",
          "Group": "Servers",
          "Port": 5432,
          "Username": "postgres",
          "Host": "postgres.domain.com",
          "SSLMode": "prefer",
          "MaintenanceDB": "postgres"
        }
      }
    }
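To point pgAdmin at a different server, only the Host (and, if needed, Username) fields of servers.json have to change. A sketch of the edit the question describes (Host becomes localhost; Username is already postgres):

```python
import json

# The servers.json payload from the ConfigMap above.
servers = {
    "Servers": {
        "1": {
            "Name": "PostgreSQL DB",
            "Group": "Servers",
            "Port": 5432,
            "Username": "postgres",
            "Host": "postgres.domain.com",
            "SSLMode": "prefer",
            "MaintenanceDB": "postgres",
        }
    }
}

# The question asks for the server to be reached as localhost.
servers["Servers"]["1"]["Host"] = "localhost"
print(json.dumps(servers, indent=2))
```

Note that servers.json does not carry the server password; pgAdmin prompts for it at connect time (or can read it from a pass file), which is separate from the PGADMIN_DEFAULT_PASSWORD login secret.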
pgadmin-service.yaml file
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: http
  selector:
    app: pgadmin
  type: NodePort
The pgadmin-statefulset.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pgadmin
spec:
  serviceName: pgadmin-service
  podManagementPolicy: Parallel
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: pgadmin
          image: dpage/pgadmin4:5.4
          imagePullPolicy: Always
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: user@domain.com
            - name: PGADMIN_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgadmin
                  key: pgadmin-password
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: pgadmin-config
              mountPath: /pgadmin4/servers.json
              subPath: servers.json
              readOnly: true
            - name: pgadmin-data
              mountPath: /var/lib/pgadmin
      volumes:
        - name: pgadmin-config
          configMap:
            name: pgadmin-config
  volumeClaimTemplates:
    - metadata:
        name: pgadmin-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi
The following commands work OK with the original YAML files, and I get the pgAdmin 4 login form in a web page.
But even with the unchanged files, I get a bad username/password error.
I have this chart of a personal project deployed in minikube:
---
# Source: frontend/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: xxx-app-service
spec:
  selector:
    app: xxx-app
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
# Source: frontend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '3'
  creationTimestamp: '2022-06-19T21:57:01Z'
  generation: 3
  labels:
    app: xxx-app
  name: xxx-app
  namespace: default
  resourceVersion: '43299'
  uid: 7c43767a-abbd-4806-a9d2-6712847a0aad
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xxx-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: xxx-app
    spec:
      containers:
        - image: "registry.gitlab.com/esavara/xxx/wm:staging"
          name: frontend
          imagePullPolicy: Always
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 3
          env:
            - name: PORT
              value: "3000"
          resources: {}
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
# Source: frontend/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: '2022-06-19T22:28:58Z'
  generation: 1
  name: xxx-app
  namespace: default
  resourceVersion: '44613'
  uid: b58dcd17-ee1f-42e5-9dc7-d915a21f97b5
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: "xxx-app-service"
                port:
                  number: 3000
            path: /
            pathType: Prefix
status:
  loadBalancer:
    ingress:
      - ip: 192.168.39.80
---
# Source: frontend/templates/gitlab-registry-sealed.json
{
  "kind": "SealedSecret",
  "apiVersion": "bitnami.com/v1alpha1",
  "metadata": {
    "name": "regcred",
    "namespace": "default",
    "creationTimestamp": null
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "regcred",
        "namespace": "default",
        "creationTimestamp": null
      },
      "type": "kubernetes.io/dockerconfigjson",
      "data": null
    },
    "encryptedData": {
      ".dockerconfigjson": "AgBpHoQw1gBq0IFFYWnxlBLLYl1JC23TzbRWGgLryVzEDP8p+NAGjngLFZmtklmCEHLK63D9pp3zL7YQQYgYBZUjpEjj8YCTOhvnjQIg7g+5b/CPXWNoI5TuNexNJFKFv1mN5DzDk9np/E69ogRkDJUvUsbxvVXs6/TKGnRbnp2NuI7dTJ18QgGXdLXs7S416KE0Yid9lggw06JrDN/OSxaNyUlqFGcRJ6OfGDAHivZRV1Kw9uoX06go3o+AjVd6eKlDmmvaY/BOc52bfm7pY2ny1fsXGouQ7R7V1LK0LcsCsKdAkg6/2DU3v32mWZDKJgKkK5efhTQr1KGOBoLuuHKX6nF5oMA1e1Ii3wWe77lvWuvlpaNecCBTc7im+sGt0dyJb4aN5WMLoiPGplGqnuvNqEFa/nhkbwXm5Suke2FQGKyzDsMIBi9p8PZ7KkOJTR1s42/8QWVggTGH1wICT1RzcGzURbanc0F3huQ/2RcTmC4UvYyhCUqr3qKrbAIYTNBayfyhsBaN5wVRnV5LiPxjLbbOmObSr/8ileJgt1HMQC3V9pVLZobaJvlBjr/mmNlrF118azJOFY+a/bqzmtBWwlcvVuel/EaOb8uA8mgwfnbnShMinL1OWTHoj+D0ayFmUdsQgMYwRmStnC7x/6OXThmBgfyrLguzz4V2W8O8qbdDz+O5QoyboLUuR9WQb/ckpRio2qa5tidnKXzZzbWzFEevv9McxvCC1+ovvw4IullU8ra3FutnTentmPHKU2OPr1EhKgFKIX20u8xZaeIJYLCiZlVQohylpjkHnBZo1qw+y1CTiDwytunpmkoGGAXLx++YQSjEJEo889PmVVlSwZ8p/Rdusiz1WbgKqFt27yZaOfYzR2bT++HuB5x6zqfK6bbdV/UZndXs"
    }
  }
}
I'm trying to use Telepresence to redirect traffic from the deployed application to a Docker container that has my project mounted inside with hot reloading, so I can continue development inside Kubernetes. But running telepresence intercept xxx-app-service --port 3000:3000 --env-file /tmp/xxx-app-service.env fails with the following error:
telepresence: error: workload "xxx-app-service.default" not found
Why is this happening, and how do I fix it?
Thought I'd post this because it might help someone. I couldn't find a Kubernetes NiFi setup without a Helm package, so I have prepared the below configuration YAML for a Kubernetes NiFi cluster setup.
Here's the link for the Zookeeper cluster setup in AKS.
Please comment if you see any issues in the configuration or if you have suggestions. Increase the disk storage configuration according to your usage.
apiVersion: v1
kind: Service
metadata:
  name: nifi-hs
  labels:
    app: nifi
spec:
  ports:
    - port: 1025
      name: nodeport
    - port: 8080
      name: client
  clusterIP: None
  selector:
    app: nifi
---
apiVersion: v1
kind: Service
metadata:
  name: nifi-cs
  labels:
    app: nifi
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: nifi
spec:
  ports:
    - port: 80
      targetPort: 8080
      name: client
  selector:
    app: nifi
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nifi-pdb
spec:
  selector:
    matchLabels:
      app: nifi
  maxUnavailable: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nifi-sc
  selfLink: /apis/storage.k8s.io/v1/storageclasses/nifi-sc
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
spec:
  selector:
    matchLabels:
      app: nifi
  serviceName: nifi-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: nifi
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - k: "app"
                    operator: In
                    values:
                      - nifi
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nifi
          image: "apache/nifi:1.13.0"
          env:
            - name: NIFI_CLUSTER_IS_NODE
              value: "true"
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NIFI_CLUSTER_ADDRESS
              value: $(HOSTNAME).nifi-hs
            - name: NIFI_CLUSTER_NODE_PROTOCOL_PORT
              value: "1025"
            - name: NIFI_WEB_HTTP_HOST
              value: $(HOSTNAME).nifi-hs.ns1.svc.cluster.local
            #- name: NIFI_WEB_HTTP_PORT
            #  value: "80"
            - name: NIFI_CLUSTER_NODE_PROTOCOL_MAX_THREADS
              value: "100"
            - name: NIFI_ZK_CONNECT_STRING
              value: "zk-cs:2181"
            - name: NIFI_ELECTION_MAX_CANDIDATES
              value: "3"
          ports:
            - containerPort: 8080
              name: client
            - containerPort: 1025
              name: nodeport
          volumeMounts:
            - name: nifi-database
              mountPath: "/opt/nifi/nifi-current/database_repository"
            - name: nifi-flowfile
              mountPath: "/opt/nifi/nifi-current/flowfile_repository"
            - name: nifi-content
              mountPath: "/opt/nifi/nifi-current/content_repository"
            - name: nifi-provenance
              mountPath: "/opt/nifi/nifi-current/provenance_repository"
            - name: nifi-state
              mountPath: "/opt/nifi/nifi-current/state"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: nifi-database
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-flowfile
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-content
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-provenance
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-state
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
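The NIFI_WEB_HTTP_HOST value above relies on the headless service (nifi-hs): each StatefulSet pod gets a stable DNS name of the form pod.service.namespace.svc.cluster.local. A sketch of how those names are composed, assuming the namespace is ns1 as in the manifest:

```python
def pod_fqdn(pod: str, service: str, namespace: str) -> str:
    """Stable DNS name a headless service gives a StatefulSet pod."""
    return f"{pod}.{service}.{namespace}.svc.cluster.local"

# StatefulSet "nifi" with 3 replicas produces pods nifi-0, nifi-1, nifi-2.
for ordinal in range(3):
    print(pod_fqdn(f"nifi-{ordinal}", "nifi-hs", "ns1"))
# nifi-0.nifi-hs.ns1.svc.cluster.local
# nifi-1.nifi-hs.ns1.svc.cluster.local
# nifi-2.nifi-hs.ns1.svc.cluster.local
```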
The labelSelector in the affinity block above is missing a few characters: k: should be key:. Below is the updated, working YAML block for the StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
spec:
  selector:
    matchLabels:
      app: nifi
  serviceName: nifi-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: nifi
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nifi
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nifi
          image: "apache/nifi:1.13.0"
          env:
            - name: NIFI_CLUSTER_IS_NODE
              value: "true"
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NIFI_CLUSTER_ADDRESS
              value: $(HOSTNAME).nifi-hs
            - name: NIFI_CLUSTER_NODE_PROTOCOL_PORT
              value: "1025"
            - name: NIFI_WEB_HTTP_HOST
              value: $(HOSTNAME).nifi-hs.ns1.svc.cluster.local
            #- name: NIFI_WEB_HTTP_PORT
            #  value: "80"
            - name: NIFI_CLUSTER_NODE_PROTOCOL_MAX_THREADS
              value: "100"
            - name: NIFI_ZK_CONNECT_STRING
              value: "zk-cs:2181"
            - name: NIFI_ELECTION_MAX_CANDIDATES
              value: "3"
          ports:
            - containerPort: 8080
              name: client
            - containerPort: 1025
              name: nodeport
          volumeMounts:
            - name: nifi-database
              mountPath: "/opt/nifi/nifi-current/database_repository"
            - name: nifi-flowfile
              mountPath: "/opt/nifi/nifi-current/flowfile_repository"
            - name: nifi-content
              mountPath: "/opt/nifi/nifi-current/content_repository"
            - name: nifi-provenance
              mountPath: "/opt/nifi/nifi-current/provenance_repository"
            - name: nifi-state
              mountPath: "/opt/nifi/nifi-current/state"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: nifi-database
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-flowfile
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-content
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-provenance
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: nifi-state
      spec:
        storageClassName: "nifi-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
I have created a job that runs the following command (job-command.sh):
cat << EOF | kubectl apply -f -
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "my-secret-name",
    "namespace": "my-namespace"
  },
  "data": {
    "cert": "my-certificate-data"
  },
  "type": "Opaque"
}
EOF
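One thing worth double-checking in the manifest above: values under data must be base64-encoded (stringData is the field that accepts plain text). Producing the value that belongs in .data.cert, with a hypothetical certificate payload:

```python
import base64

# Hypothetical certificate bytes; the real data comes from the cert file.
cert_bytes = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"

encoded = base64.b64encode(cert_bytes).decode()

# This string is what goes into the "cert" key of the Secret's data field.
assert base64.b64decode(encoded) == cert_bytes
```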
The job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: "my-name-job1"
  namespace: "my-namespace"
  labels:
    app: "my-name"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  template:
    metadata:
      labels:
        app: my-name
    spec:
      serviceAccountName: my-name-sa
      restartPolicy: Never
      containers:
        - name: myrepo
          image: "repo:image"
          imagePullPolicy: "Always"
          command: ["sh", "-c"]
          args:
            - |-
              /job-command.sh
And the rbac like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-name-sa
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-name-rolebinding
  namespace: my-namespace
  labels:
    app: "{{ .Chart.Name }}"
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-name-role
subjects:
  - kind: ServiceAccount
    name: my-name-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-name-role
  namespace: my-namespace
  labels:
    app: "{{ .Chart.Name }}"
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
rules:
  - apiGroups: ['']
    resources: ['secrets']
    verbs: ["get", "watch", "list", "create", "update", "patch"]
I'm getting the following error:
"error: error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false"
I don't understand what's wrong; the service account used by the job should give it access to create secrets.