I have the following file using which I'm setting up Prometheus on my Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: plant-simulator-monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      name: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
When I apply this to my Kubernetes cluster, I see the following error:
ts=2020-03-16T21:40:33.123641578Z caller=sync.go:165 component=daemon err="plant-simulator-monitoring:deployment/prometheus-deployment: running kubectl: The Deployment \"prometheus-deployment\" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{\"app\":\"prometheus-server\"}: `selector` does not match template `labels`"
I could not see anything wrong with my YAML file. Is there something that I'm missing?
As I mentioned in the comments, you have an issue with mismatched labels.
In spec.selector.matchLabels you have name: prometheus-server, while in spec.template.metadata.labels you have app: prometheus-server. These values need to match. Below is what I get when I used your YAML:
$ kubectl apply -f deploymentoriginal.yaml
The Deployment "prometheus-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"prometheus-server"}: `selector` does not match template `labels`
And the output when I used the YAML below, with matching labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: plant-simulator-monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      name: prometheus-server
  template:
    metadata:
      labels:
        name: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
$ kubectl apply -f deploymentselectors.yaml
deployment.apps/prometheus-deployment created
More detailed info about selectors and labels can be found in the official Kubernetes docs.
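Once the corrected Deployment applies, you can also confirm that the selector and the template labels line up directly from the live object; a quick check (assuming kubectl is pointed at the right cluster) could be:

kubectl -n plant-simulator-monitoring get deployment prometheus-deployment \
  -o jsonpath='{.spec.selector.matchLabels}{"\n"}{.spec.template.metadata.labels}{"\n"}'

Both lines of output should show the same key/value pair.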
There is a mismatch between the label in the selector (name: prometheus-server) and in the template metadata (app: prometheus-server). The below should work:
selector:
  matchLabels:
    app: prometheus-server
template:
  metadata:
    labels:
      app: prometheus-server
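As a quick sanity check after applying, the pods created by the Deployment should be selectable with that label (namespace and label taken from the question):

kubectl -n plant-simulator-monitoring get pods -l app=prometheus-server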
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: mysqldb-1
  labels:
    app.kubernetes.io: mysqldb-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: mysqldb-2
  labels:
    app.kubernetes.io: mysqldb-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: mysqldb-service
  labels:
    app.kubernetes.io: mysqldb
spec:
  ports:
    - name: "5306"
      port: 5306
      targetPort: 3306
  selector:
    app.kubernetes.io: mysqldb
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldb
  labels:
    app.kubernetes.io: mysqldb
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io: mysqldb
  template:
    metadata:
      labels:
        app.kubernetes.io: mysqldb
    spec:
      containers:
        -name: mysqldb
          image: mysql:8.0
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysqldb-1
            - mountPath: /docker-entrypoint-initdb.d/init.sql
              name: mysqldb-2
      restartPolicy: Always
      volumes:
        - name: mysqldb-1
          persistentVolumeClaim:
            claimName: mysqldb-1
        - name: mysqldb-2
          persistentVolumeClaim:
            claimName: mysqldb-2
status: {}
I got this error
Error from server (BadRequest): error when creating "mysqldb.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.spec"
How do I go about ignoring the errors for an included YAML file?
There are indentation issues in your Deployment resource. Also, add resource limits for the containers.
Try this one:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldb
  labels:
    app.kubernetes.io: mysqldb
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io: mysqldb
  template:
    metadata:
      labels:
        app.kubernetes.io: mysqldb
    spec:
      containers:
        - name: mysqldb
          image: mysql:8.0
          ports:
            - containerPort: 3306
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysqldb-1
            - mountPath: /docker-entrypoint-initdb.d/init.sql
              name: mysqldb-2
      restartPolicy: Always
      volumes:
        - name: mysqldb-1
          persistentVolumeClaim:
            claimName: mysqldb-1
        - name: mysqldb-2
          persistentVolumeClaim:
            claimName: mysqldb-2
status: {}
NOTE: You can use a Kubernetes IDE extension to easily surface errors and warnings for issues in your YAML resources.
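Besides an IDE extension, this kind of indentation or unknown-field problem can usually be caught before it reaches the API server with a client-side dry run (the file name is the one from your error message):

kubectl apply --dry-run=client -f mysqldb.yaml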
I have installed Nexus on my K3s Raspberry Pi cluster, for Kubernetes learning purposes, with the following setup. First I created a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
  namespace: dev-ops
spec:
  serviceName: "nexus"
  replicas: 1
  selector:
    matchLabels:
      app: nexus-server
  template:
    metadata:
      labels:
        app: nexus-server
    spec:
      containers:
        - name: nexus
          image: klo2k/nexus3:latest
          env:
            - name: MAX_HEAP
              value: "800m"
            - name: MIN_HEAP
              value: "300m"
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexusstorage
              mountPath: /sonatype-work
      volumes:
        - name: nexusstorage
          persistentVolumeClaim:
            claimName: nexusstorage
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexusstorage
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fsType: "ext4"
  diskSelector: "ssd"
  nodeSelector: "ssd"
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexusstorage
  namespace: dev-ops
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nexusstorage
  resources:
    requests:
      storage: 50Gi
Service
apiVersion: v1
kind: Service
metadata:
  name: nexus-server
  namespace: dev-ops
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8081'
spec:
  selector:
    app: nexus-server
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 32000
This setup spins up Nexus, but if I restart the pod the data does not persist and I have to recreate all the configuration and users from scratch.
What am I missing in this case?
UPDATE
I got it working: Nexus needs the right permissions on the mounted data directory. The working StatefulSet looks as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
  namespace: dev-ops
spec:
  serviceName: "nexus"
  replicas: 1
  selector:
    matchLabels:
      app: nexus-server
  template:
    metadata:
      labels:
        app: nexus-server
    spec:
      securityContext:
        runAsUser: 200
        runAsGroup: 200
        fsGroup: 200
      containers:
        - name: nexus
          image: klo2k/nexus3:latest
          env:
            - name: MAX_HEAP
              value: "800m"
            - name: MIN_HEAP
              value: "300m"
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexus-storage
              mountPath: /nexus-data
      volumes:
        - name: nexus-storage
          persistentVolumeClaim:
            claimName: nexus-storage
The important snippet to get it working:
securityContext:
  runAsUser: 200
  runAsGroup: 200
  fsGroup: 200
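To confirm the securityContext actually took effect, one option is to check the user and the ownership of the data directory from inside the pod (the pod name nexus-0 assumes the single-replica StatefulSet named nexus):

kubectl -n dev-ops exec -it nexus-0 -- id
kubectl -n dev-ops exec -it nexus-0 -- ls -ld /nexus-data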
I'm not familiar with that image, although checking Docker Hub, they mention using a Dockerfile similar to Sonatype's. So I would change the mount point for your volume to /nexus-data.
This is the default path for storing data (they set this env var, then declare a VOLUME), which we can confirm by looking at the repository that most likely produced your ARM-capable image.
And following up on your last comment, let's also try to mount it at /opt/sonatype/sonatype-work/nexus3.
In your StatefulSet, change volumeMounts to this:
volumeMounts:
  - name: nexusstorage
    mountPath: /nexus-data
  - name: nexusstorage
    mountPath: /opt/sonatype/sonatype-work/nexus3
volumes:
  - name: nexusstorage
    persistentVolumeClaim:
      claimName: nexusstorage
Although the second volumeMount entry should not be necessary, as far as I understand. Maybe something's wrong with your storage provider?
Are you sure your PVC is writable? Reverting back to your initial configuration, enter your pod (kubectl exec -it) and try to write a file at the root of your PVC.
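For example, something along these lines (the pod name nexus-0 and the /sonatype-work mount path are taken from your original StatefulSet):

kubectl -n dev-ops exec -it nexus-0 -- sh -c 'touch /sonatype-work/write-test && ls -l /sonatype-work'

If the touch fails with a permission error, the volume is mounted but not writable by the Nexus user.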
I am using the command below to restart Pods in a StatefulSet:
kubectl rollout restart statefulset ts
If I have to introduce a delay between pod rotations, is there an argument or any other method to achieve it? I am using a sidecar that writes the Pod IP address to a configuration file; if the Pod restarts before the IP address is updated in the config file, the service is not healthy. I am looking for a way to introduce a delay between pod restarts/rotations.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: typesense
  namespace: typesense
  labels:
    service: typesense
    app: typesense
spec:
  serviceName: ts
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      service: typesense
      app: typesense
  template:
    metadata:
      labels:
        service: typesense
        app: typesense
    spec:
      serviceAccountName: typesense-service-account
      securityContext:
        fsGroup: 2000
        runAsUser: 10000
        runAsGroup: 3000
        runAsNonRoot: true
      terminationGracePeriodSeconds: 300
      containers:
        - name: typesense
          envFrom:
            # - configMapRef:
            #     name: typesense-config
            - secretRef:
                name: typesense-secret
          image: typesense/typesense:0.23.0.rc43
          command:
            - "/opt/typesense-server"
            - "-d"
            - "/usr/share/typesense/data"
            - "--api-port"
            - "8108"
            - "--peering-port"
            - "8107"
            - "--nodes"
            - "/usr/share/typesense/nodes"
          ports:
            - containerPort: 8108
              name: http
          resources:
            requests:
              memory: 100Mi
              cpu: "100m"
            limits:
              memory: 1Gi
              cpu: "1000m"
          volumeMounts:
            - name: nodeslist
              mountPath: /usr/share/typesense
            - name: data
              mountPath: /usr/share/typesense/data
        - name: typesense-node-resolver
          image: alasano/typesense-node-resolver
          command:
            - "/opt/tsns"
            - "-namespace=typesense"
          volumeMounts:
            - name: nodeslist
              mountPath: /usr/share/typesense
      volumes:
        - name: nodeslist
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: nfs
        resources:
          requests:
            storage: 1Gi
You can find the full manifest here.
Maybe use an init container?
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: typesense
  namespace: typesense
  labels:
    service: typesense
    app: typesense
spec:
  serviceName: ts
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      service: typesense
      app: typesense
  template:
    metadata:
      labels:
        service: typesense
        app: typesense
    spec:
      serviceAccountName: typesense-service-account
      securityContext:
        fsGroup: 2000
        runAsUser: 10000
        runAsGroup: 3000
        runAsNonRoot: true
      terminationGracePeriodSeconds: 300
      initContainers:
        - name: init-myservice
          image: busybox:1.34.1
          command: [
            "echo",
            "ip",
            "to",
            "config",
            "file",
          ]
      containers:
        - name: typesense
          envFrom:
            # - configMapRef:
            #     name: typesense-config
            - secretRef:
                name: typesense-secret
          image: typesense/typesense:0.23.0.rc43
          command:
            - "/opt/typesense-server"
            - "-d"
            - "/usr/share/typesense/data"
            - "--api-port"
            - "8108"
            - "--peering-port"
            - "8107"
            - "--nodes"
            - "/usr/share/typesense/nodes"
          ports:
            - containerPort: 8108
              name: http
          resources:
            requests:
              memory: 100Mi
              cpu: "100m"
            limits:
              memory: 1Gi
              cpu: "1000m"
          volumeMounts:
            - name: nodeslist
              mountPath: /usr/share/typesense
            - name: data
              mountPath: /usr/share/typesense/data
        - name: typesense-node-resolver
          image: alasano/typesense-node-resolver
          command:
            - "/opt/tsns"
            - "-namespace=typesense"
          volumeMounts:
            - name: nodeslist
              mountPath: /usr/share/typesense
      volumes:
        - name: nodeslist
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: nfs
        resources:
          requests:
            storage: 1Gi
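The echo command above is only a placeholder. If the goal is simply to space out the rotation, an init container that sleeps before the main containers start would delay each replacement pod becoming ready, and the StatefulSet controller will not move on to the next pod until the current one is ready. A minimal sketch (the 60-second value is an assumption you would tune):

      initContainers:
        - name: startup-delay
          image: busybox:1.34.1
          command: ["sh", "-c", "sleep 60"]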
Maybe you could try terminationGracePeriodSeconds and a preStop hook with the following command.
command: [ "/bin/bash", "-c", "sleep 30" ]
It introduces a 30-second delay after your container receives the stop signal from the control plane. With terminationGracePeriodSeconds, the control plane won't force-kill your container within that period.
The complete example should look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  serviceName: "test"
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: nginx
          name: test
          lifecycle:
            preStop:
              exec:
                command: [ "/bin/bash", "-c", "sleep 60" ]
      terminationGracePeriodSeconds: 60
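With that in place you can trigger the rotation as before and watch the pods cycle with the delay applied (the StatefulSet name typesense is taken from the manifest in the question):

kubectl -n typesense rollout restart statefulset typesense
kubectl -n typesense get pods -w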
I am trying to update my pod's time to the Asia/Kolkata zone, following the Kubernetes approach of setting a pod's timezone with a command and argument. However, the time still remains UTC; only the time zone name gets updated from UTC to Asia.
I was able to fix it using volume mounts, as shown below. Create a config map and apply the deployment YAML:
kubectl create configmap tz --from-file=/usr/share/zoneinfo/Asia/Kolkata -n <required namespace>
Why is the environment variable method not working? And if we use the volume-mount approach and the pod gets evicted from one host to another, will that affect the mounted time after the eviction?
The environment-variable deployment YAML, which does not update the time, is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connector
  labels:
    app: connector
  namespace: clients
spec:
  replicas: 1
  selector:
    matchLabels:
      app: connector
  template:
    metadata:
      labels:
        app: connector
    spec:
      containers:
        - image: connector
          name: connector
          resources:
            requests:
              memory: "32Mi" # "64M"
              cpu: "250m"
            limits:
              memory: "64Mi" # "128M"
              cpu: "500m"
          ports:
            - containerPort: 3307
              protocol: TCP
          env:
            - name: TZ
              value: Asia/Kolkata
          volumeMounts:
            - name: connector-rd
              mountPath: /home/mongobi/mongosqld.conf
              subPath: mongosqld.conf
      volumes:
        - name: connector-rd
          configMap:
            name: connector-rd
            items:
              - key: mongod.conf
The volume-mount deployment YAML is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connector
  labels:
    app: connector
  namespace: clients
spec:
  replicas: 1
  selector:
    matchLabels:
      app: connector
  template:
    metadata:
      labels:
        app: connector
    spec:
      containers:
        - image: connector
          name: connector
          resources:
            requests:
              memory: "32Mi" # "64M"
              cpu: "250m"
            limits:
              memory: "64Mi" # "128M"
              cpu: "500m"
          ports:
            - containerPort: 3307
              protocol: TCP
          volumeMounts:
            - name: tz-config
              mountPath: /etc/localtime
            - name: connector-rd
              mountPath: /home/mongobi/mongosqld.conf
              subPath: mongosqld.conf
      volumes:
        - name: connector-rd
          configMap:
            name: connector-rd
            items:
              - key: mongod.conf
                path: mongosqld.conf
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Kolkata
In this scenario you need to set the type attribute to File for the hostPath volume in the deployment configuration. The configuration below should work for you:
- name: tz-config
  hostPath:
    path: /usr/share/zoneinfo/Asia/Kolkata
    type: File
Simply setting the TZ env variable in the deployment works for me.
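Whichever approach you use, a quick way to verify what the container actually sees (assuming the Deployment from the question; kubectl exec on deploy/connector picks one of its pods) is:

kubectl -n clients exec deploy/connector -- date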
I'm trying to add ciao to my single-node Kubernetes cluster, and every time I run the kubectl apply -f command, I keep running into the error "error converting YAML to JSON: YAML: line 12: did not find expected key". I looked at other solutions but they were no help. Any help will be appreciated.
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: Secret
metadata:
  name: ciao
  namespace: monitoring
data:
  BASIC_AUTH_USERNAME: YWRtaW4=
  BASIC_AUTH_PASSWORD: cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
    selector:
      labels:
        app: ciao
    spec:
      containers:
      - image: brotandgames/ciao:latest
        imagePullPolicy: IfNotPresent
        name: ciao
        volumeMounts: # Omit if you do not have persistent volumes
        - mountPath: /app/db/sqlite/
          name: persistent-volume
          subPath: ciao
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 256Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 400m
        envFrom:
        - secretRef:
            name: ciao
---
apiVersion: v1
kind: Service
metadata:
  name: ciao
  namespace: monitoring
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: NodePort
  selector:
    app: ciao
Looks like there's an indentation issue in your Deployment definition. This should work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
  labels:
    app: ciao
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ciao
  template:
    metadata:
      labels:
        app: ciao
    spec:
      containers:
      - image: brotandgames/ciao:latest
        imagePullPolicy: IfNotPresent
        name: ciao
        volumeMounts: # Omit if you do not have persistent volumes
        - mountPath: /app/db/sqlite/
          name: persistent-volume
          subPath: ciao
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 256Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 400m
        envFrom:
        - secretRef:
            name: ciao
Keep in mind that in this definition the PV persistent-volume needs to exist in your cluster/namespace.
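Note that the Deployment above mounts a volume named persistent-volume but does not declare it under volumes; if you keep the volumeMounts section, you also need a volumes entry in the pod spec and a claim to back it. A minimal sketch, with an assumed claim name and size:

      volumes:
      - name: persistent-volume
        persistentVolumeClaim:
          claimName: ciao-pvc   # assumed name; create this PVC first
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ciao-pvc
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi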