Deployment YAML file - Kubernetes

I'm learning SQL Server BDC on minikube, using this article as a guide. I tried deploying the YAML file below by running: kubectl apply -f deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mssql-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: mssql
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mssql
image: microsoft/mssql-server-linux
ports:
- containerPort: 1433
securityContext:
privileged: true
env:
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- name: mssqldb
mountPath: /var/opt/mssql
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: pvc0001
It errored out because of the apps/v1beta1 apiVersion. I converted the file by running kubectl convert -f deployment.yaml and got the following:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: mssql-deployment
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector: null
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: mssql
spec:
containers:
- env:
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
key: SA_PASSWORD
name: mssql
image: microsoft/mssql-server-linux
imagePullPolicy: Always
name: mssql
ports:
- containerPort: 1433
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 10
status: {}
But when I deploy the above script I get:
Error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
It seems to be related to matchLabels/matchExpressions, but I'm not able to address it. Can someone point me in the right direction?

You need to add a selector in the spec section of the Deployment; it's a mandatory field. The .spec.selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: mssql). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: mssql-deployment
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: mssql
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: mssql
spec:
containers:
- env:
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
key: SA_PASSWORD
name: mssql
image: microsoft/mssql-server-linux
imagePullPolicy: Always
name: mssql
ports:
- containerPort: 1433
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 10
status: {}
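With the selector added, the manifest should pass validation. A quick way to confirm, assuming the file is still called deployment.yaml and the mssql secret already exists:
kubectl apply -f deployment.yaml
kubectl rollout status deployment/mssql-deployment
kubectl get pods -l app=mssql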

missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
You need a selector to tell the Deployment which Pods belong to it.
Solution:
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
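To double-check that the selector and the Pod template labels actually line up after editing, something like the following should show the same key/value pair on both sides (names taken from the manifest above):
kubectl get deployment mssql-deployment -o jsonpath='{.spec.selector.matchLabels}'
kubectl get pods -l app=mssql --show-labels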

Related

Redis pod failing

I have a Redis DB running on my minikube cluster. I shut minikube down, started it again after 3 days, and now the Redis pod fails to come up with the error below in the pod log:
Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix <filename>.
Below is the StatefulSet YAML for the Redis master, deployed via a Helm chart:
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
meta.helm.sh/release-name: test-redis
meta.helm.sh/release-namespace: test
generation: 1
labels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: redis
helm.sh/chart: redis-14.8.11
name: test-redis-master
namespace: test
resourceVersion: "191902"
uid: 3a4e541f-154f-4c54-a379-63974d90089e
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/name: redis
serviceName: test-redis-headless
template:
metadata:
annotations:
checksum/configmap: dd1f90e0231e5f9ebd1f3f687d534d9ec53df571cba9c23274b749c01e5bc2bb
checksum/health: xxxxx
creationTimestamp: null
labels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: redis
helm.sh/chart: redis-14.8.11
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/name: redis
namespaces:
- tyk
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- args:
- -c
- /opt/bitnami/scripts/start-scripts/start-master.sh
command:
- /bin/bash
env:
- name: BITNAMI_DEBUG
value: "false"
- name: REDIS_REPLICATION_MODE
value: master
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
key: redis-password
name: test-redis
- name: REDIS_TLS_ENABLED
value: "no"
- name: REDIS_PORT
value: "6379"
image: docker.io/bitnami/redis:6.2.5-debian-10-r11
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- /health/ping_liveness_local.sh 5
failureThreshold: 5
initialDelaySeconds: 20
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 6
name: redis
ports:
- containerPort: 6379
name: redis
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh 1
failureThreshold: 5
initialDelaySeconds: 20
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 2
resources: {}
securityContext:
runAsUser: 1001
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/bitnami/scripts/start-scripts
name: start-scripts
- mountPath: /health
name: health
- mountPath: /data
name: redis-data
- mountPath: /opt/bitnami/redis/mounted-etc
name: config
- mountPath: /opt/bitnami/redis/etc/
name: redis-tmp-conf
- mountPath: /tmp
name: tmp
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
serviceAccount: test-redis
serviceAccountName: test-redis
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 493
name: test-redis-scripts
name: start-scripts
- configMap:
defaultMode: 493
name: test-redis-health
name: health
- configMap:
defaultMode: 420
name: test-redis-configuration
name: config
- emptyDir: {}
name: redis-tmp-conf
- emptyDir: {}
name: tmp
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/name: redis
name: redis-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
Please let me know your suggestions on how I can fix this.
I am not a Redis expert, but from what I can see:
kubectl describe pod red3-redis-master-0
...
Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix <filename>
...
This means that your appendonly.aof file was corrupted with invalid byte sequences in the middle.
How can we proceed if redis-master is not working?
Verify the PVC attached to the redis-master pod:
kubectl get pvc
NAME STATUS VOLUME
redis-data-red3-redis-master-0 Bound pvc-cf59a0b2-a3ee-4f7f-9f07-8f4922518359
Create a new redis-client pod with the same PVC, redis-data-red3-redis-master-0:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: redis-client
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: redis-data-red3-redis-master-0
containers:
- name: redis
image: docker.io/bitnami/redis:6.2.3-debian-10-r0
command: ["/bin/bash"]
args: ["-c", "sleep infinity"]
volumeMounts:
- mountPath: "/tmp"
name: data
EOF
Back up your files:
kubectl cp redis-client:/tmp .
Repair appendonly.aof file:
kubectl exec -it redis-client -- /bin/bash
cd /tmp
# make copy of appendonly.aof file:
cp appendonly.aof appendonly.aofbackup
# verify appendonly.aof file:
redis-check-aof appendonly.aof
...
0x 38: Expected prefix '*', got: '"'
AOF analyzed: size=62, ok_up_to=56, ok_up_to_line=13, diff=6
AOF is not valid. Use the --fix option to try fixing it.
...
# repair appendonly.aof file:
redis-check-aof --fix appendonly.aof
# compare files using diff:
diff appendonly.aof appendonly.aofbackup
Note:
As per docs:
The best thing to do is to run the redis-check-aof utility, initially without the --fix option, then understand the problem, jump at the given offset in the file, and see if it is possible to manually repair the file: the AOF uses the same format of the Redis protocol and is quite simple to fix manually. Otherwise it is possible to let the utility fix the file for us, but in that case all the AOF portion from the invalid part to the end of the file may be discarded, leading to a massive amount of data loss if the corruption happened to be in the initial part of the file.
In addition, as described in the comments by @Miffa Young, you can verify where your data is stored by the k8s.io/minikube-hostpath provisioner:
kubectl get pv
...
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
pvc-cf59a0b2-a3ee-4f7f-9f07-8f4922518359 8Gi RWO Delete Bound default/redis-data-red3-redis-master-0
...
kubectl describe pv pvc-cf59a0b2-a3ee-4f7f-9f07-8f4922518359
...
Source:
Type: HostPath (bare host directory volume)
Path: /tmp/hostpath-provisioner/default/redis-data-red3-redis-master-0
...
Your Redis instance is failing because your appendonly.aof is malformed and is stored persistently at this location.
You can SSH into the minikube VM:
minikube -p redis ssh
cd /tmp/hostpath-provisioner/default/redis-data-red3-redis-master-0
# from there you can backup/repair/remove your files:
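For example, a minimal sketch of taking a backup from inside the VM before repairing or deleting anything (assuming tar is available on the minikube host, which it normally is; redis-check-aof itself is usually not installed there, so the actual repair is easier to run from the redis-client pod shown earlier):
tar czf /tmp/redis-data-backup.tar.gz .   # archive the whole data directory first
ls -lh /tmp/redis-data-backup.tar.gz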
Another solution is to install this chart under a new release name; in that case a new set of PVs and PVCs will be created for the Redis StatefulSet.
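For example, a sketch assuming the Bitnami chart (redis-14.8.11, as in the labels above) and the same namespace; the release name test-redis-2 is arbitrary:
helm install test-redis-2 bitnami/redis --namespace test
# a fresh PVC is created for the new release, leaving the old redis-data PVC untouched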
I think your Redis did not quit gracefully, so the AOF file is in a bad format (see What is AOF).
You should repair the AOF file using an initContainer that runs redis-check-aof --fix, as in the manifest below (a sketch for removing the initContainer again afterwards follows the manifest).
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
meta.helm.sh/release-name: test-redis
meta.helm.sh/release-namespace: test
generation: 1
labels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: redis
helm.sh/chart: redis-14.8.11
name: test-redis-master
namespace: test
resourceVersion: "191902"
uid: 3a4e541f-154f-4c54-a379-63974d90089e
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/name: redis
serviceName: test-redis-headless
template:
metadata:
annotations:
checksum/configmap: dd1f90e0231e5f9ebd1f3f687d534d9ec53df571cba9c23274b749c01e5bc2bb
checksum/health: xxxxx
creationTimestamp: null
labels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: redis
helm.sh/chart: redis-14.8.11
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/name: redis
namespaces:
- tyk
topologyKey: kubernetes.io/hostname
weight: 1
initContainers:
- name: repair-redis
image: docker.io/bitnami/redis:6.2.5-debian-10-r11
command: ['sh', '-c', "redis-check-aof --fix /data/appendonly.aof"]
containers:
- args:
- -c
- /opt/bitnami/scripts/start-scripts/start-master.sh
command:
- /bin/bash
env:
- name: BITNAMI_DEBUG
value: "false"
- name: REDIS_REPLICATION_MODE
value: master
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
key: redis-password
name: test-redis
- name: REDIS_TLS_ENABLED
value: "no"
- name: REDIS_PORT
value: "6379"
image: docker.io/bitnami/redis:6.2.5-debian-10-r11
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- /health/ping_liveness_local.sh 5
failureThreshold: 5
initialDelaySeconds: 20
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 6
name: redis
ports:
- containerPort: 6379
name: redis
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh 1
failureThreshold: 5
initialDelaySeconds: 20
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 2
resources: {}
securityContext:
runAsUser: 1001
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/bitnami/scripts/start-scripts
name: start-scripts
- mountPath: /health
name: health
- mountPath: /data
name: redis-data
- mountPath: /opt/bitnami/redis/mounted-etc
name: config
- mountPath: /opt/bitnami/redis/etc/
name: redis-tmp-conf
- mountPath: /tmp
name: tmp
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
serviceAccount: test-redis
serviceAccountName: test-redis
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 493
name: test-redis-scripts
name: start-scripts
- configMap:
defaultMode: 493
name: test-redis-health
name: health
- configMap:
defaultMode: 420
name: test-redis-configuration
name: config
- emptyDir: {}
name: redis-tmp-conf
- emptyDir: {}
name: tmp
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: master
app.kubernetes.io/instance: test-redis
app.kubernetes.io/name: redis
name: redis-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
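Once the pod has started cleanly, the repair initContainer is no longer needed. A hedged sketch for removing it again (the StatefulSet pod template is mutable, so a JSON patch should work; note that a later helm upgrade will rebuild the template anyway):
kubectl -n test patch statefulset test-redis-master --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/initContainers"}]'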

Kubernetes pod error: creating multiple services

I am pretty new to Kubernetes, so apologies if my question seems vague; I'll try to elaborate as much as possible. I have a pod on Google Cloud via Kubernetes that has a GPU in it. This GPU is responsible for processing one set of tasks, let's say classifying images. In order to do so, I created a Service with Kubernetes. The Service section of my YAML file looks something like the following. The URL for this Service will be http://model-server-service.default.svc.cluster.local, since the name of the Service is model-server-service.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: model-server
name: model-server
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: model-server
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: model-server
spec:
containers:
- args:
- -t
- "120"
- -b
- "0.0.0.0"
- app:flask_app
command:
- gunicorn
env:
- name: ENV
value: staging
- name: GCP
value: "2"
image: gcr.io/my-production/my-model-server: myGitHash
imagePullPolicy: Always
name: model-server
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
resources:
limits:
nvidia.com/gpu: 1
ports:
- containerPort: 8000
protocol: TCP
volumeMounts:
- name: model-files
mountPath: /model-server/models
# These containers are run during pod initialization
initContainers:
- name: model-download
image: gcr.io/my-production/my-model-server: myGitHash
command:
- gsutil
- cp
- -r
- gs://my-staging-models/*
- /model-files/
volumeMounts:
- name: model-files
mountPath: "/model-files"
volumes:
- name: model-files
emptyDir: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: model-server
name: model-server-service
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8000
selector:
app: model-server
sessionAffinity: None
type: ClusterIP
Here my question begins. I am creating a new set of tasks. For this new set of tasks, I will need extensive memory, so I do not want to use the previous service; I would like to handle it as part of a separate new Service, something with the URL http://model-server-heavy-service.default.svc.cluster.local. I tried to create a new YAML file, model-server-heavy.yaml. In this new file, I changed the name of the Service from model-server-service to model-server-heavy-service. I also changed the app label and name from model-server to model-server-heavy. The final YAML file looks like what I put at the end of this post. Unfortunately, the new model server does not work and I get the following for it on Kubernetes:
model-server-asdhjs-asd 1/1 Running 0 21m
model-server-heavy-xnshk 0/1 CrashLoopBackOff 8 21m
Can someone please shed some light on what I am doing wrong, and what would be the alternative for what I have in mind? Why do I get CrashLoopBackOff for the second model server? What is it that I am not doing correctly?
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: model-server-heavy
name: model-server-heavy
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: model-server-heavy
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: model-server-heavy
spec:
containers:
- args:
- -t
- "120"
- -b
- "0.0.0.0"
- app:flask_app
command:
- gunicorn
env:
- name: ENV
value: staging
- name: GCP
value: "2"
image: gcr.io/my-production/my-model-server:mgGitHash
imagePullPolicy: Always
name: model-server-heavy
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
resources:
limits:
nvidia.com/gpu: 1
ports:
- containerPort: 8000
protocol: TCP
volumeMounts:
- name: model-files
mountPath: /model-server-heavy/models
# These containers are run during pod initialization
initContainers:
- name: model-download
image: gcr.io/my-production/my-model-server:myGitHash
command:
- gsutil
- cp
- -r
- gs://my-staging-models/*
- /model-files/
volumeMounts:
- name: model-files
mountPath: "/model-files"
volumes:
- name: model-files
emptyDir: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: model-server-heavy
name: model-server-heavy-service
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8000
selector:
app: model-server-heavy
sessionAffinity: None
type: ClusterIP
Thanks to @dawid-kruk and @patrick-w, I found that I had to make two modifications in model-server-heavy.yaml in order for it to work:
Change the mountPath from /model-server-heavy/models to /model-server/models.
In line 38 of the model-server-heavy.yaml file, change the name from model-server-heavy to model-server.
I first tried applying only item 1, but it didn't work. Then I applied item 2 as well and it was fixed; both 1 and 2 need to be in place for the server to work. I understand why I had to make the change for the first item, but I'm not sure about the second one.
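For reference, a sketch of the container section after both changes, assuming the renamed field is the container's name (the rest of the file stays as posted):
containers:
- name: model-server                              # was model-server-heavy
  image: gcr.io/my-production/my-model-server:myGitHash
  volumeMounts:
  - name: model-files
    mountPath: /model-server/models               # was /model-server-heavy/models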

kubectl cp will not write to pod

I'm moving from Kubernetes v1.17.4 to v1.18.3.
The command kubectl -n nameSpace cp file pod-xxxx-yyyy:file will not write to the pod.
There are no error messages generated.
Copying from a pod works just fine.
YAML for creating the pod:
apiVersion: apps/v2
kind: Deployment
metadata:
name: acadmin
namespace: iiabe3h
labels:
app: acadmin
spec:
replicas: 1
selector:
matchLabels:
app: acadmin
template:
metadata:
labels:
app: acadmin
spec:
schedulerName: stork
terminationGracePeriodSeconds: 1800
securityContext:
supplementalGroups: [12312, 65432, 3399, 1000001]
fsGroup: 1000
nodeSelector:
node-role.kubernetes.io/worker: "true"
volumes:
- name: acadmin-logs
persistentVolumeClaim:
claimName: acadmin-logs
- name: acadmin-config
persistentVolumeClaim:
claimName: acadmin-config
containers:
- name: acadmin
image: acadmin:latest
env:
- name: spring.profiles.active
value: "e3h"
- name: AccessControlFilter.props
value: "/opt/jboss/wildfly/standalone/configuration/AccessControlFilter.props"
volumeMounts:
- name: acadmin-logs
mountPath: /opt/jboss/wildfly/standalone/log
- name: acadmin-config
mountPath: /opt/jboss/wildfly/standalone/configuration
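A few things that are commonly worth checking in this situation (a sketch, not a confirmed fix): kubectl cp is implemented with tar, so the target container needs a tar binary; the destination directory must be writable for the UID/GID the container runs as (note the fsGroup and supplementalGroups above); and a relative destination like pod-xxxx-yyyy:file is resolved against the container's working directory, which may not be where you expect.
kubectl -n iiabe3h exec pod-xxxx-yyyy -- tar --version
kubectl -n iiabe3h exec pod-xxxx-yyyy -- ls -ld /opt/jboss/wildfly/standalone/configuration
kubectl -n iiabe3h cp file pod-xxxx-yyyy:/opt/jboss/wildfly/standalone/configuration/file --no-preserve=true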

Communication between pods

I am currently in the process of setting up sentry.io, but I am having problems setting it up in OpenShift 3.11.
I have pods running for Sentry itself, PostgreSQL, Redis and Memcached, but according to the log messages they are not able to communicate with each other.
sentry.exceptions.InvalidConfiguration: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Do I need to create a network like in Docker, or should the pods (all in the same namespace) be able to talk to each other by default? I have admin rights for the complete project, so I can also work with the console and not only the web interface.
Best wishes
EDIT: Adding the deployment config for Sentry and its service, and for the sake of simplicity the Postgres config and service as well. I blanked out some unnecessary information with the keyword BLANK; if I went overboard, please let me know and I'll look it up.
Deployment config for sentry:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 20
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '506667843'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: sentry
deploymentconfig: sentry
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: sentry
deploymentconfig: sentry
spec:
containers:
- env:
- name: SENTRY_SECRET_KEY
value: Iamsosecret
- name: C_FORCE_ROOT
value: '1'
- name: SENTRY_FILESTORE_DIR
value: /var/lib/sentry/files/data
image: BLANK
imagePullPolicy: Always
name: sentry
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/sentry/files
name: sentry-1
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: sentry-1
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- sentry
from:
kind: ImageStreamTag
name: 'sentry:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "sentry-19" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 19
observedGeneration: 20
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
Service for sentry:
apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '505555608'
selfLink: BLANK
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 9000-tcp
port: 9000
protocol: TCP
targetPort: 9000
selector:
deploymentconfig: sentry
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Deployment config for postgresql:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 10
labels:
app: postgres
type: backend
name: postgres
namespace: test
resourceVersion: '506664185'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: postgres
deploymentconfig: postgres
type: backend
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: postgres
deploymentconfig: postgres
type: backend
spec:
containers:
- env:
- name: PGDATA
value: /var/lib/postgresql/data/sql
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
- name: POSTGRESQL_USER
value: sentry
- name: POSTGRESQL_PASSWORD
value: sentry
- name: POSTGRESQL_DATABASE
value: sentry
image: BLANK
imagePullPolicy: Always
name: postgres
ports:
- containerPort: 5432
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: volume-uirge
subPath: sql
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 2000020900
terminationGracePeriodSeconds: 30
volumes:
- name: volume-uirge
persistentVolumeClaim:
claimName: postgressql
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- postgres
from:
kind: ImageStreamTag
name: 'postgres:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "postgres-9" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 9
observedGeneration: 10
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
Service config postgresql:
apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: postgres
type: backend
name: postgres
namespace: catcloud
resourceVersion: '506548841'
selfLink: /api/v1/namespaces/catcloud/services/postgres
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 5432-tcp
port: 5432
protocol: TCP
targetPort: 5432
selector:
deploymentconfig: postgres
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Pods in the same namespace cannot reach each other via localhost; localhost only resolves within a single Pod. You should create a Service so that a Pod can receive connections from other Pods at a stable address. In general, one Pod connects to another Pod via the latter's Service.
The connection info would look something like <servicename>:<serviceport> (e.g. elasticsearch-master:9200) rather than localhost:port.
You can read https://kubernetes.io/docs/concepts/services-networking/service/ for further info on Services.
N.B.: localhost:port will only work for containers running inside the same Pod to connect to each other, for example nginx connecting to gravitee-mgmt-api and gravitee-mgmt-ui within one Pod.
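Concretely, in this setup Sentry should point at the Postgres and Redis Services by name rather than at 127.0.0.1. A quick way to see which names and ports exist (a sketch, assuming the oc CLI):
oc get svc
# use NAME:PORT from this output as the host, e.g. postgres:5432;
# if the Service lives in another project, use <name>.<namespace>.svc, e.g. postgres.catcloud.svc:5432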
To me it looks like you didn't configure Sentry correctly: you are not providing the Sentry pod with the credentials and hostnames it needs to connect to the PostgreSQL and Redis pods, e.g.:
env:
- name: SENTRY_SECRET_KEY
valueFrom:
secretKeyRef:
name: sentry-sentry
key: sentry-secret
- name: SENTRY_DB_USER
value: "sentry"
- name: SENTRY_DB_NAME
value: "sentry"
- name: SENTRY_DB_PASSWORD
valueFrom:
secretKeyRef:
name: sentry-postgresql
key: postgres-password
- name: SENTRY_POSTGRES_HOST
value: sentry-postgresql
- name: SENTRY_POSTGRES_PORT
value: "5432"
- name: SENTRY_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: sentry-redis
key: redis-password
- name: SENTRY_REDIS_HOST
value: sentry-redis
- name: SENTRY_REDIS_PORT
value: "6379"
- name: SENTRY_EMAIL_HOST
value: "smtp"
- name: SENTRY_EMAIL_PORT
value: "25"
- name: SENTRY_EMAIL_USER
value: ""
- name: SENTRY_EMAIL_PASSWORD
valueFrom:
secretKeyRef:
name: sentry-sentry
key: smtp-password
- name: SENTRY_EMAIL_USE_TLS
value: "false"
- name: SENTRY_SERVER_EMAIL
value: "sentry#sentry.local"
For more info you can refer to this example of a Sentry configuration:
https://github.com/maty21/sentry-kubernetes/blob/master/sentry.yaml
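The secrets referenced above (sentry-sentry, sentry-postgresql, sentry-redis) have to exist before the pod starts. A minimal sketch of creating them by hand, with placeholder values that must match whatever the database pods were set up with:
oc create secret generic sentry-sentry --from-literal=sentry-secret=<random-string> --from-literal=smtp-password=<smtp-password>
oc create secret generic sentry-postgresql --from-literal=postgres-password=<postgres-password>
oc create secret generic sentry-redis --from-literal=redis-password=<redis-password>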
For communication between Pods, localhost or 127.0.0.1 does not work.
Get the IP of any Pod using:
kubectl describe pod <podname>
Use that IP in the other Pod to communicate with the Pod above.
Since Pod IPs change when a Pod is recreated, you should ideally use a Kubernetes Service (specifically the ClusterIP type) for communication between Pods within the cluster.
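For example, if Redis runs from a DeploymentConfig called redis (an assumption; adjust the name), a ClusterIP Service for it can be created in one line, after which Sentry can reach it as redis:6379:
oc expose dc/redis --port=6379
oc get svc redis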

/var/mqm folder is getting overwritten with PV in kubernetes deployment of ACE-MQ with ibmcom/ace-mq image

I am trying to deploy MQ with App Connect in AWS, with EFS as persistent storage, for an HA solution. When I deploy with /var/mqm (I also tried /mnt/mqm) as the mount path, it gets overwritten by the blank persistent storage, so when the pod is up I can't see the queue manager or the other files inside /var/mqm.
This is the deployment.yml used:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
generation: 1
labels:
run: acemq01
name: acemq01
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
run: acemq01
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: ~
labels:
run: acemq01
spec:
containers:
-
env:
-
name: LICENSE
value: accept
-
name: DOMAIN
value: cluster
-
name: MQ_QMGR_NAME
value: MQAWS
image: ibmcom/ace-mq
imagePullPolicy: IfNotPresent
name: acemq01
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
-
mountPath: /var/mqm
name: pv-volumeef
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
-
name: pv-volumeef
persistentVolumeClaim:
claimName: efs
status: {}