Postgres user not created in Kubernetes - postgresql

I have the following YAML file to create a Postgres server instance:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spring-demo-db
  labels:
    app: spring-demo-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-demo-db
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spring-demo-db
    spec:
      containers:
      - name: spring-demo-db
        image: postgres:10.4
        ports:
        - name: spring-demo-db
          containerPort: 5432
          protocol: TCP
        env:
        - name: POSTGRES_PASSWORD
          value: "springdemo"
        - name: POSTGRES_USER
          value: "springdemo"
        - name: POSTGRES_DB
          value: "springdemo"
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-storage
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
      volumes:
      - name: "postgres-storage"
        persistentVolumeClaim:
          claimName: spring-demo-pv-claim
      restartPolicy: Always
But when I ssh into the container, the user springdemo has not been created. I have been struggling with this all day. What could be the problem?
Can anyone help me?

You didn't mention what command you're running and what error you're getting, so I'm guessing here, but try this:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spring-demo-db
  labels:
    app: spring-demo-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-demo-db
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spring-demo-db
    spec:
      containers:
      - name: spring-demo-db
        image: postgres:10.4
        ports:
        - name: spring-demo-db
          containerPort: 5432
          protocol: TCP
        env:
        - name: POSTGRES_USER
          value: "springdemo"
        - name: POSTGRES_DB
          value: "springdemo"
        - name: POSTGRES_PASSWORD
          value: "springdemo"
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-storage
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
      volumes:
      - name: "postgres-storage"
        persistentVolumeClaim:
          claimName: spring-demo-pv-claim
      restartPolicy: Always
But if it doesn't work, just use the Helm chart, because, among other issues, you are passing the password in an insecure way, which is a bad idea.
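If you want to keep a plain manifest instead, one common improvement is to move the password into a Secret and reference it with secretKeyRef. A minimal sketch, assuming a Secret named spring-demo-credentials (a name chosen here just for illustration):

apiVersion: v1
kind: Secret
metadata:
  name: spring-demo-credentials
type: Opaque
stringData:
  POSTGRES_PASSWORD: springdemo

and in the container spec:

        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: spring-demo-credentials
              key: POSTGRES_PASSWORD

This keeps the password out of the Deployment itself, although a Secret is still only base64-encoded, so for real credentials you may want something stronger on top of it.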

Related

How to use git-sync image as a sidecar in kubernetes that git pulls periodically

I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it to sync one time, but I want it to run periodically, for example every 10 minutes. Somehow, when I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "10"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
  - image: nginx
    name: nginx-helloworld
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an init container, which runs only during initialization (once in the pod lifecycle):
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
(See the init containers documentation.)
So run git-sync as a regular container instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "20"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      volumes:
      - name: www-data
        emptyDir: {}
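If it helps, a rough way to verify that the sync keeps running (assuming the Deployment above is applied as nginx-deployment and you have a reasonably recent kubectl that accepts deploy/ targets) is to list the shared volume from the nginx container and tail the git-sync logs:

kubectl exec deploy/nginx-deployment -c nginx-helloworld -- ls -l /usr/share/nginx/html
kubectl logs deploy/nginx-deployment -c git-sync --tail=20

You should see the hello directory (the GIT_SYNC_DEST value) appear under the mount, and the git-sync logs should show repeated sync passes rather than a single one.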

k8s Permission Denied issue

I get this error when deploying a k8s Deployment. I tried to run as the root user via the security context, but it didn't help. Any idea how to solve it? Unfortunately, I don't have any other ideas or a workaround to avoid this permission issue.
The error I get is:
30: line 1: /scripts/wrapper.sh: Permission denied
stream closed
The deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler-grok-exporter
  labels:
    app: cluster-autoscaler-grok-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler-grok-exporter
      sidecar: cluster-autoscaler-grok-exporter-sidecar
  template:
    metadata:
      labels:
        app: cluster-autoscaler-grok-exporter
        sidecar: cluster-autoscaler-grok-exporter-sidecar
    spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 2000
      serviceAccountName: flux
      imagePullSecrets:
      - name: id-docker
      containers:
      - name: get-data
        # 3.5.0 - helm v3.5.0, kubectl v1.20.2, alpine 3.12
        image: dtzar/helm-kubectl:3.5.0
        command: ["sh", "-c", "/scripts/wrapper.sh"]
        args:
        - cluster-autoscaler
        - "90"
        # - cluster-autoscaler
        - "30"
        - /scripts/get_data.sh
        - /logs/data.log
        volumeMounts:
        - name: logs
          mountPath: /logs/
        - name: scripts-volume-get-data
          mountPath: /scripts/get_data.sh
          subPath: get_data.sh
        - name: scripts-wrapper
          mountPath: /scripts/wrapper.sh
          subPath: wrapper.sh
      - name: export-data
        image: ippendigital/grok-exporter:1.0.0.RC3
        imagePullPolicy: Always
        ports:
        - containerPort: 9148
          protocol: TCP
        volumeMounts:
        - name: grok-config-volume
          mountPath: /grok/config.yml
          subPath: config.yml
        - name: logs
          mountPath: /logs
      volumes:
      - name: grok-config-volume
        configMap:
          name: grok-exporter-config
      - name: scripts-volume-get-data
        configMap:
          name: get-data-script
          defaultMode: 0777
          defaultMode: 0700
      - name: scripts-wrapper
        configMap:
          name: wrapper-config
          defaultMode: 0777
          defaultMode: 0700
      - name: logs
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: cluster-autoscaler-grok-exporter-sidecar
  labels:
    sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
  type: ClusterIP
  ports:
  - name: metrics
    protocol: TCP
    targetPort: 9144
    port: 9148
  selector:
    sidecar: cluster-autoscaler-grok-exporter-sidecar
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: cluster-autoscaler-grok-exporter
    app.kubernetes.io/part-of: grok-exporter
  name: cluster-autoscaler-grok-exporter
spec:
  endpoints:
  - port: metrics
  selector:
    matchLabels:
      sidecar: cluster-autoscaler-grok-exporter-sidecar
From what I can see, your script does not have execute permissions.
Remove this line from your config map:
defaultMode: 0700
Keep only:
defaultMode: 0777
Also, I see a missing leading / in your script path:
- /bin/sh scripts/get_data.sh
So change it to:
- /bin/sh /scripts/get_data.sh
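Applying that, the volumes section would look roughly like this (only the defaultMode lines change; the duplicate keys are dropped and 0777 is kept):

      volumes:
      - name: grok-config-volume
        configMap:
          name: grok-exporter-config
      - name: scripts-volume-get-data
        configMap:
          name: get-data-script
          defaultMode: 0777
      - name: scripts-wrapper
        configMap:
          name: wrapper-config
          defaultMode: 0777
      - name: logs
        emptyDir: {}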

deployment throwing error for init container only when I add a second regular container to my deployment

Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL DB instance.
This requires two containers (one for SonarQube and the other for the Cloud SQL proxy in order to connect to the DB).
The SonarQube container, however, also requires an init container to set some special memory requirements.
When I create the deployment with just the SonarQube image and the init container, it works fine, but this won't be of any use, as I need the Cloud SQL proxy container to connect to my external DB. When I add this container, though, the deployment errors with the below:
deirdrerodgers@cloudshell:~ (meta-gear-306013)$ kubectl create -f initsonar.yaml
The Deployment "sonardeploy" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "init-sysctl"
This is my complete YAML file with the init container and the other two containers. I wonder, is the issue that it doesn't know which container to apply the init container to?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.32
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        resources:
          {}
        command: ["sh",
          "-e",
          "/tmp/scripts/init_sysctl.sh"]
        volumeMounts:
        - name: init-sysctl
          mountPath: /tmp/scripts/
      volumes:
      - name: init-sysctl
        configMap:
          name: sonarqube-sonarqube-init-sysctl
          items:
          - key: init_sysctl.sh
            path: init_sysctl.sh
    spec:
      containers:
      - image: sonarqube:7.8-community
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:postgresql://localhost:5432/sonar
        ports:
        - containerPort: 9000
          name: sonarqube
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command: ["/cloud_sql_proxy",
          "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
          "-credential_file=/secrets/service_account.json"]
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: cloudsql-instance-credentials-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials-volume
        secret:
          secretName: cloudsql-instance-credentials
Your YAML file is incorrect: you have two spec: blocks in the pod template, but there should be only one. You need to combine them. Under that single spec block come the initContainers block, then containers, and finally volumes. Look at the correct YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.32
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        resources:
          {}
        command: ["sh",
          "-e",
          "/tmp/scripts/init_sysctl.sh"]
        volumeMounts:
        - name: init-sysctl
          mountPath: /tmp/scripts/
      containers:
      - image: sonarqube:7.8-community
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:postgresql://localhost:5432/sonar
        ports:
        - containerPort: 9000
          name: sonarqube
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command: ["/cloud_sql_proxy",
          "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
          "-credential_file=/secrets/service_account.json"]
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: cloudsql-instance-credentials-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials-volume
        secret:
          secretName: cloudsql-instance-credentials
      - name: init-sysctl
        configMap:
          name: sonarqube-sonarqube-init-sysctl
          items:
          - key: init_sysctl.sh
            path: init_sysctl.sh
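Before creating it, you can also ask the API server to validate the combined manifest without persisting anything; this should surface errors like the Not found: "init-sysctl" one above (assuming the file is still called initsonar.yaml, as in the error output, and your kubectl supports server-side dry runs):

kubectl create --dry-run=server -f initsonar.yaml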

getting error while creating kubernetes deployment

My deployment was working fine. I just tried to use a local persistent volume for storing my application's data locally, and after that I am getting the error below.
error: error validating "xxx-deployment.yaml": error validating data: ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "volumeMounts" in io.k8s.api.core.v1.LocalObjectReference; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
spec:
  selector:
    matchLabels:
      app: xxx
  replicas: 3
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
      - name: xxx
        image: xxx:1.xx
        imagePullPolicy: "Always"
        stdin: true
        tty: true
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: xxx
        volumeMounts:
        - mountPath: /data
          name: xxx-data
      restartPolicy: Always
      volumes:
      - name: xx-data
        persistentVolumeClaim:
          claimName: xx-xx-pvc
You need to move imagePullSecrets further down; it's breaking the container spec. imagePullSecrets is defined at the pod spec level, while volumeMounts belongs to the container spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
spec:
  selector:
    matchLabels:
      app: xxx
  replicas: 3
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
      - name: xxx
        image: xxx:1.xx
        imagePullPolicy: "Always"
        stdin: true
        tty: true
        ports:
        - containerPort: 80
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /data
          name: xxx-data
      imagePullSecrets:
      - name: xxx
      restartPolicy: Always
      volumes:
      - name: xx-data
        persistentVolumeClaim:
          claimName: xx-xx-pvc
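If you want to confirm where each field belongs, kubectl explain can print the schema for both paths (a quick check, not specific to this manifest):

kubectl explain deployment.spec.template.spec.imagePullSecrets
kubectl explain deployment.spec.template.spec.containers.volumeMounts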
You have an indentation typo in your YAML: volumeMounts ends up nested under imagePullSecrets, where it does not belong:
imagePullSecrets:
- name: xxx
  volumeMounts:
  - mountPath: /data
    name: xxx-data
volumeMounts: is a child of the container.
And volumes: is a child of the pod spec.
Also, the volumeMounts name and the volume name should be the same.
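A minimal sketch of how those pieces line up (the container and volume names here are placeholders based on the question):

    spec:
      containers:
      - name: xxx
        image: xxx:1.xx
        volumeMounts:
        - mountPath: /data
          name: xxx-data      # must match the volume name below
      volumes:
      - name: xxx-data
        persistentVolumeClaim:
          claimName: xx-xx-pvc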

kubernetes creating statefulset fail

I am trying to create a StatefulSet with the definition below, but I get this error:
error: unable to recognize "wordpress-database.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
What's wrong?
The YAML file is (please do not consider the alignment of the rows):
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: database
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassword
        - name: MYSQL_DATABASE
          value: database
        - name: MYSQL_USER
          value: user
        - name: MYSQL_PASSWORD
          value: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      - name: blog
        image: wordpress:latest
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: 127.0.0.1:3306
        - name: WORDPRESS_DB_NAME
          value: database
        - name: WORDPRESS_DB_USER
          value: user
        - name: WORDPRESS_DB_PASSWORD
          value: password
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      resources:
        requests:
          storage: 1Gi
The API version of StatefulSet should be:
apiVersion: apps/v1
See the official documentation.
Good luck.
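If you want to check which version your cluster actually serves for StatefulSet, something like this should work:

kubectl api-resources | grep statefulsets
kubectl explain statefulset

The apps/v1beta1 and apps/v1beta2 versions were removed in Kubernetes 1.16, which is why the manifest above is no longer recognized on newer clusters.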