Kubernetes: creating a StatefulSet fails

I am trying to create a StatefulSet with the definition below, but I get this error:
error: unable to recognize "wordpress-database.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
What's wrong?
This is the YAML file:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: database
          image: mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: rootPassword
            - name: MYSQL_DATABASE
              value: database
            - name: MYSQL_USER
              value: user
            - name: MYSQL_PASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
        - name: blog
          image: wordpress:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: database
            - name: WORDPRESS_DB_USER
              value: user
            - name: WORDPRESS_DB_PASSWORD
              value: password
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        resources:
          requests:
            storage: 1Gi

The API version of StatefulSet should be:
apiVersion: apps/v1
per the official documentation. StatefulSet has been served from apps/v1 since Kubernetes 1.9, and the apps/v1beta2 group was removed in 1.16, which is why your cluster no longer recognizes it.
Good luck.
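If you want to double-check what your cluster actually serves before editing the manifest, kubectl can list the available API groups (a quick sanity check; on clusters at or past 1.16 the apps/v1beta2 group will simply be absent):

kubectl api-versions | grep apps
# apps/v1        <- StatefulSet lives here
# (apps/v1beta2 no longer appears on 1.16+ clusters)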

Related

odoo in k8s: Odoo pod running then crashing

I am trying to deploy Odoo in k8s; I have used the YAML files below for Odoo, Postgres, and the Services.
The Odoo pod keeps crashing. The logs show:
could not translate host name "db" to address: Temporary failure in name resolution
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odoo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: odoo3
  template:
    metadata:
      labels:
        app: odoo3
    spec:
      containers:
        - name: odoo3
          image: odoo
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: POSTGRES_DB
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "postgres"
            - name: POSTGRES_USER
              value: "postgres"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: odoo3
  labels:
    app: odoo3
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: odoo3
You need to specify the environment variable HOST
env:
  - name: POSTGRES_DB
    value: "postgres"
  - name: POSTGRES_PASSWORD
    value: "postgres"
  - name: POSTGRES_USER
    value: "postgres"
  - name: HOST
    value: "your-postgres-service-name"
The your-postgres-service-name value should point to the Service for your Postgres database container or server.
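As a rough sketch (the Postgres manifest is not shown in the question, so the names here are assumptions), the value of HOST has to match the metadata.name of the Service in front of the database; the log line about host name "db" suggests Odoo is looking for a Service called db:

# Hypothetical Postgres Service; metadata.name is the DNS name Odoo resolves.
apiVersion: v1
kind: Service
metadata:
  name: db              # assumed name; use your actual Postgres Service name
spec:
  selector:
    app: postgres       # assumed label on the Postgres pods
  ports:
    - port: 5432
      targetPort: 5432

With HOST set to "db" (or whatever name you chose), the name resolution error should go away, as long as the Service and the Odoo Deployment are in the same namespace.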

deployment throwing error for init container only when I add a second regular container to my deployment

Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL DB instance.
This requires two containers (one for SonarQube and the other for the Cloud SQL proxy, in order to connect to the DB).
The SonarQube container, however, also requires an init container to apply some special memory settings via sysctl.
When I create the deployment with just the SonarQube image and the init container it works fine, but this won't be of any use, as I need the Cloud SQL proxy container to connect to my external DB. When I add this container, though, the deployment errors with the output below:
deirdrerodgers#cloudshell:~ (meta-gear-306013)$ kubectl create -f initsonar.yaml
The Deployment "sonardeploy" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "init-sysctl"
This is my complete YAML file with the init container and the other two containers. I wonder if the issue is that it doesn't know which container to apply the init container to?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            {}
          command: ["sh",
                    "-e",
                    "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      volumes:
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
    spec:
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
                    "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
                    "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
Your YAML file is incorrect: it has two spec: blocks in the pod template, but there should be only one. Combine them so that the single spec: contains the initContainers block, then containers, and finally volumes. The corrected YAML file is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            {}
          command: ["sh",
                    "-e",
                    "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
                    "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
                    "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
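Once the combined manifest is applied, a quick way to confirm the init container ran and both regular containers came up (commands assume the sonar namespace and labels from the manifest above):

kubectl create -f initsonar.yaml
kubectl -n sonar get pods                          # READY should reach 2/2 once init-sysctl completes
kubectl -n sonar describe pod -l app=sonardeploy   # init-sysctl is listed under "Init Containers"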

Kubernetes: mysqld Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)

I have the same issue that I have seen other users have with permissions for the mysql folder in the percona image, but I have it in Kubernetes, and I am not sure exactly how I can chown the volume before the image is applied.
This is the YAML:
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: db
    k8s-app: magento
spec:
  selector:
    app: db
  ports:
    - name: db
      port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  template:
    metadata:
      labels:
        app: db
        k8s-app: magento
    spec:
      containers:
        - args:
            - --max_allowed_packet=134217728
            - "--ignore-db-dir=lost+found"
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: data
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_NAME
            - name: MYSQL_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_PASS
            - name: MYSQL_USER
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_USER
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_ROOT_PASS
          image: percona:5.7
          name: db
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
Same issue, but in docker:
Docker-compose : mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
How to fix it in Kubernetes?
I found this solution and it works:
initContainers:
  - name: take-data-dir-ownership
    image: alpine:3
    # Give the `mysql` user ownership of the mounted volume
    # https://stackoverflow.com/a/51195446/4360433
    command:
      - chown
      - -R
      - 999:999
      - /var/lib/mysql
    volumeMounts:
      - name: data
        mountPath: /var/lib/mysql
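For context, here is roughly how that snippet slots into the StatefulSet from the question: initContainers sits at the same level as containers in the pod template, and it mounts the same data volume from volumeClaimTemplates, so ownership is fixed before percona starts (a trimmed sketch, not the full manifest; 999:999 is the uid/gid the linked answer uses for the mysql user in this image):

spec:
  template:
    spec:
      initContainers:
        - name: take-data-dir-ownership
          image: alpine:3
          # chown the volume to the mysql user before the database container starts
          command: ["chown", "-R", "999:999", "/var/lib/mysql"]
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      containers:
        - name: db
          image: percona:5.7
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql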

Pod status CrashLoopBackOff

I have a StatefulSet whose status shows CrashLoopBackOff. All other components are working fine. When I run kubectl -n magento get po I see the pod in CrashLoopBackOff, and the logs show:
Initializing database
2020-07-22T11:57:25.498116Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-07-22T11:57:25.499540Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2020-07-22T11:57:25.499578Z 0 [ERROR] Aborting
This is the Kubernetes manifest:
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: db
    k8s-app: magento
spec:
  selector:
    app: db
  ports:
    - name: db
      port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
  namespace: magento
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  template:
    metadata:
      labels:
        app: db
        k8s-app: magento
    spec:
      containers:
        - args:
            - --max_allowed_packet=134217728
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: data
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_NAME
            - name: MYSQL_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_PASS
            - name: MYSQL_USER
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_USER
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_ROOT_PASS
          image: percona:5.7
          name: db
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
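One way to see which files mysqld is complaining about is to mount the StatefulSet's PVC into a throwaway pod and list the data directory (a sketch under assumptions: the claim name data-db-0 follows the <claim template>-<statefulset>-<ordinal> convention for this manifest):

# Hypothetical debug pod that mounts the existing PVC and lists its contents.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspect
  namespace: magento
spec:
  restartPolicy: Never
  containers:
    - name: inspect
      image: alpine:3
      command: ["ls", "-la", "/data"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-db-0   # assumed: volumeClaimTemplate "data", StatefulSet "db", replica 0

kubectl -n magento logs pvc-inspect then shows what is already sitting in the volume; on freshly provisioned ext4 volumes this is often just lost+found, which is what the --ignore-db-dir=lost+found argument in the previous manifest works around. Since the claim is ReadWriteOnce, you may need to scale the StatefulSet down to 0 first so the volume can be attached to the debug pod.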

Postgres user not created Kubernetes

I have the following YAML file to create a Postgres server instance:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spring-demo-db
  labels:
    app: spring-demo-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-demo-db
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spring-demo-db
    spec:
      containers:
        - name: spring-demo-db
          image: postgres:10.4
          ports:
            - name: spring-demo-db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_PASSWORD
              value: "springdemo"
            - name: POSTGRES_USER
              value: "springdemo"
            - name: POSTGRES_DB
              value: "springdemo"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      volumes:
        - name: "postgres-storage"
          persistentVolumeClaim:
            claimName: spring-demo-pv-claim
      restartPolicy: Always
But when I ssh into the container, the user springdemo has not been created. I have been struggling all day. What could be the problem?
Can anyone help me?
You didn't mention what command you're running and what error you're getting, so I'm guessing here, but try this:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spring-demo-db
  labels:
    app: spring-demo-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-demo-db
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spring-demo-db
    spec:
      containers:
        - name: spring-demo-db
          image: postgres:10.4
          ports:
            - name: spring-demo-db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_USER
              value: "springdemo"
            - name: POSTGRES_DB
              value: "springdemo"
            - name: POSTGRES_PASSWORD
              value: "springdemo"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      volumes:
        - name: "postgres-storage"
          persistentVolumeClaim:
            claimName: spring-demo-pv-claim
      restartPolicy: Always
But if it doesn't work, just use the Helm chart, because, among other issues, you are passing the password in an insecure way, which is a bad idea.
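To check whether the role actually exists, it may be easier to query Postgres from inside the pod than to ssh around (a sketch; substitute the real pod name from the first command):

kubectl get pods -l app=spring-demo-db
kubectl exec -it <spring-demo-db-pod-name> -- psql -U springdemo -d springdemo -c '\du'

Note also that the postgres image only creates POSTGRES_USER while initializing an empty data directory, so if spring-demo-pv-claim already held data from an earlier run, the user would never be created regardless of the env vars.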