I have an example project with nginx, php-fpm, mysql:
https://gitlab.com/x-team-blog/docker-compose-php
This is the docker-compose file:
version: '3'
services:
  php-fpm:
    build:
      context: ./php-fpm
    volumes:
      - ../src:/var/www
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=mydb
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=docker
    volumes:
      - ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
I'd like to use Kubernetes in Google Cloud and for this I have created several files:
PersistentVolumeClaim database-claim0-persistentvolumeclaim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 400Mi
I also created a secret:
kubectl create secret generic mysql --from-literal=MYSQL_DATABASE=mydb --from-literal=MYSQL_PASSWORD=secret --from-literal=MYSQL_ROOT_PASSWORD=docker --from-literal=MYSQL_USER=myuser
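The created secret can be checked afterwards (the values stay base64-encoded), for example with:
kubectl get secret mysql -o yaml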
Then I created a service:
apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  type: ClusterIP
  ports:
  - port: 3306
  selector:
    app: database
And now I'd like to create database-deployment.yaml, but I don't know what to write in the volumeMounts section to copy the SQL file the way docker-compose does (a possible approach is sketched after the manifest below):
- ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
database-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    app: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - image: mariadb
        name: database
        env:
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql
              key: MYSQL_DATABASE
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: MYSQL_PASSWORD
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql
              key: MYSQL_USER
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: database-claim0
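As for the ./database/data.sql bind mount, one commonly used approach (a sketch, not part of the original post; the ConfigMap name mysql-initdb is made up) is to put the SQL file into a ConfigMap and mount it under /docker-entrypoint-initdb.d, which the MariaDB image executes when it initializes an empty data directory:
kubectl create configmap mysql-initdb --from-file=./database/data.sql
Then the Deployment would mount it next to the existing data volume:
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: mysql-initdb
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: database-claim0
      - name: mysql-initdb
        configMap:
          name: mysql-initdb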
UPD:
My mistake. I did not push images first.
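For anyone hitting the same problem: pushing the locally built images to a registry the cluster can pull from (Google Container Registry here; the project ID and tag are placeholders) looks roughly like this:
gcloud auth configure-docker
docker build -t gcr.io/<project-id>/php-fpm:latest ./php-fpm
docker push gcr.io/<project-id>/php-fpm:latest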
Use kompose
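Running the conversion in the directory that holds the compose file (assuming it is named docker-compose.yml) produces the files listed below:
kompose convert -f docker-compose.yml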
INFO Kubernetes file "nginx-service.yaml" created
INFO Kubernetes file "database-deployment.yaml" created
INFO Kubernetes file "database-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-deployment.yaml" created
INFO Kubernetes file "nginx-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim2-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim3-persistentvolumeclaim.yaml" created
INFO Kubernetes file "php-fpm-deployment.yaml" created
INFO Kubernetes file "php-fpm-claim0-persistentvolumeclaim.yaml" created
There is also an online tool that helps convert most of a Docker Compose file into Kubernetes resources.
Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL instance as the DB.
This requires two containers (one for SonarQube and the other for the Cloud SQL proxy in order to connect to the DB).
The SonarQube container, however, also requires an init container to set some special memory requirements.
When I create the deployment with just the SonarQube image and the init container it works fine, but this won't be of any use as I need the Cloud SQL proxy container to connect to my external DB. When I add this container, though, the deployment suddenly errors with the below:
deirdrerodgers@cloudshell:~ (meta-gear-306013)$ kubectl create -f initsonar.yaml
The Deployment "sonardeploy" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "init-sysctl"
This is my complete YAML file with the init container and the other two containers. I wonder if the issue is that it doesn't know which container to apply the init container to?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.32
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        resources:
          {}
        command: ["sh",
          "-e",
          "/tmp/scripts/init_sysctl.sh"]
        volumeMounts:
        - name: init-sysctl
          mountPath: /tmp/scripts/
      volumes:
      - name: init-sysctl
        configMap:
          name: sonarqube-sonarqube-init-sysctl
          items:
          - key: init_sysctl.sh
            path: init_sysctl.sh
    spec:
      containers:
      - image: sonarqube:7.8-community
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:postgresql://localhost:5432/sonar
        ports:
        - containerPort: 9000
          name: sonarqube
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command: ["/cloud_sql_proxy",
          "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
          "-credential_file=/secrets/service_account.json"]
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: cloudsql-instance-credentials-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials-volume
        secret:
          secretName: cloudsql-instance-credentials
Your YAML file is incorrect: you have two spec: blocks where there should be only one, so you need to combine them. Under the spec block there should be an initContainers block, then containers, and finally a volumes block. Look at the corrected YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.32
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        resources:
          {}
        command: ["sh",
          "-e",
          "/tmp/scripts/init_sysctl.sh"]
        volumeMounts:
        - name: init-sysctl
          mountPath: /tmp/scripts/
      containers:
      - image: sonarqube:7.8-community
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sonarsecret
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:postgresql://localhost:5432/sonar
        ports:
        - containerPort: 9000
          name: sonarqube
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command: ["/cloud_sql_proxy",
          "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
          "-credential_file=/secrets/service_account.json"]
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: cloudsql-instance-credentials-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials-volume
        secret:
          secretName: cloudsql-instance-credentials
      - name: init-sysctl
        configMap:
          name: sonarqube-sonarqube-init-sysctl
          items:
          - key: init_sysctl.sh
            path: init_sysctl.sh
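If you want to validate the combined manifest before creating it, a server-side dry run performs the same API validation without persisting anything:
kubectl apply --dry-run=server -f initsonar.yaml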
I have the same issue that I have seen other users report about permissions on the MySQL data folder in the Percona image, but I have it in Kubernetes, and I am not sure exactly how I can chown the volume before the container starts.
This is the yaml:
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: db
    k8s-app: magento
spec:
  selector:
    app: db
  ports:
  - name: db
    port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  template:
    metadata:
      labels:
        app: db
        k8s-app: magento
    spec:
      containers:
      - args:
        - --max_allowed_packet=134217728
        - "--ignore-db-dir=lost+found"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
        env:
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_NAME
        - name: MYSQL_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_PASS
        - name: MYSQL_USER
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_USER
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_ROOT_PASS
        image: percona:5.7
        name: db
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
      restartPolicy: Always
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
The same issue, but in Docker:
Docker-compose: mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
How to fix it in Kubernetes?
I found this solution and it works:
initContainers:
- name: take-data-dir-ownership
  image: alpine:3
  # Give the `mysql` user permissions on the mounted volume
  # https://stackoverflow.com/a/51195446/4360433
  command:
  - chown
  - -R
  - 999:999
  - /var/lib/mysql
  volumeMounts:
  - name: data
    mountPath: /var/lib/mysql
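As an alternative to the init container, a pod-level securityContext with fsGroup lets Kubernetes itself adjust the group ownership of the mounted volume (999 is the mysql user/group used by the Percona image; verify it for your image version):
spec:
  securityContext:
    fsGroup: 999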
I want to connect to a Postgres instance that is running in a pod in GKE.
I think one way to achieve this is with kubectl port forwarding.
Locally I have Docker Desktop, and when I apply the YAML files I am able to connect to the database. The YAMLs I am using in GKE are almost identical.
secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: staging
  name: postgres-secrets
type: Opaque
data:
  MYAPPAPI_DATABASE_NAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_USERNAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_PASSWORD: XXXENCODEDXXX
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: staging
  name: db-data-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql/data"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: staging
  name: db-data-pvc
spec:
  storageClassName: generic
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgres-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
      - name: postgres-db
        image: postgres:12.4
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-db
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: MYAPPAPI_DATABASE_USERNAME
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: MYAPPAPI_DATABASE_NAME
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: MYAPPAPI_DATABASE_PASSWORD
      volumes:
      - name: postgres-db
        persistentVolumeClaim:
          claimName: db-data-pvc
svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgresdb-service
spec:
  type: ClusterIP
  selector:
    app: postgres-db
  ports:
  - port: 5432
and it seems that everything is working.
Then I execute kubectl port-forward postgres-db-podname 5433:5432 -n staging, and when I try to connect it throws:
FATAL: role "myappuserdb" does not exist
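(For reference, a client connection over the forwarded port would look something like the following; the user and database names are the decoded values from the secret:)
psql -h 127.0.0.1 -p 5433 -U myappuserdb <database-name>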
UPDATE 1
This is from the GKE YAML:
spec:
  containers:
  - env:
    - name: POSTGRES_DB
      valueFrom:
        secretKeyRef:
          key: MYAPPAPI_DATABASE_NAME
          name: postgres-secrets
    - name: POSTGRES_USER
      valueFrom:
        secretKeyRef:
          key: MYAPPAPI_DATABASE_USERNAME
          name: postgres-secrets
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          key: MYAPPAPI_DATABASE_PASSWORD
          name: postgres-secrets
UPDATE 2
I will explain what happened and how I solved it.
The first time I applied the files with kubectl apply -f k8s/, the environment variable POSTGRES_USER in the deployment was referencing the wrong secret key, MYAPPAPI_DATABASE_NAME, when it should have referenced MYAPPAPI_DATABASE_USERNAME.
After that first time, every time I ran kubectl delete -f k8s/ the resources were deleted. However, when I created the resources again, the data created in the previous step was not cleaned up.
I deleted the cluster, created a new one, and everything worked. I still need to check if there is a way to clean the data in a Kubernetes volume.
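Regarding cleaning the data: because the PersistentVolume above is a hostPath volume (and statically created PVs default to the Retain reclaim policy), deleting the resources with kubectl delete -f k8s/ does not wipe the directory on the node, so the old database files are picked up again on the next deploy. To actually drop the data you would also have to remove the hostPath directory on the node itself, for example:
# on the node that hosted the pod (e.g. over SSH)
sudo rm -rf /var/lib/postgresql/data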
In your deployment's env spec you have assigned the wrong value for POSTGRES_USER: you have assigned POSTGRES_USER = MYAPPAPI_DATABASE_NAME,
but it should be POSTGRES_USER = MYAPPAPI_DATABASE_USERNAME.
env:
- name: POSTGRES_USER
  valueFrom:
    secretKeyRef:
      name: postgres-secrets
      key: MYAPPAPI_DATABASE_NAME  # <<< this is the key that needs to change >>>
Please try this one:
env:
- name: POSTGRES_USER
  valueFrom:
    secretKeyRef:
      name: postgres-secrets
      key: MYAPPAPI_DATABASE_USERNAME
I'm creating a micro-service API that I want to run on a local Raspberry Pi k3s cluster.
The aim is to use Skaffold to deploy during development.
The problem I have is that every time I use skaffold dev I get the same error:
deployment/epos-auth-deploy: container epos-auth is waiting to start: 192.168.1.10:8080/epos_auth:05ea8c1@sha256:4e7f7c7224ce1ec4831627782ed696dee68b66929b5641e9fc6cfbfc4d2e82da can't be pulled
I've tried to set up a local Docker registry, which is defined with this docker-compose.yaml file:
version: '2.0'
services:
  registry:
    image: registry:latest
    volumes:
      - ./registry-data:/var/lib/registry
    networks:
      - registry-ui-net
  ui:
    image: joxit/docker-registry-ui:static
    ports:
      - 8080:80
    environment:
      - REGISTRY_TITLE=Docker Registry
      - REGISTRY_URL=http://registry:5000
    depends_on:
      - registry
    networks:
      - registry-ui-net
networks:
  registry-ui-net:
It's working on http://192.168.1.10:8080 on my local network.
It seems OK when it builds and pushes the image.
I have also set /etc/docker/daemon.json on my local computer:
{
  "insecure-registries": ["192.168.1.10:8080"],
  "registry-mirrors": ["http://192.168.1.10:8080"]
}
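After editing /etc/docker/daemon.json the Docker daemon needs to be restarted for the change to take effect:
sudo systemctl restart docker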
I've set the /etc/rancher/k3s/registries.yaml on all the nodes:
mirrors:
  docker.io:
    endpoint:
      - "http://192.168.1.10:8080"
skaffold.yaml looks like this:
apiVersion: skaffold/v2alpha3
kind: Config
metadata:
  name: epos-skaffold
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/skaffold/*
build:
  local:
    useBuildkit: true
  artifacts:
    - image: epos_auth
      context: epos-auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
And ./infra/k8s/skaffold/epos-auth-deploy.yaml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: epos-auth-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: epos-auth
  template:
    metadata:
      labels:
        app: epos-auth
    spec:
      containers:
      - name: epos-auth
        image: epos_auth
        env:
        - name: NATS_URL
          value: http://nats-srv:4222
        - name: NATS_CLUSTER_ID
          value: epos
        - name: NATS_CLIENT_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-keys
              key: key
        - name: JWT_PRIVATE_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-keys
              key: private_key
        - name: JWT_PUBLIC_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-keys
              key: public_key
        - name: MONGO_URI
          value: mongodb://mongodb-srv:27017/auth-service
---
apiVersion: v1
kind: Service
metadata:
  name: epos-auth-srv
spec:
  selector:
    app: epos-auth
  ports:
  - name: epos-auth
    protocol: TCP
    port: 3000
    targetPort: 3000
I did:
skaffold config set default-repo 192.168.1.10:8080
skaffold config set insecure-registries 192.168.1.10:8080
I really don't know what's wrong with this.
Do you have any clue, please?
Please change the docker.io reference, as it is confusing:
mirrors:
  docker.io:
    endpoint:
      - "http://192.168.1.10:8080"
And use the image with the full registry path, like:
registry.local:5000/epos_auth
https://devopsspiral.com/articles/k8s/k3d-skaffold/
I installed docker-ce on the master node. Then I created the master node with the --docker parameter in the command.
When I created the k3s node I defined a context, and I forgot to replace the default context in ~/.skaffold/config.
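If it helps anyone else: Skaffold keeps per-kube-context settings, so the default repo can be set for the k3s context explicitly (the context name below is a placeholder), for example:
skaffold config set --kube-context <k3s-context-name> default-repo 192.168.1.10:8080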
For some reason, the Postgres instance isn't being locked down with a password when using the following Kubernetes manifest.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - resources:
        image: postgres:9.4
        name: postgres
        env:
        - name: DB_PASS
          value: password
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-persistent-storage
      volumes:
      - name: postgres-persistent-storage
        gcePersistentDisk:
          pdName: postgres-disk
          fsType: ext4
Any ideas?
According to the Docker Hub documentation for the postgres image, you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASS.
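In terms of the manifest above, that means replacing the DB_PASS entry, roughly like this (the value is kept from the question purely for illustration):
env:
- name: POSTGRES_PASSWORD
  value: password
- name: PGDATA
  value: /var/lib/postgresql/data/pgdata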