Errors trying to launch Postgres on a local kubernetes cluster - postgresql

I am trying to experimentally run a Postgres Service on a local Kubernetes cluster consisting of 2 Ubuntu-18.04 machines.
My postgres pod is stuck in ContainerCreating, and kubectl describe pod postgres gave me this message:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14s default-scheduler Successfully assigned default/postgres-57b4695bc9-8wklp to cumulusg2
Warning FailedCreatePodSandBox 11s kubelet, cumulusg2 Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "832ee34a687d8a1aabb92b57ec6b6b5b8d5f55889c996c2bd4bc4ddcb106fdd2" network for pod "postgres-57b4695bc9-8wklp": networkPlugin cni failed to set up pod "postgres-57b4695bc9-8wklp_default" network: error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), failed to clean up sandbox container "832ee34a687d8a1aabb92b57ec6b6b5b8d5f55889c996c2bd4bc4ddcb106fdd2" network for pod "postgres-57b4695bc9-8wklp": networkPlugin cni failed to teardown pod "postgres-57b4695bc9-8wklp_default" network: error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
Normal SandboxChanged 8s (x2 over 9s) kubelet, cumulusg2 Pod sandbox changed, it will be killed and re-created.
The error message confuses me and I am not sure where to start, so I'll lay out my process up to this point.
To initialize the cluster, I used
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
then the kubeadm join command, and after that:
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"
To create the Postgres Database, I used 3 yaml files:
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123
postgres-volumes.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
and postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres

Related

CreateContainerError while creating postgresql in k8s

I'm trying to run a PostgreSQL database on Kubernetes. There are no errors when creating everything from the file, but the pod in the deployment can't create its container.
Here is my YAML code:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: adminpassword
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.18
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: postgres
After running:
kubectl create -f filename
I got:
configmap/postgres-config created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
deployment.apps/postgres created
service/postgres created
But when I type:
kubectl get pods
There is an error:
postgres-78496cc865-85kt7 0/1 CreateContainerError 0 13m
These are the PV and PVC; there was no more space in the question to add them as code :)
If you describe the pod, you'll see the warning message there:
Warning FailedScheduling 45s (x2 over 45s) default-scheduler persistentvolumeclaim "postgres-pv-claim" not found
On a high level, a database instance can run within a Kubernetes container. A database instance stores data in files, and the files are stored in persistent volume claims. A PersistentVolumeClaim must be created and made available to the PostgreSQL instance. To create the database instance as a container, you use a Deployment configuration. To provide an access interface that is independent of the particular container, you create a Service that gives access to the database. The Service remains unchanged even if a container (or pod) is moved to a different node.
In your case, create a PVC resource and bind it to a PV so that it can be used by the pod. Since the pod currently cannot find the claim, it goes into a pending state. This can be achieved in multiple ways; for example, you can use hostPath as the local storage:
$ k get pods
NAME READY STATUS RESTARTS AGE
postgres-795cfcd67b-khfgn 1/1 Running 0 18s
Sample PV and PVC configs as below,
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nautilus
spec:
  storageClassName: manual
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/mohan"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
You can check the Persistent Volumes doc for more details. Also, read more about storage classes and StatefulSets for deploying database applications in a Kubernetes cluster.
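To confirm the claim actually binds before the pod starts, a quick check (assuming the resource names from the sample above) would be:
$ kubectl get pv,pvc
# the PVC should show STATUS "Bound" and reference the PV;
# if it stays "Pending", the storageClassName or accessModes of the PV and PVC do not match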
Thanks to all who tried to help me! The problem was in PersistentVolume.spec.hostPath.path: there was an invalid character in the path. I had tried to use "./path".
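For reference, as far as I know hostPath expects an absolute path, so a relative value like "./path" is rejected; a spec along these lines is accepted:
  hostPath:
    path: "/mnt/data"   # must be an absolute path, not "./path"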

mkdir /mnt/data: read-only file system Back-off restarting failed postgres container

I'm new to Kubernetes. I tried to apply a YAML file to create Postgres in GKE, and I'm getting this error: "Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system", followed by "Back-off restarting failed container".
I think I need to give permissions as RWX. When I tried to log in to the pod (i.e. inside the container), it did not allow me to log in.
Can anyone please help me?
This is my Yaml file for Postgres:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      nodePort: 30432
  type: NodePort
  selector:
    app: postgres
In your PersistentVolume you are using type: local, which means you want to create a directory in /mnt. Local volumes also do not support dynamic volume provisioning. If you SSH to any of your nodes, you will find that this folder is a read-only file system.
/mnt $ mkdir something
mkdir: cannot create directory ‘something’: Read-only file system
As the fastest workaround, you could simply change this in your PV YAML:
    - ReadWriteMany
  hostPath:
    path: /mnt/data
To:
    - ReadWriteMany
  hostPath:
    path: /var/lib/data
Example:
$ kubectl apply -f pv-pvc.yaml
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
$ kubectl apply -f pos.yaml
deployment.apps/postgres created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
postgres-65d9cbd495-pcqf5 1/1 Running 0 2s
$ kubectl exec -ti postgres-65d9cbd495-pcqf5 -- /bin/bash
root@postgres-65d9cbd495-pcqf5:/# cd /var/lib/postgresql/data
root@postgres-65d9cbd495-pcqf5:/var/lib/postgresql/data# ls
base pg_commit_ts pg_hba.conf pg_logical pg_notify pg_serial pg_stat pg_subtrans pg_twophase pg_wal postgresql.auto.conf postmaster.opts
global pg_dynshmem pg_ident.conf pg_multixact pg_replslot pg_snapshots pg_stat_tmp pg_tblspc PG_VERSION pg_xact postgresql.conf postmaster.pid
root@postgres-65d9cbd495-pcqf5:/var/lib/postgresql/data# echo "Hello from postgress pod" > data.txt
root@postgres-65d9cbd495-pcqf5:/var/lib/postgresql/data# cat data.txt
Hello from postgress pod
Now if you SSH to the node hosting this pod, you will be able to reach this folder and its files.
user@gke-cluster-1-default-pool-463f9615-gxhl ~ $ sudo su
gke-cluster-1-default-pool-463f9615-gxhl /home/user # cd /var/lib/data
gke-cluster-1-default-pool-463f9615-gxhl /var/lib/data # ls
PG_VERSION pg_dynshmem pg_notify pg_stat_tmp pg_xact
base pg_hba.conf pg_replslot pg_subtrans postgresql.auto.conf
data.txt pg_ident.conf pg_serial pg_tblspc postgresql.conf
global pg_logical pg_snapshots pg_twophase postmaster.opts
pg_commit_ts pg_multixact pg_stat pg_wal postmaster.pid
gke-cluster-1-default-pool-463f9615-gxhl /var/lib/data # cat data.txt
Hello from postgress pod
EDIT
The YAMLs I've used:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - name: postgres
      port: 5432
      nodePort: 30432
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    app: postgres
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /var/lib/data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
configmap/postgres-config created
service/postgres created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
deployment.apps/postgres created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
postgres-65d9cbd495-wxx4h 1/1 Running 0 19s
If you're working with GKE, just create the PVC; it will automatically provision a PV that fits your needs.
I fixed my issue that way.
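A minimal sketch of that approach, assuming the cluster's default StorageClass (GKE ships one out of the box) handles dynamic provisioning and that ReadWriteOnce is enough for a single Postgres replica:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  # no storageClassName: the cluster's default StorageClass provisions a matching PV automatically
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
The Deployment can keep referencing claimName: postgres-pv-claim unchanged.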

Error: failed to prepare subPath for volumeMount "postgres-storage" of container "postgres"

I am trying to use persistent volume claims and am facing this issue.
This is my postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
              subPath: postgres
When I debug the pod using describe:
kubectl describe pod postgres-deployment-8576df7bfc-8mp5t
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m4s default-scheduler Successfully assigned default/postgres-deployment-8576df7bfc-8mp5t to docker-desktop
Normal Pulled 67s (x8 over 2m58s) kubelet, docker-desktop Successfully pulled image "postgres"
Warning Failed 67s (x8 over 2m58s) kubelet, docker-desktop Error: failed to prepare subPath for volumeMount "postgres-storage" of container "postgres"
Normal Pulling 53s (x9 over 3m3s) kubelet, docker-desktop Pulling image "postgres"
My pod is showing this error:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-8576df7bfc-8mp5t 0/1 CreateContainerConfigError 0 5m5
I am not sure where the problem is in the config, but I am sure it is related to volumes, because the problem appeared after adding them.
Remove the subPath. Can you try the YAML below?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
I just deployed it and it works:
master $ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
postgres-deployment 1/1 1 1 4m13s
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
postgres-deployment-6b66bdd748-5q76h 1/1 Running 0 4m13s

Setting GCP FileStorage and Kubernetes

How do you mount Filestore to a Kubernetes pod in GCP?
I followed the documentation, but the pods are still pending.
I did:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <some name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /
    server: <filestorage_ip with this format xx.xxx.xxx.xx>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <some name>
  namespace: <some name>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: <some name>
  name: <some name>
  labels:
    app: <some name>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: <some name>
  template:
    metadata:
      labels:
        app: <some name>
    spec:
      containers:
        - name: <some name>
          image: gcr.io/somepath/<some name>@sha256:<some hash>
          ports:
            - containerPort: 80
          volumeMounts:
            - name: <some name>
              mountPath: /var/www/html
          imagePullPolicy: Always
      restartPolicy: Always
      volumes:
        - name: <some name>
          persistentVolumeClaim:
            claimName: <some name>
            readOnly: false
Running kubectl -n <some name> describe pods returns:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 23m (x52 over 3h21m) kubelet, gke-<some name>-default-pool-<some hash> Unable to mount volumes for pod "<some name>-<some hash>_<some name>(<some hash>)": timeout expired waiting for volumes to attach or mount for pod "<some name>"/"<some name>-<some hash>". list of unmounted volumes=[<some name>-persistent-storage]. list of unattached volumes=[<some name>-persistent-storage default-token-<some hash>]
Warning FailedMount 3m5s (x127 over 3h21m) kubelet, gke-<some name>-default-pool-<some hash> (combined from similar events): MountVolume.SetUp failed for volume "<some name>-storage" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/<some path>/volumes/kubernetes.io~nfs/<some name>-storage --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs <filestorage_ip with this format xx.xxx.xxx.xx>:/ /var/lib/kubelet/pods/<some hash>/volumes/kubernetes.io~nfs/<some name>-storage
Output: Running scope as unit: run-<some hash>.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs <filestorage_ip with this format xx.xxx.xxx.xx>:/ /var/lib/kubelet/pods/<some hash>/volumes/kubernetes.io~nfs/<some name>-storage]
Output: mount.nfs: access denied by server while mounting <filestorage_ip with this format xx.xxx.xxx.xx>:/
It seems that the pod can't access the IP of the Filestore service.
The documentation says that it needs to be on the same VPC:
"Authorized network *
Filestore instances can only be accessed from machines on an authorized VPC network. Select the network from which you need access."
But I don't know how to add the Kubernetes cluster to the VPC
Any suggestions?
I found the problem.
The PersistentVolume can't be mounted with path: /.
It needs the fileshare name, from the "Fileshare properties" field that you fill in when creating the instance.
Now it works with multiple pods!
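For anyone hitting the same thing, a sketch of the corrected PV, with <fileshare_name> standing in for whatever was entered under "Fileshare properties" when the Filestore instance was created:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <some name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # mount the named fileshare, not the root of the NFS server
    path: /<fileshare_name>
    server: <filestorage_ip with this format xx.xxx.xxx.xx>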

gitlab-runner on a kubernetes cluster error while creating mount source path '/usr/share/ca-certificates/mozilla'

I'm trying to get gitlab-runner to run on a Kubernetes cluster. After following the official doc -> https://docs.gitlab.com/runner/install/kubernetes.html (using the kubernetes executor), I'm getting an error once I deploy:
Error: failed to start container "gitlab-runner": Error response from
daemon: error while creating mount source path
'/usr/share/ca-certificates/mozilla': mkdir
/usr/share/ca-certificates/mozilla: read-only file system
I'm using the examples from that page and can't figure out why it isn't allowed to create that dir (as I understand it, the default user is root).
Here is my config-map.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner
  namespace: gitlab
data:
  config.toml: |
    concurrent = 1
    [[runners]]
      name = "Kubernetes Runner"
      url = "URL"
      token = "TOKEN"
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab"
and this is the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      containers:
        - args:
            - run
          image: gitlab/gitlab-runner:alpine-v11.5.0
          imagePullPolicy: Always
          name: gitlab-runner
          volumeMounts:
            - mountPath: /etc/gitlab-runner
              name: config
            - mountPath: /etc/ssl/certs
              name: cacerts
              readOnly: true
      restartPolicy: Always
      volumes:
        - configMap:
            name: gitlab-runner
          name: config
        - hostPath:
            path: /usr/share/ca-certificates/mozilla
          name: cacerts
Here is the complete list of events initializing the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned gitlab-runner-5b689c7cbc-hw6r5 to gke-my-project-dev-default-pool-0d32b263-6skk
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "cacerts"
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "default-token-6hr2h"
Normal Pulling 23s (x2 over 28s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk pulling image "gitlab/gitlab-runner:alpine-v11.5.0"
Normal Pulled 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Successfully pulled image "gitlab/gitlab-runner:alpine-v11.5.0"
Normal Created 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Created container
Warning Failed 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Error: failed to start container "gitlab-runner": Error response from daemon: error while creating mount source path '/usr/share/ca-certificates/mozilla': mkdir /usr/share/ca-certificates/mozilla: read-only file system
Warning BackOff 14s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Back-off restarting failed container
Any clue will be appreciated
Thanks
From the logs, I am guessing you are using GKE. Google mounts your / file system as read-only for security (see here). That's why you are getting the error.
Try enabling privileged mode for the container:
containers:
  - securityContext:
      privileged: true
If that does not work, then change /usr/share/ca-certificates/mozilla to /var/SOMETHING (not sure this is a good solution). If there are files in /usr/share/ca-certificates/mozilla, then move/copy them to /var/SOMETHING.
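A sketch of what the adjusted volume could look like, keeping the /var/SOMETHING placeholder from above and assuming the certificate files have already been copied there on the node:
volumes:
  - configMap:
      name: gitlab-runner
    name: config
  - hostPath:
      path: /var/SOMETHING   # writable path on the node instead of the read-only /usr/share location
    name: cacerts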
Finally, I got it working. Here is what I use to register and run the gitlab-runner on GKE.
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-cm
  namespace: gitlab
data:
  config.toml: |
    concurrent = 4
    check_interval = 30
  entrypoint: |
    #!/bin/bash
    set -xe
    cp /scripts/config.toml /etc/gitlab-runner/
    # Register the runner
    /entrypoint register --non-interactive \
      --url $GITLAB_URL \
      --tag-list "kubernetes, my_project" \
      --kubernetes-image "alpine:latest" \
      --kubernetes-namespace "gitlab" \
      --executor kubernetes \
      --config "/etc/gitlab-runner/config.toml" \
      --locked=false \
      --run-untagged=true \
      --description "My Project - Kubernetes Runner" \
      --kubernetes-privileged
    # Start the runner
    /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner \
      --config "/etc/gitlab-runner/config.toml"
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab-runner
  template:
    metadata:
      labels:
        app: gitlab-runner
    spec:
      containers:
        - name: gitlab-runner
          image: gitlab/gitlab-runner:latest
          command: ["/bin/bash", "/scripts/entrypoint"]
          env:
            - name: GITLAB_URL
              value: "URL"
            - name: REGISTRATION_TOKEN
              value: "TOKEN"
            - name: KUBERNETES_NAMESPACE
              value: gitlab
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          imagePullPolicy: Always
          volumeMounts:
            - name: config
              mountPath: /scripts
            - name: google-cloud-key
              mountPath: /var/secrets/google
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: gitlab-runner-cm
        - name: google-cloud-key
          secret:
            secretName: gitlab-runner-sa
And Autoscaling:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-runner-hpa
  namespace: gitlab
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-runner
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
I hope this helps someone trying to run a Gitlab Runner in a Kubernetes Cluster on Google Kubernetes Engine