kubectl exec permission denied - kubernetes

I have a pod running a MariaDB container and I would like to back up my database, but the command fails with Permission denied.
kubectl exec my-owncloud-mariadb-0 -it -- bash -c "mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase > owncloud-dbbackup_`date +"%Y%m%d"`.bak"
And the result is
bash: owncloud-dbbackup_20191121.bak: Permission denied
command terminated with exit code 1
I can't run sudo mysqldump because I get sudo: command not found.
I tried writing the backup file to different locations: /home, the directory where mysqldump is located, /usr, ...
Here is the yaml of my pod:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-11-20T14:16:58Z"
generateName: my-owncloud-mariadb-
labels:
app: mariadb
chart: mariadb-7.0.0
component: master
controller-revision-hash: my-owncloud-mariadb-77495ddc7c
release: my-owncloud
statefulset.kubernetes.io/pod-name: my-owncloud-mariadb-0
name: my-owncloud-mariadb-0
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: my-owncloud-mariadb
uid: 47f2a129-8d4e-4ae9-9411-473288623ed5
resourceVersion: "2509395"
selfLink: /api/v1/namespaces/default/pods/my-owncloud-mariadb-0
uid: 6a98de05-c790-4f59-b182-5aaa45f3b580
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: mariadb
release: my-owncloud
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- env:
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-root-password
name: my-owncloud-mariadb
- name: MARIADB_USER
value: myuser
- name: MARIADB_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-password
name: my-owncloud-mariadb
- name: MARIADB_DATABASE
value: mydatabase
image: docker.io/bitnami/mariadb:10.3.18-debian-9-r36
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: mariadb
ports:
- containerPort: 3306
name: mysql
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mariadb
name: data
- mountPath: /opt/bitnami/mariadb/conf/my.cnf
name: config
subPath: my.cnf
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-pbgxr
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: my-owncloud-mariadb-0
nodeName: 149.202.36.244
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
serviceAccount: default
serviceAccountName: default
subdomain: my-owncloud-mariadb
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: data
persistentVolumeClaim:
claimName: data-my-owncloud-mariadb-0
- configMap:
defaultMode: 420
name: my-owncloud-mariadb
name: config
- name: default-token-pbgxr
secret:
defaultMode: 420
secretName: default-token-pbgxr
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:33:22Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:34:03Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:34:03Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:33:22Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://3898b6a20bd8c38699374b7db7f04ccef752ffd5a5f7b2bc9f7371e6a27c963a
image: bitnami/mariadb:10.3.18-debian-9-r36
imageID: docker-pullable://bitnami/mariadb@sha256:a89e2fab7951c622e165387ead0aa0bda2d57e027a70a301b8626bf7412b9366
lastState: {}
name: mariadb
ready: true
restartCount: 0
state:
running:
startedAt: "2019-11-20T14:33:24Z"
hostIP: 149.202.36.244
phase: Running
podIP: 10.42.2.56
qosClass: BestEffort
startTime: "2019-11-20T14:33:22Z"
Is there something I'm missing?

You might not have permission to write to that location inside the container. Try the command below, which uses /tmp (or some other location where you can dump the backup file):
kubectl exec my-owncloud-mariadb-0 -it -- bash -c "mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase > /tmp/owncloud-dbbackup_`date +"%Y%m%d"`.bak"
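Alternatively, you can avoid writing inside the container at all and let kubectl exec stream the dump to your workstation. A minimal sketch using the same credentials as above; the redirection now happens locally, so drop -it since no TTY or stdin is needed:
kubectl exec my-owncloud-mariadb-0 -- mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase > owncloud-dbbackup_$(date +"%Y%m%d").sql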

Given the pod YAML file you've shown, you can't usefully use kubectl exec to make a database backup.
You're getting a shell inside the pod and running mysqldump there to write out the dump file somewhere else inside the pod. You can't write it to the secret directory or the configmap directory, so your essential choices are either to write it to the pod filesystem (which will get deleted as soon as the pod exits, including if Kubernetes decides to relocate the pod within the cluster) or the mounted database directory (and your backup will survive exactly as long as the data it's backing up).
I'd run mysqldump from outside the pod. One good approach would be to create a separate Job that mounted some sort of long-term storage (or relied on external object storage; if you're running on AWS, for example, S3), connected to the database pod, and ran the backup that way. That has the advantage of being fairly self-contained (so you can debug it without interfering with your live database) and also totally automated (you could launch it from a Kubernetes CronJob).
kubectl exec doesn't seem to have the same flags docker exec does to control the user identity, so you're dependent on there being some path inside the container that its default user can write to. /tmp is typically world-writable so if you just want that specific command to work I'd try putting the dump file into /tmp/owncloud-dbbackup_....
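As a rough sketch of that Job/CronJob idea (not a drop-in manifest): it assumes the database is reachable through a my-owncloud-mariadb service, that the existing my-owncloud-mariadb secret holds the password as in the pod above, and that a dedicated backup PVC exists (the claim name below is hypothetical).
apiVersion: batch/v1        # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: mariadb-backup      # hypothetical name
spec:
  schedule: "0 2 * * *"     # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: mysqldump
            # reuse the database image so the mysqldump binary is available
            image: docker.io/bitnami/mariadb:10.3.18-debian-9-r36
            command:
            - sh
            - -c
            - mysqldump --single-transaction -h my-owncloud-mariadb -u myuser -p"$MARIADB_PASSWORD" mydatabase > /backup/owncloud-dbbackup_$(date +%Y%m%d).sql
            env:
            - name: MARIADB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-owncloud-mariadb
                  key: mariadb-password
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: mariadb-backup   # hypothetical PVC dedicated to backups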

Related

terminationGracePeriodSeconds not shown in kubectl describe result

When I create a Pod with terminationGracePeriodSeconds specified in the spec, I can't check whether this setting has been applied using kubectl describe. How can I check whether the terminationGracePeriodSeconds option has been successfully applied? I'm running Kubernetes version 1.19.
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client
spec:
  serviceAccountName: test
  terminationGracePeriodSeconds: 60
  containers:
  - name: mysql-cli
    image: blah
    command: ["/bin/sh", "-c"]
    args:
    - sleep 2000
  restartPolicy: OnFailure
Assuming the pod is running successfully, you should be able to see the setting in the manifest.
terminationGracePeriodSeconds is available in v1.19 as per the following page; search for "terminationGracePeriodSeconds" there:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/
Now try this command:
kubectl get pod mysql-client -o yaml | grep terminationGracePeriodSeconds -A10 -B10
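If you just want that single field, jsonpath works as well:
kubectl get pod mysql-client -o jsonpath='{.spec.terminationGracePeriodSeconds}'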
How can I check whether terminationGracePeriodSeconds option has been successfully applied?
First of all, you need to make sure your pod has been created correctly. I will show this with an example. I have deployed a very simple pod using the following YAML:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
Then I run the command kubectl get pods:
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 9m1s
Everything is fine.
I can't check whether this spec has been applied successfully using kubectl describe.
That is also correct, because this command doesn't return information about the termination grace period. To find it, run kubectl get pod <your pod name> -o yaml. The result will be similar to the output below:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx","ports":[{"containerPort":80}]}],"terminationGracePeriodSeconds":60}}
creationTimestamp: "2022-01-11T11:34:58Z"
name: nginx
namespace: default
resourceVersion: "57260566"
uid: <MY-UID>
spec:
containers:
- image: nginx:1.14.2
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: <name>
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: <my-node-name>
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-nj88r
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-01-11T11:35:01Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-01-11T11:35:07Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-01-11T11:35:07Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-01-11T11:35:01Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://<ID>
image: docker.io/library/nginx:1.14.2
imageID: docker.io/library/nginx@sha256:<sha256>
lastState: {}
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-01-11T11:35:06Z"
hostIP: <IP>
phase: Running
podIP: <IP>
podIPs:
- ip: <IP>
qosClass: BestEffort
startTime: "2022-01-11T11:35:01Z"
The most important part will be here:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx","ports":[{"containerPort":80}]}],"terminationGracePeriodSeconds":60}}
and here
terminationGracePeriodSeconds: 60
At this point you can be sure that terminationGracePeriodSeconds has been applied successfully.

Editing a running pod runAsUser to 1010 is taken as root

I tried editing the running pod's runAsUser to 1010 but I am unable to do so; it keeps running as root. Do I need to edit or delete some more fields in order to run this correctly as user 1010?
However, if I create the YAML from scratch and put runAsUser there, it is interpreted correctly.
Running the code below shows that the user is root, even though I have set runAsUser to 1010:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
  namespace: default
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    imagePullPolicy: Always
    name: ubuntu
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-v9rcc
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-v9rcc
    secret:
      defaultMode: 420
      secretName: default-token-v9rcc
Despite runAsUser being set:
controlplane $ k exec ubuntu-sleeper -- whoami
root
In contrast, if I apply the YAML below from scratch, the container runs as UID 1010:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
  namespace: default
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    name: ubuntu-sleeper
controlplane $ k exec ubuntu-sleeper -- whoami
whoami: cannot find name for user ID 1010
The reason that the Pod runs as root is that the securityContext is listed twice in the podSpec. See lines 7 and 30 of the example file.
According to this issue on the Kubernetes GitHub project, the YAML and JSON parsers currently silently drop duplicate keys. In your case, Kubernetes is taking the second securityContext key, which is securityContext: {}.
It's quite frustrating, I've been there! Hope this helps. Keep an eye on that GitHub issue if you want to track any changes to the Kubernetes YAML parser that will make detecting duplicate keys easier in the future.
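A quick way to spot this before applying a manifest, assuming it is saved locally as pod.yaml, is to search for repeated keys:
grep -n securityContext pod.yaml
If two matches show up at the same indentation level under spec:, the later one silently wins.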

Pod is in pending stage (Error: FailedScheduling: nodes didn't match node selector)

I have a problem with one of the pods. It says that it is in a pending state.
If I describe the pod, this is what I can see:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 1m (x58 over 11m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 node(s) didn't match node selector
Warning FailedScheduling 1m (x34 over 11m) default-scheduler 0/6 nodes are available: 6 node(s) didn't match node selector.
If I check the logs, there is nothing there (the output is empty).
--- Update ---
This is my pod yaml file
apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: XXXXXXXXXXX
checksum/dashboards-config: XXXXXXXXXXX
creationTimestamp: 2020-02-11T10:15:15Z
generateName: grafana-654667db5b-
labels:
app: grafana-grafana
component: grafana
pod-template-hash: "2102238616"
release: grafana
name: grafana-654667db5b-tnrlq
namespace: monitoring
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: grafana-654667db5b
uid: xxxx-xxxxx-xxxxxxxx-xxxxxxxx
resourceVersion: "98843547"
selfLink: /api/v1/namespaces/monitoring/pods/grafana-654667db5b-tnrlq
uid: xxxx-xxxxx-xxxxxxxx-xxxxxxxx
spec:
containers:
- env:
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
key: xxxx
name: grafana
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: xxxx
name: grafana
- name: GF_INSTALL_PLUGINS
valueFrom:
configMapKeyRef:
key: grafana-install-plugins
name: grafana-config
image: grafana/grafana:5.0.4
imagePullPolicy: Always
name: grafana
ports:
- containerPort: 3000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /api/health
port: 3000
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
resources:
requests:
cpu: 200m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/grafana
name: config-volume
- mountPath: /var/lib/grafana/dashboards
name: dashboard-volume
- mountPath: /var/lib/grafana
name: storage-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-tqb6j
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- -c
- cp /tmp/config-volume-configmap/* /tmp/config-volume 2>/dev/null || true; cp
/tmp/dashboard-volume-configmap/* /tmp/dashboard-volume 2>/dev/null || true
image: busybox
imagePullPolicy: Always
name: copy-configs
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp/config-volume-configmap
name: config-volume-configmap
- mountPath: /tmp/dashboard-volume-configmap
name: dashboard-volume-configmap
- mountPath: /tmp/config-volume
name: config-volume
- mountPath: /tmp/dashboard-volume
name: dashboard-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-tqb6j
readOnly: true
nodeSelector:
nodePool: cluster
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 300
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- emptyDir: {}
name: config-volume
- emptyDir: {}
name: dashboard-volume
- configMap:
defaultMode: 420
name: grafana-config
name: config-volume-configmap
- configMap:
defaultMode: 420
name: grafana-dashs
name: dashboard-volume-configmap
- name: storage-volume
persistentVolumeClaim:
claimName: grafana
- name: default-token-tqb6j
secret:
defaultMode: 420
secretName: default-token-tqb6j
status:
conditions:
- lastProbeTime: 2020-02-11T10:45:37Z
lastTransitionTime: 2020-02-11T10:15:15Z
message: '0/6 nodes are available: 6 node(s) didn''t match node selector.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
Do you know how I should debug this further?
Solution: you can do one of two things to allow the scheduler to fulfil your pod creation request.
You can remove these lines from your pod YAML and create the pod again from scratch (if you need the selector for a reason, use the approach in step 2 instead):
nodeSelector:
  nodePool: cluster
or
You can add nodePool: cluster as a label to your nodes so the pod can be scheduled using the existing selector.
You can use this command to label a node:
kubectl label nodes <your node name> nodePool=cluster
Run the above command for each node (or only the nodes you want this pod to be able to schedule onto), replacing <your node name> with the node names from your cluster.
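You can then confirm the label is in place and see which nodes match the selector:
kubectl get nodes --show-labels
kubectl get nodes -l nodePool=cluster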
Your pod probably uses a node selector which cannot be fulfilled by the scheduler.
Check the pod spec for something like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  ...
  nodeSelector:
    disktype: ssd
And check whether your nodes are labeled accordingly.
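For example, to show that label as a column for every node (using the disktype key from the snippet above):
kubectl get nodes -L disktype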
The simplest option would be to use "nodeName" in the Pod yaml.
First, get the node where you want to run the Pod:
kubectl get nodes
Use the attribute below inside the Pod definition (YAML) so that the Pod is forced to run on that node only.
nodeName: seliiuvd05714
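As a minimal sketch, reusing the example node name from above (note that nodeName bypasses the scheduler entirely, so node selectors and labels are ignored):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: seliiuvd05714   # bind the pod directly to this node
  containers:
  - name: nginx
    image: nginx:1.14.2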

Kubernetes job debug command

I wrote a Job and I always get an init error. I have noticed that if I remove the command in question, everything goes fine and I do not get any init error.
My question is: how can I debug commands that need to run in the Job? I use kubectl describe pod, but all I can see is exit status code 2.
apiVersion: batch/v1
kind: Job
metadata:
  name: database-import
spec:
  template:
    spec:
      initContainers:
      - name: download-dump
        image: google/cloud-sdk:alpine
        command:   ##### ERROR HERE!!!
        - bash
        - -c
        - "gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz"
        volumeMounts:
        - name: application-default-credentials
          mountPath: "/secrets/"
          readOnly: true
        - name: data
          mountPath: "/data/"
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/application_default_credentials.json
      containers:
      - name: database-import
        image: postgres:9.6-alpine
        command:
        - bash
        - -c
        - "gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch "
        env:
        - name: PGPASSWORD
          value: password
        volumeMounts:
        - name: data
          mountPath: "/data/"
      volumes:
      - name: application-default-credentials
        secret:
          secretName: application-default-credentials
      - name: data
        emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4
And this is the job describe:
Name: database-import
Namespace: sbg
Selector: controller-uid=a70d74a2-f596-11e9-a7fe-025000000001
Labels: app.kubernetes.io/managed-by=tilt
Annotations: <none>
Parallelism: 1
Completions: 1
Start Time: Wed, 23 Oct 2019 15:11:40 +0200
Pods Statuses: 1 Running / 0 Succeeded / 3 Failed
Pod Template:
Labels: app.kubernetes.io/managed-by=tilt
controller-uid=a70d74a2-f596-11e9-a7fe-025000000001
job-name=database-import
Init Containers:
download-dump:
Image: google/cloud-sdk:alpine
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /secrets/application_default_credentials.json
Mounts:
/data/ from data (rw)
/secrets/ from application-default-credentials (ro)
Containers:
database-import:
Image: postgres:9.6-alpine
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W
spy_ch
Environment:
PGPASSWORD: password
Mounts:
/data/ from data (rw)
Volumes:
application-default-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: application-default-credentials-464thb4k85
Optional: false
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m5s job-controller Created pod: database-import-9tsjw
Normal SuccessfulCreate 119s job-controller Created pod: database-import-g68ld
Normal SuccessfulCreate 109s job-controller Created pod: database-import-8cx6v
Normal SuccessfulCreate 69s job-controller Created pod: database-import-tnjnh
The command to see the log of an init container that ran in a Job is:
kubectl logs -f <pod name> -c <initContainer name>
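For the Job in this question, that would be, for example (taking one of the pod names from the describe output above; the init container is named download-dump):
kubectl logs -f database-import-9tsjw -c download-dump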
You can check the logs using:
kubectl logs <pod name>
where the pod belongs to a completed or running Job.
The logs give you a better idea of the error, so you can debug the Job while it runs on Kubernetes.
If you are using a Kubernetes cluster on GKE and have Stackdriver monitoring enabled, you can use that for debugging as well.
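If you don't know the pod names, you can list the pods a Job created through the job-name label that the job controller adds automatically:
kubectl get pods -l job-name=database-import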
Init:Error means the init container has failed to execute. That is because there are errors in the initContainers command section.
See the Kubernetes documentation on init containers for how the YAML should be prepared.
I have fixed your yaml file.
apiVersion: batch/v1
kind: Job
metadata:
  name: database-import
spec:
  template:
    spec:
      containers:
      - name: database-import
        image: postgres:9.6-alpine
        command:
        - bash
        - "-c"
        - "gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch "
        env:
        - name: PGPASSWORD
          value: password
        volumeMounts:
        - name: data
          mountPath: "/data/"
      initContainers:
      - name: download-dump
        image: google/cloud-sdk:alpine
        command:
        - /bin/bash
        - "-c"
        - "gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz"
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/application_default_credentials.json
        volumeMounts:
        - name: application-default-credentials
          mountPath: "/secrets/"
          readOnly: true
        - name: data
          mountPath: "/data/"
      volumes:
      - name: application-default-credentials
        secret:
          secretName: application-default-credentials
      - name: data
        emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4
Result after kubectl apply -f job.yaml
job.batch/database-import created
Let me know if it works now.
EDIT
Use kubectl describe job <name of your job> and add the results; then we will see why it is not working.

How to run pgAdmin in OpenShift?

I'm trying to run a pgAdmin container (the one I'm using comes from here) in an OpenShift cluster where I don't have admin privileges and the admin does not want to allow containers to run as root for security reasons.
The error I'm currently receiving looks like this:
Error with Standard Image
I created a Dockerfile that creates that directory ahead of time based on the image linked above and I get this error:
Error with Edited Image
Is there any way to run pgAdmin within OpenShift? I want to be able to let DB admins log into the instance of pgAdmin and configure the DB from there, without having to use the OpenShift CLI and port forwarding. When I use that method the port-forwarding connection drops very frequently.
Edit1:
Is there a way I should edit the Dockerfile and entrypoint.sh found on pgAdmin's GitHub?
Edit2:
It looks like this is a bug with pgAdmin... :/
https://www.postgresql.org/message-id/15470-c84b4e5cc424169d%40postgresql.org
To work around these errors, you need to add a writable volume to the container and set pgadmin's configuration to use that directory.
Permission Denied: '/var/lib/pgadmin'
Permission Denied: '/var/log/pgadmin'
The OpenShift/Kubernetes YAML example below demonstrates this by supplying a custom /pgadmin4/config_local.py as documented here. This allows you to run the image as a container with regular privileges.
Note the configuration files base directory (/var/lib/pgadmin/data) still needs to be underneath the mount point (/var/lib/pgadmin/), as pgadmin's initialization code tries to create/change ownership of that directory which is not allowed on mount point directories inside the container.
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Secret
metadata:
labels:
app: pgadmin-app
name: pgadmin
type: Opaque
stringData:
username: admin
password: DEFAULT_PASSWORD
- apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
serviceaccounts.openshift.io/oauth-redirectreference.pgadmin: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"pgadmin"}}'
labels:
app: pgadmin-app
name: pgadmin
- apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: pgadmin-app
name: pgadmin
data:
config_local.py: |-
import os
_BASEDIR = '/var/lib/pgadmin/data'
LOG_FILE = os.path.join(_BASEDIR, 'logfile')
SQLITE_PATH = os.path.join(_BASEDIR, 'sqlite.db')
STORAGE_DIR = os.path.join(_BASEDIR, 'storage')
SESSION_DB_PATH = os.path.join(_BASEDIR, 'sessions')
servers.json: |-
{
"Servers": {
"1": {
"Name": "postgresql",
"Group": "Servers",
"Host": "postgresql",
"Port": 5432,
"MaintenanceDB": "postgres",
"Username": "dbuser",
"SSLMode": "prefer",
"SSLCompression": 0,
"Timeout": 0,
"UseSSHTunnel": 0,
"TunnelPort": "22",
"TunnelAuthentication": 0
}
}
}
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: pgadmin
labels:
app: pgadmin-app
spec:
replicas: 1
selector:
app: pgadmin-app
deploymentconfig: pgadmin
template:
metadata:
labels:
app: pgadmin-app
deploymentconfig: pgadmin
name: pgadmin
spec:
serviceAccountName: pgadmin
containers:
- env:
- name: PGADMIN_DEFAULT_EMAIL
valueFrom:
secretKeyRef:
key: username
name: pgadmin
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: pgadmin
- name: PGADMIN_LISTEN_PORT
value: "5050"
- name: PGADMIN_LISTEN_ADDRESS
value: 0.0.0.0
image: docker.io/dpage/pgadmin4:4
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
httpGet:
path: /misc/ping
port: 5050
scheme: HTTP
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 1
name: pgadmin
ports:
- containerPort: 5050
protocol: TCP
readinessProbe:
failureThreshold: 10
initialDelaySeconds: 3
httpGet:
path: /misc/ping
port: 5050
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
volumeMounts:
- mountPath: /pgadmin4/config_local.py
name: pgadmin-config
subPath: config_local.py
- mountPath: /pgadmin4/servers.json
name: pgadmin-config
subPath: servers.json
- mountPath: /var/lib/pgadmin
name: pgadmin-data
- image: docker.io/openshift/oauth-proxy:latest
name: pgadmin-oauth-proxy
ports:
- containerPort: 5051
protocol: TCP
args:
- --http-address=:5051
- --https-address=
- --openshift-service-account=pgadmin
- --upstream=http://localhost:5050
- --cookie-secret=bdna987REWQ1234
volumes:
- name: pgadmin-config
configMap:
name: pgadmin
defaultMode: 0664
- name: pgadmin-data
emptyDir: {}
- apiVersion: v1
kind: Service
metadata:
name: pgadmin-oauth-proxy
labels:
app: pgadmin-app
spec:
ports:
- name: 80-tcp
protocol: TCP
port: 80
targetPort: 5051
selector:
app: pgadmin-app
deploymentconfig: pgadmin
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: pgadmin-app
name: pgadmin
spec:
port:
targetPort: 80-tcp
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: pgadmin-oauth-proxy
OpenShift by default doesn't allow containers to run with root privileges. You can add the anyuid Security Context Constraint (SCC) to the default service account of the project where you are deploying the container.
Adding the SCC for the project:
$ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:default
scc "anyuid" added to: ["system:serviceaccount:data-base-administration:default"]
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
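After redeploying, you can confirm which SCC was actually applied to the pod by checking the openshift.io/scc annotation (replace the pod name with your own):
oc get pod pgadmin4-4-fjv4h -o yaml | grep openshift.io/scc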
PGAdmin deployed:
$ oc describe pod pgadmin4-4-fjv4h
Name: pgadmin4-4-fjv4h
Namespace: data-base-administration
Priority: 0
PriorityClassName: <none>
Node: host/IP
Start Time: Mon, 18 Feb 2019 23:22:30 -0400
Labels: app=pgadmin4
deployment=pgadmin4-4
deploymentconfig=pgadmin4
Annotations: openshift.io/deployment-config.latest-version=4
openshift.io/deployment-config.name=pgadmin4
openshift.io/deployment.name=pgadmin4-4
openshift.io/generated-by=OpenShiftWebConsole
openshift.io/scc=anyuid
Status: Running
IP: IP
Controlled By: ReplicationController/pgadmin4-4
Containers:
pgadmin4:
Container ID: docker://ID
Image: dpage/pgadmin4@sha256:SHA
Image ID: docker-pullable://docker.io/dpage/pgadmin4@sha256:SHA
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 18 Feb 2019 23:22:37 -0400
Ready: True
Restart Count: 0
Environment:
PGADMIN_DEFAULT_EMAIL: secret
PGADMIN_DEFAULT_PASSWORD: secret
Mounts:
/var/lib/pgadmin from pgadmin4-1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-74b75 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgadmin4-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-74b75:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-74b75
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51m default-scheduler Successfully assigned data-base-administration/pgadmin4-4-fjv4h to host
Normal Pulling 51m kubelet, host pulling image "dpage/pgadmin4@sha256:SHA"
Normal Pulled 51m kubelet, host Successfully pulled image "dpage/pgadmin4@sha256:SHA"
Normal Created 51m kubelet, host Created container
Normal Started 51m kubelet, host Started container
I have already replied to a similar issue for a local installation: OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'.
For the Docker image, you can map in /pgadmin4/config_local.py (or use environment variables); check the Mapped Files and Directories section on https://hub.docker.com/r/dpage/pgadmin4/.
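For reference, a rough sketch of that mapping with plain Docker; the host path and credentials here are placeholders:
docker run -p 5050:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=changeme \
  -v $(pwd)/config_local.py:/pgadmin4/config_local.py:ro \
  dpage/pgadmin4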
This might work if you create a pgadmin user via the Dockerfile, and give it permission to write to /var/log/pgadmin.
You can create a user in the Dockerfile using the RUN command; something like this:
# Create pgadmin user
ENV HOME=/pgadmin
RUN mkdir -p ${HOME} && \
    mkdir -p ${HOME}/pgadmin && \
    groupadd -r pgadmin && \
    useradd -u 1001 -r -g 0 -G pgadmin -d ${HOME} -s /bin/bash \
      -c "Default Application User" pgadmin
# Set user home and permissions with group 0 and writeable.
RUN chmod -R 700 ${HOME} && chown -R 1001:0 ${HOME}
# Create the log folder and set permissions
RUN mkdir /var/log/pgadmin && \
    chmod 0700 /var/log/pgadmin && \
    chown 1001:0 /var/log/pgadmin
# Run as 1001 (pgadmin)
USER 1001
Adjust your pgadmin install so it runs as 1001, and I think you should be set.
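For example, assuming the snippet above lives in a Dockerfile that extends the upstream image (the image tag is hypothetical):
docker build -t my-registry/pgadmin4-nonroot:latest .
docker push my-registry/pgadmin4-nonroot:latest
Then point your OpenShift deployment at that image instead of docker.io/dpage/pgadmin4.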