I'm trying to run a pgAdmin container (the one I'm using comes from here) in an OpenShift cluster where I don't have admin privileges and the admin does not want to allow containers to run as root for security reasons.
The error I'm currently receiving looks like this:
Error with Standard Image
I created a Dockerfile that creates that directory ahead of time based on the image linked above and I get this error:
Error with Edited Image
Is there any way to run pgAdmin within OpenShift? I want to be able to let DB admins log into the instance of pgAdmin and configure the DB from there, without having to use the OpenShift CLI and port forwarding. When I use that method the port-forwarding connection drops very frequently.
Edit1:
Is there a way that I should edit the Dockerfile and entrypoint.sh file found on pgAdmin's GitHub?
Edit2:
It looks like this is a bug with pgAdmin... :/
https://www.postgresql.org/message-id/15470-c84b4e5cc424169d%40postgresql.org
To work around these errors:
Permission Denied: '/var/lib/pgadmin'
Permission Denied: '/var/log/pgadmin'
you need to add a writable volume to the container and set pgAdmin's configuration to use that directory.
The OpenShift/Kubernetes YAML example below demonstrates this by supplying a custom /pgadmin4/config_local.py as documented here. This allows you to run the image as a container with regular privileges.
Note that the configuration files' base directory (/var/lib/pgadmin/data) still needs to be underneath the mount point (/var/lib/pgadmin/), as pgAdmin's initialization code tries to create/change ownership of that directory, which is not allowed on mount point directories inside the container.
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Secret
metadata:
labels:
app: pgadmin-app
name: pgadmin
type: Opaque
stringData:
username: admin
password: DEFAULT_PASSWORD
- apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
serviceaccounts.openshift.io/oauth-redirectreference.pgadmin: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"pgadmin"}}'
labels:
app: pgadmin-app
name: pgadmin
- apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: pgadmin-app
name: pgadmin
data:
config_local.py: |-
import os
_BASEDIR = '/var/lib/pgadmin/data'
LOG_FILE = os.path.join(_BASEDIR, 'logfile')
SQLITE_PATH = os.path.join(_BASEDIR, 'sqlite.db')
STORAGE_DIR = os.path.join(_BASEDIR, 'storage')
SESSION_DB_PATH = os.path.join(_BASEDIR, 'sessions')
servers.json: |-
{
"Servers": {
"1": {
"Name": "postgresql",
"Group": "Servers",
"Host": "postgresql",
"Port": 5432,
"MaintenanceDB": "postgres",
"Username": "dbuser",
"SSLMode": "prefer",
"SSLCompression": 0,
"Timeout": 0,
"UseSSHTunnel": 0,
"TunnelPort": "22",
"TunnelAuthentication": 0
}
}
}
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: pgadmin
labels:
app: pgadmin-app
spec:
replicas: 1
selector:
app: pgadmin-app
deploymentconfig: pgadmin
template:
metadata:
labels:
app: pgadmin-app
deploymentconfig: pgadmin
name: pgadmin
spec:
serviceAccountName: pgadmin
containers:
- env:
- name: PGADMIN_DEFAULT_EMAIL
valueFrom:
secretKeyRef:
key: username
name: pgadmin
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: pgadmin
- name: PGADMIN_LISTEN_PORT
value: "5050"
- name: PGADMIN_LISTEN_ADDRESS
value: 0.0.0.0
image: docker.io/dpage/pgadmin4:4
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
httpGet:
path: /misc/ping
port: 5050
scheme: HTTP
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 1
name: pgadmin
ports:
- containerPort: 5050
protocol: TCP
readinessProbe:
failureThreshold: 10
initialDelaySeconds: 3
httpGet:
path: /misc/ping
port: 5050
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
volumeMounts:
- mountPath: /pgadmin4/config_local.py
name: pgadmin-config
subPath: config_local.py
- mountPath: /pgadmin4/servers.json
name: pgadmin-config
subPath: servers.json
- mountPath: /var/lib/pgadmin
name: pgadmin-data
- image: docker.io/openshift/oauth-proxy:latest
name: pgadmin-oauth-proxy
ports:
- containerPort: 5051
protocol: TCP
args:
- --http-address=:5051
- --https-address=
- --openshift-service-account=pgadmin
- --upstream=http://localhost:5050
- --cookie-secret=bdna987REWQ1234
volumes:
- name: pgadmin-config
configMap:
name: pgadmin
defaultMode: 0664
- name: pgadmin-data
emptyDir: {}
- apiVersion: v1
kind: Service
metadata:
name: pgadmin-oauth-proxy
labels:
app: pgadmin-app
spec:
ports:
- name: 80-tcp
protocol: TCP
port: 80
targetPort: 5051
selector:
app: pgadmin-app
deploymentconfig: pgadmin
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: pgadmin-app
name: pgadmin
spec:
port:
targetPort: 80-tcp
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: pgadmin-oauth-proxy
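To try this out, here is a minimal rollout sketch (assuming the list above is saved as pgadmin.yaml and you are logged in to the target project):
oc apply -f pgadmin.yaml
oc rollout status dc/pgadmin
# the pgAdmin container log should no longer show the permission errors
oc logs dc/pgadmin -c pgadmin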
OpenShift by default doesn't allow containers to run with root privileges; you can grant the anyuid Security Context Constraint (SCC) to the service account in the project where you are deploying the container.
Adding the SCC for the project:
$ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:default
scc "anyuid" added to: ["system:serviceaccount:data-base-administration:default"]
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
PGAdmin deployed:
$ oc describe pod pgadmin4-4-fjv4h
Name: pgadmin4-4-fjv4h
Namespace: data-base-administration
Priority: 0
PriorityClassName: <none>
Node: host/IP
Start Time: Mon, 18 Feb 2019 23:22:30 -0400
Labels: app=pgadmin4
deployment=pgadmin4-4
deploymentconfig=pgadmin4
Annotations: openshift.io/deployment-config.latest-version=4
openshift.io/deployment-config.name=pgadmin4
openshift.io/deployment.name=pgadmin4-4
openshift.io/generated-by=OpenShiftWebConsole
openshift.io/scc=anyuid
Status: Running
IP: IP
Controlled By: ReplicationController/pgadmin4-4
Containers:
pgadmin4:
Container ID: docker://ID
Image: dpage/pgadmin4@sha256:SHA
Image ID: docker-pullable://docker.io/dpage/pgadmin4@sha256:SHA
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 18 Feb 2019 23:22:37 -0400
Ready: True
Restart Count: 0
Environment:
PGADMIN_DEFAULT_EMAIL: secret
PGADMIN_DEFAULT_PASSWORD: secret
Mounts:
/var/lib/pgadmin from pgadmin4-1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-74b75 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgadmin4-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-74b75:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-74b75
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51m default-scheduler Successfully assigned data-base-administration/pgadmin4-4-fjv4h to host
Normal Pulling 51m kubelet, host pulling image "dpage/pgadmin4@sha256:SHA"
Normal Pulled 51m kubelet, host Successfully pulled image "dpage/pgadmin4@sha256:SHA"
Normal Created 51m kubelet, host Created container
Normal Started 51m kubelet, host Started container
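To double-check which SCC was actually applied to the pod, you can grep for the annotation shown above (a small sketch):
$ oc describe pod pgadmin4-4-fjv4h | grep openshift.io/scc
openshift.io/scc=anyuid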
I have already replied to a similar issue for a local installation: OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'
For the Docker image, you can map /pgadmin4/config_local.py using environment variables; check the Mapped Files and Directories section on https://hub.docker.com/r/dpage/pgadmin4/
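For reference, a minimal sketch of that approach with plain Docker, bind-mounting your own config_local.py as described in that section (the email/password values here are placeholders):
docker run -d -p 5050:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=changeme \
  -v "$PWD/config_local.py:/pgadmin4/config_local.py" \
  dpage/pgadmin4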
This might work if you create a pgadmin user via the Dockerfile, and give it permission to write to /var/log/pgadmin.
You can create a user in the Dockerfile using the RUN command; something like this:
# Create pgadmin user
ENV HOME=/pgadmin
RUN mkdir -p ${HOME} && \
mkdir -p ${HOME}/pgadmin && \
useradd -u 1001 -r -g 0 -G pgadmin -d ${HOME} -s /bin/bash \
-c "Default Application User" pgadmin
# Set user home and permissions with group 0 and writeable.
RUN chmod -R 700 ${HOME} && chown -R 1001:0 ${HOME}
# Create the log folder and set permissions
RUN mkdir /var/log/pgadmin && \
chmod 0770 /var/log/pgadmin && \
chown 1001:0 /var/log/pgadmin
# Run as 1001 (pgadmin)
USER 1001
Adjust your pgadmin install so it runs as 1001, and I think you should be set.
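A quick sanity check that the resulting image really runs as the non-root user (a sketch; the image tag is illustrative):
docker build -t pgadmin4-nonroot .
# override the entrypoint to print the runtime identity
docker run --rm --entrypoint id pgadmin4-nonroot
# expect a uid of 1001 and gid 0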
Related
I'm trying to verify that my postgres pod is accessible via the service that I've just set up. As of now, I cannot verify this. What I'm able to do is to log into the container running postgres itself, and attempt to talk to the postgres server via the IP of the service. This does not succeed. However, I'm unsure if this is a valid test of whether other pods in the cluster could talk to postgres via the service or if there is a problem with how I'm doing the test, or if there is a fundamental problem in my service or pod configurations.
I'm doing this all on a minikube cluster.
Setup the pod and service:
$> kubectl create -f postgres-pod.yml
$> kubectl create -f postgres-service.yml
postgres-pod.yml
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
env: prod
creation_method: manual
domain: infrastructure
spec:
containers:
- image: postgres:13-alpine
name: kubia-postgres
ports:
- containerPort: 5432
protocol: TCP
env:
- name: POSTGRES_PASSWORD
value: dave
- name: POSTGRES_USER
value: dave
- name: POSTGRES_DB
value: tmp
# TODO:
# volumes:
# - name: postgres-db-volume
postgres-service.yml
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
ports:
- port: 5432
targetPort: 5432
selector:
name: postgres
Check that the service is up with kubectl get services:
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d
postgres-service ClusterIP 10.110.159.21 <none> 5432/TCP 71m
Then, log in to the postgres container:
$> kubectl exec --stdin --tty postgres -- /bin/bash
from there, attempt to hit the service's IP:
bash-5.1# psql -U dave -h 10.110.159.21 -p 5432 tmp
psql: error: could not connect to server: Connection refused
Is the server running on host "10.110.159.21" and accepting
TCP/IP connections on port 5432?
So using this approach I am not able to connect to the postgres server using the IP of the service.
I'm unsure of several steps in this process:
Is the selector block (selecting by name) in the service configuration YAML correct?
Can you access the IP of a service from pods that are "behind" the service?
Is this, in fact, a valid way to verify that the DB server is accessible via the service, or is there some other way?
Hello, hope you are enjoying your Kubernetes journey!
I wanted to try this on my kind (Kubernetes in Docker) cluster locally, so this is what I've done:
First, I set up a kind cluster locally with this configuration (info here: https://kind.sigs.k8s.io/docs/user/quick-start/):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
After this, I created my cluster with this command:
kind create cluster --config=config.yaml
Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):
apiVersion: v1
kind: Namespace
metadata:
name: so-tests
From there, my environment was set up, so I had to deploy a Postgres on it, but here is what I've changed:
1- Instead of creating a singleton pod, I created a StatefulSet (whose purpose is to deploy databases).
2- I decided to keep using your Docker image "postgres:13-alpine" and added a security context to run as the native postgres user (not dave, nor root). To find out the ID of the postgres user, I first deployed the StatefulSet without the security context and executed these commands:
❯ k exec -it postgres-0 -- bash
bash-5.1# whoami
root
bash-5.1# id
uid=0(root) gid=0(root) groups=1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
bash-5.1# id postgres
uid=70(postgres) gid=70(postgres) groups=70(postgres),70(postgres)
bash-5.1# exit
So, once I knew that the ID of the postgres user was 70, I just added this to the StatefulSet manifest:
securityContext:
runAsUser: 70
fsGroup: 70
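Once the StatefulSet is (re)deployed with that security context, a quick check that the container really runs as uid 70 (a sketch):
kubectl exec -it postgres-0 -- id
# uid=70(postgres) gid=70(postgres) groups=70(postgres)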
3- Instead of adding configuration and secrets as environment variables directly into the pod config of the StatefulSet, I decided to create a Secret and a ConfigMap:
First, let's create a Kubernetes Secret with your password in it. Here is the manifest (obtained from this command: "k create secret generic --from-literal password=dave postgres-secret -o yaml --dry-run=client"):
apiVersion: v1
data:
password: ZGF2ZQ==
kind: Secret
metadata:
name: postgres-secret
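The password field is just base64-encoded ("dave"); if you want to double-check it after creating the Secret, a small sketch:
kubectl get secret postgres-secret -o jsonpath='{.data.password}' | base64 -d
# dave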
After this, I created a ConfigMap to store our Postgres config. Here is the manifest (obtained by running: kubectl create configmap postgres-config --from-literal user=dave --from-literal db=tmp --dry-run=client -o yaml):
apiVersion: v1
data:
db: tmp
user: dave
kind: ConfigMap
metadata:
name: postgres-config
Since it is just for testing purposes, I didn't set up dynamic volume provisioning for the StatefulSet, nor a pre-provisioned volume. Instead, I configured a simple emptyDir to store the Postgres data (/var/lib/postgresql/data).
N.B.: By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. (this came from here Create a new volume when pod restart in a statefulset)
Since it is a StatefulSet, it has to be exposed by a headless Kubernetes Service (https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services).
Here are the manifests:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 2
selector:
matchLabels:
env: prod
domain: infrastructure
template:
metadata:
labels:
env: prod
domain: infrastructure
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
labels:
env: prod
domain: infrastructure
spec:
ports:
- port: 5432
protocol: TCP
targetPort: 5432
name: pgsql
clusterIP: None
selector:
env: prod
domain: infrastructure
---
apiVersion: v1
data:
password: ZGF2ZQ==
kind: Secret
metadata:
name: postgres-secret
---
apiVersion: v1
data:
db: tmp
user: dave
kind: ConfigMap
metadata:
name: postgres-config
---
I deployed this using:
kubectl apply -f postgres.yaml
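To confirm that the pods came up and that the headless Service actually selects them, a quick sketch (the endpoints list should show one IP per replica):
kubectl get pods -l env=prod,domain=infrastructure
kubectl get endpoints postgres-service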
I tested connecting to my db from inside the postgres-0 pod with the $POSTGRES_USER and $POSTGRES_PASSWORD credentials:
❯ k exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
I listed the databases:
tmp=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-------+----------+------------+------------+-------------------
postgres | dave | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
template1 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
tmp | dave | UTF8 | en_US.utf8 | en_US.utf8 |
(4 rows)
and I connected to the "tmp" db:
tmp=# \c tmp
Password:
You are now connected to database "tmp" as user "dave".
Successful.
I also tried connecting to the database using the IP, as you did:
bash-5.1$ ip a | grep /24
inet 10.244.4.8/24 brd 10.244.4.255 scope global eth0
bash-5.1$ psql --username=$POSTGRES_USER -W --host=10.244.4.8 --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Successful.
I then downloaded DBeaver (from https://dbeaver.io/download/) to test access from outside of my cluster:
with a kubectl port-forward:
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
I created the connection in DBeaver, and could easily access the db "tmp" from localhost:5432 with dave:dave credentials.
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
Perfect.
Same as before (with DBeaver), I tried connecting to the db using a port forward, this time not of the pod but of the service:
❯ kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
It worked as well!
I also created a standalone pod, based on our config, to access the db that is in another pod (via the service name as hostname). Here is the manifest of the pod:
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
app: test
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
Here is the result of the connection from inside the test pod:
bash-5.1$ psql --username=$POSTGRES_USER -W --host=postgres-service --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Here is how you can access it from outside the pod/namespace (make sure that there are no network rules blocking the connection):
StatefulSetName-Ordinal.Service.Namespace.svc.cluster.local
e.g.: postgres-0.postgres-service.so-tests.svc.cluster.local
To access the StatefulSet's workloads from outside the cluster, here is a good start: How to expose a headless service for a StatefulSet externally in Kubernetes
Hope this helped you. Thank you for your question.
Bguess
You cannot, at least with minikube, access the IP of a service from the pod "behind" that service if there is only one (1) replica.
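Independently of the replica count, a quick way to check whether the Service can route to the pod at all (a minimal sketch, assuming kubectl access) is to look at its endpoints; an empty ENDPOINTS column usually means the Service selector does not match the pod's labels:
kubectl get endpoints postgres-service
kubectl get pod postgres --show-labels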
I have a pod running a mariadb container and I would like to back up my database, but it fails with a Permission denied error.
kubectl exec my-owncloud-mariadb-0 -it -- bash -c "mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase > owncloud-dbbackup_`date +"%Y%m%d"`.bak"
And the result is
bash: owncloud-dbbackup_20191121.bak: Permission denied
command terminated with exit code 1
I can't run sudo mysqldump because I get sudo: command not found.
I tried to export the backup file to different locations: /home, the directory where mysqldump is located, /usr, ...
Here is the yaml of my pod:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-11-20T14:16:58Z"
generateName: my-owncloud-mariadb-
labels:
app: mariadb
chart: mariadb-7.0.0
component: master
controller-revision-hash: my-owncloud-mariadb-77495ddc7c
release: my-owncloud
statefulset.kubernetes.io/pod-name: my-owncloud-mariadb-0
name: my-owncloud-mariadb-0
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: my-owncloud-mariadb
uid: 47f2a129-8d4e-4ae9-9411-473288623ed5
resourceVersion: "2509395"
selfLink: /api/v1/namespaces/default/pods/my-owncloud-mariadb-0
uid: 6a98de05-c790-4f59-b182-5aaa45f3b580
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: mariadb
release: my-owncloud
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- env:
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-root-password
name: my-owncloud-mariadb
- name: MARIADB_USER
value: myuser
- name: MARIADB_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-password
name: my-owncloud-mariadb
- name: MARIADB_DATABASE
value: mydatabase
image: docker.io/bitnami/mariadb:10.3.18-debian-9-r36
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: mariadb
ports:
- containerPort: 3306
name: mysql
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mariadb
name: data
- mountPath: /opt/bitnami/mariadb/conf/my.cnf
name: config
subPath: my.cnf
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-pbgxr
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: my-owncloud-mariadb-0
nodeName: 149.202.36.244
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
serviceAccount: default
serviceAccountName: default
subdomain: my-owncloud-mariadb
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: data
persistentVolumeClaim:
claimName: data-my-owncloud-mariadb-0
- configMap:
defaultMode: 420
name: my-owncloud-mariadb
name: config
- name: default-token-pbgxr
secret:
defaultMode: 420
secretName: default-token-pbgxr
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:33:22Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:34:03Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:34:03Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-11-20T14:33:22Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://3898b6a20bd8c38699374b7db7f04ccef752ffd5a5f7b2bc9f7371e6a27c963a
image: bitnami/mariadb:10.3.18-debian-9-r36
imageID: docker-pullable://bitnami/mariadb@sha256:a89e2fab7951c622e165387ead0aa0bda2d57e027a70a301b8626bf7412b9366
lastState: {}
name: mariadb
ready: true
restartCount: 0
state:
running:
startedAt: "2019-11-20T14:33:24Z"
hostIP: 149.202.36.244
phase: Running
podIP: 10.42.2.56
qosClass: BestEffort
startTime: "2019-11-20T14:33:22Z"
Is there something I'm missing?
You might not have permission to write to that location inside the container. Try the command below,
using /tmp or some other location where you can dump the backup file:
kubectl exec my-owncloud-mariadb-0 -it -- bash -c "mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase > /tmp/owncloud-dbbackup_`date +"%Y%m%d"`.bak"
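If you then need the dump on your workstation, you could copy it out of the pod afterwards (a sketch; adjust the file name to whatever the date expansion produced):
kubectl cp my-owncloud-mariadb-0:/tmp/owncloud-dbbackup_20191121.bak ./owncloud-dbbackup_20191121.bak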
Given the pod YAML file you've shown, you can't usefully use kubectl exec to make a database backup.
You're getting a shell inside the pod and running mysqldump there to write out the dump file somewhere else inside the pod. You can't write it to the secret directory or the configmap directory, so your essential choices are either to write it to the pod filesystem (which will get deleted as soon as the pod exits, including if Kubernetes decides to relocate the pod within the cluster) or the mounted database directory (and your backup will survive exactly as long as the data it's backing up).
I'd run mysqldump from outside the pod. One good approach would be to create a separate Job that mounted some sort of long-term storage (or relied on external object storage; if you're running on AWS, for example, S3), connected to the database pod, and ran the backup that way. That has the advantage of being fairly self-contained (so you can debug it without interfering with your live database) and also totally automated (you could launch it from a Kubernetes CronJob).
kubectl exec doesn't seem to have the same flags docker exec does to control the user identity, so you're dependent on there being some path inside the container that its default user can write to. /tmp is typically world-writable so if you just want that specific command to work I'd try putting the dump file into /tmp/owncloud-dbbackup_....
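As a minimal sketch of the "run it from outside the pod" idea (using the same credentials as in the question), you can have mysqldump write to stdout inside the container and redirect it on your local machine, so nothing has to be writable inside the pod:
kubectl exec my-owncloud-mariadb-0 -- \
  sh -c 'mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase' \
  > "owncloud-dbbackup_$(date +%Y%m%d).bak"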
My app can't seem to connect to the proxy, and thus to my Cloud SQL database.
Below are my setup:
my-simple-app.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
name: web
name: web
spec:
replicas: 2
selector:
matchLabels:
name: web
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
name: web
spec:
nodeSelector:
cloud.google.com/gke-nodepool: default-pool
containers:
- image: joelaw/nameko-hello:0.2
name: web
env:
- name: DB_HOST
value: 127.0.0.1
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
ports:
- containerPort: 3000
name: http-server
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=spheric-veric-task:asia-southeast1:authdb:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
I had set up the secrets correctly, I suppose.
Below is some data that I collected from the instance:
The pods live happily:
web-69c7777c68-s2jt6 2/2 Running 0 9m
web-69c7777c68-zbwtv 2/2 Running 0 9m
When I run: kubectl logs web-69c7777c68-zbwtv -c cloudsql-proxy
It recorded this:
2019/04/04 03:25:35 using credential file for authentication; email=auth-db-user@spheric-verve-228610.iam.gserviceaccount.com
2019/04/04 03:25:35 Listening on /cloudsql/spheric-veric-task:asia-southeast1:authdb:5432/.s.PGSQL.5432 for spheric-veric-task:asia-southeast1:authdb:5432
2019/04/04 03:25:35 Ready for new connections
Since the app is not configured to connect to the db, what I did was to shell into the pod with:
kubectl exec -it web-69c7777c68-mrdpn -- /bin/bash
# Followed by installing the postgresql client:
apt-get install postgresql
# Trying to connect to cloudsql:
psql -h 127.0.0.1 -p 5432 -U
When I run psql in the container:
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Can any of you kindly advise what I should do to connect to the DB?
You are specifying the instance connection string wrong, so the proxy is listening on a unix socket in the /cloudsql/ directory instead of on a TCP port.
To tell the proxy to listen on a TCP port, use the following:
-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432
Otherwise, the following format creates a unix socket (defaulting to the /cloudsql directory):
-instances=<INSTANCE_CONNECTION_NAME>
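Applied to the deployment in the question, the proxy container's invocation would look something like this (a sketch; the instance connection name is taken from your manifest):
/cloud_sql_proxy --dir=/cloudsql \
  -instances=spheric-veric-task:asia-southeast1:authdb=tcp:5432 \
  -credential_file=/secrets/cloudsql/credentials.json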
I have a yaml file for creating a k8s pod with just one container. Is it possible to pre-add a username and its password from the yaml file during k8s pod creation?
I searched many sites and found the env variable approach. However, I could not make the pod work as I wished: the pod's status always shows CrashLoopBackOff after pod creation.
Following is my yaml file:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: centos610-sp-v1
spec:
replicas: 1
template:
metadata:
labels:
app: centos610-sp-v1
spec:
containers:
- name: centos610-pod-v1
image: centos-done:6.10
env:
- name: SSH_USER
value: "user1"
- name: SSH_SUDO
value: "ALL=(ALL) NOPASSWD:ALL"
- name: PASSWORD
value: "password"
command: ["/usr/sbin/useradd"]
args: ["$(SSH_USER)"]
ports:
- containerPort: 22
resources:
limits:
cpu: "500m"
memory: "1G"
---
apiVersion: v1
kind: Service
metadata:
name: centos610-sp-v1
labels:
app: centos610-sp-v1
spec:
selector:
app: centos610-sp-v1
ports:
- port: 22
protocol: TCP
nodePort: 31022
type: NodePort
---
Should I use a specific command such as
env:
- name: MESSAGE
value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
or
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
Pod's status after kubectl get:
root@zero:~/k8s-temp# kubectl get pod
NAME READY STATUS RESTARTS AGE
centos610-sp-v1-6689c494b8-nb9kv 0/1 CrashLoopBackOff 5 3m
Pod's status after kubectl describe:
root@zero:~/k8s-temp# kubectl describe pod centos610-sp-v1-6689c494b8-nb9kv
Name: centos610-sp-v1-6689c494b8-nb9kv
Namespace: default
Node: zero/10.111.33.15
Start Time: Sat, 16 Mar 2019 01:16:59 +0800
Labels: app=centos610-sp-v1
pod-template-hash=2245705064
Annotations: <none>
Status: Running
IP: 10.233.127.104
Controlled By: ReplicaSet/centos610-sp-v1-6689c494b8
Containers:
centos610-pod-v1:
Container ID: docker://5fa076c5d245dd532ef7ce724b94033d93642dc31965ab3fbde61dd59bf7d314
Image: centos-done:6.10
Image ID: docker://sha256:26362e9cefe4e140933bf947e3beab29da905ea5d65f27fc54513849a06d5dd5
Port: 22/TCP
Host Port: 0/TCP
Command:
/usr/sbin/useradd
Args:
$(SSH_USER)
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 16 Mar 2019 01:17:17 +0800
Finished: Sat, 16 Mar 2019 01:17:17 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 16 Mar 2019 01:17:01 +0800
Finished: Sat, 16 Mar 2019 01:17:01 +0800
Ready: False
Restart Count: 2
Limits:
cpu: 500m
memory: 1G
Requests:
cpu: 500m
memory: 1G
Environment:
SSH_USER: user1
SSH_SUDO: ALL=(ALL) NOPASSWD:ALL
PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qbd8x (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-qbd8x:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qbd8x
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s default-scheduler Successfully assigned centos610-sp-v1-6689c494b8-nb9kv to zero
Normal SuccessfulMountVolume 22s kubelet, zero MountVolume.SetUp succeeded for volume "default-token-qbd8x"
Normal Pulled 5s (x3 over 21s) kubelet, zero Container image "centos-done:6.10" already present on machine
Normal Created 5s (x3 over 21s) kubelet, zero Created container
Normal Started 4s (x3 over 21s) kubelet, zero Started container
Warning BackOff 4s (x3 over 19s) kubelet, zero Back-off restarting failed container
2019/03/18 UPDATE
Although pre-adding a username and password from the pod's yaml is not recommended, I just want to clarify how to use command & args in the yaml file. In the end, I used the following yaml file to create the username "user1" with password "1234" successfully. Thank you all for your great answers, which made me more familiar with k8s topics such as configMap, RBAC, and container behavior.
Actually, this link gave me a reference on how to use command & args:
How to set multiple commands in one yaml file with Kubernetes?
Here is my final yaml file content:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: centos610-sp-v1
spec:
replicas: 1
template:
metadata:
labels:
app: centos610-sp-v1
spec:
containers:
- name: centos610-pod-v1
image: centos-done:6.10
env:
- name: SSH_USER
value: "user1"
- name: SSH_SUDO
value: "ALL=(ALL) NOPASSWD:ALL"
- name: PASSWORD
value: "password"
command: ["/bin/bash", "-c"]
args: ["useradd $(SSH_USER); service sshd restart; echo $(SSH_USER):1234 | chpasswd; tail -f /dev/null"]
ports:
- containerPort: 22
resources:
limits:
cpu: "500m"
memory: "1G"
---
apiVersion: v1
kind: Service
metadata:
name: centos610-sp-v1
labels:
app: centos610-sp-v1
spec:
selector:
app: centos610-sp-v1
ports:
- port: 22
protocol: TCP
nodePort: 31022
type: NodePort
---
Keep the username and password in a ConfigMap or in a Secret object. Load those values into the container as environment variables.
Follow the reference:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
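For example, a minimal sketch of creating such objects from the CLI (names and values here are illustrative):
kubectl create secret generic ssh-user-secret --from-literal=PASSWORD=1234
kubectl create configmap ssh-user-config --from-literal=SSH_USER=user1
# then reference them from the pod spec via secretKeyRef / configMapKeyRef, as shown in the doc above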
If you want to add the user anyway, regardless of the fact that you can achieve the same thing in a Kubernetes-native way, then
please set up your user in the Docker image (in the Dockerfile, and then build it) instead.
Hope this helps.
I have an installer that spins up two pods in my CI flow, let's call them web and activemq. When the web pod starts, it tries to communicate with the activemq pod using the k8s-assigned amq-deployment-0.activemq pod name.
Randomly, the web pod will get an unknown host exception when trying to access amq-deployment1.activemq. If I restart the web pod in this situation, it will have no problem communicating with the activemq pod.
I've logged into the web pod when this happens and the /etc/resolv.conf and /etc/hosts files look fine. The host machine's /etc/resolv.conf and /etc/hosts are sparse, with nothing that looks questionable.
Information:
There is only 1 worker node.
kubectl --version
Kubernetes v1.8.3+icp+ee
Any ideas on how to go about debugging this issue? I can't think of a good reason for it to happen randomly, nor to resolve itself on a pod restart.
If there is other useful information needed, I can get it. Thanks in advance.
For ActiveMQ we do have this service file:
apiVersion: v1
kind: Service
metadata:
name: activemq
labels:
app: myapp
env: dev
spec:
ports:
- port: 8161
protocol: TCP
targetPort: 8161
name: http
- port: 61616
protocol: TCP
targetPort: 61616
name: amq
selector:
component: analytics-amq
app: myapp
environment: dev
type: fa-core
clusterIP: None
And this ActiveMQ StatefulSet (this is the template):
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: pa-amq-deployment
spec:
replicas: {{ activemqs }}
updateStrategy:
type: RollingUpdate
serviceName: "activemq"
template:
metadata:
labels:
component: analytics-amq
app: myapp
environment: dev
type: fa-core
spec:
containers:
- name: pa-amq
image: default/myco/activemq:latest
imagePullPolicy: Always
resources:
limits:
cpu: 150m
memory: 1Gi
livenessProbe:
exec:
command:
- /etc/init.d/activemq
- status
initialDelaySeconds: 10
periodSeconds: 15
failureThreshold: 16
ports:
- containerPort: 8161
protocol: TCP
name: http
- containerPort: 61616
protocol: TCP
name: amq
envFrom:
- configMapRef:
name: pa-activemq-conf-all
- secretRef:
name: pa-activemq-secret
volumeMounts:
- name: timezone
mountPath: /etc/localtime
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/UTC
The web StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: pa-web-deployment
spec:
replicas: 1
updateStrategy:
type: RollingUpdate
serviceName: "pa-web"
template:
metadata:
labels:
component: analytics-web
app: myapp
environment: dev
type: fa-core
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: component
operator: In
values:
- analytics-web
topologyKey: kubernetes.io/hostname
containers:
- name: pa-web
image: default/myco/web:latest
imagePullPolicy: Always
resources:
limits:
cpu: 1
memory: 2Gi
readinessProbe:
httpGet:
path: /versions
port: 8080
initialDelaySeconds: 30
periodSeconds: 15
failureThreshold: 76
livenessProbe:
httpGet:
path: /versions
port: 8080
initialDelaySeconds: 30
periodSeconds: 15
failureThreshold: 80
securityContext:
privileged: true
ports:
- containerPort: 8080
name: http
protocol: TCP
envFrom:
- configMapRef:
name: pa-web-conf-all
- secretRef:
name: pa-web-secret
volumeMounts:
- name: shared-volume
mountPath: /MySharedPath
- name: timezone
mountPath: /etc/localtime
volumes:
- nfs:
server: 10.100.10.23
path: /MySharedPath
name: shared-volume
- name: timezone
hostPath:
path: /usr/share/zoneinfo/UTC
This web pod also has a similar "unknown host" problem finding an external database we have configured. The issue is resolved similarly by restarting the pod. Here is the configuration of that external service. Maybe it is easier to tackle the problem from this angle? ActiveMQ has no problem using the database service name to find the DB and start up.
apiVersion: v1
kind: Service
metadata:
name: dbhost
labels:
app: myapp
env: dev
spec:
type: ExternalName
externalName: mydb.host.com
Is it possible that it is a question of which pod (and the app in its container) is started up first, and which second?
In any case, connecting using a Service and not the pod name would be recommended as the pod's name assigned by Kubernetes changes between pod restarts.
A way to test connectivity is to use telnet (or curl, for the protocols it supports), if found in the image:
telnet <host/pod/Service> <port>
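For example, against the activemq Service defined above (a sketch, assuming telnet is available in the web image):
kubectl exec -it pa-web-deployment-0 -- telnet activemq 61616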
Not able to find a solution, I created a workaround. I set up the entrypoint.sh in my image to look up the domain I need to access and write the result to the log, exiting on error:
#!/bin/bash
#disable echo and exit on error
set +ex
#####################################
# verify that the db service can be found or exit container
#####################################
# we do not want to install nslookup to determine if the db_host_name is valid name
# we have ping available though
# 0-success, 1-error pinging but lookup worked (services can not be pinged), 2-unreachable host
ping -W 2 -c 1 ${db_host_name} &> /dev/null
if [ $? -le 1 ]
then
echo "service ${db_host_name} is known"
else
echo "${db_host_name} service is NOT recognized. Exiting container..."
exit 1
fi
Next, since only a pod restart fixed the issue, in my Ansible deploy I do a rollout check, querying the log to see if I need to do a pod restart. For example:
rollout-check.yml
- name: "Rollout status for {{rollout_item.statefulset}}"
shell: timeout 4m kubectl rollout status -n {{fa_namespace}} -f {{ rollout_item.statefulset }}
ignore_errors: yes
# assuming that the first pod will be the one that would have an issue
- name: "Get {{rollout_item.pod_name}} log to check for issue with dns lookup"
shell: kubectl logs {{rollout_item.pod_name}} --tail=1 -n {{fa_namespace}}
register: log_line
# the entrypoint will write dbhost service is NOT recognized. Exiting container... to the log
# if there is a problem getting to the dbhost
- name: "Try removing {{rollout_item.component}} pod if unable to deploy"
shell: kubectl delete pods -l component={{rollout_item.component}} --force --grace-period=0 --ignore-not-found=true -n {{fa_namespace}}
when: log_line.stdout.find('service is NOT recognized') > 0
I repeat this rollout check 6 times as sometimes even after a pod restart the service cannot be found. The additional checks are instant once the pod is successfully up.
- name: "Web rollout"
include_tasks: rollout-check.yml
loop:
- { c: 1, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 2, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 3, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 4, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 5, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 6, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
loop_control:
loop_var: rollout_item