I am trying to back up a Postgres database hosted on RDS using a Kubernetes CronJob.
I have created a CronJob for it in my EKS cluster, and the credentials are stored in Secrets.
When it tries to copy the backup file into the AWS S3 bucket, the pod fails with this error:
aws: error: argument command: Invalid choice, valid choices are:
I have tried different options, but it is not working.
Could anybody please help me resolve this issue?
Here is some brief info:
The K8s cluster is on AWS EKS.
The DB is on RDS.
I am using the following config for my CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "*/3 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          initContainers:
          - name: dump
            image: postgres:12.1-alpine
            volumeMounts:
            - name: data
              mountPath: /backup
            args:
            - pg_dump
            - "-Fc"
            - "-f"
            - "/backup/redash-postgres.pgdump"
            - "-Z"
            - "9"
            - "-v"
            - "-h"
            - "postgress.123456789.us-east-2.rds.amazonaws.com"
            - "-U"
            - "postgress"
            - "-d"
            - "postgress"
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  # Retrieve the postgres password from a secret
                  name: postgres
                  key: POSTGRES_PASSWORD
          containers:
          - name: save
            image: amazon/aws-cli
            volumeMounts:
            - name: data
              mountPath: /backup
            args:
            - aws
            - "--version"
            envFrom:
            - secretRef:
                # Must contain AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION
                name: s3-backup-credentials
          restartPolicy: Never
          volumes:
          - name: data
            emptyDir: {}
Try this:
...
containers:
- name: save
  image: amazon/aws-cli
  ...
  args:
  - "--version" # <-- the image entrypoint already calls "aws"; you only need to specify the arguments here.
  ...
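So, to actually copy the dump into S3 (rather than just printing the CLI version), the args would contain only the s3 subcommand and its parameters. A minimal sketch, reusing the dump path from the initContainer and a placeholder bucket name:

containers:
- name: save
  image: amazon/aws-cli
  volumeMounts:
  - name: data
    mountPath: /backup
  args:
  # The entrypoint is already "aws", so start with the subcommand.
  - s3
  - cp
  - "/backup/redash-postgres.pgdump"
  - "s3://<your-backup-bucket>/redash-postgres.pgdump"  # <-- replace with your bucket
  envFrom:
  - secretRef:
      name: s3-backup-credentials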
Related
I am using a custom MongoDB image with a read-only file system and trying to deploy it to Kubernetes locally on my Mac using Kind. Below is my statefulset.yaml. I cannot run a custom script called mongo-init.sh, which creates my db users. Kubernetes only allows one point of entry, so I am using that to run my db startup command: mongod -f /docker-entrypoint-initdb.d/mongod.conf. It doesn't allow me to run another command or script after that, even if I append a & at the end.

I tried using "/bin/bash", "-c", "command1", "command2", but that doesn't execute command2 if my first command is the mongod initialization command. Lastly, if I skip the mongod initialization command, the database is not up and running and I cannot connect to it, so I get a connection refused. When I kubectl exec onto the container, I can get into the database using the mongo shell with mongo, but I cannot view anything or really execute any commands because I get an unauthorized error.

Totally lost on this one and could use some help.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
  creationTimestamp: null
  labels:
    app: mydb
  name: mongodb
  namespace: mongodb
spec:
  serviceName: "dbservice"
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      annotations:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      # terminationGracePeriodSeconds: 10
      imagePullSecrets:
      - name: regcred
      containers:
      - env:
        - name: MONGODB_APPLICATION_USER_PWD
          valueFrom:
            configMapKeyRef:
              key: MONGODB_APPLICATION_USER_PWD
              name: myconfigmap
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              key: MONGO_INITDB_DATABASE
              name: myconfigmap
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              key: MONGO_INITDB_ROOT_PASSWORD
              name: myconfigmap
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_INITDB_ROOT_USERNAME
              name: myconfigmap
        image: 'localhost:5000/mongo-db:custom'
        command: ["/docker-entrypoint-initdb.d/mongo-init.sh"]
        name: mongodb
        ports:
        - containerPort: 27017
        resources: {}
        volumeMounts:
        - mountPath: /data/db
          name: mongodata
        - mountPath: /etc/mongod.conf
          subPath: mongod.conf
          name: mongodb-conf
          readOnly: true
        - mountPath: /docker-entrypoint-initdb.d/mongo-init.sh
          subPath: mongo-init.sh
          name: mongodb-conf
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ["chown"]
        args: ["-R", "998:998", "/data"]
        volumeMounts:
        - name: mongodata
          mountPath: /data/db
      volumes:
      - name: mongodata
        persistentVolumeClaim:
          claimName: mongo-volume-claim
      - name: mongodb-conf
        configMap:
          name: myconfigmap
          defaultMode: 0777
The error message I see when I do kubectl logs <podname>:
MongoDB shell version v4.4.1
connecting to: mongodb://localhost:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server localhost:27017, connection attempt failed: SocketException: Error connecting to localhost:27017 (127.0.0.1:27017) :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
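For illustration only: a minimal sketch of chaining the server start and the init script under a single shell entrypoint, reusing the paths from the question. The ping loop is just an illustrative readiness wait, not a hardened check, and this is not taken from an answer in the thread:

command: ["/bin/sh", "-c"]
args:
  - |
    # start mongod in the background, then wait until it answers a ping
    mongod -f /docker-entrypoint-initdb.d/mongod.conf &
    until mongo --eval "db.adminCommand('ping')" > /dev/null 2>&1; do sleep 1; done
    # now the init script can create the users
    /docker-entrypoint-initdb.d/mongo-init.sh
    # keep the container alive on the mongod process
    wait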
I'm testing out whether I can mount data from S3 using an initContainer. What I intended and expected was the same volume being mounted to both the initContainer and the Container. Data from S3 gets downloaded by the initContainer to the mountPath /s3-data, and since the Container runs after the initContainer, it should be able to read from the path the volume was mounted to.
However, the Container doesn't show me any logs and just says 'stream closed'. The initContainer's logs show that the data was successfully downloaded from S3.
What am I doing wrong? Thanks in advance.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-job
spec:
  template:
    spec:
      initContainers:
      - name: data-download
        image: <My AWS-CLI Image>
        command: ["/bin/sh", "-c"]
        args:
        - aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
      - name: check-proper-data-mount
        image: <My Image>
        command: ["/bin/sh", "-c"]
        args:
        - cd /s3-data
        - echo "Just s3-data dir"
        - ls
        - echo "After making a sample file"
        - touch sample.txt
        - ls
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
      volumes:
      - name: s3-data
        emptyDir: {}
      restartPolicy: OnFailure
  backoffLimit: 6
You can try structuring the args part as in the example below:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    purpose: demonstrate-command
  name: command-demo
spec:
  containers:
  - args:
    - cd /s3-data;
      echo "Just s3-data dir";
      ls;
      echo "After making a sample file";
      touch sample.txt;
      ls;
    command:
    - /bin/sh
    - -c
    image: "<My Image>"
    name: containername
for reference:
How to set multiple commands in one yaml file with Kubernetes?
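As an aside (not part of the answer above), joining the same commands with && instead of ; makes the container exit non-zero as soon as any step fails, which is usually what you want in a Job; a minimal sketch:

command: ["/bin/sh", "-c"]
args:
  - >
    cd /s3-data &&
    echo "Just s3-data dir" &&
    ls &&
    echo "After making a sample file" &&
    touch sample.txt &&
    ls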
Can someone give me some guidance on best practice to bring multiple Talend jobs dynamically into Kubernetes?
I am using Talend Open Studio for Big Data
I have a listener server for Job2Docker
How do I change the scripts to automate a push to Docker Hub?
Is it possible to have a dynamic CronJob K8s type that can run jobs based on a configuration file?
I ended up not using Job2Docker in favour of a simple docker process.
Build the Talend Standalone job.
Unzip the build into a folder called jobs.
Use the below Dockerfile example to build and push your docker image.
Dockerfile
FROM java
WORKDIR /talend
COPY ./jobs /talend
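Building and pushing the image is then a plain Docker workflow; a minimal sketch, reusing the dockerhubrepo/your-etl name from the CronJob below:

docker build -t dockerhubrepo/your-etl .
docker push dockerhubrepo/your-etl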
Create a CronJob type for K8s
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etl-edw-cronjob
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: etl-edw-job
            image: dockerhubrepo/your-etl
            command: ["sh", "./process_data_warehouse_0.1/process_data_warehouse/process_data_warehouse_run.sh"]
            env:
            - name: PGHOST
              value: postgres-cluster-ip-service
            - name: PGPORT
              value: "5432"
            - name: PGDATABASE
              value: infohub
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
            - name: MONGOSERVER
              value: mongo-service
            - name: MONGOPORT
              value: "27017"
            - name: MONGODB
              value: hearth
I'm trying to change the settings of my postgres database inside my local minikube cluster. I mistakenly deployed a database without specifying the postgres user, password and database.
The problem: when I add the new env variables and use kubectl apply -f postgres-deployment.yml, Postgres does not create the user, password or database specified by the environment variables.
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: postgres
        image: postgres
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
          subPath: postgres
        env:
        - name: PGUSER
          value: admin
        - name: PGPASSWORD
          value: password
        - name: PGDATABSE
          value: testdb
How can I change the settings of postgres when I apply the deployment file?
Can you share the pod's logs?
kubectl logs <pod_name>
Postgres uses an init script with these defined variable names:
POSTGRES_USER
POSTGRES_PASSWORD
POSTGRES_DB
Try this one out
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        env:
        - name: POSTGRES_USER
          value: admin
        - name: POSTGRES_PASSWORD
          value: password
        - name: POSTGRES_DB
          value: testdb
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: pg-data
      volumes:
      - name: pg-data
        emptyDir: {}
I'm attempting to migrate over to Cloud SQL (Postgres). I have the following deployment in Kubernetes, having followed these instructions: https://cloud.google.com/sql/docs/mysql/connect-container-engine
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: menu-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: menu-service
    spec:
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: cloudsql
        emptyDir:
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      containers:
      - image: gcr.io/cloudsql-docker/gce-proxy:1.11
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=tabb-168314:europe-west2:production=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      - name: menu-service
        image: eu.gcr.io/tabb-168314/menu-service:develop
        imagePullPolicy: Always
        env:
        - name: MICRO_BROKER
          value: "nats"
        - name: MICRO_BROKER_ADDRESS
          value: "nats.staging:4222"
        - name: MICRO_REGISTRY
          value: "kubernetes"
        - name: ENV
          value: "staging"
        - name: PORT
          value: "8080"
        - name: POSTGRES_HOST
          value: "127.0.0.1:5432"
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: POSTGRES_PASS
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: POSTGRES_DB
          value: "menus"
        ports:
        - containerPort: 8080
But unfortunately I'm getting this error when trying to update the deployment:
MountVolume.SetUp failed for volume "kubernetes.io/secret/69b0ec99-baaf-11e7-82b8-42010a84010c-cloudsql-instance-credentials" (spec.Name: "cloudsql-instance-credentials") pod "69b0ec99-baaf-11e7-82b8-42010a84010c" (UID: "69b0ec99-baaf-11e7-82b8-42010a84010c") with: secrets "cloudsql-instance-credentials" not found
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "staging"/"menu-service-1982520680-qzwzn". list of unattached/unmounted volumes=[cloudsql-instance-credentials]
Have I missed something here?
You are missing (at least) one of the secrets needed to start up this Pod, namely cloudsql-instance-credentials.
From https://cloud.google.com/sql/docs/mysql/connect-container-engine:
You need two secrets to enable your Container Engine application to access the data in your Cloud SQL instance:
The cloudsql-instance-credentials secret contains the service account.
The cloudsql-db-credentials secret provides the proxy user account and password. (I think you already have this one created, since I can't see an error message about it.)
To create your secrets:
Create the secret containing the Service Account which enables authentication to Cloud SQL:
kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=[PROXY_KEY_FILE_PATH]
[...]
The link above also describes how to create a GCP service account for this purpose, if you don't have one created already.
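For completeness, the companion cloudsql-db-credentials secret referenced by the Deployment's env section can be created the same way; a sketch with placeholder values for the proxy user and password:

kubectl create secret generic cloudsql-db-credentials \
    --from-literal=username=[PROXY_USERNAME] \
    --from-literal=password=[PASSWORD]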