Bash script from a BAT file not running after connecting to a kubectl pod in Google Cloud Shell editor - postgresql

For my project, I have to connect to a Postgres database in Google Cloud Shell using a series of commands:
gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>#<project-name>.iam.gserviceaccount.com --key-file=<filename>.json
gcloud container clusters get-credentials banting --region <region> --project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> bash
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>
I am a beginner at this and until now have just been running the scripts provided to me by copy-pasting.
To make things easier, I created a .bat file in the Shell editor with all of the above commands and tried to run it with bash <filename>.
But once the kubectl exec -it <pod-name> -n <node> bash command runs and a shell opens inside the pod, as shown below, the rest of the commands do not run.
Defaulted container "<container>" out of: <node>, istio-proxy, istio-init (init) root#<pod-name>:/#
So how can I make the shell run the rest of these scripts from the .bat file:
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>

Cloud Shell is a Linux instance and defaults to the Bash shell.
BAT commonly refers to Windows/DOS batch files.
On Linux, shell scripts are generally .sh.
Your script needs to be revised so that the commands intended for the kubectl exec step are passed to the Pod rather than run by the current script.
You can try (!) the following. It creates a Bash (sub)shell on the Pod and runs the commands listed after -c in it:
gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>#<project-name>.iam.gserviceaccount.com \
--key-file=<filename>.json
gcloud container clusters get-credentials banting \
--region <region> \
--project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> -- bash -c "apt-get update && apt install postgresql postgresql-contrib && psql -h <hostname> -p <port> -d <database> -U <userId>"
However, I have some feedback and recommendations:
It's unclear whether even this approach will work, because you're running psql but doing nothing with it. In theory you could pass a script to the psql command too, but then your script becomes very janky; see the sketch after these recommendations.
It is not considered good practice to install software in containers as you're doing. The recommendation is to build the image that you want to run beforehand and use that; containers should be treated as immutable.
I encourage you to use long flags when you write scripts as short flags (-n) can be confusing whereas --namespace= is more clear (IMO). Yes, these take longer to type but your script is clearer as a result. When you're hacking on the command-line, short flags are fine.
I encourage you to not use gcloud config set e.g. gcloud config set project ${PROJECT}. This sets global values. And its use is confusing because subsequent commands use the values implicitly. Interestingly, you provide a good example of why this can be challenging. Your subsequent command gcloud container clusters get-credentials --project=${PROJECT} explicitly uses the --project flag (this is good) even though you've already implicitly set the value for project using gcloud config set project.
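To tie the last few points together, here is a hedged sketch of that final step rewritten with long flags and with the SQL passed to psql non-interactively. queries.sql is a hypothetical file in your Cloud Shell home directory, and every bracketed value is still a placeholder:
# Install the client inside the pod (still not ideal; see the note on immutable containers).
kubectl exec <pod-name> --namespace=<node> -- \
bash -c "apt-get update && apt-get install --yes postgresql postgresql-contrib"
# Stream the (hypothetical) queries.sql from Cloud Shell into psql running in the pod.
# psql may still prompt for a password unless one is provided, e.g. via PGPASSWORD.
kubectl exec --stdin <pod-name> --namespace=<node> -- \
psql --host=<hostname> --port=<port> --dbname=<database> --username=<userId> < queries.sql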

Related

Azure DevOps Container Pipeline job is trying to redundantly give user:1000 sudo privileges

I have a docker image, already made, that another pipeline uses for build jobs. That image already has a user:1000 with sudo (passwordless) permissions and a home directory. This was done to make manual use of the container more useful... there are applications in the image that prefer to run under a non-root user.
The pipeline using this image finds the existing user (great!) but then tries to give the user sudo permissions that it already has and this breaks the flow...
--<yaml pipeline code>--
container:
image: acr.url/foo/bar:v1
endpoint: <svc-connection>
--<pipeline run>--
...
/usr/bin/docker network create --label dc4b27 vsts_network_6b3e...
/usr/bin/docker inspect --format="{{index .Config.Labels \"com.azure.dev.pipelines.agent.handler.node.path\"}}" ***/foo/bar:v1
/usr/bin/docker create --name 9479... --label dc4b27 --network vsts_network_6b3ee... -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/opt/azagent/_work/9":"/__w/9" -v "/opt/azagent/_work/_temp":"/__w/_temp" -v "/opt/azagent/_work/_tasks":"/__w/_tasks" -v "/opt/azagent/_work/_tool":"/__t" -v "/opt/azagent/externals":"/__a/externals":ro -v "/opt/azagent/_work/.taskkey":"/__w/.taskkey" ***/foo/bar:v1 "/__a/externals/node/bin/node" -e "setInterval(function(){}, 24 * 60 * 60 * 1000);"
9056...
/usr/bin/docker start 9056...
9056...
/usr/bin/docker ps --all --filter id=9056... --filter status=running --no-trunc --format "{{.ID}} {{.Status}}"
9056... Up Less than a second
/usr/bin/docker exec 9056... sh -c "command -v bash"
/bin/bash
whoami
devops
id -u devops
1000
Try to create a user with UID '1000' inside the container.
/usr/bin/docker exec 9056... bash -c "getent passwd 1000 | cut -d: -f1 "
/usr/bin/docker exec 9056... id -u viv
1000
Grant user 'viv' SUDO privilege and allow it run any command without authentication.
/usr/bin/docker exec 9056... groupadd azure_pipelines_sudo
groupadd: Permission denied.
groupadd: cannot lock /etc/group; try again later.
##[error]Docker exec fail with exit code 10
Finishing: Initialize containers
I am OK working with user:1000 in the container, as the Azure agent runs on the host VM under user:1000 ('devops'), so the IDs match inside and outside of the container, getting around a shortcoming of the Docker volume mount system.
The question is: Is there a pipeline yaml method or control parameter to tell the run not to try and setup sudo permissions on the discovered user account (uid:1000) in the container?
I am getting around this issue right now by adding options: --user 0 to the container: section in the yaml script but I would prefer not to do that...
Thx.

One command to restore local PostgreSQL dump into kubectl pod?

I'm just seeing if there is one command to restore a local backup to a pod running postgres. I've been unable to get it working with:
kubectl exec -it <pod> -- <various commands>
So I've just resorted to:
kubectl cp <my-file> <my-pod>:<my-file>
Then restoring it.
Thinking there is likely a better way, so thought I'd ask.
You can call the pg_restore command directly in the pod, specifying the path to your local file as a dump source (connection options may vary depending on the image you're using), e.g.:
kubectl exec -i POD_NAME -- pg_restore -U USERNAME -C -d DATABASE < dump.sql
If the file were in S3 or another location reachable from the pod, you could have a script inside the container that downloads the file and performs the restore, all in a single bash script.
That should allow you to perform the restore in a single command.
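A minimal sketch of what such an in-container script could look like, assuming the image already ships the aws CLI and pg_restore, and with the bucket path and connection details as placeholders:
#!/usr/bin/env bash
# restore-from-s3.sh -- hypothetical helper baked into the image.
# Downloads a dump from S3 and restores it in one go; all values below are placeholders.
set -euo pipefail
aws s3 cp s3://MY_BUCKET/backups/mybackup.dmp /tmp/mybackup.dmp
pg_restore -U USERNAME -C -d DATABASE /tmp/mybackup.dmp
You would then trigger it with something like kubectl exec POD_NAME -- /path/to/restore-from-s3.sh.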
cat mybackup.dmp | kubectl exec -i ... -- pg_restore ...
Or something like that.

How to run a command in a container using kubectl exec that uses environment variables from the container?

I'm trying to write a script that runs some commands inside the container using kubectl exec. I'd like to use the environment variables that exist inside the container, but I'm struggling to figure out how to prevent my local shell from evaluating the variable while still having it evaluated in the container.
This was my first try, but $MONGODB_ROOT_PASSWORD gets evaluated by my local shell instead of inside the container:
kubectl -n enterprise exec mycontainer -- mongodump --username root --password $MONGODB_ROOT_PASSWORD --out /dump
I tried this, but it had the same issue with the pipe: it was evaluated locally instead of in the container:
kubectl -n enterprise exec mycontainer -- echo 'mongodump --username root --password $MONGODB_ROOT_PASSWORD --out /dump' | sh
Is there a way to do this with kubectl exec?
You need a sh -c in there, like exec -- sh -c 'whatever $PASSWORD'.
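Applied to the mongodump command from the question, that would look something like this (the single quotes keep your local shell from expanding the variable, so the sh inside the container does it instead):
# $MONGODB_ROOT_PASSWORD is expanded by sh inside the container, not locally.
kubectl -n enterprise exec mycontainer -- sh -c 'mongodump --username root --password "$MONGODB_ROOT_PASSWORD" --out /dump'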

How to copy docker volume from one machine to another?

I have created a docker volume for postgres on my local machine.
docker volume create postgres-data
Then I used this volume to run a container:
docker run -it -v postgres-data:/var/lib/postgresql/9.6/main postgres
After that I did some database operations, which were stored automatically in postgres-data. Now I want to copy that volume from my local machine to another, remote machine. How can I do that?
Note: the database is very large.
If the second machine has SSH enabled you can use an Alpine container on the first machine to map the volume, bundle it up and send it to the second machine.
That would look like this:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh <TARGET_HOST> \
'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - "'
You will need to change:
SOURCE_DATA_VOLUME_NAME
TARGET_HOST
TARGET_DATA_VOLUME_NAME
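For example, with the postgres-data volume from the question (the SSH user, remote host, and target volume name here are made up and would need replacing):
# Stream the contents of postgres-data into a volume of the same name on a remote host.
docker run --rm -v postgres-data:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh user@remote-host \
'docker run --rm -i -v postgres-data:/to alpine ash -c "cd /to ; tar -xpvf - "'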
Or, you could try using this helper script https://github.com/gdiepen/docker-convenience-scripts
Hope this helps.
I had the exact same problem, but in my case both volumes were in separate VPCs and I couldn't expose SSH to the outside world. I ended up creating dvsync, which uses ngrok to create a tunnel between them and then uses rsync over SSH to copy the data. In your case you could start the dvsync-server on your machine:
docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=postgres-data,target=/data,readonly \
quay.io/suda/dvsync-server
and then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The NGROK_AUTHTOKEN can be found in the ngrok dashboard, and the DVSYNC_TOKEN is shown by dvsync-server in its stdout.
Once the synchronization is done, the dvsync-client container will stop.

Why can't you start postgres in docker using "service postgres start"?

All the tutorials point to running postgres in the following form:
docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
Why can't we in our Docker file have:
ENTRYPOINT ["/etc/init.d/postgresql-9.2", "start"]
And simply start the container by
docker run -d psql
Is that not the purpose of Entrypoint or am I missing something?
The difference is that the init script provided in /etc/init.d is not an entry point. Its purpose is quite different: to get the entry point started, in the background, and then report the success or failure to the caller. That script causes a postgres process, usually indirectly via pg_ctl, to be started and detached from the controlling terminal.
For docker to work best, it needs to run the application directly, attached to the docker process. That way it can usefully and generically terminate the application when the user asks for it, or quickly discover and respond to the process crashing.
To exemplify what IfLoop said, using CMD in the Dockerfile:
FROM postgres
CMD ["/usr/lib/postgresql/9.2/bin/postgres", "-D", "/var/lib/postgresql/9.2/main", "-c", "config_file=/etc/postgresql/9.2/main/postgresql.conf"]
To run:
$ docker run -d -p 5432:5432 psql
Watching PostgreSQL logs:
$ docker logs -f POSTGRES_CONTAINER_ID