In Concourse, how do you hijack a container made via `fly execute`?

If I run fly execute to perform a one-off build, how can I then hijack/intercept the container?

When you perform a fly execute, it gives you back a global build ID, which you can then use as an argument to fly intercept:
$ fly -t ci e -c ci/build-docs.yml
executing build 43627
...
$ fly -t ci i -b 43627
bash-4.4#

Related

How to execute docker compose run command with ansible?

Is there a way to execute a "docker compose run" command with Ansible? Here is the command that I want to turn into an Ansible task:
docker compose --env-file configs/.env.dev -f docker-compose.yml -f docker-compose.dev.yml run --rm artisan migrate:refresh
I know how to use Ansible's shell module for that. I've also used Ansible's docker-compose module to execute compose up, but I haven't found any module parameter for specifying the command to execute.
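As a sketch of the shell/command-module approach the question already mentions, the whole thing can be run as an Ansible ad-hoc command; the "web" host pattern and the /opt/app project directory below are placeholders, not values from the question:
# Hypothetical ad-hoc equivalent of a shell/command task; chdir= points
# at wherever the compose files actually live.
ansible web -m ansible.builtin.command -a "chdir=/opt/app docker compose --env-file configs/.env.dev -f docker-compose.yml -f docker-compose.dev.yml run --rm artisan migrate:refresh"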

kubectl run - How to pass some commands to be executed before reaching the interactive terminal?

When using kubectl run -ti with an interactive terminal, I would like to be able to pass a few commands in the kubectl run command to be run before the interactive terminal comes up, commands like apt install zip for example. That way, I don't need to wait for the interactive terminal to come up and then run those common commands. Is there a way to do this?
Thanks
You can use the shell's exec to hand control over from your initial "outer" bash, which is responsible for the initialization steps you want, to a fresh one (fresh in the sense that it does not have -c and can optionally be a login shell) that runs after your pre-steps:
kubectl run sample -it --image=ubuntu:20.04 -- \
bash -c "apt update; apt install -y zip; exec bash -il"

Backup PostgreSQL from Azure Container Instance

I created an Azure Container Instance and ran PostgreSQL in it, with an Azure storage account mounted. How can I set up a backup job, possibly on a schedule?
When I run the command
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall -c -U postgrace > dump.sql"
I get the following error:
error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"pg_dumpall -c -U postgrace > dump.sql\": executable file not found in $PATH"
I read that
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
Perhaps there is a way to run this as a task? Thanks.
Unfortunately - and as you already mentioned - it's not possible to run any commands with arguments like echo FOO or chain multiple commands together with &&.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec#run-a-command-with-azure-cli
You should be able to run an interactive shell by using --exec-command /bin/bash.
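Applied to the command from the question, that should look something like this (resource group and container name are taken from the question above):
az container exec \
  --resource-group Vitalii-demo \
  --name vitalii-demo \
  --exec-command "/bin/bash"
From that shell you can run pg_dumpall by hand, including the output redirection.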
But this will not help if you want to schedule the backups programmatically.
pg_dumpall can also be configured by environment variables:
https://www.postgresql.org/docs/9.3/libpq-envars.html
You could launch your backup container with the correct environment variables in order to connect to your database service:
PGHOST
PGPORT
PGUSER
PGPASSWORD
With these variables set, a simple pg_dumpall should do exactly what you want.
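As a sketch of that idea, the backup container could be created with those variables pre-set; the image, host, and credentials below are placeholders, not values from the question:
az container create \
  --resource-group Vitalii-demo \
  --name pg-backup \
  --image postgres:11 \
  --environment-variables PGHOST=my-db-host PGPORT=5432 PGUSER=postgres \
  --secure-environment-variables PGPASSWORD=my-secret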
Hope that helps.
UPDATE:
Yikes: even when configuring the connection via environment variables, you won't be able to specify the desired output file... Sorry.
You could create your own Docker image with a pre-configured script for dumping your PostgreSQL database.
Doing it that way, you can configure the output file in your script and then simply execute the script with --exec-command dump_my_db.sh.
Keep in mind that your script has to be located somewhere in the default $PATH - e.g. /usr/local/bin.
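A minimal sketch of such a script, assuming the libpq variables above are set in the container and the mounted storage account is available at /mnt/backup (both assumptions):
#!/bin/sh
# dump_my_db.sh - hypothetical backup script baked into the custom image,
# placed at /usr/local/bin/dump_my_db.sh so it is found via $PATH.
# Connection details come from PGHOST/PGPORT/PGUSER/PGPASSWORD.
pg_dumpall -c > /mnt/backup/dump.sql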

kubectl exec fails with the error "Unable to use a TTY - input is not a terminal or the right kind of file"

I am running a jenkins pipeline with the following command:
kubectl exec -it kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s#1585031458
which is running fine on the terminal of the machine the pipeline is running on, but on the actual pipeline I get the following error: "Unable to use a TTY - input is not a terminal or the right kind of file"
Any tips on how to go about resolving this?
When the -it flags are used with kubectl exec, they enable interactive TTY mode. Given the error that you mentioned, it seems that Jenkins doesn't allocate a TTY.
Since you are running the command in a Jenkins job, I would assume that your command is not necessarily interactive. A possible solution for the problem would be to simply remove the -t flag and try to execute the following instead:
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s#1585031458
For Windows Git Bash:
alias kubectl='winpty kubectl'
$ kubectl exec -it <container>
Or just use winpty before the desired command.
For Windows Git Bash users: use PowerShell, not Git Bash.
Remove the -t option. That requests a TTY, which as you noted does not exist in Jenkins.
Just a hint for anyone who gets stuck like I did, with kafkacat suddenly returning no data after removing the -t: it turns out that if there's no TTY, kafkacat defaults to producer mode. I had never used the -C flag because consumer mode is the default, but in this case it's required.

How to avoid a Docker container stopping after the application is stopped

There is a Docker container with a Postgres server. Once Postgres is stopped or has crashed (it doesn't matter which), I need to check some environment variables and the state of a few files.
By default, the container stops after the application finishes.
I know there is an option to change the default behavior in the Dockerfile, but I can no longer find it.
If somebody knows it, please give me a Dockerfile example like this:
FROM something
RUN something ...
ENTRYPOINT [something]
You can simply run a non-exiting process at the end of the entrypoint to keep the container alive, even if the main process exits.
For example, use
tail -f 'some log file'
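As a hedged sketch, a wrapper entrypoint along these lines would keep a Postgres container around for inspection (the postgres invocation and the message are assumptions, since the question doesn't show the original entrypoint):
#!/bin/sh
# entrypoint.sh - hypothetical wrapper: run the main process, and once it
# exits or crashes, keep the container alive so it can be inspected.
postgres "$@"
echo "postgres exited with status $?; keeping container alive" >&2
tail -f /dev/null
The Dockerfile from the question would then point its ENTRYPOINT at this script.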
There isn't an "option" to keep a container running when the main process has stopped or died. You can run something different in the container while debugging the actual startup scripts. Sometimes you need to override an entrypoint to do this.
docker run -ti $IMAGE /bin/sh
docker run -ti --entrypoint=/bin/sh $IMAGE
If the main process will not stay running when you docker start the existing container, then you won't be able to use that container interactively; otherwise you could:
docker start $CID
docker exec -ti $CID sh
For getting files from an existing container, you can docker cp anything you need from the stopped container.
docker cp $CID:/a/path /some/local/path
You can also docker export a tar archive of the complete container.
docker export $CID -o $CID.tar
tar -tvf $CID.tar | grep afile
The environment Docker injects can be seen with docker inspect, but this won't give you anything the process has added to the environment.
docker inspect $CID --format '{{ json .Config.Env }}'
In general, Docker requires a process to keep running in the foreground; otherwise, it assumes that the application has stopped and shuts the container down. Below I outline a few ways that I'm aware of to prevent a container from stopping:
Use a process manager such as runit or systemd to run a process inside a container:
As an example, Red Hat has an article about running systemd within a Docker container.
A few possible approaches for debugging purposes:
a) Add an artificial sleep or pause to the entrypoint:
For example, in bash, you can use this to create an infinite pause:
while true; do sleep 1; done
b) For a fast workaround, one can run the tail command in the container:
As an example, with the command below we start a new container in detached/background mode (-d) and execute the tail -f /dev/null command inside the container. As a result, the container will run forever.
docker run -d ubuntu:18.04 tail -f /dev/null
And if the main process has crashed/exited, you can still look up the ENV variables or inspect files with exec and basic commands like cd and ls. A few relevant commands for that:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' name-of-container
docker exec -it name-of-container bash