How to pass multiple commands to docker-compose.uffizzi.yml - docker-compose

I need to pass the following command to my service in docker-compose.uffizzi.yml:
bundle exec rails db:create db:migrate db:seed && bundle exec rails s -b 0.0.0.0 -p 3000
According to this doc: https://docs.uffizzi.com/references/compose-spec/#command
command can be passed as usual or converted to an array of strings.
But when I use it that way, I get the following error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bundle exec rails db:create db:migrate db:seed && bundle exec rails s -b 0.0.0.0 -p 3000": executable file not found in $PATH: unknown
But if I use only one of the commands
bundle exec rails db:create db:migrate db:seed
or only
bundle exec rails s -b 0.0.0.0 -p 3000
it works fine. But I need both of them in my service command.
Do you have any ideas how to write this command the right way?

The right answer is to wrap both commands in a single shell invocation and pass that to command:
command: ["bash", "-c", "bundle exec rails db:create db:migrate db:seed && bundle exec rails s -b 0.0.0.0 -p 3000"]

Related

How to run multiple commands with gosu in Kubernetes job

I am defining a Kubernetes job to run a rake task but am stuck on how to write the command...
I am new to K8s and am trying to run a Rails application in K8s.
In my Rails app Dockerfile, I created a user, copied the code to /home/abc, installed rvm and rails for this user, and also specified an entrypoint and command:
ENTRYPOINT ["/home/abc/docker-entrypoint.sh"]
CMD bash -l -c "cd /home/abc && rvm use 2.2.10 --default && rake db:migrate && exec /usr/bin/supervisord -c config/supervisord.conf"
In docker-entrypoint.sh, the last command is
exec gosu abc "$@"
The goal is, at the end, to gosu to user abc and then run the db migration and start the server through supervisord. It works, although I don't know whether it is good practice or not...
Now I would like to run another rake task for some purpose.
First, I tried to run it using the kubectl exec command:
kubectl exec my-app-deployment-xxxx -- gosu abc bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'
It worked, but it requires knowing the pod id, which is dynamic, so I tried to create a K8s job and specify it in the command:
containers:
  - name: my-app
    image: my-app:v0.2
    command:
      - "gosu"
      - "abc"
      - "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"
I expected the job to complete successfully, but it failed, and the error info from kubectl logs job_pod looks like:
error: exec: "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'": stat bash -l -c cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task': no such file or directory
I think it comes down to how the 'command' part should be written to run multiple commands with gosu...
Thanks for your help!
Since gosu takes the user name and the Bash shell as arguments, I'd say that this is one command rather than three separate ones.
Given that there can be only a single entrypoint in each container, you can try running it as follows:
containers:
  - name: my-app
    image: my-app:v0.2
    command: ["/bin/sh", "-c", "gosu username bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"]
Notice that you have to spawn a new shell in order to run the command, as the image's entrypoint is replaced when you set command in the container spec in Kubernetes.
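For reference, a minimal sketch of how that container spec could sit inside a Job manifest; the Job name and restartPolicy are assumptions added for illustration:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-init-task        # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never      # do not restart the pod once the rake task exits
      containers:
        - name: my-app
          image: my-app:v0.2
          command: ["/bin/sh", "-c", "gosu username bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"]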

Heroku Postgres extension errors with `rake db:structure:load` or `rake db:setup`?

When running rake db:structure:load on Heroku, we get the following error:
$ heroku run rake db:structure:load -a my_heroku_app
Running rake db:structure:load on ⬢ my_heroku_app... up, run.9343 (Standard-1X)
psql:/app/db/structure.sql:21: ERROR: must be owner of extension plpgsql
rake aborted!
failed to execute:
psql -v ON_ERROR_STOP=1 -q -f /app/db/structure.sql d7u1inlf2d16bd
Heroku's current suggestion is to manually comment out all COMMENT ON EXTENSION lines in structure.sql or switch to schema.rb. Another approach is to add a small prepend that fixes this automatically. I have it in our config/initializers folder, but many other places should work:
https://gist.github.com/jsilvestri/0210d83b7ee2aa54876e2be3323dd3fc
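If you go the manual route Heroku suggests, something along these lines can comment out those lines before the structure file is committed (an illustration only; assumes GNU sed and the default db/structure.sql path):
sed -i 's/^COMMENT ON EXTENSION/-- COMMENT ON EXTENSION/' db/structure.sql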

docker varnish cmd error - no such file or directory

I'm trying to get a Varnish container running as part of a multicontainer Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Again looking around the running container directly, it seems that Varnish is where it thinks it should be, the VCL file is where it thinks it should be... what's stopping it running from within docker-compose?
Thanks!
I didn't get to the bottom of why I was getting this error, but "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
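As an aside, the original error pattern looks like the same exec-form issue discussed in the first question above: everything inside CMD [ "..." ] is treated as the name of a single executable, so Docker goes looking for a file literally named "exec /usr/local/sbin/varnishd ...". A sketch of the original CMD rewritten with a shell wrapper (untested against this particular image) would be:
CMD ["/bin/sh", "-c", "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384"]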

docker exec -it returns "cannot enable tty mode on non tty input"

The docker exec -it command returns the following error: "cannot enable tty mode on non tty input"
level="fatal" msg="cannot enable tty mode on non tty input"
I am running Docker (1.4.1) on a CentOS 6.6 box.
I am trying to execute the following command
docker exec -it containerName /bin/bash
but I am getting the following error:
level="fatal" msg="cannot enable tty mode on non tty input"
Running docker exec -i instead of docker exec -it fixed my issue. Indeed, my script was launched from crontab, which isn't attached to a terminal.
As a reminder:
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY
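For example, a cron entry along these lines works because it never asks for a TTY (the container name and script path are made up for illustration):
*/5 * * * * docker exec -i my_container /usr/local/bin/cleanup.sh >> /var/log/cleanup.log 2>&1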
If you're getting this error in the Windows Docker client, then you may need to use the run command as below:
$ winpty docker run -it ubuntu /bin/bash
Just use "-i":
docker exec -i [your-ps] [command]
If you're on Windows using docker-machine with Git Bash or Cygwin, to "get inside" a running container you'll need to do the following:
docker-machine ssh default to ssh into the virtual machine (Virtualbox most likely)
docker exec -it <container> bash to get into the container.
EDIT:
I've recently discovered that if you use Windows PowerShell you can docker exec directly into the container; with Cygwin or Git Bash you can use winpty docker exec -it <container> bash and skip the docker-machine ssh step above.
I get "cannot enable tty mode on non tty input" for the following command on windows with boot2docker
docker exec -it <containerIdOrName> bash
The command below fixed the problem:
winpty docker exec -it <containerIdOrName> bash
docker exec runs a new command in an already-running container. It is not the way to start a new container -- use docker run for that.
That may be the cause of the "non tty input" error. Or it could be where you are running docker. Is it a true terminal? That is, is a full tty session available? You might want to check whether you are in an interactive session with:
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
from https://unix.stackexchange.com/questions/26676/how-to-check-if-a-shell-is-login-interactive-batch
I encountered this same error message in Windows 7 64bit using Mintty shipped with Git for Windows.
$ docker run -i -t ubuntu /bin/bash
cannot enable tty mode on non tty input
I tried to prefix the above command with winpty as other answers suggested, but running it showed me another error message below:
$ winpty docker run -i -t ubuntu /bin/bash
exec: "D:\\Git\\usr\\bin\\bash": executable file not found in $PATH
docker: Error response from daemon: Container command not found or does not exist..
Then I happened to run the following command, which gave me what I wanted:
$ winpty docker run -i -t ubuntu bash
root@512997713d49:/# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
root@512997713d49:/#
I'm running docker exec -it under Jenkins jobs and getting the error 'cannot enable tty mode on non tty input'. No output from the docker exec command is returned. My job's login sequence was:
jenkins shell -> ssh user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -it <container>
I made a change to use the -T flag in the initial ssh from Jenkins ("-T - Disable pseudo-terminal allocation") and to use the -i flag with docker exec instead of -it ("-i - interactive. -t - allocate pseudo tty."). This seems to have solved my problem.
jenkins shell -> ssh -T user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -i <container>
The behaviour kind of matches this docker exec tty bug: https://github.com/docker/docker/issues/8755. A workaround in that docker bug discussion suggests using this:
docker exec -it <CONTAINER> script -qc <COMMAND>
Using that workaround didn't solve my problem, but it is interesting. Try these with different flags and under different ssh invocations; you can see 'not a tty' even when using -t with docker exec:
$ docker exec -it <CONTAINER> script -qc 'tty'
/dev/pts/0
$ docker exec -it <CONTAINER> 'tty'
not a tty
$ docker exec -it <CONTAINER> bash -c 'tty'
not a tty

Starting a rake task as a daemon

I'm trying to daemonize a rake task by running the following command (on Ubuntu 12.04):
start-stop-daemon -S --pidfile /home/dep/apps/fid/current/tmp/pids/que.pid \
  -u dep -d /home/dep/apps/fid/current -b -m \
  -a "bundle exec rake que:work RAILS_ENV=staging > /home/dep/apps/fid/current/log/que.log 2>&1" \
  -v
The console says
Starting bundle exec rake que:work RAILS_ENV=staging > /home/dep/apps/fid/current/log/que.log 2>&1...
Detaching to start bundle exec rake que:work RAILS_ENV=staging > /home/dep/apps/fid/current/log/que.log 2>&1...done.
but nothing happens.
The pid file is empty and no log file is created.
Am I missing anything here?
Thanks.
Try to find out more about the environments (and their differences) when running bundle from your normal shell and when running it via start-stop-daemon,
e.g. print all env variables in both cases and adjust accordingly.
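One way to capture the environment the daemon actually sees is to point -a at a small wrapper script and compare its output with env | sort from your normal shell. This is only a sketch; the script path and log locations are assumptions:
#!/bin/sh
# /home/dep/apps/fid/current/bin/que_daemon.sh (hypothetical path)
# Dump the environment start-stop-daemon provides, then run the task.
env | sort > /home/dep/apps/fid/current/log/que_env.log
cd /home/dep/apps/fid/current
exec bundle exec rake que:work RAILS_ENV=staging >> /home/dep/apps/fid/current/log/que.log 2>&1
Then start it with:
start-stop-daemon -S --pidfile /home/dep/apps/fid/current/tmp/pids/que.pid -u dep -d /home/dep/apps/fid/current -b -m -a /home/dep/apps/fid/current/bin/que_daemon.sh -v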