I'm trying to daemonize a rake task by running the following command (on Ubuntu 12.04)
start-stop-daemon -S --pidfile /home/dep/apps/fid/current/tmp/pids/que.pid \
  -u dep -d /home/dep/apps/fid/current -b -m \
  -a "bundle exec rake que:work RAILS_ENV=staging > /home/dep/apps/fid/current/log/que.log 2>&1" \
  -v
The console says
Starting bundle exec rake que:work RAILS_ENV=staging > /home/dep/apps/fid/current/log/que.log 2>&1...
Detaching to start bundle exec rake que:work RAILS_ENV=staging > /home/dep/apps/fid/current/log/que.log 2>&1...done.
but nothing happens.
The pid file is empty and no log file is created.
Am I missing anything here?
Thanks.
Try to find out more about the environments (and their differences) when running bundle from your normal shell versus running it from start-stop-daemon.
E.g. print all environment variables in both cases and adjust accordingly.
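For example, you can dump the environment from both contexts and diff the two (a sketch; the /tmp paths are illustrative). Since start-stop-daemon's -a expects an executable rather than a shell command string, /bin/sh is used here to interpret the pipeline and redirection:

# From your normal shell:
env | sort > /tmp/env-shell.txt

# Via start-stop-daemon, wrapping the command in a shell:
start-stop-daemon -S -b -m --pidfile /tmp/envdump.pid \
  -a /bin/sh -- -c 'env | sort > /tmp/env-daemon.txt'

diff /tmp/env-shell.txt /tmp/env-daemon.txt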
I need to pass the following command to my service in docker-compose.uffizzi.yml:
bundle exec rails db:create db:migrate db:seed && bundle exec rails s -b 0.0.0.0 -p 3000
According to this doc: https://docs.uffizzi.com/references/compose-spec/#command
command can be passed as usual or converted to an array of strings.
But when I pass it that way, I get the following error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bundle exec rails db:create db:migrate db:seed && bundle exec rails s -b 0.0.0.0 -p 3000": executable file not found in $PATH: unknown
But if I use only one of the commands
bundle exec rails db:create db:migrate db:seed
or only
bundle exec rails s -b 0.0.0.0 -p 3000
it works fine. But I need both of them in my service command.
Do you have any ideas on how to write this command the right way?
The right answer is to use command this way. In the array form the command is executed directly, without a shell, so && is not interpreted; wrapping the whole string in bash -c brings a shell back in:
command: ["bash", "-c", "bundle exec rails db:create db:migrate db:seed && bundle exec rails s -b 0.0.0.0 -p 3000"]
Using VS Code and WSL2, I tried to launch a container using the default method with no customization. This generated the same error as below.
So, following the VS Code docs, I set a "workspaceMount" in devcontainer.json:
"workspaceMount": "source=${localWorkspaceFolder},target=/workspaces/myRepo,type=bind,consistency=delegated",
"workspaceFolder": "/workspaces",
I select Reopen in Container; the launch sequence happens, but an error is generated:
a mount config is invalid, make sure it has the right format and a source folder that exists on the machine where the Docker daemon is running
The log error is:
Command failed: docker run -a STDOUT -a STDERR --mount source=d:\git\myRepo,target=/workspaces/myRepo,type=bind,consistency=delegated --mount type=volume,src=vscode,dst=/vscode -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=d:\git\myRepo --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --entrypoint /bin/sh vsc-myRepo-a878aa9edbcf04f717c76e764dabcde6 -c echo Container started ; trap "exit 0" 15; while sleep 1 & wait $!; do :; done
By launching the container from Docker Desktop, I can confirm:
cd /workspaces
ls -l
drwxr-xr-x 2 root root 4096 Dec 3 11:48 myRepo
Is this issue due to the owner being root:root?
Should this be changed with chown in the Dockerfile? If so, could you provide sample code to do this; is it done with RUN chown ...?
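For reference, ownership is usually adjusted in the Dockerfile with RUN chown, e.g. (a sketch; the vscode user is illustrative and must already exist in the image):

# Note: this only affects paths baked into the image; a bind-mounted
# workspace keeps the ownership coming from the mount itself.
RUN chown -R vscode:vscode /workspaces/myRepo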
I guess you followed the documentation at https://code.visualstudio.com/docs/remote/containers-advanced
The source should contain the subfolder "myRepo" and the target should be only "/workspaces":
"workspaceMount": "source=${localWorkspaceFolder}/myRepo,target=/workspaces,type=bind,consistency=delegated",
"workspaceFolder": "/workspaces",
I am defining a Kubernetes Job to run a rake task but am stuck on how to write the command...
I am new to K8s and trying to run a Rails application in K8s.
In my Rails app Dockerfile, I created a user, copied the code to /home/abc, installed rvm and rails for this user, and also specified an entrypoint and command:
ENTRYPOINT ["/home/abc/docker-entrypoint.sh"]
CMD bash -l -c "cd /home/abc && rvm use 2.2.10 --default && rake db:migrate && exec /usr/bin/supervisord -c config/supervisord.conf"
In docker-entrypoint.sh, the last command is:
exec gosu abc "$@"
The goal is, at the end, to gosu to user abc and then run the db migration and start the server through supervisord. It works, although I don't know whether it is good practice or not...
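For context, a minimal docker-entrypoint.sh of this shape might look like the following (a sketch, assuming the abc user already exists in the image):

#!/bin/bash
set -e
# Run any one-time setup here, then drop privileges from root
# to abc and replace this shell with whatever CMD was passed in.
exec gosu abc "$@"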
Now I would like to run another rake task for some purpose.
First, I tried to run it using the kubectl exec command:
kubectl exec my-app-deployment-xxxx -- gosu abc bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'
It worked, but it requires knowing the pod ID, which is dynamic. So I tried to create a K8s Job and specify the command:
containers:
- name: my-app
image: my-app:v0.2
command:
- "gosu"
- "abc"
- "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"
I expected the job to complete successfully, but it failed. The error info from kubectl logs job_pod is:
error: exec: "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'": stat bash -l -c cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task': no such file or directory
I think it is because of how I wrote the 'command' part when running multiple commands with gosu...
Thanks for your help!
Since gosu takes the user name and the Bash shell as arguments, I'd say that this is one command rather than three separate ones.
Given that there can be only one entrypoint in each container, you can try running it as follows:
containers:
- name: my-app
image: my-app:v0.2
command: ["/bin/sh", "-c", "gosu username bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"]
Notice that you have to spawn a shell (/bin/sh -c) to interpret the combined command string, as the image's entrypoint is replaced when you set command in the container spec in Kubernetes.
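A minimal Job manifest around that container spec might look like this (a sketch; the metadata name and backoffLimit are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: app-init-test-task
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: my-app
        image: my-app:v0.2
        command: ["/bin/sh", "-c", "gosu abc bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"]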
docker exec -it command returns the following error: "cannot enable tty mode on non tty input"
level="fatal" msg="cannot enable tty mode on non tty input"
I am running Docker (1.4.1) on a CentOS 6.6 box.
I am trying to execute the following command:
docker exec -it containerName /bin/bash
but I am getting the following error:
level="fatal" msg="cannot enable tty mode on non tty input"
Running docker exec -i instead of docker exec -it fixed my issue. Indeed, my script was launched from crontab, which doesn't run in a terminal.
As a reminder:
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY
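For example, a crontab entry would use -i only (a sketch; the container name and script path are illustrative):

*/5 * * * * docker exec -i mycontainer /usr/local/bin/task.sh >> /var/log/task.log 2>&1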
If you're getting this error in the Windows Docker client, then you may need to use the run command as below:
$ winpty docker run -it ubuntu /bin/bash
just use "-i"
docker exec -i [your-ps] [command]
If you're on Windows, using docker-machine, and you're using Git Bash or Cygwin, to "get inside" a running container you'll need to do the following:
docker-machine ssh default to ssh into the virtual machine (VirtualBox, most likely)
docker exec -it <container> bash to get into the container.
EDIT:
I've recently discovered that if you use Windows PowerShell you can docker exec directly into the container; with Cygwin or Git Bash you can use winpty docker exec -it <container> bash and skip the docker-machine ssh step above.
I get "cannot enable tty mode on non tty input" for the following command on windows with boot2docker
docker exec -it <containerIdOrName> bash
The command below fixed the problem:
winpty docker exec -it <containerIdOrName> bash
docker exec runs a new command in an already-running container. It is not the way to start a new container; use docker run for that.
That may be the cause of the "non tty input" error. Or it could be where you are running docker: is it a true terminal? That is, is a full tty session available? You might want to check whether you are in an interactive session with:
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
from https://unix.stackexchange.com/questions/26676/how-to-check-if-a-shell-is-login-interactive-batch
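A related quick check is whether stdin itself is attached to a terminal, using the standard POSIX -t test:

[ -t 0 ] && echo 'stdin is a tty' || echo 'stdin is not a tty'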
I encountered this same error message on Windows 7 64-bit, using Mintty shipped with Git for Windows.
$ docker run -i -t ubuntu /bin/bash
cannot enable tty mode on non tty input
I tried to prefix the above command with winpty as other answers suggested, but running it gave me another error message:
$ winpty docker run -i -t ubuntu /bin/bash
exec: "D:\\Git\\usr\\bin\\bash": executable file not found in $PATH
docker: Error response from daemon: Container command not found or does not exist..
Then I happened to run the following command, which gave me what I wanted. Using bash instead of /bin/bash avoids the MSYS path conversion that turned /bin/bash into D:\Git\usr\bin\bash in the error above:
$ winpty docker run -i -t ubuntu bash
root@512997713d49:/# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
root@512997713d49:/#
I'm running docker exec -it under a Jenkins job and getting the error 'cannot enable tty mode on non tty input'. No output from the docker exec command is returned. My job's login sequence was:
jenkins shell -> ssh user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -it <container>
I made a change to use the -T flag in the initial ssh from Jenkins ("-T - disable pseudo-terminal allocation") and to use the -i flag with docker exec instead of -it ("-i - interactive; -t - allocate a pseudo-tty"). This seems to have solved my problem:
jenkins shell -> ssh -T user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -i <container>
The behaviour kind of matches this docker exec tty bug: https://github.com/docker/docker/issues/8755. A workaround in that bug discussion suggests using this:
docker exec -it <CONTAINER> script -qc <COMMAND>
Using that workaround didn't solve my problem, though it is interesting. Try these with different flags and under different ssh invocations; you can see 'not a tty' even when using -t with docker exec:
$ docker exec -it <CONTAINER> script -qc 'tty'
/dev/pts/0
$ docker exec -it <CONTAINER> 'tty'
not a tty
$ docker exec -it <CONTAINER> bash -c 'tty'
not a tty
I'm trying to create a service / script to automatically start and control my nodejs server, but it doesn't seem to work at all.
First of all, I used this source as main reference http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
When trying to start the job (as root or as a normal user), I get:
root#iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
This is what our typical startup script looks like. As you can see, we're running our node processes as the nodejs user. We're also using the pre-start script to make sure all of the log file and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log
end script
As of Ubuntu 15.04, Upstart is no longer used; see systemd.
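For reference, a roughly equivalent systemd unit might look like this (a minimal sketch; the unit name, paths, and user are illustrative):

# /etc/systemd/system/grabagadget.service
[Unit]
Description=grabagadget node.js server
After=mysql.service

[Service]
User=nodejs
Environment=HOME=/home/nodejs
ExecStart=/usr/bin/node /var/nodejs/grabagadget/app.js --environment production
Restart=always

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable grabagadget and systemctl start grabagadget.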