I'm experiencing unexpected behavior using acbuild run. To get used to rkt, the idea was to start with a CentOS 7-based container running an SSH host. The bare CentOS 7 container, referenced below as centos7.aci, was created on an up-to-date CentOS 7 install using the instructions given here.
The script used to build the SSHd ACI is
#! /bin/bash
acbuild begin ./centos7.aci
acbuild run -- yum install -y openssh-server
acbuild run -- mkdir /var/run/sshd
acbuild run -- sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
acbuild run -- sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
acbuild run -- ssh-keygen -A -C "" -N "" -q
acbuild run -- echo 'root:screencast' | chpasswd
acbuild set-name centos7-sshd
acbuild set-exec -- /usr/sbin/sshd -D
acbuild port add ssh tcp 22
acbuild write --overwrite centos7-sshd.aci
acbuild end
When it's spun up using rkt run --insecure-options=image ./centos7-sshd.aci
the server runs but connection attempts fail because the password is not accepted. If I use rkt enter to get into the running container and re-run echo 'root:screencast' | chpasswd inside, I can log in. So that acbuild run instruction simply hasn't worked for some reason... To test a bit more, I replaced it with
acbuild run -- mkdir ~/.ssh
acbuild run -- echo "<rkt host SSH public key>" >> ~/.ssh/authorized_keys
to enable key-based instead of password login. It doesn't work: the key is refused. The reason is obvious once you look into the container: there's no authorized_keys file in ~/.ssh/. If I add a
acbuild run -- touch ~/.ssh/authorized_keys instruction before the key appending attempt, the file is created but it's still empty. So again an acbuild run instruction didn't work, without any error notice. Might it be related to the fact that both "ignored" instructions use operators like >> and | ? None of the commands shown in the examples I've seen use such operators, the docs don't mention anything, and a Google search doesn't help either. In Dockerfile RUN instructions such operators work fine... what is going wrong here?
P.S.: I tried using the chroot engine instead of the default systemd-nspawn engine for the "ignored" acbuild run instructions => same results
P.P.S.: there's no acbuild tag yet on StackOverflow so I had to tag this as rkt - could somebody with enough reputation create one please? Thx
OK, I understood what happens using the acbuild run --debug option.
When
acbuild run -- echo 'root:screencast' | chpasswd
gets executed, it returns Running: [echo root:screencast]; the pipe is executed on the host machine, not inside the container. To get the intended result it should be
acbuild run -- /bin/sh -c "echo 'root:screencast' | chpasswd"
or in generic form
acbuild run -- /bin/sh -c "<cmd with pipes>"
as explained here
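The same wrapping should also fix the key-based login attempt from the question, since ~ and >> would otherwise be handled by the host shell as well. A minimal sketch, assuming root is the login user (the chmod line is my addition, as sshd is usually strict about these permissions):
acbuild run -- /bin/sh -c "mkdir -p /root/.ssh && echo '<rkt host SSH public key>' >> /root/.ssh/authorized_keys"
acbuild run -- /bin/sh -c "chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys"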
Related
I need to start a Java REST server, which lives on an Ubuntu 18.04 machine, from Concourse. The version of Concourse my company uses is 5.5.11. The server code is written in Java, so a simple java -jar <uber.jar> suffices from the command line (see below). In production I will not have this simple luxury, hence my question.
I have an scp command working that copies the .jar from Concourse to the target Ubuntu machine:
scp -i /tmp/key.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ./${NEW_DIR}/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST}:/var/www
Note that my private key is passed with -i, and I can confirm that it is working.
I followed this other SO Q&A that seemed promising: Getting ssh to execute a command in the background on target machine, but after trying a few permutations of the suggested solution and other answers, I still don't have my REST service kicked off.
I've tried a few permutations of this line in my Concourse script:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'nohup java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/opt/testcerts/clientkeystore\" -w \"password\" > /dev/null 2>&1 &'"
I've tried with and without the -f and -t switches to ssh, with and without the file stream redirection, with and without nohup and the shell background operator ('&'), and various ways of escaping the quotes.
At the bash prompt, this line successfully starts my server. The two switches are needed to point to the certificate and provide the password:
java -jar rest-service.jar -c "/opt/certificates/clientkeystore" -w "password"
I really think this is possible to do in Concourse, but I'm stuck at this point.
After a lot of trial and error, it seems I needed to do this:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'sudo java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/path/to/my/certificate\" -w \"password\" > /var/www/log.txt 2>&1 &'"
The key was that I was missing the 'sudo' portion of the command. Using nohup, as opposed to the shell background operator ('&'), seems to give me an error in the pipeline. This works for me, but others are welcome to post responses with better answers or methods that might be better practice.
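To verify from the pipeline that the server actually came up, a minimal sketch (the sleep and the pgrep pattern are my assumptions, not part of the original pipeline):
# Give the JVM a moment to start, then check the process on the target host.
sleep 5
ssh -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "pgrep -f ${ARTIFACT_NAME} && echo 'rest service is up'"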
I'm trying to run 3 services at container startup (snmpd, sshd and centengine). Since the runlevel is unknown inside the container, the services won't start.
I built an image with this Dockerfile:
FROM centos:6.7
MAINTAINER nael <me@mail>
# Update CentOS
RUN yum -y update
# Install wget
RUN yum install -y wget
# Get Centreon Repo
RUN wget http://yum.centreon.com/standard/3.0/stable/ces-standard.repo -O /etc/yum.repos.d/ces-standard.repo
# Install Packages (SSH, sudo, Centreon Poller & Engine, SNMP)
RUN yum install -y --nogpgcheck openssh-clients openssh-server centreon-poller-centreon-engine sudo net-snmp net-snmp-utils
# Install supervisord
RUN rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum --enablerepo=epel install -y supervisor
RUN mv -f /etc/supervisord.conf /etc/supervisord.conf.org
ADD supervisord.conf /etc/
# For sshd & centengine
EXPOSE 22 5669
# Change user password
RUN echo -e "password" | (passwd --stdin user)
# Disable PAM (causing issues while ssh login)
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
RUN sed -ri 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config
# Start supervisord
CMD ["/usr/bin/supervisord"]
Here is the supervisord.conf file
[supervisord]
nodaemon=true
pidfile=/var/run/supervisord.pid
logfile=/var/log/supervisor/supervisord.log
[program:centengine]
command=service centengine start
[program:snmpd]
command=service snmpd start
[program:sshd]
command=service sshd start
But with this Dockerfile and supervisord.conf, when I start my container these services aren't running.
What could be the problem?
I'm not using supervisord anymore.
I just include a script with all the service ... start commands in the Dockerfile. When I create my container with docker run ... I just specify that I want to start it with my script.
And that's working very well.
Thanks @warmoverflow for trying to solve this.
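For reference, a minimal sketch of what such a start script could look like (the file name and the tail trick are my assumptions; the answer didn't spell them out):
#!/bin/bash
# start.sh - start every service, then keep PID 1 alive so the container doesn't exit
service snmpd start
service sshd start
service centengine start
# Block forever; without this the container stops as soon as the script returns.
tail -f /dev/null
The script would be copied into the image (e.g. ADD start.sh /start.sh) and passed as the command to docker run.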
You may find my dockerfy utility useful for starting services and pre-running initialization commands before the primary command starts. See https://github.com/markriggins/dockerfy
For example:
RUN wget https://github.com/markriggins/dockerfy/releases/download/0.2.4/dockerfy-linux-amd64-0.2.4.tar.gz; \
tar -C /usr/local/bin -xvzf dockerfy-linux-amd64-*tar.gz; \
rm dockerfy-linux-amd64-*tar.gz;
ENTRYPOINT ["dockerfy"]
CMD ["--start", "bash", "-c", "while true; do echo 'Ima Service'; sleep 1; done", "--", "--reap", "--", "nginx"]
This would run a bash script as a service, echoing "Ima Service" every second, while the primary command nginx runs. If nginx exits, then the "Ima Service" script will automatically be stopped.
As an added benefit, any zombie processes left over by nginx will be automatically cleaned up.
You can also tail log files such as /var/log/nginx/error.log to stderr, edit nginx's configuration prior to startup, and much more.
How would I accomplish the following in Capistrano?
sudo su - postgres
/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data/
The following task doesn't work:
task :postgres_check do
on roles(:db), in: :sequence do |host|
execute "sudo su - postgres << EOF
/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data/
EOF"
end
end
The commands in the execute statement work in a bash script.
EDIT 1:
I also tried the following:
task :postgres_check do
on roles(:postgres_pref_db), in: :sequence do |host|
execute "/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data", :shell => "sudo su - postgres"
end
end
Which errors with:
DEBUG [68eb95f2] Command: /usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data
DEBUG [68eb95f2] pg_ctl: could not open PID file "/var/lib/pgsql/9.2/data/postmaster.pid": Permission denied
cap aborted!
SSHKit::Command::Failed: /usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data stdout: Nothing written
It appears that it's still executing the command as the SSH user.
I came across this and explored the answer for myself. I wouldn't have accepted the answer either, so I'll provide what I did.
task :copy_files do
on roles(:web) do |host|
as 'other_user' do
execute "whoami"
end
end
end
Capistrano 3 uses SSHKit, and I found these examples really helpful for getting bash commands to work inside my tasks.
https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md
You'll want to check out SSHKit and see about on(), within(), with(), as() ... they can be used nested in any order. So you end up having a lot of control, even if it takes a few minutes to learn.
I think for your specific example you will want to use as() and within() to become the postgres user and run commands within a certain directory.
Also, I had to disable requiretty in /etc/sudoers for my deploy user.
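Under the hood, as('postgres') wraps the command in a sudo call, so the remote command SSHKit ends up running looks roughly like this (a sketch of the generated shell, not the exact string):
sudo -u postgres -- sh -c '/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data'
Which is also why requiretty needs to be off: that sudo runs over a non-interactive SSH session.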
I'm trying to create a service / script to automatically start and control my nodejs server, but it doesn't seem to work at all.
First of all, I used this source as my main reference: http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
When trying to start the script (as root or as a normal user), I get:
root#iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
This is what our typical startup script looks like. As you can see we're running our node processes as user nodejs. We're also using the pre-start script to make sure all of the log file directories and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log
end script
As of Ubuntu 15, upstart is no longer used; see systemd.
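For those newer releases, a rough systemd equivalent of the job above might look like this (a sketch; the paths are taken from the upstart script, the unit itself is my assumption):
[Unit]
Description=grabagadget node.js server
After=mysql.service

[Service]
User=nodejs
ExecStart=/usr/bin/node /var/nodejs/grabagadget/app.js --environment production
Restart=always

[Install]
WantedBy=multi-user.target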
I am trying to run the example from "http://gearman.org/getting_started" on Ubuntu in a VirtualBox environment.
At first I installed an old version, 0.16, using apt-get install gearman-job-server and apt-get install gearman-tools, and everything worked well. The server ran in the background, and I was able to create 2 workers and verify that I could call them by creating a client.
I decided to download and compile the latest version, 1.1.6. Now, I am trying to do the same thing with the new version and I am having errors.
I run the server as admin:
sudo gearmand
The statement
gearadmin --getpid
seems to work - it returns the process ID of the server. Thus, the server is running, and this answer is not relevant.
Now, I am adding a worker:
gearman -w -f wc -- wc -l
It seems to run.
Nevertheless,
gearadmin --workers
results in something that probably represents an empty list:
33 127.0.0.1 - :
.
(In version 0.16, I was able to see 2 lines, the second showing the registered function name.)
Attempting to run the client
gearman -f wc < /etc/passwd
results in
gearman: gearman_client_run_tasks : flush(GEARMAN_COULD_NOT_CONNECT) localhost:0 -> libgearman/connection.cc:671
This might be the very same problem described here - the port not being specified - but I have no idea how to set it through the command line tool.
Any idea?
OK, it looks like the answer here was the key to success. Probably the "getting started" section has not been updated for a while. Indeed, one must specify a port explicitly for both gearmand and gearman.
Server:
sudo gearmand -p 5000
Worker:
gearman -p 5000 -w -f wc -- wc -l
Client:
gearman -p 5000 -f wc < /etc/passwd
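With the port pinned, the earlier check should now list the registered function as well; gearadmin accepts the port too (a sketch, assuming the same port 5000 as above):
gearadmin --port 5000 --workers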