Chef service start_command not working

I'm trying to launch a node process as a service using forever, but the configuration is not working correctly. What's wrong with it?
execute "npm install -g forever"
restart_command_string = "forever restart /#{studio_server_folder}/#{studio_server_script}"
reload_command_string = "forever restart /#{studio_server_folder}/#{studio_server_script}"
start_command_string = "forever start /#{studio_server_folder}/#{studio_server_script}"
stop_command_string = "forever stop /#{studio_server_folder}/#{studio_server_script}"
status_command_string = "if [ $(forever list | grep -c \"studio-server\") -gt 0 ]; then echo 1; else echo 0; fi"
# execute "if [ $(forever list | grep -c \"studio-server\") -gt 0 ]; then #{restart_command_string}; else #{start_command}; fi"
service 'studio-server' do
supports :status => true, :restart => true, :reload => true
start_command start_command_string
reload_command reload_command_string
stop_command stop_command_string
status_command status_command_string
restart_command restart_command_string
action [:start]
end
execute 'service --status-all >> /servicestatus'

That status command isn't a single command; it's a fragment of bash script, so it is unlikely to work as written. Chef also decides whether a service is running from the exit status of status_command, not from whatever it echoes, so echoing 1 or 0 tells Chef nothing. In general I would highly recommend using a real service manager like supervisord or systemd instead.
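If you do want to keep the service resource, here is a minimal sketch of the idea (assuming the process shows up as "studio-server" in forever list, as in your snippet): make the status check a single real command whose exit status signals running (0) or stopped (non-zero).

status_command_string = "bash -c 'forever list | grep -q studio-server'"

service 'studio-server' do
  supports :status => true, :restart => true, :reload => true
  start_command start_command_string
  stop_command stop_command_string
  restart_command restart_command_string
  # exit 0 = running, non-zero = stopped; Chef only looks at the exit status
  status_command status_command_string
  action :start
end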

Related

CircleCI run failed on delete k8s resource

I have CircleCI set up and running fine normally; it helps me create deployments. Today I suddenly had an issue with the deployment-creation step, due to a Kubernetes-related error.
My config.yml follows the doc at https://circleci.com/developer/orbs/orb/circleci/kubernetes
Here is my version of the setup in the config file:
version: 2.1
orbs:
  kube-orb: circleci/kubernetes@1.3.0
commands:
  docker-check:
    steps:
      - docker/check:
          docker-username: MY_USERNAME
          docker-password: MY_PASS
          registry: $DOCKER_REGISTRY
jobs:
  create-deployment:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: Name of the EKS cluster
        type: string
    steps:
      - checkout
      # It failed on this step
      - kube-orb/delete-resource:
          now: true
          resource-names: my-frontend-deployment
          resource-types: deployments
          wait: true
Below is a copy of the error log
#!/bin/bash -eo pipefail
#!/bin/bash
RESOURCE_FILE_PATH=$(eval echo "$PARAM_RESOURCE_FILE_PATH")
RESOURCE_TYPES=$(eval echo "$PARAM_RESOURCE_TYPES")
RESOURCE_NAMES=$(eval echo "$PARAM_RESOURCE_NAMES")
LABEL_SELECTOR=$(eval echo "$PARAM_LABEL_SELECTOR")
ALL=$(eval echo "$PARAM_ALL")
CASCADE=$(eval echo "$PARAM_CASCADE")
FORCE=$(eval echo "$PARAM_FORCE")
GRACE_PERIOD=$(eval echo "$PARAM_GRACE_PERIOD")
IGNORE_NOT_FOUND=$(eval echo "$PARAM_IGNORE_NOT_FOUND")
NOW=$(eval echo "$PARAM_NOW")
WAIT=$(eval echo "$PARAM_WAIT")
NAMESPACE=$(eval echo "$PARAM_NAMESPACE")
DRY_RUN=$(eval echo "$PARAM_DRY_RUN")
KUSTOMIZE=$(eval echo "$PARAM_KUSTOMIZE")
if [ -n "${RESOURCE_FILE_PATH}" ]; then
  if [ "${KUSTOMIZE}" == "1" ]; then
    set -- "$@" -k
  else
    set -- "$@" -f
  fi
  set -- "$@" "${RESOURCE_FILE_PATH}"
elif [ -n "${RESOURCE_TYPES}" ]; then
  set -- "$@" "${RESOURCE_TYPES}"
  if [ -n "${RESOURCE_NAMES}" ]; then
    set -- "$@" "${RESOURCE_NAMES}"
  elif [ -n "${LABEL_SELECTOR}" ]; then
    set -- "$@" -l
    set -- "$@" "${LABEL_SELECTOR}"
  fi
fi
if [ "${ALL}" == "true" ]; then
  set -- "$@" --all=true
fi
if [ "${FORCE}" == "true" ]; then
  set -- "$@" --force=true
fi
if [ "${GRACE_PERIOD}" != "-1" ]; then
  set -- "$@" --grace-period="${GRACE_PERIOD}"
fi
if [ "${IGNORE_NOT_FOUND}" == "true" ]; then
  set -- "$@" --ignore-not-found=true
fi
if [ "${NOW}" == "true" ]; then
  set -- "$@" --now=true
fi
if [ -n "${NAMESPACE}" ]; then
  set -- "$@" --namespace="${NAMESPACE}"
fi
if [ -n "${DRY_RUN}" ]; then
  set -- "$@" --dry-run="${DRY_RUN}"
fi
set -- "$@" --wait="${WAIT}"
set -- "$@" --cascade="${CASCADE}"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
  set -x
fi
kubectl delete "$@"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
  set +x
fi
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Exited with code exit status 1
CircleCI received exit code 1
Does anyone have an idea what is wrong? I'm not sure whether the issue is on the CircleCI side or the Kubernetes side.
I had been facing the exact same issue since yesterday morning (16 hours ago). Then, taking @Gavy's advice, I simply added this to my config.yml:
steps:
  - checkout
  # !!! HERE !!!
  - kubernetes/install-kubectl:
      kubectl-version: v1.23.5
  - run:
And now it works. Hope it helps. (As far as I can tell, the root cause is that kubectl v1.24+ dropped the client.authentication.k8s.io/v1alpha1 exec-credential API that older aws-cli/aws-iam-authenticator versions still emit, so pinning kubectl to v1.23.5 restores compatibility; upgrading the AWS CLI so it emits v1beta1 is the other way out.)

using supervisord to run lsyncd script

I'm trying to run my lsyncd script with supervisord so that it is always running.
I've written this conf for my supervisor:
[program:autostart_lsyncd]
command=bash -c "lsyncd /home/sync/lsyncd_script.lua"
autostart=true
autorestart=unexpected
numprocs=1
startsecs = 0
stderr_logfile=/var/log/autostart_sync.err.log
stdout_logfile=/var/log/autostart_sync.out.log
The script runs OK at startup, but it always exits:
2018-04-09 09:48:49,638 INFO success: autostart_lsyncd entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2018-04-09 09:48:49,639 INFO exited: autostart_lsyncd (exit status 0; expected)
I can't understand whether this is the correct way to keep an lsyncd script alive.
Suggestions?
The key is lsyncd's -nodaemon flag: by default lsyncd forks itself into the background, so the process supervisord launched exits immediately with status 0, which is exactly what your log shows. I'm using this configuration for supervisord in the file /etc/supervisor/conf.d/lsyncd.conf:
[program:lsyncd]
command=/usr/bin/lsyncd -nodaemon /etc/lsyncd/lsyncd.conf.lua
autostart=true
autorestart=unexpected
startretries=3
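After dropping that file in place, reload supervisor and check the program's state (standard supervisorctl commands):

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status lsyncd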
And this configuration for lsyncd (/etc/lsyncd/lsyncd.conf.lua):
settings {
  logfile    = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status"
}
sync {
  default.rsync,
  source = "/var/www/html/sites/default/files",
  target = "root@cdn:/var/www/html/sites/default/files",
  exclude = {"*.php", "*.po", "\.ht*"},
  rsync = {
    archive  = false,
    acls     = false,
    compress = true,
    links    = false,
    owner    = false,
    perms    = false,
    verbose  = true,
    rsh      = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
  }
}
I also configured SSH keys and installed rsync on the servers.
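Before putting it under supervisor, it can be worth running the exact same command once in the foreground to confirm the sync itself works:

/usr/bin/lsyncd -nodaemon /etc/lsyncd/lsyncd.conf.lua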

Ubuntu 16.04 server custom service

Is there a way to create a custom service in Ubuntu 16.04?
I want something that autostarts at boot and is manageable with service mycustomservice start.
And the service should start a PHP websocket (php ratchet):
php -f socket.php
Try it with supervisor:
[program:ratchet]
command = bash -c "ulimit -n 10000; exec /usr/bin/php ./bin/tutorial-terminal-chat.php"
process_name = Ratchet
numprocs = 1
autostart = true
autorestart = true
user = root
stdout_logfile = ./logs/info.log
stdout_logfile_maxbytes = 1MB
stderr_logfile = ./logs/error.log
stderr_logfile_maxbytes = 1MB
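Since Ubuntu 16.04 ships systemd, you can also skip supervisor and write a plain unit file, which gives you service mycustomservice start for free. A minimal sketch (the script path and user are assumptions; adjust them to your layout), saved as /etc/systemd/system/mycustomservice.service:

[Unit]
Description=PHP Ratchet websocket server
After=network.target

[Service]
# The path to socket.php is an assumption; point ExecStart at your script
ExecStart=/usr/bin/php -f /var/www/app/socket.php
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target

Then run systemctl daemon-reload once, enable it at boot with systemctl enable mycustomservice, and start it with service mycustomservice start.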

Getting error in hack/update-all.sh script

I have been trying to run ./hack/update-all.sh script and I am getting this error while updating codegen:
$ ./hack/update-all.sh
Running in the silent mode, run with -v if you want to see script logs.
Running in short-circuit mode; run with -a to force all scripts to run.
Updating generated-protobuf
Updating codegen
# runtime
/usr/local/go/src/runtime/os2_linux_generic.go:12: _SS_DISABLE redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:8
/usr/local/go/src/runtime/os2_linux_generic.go:13: _NSIG redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:9
/usr/local/go/src/runtime/os2_linux_generic.go:14: _SI_USER redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:10
/usr/local/go/src/runtime/os2_linux_generic.go:15: _SIG_BLOCK redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:11
/usr/local/go/src/runtime/os2_linux_generic.go:16: _SIG_UNBLOCK redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:12
/usr/local/go/src/runtime/os2_linux_generic.go:17: _SIG_SETMASK redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:13
/usr/local/go/src/runtime/os2_linux_generic.go:18: _RLIMIT_AS redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:14
/usr/local/go/src/runtime/os2_linux_generic.go:24: sigset redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:20
/usr/local/go/src/runtime/os2_linux_generic.go:26: rlimit redeclared in this block
previous declaration at /usr/local/go/src/runtime/os2_linux.go:22
/usr/local/go/src/runtime/panic1.go:11: paniclk redeclared in this block
previous declaration at /usr/local/go/src/runtime/panic.go:552
/usr/local/go/src/runtime/panic1.go:11: too many errors
!!! Error in /home/peeyush/work/kubernetes/hack/lib/golang.sh:435
'go install "${goflags[@]:+${goflags[@]}}" -ldflags "${goldflags}" "${nonstatics[@]:+${nonstatics[@]}}"' exited with status 2
Call stack:
1: /home/peeyush/work/kubernetes/hack/lib/golang.sh:435 kube::golang::build_binaries_for_platform(...)
2: /home/peeyush/work/kubernetes/hack/lib/golang.sh:574 kube::golang::build_binaries(...)
3: /home/peeyush/work/kubernetes/hack/build-go.sh:26 main(...)
Exiting with status 1
!!! Error in /home/peeyush/work/kubernetes/hack/lib/golang.sh:494
'( kube::golang::setup_env; echo "Go version: $(go version)"; local host_platform; host_platform=$(kube::golang::host_platform); local goflags goldflags; eval "goflags=(${KUBE_GOFLAGS:-})"; goldflags="${KUBE_GOLDFLAGS:-} $(kube::version::ldflags)"; local use_go_build; local -a targets=(); local arg; for arg in "$@";
do
if [[ "${arg}" == "--use_go_build" ]]; then
use_go_build=true;
else
if [[ "${arg}" == -* ]]; then
goflags+=("${arg}");
else
targets+=("${arg}");
fi;
fi;
done; if [[ ${#targets[@]} -eq 0 ]]; then
targets=("${KUBE_ALL_TARGETS[@]}");
fi; local -a platforms=("${KUBE_BUILD_PLATFORMS[@]:+${KUBE_BUILD_PLATFORMS[@]}}"); if [[ ${#platforms[@]} -eq 0 ]]; then
platforms=("${host_platform}");
fi; local binaries; binaries=($(kube::golang::binaries_from_targets "${targets[@]}")); local parallel=false; if [[ ${#platforms[@]} -gt 1 ]]; then
local gigs; gigs=$(kube::golang::get_physmem); if [[ ${gigs} -ge ${KUBE_PARALLEL_BUILD_MEMORY} ]]; then
kube::log::status "Multiple platforms requested and available ${gigs}G >= threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in parallel"; parallel=true;
else
kube::log::status "Multiple platforms requested, but available ${gigs}G < threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in serial"; parallel=false;
fi;
fi; if [[ "${parallel}" == "true" ]]; then
kube::log::status "Building go targets for ${platforms[@]} in parallel (output will appear in a burst when complete):" "${targets[@]}"; local platform; for platform in "${platforms[@]}";
do
( kube::golang::set_platform_envs "${platform}"; kube::log::status "${platform}: go build started"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-}; kube::log::status "${platform}: go build finished" ) &> "/tmp//${platform//\//_}.build" &
done; local fails=0; for job in $(jobs -p);
do
wait ${job} || let "fails+=1";
done; for platform in "${platforms[@]}";
do
cat "/tmp//${platform//\//_}.build";
done; exit ${fails};
else
for platform in "${platforms[@]}";
do
kube::log::status "Building go targets for ${platform}:" "${targets[@]}"; kube::golang::set_platform_envs "${platform}"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-};
done;
fi )' exited with status 1
Call stack:
1: /home/peeyush/work/kubernetes/hack/lib/golang.sh:494 kube::golang::build_binaries(...)
2: /home/peeyush/work/kubernetes/hack/build-go.sh:26 main(...)
Exiting with status 1
!!! Error in ./hack/../hack/update-codegen.sh:32
'"${KUBE_ROOT}/hack/build-go.sh" ${BUILD_TARGETS[*]}' exited with status 1
Call stack:
1: ./hack/../hack/update-codegen.sh:32 main(...)
Exiting with status 1
Updating codegen FAILED
Any idea what could be the reason behind this? Or how to resolve this issue?
Looks like an environment issue. I cleared everything, cloned the repo afresh, and everything is working fine now. (Symbols such as _SS_DISABLE being redeclared inside /usr/local/go/src/runtime point at a broken Go toolchain itself, typically a new Go release untarred over an old GOROOT so that stale files from both versions coexist, rather than at anything in the Kubernetes tree.)
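For anyone hitting the same wall, a quick check before re-cloning (a sketch; the GOROOT path and Go version are examples, adjust for your machine):

# Both of these existing in one GOROOT means files from two Go releases got
# mixed together; a clean tree has only one of them
ls /usr/local/go/src/runtime/os2_linux.go /usr/local/go/src/runtime/os2_linux_generic.go

# Reinstall Go cleanly rather than untarring over the old GOROOT
sudo rm -rf /usr/local/go
curl -fsSL https://golang.org/dl/go1.6.2.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
go version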

Capistrano compile assets error - assets:precompile:nondigest?

My App seems to be deploying correctly but I'm getting this error:
* executing "cd /home/deploy/tomahawk/releases/20120208222225 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile"
servers: ["ip_address"]
[ip_address] executing command
*** [err :: ip_address] /opt/ruby/bin/ruby /opt/ruby/bin/rake assets:precompile:nondigest RAILS_ENV=production RAILS_GROUPS=assets
I've tried the solutions for compiling assets given here: http://lassebunk.dk/2011/09/03/getting-your-assets-to-work-when-upgrading-to-rails-3-1/
And Here: http://railsmonkey.net/2011/08/deploying-rails-3-1-applications-with-capistrano/
And here: http://dev.af83.com/2011/09/30/capistrano-rails-3-1-assets-can-be-tricky.html
Here is my deploy.rb :
require "bundler/capistrano"
load 'deploy/assets'
set :default_environment, {
'PATH' => "/opt/ruby/bin/:$PATH"
}
set :application, "tomahawk"
set :repository, "repo_goes_here"
set :deploy_to, "/home/deploy/#{application}"
set :rails_env, 'production'
set :branch, "master"
set :scm, :git
set :user, "deploy"
set :runner, "deploy"
set :use_sudo, true
role :web, "my_ip"
role :app, "my_ip"
role :db, "my_ip", :primary => true
set :normalize_asset_timestamps, false
after "deploy", "deploy:cleanup"
namespace :deploy do
desc "Restarting mod_rails with restart.txt"
task :restart, :roles => :app, :except => { :no_release => true } do
run "touch #{current_path}/tmp/restart.txt"
end
[:start, :stop].each do |t|
desc "#{t} task is a no-op with mod_rails"
task t, :roles => :domain do ; end
end
end
task :after_update_code do
run "ln -nfs #{deploy_to}/shared/config/database.yml #{release_path}/config/database.yml"
end
First, don't forget to add the gems below:
group :production do
  gem 'therubyracer'
  gem 'execjs'
end
Then in your cap file, add this line to your after_update_code task:
run "cd #{release_path}; rake assets:precompile RAILS_ENV=production "
this worked fine for me ;)
cheers,
Gregory HORION
I have the same problem. I added this to my deploy.rb (to pass the '--trace' option):
namespace :deploy do
  namespace :assets do
    task :precompile, :roles => :web, :except => { :no_release => true } do
      run "cd #{current_path} && #{rake} RAILS_ENV=#{rails_env} RAILS_GROUPS=assets assets:precompile --trace"
    end
  end
end
And the error seems to be just a notice:
*** [err :: my-server] ** Invoke assets:precompile (first_time)
...
I later noticed that Capistrano wasn't able to delete old releases; I got this error:
*** [err :: ip_address] sudo: no tty present and no askpass program specified
I found this link regarding this error:
http://www.mail-archive.com/capistrano@googlegroups.com/msg07323.html
I had to add this line to my deploy file:
default_run_options[:pty] = true
This also solved the weird error I was getting above.
The official explanation, which I don't understand :)
No default PTY. Prior to 2.1, Capistrano would request a pseudo-tty for each command that it executed. This had the side-effect of causing the profile scripts for the user to not be loaded. Well, no more! As of 2.1, Capistrano no longer requests a pty on each command, which means your .profile (or .bashrc, or whatever) will be properly loaded on each command! Note, however, that some have reported on some systems, when a pty is not allocated, some commands will go into non-interactive mode automatically. If you’re not seeing commands prompt like they used to, like svn or passwd, you can return to the previous behavior by adding the following line to your capfile: default_run_options[:pty] = true
Here's what worked for me:
1) Add rvm-capistrano to your Gemfile
2) In config/deploy.rb, add the lines:
require 'rvm/capistrano'
set :rvm_ruby_string, '1.9.2' # Set to your version number
3) You may also need to set :rvm_type and :rvm_bin_path. See this Ninjahideout blog that goes into more detail.
4) apt-get/yum install nodejs on your server
(See my reply to this related Stackoverflow question.)
The message you see is the output of rake assets:precompile.
To avoid rake's default output, the solution is to add -q after your command.
The analysis is below, if you want to see it:
# :gem_path/actionpack/lib/sprockets/assets.rake
namespace :assets do
  # task entry point; it calls invoke_or_reboot_rake_task
  task :precompile do
    invoke_or_reboot_rake_task "assets:precompile:all"
  end

  # it calls ruby_rake_task
  def invoke_or_reboot_rake_task(task)
    ruby_rake_task task
  end

  # it calls ruby
  def ruby_rake_task(task, fork = true)
    env    = ENV['RAILS_ENV'] || 'production'
    groups = ENV['RAILS_GROUPS'] || 'assets'
    args   = [$0, task, "RAILS_ENV=#{env}", "RAILS_GROUPS=#{groups}"]
    ruby(*args)
  end
end

# :gem_path/rake/file_utils.rb
module FileUtils
  # it calls sh
  def ruby(*args, &block)
    options = (Hash === args.last) ? args.pop : {}
    sh(*([RUBY] + args + [options]), &block)
  end

  # it calls set_verbose_option;
  # the command is echoed whenever options[:verbose] is truthy,
  # and the default of options[:verbose] is an object (which is truthy)
  def sh(*cmd, &block)
    # ...
    set_verbose_option(options)
    # ...
    Rake.rake_output_message cmd.join(" ") if options[:verbose]
    # ...
  end

  # default of options[:verbose] is Rake::FileUtilsExt::DEFAULT, which is an object
  def set_verbose_option(options) # :nodoc:
    unless options.key? :verbose
      options[:verbose] =
        Rake::FileUtilsExt.verbose_flag == Rake::FileUtilsExt::DEFAULT ||
        Rake::FileUtilsExt.verbose_flag
    end
  end
end

# :gem_path/rake/file_utils_ext.rb
module Rake
  module FileUtilsExt
    DEFAULT = Object.new
  end
end

# :gem_path/rake/application.rb
# the only way to silence the output of `rake assets:precompile`
# is to add the `-q` option
['--quiet', '-q',
 "Do not log messages to standard output.",
 lambda { |value| Rake.verbose(false) }
],
['--verbose', '-v',
 "Log message to standard output.",
 lambda { |value| Rake.verbose(true) }
],
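So in practice, the quiet invocation looks like:

rake assets:precompile RAILS_ENV=production RAILS_GROUPS=assets -q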