cap production deploy fails with ActiveRecord::NoDatabaseError: FATAL: database "rails/rbwapp" does not exist - deployment

I have an empty Rails application working on my development machine.
I have been following the tutorial at https://gorails.com/deploy/ubuntu/20.04 in order to deploy it to a new DigitalOcean Ubuntu 20.04 droplet.
I work my way through the entire tutorial and get to the step where I run the deploy:
ThinkPad-T570:~/rails/rbwapp$ cap production deploy
00:00 git:wrapper
01 mkdir -p /tmp
✔ 01 rbw#64.225.88.53 0.110s
Uploading /tmp/git-ssh-rbwapp-production-rbw.sh 100.0%
02 chmod 700 /tmp/git-ssh-rbwapp-production-rbw.sh
✔ 02 rbw#64.225.88.53 0.063s
00:00 git:check
01 git ls-remote git@github.com:rwaddington/rbwapp.git HEAD
01 0d717f41d83befe63856ddf6cfdf4d808519a1dc HEAD
✔ 01 rbw#64.225.88.53 0.795s
00:01 deploy:check:directories
01 mkdir -p /home/rbw/rails/rbwapp/shared /home/rbw/rails/rbwapp/releases
✔ 01 rbw#64.225.88.53 0.059s
00:01 deploy:check:linked_dirs
01 mkdir -p /home/rbw/rails/rbwapp/shared/log /home/rbw/rails/rbwapp/shared/tmp/pids /home/rbw/rails/rbwapp/shared/tmp/cache /…
✔ 01 rbw#64.225.88.53 0.100s
00:01 git:clone
The repository mirror is at /home/rbw/rails/rbwapp/repo
00:01 git:update
01 git remote set-url origin git@github.com:rwaddington/rbwapp.git
✔ 01 rbw#64.225.88.53 0.107s
02 git remote update --prune
02 Fetching origin
✔ 02 rbw#64.225.88.53 0.820s
00:02 git:create_release
01 mkdir -p /home/rbw/rails/rbwapp/releases/20200828170348
✔ 01 rbw#64.225.88.53 0.106s
02 git archive master | /usr/bin/env tar -x -f - -C /home/rbw/rails/rbwapp/releases/20200828170348
✔ 02 rbw#64.225.88.53 0.074s
00:03 deploy:set_current_revision
01 echo "0d717f41d83befe63856ddf6cfdf4d808519a1dc" > REVISION
✔ 01 rbw#64.225.88.53 0.061s
00:03 deploy:symlink:linked_dirs
01 mkdir -p /home/rbw/rails/rbwapp/releases/20200828170348 /home/rbw/rails/rbwapp/releases/20200828170348/tmp /home/rbw/rails/…
✔ 01 rbw#64.225.88.53 0.102s
02 rm -rf /home/rbw/rails/rbwapp/releases/20200828170348/log
✔ 02 rbw#64.225.88.53 0.060s
03 ln -s /home/rbw/rails/rbwapp/shared/log /home/rbw/rails/rbwapp/releases/20200828170348/log
✔ 03 rbw#64.225.88.53 0.104s
04 ln -s /home/rbw/rails/rbwapp/shared/tmp/pids /home/rbw/rails/rbwapp/releases/20200828170348/tmp/pids
✔ 04 rbw#64.225.88.53 0.055s
05 ln -s /home/rbw/rails/rbwapp/shared/tmp/cache /home/rbw/rails/rbwapp/releases/20200828170348/tmp/cache
✔ 05 rbw#64.225.88.53 0.107s
06 ln -s /home/rbw/rails/rbwapp/shared/tmp/sockets /home/rbw/rails/rbwapp/releases/20200828170348/tmp/sockets
✔ 06 rbw#64.225.88.53 0.065s
07 ln -s /home/rbw/rails/rbwapp/shared/vendor/bundle /home/rbw/rails/rbwapp/releases/20200828170348/vendor/bundle
✔ 07 rbw#64.225.88.53 0.105s
08 ln -s /home/rbw/rails/rbwapp/shared/.bundle /home/rbw/rails/rbwapp/releases/20200828170348/.bundle
✔ 08 rbw#64.225.88.53 0.061s
09 ln -s /home/rbw/rails/rbwapp/shared/public/system /home/rbw/rails/rbwapp/releases/20200828170348/public/system
✔ 09 rbw#64.225.88.53 0.106s
10 ln -s /home/rbw/rails/rbwapp/shared/public/uploads /home/rbw/rails/rbwapp/releases/20200828170348/public/uploads
✔ 10 rbw#64.225.88.53 0.063s
11 ln -s /home/rbw/rails/rbwapp/shared/public/assets /home/rbw/rails/rbwapp/releases/20200828170348/public/assets
✔ 11 rbw#64.225.88.53 0.105s
00:05 bundler:config
01 $HOME/.rbenv/bin/rbenv exec bundle config --local deployment true
✔ 01 rbw#64.225.88.53 0.389s
02 $HOME/.rbenv/bin/rbenv exec bundle config --local path /home/rbw/rails/rbwapp/shared/bundle
✔ 02 rbw#64.225.88.53 0.514s
03 $HOME/.rbenv/bin/rbenv exec bundle config --local without development:test
✔ 03 rbw#64.225.88.53 0.607s
00:07 bundler:install
The Gemfile's dependencies are satisfied, skipping installation
00:08 deploy:assets:precompile
01 $HOME/.rbenv/bin/rbenv exec bundle exec rake assets:precompile
01 yarn install v1.22.5
01 [1/4] Resolving packages...
01 [2/4] Fetching packages...
01 info fsevents@2.1.3: The platform "linux" is incompatible with this module.
01 info "fsevents@2.1.3" is an optional dependency and failed compatibility check. Excluding it from installation.
01 info fsevents@1.2.13: The platform "linux" is incompatible with this module.
01 info "fsevents@1.2.13" is an optional dependency and failed compatibility check. Excluding it from installation.
01 [3/4] Linking dependencies...
01 [4/4] Building fresh packages...
01 Done in 13.85s.
01 yarn install v1.22.5
01 [1/4] Resolving packages...
01 [2/4] Fetching packages...
01 info fsevents@2.1.3: The platform "linux" is incompatible with this module.
01 info "fsevents@2.1.3" is an optional dependency and failed compatibility check. Excluding it from installation.
01 info fsevents@1.2.13: The platform "linux" is incompatible with this module.
01 info "fsevents@1.2.13" is an optional dependency and failed compatibility check. Excluding it from installation.
01 [3/4] Linking dependencies...
01 [4/4] Building fresh packages...
01 Done in 7.12s.
01 Compiling...
01 Compiled all packs in /home/rbw/rails/rbwapp/releases/20200828170348/public/packs
01 Hash: 666d45eeef50ba415ec8
01 Version: webpack 4.44.1
01 Time: 9726ms
01 Built at: 08/28/2020 5:04:31 PM
01 Asset Size Chunks Chunk Names
01 js/application-dd6e88065a32b23f8e21.js 69.3 KiB 0 [emitted] [immutable] application
01 js/application-dd6e88065a32b23f8e21.js.br 15.3 KiB [emitted]
01 js/application-dd6e88065a32b23f8e21.js.gz 17.7 KiB [emitted]
01 js/application-dd6e88065a32b23f8e21.js.map 205 KiB 0 [emitted] [dev] application
01 js/application-dd6e88065a32b23f8e21.js.map.br 44 KiB [emitted]
01 js/application-dd6e88065a32b23f8e21.js.map.gz 51 KiB [emitted]
01 manifest.json 364 bytes [emitted]
01 manifest.json.br 129 bytes [emitted]
01 manifest.json.gz 142 bytes [emitted]
01 Entrypoint application = js/application-dd6e88065a32b23f8e21.js js/application-dd6e88065a32b23f8e21.js.map
01 [0] (webpack)/buildin/module.js 552 bytes {0} [built]
01 [1] ./app/javascript/packs/application.js 742 bytes {0} [built]
01 [5] ./app/javascript/channels/index.js 205 bytes {0} [built]
01 [6] ./app/javascript/channels sync _channel\.js$ 160 bytes {0} [built]
01 + 3 hidden modules
01
✔ 01 rbw#64.225.88.53 36.283s
00:44 deploy:assets:backup_manifest
01 mkdir -p /home/rbw/rails/rbwapp/releases/20200828170348/assets_manifest_backup
✔ 01 rbw#64.225.88.53 0.107s
02 cp /home/rbw/rails/rbwapp/releases/20200828170348/public/assets/.sprockets-manifest-36bf47d96af8c04d2269f7a55275b52a.json /…
✔ 02 rbw#64.225.88.53 0.064s
00:44 deploy:migrate
[deploy:migrate] Run `rake db:migrate`
00:44 deploy:migrating
01 $HOME/.rbenv/bin/rbenv exec bundle exec rake db:migrate
01 rake aborted!
01 ActiveRecord::NoDatabaseError: FATAL: database "rails/rbwapp" does not exist
... stackdump omitted ...
The error is correct in the sense that the database "rails/rbwapp" doesn't exist, but it isn't supposed to exist either. The app should be looking for "rbwapp_production", which does exist.
I'm stuck. Any suggestions greatly appreciated...
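In case it helps to compare configs: in a standard Rails setup the production database name comes either from config/database.yml or from a DATABASE_URL environment variable, and DATABASE_URL takes precedence. A minimal production stanza that resolves to rbwapp_production might look roughly like this (a sketch assuming PostgreSQL; the env var names are illustrative, not taken from the question):
production:
  adapter: postgresql
  encoding: unicode
  database: rbwapp_production
  username: <%= ENV["APP_DATABASE_USER"] %>
  password: <%= ENV["APP_DATABASE_PASSWORD"] %>
Conversely, a DATABASE_URL such as postgresql://deploy:secret@localhost/rails/rbwapp would make Rails look for a database literally named "rails/rbwapp", since everything after the host is taken as the database name. So the server's environment (e.g. a shell profile or rbenv-vars file) is one place worth checking, alongside the config/database.yml the release actually uses.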

Related

How to run a process in daemon mode with systemd service?

I've googled and read quite a few blogs, posts, etc. on this. I've also been trying things out manually on my EC2 instance. However, I'm still not able to properly configure the systemd service unit to have it run the process in the background as I expect. The process I'm running is the Nessus agent service. Here's my service unit definition:
$ cat /etc/systemd/system/nessusagent.service
[Unit]
Description=Nessus
[Service]
ExecStart=/opt/myorg/bin/init_nessus
Type=simple
[Install]
WantedBy=multi-user.target
and here is my script /opt/myorg/bin/init_nessus:
$ cat /opt/myorg/bin/init_nessus
#!/usr/bin/env bash
set -e
NESSUS_MANAGER_HOST=...
NESSUS_MANAGER_PORT=...
NESSUS_CLIENT_GROUP=...
NESSUS_LINKING_KEY=...
#-------------------------------------------------------------------------------
# link nessus agent with manager host
#-------------------------------------------------------------------------------
/opt/nessus_agent/sbin/nessuscli agent link --key=${NESSUS_LINKING_KEY} --host=${NESSUS_MANAGER_HOST} --port=${NESSUS_MANAGER_PORT} --groups=${NESSUS_CLIENT_GROUP}
if [ $? -ne 0 ]; then
echo "Cannot link the agent to the Nessus manager, quitting."
exit 1
fi
/opt/nessus_agent/sbin/nessus-service -q -D
When I run the service, I always get the following:
$ systemctl status nessusagent.service
● nessusagent.service - Nessus
Loaded: loaded (/etc/systemd/system/nessusagent.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2020-08-24 06:40:40 UTC; 9min ago
Process: 27787 ExecStart=/opt/myorg/bin/init_nessus (code=exited, status=0/SUCCESS)
Main PID: 27787 (code=exited, status=0/SUCCESS)
...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + /opt/nessus_agent/sbin/nessuscli agent link --key=... --host=... --port=8834 --groups=...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] HostTag::getUnix: setting TAG value to '8596420322084e3ab97d3c39e5c92e00'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] Successfully linked to <myorg.com>:8834
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + '[' 0 -ne 0 ']'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[28506]: + /opt/nessus_agent/sbin/nessus-service -q -D
However, I can't see the process that I expect to see:
$ ps faux | grep nessus
root 28565 0.0 0.0 12940 936 pts/0 S+ 06:54 0:00 \_ grep --color=auto nessus
If I run the last command manually, I can see it:
$ /opt/nessus_agent/sbin/nessus-service -q -D
$ ps faux | grep nessus
root 28959 0.0 0.0 12940 1016 pts/0 S+ 07:00 0:00 \_ grep --color=auto nessus
root 28952 0.0 0.0 6536 116 ? S 07:00 0:00 /opt/nessus_agent/sbin/nessus-service -q -D
root 28953 0.2 0.0 69440 9996 pts/0 Sl 07:00 0:00 \_ nessusd -q
What is it that I'm missing here?
I eventually figured out that this was because of the extra -D option in the last command. Removing the -D option fixed the issue. Running the process in daemon mode inside a service manager is not the way to go: run it in the foreground and let the service manager handle it.
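A sketch of what that change amounts to, using the paths from the question (keeping the -q flag as in the original script; whether any other flags are needed is an assumption):
# last line of /opt/myorg/bin/init_nessus: drop -D so nessus-service stays in the foreground
exec /opt/nessus_agent/sbin/nessus-service -q
With Type=simple, systemd then treats that foreground process as the service's main process and keeps the unit active, instead of marking it inactive (dead) as soon as the script exits. (The exec is optional; it just replaces the shell with the service process.)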

Search a string after match [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
I have a file which has multiple records of netstat output; my sample file looks something like the one below. I want to search for a PID, for example 3453, and as part of the output I also want to see the snapshot time, so that I can tell whether the PID existed in a particular snapshot or not. Any thoughts?
zzz ***Sat Apr 11 03:00:26 UTC 2020
USER PID PPID PRI %CPU %MEM VSZ RSS WCHAN S STARTED TIME COMMAND
test 1234 3445 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
test1 3453 6741 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
zzz ***Sat Apr 11 03:01:26 UTC 2020
USER PID PPID PRI %CPU %MEM VSZ RSS WCHAN S STARTED TIME COMMAND
test 3453 3453 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
test1 7842 8712 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
Expected sample output for search 3453:
zzz ***Sat Apr 11 03:00:26 UTC 2020
test1 3453 6741 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
zzz ***Sat Apr 11 03:01:26 UTC 2020
test 3453 3453 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
With GNU sed:
sed -n '/\*\*\*/,/^$/{ /\*\*\*/p; /3453/p }' file
Output:
zzz ***Sat Apr 11 03:00:26 UTC 2020
test1 3453 6741 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
zzz ***Sat Apr 11 03:01:26 UTC 2020
test 3453 3453 19 2.4 1.9 4070932 3539756 futex_wait_queue_ S Apr 04 04:00:17 test -quiet
See: man sed and The Stack Overflow Regular Expressions FAQ
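If awk is available, an equivalent one-liner (an alternative sketch, not part of the answer above) remembers the last snapshot header and prints it once before any matching lines in that snapshot:
awk -v pid=3453 '/\*\*\*/{ts=$0; shown=0} $2==pid{if(!shown){print ts; shown=1}; print}' file
Here $2 is the PID column of the ps-style records and ts holds the most recent "zzz ***" timestamp line.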

Capistrano's /current folder always points to the same (non-existent) release

My Capistrano release appears to be "stuck": no matter how many times I run cap production deploy, the /current folder is generated over and over again pointing to the same folder.
# Trying to open /current with FTP Sync
Command: CWD /public_html/storekey-demo/releases/20180813141339
Response: 550 /public_html/storekey-demo/releases/20180813141339: No such file or directory
Error: Failed to retrieve directory listing
Even after deleting the /current folder, the /releases folder, the /repo folder, the /temp folder, and the revisions.log file, and running Capistrano again, the symlink is created pointing to the same folder. I've tried everything I can think of.
This is my deploy.rb file
lock "~> 3.11.0"
set :application, "storekey_demo" # don't use "-"
set :repo_url, "git@gitlab.com: xxxxxxxxxxx .git"
set :deploy_to, "/home/u0000000/public_html/storekey-demo"
set :tmp_dir, '/home/u0000000/public_html/tmp'
namespace :deploy do
desc "Build"
after :updated, :build do
on roles(:web) do
within release_path do
execute :composer, "install --no-dev --quiet --optimize-autoloader"
end
end
end
end
namespace :deploy do
desc "Copy Env"
after :finished, :copy do
on roles(:all) do
upload! "production.env", "#{current_path}/.env"
end
end
end
This is my deploy log; as you can see there are no errors:
> cap production deploy
00:00 git:wrapper
01 mkdir -p /home/u000000000/public_html/tmp
✔ 01 u000000000#185.201.11.23 7.806s
Uploading /home/u000000000/public_html/tmp/git-ssh-storekey_demo-production-francisco.sh 100.0%
02 chmod 700 /home/u000000000/public_html/tmp/git-ssh-storekey_demo-production-francisco.sh
✔ 02 u000000000#185.201.11.23 0.445s
00:09 git:check
01 git ls-remote git@gitlab.com: xxxxxx .git HEAD
01 75bb7ded165efb968f00d29808b0673d7517aa41 HEAD
✔ 01 u000000000#185.201.11.23 1.438s
00:10 deploy:check:directories
01 mkdir -p /home/u000000000/public_html/storekey-demo/shared /home/u000000000/public_html/storekey-demo/releases
✔ 01 u000000000#185.201.11.23 0.399s
00:12 git:clone
The repository mirror is at /home/u000000000/public_html/storekey-demo/repo
00:12 git:update
01 git remote set-url origin git@gitlab.com: xxxxxxx .git
✔ 01 u000000000#185.201.11.23 0.465s
02 git remote update --prune
02 Fetching origin
✔ 02 u000000000#185.201.11.23 1.537s
00:15 git:create_release
01 mkdir -p /home/u000000000/public_html/storekey-demo/releases/20180813165948
✔ 01 u000000000#185.201.11.23 0.455s
02 git archive master | /usr/bin/env tar -x -f - -C /home/u000000000/public_html/storekey-demo/releases/20180813165948
✔ 02 u000000000#185.201.11.23 6.387s
00:23 deploy:set_current_revision
01 echo "75bb7ded165efb968f00d29808b0673d7517aa41" > REVISION
✔ 01 u000000000#185.201.11.23 0.443s
00:24 deploy:build
01 composer install --no-dev --quiet --optimize-autoloader
✔ 01 u000000000#185.201.11.23 6.045s
00:30 deploy:symlink:release
01 ln -s /home/u000000000/public_html/storekey-demo/releases/20180813165948 /home/u000000000/public_html/storekey-demo/releases/current
✔ 01 u000000000#185.201.11.23 25.267s
02 mv /home/u000000000/public_html/storekey-demo/releases/current /home/u000000000/public_html/storekey-demo
✔ 02 u000000000#185.201.11.23 0.421s
00:56 deploy:cleanup
Keeping 5 of 6 deployed releases on 185.201.11.23
01 rm -rf /home/u000000000/public_html/storekey-demo/releases/20180813164636
✔ 01 u000000000#185.201.11.23 0.543s
00:58 deploy:log_revision
01 echo "Branch master (at 75bb7ded165efb968f00d29808b0673d7517aa41) deployed as release 20180813165948 by francisco" >> /home/u000000000/publ…
✔ 01 u000000000#185.201.11.23 0.530s
00:59 deploy:copy
Uploading production.env 100.0%
It seems like the problem was in FileZilla itself, and not in Capistrano.
I connected to the server over SSH, deleted the /current symlink, created it again pointing to the latest build, and ran cap production deploy. The problem is fixed and changes are being deployed correctly, but FileZilla is still not recognizing the symlink correctly.
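For reference, the manual fix described above amounts to something like this (a sketch using the release directory from the log; substitute whatever the latest release on the server actually is):
$ cd /home/u000000000/public_html/storekey-demo
$ rm current
$ ln -s releases/20180813165948 current
Capistrano does the same thing during deploy:symlink:release, as shown in the log: it creates the symlink under releases/ and then moves it into place.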

Outgoing mails limit in Qmail

I was googling and found this link helpful https://serverfault.com/questions/538233/qmail-limit-number-of-emails-sent-by-user-or-domain-per-hour
My OS is CentOS with Plesk 11.x + Qmail.
I tried my best to follow the directives given in the link, e.g.:
1) I installed spamdyke successfully
2) Created the directory: mkdir -p /home/vpopmail/bin/qmail-antispam
3) mkdir -p /etc/qmailadmin/qmail-spam/blacklist
4) crontab -e
*/5 * * * * /home/vpopmail/bin/qmail-antispam >> /var/log/maillog 2>&1
5) I saved the script as qmail-antispam and also tried moving it to blacklist, but in both locations it displays:
Code:
# tail -f /var/log/maillog | grep "qmail-antispam"
/bin/sh: /home/vpopmail/bin/qmail-antispam: is a directory
/bin/sh: /home/vpopmail/bin/qmail-antispam: is a directory
Tue Oct 20 12:55:01 CDT 2015 qmail-antispam : Revisando logs
Tue Oct 20 12:55:01 CDT 2015 qmail-antispam : Fin de revision
Tue Oct 20 13:00:01 CDT 2015 qmail-antispam : Revisando logs
Tue Oct 20 13:00:01 CDT 2015 qmail-antispam : Fin de revision
Questions:
1) Where should I save this script: in /etc/qmailadmin/qmail-spam/blacklist or in /home/vpopmail/bin/qmail-antispam, and under which name?
2) MAX_CORREOS=3000 // is that where I set the maximum number of outgoing mails?
ID_SERVER="ID_SERVER" // may I change this to my server hostname?
CONTACTO=admin@gmail.com // here I set the admin email that receives the report. Is that right?
Please advise.
Thanks in anticipation
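The "/bin/sh: /home/vpopmail/bin/qmail-antispam: is a directory" lines suggest that the mkdir -p in step 2 created qmail-antispam as a directory, so cron is trying to execute a directory instead of the script. Assuming that is the cause, remove it first (a sketch):
# rm -r /home/vpopmail/bin/qmail-antispam
Then save the script as a plain file at that path: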
vi /home/vpopmail/bin/qmail-antispam
copy and paste the script provided in the link
Edit the script and change the following:
MAX_CORREOS=100
ID_SERVER="my server hostname"
CONTACTO=admin@gmail.com
# chown root /home/vpopmail/bin/qmail-antispam
# chmod 755 /home/vpopmail/bin/qmail-antispam
and check that it's working:
# tail -f /var/log/maillog | grep "qmail-antispam"
Wed Oct 21 19:20:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:25:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:30:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:35:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:40:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:45:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:50:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 19:55:01 CDT 2015 qmail-antispam : Revisando logs
Wed Oct 21 20:00:01 CDT 2015 qmail-antispam : Revisando logs

gsutil rsync -C "continue" option not working

gsutil rsync -C "continue" option is not working from backup_script:
$GSUTIL rsync -c -C -e -r -x $EXCLUDES $SOURCE/Documents/ $DESTINATION/Documents/
From the systemd journal:
$ journalctl --since 12:00
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (CommandException: Error opening file "file:///home/wolfv/Documents/PC_maintenance/backup_systems/gsutil/ssmtp.conf": .)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
because the owner is root rather than the user:
$ ls -l ssmtp.conf
-rw-r-----. 1 root root 1483 Jul 24 21:30 ssmtp.conf
rsync worked fine after deleting the root-owned file.
This happened on a Fedora 22 machine, when cron called backup_script, which called gsutil rsync.
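Until a fixed gsutil is released, a workaround might be to make the root-owned file readable by the user running the cron job, or to skip it with the -x exclude pattern already used in the script (a sketch; wolfv is the user from the log above):
$ sudo chown wolfv: /home/wolfv/Documents/PC_maintenance/backup_systems/gsutil/ssmtp.conf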
Thanks for reporting that problem. We'll get a fix for this bug in gsutil release 4.14.
Mike