Rubber + Google Cloud as generic provider - Capistrano

I've set up and manually created an instance at Google Cloud, added my SSH key (from id_rsa.pub) to it so that I can ssh username@x.x.x.x without any password, and the user is set as a sudoer.
I've set up the 'generic' provider in rubber.yml:
generic:
  key_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ssh/id_rsa.pub'].first}"
I've set this in deploy.rb:
set :initial_ssh_user, 'username'
Now when I run cap rubber:create_staging RAILS_ENV=staging,
after a few questions about role/IP it asks me for the root password:
** Instance api-staging created: api-staging
Waiting for instances to start
** Instance running, fetching hostname/ip data
.Trying to enable root login
Password for username@x.x.x.x: .
* 2014-11-16 14:07:45 executing `rubber:_ensure_key_file_present'
* 2014-11-16 14:07:45 executing `rubber:_allow_root_ssh'
* executing "sudo -p 'sudo password: ' bash -l -c 'mkdir -p /root/.ssh && cp /home/username/.ssh/authorized_keys /root/.ssh/'"
servers: ["x.x.x.x"]
** Can't connect as user cat.of.duty to x.x.x.x, assuming root allowed
* 2014-11-16 14:07:46 executing `rubber:_direct_connection_x.x.x.x'
* executing "echo"
servers: ["x.x.x.x"]
.. ** Failed to connect to x.x.x.x, retrying
This repeats over and over again.
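One way to sanity-check the step that keeps failing (a rough sketch; the IP and paths are the placeholders used above) is to repeat by hand what rubber:_allow_root_ssh does and then confirm direct root login:
# repeat the authorized_keys copy that Rubber attempts via sudo
ssh username@x.x.x.x "sudo mkdir -p /root/.ssh && sudo cp /home/username/.ssh/authorized_keys /root/.ssh/"
# then confirm that direct root login works with the same key
ssh -i ~/.ssh/id_rsa root@x.x.x.x 'echo root login ok'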

Related

Elixir on Docker cannot connect to Postgres

I am trying to deploy an Elixir application in a Docker container. It deploys successfully, but when I hit an API endpoint it returns the following error:
{
  errors: {
    detail: "Internal Server Error"
  }
}
After checking the logs with docker logs [CONTAINER_ID], the following errors appear (it cannot connect to the database). My database is on AWS Aurora, and I am able to connect to it using pgAdmin:
18:21:26.624 [error] #PID<0.652.0> running ABC.Endpoint (connection #PID<0.651.0>, stream id 1) terminated
Server: localhost:4000 (http)
Request: GET /api/v1/popular-searches/us/en
** (exit) an exception was raised:
** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2850ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:
1. Tracking down slow queries and making sure they are running fast enough
2. Increasing the pool_size (albeit it increases resource consumption)
3. Allowing requests to wait longer by increasing :queue_target and :queue_interval
See DBConnection.start_link/2 for more information
(ecto_sql 3.5.3) lib/ecto/adapters/sql.ex:751: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.5.3) lib/ecto/adapters/sql.ex:684: Ecto.Adapters.SQL.execute/5
(ecto 3.5.5) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
(ecto 3.5.5) lib/ecto/repo/queryable.ex:17: Ecto.Repo.Queryable.all/3
(ecto 3.5.5) lib/ecto/repo/queryable.ex:157: Ecto.Repo.Queryable.one!/3
(_api 0.1.1) lib/api_web/controllers/V1/cms_data_controller.ex:14: ApiWeb.V1.CMSDataController.get_popular_searches/2
(_api 0.1.1) lib/_api_web/controllers/V1/cms_data_controller.ex:1: ApiWeb.V1.CMSDataController.action/2
(_api 0.1.1) lib/_api_web/controllers/V1/cms_data_controller.ex:1: ApiWeb.V1.CMSDataController.phoenix_controller_pipeline/2
18:21:26.695 [error] Postgrex.Protocol (#PID<0.487.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.714 [error] Postgrex.Protocol (#PID<0.479.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.718 [error] Postgrex.Protocol (#PID<0.469.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.810 [error] Postgrex.Protocol (#PID<0.493.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :
I have checked the environment variables in the container and they are correct, the database URL is correct, and my Dockerfile looks like this:
# base image elixir to start with
FROM elixir:1.13.4
# install hex package manager
RUN mix local.hex --force
RUN curl -o phoenix_new.ez https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez
RUN mix archive.install ./phoenix_new.ez
RUN mkdir /app
COPY . /app
WORKDIR /app
ENV MIX_ENV=prod
ENV PORT=4000
ENV DATABASE_URL=postgres://[URL]
RUN mix local.rebar --force
RUN mix deps.get --only prod
RUN mix compile
RUN mix phx.digest
CMD mix phx.server
After building the image I start it like this:
docker build . -t [name]
docker run --name [name] -p 4000:4000 -d [name]
What am I doing wrong?
Any help would be appreciated.
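One thing worth checking (a rough sketch; [name] is the container-name placeholder used above): the failed lookups are for a host literally called postgres, so it helps to see what the running container actually has:
# print the DATABASE_URL the container was started with
docker exec [name] sh -c 'echo "$DATABASE_URL"'
# check whether the host "postgres" from the error resolves at all
docker exec [name] sh -c 'getent hosts postgres || echo "host postgres does not resolve"'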

Error creating a connection to PostgreSQL while preparing the Konga database

After installing Konga, we are trying to prepare the Konga database on the already running PostgreSQL instance, using the suggested command, i.e.:
node ./bin/konga.js prepare --adapter postgres --uri postgresql://localhost:5432/konga
But we are facing the error below:
Error creating a connection to Postgresql using the following settings:
postgresql://localhost:5432/konga?host=localhost&port=5432&schema=true&ssl=false&adapter=sails-postgresql&user=postgres&password=XXXX&database=konga_database&identity=postgres
* * *
Complete error details:
error: password authentication failed for user "root"
error: A hook (`orm`) failed to load!
error: Failed to prepare database: error: password authentication failed for user "root"
We even created the schema konga_database manually and have tried several variations of the prepare command, but no luck:
node ./bin/konga.js prepare --adapter postgres --uri postgresql://kong:XXXX@localhost:5432/konga_database
node ./bin/konga.js prepare --adapter postgres --uri postgresql://kong@localhost:5432/konga
node ./bin/konga.js prepare --adapter postgres --uri postgresql://kong@localhost:5432/konga_database
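As an extra sanity check (a hypothetical step, reusing the credentials above), the connection can also be tested directly with psql to confirm that Postgres accepts the postgres user's password and that the failure is specific to Konga:
PGPASSWORD=XXXX psql "postgresql://postgres@localhost:5432/konga_database" -c 'SELECT current_user;'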
Below is config/connections.js
postgres: {
  adapter: 'sails-postgresql',
  url: process.env.DB_URI,
  host: process.env.DB_HOST || 'localhost',
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || 'XXXX',
  port: process.env.DB_PORT || 5432,
  database: process.env.DB_DATABASE || 'konga_database',
  // schema: process.env.DB_PG_SCHEMA || 'public',
  // poolSize: process.env.DB_POOLSIZE || 10,
  ssl: process.env.DB_SSL ? true : false // If set, assume it's true
},
Below is .env file configuration
PORT=1337
NODE_ENV=production
KONGA_HOOK_TIMEOUT=120000
DB_ADAPTER=postgres
DB_URI=postgresql://localhost:5432/konga
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=XXXX
KONGA_LOG_LEVEL=info
TOKEN_SECRET=
Kong and PostgreSQL are already running on the Amazon Linux 2 AMI server on their respective ports, i.e. 8443 and 5432.
Please help us prepare the DB and start the Konga service. Also, let us know in case you need more info.
Node v: v12.19.0
NPM v: 6.14.8
Regards
Nitin G
Maybe I overlooked it, but what version of PostgreSQL are you using?
Konga is not able to support PostgreSQL 12:
https://github.com/pantsel/konga/issues/487
Have you tried it like this?
.env
DB_URI=postgresql://userdb:passworddb@localhost:5432/kongadb
I tried it on PostgreSQL 9.6:
https://www.rosehosting.com/blog/how-to-install-postgresql-9-6-on-ubuntu-20-04/
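To confirm whether the version issue applies, the server version can be checked directly (a hypothetical check; user and database are placeholders from above):
psql -h localhost -p 5432 -U postgres -d konga_database -c 'SELECT version();'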

How do I create the first MongoDB user with authorization enabled?

I am trying to create an admin user. I've tried several different ways.
I realize that authorization is enabled, and that if I turned it off and then back on, it would allow me to create the first user. However, I am trying to create the first user while authorization is enabled.
I have wiped the data directory and I am dealing with a fresh database.
I've been able to use rs.initiate() and db.createUser() from the console, but what I'm discovering is that it's impossible for me to run a script that both 1) initiates the replica set and 2) creates the admin user using --eval at the same time.
My config looks like this:
storage:
  dbPath: /var/mongodb/db/1
net:
  bindIp: localhost,192.168.103.100
  port: 27001
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/mongod1.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl
Then I connect with this:
mongo --host localhost:27001
Then, within the mongo console, I tried this:
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",
  roles: [
    {role: "root", db: "admin"}
  ]
})
and I get this error:
2020-03-16T19:57:36.796+0000 E QUERY [thread1] Error: couldn't add user: not master :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
DB.prototype.createUser#src/mongo/shell/db.js:1437:15
#(shell):1:1
Update: I've tried starting mongod and then running rs.initiate(); however, I am still getting an issue when I try to create a user.
# ------------------------------------------------------------------------------------------
echo "\nchanging the directory to home dir\n"
cd ~/
# ------------------------------------------------------------------------------------------
echo "\nkilling all running mongo processes\n"
sleep 3
kill $(ps aux | grep '[m]ongod' | awk '{print $2}')
sleep 3
# ------------------------------------------------------------------------------------------
echo "\nremoving all data directories\n"
rm -rf /var/mongodb/db/1
# ------------------------------------------------------------------------------------------
echo "\nremoving all log files\n"
rm -rf /var/mongodb/db/mongod1.log
# ------------------------------------------------------------------------------------------
echo "\nremoving all log files\n"
rm -rf /var/mongodb/pki/m103-keyfile
# ------------------------------------------------------------------------------------------
echo "\ncreating the keyfile\n"
sudo mkdir -p /var/mongodb/pki
sudo chown vagrant:vagrant -R /var/mongodb
openssl rand -base64 741 > /var/mongodb/pki/m103-keyfile
chmod 600 /var/mongodb/pki/m103-keyfile
# ------------------------------------------------------------------------------------------
echo "\ncreating data directories\n"
mkdir -p /var/mongodb/db/1
# ------------------------------------------------------------------------------------------
echo "\ntouching the logs\n"
touch /var/mongodb/db/mongod1.log
# ------------------------------------------------------------------------------------------
echo "\nstarting up mongo repl 1\n"
mongod --config /shared/replica-sets/mongod-repl-1.conf
sleep 3
# ------------------------------------------------------------------------------------------
echo "\nreplicaSet initiate\n"
mongo --port 27001 --eval='rs.initiate()'
# ------------------------------------------------------------------------------------------
echo "\ncreating the user\n"
mongo mongodb://localhost:27001/admin?replicaSet=m103-repl --eval='db.createUser({user:"m103-admin",pwd:"m103-pass",roles:[{role:"userAdminAnyDatabase",db:"admin"}]});'
Here's what the script returns:
MongoDB shell version v3.6.17
connecting to: mongodb://localhost:27001/admin?gssapiServiceName=mongodb&replicaSet=m103-repl
2020-03-16T20:41:39.786+0000 I NETWORK [thread1] Starting new replica set monitor for m103-repl/localhost:27001
2020-03-16T20:41:39.787+0000 I NETWORK [thread1] Successfully connected to localhost:27001 (1 connections now open to localhost:27001 with a 5 second timeout)
2020-03-16T20:41:39.788+0000 I NETWORK [thread1] Successfully connected to 192.168.103.100:27001 (1 connections now open to 192.168.103.100:27001 with a 5 second timeout)
2020-03-16T20:41:39.788+0000 W NETWORK [thread1] Unable to reach primary for set m103-repl
2020-03-16T20:41:39.788+0000 W NETWORK [thread1] Unable to reach primary for set m103-repl
2020-03-16T20:41:40.333+0000 W NETWORK [thread1] Unable to reach primary for set m103-repl
2020-03-16T20:41:40.933+0000 I NETWORK [thread1] changing hosts to m103-repl/192.168.103.100:27001 from m103-repl/localhost:27001
Implicit session: session { "id" : UUID("bb96245e-3187-4b01-923f-a1a7d7533159") }
MongoDB server version: 3.6.17
2020-03-16T20:41:40.938+0000 E QUERY [thread1] Error: couldn't add user: there are no users authenticated :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
DB.prototype.createUser#src/mongo/shell/db.js:1437:15
#(shell eval):1:1
In a Replica Set you first have to initialize the Replica Set with
rs.initiate()
Then connect to the PRIMARY host; there you can create the admin user.
Follow the Deploy Replica Set With Keyfile Authentication tutorial, it describes the deployment step-by-step.
For Sharded Cluster follow Deploy Sharded Cluster with Keyfile Authentication
Follow these tutorials carefully, paying particular attention to which commands are executed on every host, which only once, and which only on the primary, etc.
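As a rough sketch of that order of operations in script form (assuming the same port as above, and creating the user over a direct localhost connection so the localhost exception applies, rather than through the replicaSet URI):
# initiate the replica set on the single member
mongo --port 27001 --eval 'rs.initiate()'
# wait until this node has become PRIMARY
until mongo --port 27001 --quiet --eval 'db.isMaster().ismaster' | grep -q true; do sleep 1; done
# create the first admin user over a direct localhost connection
mongo --port 27001 admin --eval 'db.createUser({user: "m103-admin", pwd: "m103-pass", roles: [{role: "root", db: "admin"}]})'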

Ansible postgresql_db task fails after a very long pause

The following Ansible task (in a Vagrant VM) fails:
- name: ensure database is created
  postgresql_db: name={{dbname}}
  sudo_user: postgres
The task pauses for a few minutes before failing.
The Vagrant VM is CentOS 6.5.1, and the task's output is:
TASK: [postgresql | ensure database is created] *******************************
fatal: [192.168.78.6] => failed to parse:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo via ansible, key=glxzviadepqkwddapvjheeuillbdakly] password:
FATAL: all hosts have already failed -- aborting
I have verified that Postgres is properly installed
by doing vagrant ssh and connecting via psql.
I also validated that I can do a "sudo su postgres" within the VM ...
======== update
It looks like the problem is the sudo_user: postgres, because removing the
Postgres tasks above and replacing them with this one causes the same problem:
- name: say hello from postgres
  command: echo "hello"
  sudo_user: postgres
The output is exactly the same as above, so it's really a problem with
Ansible doing sudo_user on CentOS 6.5.
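A quick way to exercise the same become-as-postgres behaviour outside of a playbook (a hypothetical check, using the current become flags rather than the old sudo_user syntax) is an ad-hoc command:
ansible all -i 192.168.78.6, -m command -a 'whoami' --become --become-user=postgres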
One interesting observation: although I can do "sudo su postgres" from
inside the VM, when I call psql (as the postgres user) I get the message:
could not change directory to "/home/vagrant": Permission denied
but the psql shell still starts successfully.
======== conclusion
The problem was fixed by changing to a stock CentOS box.
Lesson learned: when using Ansible/Vagrant, only use stock OS images...
I am using wait_for on the host:
- local_action: wait_for port=22 host="{{PosgresHost}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
PS:
I think you should use gather_facts: False and do the setup after SSH is up.
Example main.yml:
---
- name: Setup
  hosts: all
  #connection: local
  user: root
  gather_facts: False
  roles:
    - main
Example roles/main/tasks/main.yml
- debug: msg="System {{ inventory_hostname }} "
- local_action: wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
- action: setup
ansible-playbook -i 127.0.0.1, main.yml
PLAY [Setup] ******************************************************************
TASK: [main | debug msg="System {{ inventory_hostname }} "] *******************
ok: [127.0.0.1] => {
"msg": "System 127.0.0.1 "
}
TASK: [main | wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60] ***
ok: [127.0.0.1 -> 127.0.0.1]
PLAY RECAP ********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0

Capistrano times out when deploying using Amazon RDS

I cannot seem to get Capistrano to play nicely with Amazon RDS. I've looked all over the place for info on setting this up correctly, but haven't found any. Right now, when I run cap deploy, the process times out.
This is my deploy.rb:
set :deploy_to, "/opt/bitnami/apps/annarbortshirtcompany.com/cms/"
set :scm, :git
set :repository, "ssh://user@ec2-repository.compute-1.amazonaws.com/~/repo/cms.git"
set :deploy_via, :remote_cache
set :user, "user"
ssh_options[:keys] = [File.join(ENV["HOME"], "EC2", "admin.pem")]
ssh_options[:forward_agent] = true
set :branch, "master"
set :use_sudo, true
set :location, "ec2-webserver.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db, "cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com", :primary => true
# If you are using Passenger mod_rails uncomment this:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end
The username for the RDS database instance differs from the SSH username set here, but it is defined in my database.yml. I figure that this is probably not being read by Capistrano, but I have no idea how to make that happen.
When I run "cap deploy":
ubuntu#ubuntu-VirtualBox:~/RailsApps/cms$ cap deploy
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote ssh://user@ec2-repository.compute-1.amazonaws.com/~/repo/cms.git master"
command finished in 1590ms
* executing "if [ -d /app-directory/shared/cached-copy ]; then cd /app-directory/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard ffc4ec7762566f801c4a9140aa3980dc71e3d06f && git clean -q -d -x -f; else git clone -q ssh://user#ec2-repository.amazonaws.com/~/repo/cms.git /app-directory/shared/cached-copy && cd /app-directory/shared/cached-copy && git checkout -q -b deploy ffc4ec7762566f801c4a9140aa3980dc71e3d06f; fi"
servers: ["ec2-webserver.compute-1.amazonaws.com", "dbinstance.us-east1.rds.amazonaws.com"]
*** [deploy:update_code] rolling back
* executing "rm -rf /app-directory/releases/20110607161612; true"
servers: ["ec2-webserver.compute-1.amazonaws.com", "dbinstance.us-east1.rds.amazonaws.com"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: dbinstance.us-east1.rds.amazonaws.com (Errno::ETIMEDOUT: Connection timed out - connect(2))
connection failed for: dbinstance.us-east1.rds.amazonaws.com (Errno::ETIMEDOUT: Connection timed out - connect(2))
Why would it want to "update the cached checkout on all servers"? The DB server shouldn't even be needed at this point. I am stumped as to how to fix this. Hopefully someone can point me in the right direction!
I had this exact problem and struggled with it for what I'm embarrassed to say was a good 5 or 6 hours. In the end, when I realized what the problem was, I felt like smacking myself because I knew this once but had forgotten it. Here's the crux of the problem, starting with this part of deploy.rb:
set :location, "ec2-webserver.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db, "cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com", :primary => true
When you define the machine roles for Capistrano, you're not actually identifying which machines will play a particular role...rather, you're identifying on which machines the Capistrano code will run when applying a deployment recipe for a role. So, when you define the :db role, you want to point to your EC2 instance, not the RDS instance. You can't ssh into the RDS machine, so it's impossible for Capistrano to run a recipe there. Instead, point :db to the same machine as you're pointing :web and :app, i.e.
set :location, "ec2-webserver.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db, location, :primary => true
How does the RDS machine then have any involvement? Well, it's the database.yml file that dictates which machine is actually running the RDBMS where the SQL needs to be executed. You just need to be sure you're setting the host: value for the target database, e.g.:
production:
  adapter: mysql2
  encoding: utf8
  reconnect: false
  database: <your_db>_production
  pool: 5
  username: <username>
  password: <password>
  host: cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com
Make sense?
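If there is any doubt that the web host can actually reach RDS, a quick check from the EC2 instance (a hypothetical test; substitute the real credentials from database.yml) is:
mysql -h cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com -u <username> -p -e 'SELECT 1;'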
I hope this saves someone else the frustration I experienced.
David