I cannot seem to get Capistrano to play nicely with Amazon RDS. I've looked all over the place for info on setting this up correctly, but haven't found any. Right now, when I run cap deploy, the process times out.
This is my deploy.rb:
set :deploy_to, "/opt/bitnami/apps/annarbortshirtcompany.com/cms/"
set :scm, :git
set :repository, "ssh://user@ec2-repository.compute-1.amazonaws.com/~/repo/cms.git"
set :deploy_via, :remote_cache
set :user, "user"
ssh_options[:keys] = [File.join(ENV["HOME"], "EC2", "admin.pem")]
ssh_options[:forward_agent] = true
set :branch, "master"
set :use_sudo, true
set :location, "ec2-webserver.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db, "cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com", :primary => true
# If you are using Passenger mod_rails uncomment this:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end
The username for the RDS database instance differs from the SSH username set here, but it is defined in my database.yml. I figure that this is probably not being read by Capistrano, but I have no idea how to make that happen.
When I "cap deploy":
ubuntu@ubuntu-VirtualBox:~/RailsApps/cms$ cap deploy
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote ssh://user@ec2-repository.compute-1.amazonaws.com/~/repo/cms.git master"
command finished in 1590ms
* executing "if [ -d /app-directory/shared/cached-copy ]; then cd /app-directory/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard ffc4ec7762566f801c4a9140aa3980dc71e3d06f && git clean -q -d -x -f; else git clone -q ssh://user#ec2-repository.amazonaws.com/~/repo/cms.git /app-directory/shared/cached-copy && cd /app-directory/shared/cached-copy && git checkout -q -b deploy ffc4ec7762566f801c4a9140aa3980dc71e3d06f; fi"
servers: ["ec2-webserver.compute-1.amazonaws.com", "dbinstance.us-east1.rds.amazonaws.com"]
*** [deploy:update_code] rolling back
* executing "rm -rf /app-directory/releases/20110607161612; true"
servers: ["ec2-webserver.compute-1.amazonaws.com", "dbinstance.us-east1.rds.amazonaws.com"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: dbinstance.us-east1.rds.amazonaws.com (Errno::ETIMEDOUT: Connection timed out - connect(2))
connection failed for: dbinstance.us-east1.rds.amazonaws.com (Errno::ETIMEDOUT: Connection timed out - connect(2))
Why would it want to "update the cached checkout on all servers"? The DB server shouldn't even be needed at this point. I am stumped as to how to fix this. Hopefully someone can point me in the right direction!
I had this exact problem and struggled with it for what I'm embarrassed to say was a good 5 or 6 hours. In the end, when I realized what the problem was, I felt like smacking myself because I had known this once but had forgotten it. Here's the crux of the problem, starting with this part of deploy.rb:
set :location, "ec2-webserver.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db, "cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com", :primary => true
When you define the machine roles for Capistrano, you're not actually identifying which machines will play a particular role; rather, you're identifying the machines on which the Capistrano code will run when applying a deployment recipe for a role. So, when you define the :db role, you want to point to your EC2 instance, not the RDS instance. You can't ssh into the RDS machine, so it's impossible for Capistrano to run a recipe there. Instead, point :db to the same machine that :web and :app point to, i.e.:
set :location, "ec2-webserver.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db, location, :primary => true
How does the RDS machine then have any involvement? Well, it's the database.yml file that dictates which machine is actually running the RDBMS where the SQL needs to be executed. You just need to be sure you're setting the host: value for the target database, e.g.:
production:
  adapter: mysql2
  encoding: utf8
  reconnect: false
  database: <your_db>_production
  pool: 5
  username: <username>
  password: <password>
  host: cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com
Make sense?
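One extra sanity check that can help (purely a suggestion, assuming the MySQL client is installed on the EC2 host and the RDS security group allows connections from it): log into the web/app instance and confirm it can actually reach the RDS endpoint before deploying:
# run on the EC2 web/app host; <username> is the RDS user from database.yml
mysql -h cmsinstance.c7r8frl6npxn.us-east-1.rds.amazonaws.com -u <username> -p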
I hope this saves someone else the frustration I experienced.
David
Related
I have an Azure database server for which I need to obtain the connection string via the command line. However, it is telling me that "az postgres server show-connection-string" is not right, that "show-connection-string" is not recognized. I copied and pasted it from the Microsoft documentation, but it gives the error, and I can't seem to find the answer. I could use a hand obtaining the connection string. Here is my CLI command:
az postgres server show-connection-string --server-name myserver1 --admin-user myuser@myserver --admin-password mypassword
Ensure you are running the latest version of the Azure CLI. As of this answer, the latest is 2.26.0, but I got this command to run in Cloud Shell, which is running 2.25.0.
ken@Azure:~$ az postgres server show-connection-string --server-name myserver1 --admin-user myuser@myserver --admin-password mypassword
{
  "connectionStrings": {
    "C++ (libpq)": "host=myserver1.postgres.database.azure.com port=5432 dbname={database} user=myuser@myserver@myserver1 password=mypassword sslmode=require",
    "ado.net": "Server=myserver1.postgres.database.azure.com;Database={database};Port=5432;User Id=myuser@myserver@myserver1;Password=mypassword;",
    "jdbc": "jdbc:postgresql://myserver1.postgres.database.azure.com:5432/{database}?user=myuser@myserver@myserver1&password=mypassword",
    "node.js": "var client = new pg.Client('postgres://myuser@myserver@myserver1:mypassword@myserver1.postgres.database.azure.com:5432/{database}');",
    "php": "host=myserver1.postgres.database.azure.com port=5432 dbname={database} user=myuser@myserver@myserver1 password=mypassword",
    "psql_cmd": "postgresql://myuser@myserver@myserver1:mypassword@myserver1.postgres.database.azure.com/{database}?sslmode=require",
    "python": "cnx = psycopg2.connect(database='{database}', user='myuser@myserver@myserver1', host='myserver1.postgres.database.azure.com', password='mypassword', port='5432')",
    "ruby": "cnx = PG::Connection.new(:host => 'myserver1.postgres.database.azure.com', :user => 'myuser@myserver@myserver1', :dbname => '{database}', :port => '5432', :password => 'mypassword')"
  }
}
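If your local install is behind, a quick way to check and update it (just a sketch; az upgrade only exists in Azure CLI 2.11.0 and later, so older installs may need to update through their package manager instead):
az --version
az upgrade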
In the knex documentation for configuring knexfile.js for PostgreSQL, there is a property called client, which looks like this:
...
client: 'pg'
...
However, going through some other projects that use PostgreSQL, I noticed that they have a different value there, which looks like this:
...
client: 'postgresql'
...
Does this string correspond to the name of some command-line tool that is being used with the project, or am I misunderstanding something?
PostgreSQL is based on a client-server model, as described in 'Architectural Fundamentals'.
psql is the standard CLI client of Postgres, as mentioned in the docs.
A client may also be a GUI such as pgAdmin, or a Node package such as 'pg' - here's a list.
The client parameter is required and determines which client adapter will be used with the library.
You should also read the 'Server Setup and Operation' docs.
To initialize the library you can do the following (in this case on localhost):
var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
})
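For the PostgreSQL case the question asks about, here's a minimal sketch (hypothetical local credentials); as far as I know, knex treats 'pg', 'postgres' and 'postgresql' as aliases for the same PostgreSQL dialect, so either value from the question works:
var knexPg = require('knex')({
  client: 'pg',            // 'postgresql' would behave identically here
  connection: {
    host : '127.0.0.1',
    user : 'foobar',
    password : 'your_database_password',
    database : 'my_app'
  }
})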
The standard user of the database daemon is 'postgres' - which you can use, of course, but it's highly advisable to create a new user as stated in the docs and/or set a password for the standard user 'postgres'.
On Debian Stretch, for example:
# su - postgres
$ psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'SecretPasswordHere';"
Make sure you delete the command-line history so nobody can read your password:
rm ~/.psql_history
Now you can add a new user (e.g. foobar) on the system and for Postgres:
# adduser foobar
and
# su - postgres
$ createuser --pwprompt --interactive foobar
Let's look at the following setup:
module.exports = {
  development: {
    client: 'xyz',
    connection: { user: 'foobar', database: 'my_app' }
  },
  production: { client: 'abc', connection: process.env.DATABASE_URL }
};
This basically tells us the following:
In dev - use the client xyz to connect to PostgreSQL's database my_app as the user foobar (in this case without a password)
In prod - read the database server URL from the DATABASE_URL environment variable and connect via the client abc (see the example below)
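For that production case, DATABASE_URL is expected to hold a full connection string, something like this (hypothetical credentials, reusing the names from the snippets here):
# set in the environment of the production host
export DATABASE_URL=postgres://foobar:somePWD@someUrl:5432/my_app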
Here's an example of how Node's pg client package opens a connection pool:
const { Pool } = require('pg')  // import the snippet relies on
const pool = new Pool({
  user: 'foobar',
  host: 'someUrl',
  database: 'someDataBaseName',
  password: 'somePWD',
  port: 5432,
})
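And a hypothetical usage of that pool - in current pg versions both query and end return promises:
// run a query through the pool, then shut it down
pool.query('SELECT NOW()')
  .then(function (res) { console.log(res.rows[0]) })
  .then(function () { return pool.end() })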
If you could clarify or elaborate on your setup, or on what you'd like to achieve, a little more, I could give you some more detailed info - but I hope that helped anyway.
I've manually set up and created an instance at Google Cloud and added my SSH key (from id_rsa.pub) to it, so I can ssh username@x.x.x.x without any password; the user is set as a sudoer.
I've set up the 'generic' provider in rubber.yml:
generic:
  key_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ssh/id_rsa.pub'].first}"
I've set in deploy.rb:
set :initial_ssh_user, 'username'
Now when I run cap rubber:create_staging RAILS_ENV=staging, after a few questions about roles/IPs it asks me for the root password:
** Instance api-staging created: api-staging
Waiting for instances to start
** Instance running, fetching hostname/ip data
.Trying to enable root login
Password for username @ x.x.x.x: .
* 2014-11-16 14:07:45 executing `rubber:_ensure_key_file_present'
* 2014-11-16 14:07:45 executing `rubber:_allow_root_ssh'
* executing "sudo -p 'sudo password: ' bash -l -c 'mkdir -p /root/.ssh && cp /home/username/.ssh/authorized_keys /root/.ssh/'"
servers: ["x.x.x.x"]
** Can't connect as user cat.of.duty to x.x.x.x, assuming root allowed
* 2014-11-16 14:07:46 executing `rubber:_direct_connection_x.x.x.x'
* executing "echo"
servers: ["x.x.x.x"]
.. ** Failed to connect to x.x.x.x, retrying
Over and over again
The following Ansible task (in a Vagrant VM) fails:
- name: ensure database is created
  postgresql_db: name={{dbname}}
  sudo_user: postgres
The task pauses for a few minutes before failing. The Vagrant VM is CentOS 6.5.1. The task's output is:
TASK: [postgresql | ensure database is created] *******************************
fatal: [192.168.78.6] => failed to parse:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo via ansible, key=glxzviadepqkwddapvjheeuillbdakly] password:
FATAL: all hosts have already failed -- aborting
I have verified that Postgres is properly installed by doing vagrant ssh and connecting via psql. I also validated that I can do a "sudo su postgres" within the VM.
======== update
It looks like the problem is sudo_user: postgres, because removing the Postgres tasks above and replacing them with this one causes the same problem:
- name: say hello from postgress
  command: echo "hello"
  sudo_user: postgres
The output is exactly the same as above, so it's really a problem with Ansible doing a sudo_user on CentOS 6.5.
One interesting observation: although I can do "sudo su postgres" from inside the VM, when I call psql (as the postgres user) I get the message:
could not change directory to "/home/vagrant": Permission denied
but the psql shell still starts successfully.
======== conclusion
The problem was fixed by changing to a stock CentOS box. Lesson learned: when using Ansible/Vagrant, only use stock OS images...
I am using wait_for on the host:
- local_action: wait_for port=22 host="{{PosgresHost}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
PS:
I think you should use gather_facts: False and run the setup module after SSH is up.
Example main.yml:
---
- name: Setup
  hosts: all
  #connection: local
  user: root
  gather_facts: False
  roles:
    - main
Example roles/main/tasks/main.yml:
- debug: msg="System {{ inventory_hostname }} "
- local_action: wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
- action: setup
ansible-playbook -i 127.0.0.1, main.yml
PLAY [Setup] ******************************************************************
TASK: [main | debug msg="System {{ inventory_hostname }} "] *******************
ok: [127.0.0.1] => {
"msg": "System 127.0.0.1 "
}
TASK: [main | wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60] ***
ok: [127.0.0.1 -> 127.0.0.1]
PLAY RECAP ********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
I'm trying out Sinatra with Mongoid 3. I run into the following error whenever I attempt to save to the database.
Mongoid::Errors::NoSessionsConfig:
Problem:
No sessions configuration provided.
Summary:
Mongoid's configuration requires that you provide details about each session that can be connected to, and requires in the sessions config at least 1 default session to exist.
Resolution:
Double check your mongoid.yml to make sure that you have a top-level sessions key with at least 1 default session configuration for it. You can regenerate a new mongoid.yml for assistance via `rails g mongoid:config`.
Example:
development:
  sessions:
    default:
      database: mongoid_dev
      hosts:
        - localhost:27017
from /Users/rhodee/.rbenv/versions/1.9.3-p194/lib/ruby/gems/1.9.1/gems/mongoid-3.0.13/lib/mongoid/sessions/factory.rb:61:in `create_session'
I've already confirmed the following:
The mongoid.yml file is loaded
The hash contains the correct environment and db name
Using pry, the return value from the Mongoid.load! call is:
=> {"sessions"=>
{"default"=>
{"database"=>"bluster",
"hosts"=>["localhost:27017"],
"options"=>{"consistency"=>"strongĀ "}}}}
In case it helps, I've added the app.rb file and mongoid.yml file as well.
app.rb
require 'sinatra'
require 'mongoid'
require 'pry'
require 'routes'
require 'location'
configure :development do
  enable :logging, :dump_errors, :run, :sessions
  Mongoid.load!(File.join(File.dirname(__FILE__), "config", "mongoid.yml"))
end
mongoid.yml
development:
  sessions:
    default:
      database: bluster
      hosts:
        - localhost:27017
      options:
        consistency: strong
require 'sinatra'
require 'mongoid'
require 'pry'
require 'routes'
configure :development do
  enable :logging, :dump_errors, :run, :sessions
  Mongoid.load!(File.join(File.dirname(__FILE__), "config", "mongoid.yml"))
end

get '/db' do
  "db: " << Mongoid.default_session[:moped].database.inspect
end
I put together an example, and it is working just fine for me. Your problem is probably something else, like the config file not being readable. Anyway, my config file is identical to yours, the Sinatra file above is mine, and it works fine.
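If you want to rule out the file-access theory, here's a quick sanity check you could drop into app.rb in place of the plain Mongoid.load! call (just a sketch, reusing the same path as above):
# hypothetical check: fail fast if the config file is missing or unreadable
config_path = File.join(File.dirname(__FILE__), "config", "mongoid.yml")
abort "mongoid.yml missing or unreadable at #{config_path}" unless File.readable?(config_path)
Mongoid.load!(config_path)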