Setting up Travis CI with Phoenix + Heroku - PostgreSQL

I'm running into issues automating deployments to Heroku with Travis CI for my Phoenix app. Here's the Travis CI build error:
(Mix) The database for AgilePulse.Repo couldn't be created: tcp connect: connection refused - :econnrefused
Here's my .travis.yml config:
language: elixir
elixir:
  - 1.3.2
otp_release:
  - 19.0
sudo: false
addons:
  postgresql: '9.5'
notifications:
  email: false
env:
  - MIX_ENV=test
before_script:
  - cp config/travis_ci_test.exs config/test.secret.exs
  - mix do ecto.create, ecto.migrate
Here's my travis_ci_test.exs:
use Mix.Config

# Configure your database
config :agile_pulse, AgilePulse.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "",
  database: "travis_ci_test",
  hostname: "localhost",
  pool: Ecto.Adapters.SQL.Sandbox
Any pointers would be greatly appreciated!
Additional info:
GitHub repo: https://github.com/cscairns/agile-pulse-api

On a second look: judging by the Travis log you posted, it looks like your build is bootstrapping an Ubuntu 12.04 (Precise) image, and I suspect that Postgres 9.5 is not available on Precise:
https://docs.travis-ci.com/user/database-setup/#Using-a-different-PostgreSQL-Version
Could you try switching to Postgres 9.4 and see if that works?
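If it helps, that would be a one-line change to the .travis.yml above (a minimal sketch; '9.4' is taken from the suggestion, not something verified against the Precise image):
addons:
  postgresql: '9.4'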

Related

What is the correct dbt profile for PostgreSQL?

I have a dbt_project.yml like the one below.
name: 'DataTransformer'
version: '1.0.0'
config-version: 2
profile: 'DataTransformer'
I'm using Postgres, hence I have a profile at .dbt/profiles.yml:
DataTransformer:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      port: 55000
      user: postgres
      pass: postgrespw
      dbname: postgres
      schema: public
      threads: 4
But when I run dbt debug, I get: Credentials in profile "DataTransformer", target "dev" invalid: ['dbname'] is not of type 'string'
I have searched, and several people have encountered the same error, but with different databases. I'm still not sure why it happens in my case.
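As a hedged sanity check (this thread has no confirmed fix): the message means dbt's schema validation saw something other than a string for dbname. A profiles.yml sketch with the string-valued fields explicitly quoted, using the values from the question, would look like this:
DataTransformer:
  target: dev
  outputs:
    dev:
      type: postgres
      host: "localhost"
      port: 55000
      user: "postgres"
      pass: "postgrespw"
      dbname: "postgres"
      schema: "public"
      threads: 4
If the real file templates any of these values in (for example via env_var), it is worth confirming that the rendered value is a non-empty string.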

GAE Connection to SQL: password authentication failed for user 'postgres'

A Node.js app on GAE flex deploys correctly, but won't connect to Postgres, even though the initial knex migration worked and the tables were created. I've read through the documentation and can't understand how all of the below can be true.
running psql -h [ipaddress] -p 5432 -U postgres mydb and entering the password from my local machine works!
package.json:
"prestart": "npx knex migrate:latest && npx knex seed:run",
"start": "NODE_ENV=production npm-run-all build server"
This worked! Tables were created and the seed was run.
knexfile:
production: {
  client: 'postgresql',
  connection: {
    database: DB_PASS,
    user: DB_USER,
    password: DB_PASS,
    host: DB_HOST
  },
  pool: {
    min: 2,
    max: 10
  },
  migrations: {
    directory: './db/migrations',
    tableName: 'knex_migrations'
  },
  seeds: {
    directory: './db/seeds/dev'
  }
}
app.yaml:
runtime: nodejs
env: flex
instance_class: F2
beta_settings:
  cloud_sql_instances: xxxx-00000:us-west1:myinst
env_variables:
  DB_USER: 'postgres'
  DB_PASS: 'mypass'
  DB_NAME: 'myddb'
  DB_HOST: '/cloudsql/xxxx-00000:us-west1:myinst'
handlers:...
IAM
Oddly, it was a log-only issue: the logs still say user authentication failed, but the app was actually connected.

concourse git resource error: getting the final child's pid from pipe caused "EOF"

When trying to pull a git resource, we are getting an error:
runc run: exit status 1: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\""
We are using Oracle Linux release 7.6 and Docker version 18.03.1-ce.
We have followed the instructions at https://github.com/concourse/concourse-docker. We have tried older versions of Concourse (4.2.0 & 4.2.3). We can see the workers are up using fly.
We found https://github.com/concourse/concourse/issues/4021 on GitHub, which describes a similar issue, but we couldn't find the related Stack Overflow post that the answerer mentioned.
Our docker-compose file:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass
  web:
    image: concourse/concourse
    command: web
    links: [db]
    depends_on: [db]
    ports: ["61111:8080"]
    volumes: ["<path to repo folder>/keys/web:/concourse-keys"]
    environment:
      CONCOURSE_EXTERNAL_URL: <our url>
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
  worker:
    image: concourse/concourse
    command: worker
    privileged: true
    depends_on: [web]
    volumes: ["<path to repo folder>/keys/worker:/concourse-keys"]
    links: [web]
    stop_signal: SIGUSR2
    environment:
      CONCOURSE_TSA_HOST: web:2222
We expected the resource to pull, as connectivity to the repo is in place and verified.
Not sure about your second issue with volumes, but I solved the original problem by setting the user.max_user_namespaces parameter to 15000:
sysctl -w user.max_user_namespaces=15000
The solution was found here: https://github.com/docker/docker.github.io/issues/7962
This issue was fixed by updating the kernel from 3.1.x to 4.1.x. We now have a new issue: failed to create volume on all our pipelines. I will update if I find a solution to this too.

Heroku can't connect with Postgres DB/Knex/Express

I have an Express API deployed to Heroku, but when I attempt to run the migrations, it throws the following error:
heroku run knex migrate:latest
Running knex migrate:latest on ⬢ bookmarks-node-api... up, run.9925 (Free)
Using environment: production
Error: connect ECONNREFUSED 127.0.0.1:5432
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1117:14)
In my knexfile.js, I have:
production: {
  client: 'postgresql',
  connection: {
    database: process.env.DATABASE_URL
  },
  pool: {
    min: 2,
    max: 10
  },
  migrations: {
    directory: './database/migrations'
  }
}
I also tried replacing the migrations directory setting with tableName: 'knex_migrations', which throws the error:
heroku run knex migrate:latest
Running knex migrate:latest on ⬢ bookmarks-node-api... up, run.7739 (Free)
Using environment: production
Error: ENOENT: no such file or directory, scandir '/app/migrations'
Here is the config as set in Heroku:
-node-api git:(master) heroku pg:info
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 10.7
Created: 2019-02-21 12:58 UTC
Data Size: 7.6 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
I think the issue is that, for some reason, it is looking at localhost for the database, as if the environment were being read as development, though the trace shows Using environment: production.
When you provide an object as your connection, you're providing the individual parts of the connection information. Here, you're saying that the name of your database is everything contained in process.env.DATABASE_URL:
connection: {
  database: process.env.DATABASE_URL
},
Any keys you don't provide values for fall back to defaults. An example is the host key, which defaults to the local machine.
But the DATABASE_URL environment variable contains all of the information that you need to connect (host, port, user, password, and database name) in a single string. That whole value should be your connection setting:
connection: process.env.DATABASE_URL,
You should check to see if the Postgres add-on is set up as described in the docs, since the DATABASE_URL is automatically set for you.

Ansible postgresql_db task fails after a very long pause

The following Ansible task (in a Vagrant VM) fails:
- name: ensure database is created
  postgresql_db: name={{dbname}}
  sudo_user: postgres
The task pauses for a few minutes before failing.
The Vagrant VM is a CentOS 6.5.1 box.
The task's output is:
TASK: [postgresql | ensure database is created] *******************************
fatal: [192.168.78.6] => failed to parse:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo via ansible, key=glxzviadepqkwddapvjheeuillbdakly] password:
FATAL: all hosts have already failed -- aborting
I have verified that Postgres is properly installed by doing vagrant ssh and connecting via psql.
I also validated that I can do a "sudo su postgres" within the VM ...
======== update
It looks like the problem is the sudo_user: postgres, because removing the postgres tasks above and replacing them with this one causes the same problem:
- name: say hello from postgress
  command: echo "hello"
  sudo_user: postgres
The output is exactly the same as above, so it's really a problem with Ansible doing a sudo_user on CentOS 6.5.
One interesting observation: although I can do "sudo su postgres" from inside the VM, when I call "psql" (as the postgres user) I get the message:
could not change directory to "/home/vagrant": Permission denied
but the psql shell still starts successfully.
======== conclusion
The problem was fixed by changing to a stock CentOS box.
Lesson learned: when using Ansible/Vagrant, only use stock OS images...
I am using wait_for on the host:
- local_action: wait_for port=22 host="{{PosgresHost}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
PS:
I think you should use gather_facts: False and run the setup module after SSH is up.
Example main.yml:
---
- name: Setup
  hosts: all
  #connection: local
  user: root
  gather_facts: False
  roles:
    - main
Example roles/main/tasks/main.yml:
- debug: msg="System {{ inventory_hostname }} "
- local_action: wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
- action: setup
ansible-playbook -i 127.0.0.1, main.yml
PLAY [Setup] ******************************************************************
TASK: [main | debug msg="System {{ inventory_hostname }} "] *******************
ok: [127.0.0.1] => {
    "msg": "System 127.0.0.1 "
}
TASK: [main | wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60] ***
ok: [127.0.0.1 -> 127.0.0.1]
PLAY RECAP ********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
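As a side note, sudo_user was replaced by become_user in later Ansible releases; on current versions the original database task would be written roughly like this (a sketch, with the module options carried over from the question):
- name: ensure database is created
  postgresql_db:
    name: "{{ dbname }}"
  become: yes
  become_user: postgres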