I have a dbt_project.yml like the one below:
name: 'DataTransformer'
version: '1.0.0'
config-version: 2
profile: 'DataTransformer'
I am using Postgres, hence I have a profile at .dbt/profiles.yml:
DataTransformer:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      port: 55000
      user: postgres
      pass: postgrespw
      dbname: postgres
      schema: public
      threads: 4
But when I run dbt debug, I get: Credentials in profile "DataTransformer", target "dev" invalid: ['dbname'] is not of type 'string'
I have searched, and several people have encountered the same error, but with different databases. I am still not sure why it happens in my case.
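That validation message literally means the value dbt received for dbname was not parsed as a string. One thing worth trying (an assumption, not a confirmed fix) is quoting the credential values so YAML cannot coerce them into another type, and making sure anything supplied via env_var resolves to a non-empty string:

dev:
  type: postgres
  host: "localhost"
  port: 55000          # port is validated as an integer, so it stays unquoted
  user: "postgres"
  pass: "postgrespw"
  dbname: "postgres"
  schema: "public"
  threads: 4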
I have a schema-based multitenancy setup in Postgres.
TypeORM allows us to run migrations using a single command:
typeorm migration:run -d db.connection.js
The connection is defined like this:
{
type: 'postgres',
host: process.env.DB_HOST,
port: Number(process.env.DB_PORT),
username: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
ssl: process.env.DB_SSL === 'true',
schema,
migrations: migrations || ['./**/*.migration.js'],
...options,
}
Now, I have 3 schemas in my database: A, B and C. How do I run the migrations for all 3 of them at once? This is a really big issue for me, as manually triggering the migration for each schema individually is time-consuming and cumbersome. Does TypeORM have a way to specify multiple schemas?
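As far as I know, TypeORM's migration:run targets a single schema per data source, so one workaround is a small script that loops over the schemas and runs the migrations programmatically, once per schema. A minimal sketch (the file name run-migrations.js and the hard-coded schema list are assumptions; the options mirror db.connection.js above):

// run-migrations.js - run the same migrations against schemas A, B and C in turn
const { DataSource } = require('typeorm');

const schemas = ['A', 'B', 'C'];

async function migrateAll() {
  for (const schema of schemas) {
    const dataSource = new DataSource({
      type: 'postgres',
      host: process.env.DB_HOST,
      port: Number(process.env.DB_PORT),
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
      ssl: process.env.DB_SSL === 'true',
      schema,                              // target schema for this pass
      migrations: ['./**/*.migration.js'],
    });
    await dataSource.initialize();         // open the connection
    await dataSource.runMigrations();      // apply pending migrations in this schema
    await dataSource.destroy();            // close before moving on to the next schema
    console.log(`Migrations applied for schema ${schema}`);
  }
}

migrateAll().catch((err) => {
  console.error(err);
  process.exit(1);
});

Run it with node run-migrations.js once the migration files are built.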
My Node.js app on GAE flex deploys correctly, but won't connect to Postgres, even though the initial knex migration worked and the tables were created. I've read through the documentation and can't understand how all of the below can be true.
Running psql -h [ipaddress] -p 5432 -U postgres mydb from my local machine and entering the password works!
package.json:
"prestart": "npx knex migrate:latest && npx knex seed:run",
"start": "NODE_ENV=production npm-run-all build server"
This worked! Tables were created and the seed was run.
knexfile:
production: {
client: 'postgresql',
connection: {
database: DB_NAME,
user: DB_USER,
password: DB_PASS,
host: DB_HOST
},
pool: {
min: 2,
max: 10
},
migrations: {
directory: './db/migrations',
tableName: 'knex_migrations'
},
seeds: {
directory: './db/seeds/dev'
}
}
app.yaml:
runtime: nodejs
env: flex
instance_class: F2
beta_settings:
  cloud_sql_instances: xxxx-00000:us-west1:myinst
env_variables:
  DB_USER: 'postgres'
  DB_PASS: 'mypass'
  DB_NAME: 'myddb'
  DB_HOST: '/cloudsql/xxxx-00000:us-west1:myinst'
handlers:...
IAM
Oddly, it turned out to be a log-only issue: the logs still say user authentication failed, but the app was actually connected.
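Since the logs turned out to be misleading, a quick way to confirm real connectivity is a one-off query at startup instead of trusting the log lines. A minimal sketch, assuming the knexfile shown above lives at ./knexfile.js and exports the production block:

// check-db.js - fail fast if Postgres is not actually reachable
const config = require('./knexfile').production;
const knex = require('knex')(config);

knex.raw('select 1')
  .then(() => {
    console.log('Postgres reachable at', config.connection.host);
    return knex.destroy();      // close the pool so the script can exit
  })
  .catch((err) => {
    console.error('Postgres connection failed:', err.message);
    process.exit(1);
  });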
I am trying to upload a MongoDB backup to Google Drive.
I am installing the following bundles: dizda/cloud-backup-bundle and Happyr/GoogleSiteAuthenticatorBundle; for adapters I am using cache/adapter-bundle.
Configuration:
dizda_cloud_backup:
    output_file_prefix: '%dizda_hostname%'
    timeout: 300
    processor:
        type: zip # Required: tar|zip|7z
        options:
            compression_ratio: 6
            password: '%dizda_compressed_password%'
    cloud_storages:
        google_drive:
            token_name: 'AIzaSyA4AE21Y-YqneV5f9POG7MPx4TF1LGmuO8' # Required
            remote_path: ~ # Not required, default "/", but you can use path like "/Accounts/backups/"
    databases:
        mongodb:
            all_databases: false # Only required when no database is set
            database: '%database_name%'
            db_host: '%mongodb_backup_host%'
            db_port: '%mongodb_port%'
            db_user: '%mongodb_user%'
            db_password: '%mongodb_password%'
cache_adapter:
    providers:
        my_redis:
            factory: 'cache.factory.redis'
happyr_google_site_authenticator:
    cache_service: 'cache.provider.my_redis'
    tokens:
        google_drive:
            client_id: '85418079755-28ncgsoo91p69bum6ulpt0mipfdocb07.apps.googleusercontent.com'
            client_secret: 'qj0ipdwryCNpfbJQbd-mU2Mu'
            redirect_url: 'http://localhost:8000/googledrive/'
            scopes: ['https://www.googleapis.com/auth/drive']
When I use factory: 'cache.factory.mongodb' I get
You have requested a non-existent service "cache.factory.mongodb"
while running the server, and when running the backup command I get
Something went terribly wrong. We could not create a backup. Read your log files to see what caused this error
I checked the logs and found Command "--env=prod dizda:backup:start" exited with code "1" {"command":"--env=prod dizda:backup:start","code":1} []
I am not sure which adapter I need to use or what is going on here.
Can someone help me? Thanks in advance.
I have an Express API deployed to Heroku, but when I attempt to run the migrations, it throws the following error:
heroku run knex migrate:latest
Running knex migrate:latest on ⬢ bookmarks-node-api... up, run.9925 (Free)
Using environment: production
Error: connect ECONNREFUSED 127.0.0.1:5432
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1117:14)
In my knexfile.js, I have:
production: {
client: 'postgresql',
connection: {
database: process.env.DATABASE_URL
},
pool: {
min: 2,
max: 10
},
migrations: {
directory: './database/migrations'
}
}
I also tried replacing the migrations directory setting with tableName: 'knex_migrations', which throws the error:
heroku run knex migrate:latest
Running knex migrate:latest on ⬢ bookmarks-node-api... up, run.7739 (Free)
Using environment: production
Error: ENOENT: no such file or directory, scandir '/app/migrations'
Here is the config as set in Heroku:
heroku pg:info
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 10.7
Created: 2019-02-21 12:58 UTC
Data Size: 7.6 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
I think the issue is that, for some reason, it is looking at localhost for the database, as if the environment were being read as development, even though the trace shows Using environment: production.
When you provide an object as your connection, you're providing the individual parts of the connection information. Here, you're saying that the name of your database is everything contained in process.env.DATABASE_URL:
connection: {
database: process.env.DATABASE_URL
},
Any keys you don't provide values for fall back to defaults. An example is the host key, which defaults to the local machine, which is why the connection attempt goes to 127.0.0.1.
But the DATABASE_URL environment variable contains all of the information that you need to connect (host, port, user, password, and database name) in a single string. That whole value should be your connection setting:
connection: process.env.DATABASE_URL,
You should check that the Heroku Postgres add-on is set up as described in the Heroku docs, since DATABASE_URL is set for you automatically when the add-on is provisioned.
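Putting that together, the production block would look something like this sketch (it keeps the pool and migrations settings from the question; the commented-out SSL variant is an assumption for setups where Heroku's certificate is rejected):

production: {
  client: 'postgresql',
  // DATABASE_URL already contains host, port, user, password and database name
  connection: process.env.DATABASE_URL,
  // If the connection is refused over SSL, pg also accepts an object form:
  // connection: { connectionString: process.env.DATABASE_URL, ssl: { rejectUnauthorized: false } },
  pool: {
    min: 2,
    max: 10
  },
  migrations: {
    directory: './database/migrations',
    tableName: 'knex_migrations'
  }
}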
I'm running into issues automating deployments to Heroku with Travis CI for my Phoenix app. Here's the Travis CI build error:
(Mix) The database for AgilePulse.Repo couldn't be created: tcp connect: connection refused - :econnrefused
Here's my .travis.yml config:
language: elixir
elixir:
  - 1.3.2
otp_release:
  - 19.0
sudo: false
addons:
  postgresql: '9.5'
notifications:
  email: false
env:
  - MIX_ENV=test
before_script:
  - cp config/travis_ci_test.exs config/test.secret.exs
  - mix do ecto.create, ecto.migrate
Here's my travis_ci_test.exs:
use Mix.Config
# Configure your database
config :agile_pulse, AgilePulse.Repo,
adapter: Ecto.Adapters.Postgres,
username: "postgres",
password: "",
database: "travis_ci_test",
hostname: "localhost",
pool: Ecto.Adapters.SQL.Sandbox
Any pointers would be greatly appreciated!
Additional info:
GitHub repo: https://github.com/cscairns/agile-pulse-api
On a second look: judging by the Travis log you posted, it looks like you're bootstrapping an Ubuntu 12.04 (Precise) image for your build; I suspect that Postgres 9.5 is not available on Precise:
https://docs.travis-ci.com/user/database-setup/#Using-a-different-PostgreSQL-Version
Could you try switching to Postgres 9.4 and see if that works?
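Concretely, that's a one-line change in .travis.yml, something like the sketch below (assuming 9.4 is available on the Precise image, per the Travis docs linked above):

addons:
  postgresql: '9.4'   # was '9.5'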