GAE Connection to SQL: password authentication failed for user 'postgres' - postgresql

My Node.js app on GAE flex deploys correctly, but it won't connect to Postgres, even though the initial knex migration worked and the tables were created. I've read through the documentation and can't understand how all of the below can be true.
Running psql -h [ipaddress] -p 5432 -U postgres mydb from my local machine and entering the password works!
package.json:
"prestart": "npx knex migrate:latest && npx knex seed:run
"start": "NODE_ENV=production npm-run-all build server"
This worked! The tables were created and the seed was run.
knexfile.js:
production: {
client: 'postgresql',
connection: {
database: DB_PASS,
user: DB_USER,
password: DB_PASS,
host: DB_HOST
},
pool: {
min: 2,
max: 10
},
migrations: {
directory: './db/migrations',
tableName: 'knex_migrations'
},
seeds: {
directory: './db/seeds/dev'
}
}
app.yaml:
runtime: nodejs
env: flex
instance_class: F2
beta_settings:
cloud_sql_instances: xxxx-00000:us-west1:myinst
env_variables:
DB_USER: 'postgres'
DB_PASS: 'mypass'
DB_NAME: 'myddb'
DB_HOST: '/cloudsql/xxxx-00000:us-west1:myinst'
handlers:...
IAM

Oddly, it turned out to be a logging-only issue: the logs still say user authentication failed, but the app was actually connected.
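Since all four connection fields in the knexfile come from env vars, one way to make this class of failure louder is to validate them at startup. A minimal sketch (not from the original post; requireEnv is a hypothetical helper, and the sample values only mirror the app.yaml shape):

```javascript
// Sketch: fail fast when a required environment variable is missing or empty,
// so a misconfigured deploy surfaces at startup instead of as an opaque
// authentication error. requireEnv is an illustrative helper, not a knex API.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage with a fake environment mirroring the app.yaml above:
const env = {
  DB_USER: "postgres",
  DB_PASS: "mypass",
  DB_NAME: "mydb",
  DB_HOST: "/cloudsql/project:region:instance",
};
const connection = {
  database: requireEnv("DB_NAME", env),
  user: requireEnv("DB_USER", env),
  password: requireEnv("DB_PASS", env),
  host: requireEnv("DB_HOST", env),
};
```

A check like this would also have flagged a field accidentally wired to the wrong variable before the first connection attempt.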

Related

MikroORM failed to connect to database despite correct username and password

I'm following Ben Awad's YouTube tutorial on writing a full-stack application. I'm using MikroORM with Postgres.
I created a database called tut, a user called tut, then gave that user access to the database. I can verify that the user has access to the db like so:
$ su - tut
Password:
user:/home/tut$ psql
tut=>
Here's what my mikro-orm.config.ts looks like:
import {Post} from "../entities/Post";
import {MikroORM} from "@mikro-orm/core";
import path from "path"
export default {
migrations: {
path: path.join(__dirname, "./migrations"),
pattern: /^[\w-]+\d+.*\.[tj]s$/
},
entities: [Post],
dbName: 'tut',
user: 'tut',
password: 'tut',
type: 'postgresql',
debug: process.env.NODE_ENV !== 'production',
} as Parameters<typeof MikroORM.init>[0]
When I attempt to connect to the db in index.ts I get "MikroORM failed to connect to database tut on postgresql://tut:*****@127.0.0.1:5432" (error code 28P01).
Am I supposed to be running a psql server on localhost? The tutorial doesn't have you do that as far as I can tell.
I fixed this by running \password in psql as tut - thanks @AdrianKlaver.
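For context, 28P01 is PostgreSQL's SQLSTATE for invalid_password, which is why resetting the password with \password fixed it. A small illustrative lookup (the codes are standard SQLSTATEs; exposing the code as err.code follows node-postgres, which MikroORM's postgresql driver uses under the hood):

```javascript
// Illustrative sketch: mapping common PostgreSQL auth-related SQLSTATE codes
// to readable explanations. node-postgres exposes the SQLSTATE as err.code.
function explainPgError(code) {
  const known = {
    "28P01": "invalid_password: the server rejected the supplied password",
    "28000": "invalid_authorization_specification: role or auth method mismatch",
    "3D000": "invalid_catalog_name: the requested database does not exist",
  };
  return known[code] || `unrecognized SQLSTATE ${code}`;
}
```

So a 28P01 points straight at the password itself, not at a missing server or database, matching the fix above.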

Error creating a connection to Postgresql while preparing konga database

After installing Konga, we are trying to prepare the Konga database on the already-running PostgreSQL instance, using the suggested command, i.e.
node ./bin/konga.js prepare --adapter postgres --uri postgresql://localhost:5432/konga
But we are facing the error below:
Error creating a connection to Postgresql using the following settings:
postgresql://localhost:5432/konga?host=localhost&port=5432&schema=true&ssl=false&adapter=sails-postgresql&user=postgres&password=XXXX&database=konga_database&identity=postgres
* * *
Complete error details:
error: password authentication failed for user "root"
error: A hook (`orm`) failed to load!
error: Failed to prepare database: error: password authentication failed for user "root"
We even created the schema konga_database manually and tried several variations of the prepare command, but no luck:
node ./bin/konga.js prepare --adapter postgres --uri postgresql://kong:XXXX@localhost:5432/konga_database
node ./bin/konga.js prepare --adapter postgres --uri postgresql://kong@localhost:5432/konga
node ./bin/konga.js prepare --adapter postgres --uri postgresql://kong@localhost:5432/konga_database
Below is config/connections.js
postgres: {
adapter: 'sails-postgresql',
url: process.env.DB_URI,
host: process.env.DB_HOST || 'localhost',
user: process.env.DB_USER || 'postgres',
password: process.env.DB_PASSWORD || 'XXXX',
port: process.env.DB_PORT || 5432,
database: process.env.DB_DATABASE ||'konga_database',
// schema: process.env.DB_PG_SCHEMA ||'public',
// poolSize: process.env.DB_POOLSIZE || 10,
ssl: process.env.DB_SSL ? true : false // If set, assume it's true
},
Below is .env file configuration
PORT=1337
NODE_ENV=production
KONGA_HOOK_TIMEOUT=120000
DB_ADAPTER=postgres
DB_URI=postgresql://localhost:5432/konga
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=XXXX
KONGA_LOG_LEVEL=info
TOKEN_SECRET=
Kong and PostgreSQL are already running on the AWS Linux AMI 2 server on their respective ports, i.e. 8443 and 5432.
Please help us prepare the DB and start the Konga service. Also, let us know if you need more info.
Node v: v12.19.0
NPM v: 6.14.8
Regards
Nitin G
Maybe I overlooked it, but what version of PostgreSQL are you using?
Konga does not support PostgreSQL 12:
https://github.com/pantsel/konga/issues/487
Have you tried it like this?
.env
DB_URI=postgresql://userdb:passworddb@localhost:5432/kongadb
I tried it on PostgreSQL 9.6:
https://www.rosehosting.com/blog/how-to-install-postgresql-9-6-on-ubuntu-20-04/
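One more thing worth checking with URI-style configs like DB_URI above: credentials must be percent-encoded if they contain URI-reserved characters, or the parsed user can silently fall back to a default (note the error above complains about user "root" even though the URI never names it). A hedged sketch - buildPgUri is an illustrative helper, not part of Konga or sails-postgresql:

```javascript
// Sketch: compose a postgres URI with the user and password percent-encoded,
// so characters like '@' or '#' in a password can't corrupt the authority part.
function buildPgUri({ user, password, host, port, database }) {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `postgresql://${u}:${p}@${host}:${port}/${database}`;
}

const uri = buildPgUri({
  user: "kong",
  password: "p@ss#word", // deliberately awkward password
  host: "localhost",
  port: 5432,
  database: "konga_database",
});

// The WHATWG URL parser recovers every component intact:
const parsed = new URL(uri);
```

If the raw "p@ss#word" were embedded unencoded, everything after the '@' would be misread as the host and the '#' would start a fragment.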

mongo db docker image authentication failed

I'm using the https://hub.docker.com/_/mongo mongo image in my local Docker environment, but I'm getting an Authentication failed error. In docker-compose I add it like this:
my-mongo:
image: mongo
restart: always
container_name: my-mongo
environment:
MONGO_INITDB_ROOT_USERNAME: mongo
MONGO_INITDB_ROOT_PASSWORD: asdfasdf
networks:
- mynet
I also tried to run mongo CLI from inside the container but still getting the same error:
root@76e6db78228b:/# mongo
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c87c0f0e-fe83-41a6-96e9-4aa4ede8fa25") }
MongoDB server version: 4.2.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> use translations
switched to db translations
> db.auth("mongo", "asdfasdf")
Error: Authentication failed.
0
Also, I'm trying to create a separate user:
> use admin
switched to db admin
db.auth("mongo", "asdfasdf")
1
> db.createUser({
user: "user",
pwd: "asdfasdf",
roles: [ {role: "readWrite", db: "translations" } ]
})
Successfully added user: {
"user" : "user",
"roles" : [
{
"role" : "readWrite",
"db" : "translations"
}
]
}
> use translations
switched to db translations
> db.auth("user", "asdfasdf")
Error: Authentication failed.
0
and the same error again. What am I doing wrong?
Updated:
root@8bf81ef1fc4f:/# mongo -u mongo -p asdfasdf --authenticationDatabase admin
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("02231489-eaf4-40be-a108-248cec88257e") }
MongoDB server version: 4.2.3
Server has startup warnings:
2020-02-26T16:24:12.942+0000 I STORAGE [initandlisten]
2020-02-26T16:24:12.943+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-02-26T16:24:12.943+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> db.createUser({user: "someuser", pwd: "asdfasdf", roles: [{role: "readWrite", db: "translations"}]})
Successfully added user: {
"user" : "someuser",
"roles" : [
{
"role" : "readWrite",
"db" : "translations"
}
]
}
> use translations
switched to db translations
> db.auth("someuser", "asdfasdf")
Error: Authentication failed.
0
>
After some time, I figured it out.
In the same folder, create docker-compose.yml and init-mongo.js.
docker-compose.yml
version: '3.7'
services:
database:
image: mongo
container_name: your-cont-name
command: mongod --auth
environment:
- MONGO_INITDB_DATABASE=my_db
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=root
ports:
- '27017-27019:27017-27019'
volumes:
- mongodbdata:/data/db
- ./init-mongo.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
volumes:
mongodbdata:
driver: local
init-mongo.js
db.createUser(
{
user: "your_user",
pwd: "your_password",
roles: [
{
role: "readWrite",
db: "my_db"
}
]
}
);
db.createCollection("test"); // MongoDB creates the database when you first store data in it
Auth
First, execute bash inside the container:
docker exec -it your-cont-name bash
Now we can log in.
For the admin
mongo -u admin -p root
For your_user you have to specify the db (with --authenticationDatabase), otherwise you'll get an auth error:
mongo -u your_user -p your_password --authenticationDatabase my_db
After that, you should switch to the right db with:
use my_db
If you don't execute this command, you'll be on the test db.
Note
To be sure of having the right config, I prefer to run:
docker-compose stop
docker-compose rm
docker volume rm <your-volume>
docker-compose up --build -d
as stated in the Docs
These variables, used in conjunction, create a new user and set that
user's password. This user is created in the admin authentication
database and given the role of root, which is a "superuser" role.
so you need to add --authenticationDatabase admin to your command, since mongod is started with mongod --auth.
example:
mongo -u mongo -p asdfasdf --authenticationDatabase admin
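The same intent can be expressed as a connection string: authSource names the database that holds the user document (admin for users created via MONGO_INITDB_ROOT_USERNAME). A sketch with an illustrative helper (buildMongoUri is not a driver API):

```javascript
// Sketch: build a mongodb:// URI whose authSource points at the database
// holding the user document, while the path names the database to work in.
function buildMongoUri({ user, password, host, port, db, authSource }) {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `mongodb://${u}:${p}@${host}:${port}/${db}?authSource=${authSource}`;
}

const uri = buildMongoUri({
  user: "mongo",
  password: "asdfasdf",
  host: "127.0.0.1",
  port: 27017,
  db: "translations",
  authSource: "admin",
});
```

Connecting with this string is equivalent in intent to `mongo -u mongo -p asdfasdf --authenticationDatabase admin` followed by `use translations`: authenticate against admin, operate on translations.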
I had the same issue; after two hours of googling I finally solved it.
Solution: find the host-machine directory mounted into the MongoDB container, delete it, then re-create the container.
A MongoDB container created by docker-compose.yaml mounts a directory from the host machine into the container to persist the Mongo databases. When you remove the container, the mounted directory is not deleted, so the username and password passed via env vars could be ones you set long ago; changing them now just doesn't work, because re-creating the container does not re-create the admin database.
I fell into this trap and wasted a day while everything was correct.
I'm writing this for future me(s), because it wasn't mentioned anywhere else, and also to help others avoid my mistake when setting up a user/pass combination to connect to their database from other services.
Assuming everything is right:👇
If you are mounting some local folder of yours as storage for your database like below:
services:
your_mongo_db:
# ...some config
volumes:
- ./__TEST_DB_DATA__:/data/db
- ./init-mongo.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
environment:
- "MONGO_INITDB_ROOT_USERNAME=root"
- "MONGO_INITDB_ROOT_PASSWORD=pass"
# ...more config
Please remember to remove this folder before re-running your compose file. When you run the docker-compose command for the first time, Mongo creates and stores the user data there (like any other collection) and then reuses it the next time (since you mounted that volume).
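The underlying behavior, as I understand it: the image's entrypoint only creates the root user and runs the /docker-entrypoint-initdb.d scripts when the data directory is empty. A toy sketch of that decision (illustrative only, not the actual entrypoint shell script):

```javascript
// Illustrative sketch: init (root-user creation + init-mongo.js) runs only
// when the mounted data directory has no existing database files, which is
// why a leftover volume keeps the credentials from its first run.
function shouldRunInit(dataDirContents) {
  return dataDirContents.length === 0;
}
```

So a volume that already contains database files from an earlier run silently skips MONGO_INITDB_ROOT_USERNAME/MONGO_INITDB_ROOT_PASSWORD, no matter what the compose file now says.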
I had the same problem myself.
First, remove the username and password credentials:
environment:
MONGO_INITDB_ROOT_USERNAME: mongo
MONGO_INITDB_ROOT_PASSWORD: asdfasdf
After you remove the credentials, you can check the dbs and users on your MongoDB:
show dbs
show users
Those commands also need auth, so if you can see their output (it may be empty), your issue is fixed.
Then create an admin user:
use admin
db.createUser({user: "root", pwd: "root", roles:["root"]})
Then you can log out and try to connect to the shell with credentials as an admin.
In addition, if you are still having issues creating a new user: in my case I changed mechanisms to SCRAM-SHA-1, and then it worked like a charm. The db.createUser user document supports a mechanisms field:
{
user: "<name>",
pwd: passwordPrompt(), // Or "<cleartext password>"
customData: { <any information> },
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
],
authenticationRestrictions: [
{
clientSource: ["<IP>" | "<CIDR range>", ...],
serverAddress: ["<IP>" | "<CIDR range>", ...]
},
...
],
mechanisms: [ "<SCRAM-SHA-1|SCRAM-SHA-256>", ... ],
passwordDigestor: "<server|client>"
}
I had the same problem myself; follow these steps.
Steps 1 and 2 delete the old configuration and then set and apply the new configuration; this is important:
Delete the mongo container:
docker rm mongo -f
If you have created volumes, delete them:
docker volume rm $(docker volume ls -q) -f
In the ports field of docker-compose.yml set:
- 27018:27017
It is important that the mapping is not 27017:27017; in my case that was generating a conflict.
Bring up docker compose:
docker-compose up
Now try the connection with authentication!
Example of docker-compose.yml:
mongo:
container_name: mongo
image: mongo:4.4
restart: always
environment:
TZ: "Europe/Madrid"
MONGO_INITDB_ROOT_USERNAME: "user"
MONGO_INITDB_ROOT_PASSWORD: "admin1"
volumes:
- ./mongoDataBase:/data/db
ports:
- 27018:27017
Best regards!

Heroku can't connect with Postgres DB/Knex/Express

I have an Express API deployed to Heroku, but when I attempt to run the migrations, it throws the following error:
heroku run knex migrate:latest
Running knex migrate:latest on ⬢ bookmarks-node-api... up, run.9925 (Free)
Using environment: production
Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1117:14)
In my knexfile.js, I have:
production: {
client: 'postgresql',
connection: {
database: process.env.DATABASE_URL
},
pool: {
min: 2,
max: 10
},
migrations: {
directory: './database/migrations'
}
}
I also tried setting tableName: 'knex_migrations' on the migrations config, which throws the error:
heroku run knex migrate:latest
Running knex migrate:latest on ⬢ bookmarks-node-api... up, run.7739 (Free)
Using environment: production
Error: ENOENT: no such file or directory, scandir '/app/migrations'
Here is the config as set in Heroku:
-node-api git:(master) heroku pg:info
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 10.7
Created: 2019-02-21 12:58 UTC
Data Size: 7.6 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
I think the issue is that, for some reason, it is looking at localhost for the database, as if the environment were being read as development, though the trace shows Using environment: production.
When you provide an object as your connection, you're providing the individual parts of the connection information. Here, you're saying that the name of your database is everything contained in process.env.DATABASE_URL:
connection: {
database: process.env.DATABASE_URL
},
Any keys you don't provide values for fall back to defaults. An example is the host key, which defaults to the local machine.
But the DATABASE_URL environment variable contains all of the information that you need to connect (host, port, user, password, and database name) in a single string. That whole value should be your connection setting:
connection: process.env.DATABASE_URL,
You should check to see if the Postgres add-on is setup as described in these docs since the DATABASE_URL is automatically set for you as stated here.
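To make the difference concrete, here's roughly what the driver does with a URL-style connection string - one DATABASE_URL carries every part that the object form would otherwise spell out field by field (the URL below is a made-up example, not a real Heroku credential):

```javascript
// Sketch: decomposing a Heroku-style DATABASE_URL into the fields knex's
// object form would otherwise need. The URL here is illustrative only.
const url = new URL("postgres://someuser:secret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d123abc");
const parts = {
  host: url.hostname,
  port: Number(url.port),
  user: url.username,
  password: url.password,
  database: url.pathname.slice(1), // strip the leading '/'
};
```

Passing the whole string as `connection` lets the driver do exactly this split; tucking it under `database:` leaves `host` to default to the local machine, which is the ECONNREFUSED 127.0.0.1 seen above.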

what is client property of knexfile.js

In the knex documentation on configuring knexfile.js for PostgreSQL, there is a property called client, which looks this way:
...
client: 'pg'
...
However, going through some other projects that use PostgreSQL, I noticed that they have a different value there, which looks this way:
...
client: 'postgresql'
...
Does this string correspond to the name of some command-line tool used with the project, or am I misunderstanding something?
PostgreSQL is based on a server-client model, as described in 'Architectural Fundamentals'.
psql is the standard CLI client of Postgres, as mentioned here in the docs.
A client may also be a GUI such as pgAdmin, or a node package such as 'pg' - here's a list.
The client parameter is required and determines which client adapter will be used with the library.
You should also read the docs of 'Server Setup and Operation'
To initialize the library you can do the following (in this case on localhost):
var knex = require('knex')({
client: 'mysql',
connection: {
host : '127.0.0.1',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
}
})
The standard user of the client daemon is 'postgres' - which you can use, of course, but it's highly advisable to create a new user as stated in the docs and/or apply a password to the standard user 'postgres'.
On Debian Stretch, e.g.:
# su - postgres
$ psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'SecretPasswordHere';"
Make sure you delete the command-line history so nobody can read out your password:
rm ~/.psql_history
Now you can add a new user (e.g. foobar) on the system and for Postgres:
# adduser foobar
and
# su - postgres
$ createuser --pwprompt --interactive foobar
Let's look at the following setup:
module.exports = {
development: {
client: 'xyz',
connection: { user: 'foobar', database: 'my_app' }
},
production: { client: 'abc', connection: process.env.DATABASE_URL }
};
This basically tells us the following:
In dev - use the client xyz to connect to PostgreSQL's database my_app as the user foobar (in this case without a password).
In prod - read the db-server URL from the global environment variable and connect via the client abc.
Here's an example of how node's pg client package opens a connection pool:
const pool = new Pool({
user: 'foobar',
host: 'someUrl',
database: 'someDataBaseName',
password: 'somePWD',
port: 5432,
})
If you could clarify or elaborate on your setup or what you'd like to achieve, I could give you more detailed info - but I hope that helped anyway.
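As a footnote on the original question: knex accepts several aliases for the same adapter, which is why both 'pg' and 'postgresql' behave identically. An illustrative sketch of that normalization (not the actual knex source; the alias table here is a simplified assumption):

```javascript
// Illustrative sketch: normalizing client aliases to one adapter name,
// mirroring how knex treats 'postgres'/'postgresql' as the 'pg' client.
const CLIENT_ALIASES = {
  postgres: "pg",
  postgresql: "pg",
  sqlite: "sqlite3",
};

function resolveClient(name) {
  return CLIENT_ALIASES[name] || name;
}
```

So the client string is not a command-line tool; it just selects which node driver package knex loads.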