I'm trying to build a Phoenix application with Distillery, but when I try to start the compiled release I keep getting an error about not being able to find the database.
17:44:50.925 [error] GenServer #PID<0.1276.0> terminating
** (KeyError) key :database not found in: [hostname: "localhost", username: "martinffx", types: Ecto.Adapters.Postgres.TypeModule, port: 5432, name: Glitchr.Repo.Pool, otp_app: :glitchr, repo: Glitchr.Repo, adapter: Ecto.Adapters.Postgres, pool_size: 10, ssl: true, pool_timeout: 5000, timeout: 15000, adapter: Ecto.Adapters.Postgres, pool_size: 10, ssl: true, pool: DBConnection.Poolboy]
(elixir) lib/keyword.ex:343: Keyword.fetch!/2
(postgrex) lib/postgrex/protocol.ex:76: Postgrex.Protocol.connect/1
(db_connection) lib/db_connection/connection.ex:134: DBConnection.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Last message: nil
Now, the details above aren't what I've configured in prod.exs:
config :glitchr, Glitchr.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [host: "localhost", port: {:system, "PORT"}],
  cache_static_manifest: "priv/static/manifest.json",
  server: true,
  root: ".",
  version: Mix.Project.config[:version],
  secret_key_base: System.get_env("SECRET_KEY_BASE")

config :glitchr, Glitchr.Repo,
  adapter: Ecto.Adapters.Postgres,
  url: System.get_env("DATABASE_URL"),
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
  ssl: true
My database URL is DATABASE_URL=postgres://glitchr:password@db:5433/glitchr
Why is that? How can I debug what I'm not getting right?
Related
I'm trying to connect two Docker containers: one is LocalStack with a Lambda function I am invoking, the other is a MongoDB replica set. I tried connecting them with various network configurations, but I fail to reach Mongo from the Lambda.
The Lambda VPC config is empty: "VpcConfig": {}; I read that it should be empty if I want to access the external network.
Key points:
- I can connect to Mongo from my REST API app on my host.
- I can ping the mongo container from within the localstack container:
root#e42146a357e1:/opt/code/localstack# ping db
PING db (172.19.0.2) 56(84) bytes of data.
64 bytes from db.localstack_backend (172.19.0.2): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from db.localstack_backend (172.19.0.2): icmp_seq=2 ttl=64 time=0.038 ms
...
The error I get from Lambda:
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
at Timeout._onTimeout (/tmp/function.zipfile.d9dce9f0/node_modules/mongodb/lib/sdam/topology.js:277:38)
at listOnTimeout (node:internal/timers:564:17)
at process.processTimers (node:internal/timers:507:7) {
reason: TopologyDescription {
type: 'ReplicaSetNoPrimary',
servers: Map(3) {
'127.0.0.1:27017' => [ServerDescription],
'127.0.0.1:27018' => [ServerDescription],
'127.0.0.1:27019' => [ServerDescription]
},
stale: false,
compatible: true,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
setName: 'rs0',
maxElectionId: new ObjectId("7fffffff0000000000000001"),
maxSetVersion: 1,
commonWireVersion: 0,
logicalSessionTimeoutMinutes: null
},
code: undefined,
[Symbol(errorLabels)]: Set(0) {}
}
What I don't know is whether the error comes from the server or from the client. I also don't know why the servers in the error are listed as 127.0.0.1:port (see the probe sketch at the end of this question).
Connection string the Lambda uses: mongodb://db:27017,db:27018,db:27019/?replicaSet=rs0
I also tried with IPs from the Docker network.
Client connection:
const client = new MongoClient('mongodb://db:27017,db:27018,db:27019/?replicaSet=rs0', {
  maxPoolSize: 20,
  minPoolSize: 0,
  retryReads: true,
  retryWrites: true,
});
Docker compose file:
services:
  db:
    image: candis/mongo-replica-set
    ports:
      - "27017:27017"
      - "27018:27018"
      - "27019:27019"
    networks:
      - backend
    container_name: db
  localstack:
    image: localstack/localstack
    ports:
      - "4510-4559:4510-4559"
      - "4566:4566"
    networks:
      - backend
    container_name: localstack

networks:
  backend:
    driver: bridge
I was thinking that maybe the Lambda can't connect to the outside world due to the VPC configuration.
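For reference: with a replicaSet connection string, the driver discovers the set and then replaces the seed list with whatever host:port pairs the members advertise in the replica set configuration, so if the set was initiated with 127.0.0.1 addresses, that is what every client will be told to dial. Below is a minimal TypeScript probe (a sketch under assumptions, not part of the original setup; it presumes the mongodb Node.js driver v4+ and the db hostname from the compose file) that connects to one member directly and prints the advertised hosts:

import { MongoClient } from "mongodb";

// Connect to a single member with directConnection so the driver skips
// replica-set discovery; serverSelectionTimeoutMS keeps failures fast.
async function probeSingleMember(): Promise<void> {
  const client = new MongoClient("mongodb://db:27017/?directConnection=true", {
    serverSelectionTimeoutMS: 5000,
  });
  try {
    await client.connect();
    // isMaster (called "hello" on newer servers) reports the addresses the
    // replica set advertises to clients in "me" and "hosts"; seeing
    // 127.0.0.1 here would explain the error above.
    const hello = await client.db("admin").command({ isMaster: 1 });
    console.log(hello.me, hello.hosts);
  } finally {
    await client.close();
  }
}

probeSingleMember().catch(console.error);

If this direct connection succeeds while the replicaSet URI fails, the problem is likely the advertised member addresses rather than the Docker network or the Lambda VPC settings.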
I am using this npm package for the DB connection: pg (node-postgres).
I have two DBs that I need to connect to. I am able to connect to the first one, but on executing a query against the second it throws an error. Pool config and code snippet below. Any help would be much appreciated.
var pool = new pg.Pool(config.db);
var pool1 = new pg.Pool(config.db1);
tasks = postgreSQL.loadDBPlugin( pool );
on('task', tasks);
db: {
  user: "postgres",
  password: "*****",
  host: "localhost",
  port: "5432",
  database: "global"
},
db1: {
  user: "postgres",
  password: "*****",
  host: "localhost",
  port: "5432",
  database: "site"
},
This npm package is able to connect to two DBs simultaneously.
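For comparison, here is a minimal self-contained sketch (TypeScript, using node-postgres directly; the connection values mirror the db/db1 entries above, everything else is an assumption) that opens two independent pools and queries each one:

import { Pool } from "pg";

// Each Pool manages its own connections, so queries against one database
// should not interfere with the other.
const globalPool = new Pool({ host: "localhost", port: 5432, user: "postgres", password: "*****", database: "global" });
const sitePool = new Pool({ host: "localhost", port: 5432, user: "postgres", password: "*****", database: "site" });

async function main(): Promise<void> {
  const fromGlobal = await globalPool.query("SELECT current_database()");
  const fromSite = await sitePool.query("SELECT current_database()");
  console.log(fromGlobal.rows[0], fromSite.rows[0]);
  await globalPool.end();
  await sitePool.end();
}

main().catch(console.error);

If this stand-alone version works, the issue is more likely in how the second pool is (or isn't) passed to loadDBPlugin than in node-postgres itself.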
I'm trying to connect to an Azure PostgreSQL server from my local Strapi project (eventually to be deployed in a Docker container).
I have the connection configured according to the Strapi docs, including the cert downloaded from the Azure portal:
https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/configurations/required/databases.html#configuration-structure
my database.js file:
// PostgreSQL
////////////////////////////////////////
const fs = require("@strapi/strapi/lib/services/fs");
const parse = require("pg-connection-string").parse;
const db = parse("azure-conn-string");

module.exports = ({ env }) => ({
  connection: {
    client: "postgres",
    connection: {
      host: db.host,
      port: db.port,
      database: db.database,
      user: db.user,
      password: db.password,
      ssl: {
        ca: fs.readFileSync(`${__dirname}/db.crt.pem`).toString(),
      },
    },
  },
});
When starting the server I get this in the terminal:
error: no pg_hba.conf entry for host "ip-address", user "username", database "db_name", SSL off
at Parser.parseErrorMessage (/Users/x/x/dockertest/node_modules/pg-protocol/dist/parser.js:287:98)
at Parser.handlePacket (/Users/x/x/dockertest/node_modules/pg-protocol/dist/parser.js:126:29)
at Parser.parse (/Users/x/x/dockertest/node_modules/pg-protocol/dist/parser.js:39:38)
at Socket.<anonymous> (/Users/x/x/dockertest/node_modules/pg-protocol/dist/index.js:11:42)
at Socket.emit (events.js:400:28)
at Socket.emit (domain.js:475:12)
at addChunk (internal/streams/readable.js:293:12)
at readableAddChunk (internal/streams/readable.js:267:9)
at Socket.Readable.push (internal/streams/readable.js:206:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
It seems like it's trying to connect with SSL off; I'm not sure why, since it should be configured to be enabled. If I remove the SSL requirement on the Azure database, the connection goes through, so the problem seems to be something in the SSL config (a stand-alone node-postgres probe is sketched after the configs below).
Can anyone help?
module.exports = ({ env }) => ({
  defaultConnection: "default",
  connection: {
    client: "postgres",
    connection: {
      host: env("DATABASE_HOST", "localhost"),
      port: env.int("DATABASE_PORT", 5432),
      database: env("DATABASE_NAME", "postgres"),
      user: env("DATABASE_USER", "postgres"),
      password: env("DATABASE_PASSWORD", "0000"),
      schema: env("DATABASE_SCHEMA", "public"),
    },
  }
});
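To isolate the SSL question from Strapi, here is a minimal node-postgres probe (TypeScript; it reuses the azure-conn-string placeholder and the db.crt.pem path from database.js above, so those are assumptions carried over from the question) that connects with the CA bundle and certificate verification enabled:

import { readFileSync } from "fs";
import { Client } from "pg";
import { parse } from "pg-connection-string";

// Same placeholder connection string as in database.js above.
const db = parse("azure-conn-string");

const client = new Client({
  host: db.host ?? undefined,
  port: db.port ? Number(db.port) : 5432,
  database: db.database ?? undefined,
  user: db.user,
  password: db.password,
  ssl: {
    // Verify the Azure server certificate against the downloaded CA bundle.
    ca: readFileSync(`${__dirname}/db.crt.pem`).toString(),
    rejectUnauthorized: true,
  },
});

client
  .connect()
  .then(() => client.query("select version()"))
  .then((res) => console.log(res.rows[0]))
  .catch((err) => console.error(err))
  .finally(() => client.end());

If this probe reports the same "SSL off" pg_hba error, the certificate/SSL options themselves are the issue; if it connects, the problem sits in how Strapi picks up the connection block.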
I used scaffolding to generate a new microservice, then I made the following configuration for MongoDB:
logging:
  level:
    ROOT: DEBUG
    io.github.jhipster: DEBUG
    com.fzai.fileservice: DEBUG

eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

spring:
  profiles:
    active: dev
    include:
      - swagger
      # Uncomment to activate TLS for the dev profile
      #- tls
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
  mail:
    host: localhost
    port: 25
    username:
    password:
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces
  zipkin: # Use the "zipkin" Maven profile to have the Spring Cloud Zipkin dependencies
    base-url: http://localhost:9411
    enabled: false
    locator:
      discovery:
        enabled: true

server:
  port: 8081

# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================

jhipster:
  cache: # Cache configuration
    hazelcast: # Hazelcast distributed cache
      time-to-live-seconds: 3600
      backup-count: 1
      management-center: # Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  # CORS is disabled by default on microservices, as you should access them through a gateway.
  # If you want to enable it, please uncomment the configuration below.
  cors:
    allowed-origins: "*"
    allowed-methods: "*"
    allowed-headers: "*"
    exposed-headers: "Authorization,Link,X-Total-Count"
    allow-credentials: true
    max-age: 1800
  security:
    client-authorization:
      access-token-uri: http://uaa/oauth/token
      token-service-id: uaa
      client-id: internal
      client-secret: internal
  mail: # specific JHipster mail property, for standard properties see MailProperties
    base-url: http://127.0.0.1:8081
  metrics:
    logs: # Reports metrics in the logs
      enabled: false
      report-frequency: 60 # in seconds
  logging:
    use-json-format: false # By default, logs are not in Json format
    logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
      enabled: false
      host: localhost
      port: 5000
      queue-size: 512
  audit-events:
    retention-period: 30 # Number of days before audit events are deleted.

oauth2:
  signature-verification:
    public-key-endpoint-uri: http://uaa/oauth/token_key
    #ttl for public keys to verify JWT tokens (in ms)
    ttl: 3600000
    #max. rate at which public keys will be fetched (in ms)
    public-key-refresh-rate-limit: 10000
  web-client-configuration:
    #keep in sync with UAA configuration
    client-id: web_app
    secret: changeit
An error occurred while I was running the project:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [com/fzai/fileservice/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1771)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at com.fzai.fileservice.FileServiceApp.main(FileServiceApp.java:70)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:706)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:695)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:462)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:695)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:83)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:179)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:198)
at com.github.mongobee.dao.ChangeEntryIndexDao.findRequiredChangeAndAuthorIndex(ChangeEntryIndexDao.java:35)
at com.github.mongobee.dao.ChangeEntryDao.ensureChangeLogCollectionIndex(ChangeEntryDao.java:121)
at com.github.mongobee.dao.ChangeEntryDao.connectMongoDb(ChangeEntryDao.java:61)
at com.github.mongobee.Mongobee.execute(Mongobee.java:143)
at com.github.mongobee.Mongobee.afterPropertiesSet(Mongobee.java:126)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1830)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1767)
... 19 common frames omitted
But in my other, simpler Spring Boot project I used the same configuration, and it runs and works successfully:
spring:
  application:
    name: springboot1
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
This is the user and role I created:
{
    "_id" : "fileService.admin",
    "userId" : UUID("03f75395-f129-4273-b6a6-b2dc3d1f7974"),
    "user" : "admin",
    "db" : "fileService",
    "roles" : [
        {
            "role" : "dbOwner",
            "db" : "fileService"
        },
        {
            "role" : "readWrite",
            "db" : "fileService"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
}
I want to know what's wrong.
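For what it's worth, here is a minimal reproduction sketch outside Spring (TypeScript with the mongodb Node.js driver; credentials and host are copied from the YAML above, everything else is an assumption) that issues the same find against system.indexes that mongobee runs:

import { MongoClient } from "mongodb";

// Authenticate the same way the microservice does: user admin, password
// admin123, authentication database fileService.
const uri = "mongodb://admin:admin123@42.193.124.204:27017/?authSource=fileService";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    // The query from the stack trace: a find on the legacy system.indexes
    // namespace of the fileService database.
    const doc = await client
      .db("fileService")
      .collection("system.indexes")
      .findOne({ ns: "fileService.dbchangelog" });
    console.log(doc);
  } finally {
    await client.close();
  }
}

main().catch((err) => console.error(err));

If the same error code 13 comes back here, the restriction is enforced by the MongoDB server itself for this user rather than by anything in the Spring configuration; if the query succeeds, the difference lies in how the two projects initialise Mongobee.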
I'm using a mongox fork on my Elixir server.
It used to work well until today, when I started getting the error below:
GenServer #PID<0.23055.0> terminating
** (stop) exited in: GenServer.call(Mongo.PBKDF2Cache, {"a9f40827e764c2e9d74318e934596194", <<88, 76, 231, 25, 765, 153, 136, 68, 54, 109, 131, 126, 543, 654, 201, 250>>, 10000}, 5000)
** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
(elixir) lib/gen_server.ex:766: GenServer.call/3
(mongox) lib/mongo/connection/auth/scram.ex:66: Mongo.Connection.Auth.SCRAM.second_message/5
(mongox) lib/mongo/connection/auth/scram.ex:25: Mongo.Connection.Auth.SCRAM.conversation_first/6
(mongox) lib/mongo/connection/auth.ex:29: anonymous fn/3 in Mongo.Connection.Auth.run/1
(elixir) lib/enum.ex:2914: Enum.find_value_list/3
(mongox) lib/mongo/connection/auth.ex:28: Mongo.Connection.Auth.run/1
(mongox) lib/mongo/connection.ex:206: Mongo.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
Last message: nil
State: %{auth: [{"my-user", "mypassword"}], database: "my-db", opts: [backoff: 1000, hosts: [{"xxxxxx.mlab.com", "17060"}], port: 17060, hostname: 'xxxxxx.mlab.com', name: MongoPool, max_overflow: 500], queue: %{}, request_id: 0, socket: nil, tail: nil, timeout: 5000, wire_version: nil, write_concern: [w: 1]}
After digging into the code, I figured out that it fails on this line (in the deps):
https://github.com/work-capital/mongox/blob/feature/nan_type_support/lib/mongo/connection/auth/scram.ex#L66
when it tries to call pbkdf2, which makes a GenServer call:
def pbkdf2(password, salt, iterations) do
  GenServer.call(@name, {password, salt, iterations})
end
Is this an error with connecting to the Mongo instance (which is on mLab), or is it an issue with the code?
Here are my configs:
config.exs
config :mongo_app, MongoApp,
  host: "xxxx.mlab.com",
  database: "my-db",
  username: "my-user",
  password: "mypass**",
  port: "17060",
  pool_size: "100",
  max_overflow: "500"
mix.exs:
def application do
  [extra_applications: [:logger, :poolboy],
   mod: {MongoApp.Application, []}]
end

defp deps do
  [{:mongox, git: "https://github.com/work-capital/mongox.git", branch: "feature/nan_type_support"},
   {:poolboy, "~> 1.5"}]
end
application.ex:
defmodule MongoApp.Application do
  use Application

  def start(_type, _args) do
    MongoApp.connect
    import Supervisor.Spec, warn: false

    children = []

    opts = [strategy: :one_for_one, name: MongoApp.Supervisor]
    Supervisor.start_link(children, opts)
  end
end