I am trying to deploy an Elixir application in a Docker container. It deploys successfully, but when I call an API endpoint it returns the following error:
{
errors: {
detail: "Internal Server Error"
}
}
After checking the logs with docker logs [CONTAINER_ID], I see the following errors (it cannot connect to the database).
My database is on AWS Aurora, and I am able to connect to it using pgAdmin.
18:21:26.624 [error] #PID<0.652.0> running ABC.Endpoint (connection #PID<0.651.0>, stream id 1) terminated
Server: localhost:4000 (http)
Request: GET /api/v1/popular-searches/us/en
** (exit) an exception was raised:
** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2850ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:
1. Tracking down slow queries and making sure they are running fast enough
2. Increasing the pool_size (albeit it increases resource consumption)
3. Allowing requests to wait longer by increasing :queue_target and :queue_interval
See DBConnection.start_link/2 for more information
(ecto_sql 3.5.3) lib/ecto/adapters/sql.ex:751: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.5.3) lib/ecto/adapters/sql.ex:684: Ecto.Adapters.SQL.execute/5
(ecto 3.5.5) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
(ecto 3.5.5) lib/ecto/repo/queryable.ex:17: Ecto.Repo.Queryable.all/3
(ecto 3.5.5) lib/ecto/repo/queryable.ex:157: Ecto.Repo.Queryable.one!/3
(_api 0.1.1) lib/api_web/controllers/V1/cms_data_controller.ex:14: ApiWeb.V1.CMSDataController.get_popular_searches/2
(_api 0.1.1) lib/_api_web/controllers/V1/cms_data_controller.ex:1: ApiWeb.V1.CMSDataController.action/2
(_api 0.1.1) lib/_api_web/controllers/V1/cms_data_controller.ex:1: ApiWeb.V1.CMSDataController.phoenix_controller_pipeline/2
18:21:26.695 [error] Postgrex.Protocol (#PID<0.487.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.714 [error] Postgrex.Protocol (#PID<0.479.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.718 [error] Postgrex.Protocol (#PID<0.469.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.810 [error] Postgrex.Protocol (#PID<0.493.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :
I have checked the environment variables in the container and they are correct; the database URL is correct. My Dockerfile looks like this:
# base image elixir to start with
FROM elixir:1.13.4
# install hex package manager
RUN mix local.hex --force
RUN curl -o phoenix_new.ez https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez
RUN mix archive.install ./phoenix_new.ez
RUN mkdir /app
COPY . /app
WORKDIR /app
ENV MIX_ENV=prod
ENV PORT=4000
ENV DATABASE_URL=postgres://[URL]
RUN mix local.rebar --force
RUN mix deps.get --only prod
RUN mix compile
RUN mix phx.digest
CMD mix phx.server
I build the image and then start it like this:
docker build . -t [name]
docker run --name [name] -p 4000:4000 -d [name]
What am I doing wrong?
Any help would be appreciated.
I am using docker-compose to run InfluxDB and Cassandra for my application, but when I try to run the application locally on my Mac I get the errors below. I am new to both, so I am not sure where to change the IPs for the two Docker images.
For Cassandra:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused"), '::1': error(99, "Tried connecting to [('::1', 9042, 0, 0)]. Last error: Cannot assign requested address")})
For InfluxDB, when trying to create a user programmatically:
curl: (6) Could not resolve host: influxdb
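For reference, a minimal sketch of connecting from the host, assuming docker-compose publishes Cassandra on 9042 and InfluxDB on 8086 (e.g. ports "9042:9042" and "8086:8086") and the Python cassandra-driver and influxdb clients are in use. The compose service names ("cassandra", "influxdb") only resolve inside the compose network, so from the Mac itself the containers are reached through localhost; hosts, ports and credentials below are placeholders:

from cassandra.cluster import Cluster      # pip install cassandra-driver
from influxdb import InfluxDBClient        # pip install influxdb

# From the host, use localhost and the published ports, not the compose
# service names, which only resolve inside the compose network.
cassandra_session = Cluster(contact_points=["127.0.0.1"], port=9042).connect()

influx = InfluxDBClient(host="localhost", port=8086,
                        username="admin", password="admin")  # placeholder credentials
influx.ping()                                   # fails fast if InfluxDB is unreachable
influx.create_user("new_user", "new_password")  # the user-creation step attempted via curl above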
I have tried everything to connect my Chainlink node to my PostgreSQL database, with no luck. I have scoured the interwebs for answers to no avail...
Here is the error message I am receiving:
[ERROR] failed to initialize database, got error failed to connect to `host=/tmp user=root database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory)
Here is my .env file:
ROOT=/chainlink
LOG_LEVEL=debug
ETH_CHAIN_ID=42
MIN_OUTGOING_CONFIRMATIONS=2
LINK_CONTRACT_ADDRESS=0xa36085F69e2889c224210F603D836748e7dC0088
CHAINLINK_TLS_PORT=0
SECURE_COOKIES=false
GAS_UPDATER_ENABLED=true
ALLOW_ORIGINS=*
ETH_URL=wss://kovan.infura.io/ws/v3/id...
DATABASE_URL=https://chainlink-db-url://postgres:Password#chainlink-kovan:5432
I have tried every configuration of the connection string. Also, I am able to connect to the db via pgAdmin no problem, and the dbs are publicly accessible.
The PostgreSQL database is on AWS.
Please change the syntax of your DATABASE_URL to:
DATABASE_URL=postgresql://"username":"password"@"public-ip-pg-server":5432/"database-name"
just change:
"username" : you need to configure a new user, because the default/admin user postgres will not work for it.
"password" : password of the user
"public-ip-pg-server" : the public ip address of your postgresql-server
"database-name" : the name of your database
PS: delete all the " characters in your final syntax ;)
Here is the link to the official documentation: https://docs.chain.link/docs/connecting-to-a-remote-database/
I have successfully connected to a Postgres database using the go sql package:
...
db, err := sql.Open("postgres", connStr)
I then use the returned database to execute a (long running) query:
rows, err := db.Query(...)
And am getting the error:
dial tcp xx.xxx.xxx.xx:5432: connect: connection timed out
I have a couple of questions regarding this:
why is the connection timing out?
is there anything I can do to prevent it timing out?
sql.Open() may just validate its arguments without creating a connection to
the database. To verify that the data source name is valid, call Ping.
The sql.Open() function has only created an object; your pool is currently empty. In simple words, a connection to the database hasn't been established yet.
You need to call db.Ping() to make sure your pool has a working connection.
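The same lazy behaviour applies to the SQLAlchemy engines discussed further down this page: create_engine() does not open a connection either. A minimal sketch of an eager check, assuming a reachable Postgres server (the URL below is a placeholder):

import sqlalchemy
from sqlalchemy import text

# create_engine() only builds the engine and an empty pool; nothing has
# connected to the database yet
engine = sqlalchemy.create_engine(
    "postgresql+psycopg2://user:password@db-host:5432/mydb")  # placeholder URL

# Force one real connection and a trivial query, the rough equivalent of
# Go's db.Ping(), so a bad host or bad credentials fail at start-up
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))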
Services
My service is based on Flask + PostgreSQL + Gunicorn + Supervisor + Nginx.
When deployed with Docker, after running the service and then accessing the API, it sometimes returns the error message below and sometimes works fine.
The SQLAlchemy database connection also adds the parameter 'sslmode: disable'.
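For reference, a sketch of how that sslmode setting is typically passed through SQLAlchemy to psycopg2 (host, credentials and database name are placeholders):

from sqlalchemy import create_engine

# sslmode can be passed in the URL query string...
engine = create_engine(
    "postgresql+psycopg2://user:password@db-host:5432/mydb?sslmode=disable")

# ...or, equivalently, through connect_args, which psycopg2 receives directly
engine = create_engine(
    "postgresql+psycopg2://user:password@db-host:5432/mydb",
    connect_args={"sslmode": "disable"},
)

The traceback from the failing requests: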
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
Return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1107, in _execute_clauseelement
Distilled_params,
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1466, in _handle_dbapi_exception
Util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 383, in raise_from_cause
Reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context
Cursor, statement, parameters, context
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 552, in do_execute
Cursor.execute(statement, parameters)
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Information
Docker for Mac: version: 2.0.0.3 (31259)
macOS: version 10.14.2
Python: version 2.7.15
How to reproduce
When I check the port with the command
lsof -i:5432
(port 5432 is the default PostgreSQL port) and the console output is:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 86469 user 4u IPv6 0xxddd 0t0 TCP *:postgresql (LISTEN)
postgres 86469 user 5u IPv4 0xxddr 0t0 TCP *:postgresql (LISTEN)
then the API returns the error message:
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
but if the console output shows this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 62421 user 26u IPv4 0xe93 0t0 TCP 192.168.2.7:6435->192.168.2.7:postgresql (ESTABLISHED)
postgres 86460 user 4u IPv6 0xed3 0t0 TCP *:postgresql (LISTEN)
postgres 86460 user 5u IPv4 0xe513 0t0 TCP *:postgresql (LISTEN)
postgres 86856 user 11u IPv4 0xfe93 0t0 TCP 192.168.2.7:postgresql->192.168.2.7:6435 (ESTABLISHED)
then the API works fine.
Because of Docker for Mac?
See https://github.com/docker/for-mac/issues/2442 ; that issue did not solve my problem.
Noticed a similar problem?
See Python & Sqlalchemy - Connection pattern -> Disconnected from the remote server randomly ; that one did not solve my problem either.
Solution
flask_sqlalchemy needs the parameter pool_pre_ping:
from flask_sqlalchemy import SQLAlchemy as _BaseSQLAlchemy

class SQLAlchemy(_BaseSQLAlchemy):
    def apply_pool_defaults(self, app, options):
        # the base method takes (app, options); do not pass self again
        super(SQLAlchemy, self).apply_pool_defaults(app, options)
        options["pool_pre_ping"] = True
        # return the dict as well, since some Flask-SQLAlchemy versions use the return value
        return options

db = SQLAlchemy()
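A usage sketch, assuming the application-factory style of Flask-SQLAlchemy (the URI is a placeholder):

from flask import Flask

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = "postgresql+psycopg2://user:password@db-host:5432/mydb"  # placeholder
db.init_app(app)  # db is the patched SQLAlchemy instance defined above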
The same logic applies to sqlalchemy.orm (on which flask_sqlalchemy is based, by the way):
engine = sqlalchemy.create_engine(connection_string, pool_pre_ping=True)
More protection strategies can be set up, as described in the docs: https://docs.sqlalchemy.org/en/13/core/pooling.html#disconnect-handling-pessimistic
For example, here is my engine instantiation:
engine = sqlalchemy.create_engine(connection_string,
                                  pool_size=10,
                                  max_overflow=2,
                                  pool_recycle=300,
                                  pool_pre_ping=True,
                                  pool_use_lifo=True)
sqlalchemy.orm.sessionmaker(bind=engine, query_cls=RetryingQuery)
For RetryingQuery code, cf: Retry failed sqlalchemy queries
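For orientation only, a minimal sketch of what such a RetryingQuery can look like (the retry count and back-off are placeholders, and this is not the linked answer's exact code):

from time import sleep

from sqlalchemy.exc import OperationalError
from sqlalchemy.orm.query import Query

class RetryingQuery(Query):
    max_retries = 3  # placeholder value

    def __iter__(self):
        attempts = 0
        while True:
            attempts += 1
            try:
                return super(RetryingQuery, self).__iter__()
            except OperationalError:
                if attempts >= self.max_retries:
                    raise
                # the connection was dropped: reset the session and retry
                self.session.rollback()
                sleep(0.5)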
I'm posting my own answer to this, since none of the above addressed my particular setup (Postgres 12.2, SQLAlchemy 1.3).
To stop the OperationalErrors, I had to pass in some additional connect_args to create_engine:
create_engine(
    connection_string,
    pool_pre_ping=True,
    connect_args={
        # TCP keepalive settings handed straight to libpq/psycopg2
        "keepalives": 1,            # enable TCP keepalives
        "keepalives_idle": 30,      # seconds of inactivity before the first keepalive
        "keepalives_interval": 10,  # seconds between keepalives
        "keepalives_count": 5,      # dropped keepalives before the connection is considered dead
    }
)
Building on the Solution in the answer above and the info from @MaxBlax360's answer, I think the proper way to set these config values in Flask-SQLAlchemy is by setting app.config['SQLALCHEMY_ENGINE_OPTIONS']:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
# pool_pre_ping should help handle DB connection drops
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
app.config['SQLALCHEMY_DATABASE_URI'] = \
    f'postgresql+psycopg2://{POSTGRES_USER}:{dbpass}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DBNAME}'
db = SQLAlchemy(app)
See also Flask-SQLAlchemy docs on Configuration Keys
My db configuration:
app = Flask(__name__)
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# Play with the following options:
app.config['SQLALCHEMY_POOL_SIZE'] = 10
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 20
app.config['SQLALCHEMY_POOL_RECYCLE'] = 1800
db = SQLAlchemy(app)
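Equivalently, the same pool settings can be grouped into SQLALCHEMY_ENGINE_OPTIONS, which newer Flask-SQLAlchemy releases prefer over the individual SQLALCHEMY_POOL_* keys (those keys are deprecated there); a sketch:

app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    "pool_pre_ping": True,
    "pool_size": 10,
    "max_overflow": 20,
    "pool_recycle": 1800,
}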