I'm deploying my Flask application behind Nginx with Gunicorn. As of yesterday I could access the website from a remote device and run queries that affected the database, but today it abruptly stopped working and I get the following error in the log file.
Connection._handle_dbapi_exception_noconnection(
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2117, in _handle_dbapi_exception_noconnection
    util.raise_(
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3280, in _wrap_pool_connect
    return fn()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 310, in connect
    return _ConnectionFairy._checkout(self)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 868, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 476, in checkout
    rec = pool._do_get()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 146, in _do_get
    self._dec_overflow()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 143, in _do_get
    return self._create_connection()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 256, in _create_connection
    return _ConnectionRecord(self)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 371, in __init__
    self.__connect()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 666, in __connect
    pool.logger.debug("Error on connect(): %s", e)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
    self.dbapi_connection = connection = pool._invoke_creator(self)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/engine/create.py", line 590, in connect
    return dialect.connect(*cargs, **cparams)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 597, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL: the database system is starting up
(Background on this error at: https://sqlalche.me/e/14/e3q8)
[2022-09-29 15:20:41 +0300] [29652] [INFO] Worker exiting (pid: 29652)
Traceback (most recent call last):
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 209, in run
    self.sleep()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 357, in sleep
    ready = select.select([self.PIPE[0]], [], [], 1.0)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/bin/gunicorn", line 8, in <module>
    sys.exit(run())
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 231, in run
    super().run()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run
    Arbiter(self).run()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 229, in run
    self.halt(reason=inst.reason, exit_status=inst.exit_status)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 342, in halt
    self.stop()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 393, in stop
    time.sleep(0.1)
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/home/dancungerald/Documents/Python/SCHEYE/scheye_venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
[2022-09-29 15:20:42 +0300] [29673] [INFO] Starting gunicorn 20.1.0
[2022-09-29 15:20:42 +0300] [29673] [INFO] Listening at: http://127.0.0.1:8000 (29673)
[2022-09-29 15:20:42 +0300] [29673] [INFO] Using worker: sync
[2022-09-29 15:20:42 +0300] [29675] [INFO] Booting worker with pid: 29675
[2022-09-29 15:20:42 +0300] [29676] [INFO] Booting worker with pid: 29676
[2022-09-29 15:20:42 +0300] [29677] [INFO] Booting worker with pid: 29677
[2022-09-29 15:20:42 +0300] [29678] [INFO] Booting worker with pid: 29678
[2022-09-29 15:20:42 +0300] [29679] [INFO] Booting worker with pid: 29679
[2022-09-29 15:20:42 +0300] [29680] [INFO] Booting worker with pid: 29680
[2022-09-29 15:20:42 +0300] [29681] [INFO] Booting worker with pid: 29681
[2022-09-29 15:20:43 +0300] [29682] [INFO] Booting worker with pid: 29682
[2022-09-29 15:20:43 +0300] [29683] [INFO] Booting worker with pid: 29683
[2022-09-29 15:46:12 +0300] [29682] [INFO] Worker exiting (pid: 29682)
[2022-09-29 15:46:12 +0300] [29681] [INFO] Worker exiting (pid: 29681)
[2022-09-29 15:46:12 +0300] [29683] [INFO] Worker exiting (pid: 29683)
[2022-09-29 15:46:12 +0300] [29679] [INFO] Worker exiting (pid: 29679)
[2022-09-29 15:46:12 +0300] [29678] [INFO] Worker exiting (pid: 29678)
[2022-09-29 15:46:12 +0300] [29677] [INFO] Worker exiting (pid: 29677)
[2022-09-29 15:46:12 +0300] [29680] [INFO] Worker exiting (pid: 29680)
[2022-09-29 15:46:12 +0300] [29675] [INFO] Worker exiting (pid: 29675)
[2022-09-29 15:46:12 +0300] [29676] [INFO] Worker exiting (pid: 29676)
[2022-09-29 15:46:12 +0300] [29673] [INFO] Handling signal: term
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29675 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29678 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29681 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29682 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29677 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29679 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29680 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29676 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [WARNING] Worker with pid 29683 was terminated due to signal 15
[2022-09-29 15:46:12 +0300] [29673] [INFO] Shutting down: Master
[2022-09-29 15:46:14 +0300] [30879] [INFO] Starting gunicorn 20.1.0
[2022-09-29 15:46:14 +0300] [30879] [INFO] Listening at: http://127.0.0.1:8000 (30879)
[2022-09-29 15:46:14 +0300] [30879] [INFO] Using worker: sync
[2022-09-29 15:46:14 +0300] [30881] [INFO] Booting worker with pid: 30881
[2022-09-29 15:46:14 +0300] [30882] [INFO] Booting worker with pid: 30882
[2022-09-29 15:46:14 +0300] [30883] [INFO] Booting worker with pid: 30883
[2022-09-29 15:46:14 +0300] [30884] [INFO] Booting worker with pid: 30884
[2022-09-29 15:46:14 +0300] [30885] [INFO] Booting worker with pid: 30885
[2022-09-29 15:46:14 +0300] [30886] [INFO] Booting worker with pid: 30886
[2022-09-29 15:46:14 +0300] [30887] [INFO] Booting worker with pid: 30887
[2022-09-29 15:46:14 +0300] [30888] [INFO] Booting worker with pid: 30888
[2022-09-29 15:46:14 +0300] [30889] [INFO] Booting worker with pid: 30889
It suggests that Postgres is not running, but I did start the server, and when I run netstat -pln | grep 5432 I can see that Postgres is listening on port 5432, so I doubt it has anything to do with the database connection.
If it helps, it worked while I was on my home network/Wi-Fi, but it failed today when I ran it on an institutional network/Wi-Fi.
I have no idea what transpired for the app to behave this way. Any help would be highly appreciated.
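Since the FATAL message says the database system is still starting up, the app may simply be racing Postgres at boot. As an aside (this is my sketch, not part of the original setup), a generic retry helper can make the first connection attempt tolerant of that window; the function names here are illustrative:

```python
import time


def wait_for(connect, attempts=10, delay=3.0):
    """Retry `connect` until it succeeds or the attempts run out.

    `connect` is any zero-argument callable that raises on failure,
    e.g. `lambda: engine.connect()` for a SQLAlchemy engine whose
    database may still be booting.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # OperationalError in the SQLAlchemy case
            last_exc = exc
            time.sleep(delay)
    raise RuntimeError("database never became ready") from last_exc
```

Calling `wait_for(lambda: engine.connect())` once during app startup would then ride out a Postgres that is still replaying its WAL after an unclean shutdown.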
I enabled Postgres to start on boot with supervisord, as per this website. Here's what my supervisor configuration file looks like:
[program:appName]
directory=/Project/File/Directory/appName
command=/Project/File/Location/appName/app_venv/bin/gunicorn -w 9 run:app
user=username
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/appName/app.err.log
stdout_logfile=/var/log/appName/app.out.log
[program:postgresql]
user=postgres
command=/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data start
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
redirect_stderr=true
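One thing worth noting about the config above: pg_ctl's start command daemonizes the server and exits, so supervisord loses track of the process and may keep respawning it, which would itself produce "the database system is starting up" windows. A common fix (a sketch; the install paths are taken from the config above and must match your system) is to run the postgres binary directly in the foreground:

```ini
[program:postgresql]
user=postgres
; run the server in the foreground so supervisord can supervise it;
; pg_ctl start would daemonize and exit, confusing supervisord
command=/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
redirect_stderr=true
```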
Related
I am attempting to connect my RDS Postgres DB to Heroku. I'm not sure what I'm doing wrong, but every time I deploy to Heroku my old sqlite3 DB keeps getting used. Locally my Postgres DB works fine, but it's not being used on Heroku. I am using dj_database_url. I've gone through and made sure my database settings match what I have on Heroku over and over, and I'm still not sure why it isn't taking effect.
traceback
2020-07-11T19:26:13.248140+00:00 app[web.1]: [2020-07-11 19:26:13 +0000] [4] [INFO] Shutting down: Master
2020-07-11T19:26:13.320264+00:00 heroku[web.1]: Process exited with status 0
2020-07-11T19:26:24.501196+00:00 heroku[web.1]: Starting process with command `gunicorn dating_project.wsgi --log-file -`
2020-07-11T19:26:25.000000+00:00 app[api]: Build succeeded
2020-07-11T19:26:27.286692+00:00 app[web.1]: [2020-07-11 19:26:27 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-07-11T19:26:27.287606+00:00 app[web.1]: [2020-07-11 19:26:27 +0000] [4] [INFO] Listening at: http://0.0.0.0:14756 (4)
2020-07-11T19:26:27.287746+00:00 app[web.1]: [2020-07-11 19:26:27 +0000] [4] [INFO] Using worker: sync
2020-07-11T19:26:27.293342+00:00 app[web.1]: [2020-07-11 19:26:27 +0000] [10] [INFO] Booting worker with pid: 10
2020-07-11T19:26:27.343582+00:00 app[web.1]: [2020-07-11 19:26:27 +0000] [11] [INFO] Booting worker with pid: 11
2020-07-11T19:26:27.851588+00:00 heroku[web.1]: State changed from starting to up
2020-07-11T19:26:59.695073+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/" host=cupids-corner.herokuapp.com request_id=f5dd71c5-d6a5-4a06-8d8e-b506859c933a fwd="100.36.43.223" dyno=web.1 connect=1ms service=30001ms status=503 bytes=0 protocol=https
2020-07-11T19:27:00.453216+00:00 app[web.1]: [2020-07-11 19:27:00 +0000] [4] [CRITICAL] WORKER TIMEOUT (pid:11)
2020-07-11T19:27:01.464739+00:00 app[web.1]: [2020-07-11 19:27:01 +0000] [14] [INFO] Booting worker with pid: 14
2020-07-11T19:29:18.266925+00:00 app[api]: Release v77 created by user rezazandirz@gmail.com
2020-07-11T19:29:18.266925+00:00 app[api]: Remove DATABASE_URL config vars by user rezazandirz@gmail.com
2020-07-11T19:29:18.642733+00:00 heroku[web.1]: Restarting
2020-07-11T19:29:18.654156+00:00 heroku[web.1]: State changed from up to starting
2020-07-11T19:29:20.114791+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-07-11T19:29:20.154180+00:00 app[web.1]: [2020-07-11 15:29:20 -0400] [14] [INFO] Worker exiting (pid: 14)
2020-07-11T19:29:20.155794+00:00 app[web.1]: [2020-07-11 15:29:20 -0400] [10] [INFO] Worker exiting (pid: 10)
2020-07-11T19:29:20.159258+00:00 app[web.1]: [2020-07-11 19:29:20 +0000] [4] [INFO] Handling signal: term
2020-07-11T19:29:20.257065+00:00 app[web.1]: [2020-07-11 19:29:20 +0000] [4] [INFO] Shutting down: Master
2020-07-11T19:29:20.343634+00:00 heroku[web.1]: Process exited with status 0
2020-07-11T19:29:21.811888+00:00 app[api]: Release v78 created by user rezazandirz@gmail.com
2020-07-11T19:29:21.811888+00:00 app[api]: Set DATABASE_URL config vars by user rezazandirz@gmail.com
2020-07-11T19:29:23.059673+00:00 heroku[web.1]: Restarting
2020-07-11T19:29:28.210410+00:00 heroku[web.1]: Starting process with command `gunicorn dating_project.wsgi --log-file -`
2020-07-11T19:29:30.070467+00:00 app[web.1]: [2020-07-11 19:29:30 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-07-11T19:29:30.071054+00:00 app[web.1]: [2020-07-11 19:29:30 +0000] [4] [INFO] Listening at: http://0.0.0.0:29527 (4)
2020-07-11T19:29:30.071160+00:00 app[web.1]: [2020-07-11 19:29:30 +0000] [4] [INFO] Using worker: sync
2020-07-11T19:29:30.075339+00:00 app[web.1]: [2020-07-11 19:29:30 +0000] [10] [INFO] Booting worker with pid: 10
2020-07-11T19:29:30.144814+00:00 app[web.1]: [2020-07-11 19:29:30 +0000] [11] [INFO] Booting worker with pid: 11
2020-07-11T19:29:31.175076+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-07-11T19:29:31.203882+00:00 app[web.1]: [2020-07-11 15:29:31 -0400] [10] [INFO] Worker exiting (pid: 10)
2020-07-11T19:29:31.204310+00:00 app[web.1]: [2020-07-11 19:29:31 +0000] [4] [INFO] Handling signal: term
2020-07-11T19:29:31.204311+00:00 app[web.1]: [2020-07-11 15:29:31 -0400] [11] [INFO] Worker exiting (pid: 11)
2020-07-11T19:29:31.304852+00:00 app[web.1]: [2020-07-11 19:29:31 +0000] [4] [INFO] Shutting down: Master
2020-07-11T19:29:31.389352+00:00 heroku[web.1]: Process exited with status 0
2020-07-11T19:29:33.605449+00:00 heroku[web.1]: Starting process with command `gunicorn dating_project.wsgi --log-file -`
2020-07-11T19:29:35.686242+00:00 app[web.1]: [2020-07-11 19:29:35 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-07-11T19:29:35.686798+00:00 app[web.1]: [2020-07-11 19:29:35 +0000] [4] [INFO] Listening at: http://0.0.0.0:18649 (4)
2020-07-11T19:29:35.686918+00:00 app[web.1]: [2020-07-11 19:29:35 +0000] [4] [INFO] Using worker: sync
2020-07-11T19:29:35.690770+00:00 app[web.1]: [2020-07-11 19:29:35 +0000] [10] [INFO] Booting worker with pid: 10
2020-07-11T19:29:35.717528+00:00 app[web.1]: [2020-07-11 19:29:35 +0000] [11] [INFO] Booting worker with pid: 11
2020-07-11T19:29:35.746321+00:00 app[api]: Remove DATABASE_URL config vars by user rezazandirz@gmail.com
2020-07-11T19:29:35.746321+00:00 app[api]: Release v79 created by user rezazandirz@gmail.com
2020-07-11T19:29:35.798708+00:00 heroku[web.1]: State changed from starting to up
2020-07-11T19:29:36.594691+00:00 heroku[web.1]: Restarting
2020-07-11T19:29:36.607908+00:00 heroku[web.1]: State changed from up to starting
2020-07-11T19:29:37.305963+00:00 app[api]: Release v80 created by user rezazandirz@gmail.com
2020-07-11T19:29:37.305963+00:00 app[api]: Set DATABASE_URL config vars by user rezazandirz@gmail.com
2020-07-11T19:29:37.529577+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-07-11T19:29:37.557612+00:00 app[web.1]: [2020-07-11 19:29:37 +0000] [4] [INFO] Handling signal: term
2020-07-11T19:29:37.557644+00:00 app[web.1]: [2020-07-11 15:29:37 -0400] [11] [INFO] Worker exiting (pid: 11)
2020-07-11T19:29:37.557800+00:00 app[web.1]: [2020-07-11 15:29:37 -0400] [10] [INFO] Worker exiting (pid: 10)
2020-07-11T19:29:37.658117+00:00 app[web.1]: [2020-07-11 19:29:37 +0000] [4] [INFO] Shutting down: Master
2020-07-11T19:29:37.723003+00:00 heroku[web.1]: Process exited with status 0
2020-07-11T19:29:37.726220+00:00 heroku[web.1]: Restarting
2020-07-11T19:29:46.309454+00:00 heroku[web.1]: Starting process with command `gunicorn dating_project.wsgi --log-file -`
2020-07-11T19:29:47.301312+00:00 app[api]: Release v81 created by user rezazandirz@gmail.com
2020-07-11T19:29:47.301312+00:00 app[api]: Set DATABASE_URL config vars by user rezazandirz@gmail.com
2020-07-11T19:29:47.523466+00:00 heroku[web.1]: Starting process with command `gunicorn dating_project.wsgi --log-file -`
2020-07-11T19:29:47.857179+00:00 heroku[web.1]: Restarting
2020-07-11T19:29:48.340854+00:00 app[web.1]: [2020-07-11 19:29:48 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-07-11T19:29:48.341491+00:00 app[web.1]: [2020-07-11 19:29:48 +0000] [4] [INFO] Listening at: http://0.0.0.0:44163 (4)
2020-07-11T19:29:48.341618+00:00 app[web.1]: [2020-07-11 19:29:48 +0000] [4] [INFO] Using worker: sync
2020-07-11T19:29:48.345653+00:00 app[web.1]: [2020-07-11 19:29:48 +0000] [10] [INFO] Booting worker with pid: 10
2020-07-11T19:29:48.445622+00:00 app[web.1]: [2020-07-11 19:29:48 +0000] [12] [INFO] Booting worker with pid: 12
2020-07-11T19:29:49.699615+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-07-11T19:29:49.736903+00:00 app[web.1]: [2020-07-11 15:29:49 -0400] [10] [INFO] Worker exiting (pid: 10)
2020-07-11T19:29:49.737269+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [4] [INFO] Handling signal: term
2020-07-11T19:29:49.737373+00:00 app[web.1]: [2020-07-11 15:29:49 -0400] [12] [INFO] Worker exiting (pid: 12)
2020-07-11T19:29:49.837631+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [4] [INFO] Shutting down: Master
2020-07-11T19:29:49.909420+00:00 heroku[web.1]: Process exited with status 0
2020-07-11T19:29:49.923941+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-07-11T19:29:49.924487+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [4] [INFO] Listening at: http://0.0.0.0:28971 (4)
2020-07-11T19:29:49.924586+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [4] [INFO] Using worker: sync
2020-07-11T19:29:49.928891+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [10] [INFO] Booting worker with pid: 10
2020-07-11T19:29:49.931500+00:00 app[web.1]: [2020-07-11 19:29:49 +0000] [11] [INFO] Booting worker with pid: 11
2020-07-11T19:29:51.034436+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-07-11T19:29:51.080195+00:00 app[web.1]: [2020-07-11 15:29:51 -0400] [11] [INFO] Worker exiting (pid: 11)
2020-07-11T19:29:51.080236+00:00 app[web.1]: [2020-07-11 19:29:51 +0000] [4] [INFO] Handling signal: term
2020-07-11T19:29:51.081422+00:00 app[web.1]: [2020-07-11 15:29:51 -0400] [10] [INFO] Worker exiting (pid: 10)
2020-07-11T19:29:51.181003+00:00 app[web.1]: [2020-07-11 19:29:51 +0000] [4] [INFO] Shutting down: Master
2020-07-11T19:29:51.237472+00:00 heroku[web.1]: Process exited with status 0
2020-07-11T19:29:59.954021+00:00 heroku[web.1]: Starting process with command `gunicorn dating_project.wsgi --log-file -`
2020-07-11T19:30:02.457037+00:00 app[web.1]: [2020-07-11 19:30:02 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-07-11T19:30:02.457662+00:00 app[web.1]: [2020-07-11 19:30:02 +0000] [4] [INFO] Listening at: http://0.0.0.0:4135 (4)
2020-07-11T19:30:02.457742+00:00 app[web.1]: [2020-07-11 19:30:02 +0000] [4] [INFO] Using worker: sync
2020-07-11T19:30:02.462351+00:00 app[web.1]: [2020-07-11 19:30:02 +0000] [10] [INFO] Booting worker with pid: 10
2020-07-11T19:30:02.493543+00:00 app[web.1]: [2020-07-11 19:30:02 +0000] [11] [INFO] Booting worker with pid: 11
2020-07-11T19:30:03.007273+00:00 heroku[web.1]: State changed from starting to up
2020-07-11T20:02:10.400956+00:00 heroku[web.1]: Idling
2020-07-11T20:02:10.403216+00:00 heroku[web.1]: State changed from up to down
2020-07-11T20:02:12.198104+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2020-07-11T20:02:12.267882+00:00 app[web.1]: [2020-07-11 16:02:12 -0400] [10] [INFO] Worker exiting (pid: 10)
2020-07-11T20:02:12.320113+00:00 app[web.1]: [2020-07-11 16:02:12 -0400] [11] [INFO] Worker exiting (pid: 11)
2020-07-11T20:02:12.347672+00:00 app[web.1]: [2020-07-11 20:02:12 +0000] [4] [INFO] Handling signal: term
2020-07-11T20:02:12.451184+00:00 app[web.1]: [2020-07-11 20:02:12 +0000] [4] [INFO] Shutting down: Master
2020-07-11T20:02:12.705666+00:00 heroku[web.1]: Process exited with status 0
settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'cupids-corner',
'USER': 'rezazandi',
'PASSWORD': '***********',
'HOST' : 'cupids-corner.cp5uqhct8bo1.us-east-1.rds.amazonaws.com',
'PORT' : '5432'
}
}
import dj_database_url
db_from_env = dj_database_url.config(conn_max_age=600)
DATABASES['default'].update(db_from_env)
heroku.com/config_vars
Key: DATABASE_URL
Value: postgres://rezazandi:(password)@cupids-corner.cp5uqhct8bo1.us-east-1.rds.amazonaws.com:5432/cupids-corner
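For context on what dj_database_url does with that config var: it is essentially URL parsing into the shape Django's DATABASES dict expects. A rough stdlib-only sketch (the URL below is a placeholder, not the real credentials):

```python
from urllib.parse import urlparse


def parse_database_url(url):
    """Split a postgres:// URL into the pieces Django's DATABASES dict needs.

    This mimics the core of what dj_database_url.config() produces; it is
    an illustration, not the library's actual implementation.
    """
    p = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": p.path.lstrip("/"),
        "USER": p.username,
        "PASSWORD": p.password,
        "HOST": p.hostname,
        "PORT": str(p.port or 5432),
    }


# placeholder DSN, same shape as the Heroku config var above
cfg = parse_database_url("postgres://user:secret@db.example.com:5432/mydb")
```

If the key or the URL is wrong, `DATABASES['default'].update(db_from_env)` updates nothing and Django silently falls back to whatever `DATABASES['default']` already held, which matches the "old sqlite3 db keeps getting used" symptom.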
The solution to my problem was that I had to go into my RDS on AWS and change my inbound rules to allow ALL connections. After that, it worked.
I'm running django in EKS (kubernetes). I have a run script that executes
exec /usr/local/bin/gunicorn config.wsgi --timeout 30 -b 0.0.0.0:8000 --chdir /app --workers 1 --worker-tmp-dir /dev/shm --threads 2
but when I check the container logs, it seems to be ignoring the fact that I told it to run more than a single thread
| [2020-03-12 03:32:33 +0000] [28] [INFO] Starting gunicorn 20.0.4
│ [2020-03-12 03:32:33 +0000] [28] [INFO] Listening at: http://0.0.0.0:8000 (28)
│ [2020-03-12 03:32:33 +0000] [28] [INFO] Using worker: sync
│ [2020-03-12 03:32:33 +0000] [30] [INFO] Booting worker with pid: 30
Has anyone else experienced this, or can see something that I'm just not seeing in my config?
TIA
You didn't specify a worker class, so it probably tried to switch to gthread for you, but you likely don't have the futures library loadable. Regardless, I really don't recommend gunicorn in k8s. Twisted Web on a thread pool is a much better bet.
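If you'd rather not rely on any automatic worker selection, you can pin the worker class explicitly. A minimal gunicorn.conf.py sketch (these are standard gunicorn setting names; the values mirror the command line in the question):

```python
# gunicorn.conf.py -- pin the threaded worker explicitly
bind = "0.0.0.0:8000"
workers = 1
threads = 2
worker_class = "gthread"      # threaded worker; don't depend on auto-selection
worker_tmp_dir = "/dev/shm"
timeout = 30
```

Then start it with `gunicorn -c gunicorn.conf.py config.wsgi` and the boot log should report the gthread worker instead of sync.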
Waiting for the server to restart when working with Play costs us a lot of time.
One thing I see in the log is that shutting down and starting the HikariPool takes a long time (> 40 seconds).
Here is the log:
2019-10-31 09:11:47,327 [info] application - Shutting down connection pool.
2019-10-31 09:11:47,328 [info] c.z.h.HikariDataSource - HikariPool-58 - Shutdown initiated...
2019-10-31 09:11:53,629 [info] c.z.h.HikariDataSource - HikariPool-58 - Shutdown completed.
2019-10-31 09:11:53,629 [info] application - Shutting down connection pool.
2019-10-31 09:11:53,629 [info] c.z.h.HikariDataSource - HikariPool-59 - Shutdown initiated...
2019-10-31 09:11:53,636 [info] c.z.h.HikariDataSource - HikariPool-59 - Shutdown completed.
2019-10-31 09:11:53,636 [info] application - Shutting down connection pool.
2019-10-31 09:11:53,636 [info] c.z.h.HikariDataSource - HikariPool-60 - Shutdown initiated...
2019-10-31 09:11:53,640 [info] c.z.h.HikariDataSource - HikariPool-60 - Shutdown completed.
....
2019-10-31 09:12:26,454 [info] p.a.d.DefaultDBApi - Database [amseewen] initialized at jdbc:postgresql://localhost:5432/bpf?currentSchema=amseewen
2019-10-31 09:12:26,454 [info] application - Creating Pool for datasource 'amseewen'
2019-10-31 09:12:26,454 [info] c.z.h.HikariDataSource - HikariPool-68 - Starting...
2019-10-31 09:12:26,455 [info] c.z.h.HikariDataSource - HikariPool-68 - Start completed.
2019-10-31 09:12:26,455 [info] p.a.d.DefaultDBApi - Database [companyOds] initialized at jdbc:sqlserver://localhost:1433;databaseName=companyOds
2019-10-31 09:12:26,455 [info] application - Creating Pool for datasource 'companyOds'
2019-10-31 09:12:26,455 [info] c.z.h.HikariDataSource - HikariPool-69 - Starting...
2019-10-31 09:12:26,456 [info] c.z.h.HikariDataSource - HikariPool-69 - Start completed.
2019-10-31 09:12:26,457 [info] p.a.d.DefaultDBApi - Database [company] initialized at jdbc:oracle:thin:@castor.olymp:1521:citrin
2019-10-31 09:12:26,457 [info] application - Creating Pool for datasource 'company'
2019-10-31 09:12:26,457 [info] c.z.h.HikariDataSource - HikariPool-70 - Starting...
2019-10-31 09:12:26,458 [info] c.z.h.HikariDataSource - HikariPool-70 - Start completed.
2019-10-31 09:12:26,458 [info] p.a.d.DefaultDBApi - Database [amseewen] initialized at jdbc:postgresql://localhost:5432/bpf?currentSchema=amseewen
2019-10-31 09:12:26,458 [info] application - Creating Pool for datasource 'amseewen'
2019-10-31 09:12:26,458 [info] c.z.h.HikariDataSource - HikariPool-71 - Starting...
2019-10-31 09:12:26,459 [info] c.z.h.HikariDataSource - HikariPool-71 - Start completed.
2019-10-31 09:12:26,459 [info] p.a.d.DefaultDBApi - Database [companyOds] initialized at jdbc:sqlserver://localhost:1433;databaseName=companyOds
2019-10-31 09:12:26,459 [info] application - Creating Pool for datasource 'companyOds'
2019-10-31 09:12:26,459 [info] c.z.h.HikariDataSource - HikariPool-72 - Starting...
2019-10-31 09:12:26,459 [info] c.z.h.HikariDataSource - HikariPool-72 - Start completed.
Is there a way to shorten this time?
Updates
I use the Play integration of IntelliJ. The build tool is sbt.
Here is the configuration:
sbt 1.2.8
Thread Pools
We use the default thread pool for the application. For the Database access we use:
database.dispatcher {
executor = "thread-pool-executor"
throughput = 1
thread-pool-executor {
fixed-pool-size = 55 # db conn pool (50) + number of cores (4) + housekeeping (1)
}
}
Ok, with the help of billoneil on the Hikari GitHub page and the suggestions of @Issilva, I could figure out the problem:
The problem is datasources whose database is not reachable (during development). We had configured the application so that it also
starts when a database is not reachable (initializationFailTimeout = -1).
So there are 2 problems when shutting down:
The pools are shut down sequentially.
A pool that has no connection takes 10 seconds to shut down.
The suggested solution is not to initialise the datasources that cannot be reached. Apart from one strange exception, the shutdown-time problem is solved (down to milliseconds).
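For reference, the setting mentioned above lives in Play's per-datasource HikariCP config. A sketch of the relevant fragment, assuming Play's standard HikariCP integration (the datasource name is taken from the log; verify the exact key path against your Play version's docs):

```
# conf/application.conf -- illustrative fragment
db.amseewen.hikaricp {
  # -1 lets the app start even if the DB is unreachable; this is what
  # made the slow sequential shutdowns possible in the first place
  initializationFailTimeout = -1
}
```

Dropping unreachable datasources from the config entirely, rather than tolerating their failure with this flag, is what brought the shutdown back to milliseconds.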
I am trying to deploy my Scalatra web application on Heroku, but I am having one problem.
My application works locally with SBT and with "heroku local web". I am using the Heroku sbt plugin.
When I run "sbt stage deployHeroku" the application is uploaded and started properly, obtaining:
user#user-X550JF:~/Documents/SOFT/cloudrobe$ sbt stage deployHeroku
Detected sbt version 0.13.9
....
....
[info] Packaging /home/user/Documents/SOFT/cloudrobe/target/scala-2.11/cloudrobe_2.11-0.1.0-SNAPSHOT.war ...
[info] Done packaging.
[success] Total time: 2 s, completed May 25, 2016 1:04:51 AM
[info] -----> Packaging application...
[info] - app: cloudrobe
[info] - including: target/universal/stage/
[info] -----> Creating build...
[info] - file: target/heroku/slug.tgz
[info] - size: 45MB
[info] -----> Uploading slug... (100%)
[info] - success
[info] -----> Deploying...
[info] remote:
[info] remote: -----> Fetching set buildpack https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/jvm-common.tgz... done
[info] remote: -----> sbt-heroku app detected
[info] remote: -----> Installing OpenJDK 1.8... done
[info] remote:
[info] remote: -----> Discovering process types
[info] remote: Procfile declares types -> web
[info] remote:
[info] remote: -----> Compressing...
[info] remote: Done: 93.5M
[info] remote: -----> Launching...
[info] remote: Released v11
[info] remote: https://cloudrobe.herokuapp.com/ deployed to Heroku
[info] remote:
[info] -----> Done
___________________________________________________________________________
Using "heroku logs" I can see:
2016-05-24T23:14:16.007200+00:00 app[web.1]: 23:14:16.006 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:5}] to localhost:33333
2016-05-24T23:14:16.370324+00:00 app[web.1]: 23:14:16.370 [main] INFO o.f.s.servlet.ServletTemplateEngine - Scalate template engine using working directory: /tmp/scalate-5146893161861816095-workdir
2016-05-24T23:14:16.746719+00:00 app[web.1]: 23:14:16.746 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#7a356a0d{/,file:/app/src/main/webapp,AVAILABLE}
2016-05-24T23:14:16.782745+00:00 app[web.1]: 23:14:16.782 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#7dc51783{HTTP/1.1}{0.0.0.0:8080}
2016-05-24T23:14:16.782924+00:00 app[web.1]: 23:14:16.782 [main] INFO org.eclipse.jetty.server.Server - Started #6674ms
But 5 or 10 seconds later the following error appears, showing that the app has timed out on boot:
2016-05-24T23:52:32.962896+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=cloudrobe.herokuapp.com request_id=a7f68d98-54a2-44b7-8f5f-47efce0f1833 fwd="52.90.128.17" dyno= connect= service= status=503 bytes=
2016-05-24T23:52:45.463575+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
This is my Procfile using the port 5000:
web: target/universal/stage/bin/cloudrobe -Dhttp.address=127.0.0.1
Thank you.
Your app is binding to port 8080, but it needs to bind to the port set as the $PORT environment variable on Heroku. To do this, you need to add -Dhttp.port=$PORT to your Procfile. It also needs to bind to 0.0.0.0 and not 127.0.0.1. So it might look like this:
web: target/universal/stage/bin/cloudrobe -Dhttp.address=0.0.0.0 -Dhttp.port=$PORT
I'm trying to set up a twistd daemon on dotcloud:
My supervisord.conf file:
[program:apnsd]
command=/home/dotcloud/env/bin/twistd --logfile /var/log/supervisor/apnsd.log apnsd -c gp_config.py
directory=/home/dotcloud/current/apnsd
However, it looks like the command 'exits early', which prompts Supervisor to try to restart it, which then fails because the twistd daemon is running in the background.
From the supervisord log:
more supervisord.log
2012-05-19 03:07:52,723 CRIT Set uid to user 1000
2012-05-19 03:07:52,723 WARN Included extra file "/etc/supervisor/conf.d/uwsgi.conf" during parsing
2012-05-19 03:07:52,723 WARN Included extra file "/home/dotcloud/current/supervisord.conf" during parsing
2012-05-19 03:07:52,922 INFO RPC interface 'supervisor' initialized
2012-05-19 03:07:52,922 WARN cElementTree not installed, using slower XML parser for XML-RPC
2012-05-19 03:07:52,923 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2012-05-19 03:07:52,932 INFO daemonizing the supervisord process
2012-05-19 03:07:52,934 INFO supervisord started with pid 144
2012-05-19 03:07:53,941 INFO spawned: 'apnsd' with pid 147
2012-05-19 03:07:53,949 INFO spawned: 'uwsgi' with pid 149
2012-05-19 03:07:54,706 INFO exited: apnsd (exit status 0; not expected)
2012-05-19 03:07:55,712 INFO spawned: 'apnsd' with pid 175
2012-05-19 03:07:55,712 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2012-05-19 03:07:56,261 INFO exited: apnsd (exit status 1; not expected)
2012-05-19 03:07:58,267 INFO spawned: 'apnsd' with pid 176
2012-05-19 03:07:58,783 INFO exited: apnsd (exit status 1; not expected)
2012-05-19 03:08:01,790 INFO spawned: 'apnsd' with pid 177
2012-05-19 03:08:02,840 INFO success: apnsd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
From the apnsd log:
dotcloud@hack-default-www-0:/var/log/supervisor$ more apnsd-stderr---supervisor-m7GnKV.log
INFO:root:Reactor Type: <twisted.internet.pollreactor.PollReactor object at 0x10a09d0>
DEBUG:root:Creating listener: apnsd.listeners.line.LineProtocolFactory
INFO:root:Listening on Line Protocol on :1055
DEBUG:root:Listener Created: <apnsd.listeners.line.LineProtocolFactory instance at 0x12fc8c0>
DEBUG:root:Creating App Factory: apnsd.daemon.APNSFactory
INFO:root:Connecting to APNS Server, App: apns_dev:AAA.com.company.www
INFO:root:apns_dev:AAA.com.company.www -> Started connecting to APNS connector...
INFO:root:Registering Application: apns_dev:GoParcel...
DEBUG:root:Creating App Factory: apnsd.daemon.APNSFactory
INFO:root:Connecting to APNS Server, App: apns_dev:T365ED94A9.com.appitems.parcels
INFO:root:apns_dev:T365ED94A9.com.appitems.parcels -> Started connecting to APNS connector...
INFO:root:Registering Application: apns_dev:GoParcelVictor...
Another twistd server is running, PID 172
This could either be a previously started instance of your application or a different application entirely. To start a new one, either run it in some other directory, or use the --pidfile and --logfile parameters to avoid clashes.
Another twistd server is running, PID 172
--More--(42%)
The status of the worker is failed:
./dotcloud run hack.worker supervisorctl status
USER PATH IS: C:\Users\Taras/.dotcloud\dotcloud.conf
# supervisorctl status
apnsd FATAL Exited too quickly (process log may have details)
But the twistd process is there (ps -ef):
dotcloud 171 1 0 03:13 ? 00:00:00 /home/dotcloud/env/bin/python /home/dotcloud/env/bin/twistd --logfile /var/log/supervisor/apnsd.log apnsd -c gp_config.py
I am having a similar problem when trying to start the process through a wrapper script (and using exec so that a child process isn't created). What am I doing wrong?
Supervisor expects the controlled process to remain in the foreground, but twistd forks to the background by default. Supervisor therefore thinks that it has exited, and tries to start it again.
You should start twistd with the --nodaemon option: twistd will remain in the foreground, and Supervisor will be happy!
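Applied to the configuration in the question, that would look roughly like this (a sketch; paths are copied from the original config, and stdout capture replaces twistd's own --logfile since the process now logs to the foreground):

```ini
[program:apnsd]
; --nodaemon (or -n) keeps twistd in the foreground so supervisord can track it
command=/home/dotcloud/env/bin/twistd --nodaemon apnsd -c gp_config.py
directory=/home/dotcloud/current/apnsd
stdout_logfile=/var/log/supervisor/apnsd.log
redirect_stderr=true
```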