OpenShift application stopped and restarted automatically with cartridge of type DIY - PostgreSQL

My OpenShift application (DIY cartridge) keeps stopping and restarting automatically, so there is continuous downtime. I am running a Spring Boot application with a PostgreSQL database; the server starts and I can see the application running, but after a while the server goes down, then it starts automatically and shuts down automatically again. I also see only a few logs in the logs directory. They follow below.
These are some logs for the application:
rhc tail tiworld
==> app-root/logs/diy.log <==
[2016-07-22 08:55:41] INFO WEBrick 1.3.1
[2016-07-22 08:55:41] INFO ruby 1.8.7 (2013-06-27) [x86_64-linux]
[2016-07-22 08:55:41] INFO WEBrick::HTTPServer#start: pid=380495 port=8080
127.3.82.129 - - [22/Jul/2016:09:10:32 EDT] "HEAD / HTTP/1.1" 200 0
- -> /
127.3.82.129 - - [22/Jul/2016:09:10:32 EDT] "HEAD / HTTP/1.1" 200 0
- -> /
[2016-07-22 09:21:58] INFO going to shutdown ...
[2016-07-22 09:21:58] INFO WEBrick::HTTPServer#start done.
==> app-root/logs/postgresql.log <==
2016-07-27 12:51:12 GMT LOG: could not bind socket for statistics collector: Cannot assign requested address
2016-07-27 12:51:12 GMT LOG: disabling statistics collector for lack of working socket
2016-07-27 12:51:12 GMT WARNING: autovacuum not started because of misconfiguration
2016-07-27 12:51:12 GMT HINT: Enable the "track_counts" option.
2016-07-27 12:51:12 GMT LOG: database system was interrupted; last known up at 2016-07-27 12:45:45 GMT
2016-07-27 12:51:12 GMT FATAL: the database system is starting up
2016-07-27 12:51:12 GMT LOG: database system was not properly shut down; automatic recovery in progress
2016-07-27 12:51:12 GMT LOG: record with zero length at 0/198F218
2016-07-27 12:51:12 GMT LOG: redo is not required
2016-07-27 12:51:12 GMT LOG: database system is ready to accept connections
You can tail this application directly with:
ssh -t 579217552d5271eaa80000c0@programmers-pvb.rhcloud.com 'tail */log*/*'
/var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/ruby_compat.rb:30:in `select': closed stream (IOError)
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/ruby_compat.rb:30:in `io_select'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/packet_stream.rb:75:in `available_for_read?'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/packet_stream.rb:87:in `next_packet'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/session.rb:183:in `block in poll_message'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/session.rb:178:in `loop'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/session.rb:178:in `poll_message'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:461:in `dispatch_incoming_packets'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:222:in `preprocess'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:206:in `process'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:170:in `block in loop'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:170:in `loop'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:170:in `loop'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/ssh_helpers.rb:198:in `block in ssh_ruby'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh.rb:215:in `start'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/ssh_helpers.rb:173:in `ssh_ruby'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands/tail.rb:40:in `tail'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands/tail.rb:21:in `run'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands.rb:294:in `execute'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands.rb:285:in `block (3 levels) in to_commander'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/command.rb:180:in `call'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/command.rb:155:in `run'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/runner.rb:421:in `run_active_command'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/command_runner.rb:72:in `run!'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/delegates.rb:8:in `run!'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/cli.rb:37:in `start'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/bin/rhc:20:in `<top (required)>'
from /usr/local/bin/rhc:23:in `load'
from /usr/local/bin/rhc:23:in `<main>'

If you are running this on a small gear, especially if you have Java and a DB on the same gear, chances are that you are running out of resources and the gear is restarting (after a while it will not restart automatically anymore).
You can check out this article for more information on checking your memory utilization: https://developers.openshift.com/faq/troubleshooting.html#_why_is_my_application_restarting_automatically_or_having_memory_issues
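As a quick first check, assuming you can SSH into the gear (the address below is taken from the question; adjust it to your own gear), something like this shows the biggest memory consumers and the disk quota:
ssh 579217552d5271eaa80000c0@programmers-pvb.rhcloud.com
# on the gear: list processes sorted by resident memory
ps aux --sort=-rss | head -n 10
# check disk quota usage, which can also cause the gear to be stopped
quota -s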

Related

Postgres 15.1 is restarting continuously when using a shared_preload_libraries extension

Postgres is restarting continuously when using a shared_preload_libraries extension.
https://postgresqlco.nf/doc/en/param/shared_preload_libraries/
I am running Postgres 15.1 using a Python-based daemon on CentOS 7 (32-bit arch). It works fine if we do not use a "shared_preload_libraries" extension, but after enabling the extension with the "ALTER SYSTEM SET shared_preload_libraries" command, Postgres restarts every few seconds.
Initially it was working fine with postgres-9.6.4.
Postgres logs:
waiting for server to start....2023-02-15 07:13:45.676 GMT [28605] LOG: skipping missing configuration file "/home/runtime/pgsql/data/postgresql.auto.conf"
2023-02-15 07:13:45.825 GMT [28605] LOG: starting PostgreSQL 15.1 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 32-bit
2023-02-15 07:13:45.825 GMT [28605] LOG: listening on IPv4 address "127.0.0.1", port 5432
2023-02-15 07:13:45.933 GMT [28605] LOG: listening on Unix socket "/home/runtime/pgsql/.s.PGSQL.5432"
2023-02-15 07:13:45.969 GMT [28608] LOG: database system was shut down at 2023-02-15 07:13:35 GMT
2023-02-15 07:13:45.989 GMT [28605] LOG: database system is ready to accept connections
done
server started
ALTER SYSTEM
ALTER SYSTEM
ALTER SYSTEM
ALTER SYSTEM
2023-02-15 07:13:51.480 GMT [28605] LOG: received fast shutdown request
waiting for server to shut down....2023-02-15 07:13:51.512 GMT [28605] LOG: aborting any active transactions
2023-02-15 07:13:51.513 GMT [28605] LOG: background worker "logical replication launcher" (PID 28611) exited with exit code 1
2023-02-15 07:13:51.513 GMT [28606] LOG: shutting down
2023-02-15 07:13:51.536 GMT [28606] LOG: checkpoint starting: shutdown immediate
2023-02-15 07:13:51.908 GMT [28606] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.090 s, sync=0.028 s, total=0.395 s; sync files=2, longest=0.021 s, average=0.014 s; distance=0 kB, estimate=0 kB
2023-02-15 07:13:51.909 GMT [28605] LOG: database system is shut down
done
server stopped
I tried Postgres 15.0 and Postgres 14.4 and got the same behavior with both. I am not able to find any open issues w.r.t. the shared_preload_libraries mechanism in new versions of Postgres.
PS: I have built this Postgres from the source code with openssl-1.1.1i.
I am using the "citus" library with this:
ALTER SYSTEM SET shared_preload_libraries="citus";
I have generated a new citus.so file from its source code using postgres-15.1. github.com/citusdata/citus
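For reference, ALTER SYSTEM writes its settings to postgresql.auto.conf in the data directory, so one way to inspect what was written and whether it took effect (a sketch; the data directory path is taken from the logs above):
# ALTER SYSTEM output lands here; the startup log above notes this file was missing at first
cat /home/runtime/pgsql/data/postgresql.auto.conf
# after a clean restart, confirm the setting is active
psql -h 127.0.0.1 -p 5432 -c "SHOW shared_preload_libraries;"
# the server log should record the real reason for each crash-restart,
# e.g. a failure to load a citus.so built against a different Postgres version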

Is there a way to use ident authentication with pghero, or is there another workaround?

I installed pghero according to the GitHub docs (on CentOS 7), but I see nothing in the web browser (no connection error is displayed, but the browser is totally blank), and when the service is started, curl does give a response. Looking at the logs I see...
[root@airflowetl ~]# pghero logs
==> /var/log/pghero/production.log <==
Started GET "/" for 127.0.0.1 at 2020-01-28 16:21:47 -1000
Processing by PgHero::HomeController#index as */*
Completed 500 Internal Server Error in 55ms
PG::ConnectionBad (FATAL: password authentication failed for user "airflow"):
...
...
...
Started GET "/" for 127.0.0.1 at 2020-01-28 23:51:28 -1000
Processing by PgHero::HomeController#index as */*
Completed 500 Internal Server Error in 11ms
PG::ConnectionBad (FATAL: Ident authentication failed for user "airflow"):
...
...
...
Jan 28 22:59:10 airflowetl.co.local systemd[1]: pghero-web-1.service holdoff time over, scheduling restart.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: Stopped pghero-web-1.service.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: start request repeated too quickly for pghero-web-1.service
Jan 28 22:59:10 airflowetl.co.local systemd[1]: Failed to start pghero-web-1.service.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: Unit pghero-web-1.service entered failed state.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: pghero-web-1.service failed.
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Stopping pghero-web.service...
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Stopped pghero-web.service.
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Started pghero-web.service.
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Started pghero-web-1.service.
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] Puma starting in cluster mode...
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Version 4.3.0 (ruby 2.6.3-p62), codename: Mysterious Trave
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Min threads: 1, max threads: 16
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Environment: production
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Process workers: 3
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Preloading application
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Listening on tcp://0.0.0.0:3001
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] ! WARNING: Detected 1 Thread(s) started in app boot:
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] ! #<Thread:0x0000561740ea27e0@/opt/pghero/vendor/bundle/ruby
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] Use Ctrl-C to stop
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] - Worker 0 (pid: 12213) booted, phase: 0
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] - Worker 1 (pid: 12215) booted, phase: 0
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] - Worker 2 (pid: 12219) booted, phase: 0
...
and I can see the
500 Internal Server Error in 55ms
error. Checking the service status, I see...
[root@airflowetl ~]# service pghero status
Redirecting to /bin/systemctl status pghero.service
● pghero.service
Loaded: loaded (/etc/systemd/system/pghero.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-01-28 23:09:36 HST; 4s ago
Main PID: 12132 (sleep)
CGroup: /system.slice/pghero.service
└─12132 /bin/sleep infinity
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Started pghero.service.
[root@airflowetl ~]# netstat -tulnp | grep 3001
tcp 0 0 0.0.0.0:3001 0.0.0.0:* LISTEN 12134/puma 4.3.0 (t
[root@airflowetl ~]# curl -v http://localhost:3001/
* About to connect() to localhost port 3001 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3001 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:3001
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/html; charset=UTF-8
< X-Request-Id: 2bad5f50-438e-4cb3-8e79-41c84eb75c2c
< X-Runtime: 0.017069
< Content-Length: 0
<
* Connection #0 to host localhost left intact
I have no experience with PostgreSQL or DB admin work, but it appears that the error is due to the fact that I use ident authentication (and it appears pghero wants to use a password):
[root@airflowetl ~]# cat /var/lib/pgsql/data/pg_hba.conf
# PostgreSQL Client Authentication Configuration File
# ===================================================
...
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
#host all all 127.0.0.1/32 ident
#host all all 0.0.0.0/0 trust
host all all 0.0.0.0/0 md5
#host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres peer
#host replication postgres 127.0.0.1/32 ident
#host replication postgres ::1/128 ident
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
listen_addresses = '*' # for apache-airflow connection
I did this following an article on setting up Postgres as the backend for the Airflow orchestration tool.
I have tried using multiple URLs:
sudo pghero config:set DATABASE_URL=postgresql://airflow:xxxx@localhost:5432/airflow
sudo pghero config:set DATABASE_URL=postgresql+psycopg2://airflow:xxxx@localhost:5432/airflow
but got the same results.
I am not sure how to move forward at this point. Does anyone with more experience with pghero or PostgreSQL know what could be done here?
I have no experience with PostgreSQL or DB admin work, but it appears that the error is due to the fact that I use ident authentication (and it appears pghero wants to use a password)
It isn't about what pghero wants. It is PostgreSQL which is demanding password authentication.
host all all 0.0.0.0/0 md5
host all all ::1/128 ident
You are using md5 (i.e. password) on all IPv4 connections (including "localhost"), and using ident on only the IPv6 connection from ::1, which is the IPv6 way of spelling "localhost". pghero is coming in over IPv4, not IPv6, so it is getting commanded to use a password.
You can change the "md5" to "ident" for the 0.0.0.0/0 line (but you probably shouldn't as "ident" is not very secure from outside hosts), or add a line before that one to indicate 127.0.0.1/32 specifically should use ident. Or change your pghero config to try to connect over IPv6 rather than IPv4.
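A sketch of the second option (pg_hba.conf is matched top to bottom, so the more specific line must come first; reload afterwards):
# in /var/lib/pgsql/data/pg_hba.conf: let local IPv4 connections use ident
host    all    all    127.0.0.1/32    ident
host    all    all    0.0.0.0/0       md5
# then reload so the change takes effect without a full restart
psql -U postgres -c "SELECT pg_reload_conf();"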
Your new log file entry shows that it is trying ident and failing at that too. I don't understand why you are getting both, but they are 7 hours apart, so maybe you changed pg_hba.conf in between. PostgreSQL will create a more complete report about why the ident authentication failed and put it in the PostgreSQL server's log file. (It doesn't send the complete report to the unauthenticated client, because that would reveal sensitive information.) Find the PostgreSQL server's log file.
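If you don't know where that log file lives, one way is to ask the server itself (a sketch; connect as a superuser such as postgres, since these settings may be hidden from ordinary users):
psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name IN ('data_directory', 'log_directory', 'logging_collector');"
# on a stock CentOS 7 install this typically points at /var/lib/pgsql/data/pg_log/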

PostgreSQL log file errors

My application is deployed on a remote application server (Linux), and from there it connects to a DB server (PostgreSQL 9.4) on yet another remote server (Linux). I send a long message to the app server through JMS, and processing this message takes many hours. Unfortunately, I am facing some performance issues with the DB server. In the postgresql.log file I can see the errors/warnings below:
< 2017-05-05 09:18:00.676 CEST >LOG: could not receive data from client: Connection timed out
< 2017-05-05 13:38:33.704 CEST >LOG: incomplete startup packet
< 2017-05-05 13:42:29.158 CEST >LOG: unexpected EOF on client connection with an open transaction
< 2017-05-05 13:50:49.163 CEST >LOG: checkpoints are occurring too frequently (1 second apart)
< 2017-05-05 13:50:49.163 CEST >HINT: Consider increasing the configuration parameter "checkpoint_segments".
Do I need to update something in the postgresql.conf file? Can somebody please advise what I should do to avoid these errors?
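For reference, the HINT in the log points at checkpoint_segments; an illustrative postgresql.conf sketch (the values are placeholders to show the shape of the change, not tuned recommendations, and the keepalive settings are aimed at the client-timeout messages):
# PostgreSQL 9.4: allow more WAL between checkpoints (9.5+ uses max_wal_size instead)
checkpoint_segments = 32
checkpoint_completion_target = 0.9
# keep long-idle client connections alive across firewalls/NAT
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 6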

PostgreSQL unable to start: No space left on device

I took a dump of the DB, and it left the disk short of space for PostgreSQL.
I then restarted PostgreSQL, but it failed to start and kept giving me this error:
[FAIL] Starting PostgreSQL 9.4 database server: main[....] The PostgreSQL server failed to start. Please check the log output: ... failed!
failed!
and in log file there were following lines
2017-05-05 05:49:25 UTC LOG: could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device
2017-05-05 05:49:30 UTC LOG: using stale statistics instead of current ones because stats collector is not responding
2017-05-05 05:49:35 UTC LOG: using stale statistics instead of current ones because stats collector is not responding
2017-05-05 05:49:35 UTC LOG: could not close temporary statistics file "pg_stat_tmp/db_0.tmp": No space left on device
2017-05-05 05:49:35 UTC LOG: could not write temporary statistics file "pg_stat_tmp/db_85990.tmp": No space left on device
2017-05-05 05:49:35 UTC LOG: could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device
2017-05-05 05:49:40 UTC LOG: could not close temporary statistics file "pg_stat_tmp/db_0.tmp": No space left on device
2017-05-05 05:49:40 UTC LOG: could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device
2017-05-05 05:49:45 UTC LOG: using stale statistics instead of current ones because stats collector is not responding
Please help me to solve this issue if someone can.
Thanks.
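A first-step triage sketch for this kind of failure (paths and the service name follow common Debian defaults for PostgreSQL 9.4 and may differ on your system):
# confirm which filesystem is actually full
df -h
# find the biggest files; the dump taken earlier is a likely culprit
du -ah /var 2>/dev/null | sort -rh | head -n 20
# move the dump to another disk or delete it, then try starting again
service postgresql start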

Error deploying Rails 5 app with Sinatra 2.0.0.beta2 to Amazon Linux AMI

I'm running into an exception with Sinatra 2.0.0.beta2 and Rails 5 when deploying to the Amazon Linux AMI v2.1.6. I've posted the issue in the Sinatra GitHub repo, but it's been suggested I post it here.
Edit: I ran into this using Elastic Beanstalk, but as @neal reports, this also happens with Capistrano deploying to EC2.
Steps to reproduce the issue follow:
Make a new Rails 5 application in a clean directory
$ gem install rails
$ rails --version
(confirm Rails 5.0.0.1)
$ rails new test-app
Add this line to the Gemfile:
gem 'sinatra', '2.0.0.beta2'
Create a new Elastic Beanstalk Web environment of type “64bit Amazon Linux 2016.03 v2.1.6 running Ruby 2.3 (Puma)”, Web server. Use all defaults except change the instance size to t2.small (anything smaller doesn’t have enough memory to deploy)
Add two new environment variables to the Elastic Beanstalk environment using the Web console
SECRET_KEY_BASE = (set a value for this)
RAILS_ENV = production
Deploy the application to this new environment, for example with the eb command line tools.
Deploy it again
Tail the logs through the Elastic Beanstalk console
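For reference, with the EB CLI the same steps might look roughly like this (a sketch; the environment name is hypothetical):
# from the app directory, choosing the Ruby 2.3 (Puma) platform during init
eb init
eb create test-app-env --instance_type t2.small
eb setenv SECRET_KEY_BASE=... RAILS_ENV=production
eb deploy    # deploy once
eb deploy    # deploy again to reproduce
eb logs      # tail the logs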
RESULT:
-------------------------------------
/var/log/puma/puma.log
-------------------------------------
=== puma startup: 2016-08-26 02:39:12 +0000 ===
=== puma startup: 2016-08-26 02:39:12 +0000 ===
[15926] - Worker 0 (pid: 15929) booted, phase: 0
[15926] - Gracefully shutting down workers...
/opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/sinatra-2.0.0.beta2/lib/sinatra/main.rb:11:in `expand_path': No such file or directory - getcwd (Errno::ENOENT)
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/sinatra-2.0.0.beta2/lib/sinatra/main.rb:11:in `block in <class:Application>'
from (eval):1:in `run?'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/sinatra-2.0.0.beta2/lib/sinatra/main.rb:26:in `block in <module:Sinatra>'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/cluster.rb:120:in `fork'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/cluster.rb:120:in `block in spawn_workers'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/cluster.rb:116:in `times'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/cluster.rb:116:in `spawn_workers'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/cluster.rb:426:in `run'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/launcher.rb:172:in `run'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/lib/puma/cli.rb:74:in `run'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/puma-3.6.0/bin/puma:10:in `<top (required)>'
from /opt/rubies/ruby-2.3.1/bin/puma:23:in `load'
from /opt/rubies/ruby-2.3.1/bin/puma:23:in `<top (required)>'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/cli/exec.rb:63:in `load'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/cli/exec.rb:63:in `kernel_load'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/cli/exec.rb:24:in `run'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/cli.rb:304:in `exec'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/vendor/thor/lib/thor/invocation.rb:126:in `invoke_command'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/vendor/thor/lib/thor.rb:359:in `dispatch'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/vendor/thor/lib/thor/base.rb:440:in `start'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/cli.rb:11:in `start'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/exe/bundle:27:in `block in <top (required)>'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/lib/bundler/friendly_errors.rb:98:in `with_friendly_errors'
from /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.12.1/exe/bundle:19:in `<top (required)>'
from /opt/rubies/ruby-2.3.1/bin/bundle:23:in `load'
from /opt/rubies/ruby-2.3.1/bin/bundle:23:in `<main>'
[15926] === puma shutdown: 2016-08-26 02:41:17 +0000 ===
[15926] - Goodbye!
=== puma startup: 2016-08-26 02:41:20 +0000 ===
=== puma startup: 2016-08-26 02:41:20 +0000 ===
[16296] - Worker 0 (pid: 16299) booted, phase: 0
This isn't just an Elastic Beanstalk issue; I can confirm it also happens when deploying a Rails 5 app using Capistrano / EC2 Ubuntu / nginx.