I recently moved our Artifactory database to a newly built PostgreSQL HA cluster.
Artifactory itself runs in a Docker container, while the PostgreSQL cluster runs on its own virtual servers, forming an HA cluster through Patroni.
While connecting and the initial setup worked, the log now shows errors about slow queries and HikariCP failing to validate connections:
2022-05-23T06:16:43.595Z [jfac ] [WARN ] [d3a436283f12bf94] [c.z.h.p.PoolBase:184 ] [jf-access-task8 ] - HikariCP Main - Failed to validate connection org.postgresql.jdbc.PgConnection@f32a6c4 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-05-23T06:17:04.729Z [jfac ] [WARN ] [ ] [c.z.h.p.PoolBase:184 ] [efault-executor-4398] - HikariCP Main - Failed to validate connection org.postgresql.jdbc.PgConnection@3fb91ee6 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-05-23T06:17:36.996Z [jfac ] [WARN ] [265a1aa154894c1b] [c.z.h.p.PoolBase:184 ] [7.0.0.1-8040-exec-10] - HikariCP Main - Failed to validate connection org.postgresql.jdbc.PgConnection@1b08eca5 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-05-23T06:17:38.319Z [jfmd ] [INFO ] [ ] [database_bearer.go:368 ] [main ] - Slow statement detected (6.794897251s) - UPDATE md_servers SET last_heartbeat = $1 WHERE id = $2 [database]
2022-05-23T06:17:38.569Z [jfmd ] [INFO ] [ ] [database_bearer.go:368 ] [main ] - Slow query detected (6.683101054s) - SELECT task_key, execution_interval_millis, progress, modified, task_start_key, task_end_key, depends_on FROM md_tasks [database] [workerNumber][1]
These errors also mean that when I import the backup of the old database, the import fails at random points and artifacts end up missing.
I tried to change the maxLifetime value for HikariCP, but I couldn't find out where to set it.
The /var/opt/jfrog/artifactory directory in the container is mapped to a static folder on the host, so I tried setting it in the system.yaml, but with no effect.
Right now my system.yaml looks like this:
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://postgreshost/artifactory
    username: artifactory
    password: myPassword
Where can I change the value for maxLifetime, or could the errors come from something else entirely?
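For reference, HikariCP's default maxLifetime is 30 minutes, so if anything between Artifactory and PostgreSQL (HAProxy in front of Patroni, a firewall, TCP keepalive settings) drops idle connections sooner than that, it would produce exactly these validation failures. What I tried in system.yaml looked like the sketch below; the maxLifetimeMillis key name is a guess on my part, not something I found in the JFrog system.yaml reference:
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://postgreshost/artifactory
    username: artifactory
    password: myPassword
    # ASSUMPTION: key name unverified against the JFrog system.yaml reference.
    # Intent: keep the pool lifetime below any proxy/firewall idle timeout.
    maxLifetimeMillis: 600000
Each JFrog service (jfac, jfmd, and so on) runs its own connection pool, so a proxy-side idle timeout shorter than the pool lifetime would hit all of them at once, which matches the logs above.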
EDIT:
The Postgres log looks like this:
2022-05-23 12:56:46.413 CEST [604373] LOG: could not receive data from client: Connection reset by peer
2022-05-23 12:56:46.413 CEST [604373] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 13:14:12.083 CEST [617283] ERROR: deadlock detected
2022-05-23 13:14:12.083 CEST [617283] DETAIL: Process 617283 waits for ShareLock on transaction 3362529; blocked by process 617510.
Process 617510 waits for ShareLock on transaction 3362515; blocked by process 617283.
Process 617283: UPDATE access_topology SET state = $1, last_updated = $2 WHERE endpoint_id = $3
Process 617510: UPDATE access_topology SET state = $1, last_updated = $2 WHERE endpoint_id = $3
2022-05-23 13:14:12.083 CEST [617283] HINT: See server log for query details.
2022-05-23 13:14:12.083 CEST [617283] CONTEXT: while updating tuple (0,19) in relation "access_topology"
2022-05-23 13:14:12.083 CEST [617283] STATEMENT: UPDATE access_topology SET state = $1, last_updated = $2 WHERE endpoint_id = $3
2022-05-23 13:14:12.083 CEST [617510] LOG: duration: 2591.626 ms execute <unnamed>: UPDATE access_topology SET state = $1, last_updated = $2 WHERE endpoint_id = $3
2022-05-23 13:14:12.083 CEST [617510] DETAIL: parameters: $1 = 'HEALTHY', $2 = '1653304449485', $3 = '2'
2022-05-23 13:24:54.601 CEST [602250] LOG: could not receive data from client: Connection reset by peer
2022-05-23 13:26:03.849 CEST [619028] WARNING: there is no transaction in progress
2022-05-23 13:26:04.021 CEST [615736] LOG: could not receive data from client: Connection reset by peer
2022-05-23 13:26:04.021 CEST [615736] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 13:46:49.625 CEST [612081] LOG: could not receive data from client: Connection reset by peer
2022-05-23 13:54:01.641 CEST [622155] WARNING: there is no transaction in progress
2022-05-23 13:54:01.821 CEST [619028] LOG: could not receive data from client: Connection reset by peer
2022-05-23 13:54:01.821 CEST [619028] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 14:03:18.089 CEST [623174] LOG: could not receive data from client: Connection reset by peer
2022-05-23 14:08:11.485 CEST [622155] LOG: could not receive data from client: Connection reset by peer
2022-05-23 14:12:51.144 CEST [618894] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 14:33:30.014 CEST [626629] LOG: could not receive data from client: Connection reset by peer
2022-05-23 14:58:48.349 CEST [606834] LOG: could not receive data from client: Connection reset by peer
2022-05-23 15:32:18.237 CEST [629334] LOG: could not receive data from client: Connection reset by peer
2022-05-23 15:35:24.877 CEST [623774] LOG: could not receive data from client: Connection reset by peer
2022-05-23 15:52:01.765 CEST [632939] LOG: could not receive data from client: Connection reset by peer
2022-05-23 15:52:01.765 CEST [632939] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 15:55:18.621 CEST [609860] LOG: could not receive data from client: Connection reset by peer
2022-05-23 16:09:03.689 CEST [620914] LOG: could not receive data from client: Connection reset by peer
2022-05-23 16:09:03.689 CEST [620914] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 16:13:49.429 CEST [636884] LOG: could not receive data from client: Connection reset by peer
2022-05-23 16:29:46.716 CEST [605671] LOG: could not receive data from client: Connection reset by peer
2022-05-23 16:31:04.776 CEST [639239] WARNING: there is no transaction in progress
2022-05-23 16:31:04.949 CEST [621365] LOG: could not receive data from client: Connection reset by peer
2022-05-23 16:31:04.949 CEST [621365] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 17:03:12.241 CEST [637393] LOG: could not receive data from client: Connection reset by peer
2022-05-23 17:38:06.594 CEST [610944] LOG: could not send data to client: Connection reset by peer
2022-05-23 17:38:06.594 CEST [610944] STATEMENT: select itemid,functionid,name,parameter,triggerid from functions
2022-05-23 17:38:06.594 CEST [610944] FATAL: connection to client lost
2022-05-23 17:38:06.594 CEST [610944] STATEMENT: select itemid,functionid,name,parameter,triggerid from functions
2022-05-23 17:38:54.605 CEST [639239] LOG: could not receive data from client: Connection reset by peer
2022-05-23 17:38:54.605 CEST [639239] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 18:02:18.184 CEST [648559] LOG: duration: 5929.036 ms execute <unnamed>: DELETE FROM archive_names WHERE NOT EXISTS (SELECT 1 FROM indexed_archives_entries i WHERE i.entry_name_id = name_id)
2022-05-23 19:30:02.765 CEST [658076] WARNING: there is no transaction in progress
2022-05-23 19:30:02.933 CEST [635061] LOG: could not receive data from client: Connection reset by peer
2022-05-23 19:30:02.933 CEST [635061] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 19:30:02.937 CEST [633277] LOG: could not receive data from client: Connection reset by peer
2022-05-23 19:30:02.937 CEST [633277] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 19:40:56.365 CEST [659235] WARNING: there is no transaction in progress
2022-05-23 19:40:56.529 CEST [639099] LOG: could not receive data from client: Connection reset by peer
2022-05-23 19:40:56.529 CEST [639099] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 19:52:11.589 CEST [590926] LOG: could not receive data from client: Connection reset by peer
2022-05-23 19:52:28.782 CEST [653019] LOG: could not send data to client: Connection reset by peer
2022-05-23 19:52:28.782 CEST [653019] STATEMENT: select triggerid,description,expression,error,priority,type,value,state,lastchange,status,recovery_mode,recovery_expression,correlation_mode,correlation_tag,opdata,event_name,null,null,n>
2022-05-23 19:52:28.782 CEST [653019] FATAL: connection to client lost
2022-05-23 19:52:28.782 CEST [653019] STATEMENT: select triggerid,description,expression,error,priority,type,value,state,lastchange,status,recovery_mode,recovery_expression,correlation_mode,correlation_tag,opdata,event_name,null,null,n>
2022-05-23 19:53:45.017 CEST [2301] LOG: could not receive data from client: Connection reset by peer
2022-05-23 20:02:31.205 CEST [661487] LOG: could not receive data from client: Connection reset by peer
2022-05-23 20:21:44.867 CEST [663487] LOG: duration: 1576.960 ms execute <unnamed>: UPDATE access_topology SET state = $1, last_updated = $2 WHERE endpoint_id = $3
2022-05-23 20:21:44.867 CEST [663487] DETAIL: parameters: $1 = 'HEALTHY', $2 = '1653330103289', $3 = '9002'
2022-05-23 20:40:16.014 CEST [665490] WARNING: there is no transaction in progress
2022-05-23 20:40:16.177 CEST [658077] LOG: could not receive data from client: Connection reset by peer
2022-05-23 20:40:16.177 CEST [658077] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 20:50:51.189 CEST [666502] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 20:55:00.297 CEST [666590] LOG: could not receive data from client: Connection reset by peer
2022-05-23 21:17:08.552 CEST [669403] WARNING: there is no transaction in progress
2022-05-23 21:17:08.721 CEST [658076] LOG: could not receive data from client: Connection reset by peer
2022-05-23 21:17:08.721 CEST [658076] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 21:30:06.624 CEST [665490] LOG: could not receive data from client: Connection reset by peer
2022-05-23 21:46:20.992 CEST [672459] WARNING: there is no transaction in progress
2022-05-23 21:46:21.161 CEST [669403] LOG: could not receive data from client: Connection reset by peer
2022-05-23 21:46:21.161 CEST [669403] LOG: unexpected EOF on client connection with an open transaction
2022-05-23 22:25:21.813 CEST [610953] LOG: could not receive data from client: Connection reset by peer
2022-05-23 23:01:26.281 CEST [680388] LOG: could not receive data from client: Connection reset by peer
2022-05-23 23:29:56.756 CEST [683342] LOG: duration: 1211.342 ms execute <unnamed>: UPDATE access_topology SET state = $1, last_updated = $2 WHERE endpoint_id = $3
2022-05-23 23:29:56.756 CEST [683342] DETAIL: parameters: $1 = 'HEALTHY', $2 = '1653341395543', $3 = '9002'
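The deadlock at 13:14 is two access_topology heartbeat UPDATEs waiting on each other's transaction, which suggests concurrent sessions updating the same rows in opposite order. When it recurs, the blocking chain can be inspected live; a small sketch against the standard catalogs (pg_blocking_pids() exists in PostgreSQL 9.6 and later):
-- For every waiting session, show who is blocking it and both queries.
SELECT waiting.pid   AS waiting_pid,
       waiting.query AS waiting_query,
       blocker.pid   AS blocking_pid,
       blocker.query AS blocking_query
FROM pg_stat_activity AS waiting
JOIN LATERAL unnest(pg_blocking_pids(waiting.pid)) AS b(pid) ON true
JOIN pg_stat_activity AS blocker ON blocker.pid = b.pid;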
Related
In the Datastore logs, I encountered the following error. I'm not sure what has gone wrong.
[7804] LOG: starting PostgreSQL 13.1, compiled by Visual C++ build 1914, 64-bit
2021-08-23 22:56:15.980 CEST [7804] LOG: listening on IPv4 address "127.0.0.1", port 9003
2021-08-23 22:56:15.983 CEST [7804] LOG: listening on IPv4 address "10.91.198.36", port 9003
2021-08-23 22:56:16.041 CEST [8812] LOG: database system was shut down at 2021-08-23 22:54:51 CEST
2021-08-23 22:56:16.044 CEST [8812] LOG: invalid primary checkpoint record
2021-08-23 22:56:16.045 CEST [8812] PANIC: could not locate a valid checkpoint record
2021-08-23 22:56:16.076 CEST [7804] LOG: startup process (PID 8812) was terminated by exception 0xC0000409
2021-08-23 22:56:16.076 CEST [7804] HINT: See C include file "ntstatus.h" for a description of the hexadecimal value.
2021-08-23 22:56:16.078 CEST [7804] LOG: aborting startup due to startup process failure
2021-08-23 22:56:16.094 CEST [7804] LOG: database system is shut down
Somebody deleted crucial WAL files (to free space?), and now your cluster is corrupted.
Restore from backup. If you have no backup, running pg_resetwal is an option, since it seems there was a clean shutdown.
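A minimal sketch of that recovery path, assuming a default Windows data directory (the path here is an assumption; substitute your own), and assuming you copy the data directory somewhere safe first, since pg_resetwal discards WAL and can silently lose the most recent transactions:
rem ASSUMPTION: data directory path; adjust for your installation.
rem Take a file-level copy of the whole data directory before this step.
pg_resetwal -D "C:\Program Files\PostgreSQL\13\data"
rem Then try to start the server again and check the log.
pg_ctl start -D "C:\Program Files\PostgreSQL\13\data"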
I edited my pg_hba.conf file, copied it to the server, and restarted the service with "sudo service postgresql restart", but after that the server is not accepting connections.
It shows the error below: Your database returned: "Connection to 138.2xx.1xx.xx:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections."
The Jenkins job and the data visualization tools, which were working fine previously, are now failing. What could be the reason?
I'm getting this in the PostgreSQL log:
2019-10-23 07:21:25.829 CEST [11761] LOG: received fast shutdown request
2019-10-23 07:21:25.829 CEST [11761] LOG: aborting any active transactions
2019-10-23 07:21:25.829 CEST [11766] LOG: autovacuum launcher shutting down
2019-10-23 07:21:25.832 CEST [11763] LOG: shutting down
2019-10-23 07:21:25.919 CEST [11761] LOG: database system is shut down
2019-10-23 07:21:27.068 CEST [22633] LOG: database system was shut down at 2019-10-23 07:21:25 CEST
2019-10-23 07:21:27.073 CEST [22633] LOG: MultiXact member wraparound protections are now enabled
2019-10-23 07:21:27.075 CEST [22631] LOG: database system is ready to accept connections
2019-10-23 07:21:27.075 CEST [22637] LOG: autovacuum launcher started
2019-10-23 07:21:27.390 CEST [22639] [unknown]@[unknown] LOG: incomplete startup packet
Running pg_isready shows no response:
root@Ubuntu-1604-xenial-64-minimal ~ # pg_isready -h localhost -p 5432
localhost:5432 - no response
The following was already added to the postgresql.conf file:
listen_addresses = '*'
Do I need to restart the entire server?
Can anyone please help me resolve this?
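A few checks that usually narrow this down, sketched for a Debian/Ubuntu install (pg_lsclusters ships with the postgresql-common package there): a syntax error in pg_hba.conf prevents PostgreSQL from starting at all, which would explain both the refused TCP connection and pg_isready getting no response.
# Is anything listening on 5432 at all?
sudo ss -tlnp | grep 5432
# Debian/Ubuntu: a cluster shown as "down" means the restart failed.
pg_lsclusters
# The startup log states why the server refused to start,
# e.g. which pg_hba.conf line it could not parse.
sudo tail -n 50 /var/log/postgresql/postgresql-*-main.log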
I'm trying to do python manage.py syncdb on a Django installation, but I keep getting OperationalError: ERROR: pgbouncer cannot connect to server. pgbouncer.log contains lines such as:
2017-09-19 19:44:15.107 1128 LOG C-0x8a9930: mydb/myuser@unix:6432 closing because: pgbouncer cannot connect to server (age=0)
2017-09-19 19:44:15.107 1128 WARNING C-0x8a9930: mydb/myuser@unix:6432 Pooler Error: pgbouncer cannot connect to server
2017-09-19 19:44:15.107 1128 LOG S-0x8c72e0: mydb/myuser@35.154.149.188:5432 new connection to server
2017-09-19 19:44:15.107 1128 LOG C-0x8a9930: mydb/myuser@unix:6432 login failed: db=mydb user=myuser
2017-09-19 19:44:30.108 1128 LOG S-0x8c72e0: mydb/myuser@35.154.149.188:5432 closing because: connect failed (age=15)
In case needed, ps -aef | grep pgbouncer yields:
postgres 1128 1 0 18:38 ? 00:00:00 /usr/sbin/pgbouncer -d /etc/pgbouncer/pgbouncer.ini
myuser 1919 1533 0 19:45 pts/0 00:00:00 grep --color=auto pgbouncer
Moreover, grep port /etc/pgbouncer/pgbouncer.ini results in:
;; dbname= host= port= user= password=
mydb = host=xx.xxx.xxx.xxx port=5432 dbname=mydb
;forcedb = host=127.0.0.1 port=300 user=baz password=foo client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'
listen_port = 6432
Lastly, the relevant parts of settings.py contain:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': '/var/run/postgresql',
        'PORT': '6432',
    }
}
I turned log_connections on in postgresql.conf, restarted PostgreSQL, and tried again. Here are the relevant lines:
2017-09-20 07:50:59 UTC LOG: database system is ready to accept connections
2017-09-20 07:50:59 UTC LOG: autovacuum launcher started
2017-09-20 07:51:00 UTC LOG: connection received: host=[local]
2017-09-20 07:51:00 UTC LOG: incomplete startup packet
2017-09-20 07:51:00 UTC LOG: connection received: host=[local]
2017-09-20 07:51:00 UTC LOG: connection authorized: user=postgres database=postgres
2017-09-20 07:51:01 UTC LOG: connection received: host=[local]
2017-09-20 07:51:01 UTC LOG: connection authorized: user=postgres database=postgres
2017-09-20 07:51:01 UTC LOG: connection received: host=[local]
2017-09-20 07:51:01 UTC LOG: connection authorized: user=postgres database=postgres
It seems the connection is going through, but the user and database name are both postgres. Those aren't the credentials I supplied in pgbouncer.ini.
However, explicitly adding user=myuser to the connection string in pgbouncer.ini leads to:
2017-09-20 09:37:37 UTC FATAL: Peer authentication failed for user "myuser"
2017-09-20 09:37:37 UTC DETAIL: Connection matched pg_hba.conf line 90: "local all all peer"
Totally stumped.
It turned out the misconfiguration came from this line in settings.py:
'PORT': '6432',
I commented it out and pgbouncer started working, though I'm not sure why.
Maybe there's a collision on this port; pgbouncer and PostgreSQL coexist on a single server in my case. I've set them up on different VMs in the past without a hitch (and without needing to comment out 'PORT': '6432').
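One plausible reading, offered as an assumption rather than a verified diagnosis: with HOST set to /var/run/postgresql, psycopg2 uses a unix socket named .s.PGSQL.<PORT> in that directory, so PORT 6432 addressed pgbouncer's socket (the pgbouncer log above does show connections arriving at unix:6432), while dropping PORT fell back to 5432, PostgreSQL's own socket. In other words, Django may have started bypassing pgbouncer rather than fixing it. A sketch of settings.py that keeps traffic on pgbouncer via TCP instead (requires listen_addr = 127.0.0.1 in pgbouncer.ini):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        # TCP to pgbouncer's listener instead of relying on
        # unix-socket path and port naming.
        'HOST': '127.0.0.1',
        'PORT': '6432',
    }
}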
I ran pg_dump on my VPS server, and it threw this error:
pg_dump: [archiver (db)] query failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_dump: [archiver (db)] query was: SELECT
( SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) AS t
WHERE t.tokid = m.maptokentype ) AS tokenname,
m.mapdict::pg_catalog.regdictionary AS dictname
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = '22172'
ORDER BY m.mapcfg, m.maptokentype, m.mapseqno
Then I noticed the SQL in the error above:
SELECT
( SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) AS t
WHERE t.tokid = m.maptokentype ) AS tokenname,
m.mapdict::pg_catalog.regdictionary AS dictname
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = '22172'
ORDER BY m.mapcfg, m.maptokentype, m.mapseqno
So I tried to run SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) in psql, and it threw this error:
pzz_development=# SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q
How can I figure out the problem, and dump my data properly?
EDIT:
Then I checked the PostgreSQL log at /var/log/postgresql/postgresql-9.3-main.log:
2015-08-10 16:22:49 CST LOG: server process (PID 4029) was terminated by signal 11: Segmentation fault
2015-08-10 16:22:49 CST DETAIL: Failed process was running: SELECT
( SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) AS t
WHERE t.tokid = m.maptokentype ) AS tokenname,
m.mapdict::pg_catalog.regdictionary AS dictname
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = '22172'
ORDER BY m.mapcfg, m.maptokentype, m.mapseqno
2015-08-10 16:22:49 CST LOG: terminating any other active server processes
2015-08-10 16:22:49 CST WARNING: terminating connection because of crash of another server process
2015-08-10 16:22:49 CST DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2015-08-10 16:22:49 CST HINT: In a moment you should be able to reconnect to the database and repeat your command.
2015-08-10 16:22:49 CST LOG: all server processes terminated; reinitializing
2015-08-10 16:22:49 CST LOG: database system was interrupted; last known up at 2015-08-10 16:22:45 CST
2015-08-10 16:22:50 CST LOG: database system was not properly shut down; automatic recovery in progress
2015-08-10 16:22:50 CST LOG: unexpected pageaddr 0/2AE6000 in log segment 000000010000000000000004, offset 11427840
2015-08-10 16:22:50 CST LOG: redo is not required
2015-08-10 16:22:50 CST LOG: MultiXact member wraparound protections are now enabled
2015-08-10 16:22:50 CST LOG: autovacuum launcher started
2015-08-10 16:22:50 CST LOG: database system is ready to accept connections
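Since the backend segfaults inside ts_token_type() for parser OID 22171, a reasonable first step is to look up which text search parser and configuration those OIDs belong to; a broken or partially dropped text search extension is a common culprit. A sketch against the standard catalogs (the OIDs are taken straight from the failing query):
-- Which configuration has OID 22172, and does it use parser OID 22171?
SELECT c.oid AS config_oid, c.cfgname, c.cfgparser
FROM pg_catalog.pg_ts_config AS c
WHERE c.oid = 22172 OR c.cfgparser = 22171;
If the parser belongs to a custom extension whose shared library is missing or was built for a different server version, calling into it can crash the backend exactly like this.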
We've been having mysterious, intermittent Apache crashes lately, a few times a day, but with no pattern as to when they happen, how long passes between them, or what time of day.
I would upgrade DBD::Pg, but I couldn't find a PPD out there for anything newer than the one we use (2.14.1). That said, the changes since then don't seem particularly relevant to our usage.
The Windows Event Viewer shows this:
Event Type: Error
Event Source: Application Error
Event Category: (100)
Event ID: 1000
Date: 2010-11-01
Time: 9:55:28 AM
User: N/A
Computer: myserver
Description:
Faulting application httpd.exe, version 2.2.17.0, faulting module Pg.dll, version 0.0.0.0, fault address 0x0000e8a5.
So I looked in the Apache logs, which said:
[Mon Nov 01 09:55:32 2010] [notice] Parent: child process exited with status 3221225477 -- Restarting.
Not terribly helpful, so I looked in the PostgreSQL logs (Pg.dll is part of DBD::Pg), and they said:
2010-11-01 09:55:32 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:32 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
2010-11-01 09:55:33 EDT LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
2010-11-01 09:55:33 EDT LOG: unexpected EOF on client connection
But other than that I have no clue as to the cause, except possibly periods of higher server load (though not that high; we have very few users).
Any ideas what could be causing this?
Intermittent networking problems may cause this. I tried logging into PostgreSQL using psql from Turnkey running on VirtualBox on the same server PostgreSQL runs on, and it would sometimes cause the PostgreSQL threads to consume far more CPU and then refuse the login in the end. Installing PostgreSQL on Turnkey itself made this stop happening, so it's possible there's a general networking problem at the Windows level that causes PostgreSQL, and Pg.dll in particular, to choke, which then crashes Apache, since DBD::Pg is loaded persistently via mod_perl.