WAL contains references to invalid pages - postgresql

CentOS 6.7
PostgreSQL 9.5.3
I have DB servers in master-standby replication.
Suddenly, the standby server's PostgreSQL process stopped with these logs:
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]WARNING: page 1671400 of relation base/16400/559613 is uninitialized
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]CONTEXT: xlog redo Heap2/VISIBLE: cutoff xid 1902107520
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]PANIC: WAL contains references to invalid pages
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]CONTEXT: xlog redo Heap2/VISIBLE: cutoff xid 1902107520
2016-07-14 18:14:21.026 JST [][5783e038.3cd9][0][15577]LOG: startup process (PID 15579) was terminated by signal 6: Aborted
2016-07-14 18:14:21.026 JST [][5783e038.3cd9][0][15577]LOG: terminating any other active server processes
The master server's PostgreSQL logs showed nothing unusual, but the master's /var/log/messages contained the entries below.
Jul 14 05:38:44 host kernel: sbridge: HANDLING MCE MEMORY ERROR
Jul 14 05:38:44 host kernel: CPU 8: Machine Check Exception: 0 Bank 9: 8c000040000800c0
Jul 14 05:38:44 host kernel: TSC 0 ADDR 1f7dad7000 MISC 90004000400008c PROCESSOR 0:306e4 TIME 1468442324 SOCKET 1 APIC 20
Jul 14 05:38:44 host kernel: EDAC MC1: CE row 1, channel 0, label "CPU_SrcID#1_Channel#0_DIMM#1": 1 Unknown error(s): memory scrubbing on FATAL area : cpu=8 Err=0008:00c0 (ch=0), addr = 0x1f7dad7000 => socket=1, Channel=0(mask=1), rank=4
Jul 14 05:38:44 host kernel:
Jul 14 18:30:40 host kernel: sbridge: HANDLING MCE MEMORY ERROR
Jul 14 18:30:40 host kernel: CPU 8: Machine Check Exception: 0 Bank 9: 8c000040000800c0
Jul 14 18:30:40 host kernel: TSC 0 ADDR 1f7dad7000 MISC 90004000400008c PROCESSOR 0:306e4 TIME 1468488640 SOCKET 1 APIC 20
Jul 14 18:30:41 host kernel: EDAC MC1: CE row 1, channel 0, label "CPU_SrcID#1_Channel#0_DIMM#1": 1 Unknown error(s): memory scrubbing on FATAL area : cpu=8 Err=0008:00c0 (ch=0), addr = 0x1f7dad7000 => socket=1, Channel=0(mask=1), rank=4
Jul 14 18:30:41 host kernel:
The memory errors started about a week ago, so I suspect the memory error caused the PostgreSQL error.
My questions are:
1) Can a kernel-reported memory error cause PostgreSQL's "WAL contains references to invalid pages" error?
2) Why are there no related log entries in the master server's PostgreSQL log?
Thanks.

Faulty memory can cause all kinds of data corruption, so that seems like a good enough explanation to me.
Perhaps there are no log entries at the master PostgreSQL server because all that was corrupted was the WAL stream.
You can run
oid2name
to find out which database has OID 16400 and then
oid2name -d <database with OID 16400> -f 559613
to find out which table belongs to file 559613.
Is that table larger than 12 GB? If not, that would mean that page 1671400 is indeed an invalid value.
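Once you know the table, a quick sketch to compare sizes (database and table names are placeholders filled in from the oid2name output; 8 kB is the default block size):
# Page 1671400 at the default 8 kB block size corresponds to an offset of
# 1671400 * 8192 bytes = ~12.7 GB, so the table must be at least that large
# for the page number to be valid:
psql -d <database with OID 16400> -c "SELECT pg_size_pretty(pg_relation_size('<table name>'));"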
You are running 9.5.3; there may be replication bugs fixed in later releases that could cause replication problems even without a hardware fault, so read the release notes.
I would perform a new pg_basebackup and reinitialize the standby.
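A minimal sketch of that, assuming a default 9.5 data directory layout (host, user, and paths are placeholders):
# On the standby: stop the server, set the old data directory aside, and
# stream a fresh base backup from the master (-R writes recovery.conf):
pg_ctl -D /var/lib/pgsql/9.5/data stop -m fast
mv /var/lib/pgsql/9.5/data /var/lib/pgsql/9.5/data.old
pg_basebackup -h <master-host> -U <replication-user> -D /var/lib/pgsql/9.5/data -X stream -R
pg_ctl -D /var/lib/pgsql/9.5/data start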
But what I'd really be worried about is possible data corruption on the master server. Block checksums are cool (turned on if pg_controldata <data directory> | grep checksum gives you 1), but possibly won't detect the effects of memory corruption.
Try something like
pg_dumpall -f /dev/null
on the master and see if there are errors.
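A sketch of both checks (the data directory and error-log paths are placeholders):
pg_controldata <data directory> | grep -i checksum     # "Data page checksum version: 1" means checksums are on
pg_dumpall -f /dev/null 2> /tmp/pg_dumpall_errors.log  # reads every table; the dump itself is discarded
echo $?                                                # non-zero exit status means something failed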
Keep your old backups in case you need to repair something!

Related

PSQL timeline conflict prevents start of master

We had an outage on one of our PostgreSQL 14 clusters (managed by Zalando) because the k8s control plane was unreachable for 30 minutes.
The control plane is now OK, but the master PostgreSQL instance does not want to start:
LOG,00000,"listening on IPv4 address ""0.0.0.0"", port 5432"
LOG,00000,"listening on IPv6 address ""::"", port 5432"
LOG,00000,"listening on Unix socket ""/var/run/postgresql/.s.PGSQL.5432"""
LOG,00000,"database system was shut down at 2023-01-30 02:51:10 UTC"
WARNING,01000,"specified neither primary_conninfo nor restore_command",,"The database server will regularly poll the pg_wal subdirectory to check for files placed there."
LOG,00000,"entering standby mode"
FATAL,XX000,"requested timeline 5 is not a child of this server's history","Latest checkpoint is at 2/82000028 on timeline 4, but in the history of the requested timeline, the server forked off from that timeline at 0/530000A0."
LOG,00000,"startup process (PID 23007) exited with exit code 1"
LOG,00000,"aborting startup due to startup process failure"
LOG,00000,"database system is shut down"
In the archive_status folder we can see:
-rw-------. 1 postgres postgres 0 Jan 30 02:51 000000040000000200000081.ready
-rw-------. 1 postgres postgres 0 Jan 30 02:51 00000005.history.done
Do you know how we can recover safely from this?
I guess switching back to timeline 4 would be enough, as timeline 5 was created after the start of the outage.
The server is starting in standby mode. Remove standby.signal if you want to start it as a primary server.
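A minimal sketch of that, assuming you run it as the postgres user inside the pod and have confirmed the node should stay on timeline 4 ($PGDATA is the data directory):
# Check this node's own timeline first:
pg_controldata "$PGDATA" | grep -i timeline
# Remove the standby marker so the server starts as a primary on its timeline:
rm "$PGDATA"/standby.signal
pg_ctl -D "$PGDATA" start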

osm2pgsql import fails with "Failed to read from node cache: Input/output error"

I'm attempting a whole-planet OSM data import on an AWS EC2 instance. During or possibly after the "Ways" processing, I receive the following message:
"Failed to read from node cache: Input/output error"
The EC2 instance has the following specs:
type: i3.xlarge
memory: 30.5 GB
vCPUs: 4
PostgreSQL: v9.5.6
PostGIS: 2.2
In addition to the root volume, I have mounted a 900 GB SSD and a 2 TB HDD (high throughput). The PostgreSQL data directory is on the HDD, and I have configured osm2pgsql to write the flat-nodes file to the SSD.
Here is my osm2pgsql command:
osm2pgsql -c -d gis --number-processes 4 --slim -C 20000 --flat-nodes /data-cache/flat-node-cache/flat.nodes /data-postgres/planet-latest.osm.pbf
I run the above command as the user renderaccount, which is a member of the groups renderaccount, ubuntu, and postgres. The flat-nodes file appears to be created successfully at /data-cache/flat-node-cache/flat.nodes and looks like this:
ubuntu@ip-172-31-25-230:/data-cache/flat-node-cache$ ls -l
total 37281800
-rw------- 1 renderaccount renderaccount 38176555024 Apr 13 05:45 flat.nodes
Has anyone run into and/or resolved this? I suspect it may be a permissions issue. I notice now that, since this last osm2pgsql failure, the mounted SSD that holds the flat-nodes file has been remounted as a "read-only" file system, which sounds like something that may happen when there are I/O errors on the mounted volume(?).
Also, does osm2pgsql write a log from which I could get additional info?
UPDATE: dmesg output:
[ 6206.884412] blk_update_request: I/O error, dev nvme0n1, sector 66250752
[ 6206.890813] EXT4-fs warning (device nvme0n1): ext4_end_bio:329: I/O error -5 writing to inode 14024706 (offset 10871640064 size 8388608 starting block 8281600)
[ 6206.890817] Buffer I/O error on device nvme0n1, logical block 8281344
After researching the above output, it appears to be a bug in Ubuntu 16.04: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1668129?comments=all
This was an error in Ubuntu 16.04 when writing to the nvme0n1 volume. It was solved by the fix described in https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1668129/comments/29
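A quick way to confirm the read-only remount and the underlying I/O errors before retrying the import (device and mount point are the ones from the question above):
mount | grep /data-cache                 # look for "ro" in the mount options
dmesg | grep -iE 'nvme0n1|I/O error'     # kernel-side evidence of the failing device
# After applying the kernel fix from the bug report (and an fsck if needed):
sudo mount -o remount,rw /data-cache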

PostgreSQL - could not create shared memory segment: Cannot allocate memory

This question has been asked many times, but the response that worked for most people is not working for me, so I suspect something else is missing.
OS: $ cat /proc/version
Linux version 2.6.32-431.29.2.el6.x86_64 (mockbuild@x86-026.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Sun Jul 27 15:55:46 EDT 2014
When I try to start PostgreSQL, I get the following error:
[reportadmin /opt/birt/modules/BIRTiHub]$ ./startPostgreSQL.sh
Starting startpostgresql at Fri 05/15/2015 08:03:29
AC_SERVER_HOME=/opt/BIRTiHubFType/modules/BIRTiHub/iHub
2015-05-15 12:03:30 UTC 27379 LOG: loaded library "pg_stat_statements"
2015-05-15 12:03:30 UTC 27379 FATAL: could not create shared memory segment: Cannot allocate memory
2015-05-15 12:03:30 UTC 27379 DETAIL: Failed system call was shmget(key=8433001, size=290635776, 03600).
2015-05-15 12:03:30 UTC 27379 HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space, or exceeded your kernel's SHMALL parameter. You can either reduce the request size or reconfigure the kernel with larger SHMALL. To reduce the request size (currently 290635776 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
The PostgreSQL documentation contains more information about shared memory configuration.
pg_ctl: could not start server
Examine the log output.
Also, I have adjusted the shmmax and shmall values to be greater than and less than the requested value, respectively.
[root /opt/birt]$ sysctl -p
....
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 134217728
In other words:
shmall =134217728
Requested =290635776
shmmax =68719476736
Any idea what could be wrong?
D
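One thing worth double-checking in such cases: kernel.shmall is counted in pages (usually 4 kB), not bytes, so a quick sketch like this shows the effective limit in bytes (values taken from the sysctl output above):
getconf PAGE_SIZE                                              # typically 4096
echo $(( $(sysctl -n kernel.shmall) * $(getconf PAGE_SIZE) ))  # shmall limit in bytes
# 134217728 pages * 4096 bytes = 549755813888 bytes (512 GB), far above the
# requested 290635776 bytes, so if shmget still fails the machine may simply
# be out of memory or swap, as the HINT suggests.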

MongoDB 2.6.6 crashed with "Invalid access at address"

Running MongoDB on my company's QA env, I ran into this error in the log:
2015-02-22T04:48:06.194-0500 [rsHealthPoll] SEVERE: Invalid access at address: 0
2015-02-22T04:48:06.290-0500 [rsHealthPoll] SEVERE: Got signal: 11 (Segmentation fault).
Backtrace:0xf62526 0xf62300 0xf6241f 0x7fc70b581710 0xca12c2 0xca14e7 0xca3bb6 0xf02995 0xefb6d8 0xf9af1c 0x7fc70b5799d1 0x7fc70a2d28fd
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0xf62526]
/usr/bin/mongod() [0xf62300]
/usr/bin/mongod() [0xf6241f]
/lib64/libpthread.so.0(+0xf710) [0x7fc70b581710]
/usr/bin/mongod(_ZN5mongo21ReplSetHealthPollTask12tryHeartbeatEPNS_7BSONObjEPi+0x52) [0xca12c2]
/usr/bin/mongod(_ZN5mongo21ReplSetHealthPollTask17_requestHeartbeatERNS_13HeartbeatInfoERNS_7BSONObjERi+0xf7) [0xca14e7]
/usr/bin/mongod(_ZN5mongo21ReplSetHealthPollTask6doWorkEv+0x96) [0xca3bb6]
/usr/bin/mongod(_ZN5mongo4task4Task3runEv+0x25) [0xf02995]
/usr/bin/mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0x128) [0xefb6d8]
/usr/bin/mongod() [0xf9af1c]
/lib64/libpthread.so.0(+0x79d1) [0x7fc70b5799d1]
/lib64/libc.so.6(clone+0x6d) [0x7fc70a2d28fd]
It seems there's some segmentation fault in rsHealthPoll.
This is from a mongod instance running as part of a replica set in a shard-ready cluster (2 mongods + 1 arbiter running with config servers and mongos processes).
This DB mostly receives writes of new records, periodic updates that set a boolean to true for some records, and some reads driven by user activity querying it (a single collection at the moment).
Googling this error only gave me other, older, already-solved segmentation fault bugs in the MongoDB Jira.
Has anyone seen this recently or does anyone know the reason?

MongoDB: How to remove an index on a replica set?

I see that the MongoDB documentation says that an index is removed by calling db.accounts.dropIndex( { "tax-id": 1 } ), but it does not say whether the node needs to be removed from the replica set or not.
I tried taking a secondary node in the replica set offline, restarting it as a standalone node (on a different port), and dropping the index there.
But after bringing the node back into the replica set with the regular process (sudo service mongod start), the mongod process dies, saying the index got corrupted.
Thu Oct 31 19:52:38.098 [repl writer worker 1] Assertion: 15898:error in index possibly corruption consider repairing 382
0xdddd81 0xd9f55b 0xd9fa9c 0x7edb83 0x7fb332 0x7fdc08 0x9d3b50 0x9c796e 0x9deb64 0xac45dd 0xac58df 0xa903fa 0xa924c7 0xa71f6c 0xc273d3 0xc26b18 0xdab721 0xe26609 0x7ff4d05f0c6b 0x7ff4cf9965ed
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdddd81]
/usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9b) [0xd9f55b]
/usr/bin/mongod() [0xd9fa9c]
/usr/bin/mongod(_ZN5mongo11checkFailedEj+0x143) [0x7edb83]
/usr/bin/mongod(_ZNK5mongo12BucketBasicsINS_12BtreeData_V1EE11basicInsertENS_7DiskLocERiS3_RKNS_5KeyV1ERKNS_8OrderingE+0x222) [0x7fb332]
/usr/bin/mongod(_ZNK5mongo11BtreeBucketINS_12BtreeData_V1EE10insertHereENS_7DiskLocEiS3_RKNS_5KeyV1ERKNS_8OrderingES3_S3_RNS_12IndexDetailsE+0x68) [0x7fdc08]
/usr/bin/mongod(_ZNK5mongo30IndexInsertionContinuationImplINS_12BtreeData_V1EE22doIndexInsertionWritesEv+0xa0) [0x9d3b50]
/usr/bin/mongod(_ZN5mongo14IndexInterface13IndexInserter19finishAllInsertionsEv+0x1e) [0x9c796e]
/usr/bin/mongod(_ZN5mongo24indexRecordUsingTwoStepsEPKcPNS_16NamespaceDetailsENS_7BSONObjENS_7DiskLocEb+0x754) [0x9deb64]
/usr/bin/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbbPb+0x123d) [0xac45dd]
/usr/bin/mongod(_ZN5mongo11DataFileMgr16insertWithObjModEPKcRNS_7BSONObjEbb+0x4f) [0xac58df]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2eda) [0xa903fa]
/usr/bin/mongod(_ZN5mongo27updateObjectsForReplicationEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa924c7]
/usr/bin/mongod(_ZN5mongo21applyOperation_inlockERKNS_7BSONObjEbb+0x65c) [0xa71f6c]
/usr/bin/mongod(_ZN5mongo7replset8SyncTail9syncApplyERKNS_7BSONObjEb+0x713) [0xc273d3]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x48) [0xc26b18]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdab721]
/usr/bin/mongod() [0xe26609]
/lib64/libpthread.so.0(+0x7c6b) [0x7ff4d05f0c6b]
/lib64/libc.so.6(clone+0x6d) [0x7ff4cf9965ed]
Thu Oct 31 19:52:38.106 [repl writer worker 1] ERROR: writer worker caught exception: error in index possibly corruption consider repairing 382 on:
xxxxxxxx--deleted content related to the data...xxxxxxxxxxxxx
Thu Oct 31 19:52:38.106 [repl writer worker 1] Fatal Assertion 16360
0xdddd81 0xd9dc13 0xc26bfc 0xdab721 0xe26609 0x7ff4d05f0c6b 0x7ff4cf9965ed
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdddd81]
/usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xd9dc13]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x12c) [0xc26bfc]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdab721]
/usr/bin/mongod() [0xe26609]
/lib64/libpthread.so.0(+0x7c6b) [0x7ff4d05f0c6b]
/lib64/libc.so.6(clone+0x6d) [0x7ff4cf9965ed]
Thu Oct 31 19:52:38.108 [repl writer worker 1]
***aborting after fassert() failure
Thu Oct 31 19:52:38.108 Got signal: 6 (Aborted).
Is this due to dropping the index in offline mode on the secondary? Any suggestions on the proper way to drop the index are highly appreciated.
The proper way to remove an index from a replica set is to drop it on the primary. The idea of replication is that every node holds the same copy of the data (with small time lags), so whatever you do on the primary is copied to the secondaries: as soon as an operation finishes on the primary, it propagates to the secondaries.
If you remove an index on the primary, the index will be removed on the secondaries as well.
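For example, a minimal sketch (host and database name are placeholders; the collection and key are the ones from the question):
# Run the drop once, against the PRIMARY; it is written to the oplog and
# replicated to every secondary automatically.
mongo --host <primary-host> <database> --eval 'db.accounts.dropIndex({ "tax-id": 1 })'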