On Sunday we upgraded our RDS instance from PostgreSQL 9.6 to 10.3. Since then, we have been seeing much higher CPU consumption.
I've checked, and it looks like the two versions ship with a few different default parameters. Does anyone know which one could be causing the difference in CPU usage?
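In case it helps anyone else look into this: a quick way to see what changed is to dump each version's built-in defaults from pg_settings and diff them. This is just a sketch; the host, user, and database names are placeholders for your own RDS setup.

    # Dump the built-in defaults on the 10.3 instance; repeat against the
    # old 9.6 instance (if you still have one), then diff the two files.
    psql -h my-rds-host -U master -d mydb -At \
      -c "SELECT name, boot_val FROM pg_settings ORDER BY name" > defaults-10.3.txt
    diff defaults-9.6.txt defaults-10.3.txt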
AFAIK the documentation states:
In general, log shipping between servers running different major PostgreSQL release levels is not possible. It is the policy of the PostgreSQL Global Development Group not to make changes to disk formats during minor release upgrades, so it is likely that running different minor release levels on primary and standby servers will work successfully. However, no formal support for that is offered and you are advised to keep primary and standby servers at the same release level as much as possible.
But my question is: does the disk format actually change between 9.4.9 and 9.5.6?
We are currently running with:
PostgreSQL 9.4.9 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit
Debian GNU/Linux 8.6 (jessie)
And the 'next' possible step would be using the version from this repo:
http://apt.postgresql.org/pub/repos/apt/
Our current DB is about 2TB, so we'd like to try a replication-like approach for a smoother transition, rather than using a full pg_dump, which would require quite a while with the DB frozen.
does the disk format actually change between 9.4.9 and 9.5.6
Yes. Until the upcoming PostgreSQL 10, PostgreSQL used a wacky versioning scheme where "x.y" was the "major" version and the third number was the minor version.
So 9.4 and 9.5 are different major versions. They are definitely not on-disk compatible.
To upgrade you can:
Dump and reload
Use pg_upgrade (the officially recommended way; see the sketch after this list)
Use pglogical
Use Londiste
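For the pg_upgrade route, a minimal sketch, assuming the directory layout used by the apt.postgresql.org packages; the paths and cluster names are illustrative, and on Debian the pg_upgradecluster wrapper from postgresql-common may be more convenient:

    # Both sets of binaries must be installed and both clusters stopped.
    /usr/lib/postgresql/9.5/bin/pg_upgrade \
      --old-bindir=/usr/lib/postgresql/9.4/bin \
      --new-bindir=/usr/lib/postgresql/9.5/bin \
      --old-datadir=/var/lib/postgresql/9.4/main \
      --new-datadir=/var/lib/postgresql/9.5/main \
      --link

With --link, files are hard-linked instead of copied, which for a 2TB cluster means minutes of downtime instead of hours; the trade-off is that the old cluster must not be started again once the new one has run.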
Is it possible to install a MongoDB version greater than 3.2 on a Raspberry Pi running Raspbian Jessie Lite?
I have only managed to get version 2.1, using this tutorial:
http://www.widriksson.com/install-mongodb-raspberrypi/
I have tried a lot of tutorials, but it has been impossible to find one that works for later versions.
As already noted in the comments, you are limited to the 32-bit version, which comes with severe drawbacks:
The data you can store is less than 2GB (potentially a lot less), since MMAPv1 is limited to that maximum file size: it makes heavy use of memory mapping, and the addressable space on a 32-bit machine is simply very limited.
The WiredTiger storage engine is not available. It offers compression and hence would be especially interesting with limited resources.
MongoDB needs RAM: the more, the better. Indexes need it, connections need it desperately, and the memory mapping makes good use of it. And we are only on 32 bits here. MongoDB Inc. decided against creating workarounds for a dying technology, so do not expect this to change.
The biggest drawback, however, is that journaling and replication are basically no-gos, since they further limit the amount of data you can store. No journaling translates to limited durability of your data (unless you are willing to force each write to be synced to disk with an appropriate write concern, as sketched after this list), while the lack of replication, and the resulting lack of failover capability, is most likely less of a concern on a Raspberry Pi.
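For illustration, forcing that per-write sync from the mongo shell could look like the following. This is a hypothetical example (the database, collection, and document are made up), and it leans on the legacy fsync write-concern flag, which builds of that era accepted when journaling was off; with journaling enabled you would use j: true instead.

    # Only acknowledge the write once it has been flushed to the data files.
    mongo --eval 'db.getSiblingDB("test").events.insert(
        {ts: new Date()},
        {writeConcern: {w: 1, fsync: true}})'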
MongoDB Inc strongly advises against using the 32-bit version for other than testing purposes. And they do so for a good reason. Personally, my generated test data by far exceeds the limitations of the 32-bit version.
So yes, it should be technically possible (and even with no package at hand: compiling MongoDB is no rocket science). Is it a good idea? Well, not so much, if you ask me.
I am the author of the blog http://www.clarenceho.net/2015/12/building-mongodb-30x-for-arm-armv7l.html mentioned by @user3343399.
Just to add that the latest Arch Linux ARM build of MongoDB 3.2.0 seems to be working fine, except that the default storage engine was compiled as WiredTiger even though WiredTiger has no 32-bit support. You will need to add the parameter --storageEngine=mmapv1.
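Assuming that build otherwise behaves like a stock mongod, starting it would look something like this (the dbpath is illustrative):

    # Force the MMAPv1 engine, since the 32-bit build has no working WiredTiger.
    mongod --storageEngine=mmapv1 --dbpath /data/db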
Recently I upgraded one of our replica sets from MongoDB version 2.4.3 to 2.6.3. It was a simple restart of the three replicas after upgrading the binaries.
After a few days, the same queries that had been running for nearly a year on 2.4.3 started to build up, causing very high load on the server. The load average, which used to stay below 1 at all times, now spiked to over 300. The quick fix was to fail over and restart the mongod process, which would bring the load down, but the new primary would behave the same way within a few hours and I would be forced to fail over and restart again. After a few such occurrences I downgraded the replicas to 2.4.10, and this seems to have resolved the load issue and the queue build-up.
Has anyone experienced a similar problem and can confirm this behaviour?
I've got two PostgreSQL 9.2.4 servers running on 32-bit Suse.
Failover is configured using a shared storage device.
I'd like to upgrade to 64-bit Ubuntu machines using PostgreSQL's streaming replication while keeping the database service available. To do that would mean temporarily having failover between a 32-bit and 64-bit system.
I've read a lot of documentation for PostgreSQL & PostgreSQL replication.
It's clear that PostgreSQL doesn't handle streaming replication between 32 & 64 bit systems. It's not as clear if it can handle shared storage between 32 & 64 bit systems. I'm pessimistic, but wanted to check.
Yes, you can, with the caveats that you must use a 32-bit PostgreSQL build on your 64-bit system, that it must be the same major release (e.g. both 9.2 or both 9.3), and that it must be compiled with the same settings for integer_datetimes etc.
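One way to check that the builds really match is to compare pg_controldata output on both machines; a sketch, with an illustrative data directory path:

    # Fields such as the catalog version, block size, maximum data alignment
    # and date/time type storage must be identical on both sides.
    pg_controldata /var/lib/postgresql/9.2/main \
      | grep -E 'Catalog version|block size|data alignment|type storage'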
Modern Debian/Ubuntu, like all 64-bit Red Hat variants, supports a multi-arch install where 32-bit and 64-bit binaries can live side by side. So you should be able to simply apt-get install the 32-bit PostgreSQL on your 64-bit system.
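Something along these lines, with the caveat that the exact package name is an assumption and depends on what your repository actually ships:

    # Enable the i386 architecture, then install the 32-bit server package.
    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get install postgresql-9.2:i386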
That said, I strongly suggest relying on streaming replication instead. Shared-storage failover is very risky: if you have any problems with fencing and STONITH, you will get extremely severe data corruption. It also protects against fewer classes of problems than replication does.
Actually, it's even possible that 32-bit Suse's version of PostgreSQL and 32-bit Ubuntu's aren't compatible. Not likely, but it depends on what options they chose during compilation.
So - no.
If you really want to have complete availability you'll need to look at one of the trigger-based replication systems (slony / londiste / bucardo). These can replicate between different installations of PostgreSQL regardless of on-disk format.
Of course, this means having two sets of data.
It does allow for an uninterrupted upgrade, though, so you can consider switching to the latest 9.3 at the same time.
I currently run Postgres 8.4 on CentOS 6. I need to migrate to Postgres 9.1 on a Windows machine. I have my reasons for this... Anyway, what is the best way to move the data from one DB to the other without interrupting service or losing any functionality, particularly with PostGIS? The PostGIS version (2.0) that installs with 9.1 has some features that I want to take advantage of, but at the same time I don't want to lose any of the features in 8.4. Can anyone provide some insight into this?
As far as I know, the only thing that will let you upgrade without downtime is a trigger-based replication system such as Slony-I.
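To give a feel for what that involves, here is a very rough Slony-I sketch; the cluster, host, and table names are all made up, a real setup needs one "set add table" entry per table plus sequence handling, and a slon daemon has to run for each node. Save it as migrate.slonik and run it with: slonik migrate.slonik

    cluster name = pg_migration;
    node 1 admin conninfo = 'dbname=gisdb host=old-centos user=slony';
    node 2 admin conninfo = 'dbname=gisdb host=new-windows user=slony';
    init cluster (id = 1, comment = 'origin: 8.4');
    store node (id = 2, comment = 'target: 9.1', event node = 1);
    store path (server = 1, client = 2, conninfo = 'dbname=gisdb host=old-centos user=slony');
    store path (server = 2, client = 1, conninfo = 'dbname=gisdb host=new-windows user=slony');
    create set (id = 1, origin = 1, comment = 'tables to replicate');
    set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.parcels');
    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);

Once the subscriber has caught up, the outage is just the brief window needed to stop writes and point the application at the new server, rather than the time for a full copy.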