PostgreSQL 8.4 on 32-bit Windows: replication strategies to 64-bit Linux

So we have an application that uses PostgreSQL 8.4 on Windows (yeah, I know).
We have several of these apps in our country.
What we want to do is have a Linux server in a data centre that stores a full copy of the database, and have the data stream into it fairly regularly.
This doesn't need to be real-time, 100% consistent, but we want to get as close to that as possible, as we will use it to track sales data through the day.
The "slave" (data centre) doesn't need to do anything other than receive all the data; an app will then run some reports on it.
I've looked into Slony, pgpool, running 32-bit PostgreSQL on 64-bit Linux, etc., but it's a big area, so I'm looking for some advice on our less-than-ideal setup.

Your basic options are, as Craig pointed out, Bucardo, Londiste, and Slony. These are all somewhat complex to set up compared to streaming replication.
The big thing you can't do is use streaming replication or similar solutions. These apply architecture- (and major-version-) specific log files, so going across architectures will on good days simply not work and on bad days lead to data corruption on the slave. Don't do it.
These three solutions pull the data out in an architecture-independent format and send it through additional infrastructure to be saved on the slave. There are big tradeoffs here and I would recommend researching each option thoroughly before committing.
One thing to keep in mind is that the PostgreSQL community is typically quite adamant that no one-size-fits-all replication solution is possible, so the multitude of options means each solution is usually quite specialized.
Of these, Slony is probably the most configurable and Londiste is the simplest. They are for very different use cases, though. If I have time and nobody beats me to it, I may post a comparison of the three or at least link to others.
Update: Brief comparison.
Slony-I
Slony-I is the oldest and most powerful logical replication system available. I actually prefer to think of Slony-I as a replication toolkit rather than a solution. The toolkit approach offers incredible flexibility and the ability to solve all kinds of problems in complex environments. The downside of that flexibility is complexity. As I put it, "Slony will happily let you replicate only part of your database. On the other hand, Slony will happily let you replicate only part of your database." It is an extremely helpful solution and makes all kinds of things possible, but the complexity is much higher than the other solutions.
One major advantage of Slony, however, is that it has tools for managing DDL changes. Londiste and Bucardo do not, to my knowledge. This means that adding columns to tables is straightforward with Slony but not so much with the other systems.
Bucardo
This is somewhere between Londiste and Slony in complexity. Its primary useful feature is the ability to do multi-master replication between two masters. It uses Perl extensively. I don't know how well it has been tested on Windows, and this may be a drawback.
Londiste
Londiste is Skype's master-slave replication system built on pgq (basically an event queue connected to PostgreSQL, with events raised on database actions). It has a reputation of being easy to set up but of not readily protecting replicas against modification. This, of course, could be a feature or a bug depending on how you want to look at it.
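To make the queue-based model these tools share a little more concrete, here is a minimal sketch of the pattern in Python. This is not Londiste's, Slony's, or Bucardo's actual code or API; the connection strings and the replication_queue table (assumed to be filled by triggers on the master) are hypothetical, and error handling is omitted.

```python
# Hypothetical sketch of the queue-and-apply pattern used by trigger-based
# replication tools; not the actual code or API of Londiste/Slony/Bucardo.
import time
import psycopg2

master = psycopg2.connect("host=winbox dbname=app")       # assumed DSN (Windows master)
slave = psycopg2.connect("host=reporting dbname=app")     # assumed DSN (Linux slave)

def apply_pending_events(batch_size=500):
    """Pull queued change events from the master and replay them on the slave."""
    with master.cursor() as src, slave.cursor() as dst:
        # 'replication_queue' is a hypothetical table populated by triggers.
        src.execute(
            "SELECT event_id, sql_text FROM replication_queue "
            "ORDER BY event_id LIMIT %s", (batch_size,))
        rows = src.fetchall()
        for event_id, sql_text in rows:
            dst.execute(sql_text)                          # replay the change
        if rows:
            src.execute(
                "DELETE FROM replication_queue WHERE event_id <= %s",
                (rows[-1][0],))
    slave.commit()
    master.commit()

while True:
    apply_pending_events()
    time.sleep(10)   # near-real-time is plenty for daily sales reporting
```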

Related

What is better in my case: Postgres BDR or Postgres-XL?

Currently I am working on making a 2-node PostgreSQL cluster on bare-metal cloud. I am very confused about which approach I should go with.
One option is PostgreSQL BDR (bi-directional replication). With this approach, both of my nodes will have read and write access. But then I came across PostgreSQL-XL, which takes a sharding approach. Can anybody tell me, or help me decide, which approach I should go with? Will sharding benefit me or not? I want my Postgres to be highly available and fast. Which approach will help me in this regard?
Or any other suggestions that you want to give me.
One more thing: I want to make my cluster horizontally scalable.
The best solution in most cases is option (c): neither. Use stock PostgreSQL + active/standby failover.
I say that as a BDR developer. It's a great tool (in my opinion) for workloads that need it. But it comes with some considerable costs, like any multi-master system, and should not be used if you don't actually need it.
Most people who think they need multi-master, don't. Or rather, don't understand the impacts and trade-offs.
Read the BDR documentation on multi-master conflicts.
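To give a flavour of what "stock PostgreSQL + active/standby failover" looks like from the application side, here is a small sketch that asks each node whether it is the primary before sending writes. pg_is_in_recovery() is a standard PostgreSQL function; the hostnames and credentials are assumptions.

```python
# Minimal sketch: route writes to whichever node is currently the primary
# in a stock streaming-replication active/standby pair.
# Hostnames and credentials are placeholders.
import psycopg2

NODES = ["pg-node-1.example.com", "pg-node-2.example.com"]  # assumed hosts

def connect_to_primary(dbname="app", user="app"):
    for host in NODES:
        try:
            conn = psycopg2.connect(host=host, dbname=dbname, user=user)
        except psycopg2.OperationalError:
            continue                      # node down, try the next one
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            in_recovery = cur.fetchone()[0]
        if not in_recovery:               # False means this node is the primary
            return conn
        conn.close()                      # standby: keep looking
    raise RuntimeError("no primary found")

conn = connect_to_primary()
```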

Polyglot persistence for startup app

I'm in the process of developing my next app, and I'm really interested in using polyglot persistence. I like the idea of being able to query different data structures for different services. I essentially want to sync MongoDB, Neo4j/Titan, SQL, and maybe Cassandra/HBase.
Currently, I'm wrapping everything in a try/catch block and rolling them all back if one fails. However, this is taxing my write times. I've also looked into AMQP systems like Kafka or ZeroMQ, but these seem more big data centric, whereas my app is still small and I want to keep it efficient.
Has anyone had experience with this? Is a MQ a good idea for a small app or am I prematurely optimizing?
Thanks
I know quite a bit about ZeroMQ, but not a lot about the database servers you mention.
First, you're a bit confused about ZeroMQ. Although it is derived from experience with AMQP, it uses the ZMTP wire protocol, which was custom-designed during ZeroMQ's development (though other applications do now use it).
ZeroMQ is a small and very fast MQ library that is symmetrical for all nodes; it is very good for small apps. The problem here is that you need something on the other systems that talks ZMTP, whether it's ZeroMQ or a bridge. If you intend to make plugins or the like for the other systems, then fine.
I presume, though, that you are using JMS to talk to the other systems without intending to develop add-ons for them, in which case you're probably stuck with JMS. Kafka is a new one that I haven't caught up with, but RabbitMQ is a good, fast, and small broker, FWIW. There are a great many broker comparisons out there for you to find. Many are dodgy in the sense that one small tweak of a setting can affect the performance greatly, and they are not necessarily comparing apples with apples. If you want to compare broker performance in your environment, there isn't much of a shortcut to doing it yourself.
One thing that is confusing me is how you expect a broker to help your rollback performance. You'll still need to do the rollback in essentially the same manner, albeit asynchronously via the broker.
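For flavour, here is a minimal pyzmq sketch of the small, symmetric, brokerless style described above: the app emits one event per write and a separate worker replays it into the secondary stores. The endpoint and message shape are assumptions, and both ends are shown in one snippet only for brevity (they would normally be separate processes).

```python
# Minimal pyzmq sketch of a brokerless PUSH/PULL pipe.
# Endpoint and message shape are assumptions for illustration.
import zmq

ctx = zmq.Context.instance()

# Worker side: receives persistence events and fans them out to the
# secondary stores (Mongo, Neo4j, ...) asynchronously.
pull = ctx.socket(zmq.PULL)
pull.bind("tcp://*:5557")

# App side (normally a separate program): emits one event per write.
push = ctx.socket(zmq.PUSH)
push.connect("tcp://localhost:5557")
push.send_json({"op": "insert", "collection": "users", "doc": {"id": 1}})

event = pull.recv_json()
print("replaying %s into secondary stores" % event["op"])
```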
I work at CloudBoost.io (https://www.cloudboost.io) and we build a layer that sits on top of databases and gives you the power of polyglot persistence. We integrate MongoDB, ElasticSearch, Redis, Cassandra, and Neo4j and give you one single API where you can query / store your data. We automatically shard your data into various databases based on your query / storage request patterns.
Let me know if this helps. :)
For syncing data from MongoDB to Neo4j there is now the Neo4j Doc Manager project. It works by monitoring MongoDB for operations, converting each document operation into a property graph model, and immediately writing it to Neo4j.
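The rough idea behind that kind of sync can be sketched as follows; this is not the Doc Manager's actual code. It tails the MongoDB oplog (which only exists when MongoDB runs as a replica set) and mirrors inserts into Neo4j. Connection strings, the app.users namespace, and the User label are assumptions.

```python
# Rough sketch of an oplog-driven Mongo -> Neo4j sync
# (not the Neo4j Doc Manager's actual code).
import pymongo
from neo4j import GraphDatabase

mongo = pymongo.MongoClient("mongodb://localhost:27017", replicaSet="rs0")  # assumed
neo = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))  # assumed

def copy_inserts_forever():
    oplog = mongo.local.oplog.rs
    # Tail only insert operations on a hypothetical app.users collection.
    cursor = oplog.find({"op": "i", "ns": "app.users"},
                        cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
    with neo.session() as session:
        for entry in cursor:
            doc = entry["o"]          # the inserted document
            session.run(
                "MERGE (u:User {mongo_id: $id}) SET u.name = $name",
                id=str(doc["_id"]), name=doc.get("name"))

copy_inserts_forever()
```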

Best suited NoSQL database for Content Recommender

I am currently working on a project which includes migrating a content recommender from MySQL to a NoSQL database for performance reasons. Our team has been evaluating some alternatives like MongoDB, CouchDB, HBase and Cassandra. The idea is to choose a database that is capable of running on a single server or in a cluster.
So far we have discarded the use of HBase due to its dependency on a distributed environment. Even with the intention of scaling horizontally, we need to run the DB on a single server for a little while in production. MongoDB was also discarded because it does not support map/reduce features.
We still have 2 alternatives and no solid background on which to decide. Any guidance or help is appreciated.
NOTE: I do not intend to start a religion-like discussion based on unfounded arguments. It is a strictly technical question to be discussed in the problem's context.
Graph databases are usually considered best suited for recommendation engines, since a lot of recommendation algorithms are actually graph-based. I recommend looking into Neo4j - it can handle billions of nodes/edges on a single machine, and it supports a so-called high-availability mode, which is a master-slave setup with automatic master selection.
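As an illustration of why the graph model fits recommendations, here is a simple "people who liked X also liked Y" query run through Neo4j's official Python driver. The User/Item labels and the LIKES relationship are assumptions about your data model, as are the connection details.

```python
# Sketch of a collaborative-filtering style recommendation query in Neo4j.
# Labels, relationship type, and connection details are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

RECOMMEND = """
MATCH (me:User {id: $user_id})-[:LIKES]->(item:Item)
      <-[:LIKES]-(other:User)-[:LIKES]->(rec:Item)
WHERE NOT (me)-[:LIKES]->(rec)
RETURN rec.id AS item, count(*) AS score
ORDER BY score DESC
LIMIT 10
"""

with driver.session() as session:
    for record in session.run(RECOMMEND, user_id=42):
        print(record["item"], record["score"])
```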

Is CouchDB a good persistent layer for Membase?

Membase is great for social games due to its low latency.
As I understand it, CouchDB is an MVCC system using a B+ tree, with a focus on an append-only design.
(http://guide.couchdb.org/draft/btree.html)
One of the most important scenarios for Membase is social games.
Social games have a lot of write operations (50+%).
And a good portion of them are in-place updates.
So why is CouchDB a suitable persistent layer for Membase?
I'd also add that CouchDB's append-only log format really doesn't have much relation to whether application writes are new items or updates. The append-only format gives us much better reliability and performance than an in-place system (like sqlite...which is still quite reliable). It's also much easier to take backups of.
Does Membase NEED an append-only log format? maybe not...does it NEED CouchDB?...YES!
The benefits of map-reduce and indexing as well as eventually consistent replication that CouchDB brings are nothing less than huge for Membase...and the benefits of low-latency, clustering and UI that Membase brings to CouchDB are arguably just as important.
(Disclosure: I work for Couchbase)
Perry Krug
CouchDB has great file formats, great ability to recover from crashes, sophisticated authentication and authorization tools, and a universal, standard, interface: HTTP. CouchDB is poor at low-latency queries, optimized memory utilization, and heavy update speeds (a million per second).
Membase currently has only a simple SQLite file format for persistence and less sophisticated authentication and authorization, and it uses a more obscure protocol. Membase is amazing for low-latency queries, ideal memory utilization, and heavy update speeds.
I think the two complement each other very well. Since the merging effort is coming from core developers in both projects, collaborating together, I expect to see the strengths of both and the weaknesses of neither. Yes, CouchDB is a good persistence layer for Membase.
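To show what that universal HTTP interface looks like in practice, here is a tiny sketch using Python's requests against CouchDB's standard document API. The server URL, credentials, and database name are assumptions.

```python
# Tiny illustration of CouchDB's plain-HTTP document API.
# Server URL, credentials, and database name are assumptions.
import requests

BASE = "http://admin:secret@localhost:5984"

requests.put(f"{BASE}/games")                                   # create a database
resp = requests.post(f"{BASE}/games",
                     json={"player": "alice", "score": 120})    # create a document
doc_id = resp.json()["id"]
doc = requests.get(f"{BASE}/games/{doc_id}").json()             # read it back
print(doc["player"], doc["_rev"])                               # every doc carries a revision
```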
Money speaks, and if there ever was a vote of confidence then here it is, not only from a new lead investor but also from the existing ones.
http://www.couchbase.com/press-releases/couchbase-series-C
Besides, don't you think that Membase itself is more than well enough qualified to make an evaluation for such a merger decision?

How best to integrate several systems?

OK, where I work we have a fairly substantial number of systems, written over the last couple of decades, that we maintain.
The systems are diverse in that multiple operating systems (Linux, Solaris, Windows), multiple databases (several versions of Oracle, Sybase, and MySQL), and even multiple languages (C, C++, JSP, PHP, and a host of others) are used.
Each system is fairly autonomous, even at the cost of entering the same data into multiple systems.
Management recently decided that we should investigate what it will take to get all the systems happily talking to each other and sharing data.
Keep in mind that while we can make software changes to any of the individual systems, a complete rewrite of any one system (or more) is not something management is likely to entertain.
The first thought of several of the developers here was the straightforward one: if system A needs data from system B, it should just connect to system B's database and get it. Likewise, if it needs to give B data, it should just insert it into B's database.
Due to the mess of databases (and versions) used, other developers were of the opinion that we should have one new database, combining the tables from all the other systems to avoid having to juggle multiple connections. By doing this they hope that we might be able to consolidate some tables and get rid of the redundant data entry.
This is about the time I was brought in for my opinion on the whole mess.
The whole idea of using the database as a means of system communication smells funny to me. Business logic will have to be placed into multiple systems (if System A wants to add data to System B, it had better understand B's rules concerning the data before doing the insert), several systems will most likely have to do some form of database polling to find any changes to their data, and continuing maintenance will be a headache, as any change to a database schema now propagates to several systems.
My first thought was to take the time and write APIs/Services for the different systems, which once written could be easily used to pass/retrieve data back and forth. A lot of the other developers feel that is excessive and far more work than just using the database.
So what would be the best way to go about getting these systems to talk to each other?
Integrating disparate systems is my day job.
If I were you, I would go to great effort to avoid accessing System A's data from directly within System B. Updating System A's database from System B is extremely unwise. It is exactly the opposite of good practice to make your business logic so diffuse. You will end up regretting it.
The idea of the central database isn't necessarily bad ... but the amount of effort involved is probably within an order of magnitude of rewriting the systems from scratch. It is certainly not something I would attempt, at least in the form you describe. It can succeed, but it is much, much harder and it takes a lot more discipline than the point-to-point integration approach. It's funny to hear it suggested in the same breath as the 'cowboy' approach of just shoving data directly into other systems.
Overall your instincts seem pretty good. There are a couple of approaches. You mention one: implementing services. That's not a bad way to go, especially if you need updates in real time. The other is a separate integration application that is responsible for shuffling the data around. That's the approach I usually take, but usually because I can't change the systems I'm integrating to ask for the data it needs; I have to push the data in. In your case the services approach isn't a bad one.
One thing I would like to say that might not be obvious to someone coming to system integration for the first time is that every piece of data in your system should have a single, authoritative point of truth. If the data is duplicated (and it is duplicated), and the copies disagree with each other, the copy in the point of truth for that data must be taken to be correct. There is just no other way to integrate systems without having the complexity scream skyward at an exponential rate. Spaghetti integration is like spaghetti code, and it should be avoided at all costs.
Good luck.
EDIT:
Middleware addresses the problem of transport, but that is not the central problem in integration. If the systems are close enough together that one app can shove data directly in to another, they're probably close enough that a service offered by one can be called directly by another. I wouldn't recommend middleware in your case. You might get some benefit from it, but that would be outweighed by the increased complexity. You need to solve one problem at a time.
Sounds like you may want to investigate Message Queuing and message-oriented middleware.
MSMQ and the Java Message Service (JMS) are examples.
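To give a flavour of what message-oriented integration looks like in code, here is a minimal sketch using RabbitMQ's Python client pika, standing in for MSMQ or JMS rather than showing either of them. The queue name and payload are assumptions.

```python
# Flavour of message-oriented integration using RabbitMQ's pika client
# (standing in for MSMQ/JMS). Queue name and payload are assumptions.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="customer-updates", durable=True)

# System A publishes a change event instead of writing into System B's tables.
channel.basic_publish(
    exchange="",
    routing_key="customer-updates",
    body=json.dumps({"customer_id": 17, "email": "new@example.com"}),
    properties=pika.BasicProperties(delivery_mode=2))   # persist the message

# System B (normally a separate process) consumes and applies its own rules.
def handle(ch, method, properties, body):
    event = json.loads(body)
    print("applying update for customer", event["customer_id"])
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="customer-updates", on_message_callback=handle)
channel.start_consuming()
```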
It seems you are looking for opinions, so I will provide mine.
I agree with the other developers that writing an API for all the different systems is excessive. You would likely get it done faster and have much more control over it if you just take the other suggestion of creating a single database.
One of the challenges you will face is aligning the data in each of the different systems so that it can be integrated in the first place. It may be that each of the systems you want to integrate holds entirely different sets of data, but more likely the data overlaps. Before diving into writing APIs (which is the route I would take as well, given your description), I would recommend that you try to come up with a logical data model for the data that needs to be integrated. This data model will then help you leverage the data you have in the different systems and make it more useful to the other databases.
I would also highly recommend an iterative approach to the integration. With legacy systems there is so much uncertainty that trying to design and implement it all in one go is too risky. Start small and work your way to a reasonably integrated system. "Fully integrated" is hardly ever worth aiming for.
Directly interfacing via pushing/poking databases exposes a lot of internal detail of one system to another. There are obvious disadvantages: upgrading one system can break the other. Moreover, there can be technical limitations in how one system can access the database of the other (consider how an application written in C on Unix will interact with a SQL Server 2005 database running on Windows 2003 Server).
The first thing you have to decide is the platform where the "master database" will reside, and the same for the middleware providing the much-required glue. Instead of going towards API-level middleware integration (such as CORBA), I would suggest you consider message-oriented middleware. MS BizTalk, Sun's eGate and Oracle's Fusion are some of the options.
Your idea of a new database is a step in the right direction. You might like to read a little bit on Enterprise Entity Aggregation pattern.
A combination of "data integration" with a middleware is the way to go.
If you are going towards Middleware + Single Central Database strategy, you might want to consider achieving this in multiple phases. Here's a logical stepped process which can be considered:
Implementation of services/APIs for the different systems which expose the functionality of each system
Implementation of middleware which accesses these APIs and provides an interface for all the systems to access data/services from the other systems (it gets data from the central source if available, else from the owning system)
Implementation of the central database only, without data
Implementation of caching/data-storage services at the middleware level which can store/cache data in the central database whenever that data is accessed from any of the systems; e.g. if System A's records 1-5 are fetched by System B through the middleware, the middleware's caching service can store those records in the centralized database, and the next time they will be served from the central database (see the sketch at the end of this answer)
Data cleansing can happen in parallel
You can also create an import mechanism to push data from multiple systems to the central database on a daily basis (automated or manual)
This way, the effort is distributed across multiple milestones and data is gradually stored in the central database on a first-accessed, first-stored basis.
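To make the caching step concrete, here is a hypothetical read-through sketch for the middleware layer: check the central database first, fall back to the owning system on a miss, then store the result centrally for next time. All connection strings, table names, and helper names are assumptions.

```python
# Hypothetical read-through caching sketch for the middleware layer.
# Connection strings, table names, and helper names are assumptions.
import psycopg2

central = psycopg2.connect("host=central-db dbname=integration")

def fetch_from_system_a(record_id):
    """Placeholder for a call to System A's service/API."""
    raise NotImplementedError

def get_record(record_id):
    with central.cursor() as cur:
        cur.execute("SELECT payload FROM cached_records WHERE id = %s",
                    (record_id,))
        row = cur.fetchone()
        if row:                                   # already cached centrally
            return row[0]
        payload = fetch_from_system_a(record_id)  # cache miss: go to the source
        cur.execute(
            "INSERT INTO cached_records (id, payload) VALUES (%s, %s)",
            (record_id, payload))
    central.commit()
    return payload
```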