I’m trying to come up with a strategy to back up the data in my Apache Ignite cache, which is hosted as a StatefulSet in Google Cloud Kubernetes.
My Ignite deployment uses Ignite native persistence and runs a 3-node Ignite cluster backed by persistent volumes in Kubernetes.
I’m using a binaryConfiguration to store binary objects in cache.
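For context, the node configuration looks roughly like this (a simplified, illustrative sketch, not the full config):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Ignite native persistence, enabled on the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        // Binary configuration used to store binary objects in the cache.
        cfg.setBinaryConfiguration(new BinaryConfiguration());

        Ignition.start(cfg);
    }
}
```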
I’m looking for a reliable way to back up my ignite data and be able to restore it.
So far I’ve tried backing up just the persistence files and then restoring them.
It hasn’t worked reliably yet.
The issue I’m facing is that after a restore, cache data that isn’t a binary object (e.g. strings or numbers) is restored properly and I can access it just fine, but binary objects are not accessible. The binary objects appear to have been restored, yet I’m unable to fetch them.
The weird part is that after the restore, once I add a new binary object to the cache, all the restored data can be accessed normally.
Can anyone please suggest a reliable way to back up and restore ignite native persistence data?
You should either back up the ${ignite.work.dir}/marshaller directory, or call ignite.binary().type(KeyOrValue.class) for every type you have in the cache to prime the binary marshaller.
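For the second option, a minimal sketch of what that priming could look like on start-up after a restore (MyKey/MyValue and the config path are placeholders for your own types and node configuration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class PrimeBinaryMetadata {
    // Placeholders for the actual key/value classes stored in the cache.
    static class MyKey { int id; }
    static class MyValue { String payload; }

    public static void main(String[] args) {
        // Start (or connect to) a node with the same configuration as the cluster.
        Ignite ignite = Ignition.start("ignite-config.xml");

        // Registering each type re-creates the binary type metadata that would
        // otherwise have been read from ${ignite.work.dir}/marshaller.
        ignite.binary().type(MyKey.class);
        ignite.binary().type(MyValue.class);
    }
}
```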
Apache Ignite provides ACID transactions, which are pretty reliable. The cache also uses its own mechanism for primary and backup copies, and, assuming you have the WAL enabled, some data is also kept in memory.
The most likely thing happening is that you do your restore, and the moment you make an initial write, memory starts populating, which lets you see what's on disk (the cache). This is not really a supported restore mechanism (there isn't one in the docs), but it could work that way: after the restore, you run a minor, irrelevant sample write. I advise testing this thoroughly, though.
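If you do test that workaround, it boils down to something like the following once the restored nodes are back up (the cache name and key are placeholders, and again this is not a documented restore procedure):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class RestoreWarmupWrite {
    public static void main(String[] args) {
        // Join the restored cluster with the same node configuration.
        Ignite ignite = Ignition.start("ignite-config.xml");

        // With native persistence the cluster must be activated before caches are usable.
        ignite.cluster().active(true);

        // A throwaway write (in your case, adding a new binary object had the same effect),
        // removed immediately afterwards.
        IgniteCache<String, String> cache = ignite.cache("myCache");
        cache.put("__restore_warmup__", "noop");
        cache.remove("__restore_warmup__");
    }
}
```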
Related
I have a Neo4j database running on Kubernetes. I want to make scheduled backups for the database. I know that Neo4j provides a set of tools for backup and restore. However, Kubernetes VolumeSnapshot also looks viable for backup and restore.
I wonder if it's a good idea to use Kubernetes VolumeSnapshot to back up/restore Neo4j databases? Will it cause errors such as an inconsistent database state or a faulty-disk problem? Thanks.
Generally, if it is not supported by the database, then it is a bad idea.
Think of your database as being stored across:
Database files on disk
Page cache (in volatile memory)
Write ahead transaction logs on disk
A volume snapshot would not save enough information to get a consistent state of your database (unless the database is gracefully shut down).
Use the set of tools provided for backup/restore
I am using the Rundeck Docker container. It was running well for 2 months and suddenly it crashed. I lost all the data for a project that I had created using the CLI. Is there a way to change the default path used to store all project-related data, including job definitions, resources, etc.?
Out of the box, Rundeck stores project/job data in its internal H2 database. That database is intended only for testing purposes and will probably crash with a lot of data (storing projects on the filesystem is deprecated right now). The best approach is to use a "real" database like MySQL, PostgreSQL, or Oracle; that way Rundeck stores all project/job data on a robust backend.
Check these MySQL, PostgreSQL, and Oracle Docker environment examples.
Of course, having a backup policy for your instance would be ideal to keep all your instance data safe.
I have a web-app that depends on a read-only MongoDB database. Through trial and error, I discovered that by far the fastest way to run the ETL pipeline that populates the database is to run a local copy of MongoDB, populate the database, stop the database, and tarball the state directory.
To deploy a high-availability "cluster," I create multiple instances (or containers) running the app, each with access to a copy of the state in locally mounted storage. Putting these behind a load balancer with regular health checks and autoscaling (or in a Kubernetes cluster as a ReplicaSet), I get isolation, redundancy, easy rollbacks (using versioned storage), and easy setup in virtually any environment.
The key idea here is that because the database is read-only, it is in a sense a "stateless" application. Thus, I can treat it like any other static provider of information.
There are many apparent advantages to this setup. Nevertheless, I have always had a nagging feeling that I was missing something. Given a read-only context, is there still some reason why it might be better to run a "proper" MongoDB cluster?
If you don't mind outages when the single node goes down, and you don't mind taking the system down during upgrades, then this is probably an OK deployment. You might get a safer dump and restore using mongodump and mongorestore rather than tar, but apart from that this setup should work for a read-only deployment.
Is it possible to perform a backup on Kubernetes in application-consistent manner?
Some of the backup solutions I found mainly rely on freezing the pod and then launching the backup to maintain consistency (Heptio's Ark, for example).
The idea of application-consistent backups is to capture all data in memory and all transactions in process. This is performed by using some type of client software co-resident with the database application to quiesce the database application, flush its memory cache, complete all its writes in order and then perform the backup.
Kubernetes, in turn, operates on specifications of resources (e.g., Deployments, Services, etc.) and their statuses, and at any given time the resource status must match what is defined in the specification. For storing any important data in Kubernetes, persistent volumes are used. In other words, you cannot perform a backup in an application-consistent manner on Kubernetes itself, because its main idea is different.
It is possible that a specific tool exists for a specific database that allows this type of backup, but that is a property of the application, not of Kubernetes itself.
What's a quick and efficient way to transfer a large Mongo database?
I want to transfer a 10GB production Mongo 3.4 database to a staging environment for testing. I used the mongodump/mongorestore tools to test this transfer to my localhost, but it took over 8 hours and consumed a massive amount of CPU and memory, which is something I'd like to avoid in the future. The database doesn't have any indexes, so the mongodump option to exclude indexes doesn't increase performance.
My staging environment will mostly be read-only, but it will still need to write occasionally, so it can't be set up as a permanent read replica of production.
I've read about replica sets, but they seem very complicated to set up and designed for permanent mirroring of a primary to two or more secondaries. I've read some posts about people hacking this to be temporary, so they can do a one-time mirroring, but I can't find any reliable documentation since this isn't the intended usage of the feature. All the guides I've read also say you need at least 3 servers, which seems unintuitive since I only have 2 (production and staging) and don't want to create a third.
Several options exist today (2020-05-06).
Copy Data Directory
If you can take the system offline you can copy the data directory from one host to another then set the configuration to point to this directory and start up the new mongod.
Mongomirror
Mongomirror (https://docs.atlas.mongodb.com/import/mongomirror/) is intended as a tool to migrate from on-premises to Atlas, but it can also be leveraged to copy data to another on-premises host. Beware: this connection requires SSL to be configured on both source and target for the transfer.
Replicaset
MongoDB has built-in high-availability features using a replica set model (https://docs.mongodb.com/manual/tutorial/deploy-replica-set/). It is not overly complicated and works very well. This option allows the original system to stay online while replication does its magic. Once the replication completes, reconfigure the replica set to refer only to the new host and shut down the original host; this configuration is referred to as a single-node replica set. Having a single-node replica set offers benefits over a stand-alone installation in that the replica set underpinnings (the oplog) are the basis for other features such as change streams (https://docs.mongodb.com/manual/changeStreams/).
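As a small illustration of that last point: once the new host runs as a (single-node) replica set, you can open a change stream against it, which a standalone mongod does not support. A sketch using the MongoDB Java driver (host, replica set, database, and collection names are placeholders):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import org.bson.Document;

public class WatchChanges {
    public static void main(String[] args) {
        // Connect to the single-node replica set on the new host.
        try (MongoClient client = MongoClients.create(
                "mongodb://new-host:27017/?replicaSet=rs0")) {

            MongoCollection<Document> coll =
                    client.getDatabase("mydb").getCollection("mycoll");

            // Change streams are backed by the oplog, so they require a replica set
            // (even a single-node one); this would fail against a standalone mongod.
            for (ChangeStreamDocument<Document> change : coll.watch()) {
                System.out.println(change.getOperationType() + ": " + change.getDocumentKey());
            }
        }
    }
}
```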
Backup and Restore
As you mentioned, you can use mongodump/mongorestore. At some point the backup must be restored, and during that window the original system is expected to be offline and not accepting any additional writes. This method is robust but has downtime associated with it. You could use mongoexport/mongoimport with a JSON file as an intermediate step, but this is not recommended, as BSON data types could be lost in translation.
Per the MongoDB documentation, you should be able to cp/rsync the files to create a backup (if you are able to halt write operations temporarily on your production setup, or if you do this during a maintenance window):
https://docs.mongodb.com/manual/core/backups/#back-up-by-copying-underlying-data-files
Back Up with cp or rsync
If your storage system does not support snapshots, you can copy the files directly using cp, rsync, or a similar tool. Since copying multiple files is not an atomic operation, you must stop all writes to the mongod before copying the files. Otherwise, you will copy the files in an invalid state.
Backups produced by copying the underlying data do not support point-in-time recovery for replica sets and are difficult to manage for larger sharded clusters. Additionally, these backups are larger because they include the indexes and duplicate underlying storage padding and fragmentation. mongodump, by contrast, creates smaller backups.
FYI: for replica sets, the third "server" is an arbiter, which exists to break the tie when electing a new primary. It does not consume as many resources as the primary/secondaries. Since you are looking to create a staging environment, I would not recommend creating a replica set that includes the production and staging environments. Your primary instance could switch over to the staging instance, and clients who are meant to access the production instance would end up reading from and writing to the staging instance.