I am currently working with Couchbase Server 1.8.1 and am in the process of upgrading to version 2.2.
We want to dump all the keys from Couchbase 1.8.1 to a text file, then iterate over that file and copy all the data to the new Couchbase 2.2 cluster.
The reason we chose this method instead of backup and restore is that our server does not respond well to backups and there is a risk of the server failing.
Can you help me figure out how to create this dump file from couchbase bucket files?
In addition to what Dave posted, I recommend reading this blog post: http://blog.couchbase.com/Couchbase-rolling-upgrades
Also, there are some unique considerations when upgrading from 1.8.1 to 2.x, so make sure you read the documentation Dave linked to.
Note you can upgrade an existing cluster online (without having to manually copy data to a new 2.2 cluster) - see http://docs.couchbase.com/couchbase-manual-2.5/cb-install/#upgrading
We use this script: CouchbaseDump
It works and helps us get the keys from the SQLite files.
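If you cannot use that script, the same idea can be sketched directly in Python. This is only a hedged sketch under assumptions: it presumes the 1.8.1 bucket data files are SQLite databases with a kv table whose k column holds the document key (which is what CouchbaseDump relies on), and the data path is a placeholder; verify the schema and file locations on your own installation first.

import glob
import sqlite3

# Placeholder path; point this at your bucket's data files (e.g. default-0.mb ... default-3.mb).
DATA_FILES = "/opt/couchbase/var/lib/couchbase/data/default-data/default*"

with open("keys.txt", "w") as out:
    for path in glob.glob(DATA_FILES):
        conn = sqlite3.connect(path)
        try:
            # Assumption: each shard exposes a "kv" table with the key in column "k".
            for (key,) in conn.execute("SELECT k FROM kv"):
                if isinstance(key, bytes):
                    key = key.decode("utf-8", "replace")
                out.write(str(key) + "\n")
        finally:
            conn.close()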
I am trying to configure Feast with PostgreSQLSource as both the online and offline source. I have created a table in the database and edited the feature_store.yaml file with the proper credentials. I can successfully generate feature views and deploy the infrastructure.
But when I run the feast materialize command, it throws an AssertionError for offline_stores. What might be the possible error/mistake, and how can I resolve it?
Thank you
I faced the same issue recently when I tried using PostgreSQL as the data source, online store, and offline store by editing the feature_store.yaml file. Postgres support as a registry, online store, and offline store is officially available as of Feast version 0.21.0.
So if you use an older version with PostgreSQL you will face this issue; instead of editing feature_store.yaml by hand, just use postgres as the template when running feast init.
Refer:
https://github.com/feast-dev/feast/releases
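For reference, initializing a repo from the postgres template produces a feature_store.yaml roughly like the hedged sketch below (Feast >= 0.21.0). The host, database, and credential values are placeholders you will need to adapt; check the generated file for the exact field names in your Feast version.

feast init -t postgres my_feature_repo

# feature_store.yaml (sketch; placeholder connection settings)
project: my_feature_repo
provider: local
registry: data/registry.db
online_store:
    type: postgres
    host: localhost
    port: 5432
    database: feast
    db_schema: public
    user: feast
    password: feast
offline_store:
    type: postgres
    host: localhost
    port: 5432
    database: feast
    db_schema: public
    user: feast
    password: feast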
For simplicity and cost, we are starting our project using local MySQL running on our GCE instances. We will want to switch to CloudSQL some months down the road.
Any advice on avoiding MySQL version conflicts/challenges would be much appreciated!
The majority of the documentation is for MySQL 5.7, so my advice is to use that version and to review the Migrating to Cloud SQL concepts page, a guide that walks you through how to migrate safely, which migration methods exist, and how to prepare your MySQL database.
Another piece of advice is to work through the Migrating MySQL to Cloud SQL using an automated workflow tutorial. That guide notes that any MySQL database running version 5.6 or 5.7 can take advantage of the Cloud SQL automated migration workflow, and it shows how the workflow operates and how to deploy a source MySQL database on Compute Engine. The Cloud SQL documentation page offers more tutorials if you want to learn more.
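If you later go the manual route instead of the automated workflow, the usual method is a mysqldump export that Cloud SQL can import. The command below is only a hedged sketch with placeholder names (USER, DB_NAME); confirm the required flags against the Cloud SQL import documentation for your MySQL version.

# Export from the MySQL 5.7 instance running on GCE.
# --single-transaction keeps the dump consistent for InnoDB without locking tables;
# --hex-blob and --set-gtid-purged=OFF are commonly required for Cloud SQL imports.
mysqldump -u USER -p \
  --databases DB_NAME \
  --single-transaction \
  --hex-blob \
  --set-gtid-purged=OFF \
  > db_name_dump.sql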
Finally, I suggest you check the Cloud SQL pricing page to be aware of the billing, and also that you create a workspace; that way you have more transparency and more control over your billing charges by identifying and tuning the services that are producing the most log entries.
I hope all of this information is helpful.
Similar to this guide on medium.com, I want to program a streaming server, but I do not know how to store or upload my files to the database. I want to test whether I can stream an mp3 file, so I want to upload two files with GridFS. Can you help me by explaining how I can upload a file to my MongoDB via GridFS?
Best regards
I asked for your version in the comments because, starting with version 4.4, the database tools need to be downloaded separately. Earlier they were part of the MongoDB installation.
This is from the official website:
Starting with MongoDB 4.4, the MongoDB Database Tools are now released separately from the MongoDB Server and use their own versioning, with an initial version of 100.0.0. Previously, these tools were released alongside the MongoDB Server and used matching versioning.
So go ahead and download database-tools from here :-
https://www.mongodb.com/try/download/database-tools?tck=docs_databasetools
That download contains all of the database tools, including mongofiles.
Open a new command prompt and cd to the location where you downloaded the database tools.
From there, run the following command. Replace <DB_NAME> with your database name and <PATH_TO_FILE> with the absolute/relative path to the track you want to store.
mongofiles -d=<DB_NAME> put <PATH_TO_FILE>
e.g: mongofiles -d=testDb put C:\Music\1track1.mp3
To verify, you can connect to your DB using Compass and check the fs.files and fs.chunks collections.
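If you prefer to do the upload from application code rather than the mongofiles CLI, PyMongo's gridfs module does the same thing. A minimal hedged sketch, assuming a local MongoDB and a testDb database (the file path and names are placeholders):

import gridfs
from pymongo import MongoClient

# Assumption: MongoDB on the default local port, database "testDb".
client = MongoClient("mongodb://localhost:27017")
db = client["testDb"]
fs = gridfs.GridFS(db)

# GridFS splits the file into fs.chunks and records its metadata in fs.files.
with open("C:/Music/1track1.mp3", "rb") as audio:
    file_id = fs.put(audio, filename="1track1.mp3", contentType="audio/mpeg")

print("stored file with _id:", file_id)

# Reading it back later (e.g. when streaming) is the reverse:
# data = fs.get(file_id).read()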
I'm about to upgrade a sharded MongoDB environment from 2.0.7 to 2.2.9, ultimately I want to upgrade to 2.4.9 but apparently I need to do this via 2.2. The release notes for 2.2 state that the config servers should have their binaries upgraded first, then the shards. I currently have the config instances using the same Mongo binary as the data instances. Essentially there are three shards each with three replicas, and one replica out of each shard also functions as a config instance. Since they share a binary I can't upgrade the config instances independent of some of the data instances.
Would upgrading some data instances before all of the config instances cause any problems, assuming I've disabled the balancer?
Should I change the config instances to use a different copy of the binary? If so, what's the best way to go about this for an existing production setup running on Ubuntu 12?
Should I remove the three data instances from the replica sets, upgrade the config instances, then start the data instances up again, effectively updating them as well, but in the right order? This last option is a bit hairy as some are primaries, so I would have to step them down before removing them from the replica sets. This last option would also occur again when I have to do the next upgrade, so I'm not really a fan.
I resolved this issue by:
Adding the binaries for the new version to a new folder.
Restarting the config instances using the new binaries so that the data instances could continue to run with the old binaries.
Once all of the config servers were upgraded, I created yet another folder in which to put the same new binaries from step 1.
I then restarted the data instances using these new binaries.
Now the config instances and data instances on the same server are using the new binaries, but in different folders, so it will be easier to upgrade them for the next release.
Note that there are other steps involved with the upgrade, and these are specified in the release notes which you should always follow. However, this is how I dealt with the shared binary problem which is not directly addressed in the release notes.
A lot of the tutorials seem to use a single binary for data and config instances on a single server but this is problematic when it's time to upgrade. I'd suggest always using separate binaries for your config and data instances.
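To make the separate-binary layout concrete, here is a hedged sketch of how the two instances on one host might be started once each has its own copy of the binaries. The folder names, ports, dbpaths, and replica set name are made-up placeholders; on Ubuntu 12 you would normally put these into your upstart/init scripts rather than run them by hand.

# Config instance, started from its own copy of the 2.2.9 binaries
/opt/mongodb-2.2.9-config/bin/mongod --configsvr --dbpath /data/configdb --port 27019 --fork --logpath /var/log/mongodb/configsvr.log

# Data instance on the same host, started from a separate copy of the binaries
/opt/mongodb-2.2.9-data/bin/mongod --replSet shard1 --dbpath /data/db --port 27018 --fork --logpath /var/log/mongodb/shard1.log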
Even if the config server and data server share the same binary, you can upgrade them one by one. The first step is to upgrade the MongoDB package. The second step is to shut down the config server and restart it using the new binary. The third step is to shut down the data server and restart it using the new binary.
I invite you to look at the release notes of each release you have to pass through. The MongoDB team explains all these steps there.
For example, here you can find how to upgrade from 2.2 to 2.4:
http://docs.mongodb.org/manual/release-notes/2.4-upgrade/#upgrade-a-sharded-cluster-from-mongodb-2-2-to-mongodb-2-4
The basic steps are:
Upgrade all mongos instances in the cluster.
Upgrade all 3 mongod config server instances.
Upgrade the mongod instances for each shard, one at a time.
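One detail from those notes worth calling out: for 2.2 to 2.4 the cluster metadata is upgraded first, by running a single 2.4 mongos with the --upgrade option against the config servers while the balancer is disabled. A hedged sketch with placeholder hostnames:

# Run once with the 2.4 mongos binary before upgrading the remaining mongos instances.
mongos --configdb cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019 --upgrade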
Once again, look at the release notes; this should be your first step ;)
We want to restore a database that we received from the client as a backup into our development environment, but we are unable to restore the database successfully. Can anyone help us with the steps involved in this restore process? Thanks in advance.
Vijay, if you plan to make a new database out of checkpoints (+journals) made on another (physical) server, then I must disappoint you: it is going to be a painful process. Follow these instructions: http://docs.actian.com/ingres/10.0/migration-guide/1375-upgrading-using-upgradedb. The process is basically the same as upgradedb. However, if the architecture of the development server is different (say the backup was made on a 32-bit system and the development machine is, say, POWER6-based), then it is impossible to make your development copy of the database using this method.
On top of all this, this method of restoring backups is not officially supported by Actian.
My recommendation is to use the unloaddb tool on the production server, export the database into a directory, SCP that directory to your development server, and then use the generated copy.in file to create the development database. NOTE: this is the way supported by Actian, and you may find more details on this page: http://docs.actian.com/ingres/10.0/migration-guide/1610-how-you-perform-an-upgrade-using-unloadreload. This is the preferred way of migrating databases across various platforms.
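A rough, hedged outline of that unloaddb route (the database name, directories, and host are placeholders, and the names of the generated scripts can differ by Ingres version and platform):

# On the production server: unload the database into an empty directory
mkdir /tmp/mydb_export && cd /tmp/mydb_export
unloaddb mydb
sh unload.ing            # runs the generated copy.out statements

# Copy the export to the development server
scp -r /tmp/mydb_export devhost:/tmp/mydb_export

# On the development server: create an empty database and reload it
createdb mydb
cd /tmp/mydb_export
sh reload.ing            # runs the generated copy.in statements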
It really depends on how the database has been backed up and provided to you.
In Ingres there is a snapshot (called a checkpoint) that can be restored into a congruent environment, but that can be quite involved.
There is also output from copydb and unloaddb commands which can be reloaded into another database. Things to look out for here are a change in machine architecture or paths that may have been embedded into the scripts.
Do you know how the database was backed up?