How to replicate Cloudant DB in 2 Bluemix env - ibm-cloud

I have an application deployed in both the US South and London Bluemix environments to improve reliability. I would like to sync the Cloudant DB between these two environments, but when setting up the replication job I have trouble defining the source DB and target DB.
I get a "the database doesn't exist" error when I set up the US South replication job; as the source DB I used the URL of the Cloudant DB service taken from the environment variables in the London environment.
Can you help me work out where to get the source and target DB URLs for the replication, please?
Thanks.
Jen

It needs to be in this format:
https://$USERNAME:$PASSWORD@$REMOTE_USERNAME.cloudant.com/$DATABASE_NAME
Notice the $DATABASE_NAME at the end.
The URL field in the VCAP_SERVICES environment variable will not have the database name at the end, because the database is something you create.
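For reference, here is a minimal sketch of starting a continuous replication with curl by posting a replication document to the _replicator database of the US South account; the usernames, passwords and the mydb database name are placeholders to substitute with your own values:
curl -X POST "https://US_USERNAME:US_PASSWORD@US_USERNAME.cloudant.com/_replicator" \
  -H "Content-Type: application/json" \
  -d '{
        "source": "https://LONDON_USERNAME:LONDON_PASSWORD@LONDON_USERNAME.cloudant.com/mydb",
        "target": "https://US_USERNAME:US_PASSWORD@US_USERNAME.cloudant.com/mydb",
        "create_target": true,
        "continuous": true
      }'
The same fully qualified URLs (including the database name at the end) are what the replication dialog expects for a remote source or target.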

Related

AWS RDS Postgres: WAL_LEVEL not set to 'logical' after changing rds.logical_replication = 1

I am in the process of setting up Debezium to work with AWS Aurora Postgres (postgres version 12.6).
For Debezium to work, the WAL level (write-ahead logging) must be set to 'logical' and not 'replica'.
On AWS, this requires a DBA to set the rds.logical_replication parameter in the parameter group to 1.
The above was done. The database instance was restarted.
To verify that the WAL level was changed to 'logical', I ran the following query:
SHOW wal_level;
However, after running this query in Postgres on the target database, the result still showed replica.
I also looked through the log events in the AWS Management Console.
Would anyone have an idea why this might be? In another environment we were able to set rds.logical_replication to 1 and, following a database restart, the WAL level was set to logical, but for our main environment this is not the case. The parameter groups in the two environments are identical.
Any help/advice is appreciated. Thanks.
OK, after contacting AWS support I got the information that the parameter rds.logical_replication=1 is only active on the instance that has the writer role (open for read-write). When you set this parameter you have to use the writer instance for logical replication. You can connect to the writer instance either via the cluster endpoint (recommended) or the instance endpoint.
I had been checking with SHOW wal_level; on the read-only instance, but in fact it was set correctly on the read/write instance!
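A quick way to avoid the same confusion is to run the check against the cluster (writer) endpoint and confirm which role you are connected to; the host, user and database below are placeholders:
psql "host=my-cluster.cluster-abc123.eu-west-1.rds.amazonaws.com port=5432 user=postgres dbname=postgres" \
  -c "SHOW wal_level;" \
  -c "SELECT pg_is_in_recovery();"  # false on the writer, true on a reader/replica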

How to downsize an AWS RDS instance to free tier

I want to create a free tier clone of a production AWS RDS PostgreSQL instance. As per my understanding, the following are the different ways:
create a snapshot of the production DB and restore it on t2.micro
create a read replica of the production DB using t2.micro and then detach it as independent database
create a free tier database and restore a database dump of the production db
Option 3 is my last preference.
The problem is that while creating a read replica or restoring from a snapshot, AWS doesn't explicitly allow you to choose the free tier template. I just want to know whether restoring to a t2.micro without any advanced features like autoscaling, performance monitoring etc. is equivalent to the free tier or not. I read here that the key thing with an AWS production DB is that AWS provisions a secondary database to fall back on in the event of failure of the primary database or of the Availability Zone in which the database is running.
AWS Free Tier doesn't actually care about the kind of service you use. Per their website you just get 750 instance hours per month for a db.t2.micro.
You can use these in any service you see fit and the discount will be applied automatically for the first 12 months.
Looking at the pricing page for RDS Postgres I can see that these instances aren't listed anymore, which seems weird. The t2 instance family is fairly old, so they're probably trying to phase it out, but typically you can provision older instance types using the API directly if they're not available in the Console.
So what you want to do is create your db.t2.micro instance using one of the SDKs or the AWS CLI and restore from a snapshot. Alternatively you can create a read replica from the CLI with the class set to db.t2.micro and later promote it to a standalone instance.
The "production ready" stuff refers to the Multi-AZ deployment, which is good for production use, but for anything production related a t2.micro seems like a bad choice, so I'm going to assume you're not planning to do that.
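As a sketch of both options with the AWS CLI (the snapshot name and instance identifiers below are placeholders):
# Option 1: restore the snapshot straight onto a db.t2.micro, single-AZ
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-free-tier-clone \
  --db-snapshot-identifier prod-snapshot \
  --db-instance-class db.t2.micro \
  --no-multi-az
# Option 2: create a read replica on a db.t2.micro, then promote it to a standalone instance
aws rds create-db-instance-read-replica \
  --db-instance-identifier my-free-tier-replica \
  --source-db-instance-identifier my-prod-db \
  --db-instance-class db.t2.micro
aws rds promote-read-replica --db-instance-identifier my-free-tier-replica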

Migrate MongoDB replica set to new server with minimal downtime

We have a 3 member replica set mongodb running on mLab for a production website. We want to move the database to a new replica set hosted in our own Google Cloud account.
My current idea is to do the following steps:
use dump/restore to copy a snapshot of the current database to the new replica set on Google Cloud
use oplog to keep the new replica set in sync with the current database
stop writing to the current database and switch the endpoint to the new replica set
The production website can still be accessible during steps 1 and 2, and I can do step 3 at a time of my choosing to reduce downtime.
I don't have much Mongo DBA experience, so I'm looking for suggestions on:
Does the plan above make sense?
What commands/tools should I look into to make my plan work?
Thanks in advance!
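As a rough sketch of step 1 (hosts, credentials and replica set names below are placeholders, and this assumes the current plan gives you oplog/admin access on the source), mongodump with --oplog and mongorestore with --oplogReplay give you a consistent point-in-time copy:
# Dump the whole source deployment, capturing oplog entries written while the dump runs
mongodump --uri "mongodb://user:pass@source-host-1:27017,source-host-2:27017/?replicaSet=rs-source&authSource=admin" \
  --oplog --out ./dump
# Restore into the new replica set on Google Cloud, replaying the captured oplog entries
mongorestore --uri "mongodb://user:pass@gcp-host-1:27017,gcp-host-2:27017/?replicaSet=rs0&authSource=admin" \
  --oplogReplay ./dump
Step 2, continuously tailing the source oplog until cutover, is not something mongodump/mongorestore do for you; that part needs a dedicated sync tool or a custom oplog-tailing script.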

Deploying MongoDB on Google Cloud Platform?

Hello all. For my startup I am using Google Cloud Platform. I am using App Engine with Node.js and that part is working fine, but now I need a database. Since I am using MongoDB, I deployed it through Cloud Launcher: https://console.cloud.google.com/launcher/details/click-to-deploy-images/mongodb?q=mongo. It created three instances in my Compute Engine project, but I don't know which is the primary instance and which are the secondaries. I have also read that the primary instance should be used for writing data and the secondaries for reading. So when I query my database should I provide the secondary instance URL, and for updating/inserting data should I provide the primary instance URL? Otherwise, which URL should I use for CRUD operations on my MongoDB database? Also, after launching this, do I have to make any changes in any conf file or in any other file manually, or is that already done for me? And do I have to make instance groups of all three instances or not?
Please forgive me if you think I have not done enough research or that this is not a valid Stack Overflow question. Google Cloud Platform is quite new, so there is not much documentation on it, and this is my first time deploying my code to servers, so I am a complete noob in this field. Thanks anyway, and please help me out here, guys.
but now I don't know which is the primary instance and which are the secondaries,
Generally the Cloud Launcher will name the primary with the suffix -1 (dash one). For example, by default it would create the mongodb-1-server-1 instance as the primary.
You can also discover which one is the primary by running rs.status() on any of the instances via the mongo shell. For example:
mongo --host <External instance IP> --port <Port Number>
You can get the list of external IPs of the instances using gcloud. For example:
gcloud compute instances list
By default you won't be able to connect straight away; you need to create a firewall rule to open the port(s) on the Compute Engine instances. For example:
gcloud compute firewall-rules create default-allow-mongo --allow tcp:<PORT NUMBER> --source-ranges 0.0.0.0/0 --target-tags mongodb --description "Allow mongodb access to all IPs"
Insert a sensible port number and please avoid using the default value. You may also want to limit the source IP ranges, e.g. to your office IP. See also Cloud Platform: Networking.
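Once the port is reachable, a one-liner like the following (IP and port are placeholders, and this assumes authentication has not been enabled yet) prints each member's state so you can spot the primary:
mongo --host <External instance IP> --port <Port Number> --quiet \
  --eval 'rs.status().members.forEach(function(m) { print(m.name + " -> " + m.stateStr); })'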
I read that the primary instance should be used for writing data and the secondary for reading,
Generally, replication provides redundancy and high availability: the primary instance handles reads and writes, and the secondaries act as replicas to provide a level of fault tolerance, such as against the loss of the primary server.
See also:
MongoDB Replication.
Replication Read Preference.
MongoDB Sharding.
now when I query my database should I provide the secondary instance URL, and for updating/inserting data should I provide the primary instance URL, or which URL should I use for CRUD operations on my MongoDB database
You can provide both in the MongoDB URI and the driver will figure out where to read/write. For example, in your Node.js app you could have:
mongodb://<instance 1>:<port 1>,<instance 2>:<port 2>/<database name>?replicaSet=<replica set name>
The default replica set name set by Cloud Launcher is rs0. Also see:
Node Driver: URI.
Node Driver: Read Preference.
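If you want reads to go to secondaries when possible, the read preference can also be expressed directly in the connection string (same placeholders as above); note this is a driver-level URI option, shown here only as an illustration:
mongodb://<instance 1>:<port 1>,<instance 2>:<port 2>/<database name>?replicaSet=rs0&readPreference=secondaryPreferred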
also after launching this do I have to make any changes in any conf file or in any file manually, or is that already done for me? Also do I have to make instance groups of all three instances or not?
This depends on your application use case, but if you are launching through Click to Deploy, the MongoDB config should all be taken care of.
For a complete guide please follow the tutorial: Deploy MongoDB with Node.js. I would also recommend checking out the MongoDB security checklist.
Hope that helps.

How to replicate MySQL database to Cloud SQL Database

I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in Cloud SQL.
I'm using DBSync to do it, and it's working fine:
http://dbconvert.com/mysql.php
The Sync version does what you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems possible by Configuring External Masters.
The high level steps are:
Create a dump of the data from the master and upload the file to a storage bucket
Create a master instance in CloudSQL
Set up a replica of that instance, using the external master IP, username and password. Also provide the dump file location
Set up additional replicas if needed
Voilà!
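As a rough sketch of the first step (the host, credentials, database name and bucket below are placeholders; check the external-master guide for the exact dump flags Cloud SQL expects, and use --set-gtid-purged=ON only if the source uses GTID replication):
# Dump the external master in a replication-friendly way
mysqldump --host=10.0.0.5 --user=repl_user -p \
  --databases inventory \
  --single-transaction --master-data=1 --hex-blob \
  --set-gtid-purged=ON > inventory-dump.sql
# Upload the dump to a Cloud Storage bucket so the Cloud SQL replica can be seeded from it
gsutil cp inventory-dump.sql gs://my-replication-bucket/inventory-dump.sql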