How to recover to a different environment using the Cassy backup tool? [closed] - scalardb

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I want to know how to restore Scalar DB to another instance using a Cassy backup, because I need a new instance for testing that is created from the production environment.

There is no direct support in Cassy for loading backups taken in one cluster into a different cluster.
Since Cassy only manages Cassandra snapshots, you can follow the Cassandra documentation to do it manually.
For testing, I would recommend dumping some of the data from the current (possibly production) cluster and loading it into a new testing cluster, as sketched below.
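As an illustration of the dump-and-load suggestion, here is a minimal sketch using the DataStax Python driver. The contact points, keyspace, table, and column names are assumptions, and the target cluster is assumed to already have the same schema (created beforehand, e.g. with cqlsh).

```python
# Sketch: copy a sample of rows from the production cluster to a new test cluster.
# Hosts, keyspace, table, and column names below are placeholders.
from cassandra.cluster import Cluster

SOURCE_HOSTS = ["prod-cassandra-1"]   # production cluster contact points (assumed)
TARGET_HOSTS = ["test-cassandra-1"]   # testing cluster contact points (assumed)
KEYSPACE = "my_keyspace"
TABLE = "my_table"
SAMPLE_LIMIT = 10000                  # copy only a subset, enough for testing

source = Cluster(SOURCE_HOSTS).connect(KEYSPACE)
target = Cluster(TARGET_HOSTS).connect(KEYSPACE)  # schema must already exist here

rows = source.execute(f"SELECT id, payload FROM {TABLE} LIMIT {SAMPLE_LIMIT}")
insert = target.prepare(f"INSERT INTO {TABLE} (id, payload) VALUES (?, ?)")

for row in rows:
    target.execute(insert, (row.id, row.payload))

source.cluster.shutdown()
target.cluster.shutdown()
```

For anything larger than a small sample, loading the snapshot SSTables with sstableloader would likely be the more practical route.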

Related

How can I schedule shutdown of my Azure virtual machine scale set instances using Terraform? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 days ago.
I have two Azure virtual machine scale sets deployed (each with two instances): one for dev and the other for testing.
I would like to schedule a shutdown of the testing VMSS every day after working hours in order to cut costs.
How can this be done via Terraform?
I initially tried using azurerm_dev_test_global_vm_shutdown_schedule, but it looks like this works only for VMs, not for VMSS.
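One possible workaround, sketched here under the assumption that a scheduled job outside Terraform is acceptable, is to deallocate the scale set after hours with the Azure SDK for Python (for example from a cron job, Azure Automation, or an Azure Function). The subscription, resource group, and scale set names are placeholders.

```python
# Sketch: deallocate a testing VMSS on a schedule to stop paying for its compute.
# Subscription ID, resource group, and VMSS name below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-testing"                              # placeholder
VMSS_NAME = "vmss-testing"                                 # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate all instances in the scale set (long-running operation).
poller = client.virtual_machine_scale_sets.begin_deallocate(RESOURCE_GROUP, VMSS_NAME)
poller.result()  # wait for completion
```

A matching job calling begin_start in the morning would bring the instances back up.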

DigitalOcean managed Postgres out of memory [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 days ago.
I am using managed Postgres from DigitalOcean. I have the cheapest instance, with 1 CPU, 1 GB RAM, and 10 GB of disk. I have a small database (approximately 25 tables), so the resources should be enough. I am using Postgres 15.
However, even when the database is not being used (no querying or inserting), the disk usage continues to go up. I suspect that logging might be the issue; through their API I have set the temp_log_size property to a small value, still with no success.
Does anybody know what I can do? I don't think it is possible to access the configuration file directly. Thanks a lot.
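As a first diagnostic step, Postgres's size functions can show whether the growth comes from your own tables and indexes or from something outside the database (such as logs or WAL). This is only a sketch; the host, port, and credentials are placeholders for your managed connection details.

```python
# Sketch: check database and per-table sizes on a managed Postgres instance.
# Connection parameters below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-db-host.db.ondigitalocean.com",  # placeholder
    port=25060, dbname="defaultdb", user="doadmin",
    password="placeholder", sslmode="require",
)

with conn, conn.cursor() as cur:
    # Total size of the current database.
    cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()));")
    print("database size:", cur.fetchone()[0])

    # Ten largest tables, including their indexes and TOAST data.
    cur.execute("""
        SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
        FROM pg_class
        WHERE relkind = 'r'
        ORDER BY pg_total_relation_size(oid) DESC
        LIMIT 10;
    """)
    for name, size in cur.fetchall():
        print(name, size)

conn.close()
```

If these numbers stay flat while the reported disk usage keeps climbing, the growth is coming from outside the database itself.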

EC2 instance best practice for MongoDB and app deployment [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I installed MongoDB on an EC2 t2.small instance following this guide. I don't know whether to dedicate this instance to MongoDB or use the same instance for app deployment in production. Please suggest the best practice.
The 'best practice' is to run the database server on its own instance; the even better 'best practice' is to run MongoDB on a cluster of instances to give you high availability.
That said, it's perfectly acceptable, in my opinion, to run the DB and your app on the same instance for small projects with low demands where cost is an important driver, although personally I would use at least an EC2 large instance if you are going to make your instance do double duty in this manner.
Now that you know what is 'best', only you can determine how much 'best' you can afford.

Create data for testing MongoDB and PostgreSQL [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I need to test the performance of MongoDB and PostgreSQL with a large amount of data, over 5 GB, for a college assignment.
How can I create data for both databases?
Thanks
EDIT:
I found this webpage, http://www.generatedata.com, where you can download a script to generate the data.
First, take a look at How to Generate Test Data on MongoDB. For MongoDB, mongoperf is a tool to measure the performance of the database on disk; you can also see MongoDB Benchmarks. For PostgreSQL you can use pgbench.
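If you prefer to generate the data yourself rather than use generatedata.com or the benchmark tools above, a minimal sketch in Python could look like the following; the database names, table/collection names, and connection strings are placeholders, and you would increase the batch counts until the data set passes 5 GB.

```python
# Sketch: generate synthetic documents/rows and bulk-load them into both databases.
# Connection strings, database, table, and collection names are placeholders.
import random
import string

import psycopg2
from psycopg2.extras import execute_values
from pymongo import MongoClient

def random_doc(i):
    return {
        "user_id": i,
        "name": "".join(random.choices(string.ascii_lowercase, k=12)),
        "score": random.random(),
    }

BATCH = 10_000
BATCHES = 100  # increase until the data set reaches the size you need (> 5 GB)

# MongoDB: bulk-insert documents in batches.
mongo = MongoClient("mongodb://localhost:27017")
coll = mongo.benchmark.users
for b in range(BATCHES):
    coll.insert_many([random_doc(b * BATCH + i) for i in range(BATCH)])

# PostgreSQL: bulk-insert equivalent rows into a table.
pg = psycopg2.connect("dbname=benchmark user=postgres")
with pg, pg.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS users (user_id int, name text, score float8);")
    for b in range(BATCHES):
        rows = [(d["user_id"], d["name"], d["score"])
                for d in (random_doc(b * BATCH + i) for i in range(BATCH))]
        execute_values(cur, "INSERT INTO users (user_id, name, score) VALUES %s", rows)
pg.close()
```

Loading the same logical data into both systems keeps the comparison fair, since both sides then index and store equivalent records.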

Scaling a database in the cloud and on local servers [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am considering using MongoDB (it could be PostgreSQL or any other) as a data warehouse. My concern is that twenty or more users could be running queries at a time, and this could have serious implications in terms of performance.
My question is: what is the best approach to handle this in a cloud-based and in a non-cloud-based environment? Do cloud-based databases handle this automatically? If so, would the data be consistent across all instances if a refresh of the data was made? In a non-cloud-based environment, would the best approach be to load balance all instances? Again, how would you ensure data integrity for all instances?
Thanks in advance.
I think auto-sharding is what I am looking for:
http://docs.mongodb.org/v2.6/MongoDB-sharding-guide.pdf
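For reference, a minimal sketch of what enabling auto-sharding looks like from PyMongo, assuming a sharded cluster that is already running with a mongos router; the database, collection, and shard-key names are placeholders.

```python
# Sketch: enable sharding for a database and shard one collection on a hashed key.
# Must be run against a mongos router of an existing sharded cluster.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # placeholder mongos address

# Enable sharding for the database, then shard a collection on a hashed key.
client.admin.command("enableSharding", "warehouse")
client.admin.command(
    "shardCollection", "warehouse.events",
    key={"user_id": "hashed"},
)
```

With a hashed shard key, documents are distributed evenly across the shards, so concurrent query load is spread across multiple servers as well.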