DigitalOcean managed Postgres out of memory [closed] - postgresql

I am using managed Postgres from DigitalOcean. I have the cheapest instance, with 1 CPU, 1 GB RAM, and 10 GB of disk. I have a small database (approximately 25 tables), so the resources should be enough. I am running Postgres 15.
However, even when the database is idle (no querying or inserting), the disk usage keeps going up. I suspect that logging might be the issue; through their API I've set the temp_log_size property to a small value, but with no success.
Does anybody know what I can do? I don't think it is possible to access the configuration file directly. Thanks a lot.
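As a first diagnostic step, it can help to check from inside the database what is actually consuming the space. A minimal sketch, assuming you can connect to the managed instance with psql; everything below is standard PostgreSQL, nothing DigitalOcean-specific:

-- Size of every database on the instance, largest first
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Ten largest relations (tables, indexes, TOAST) in the current database
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i', 't')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;

If the databases themselves stay small, the growth is likely coming from WAL or server logs rather than table data.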

Related

Connecting PostgreSQL to a database on another hard drive [closed]

I am using Windows 10 64-bit and have just installed PostgreSQL here: C:\Program Files\PostgreSQL\13. I have an older installation, with lots of tables, here: F:\Program Files\PostgreSQL\9.6\data. Both drives are in the same computer. Can my new installation connect to my database on drive F:?
Does the data need to remain in two separate instances? If not, you could export the data from one with pg_dump, and import it into the other. Then decommission the old one.
If you need to maintain separate instances, you could connect them with postgres_fdw. This makes it very convenient to query across the two instances, but performance usually suffers, often dramatically.
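If you go the postgres_fdw route, the setup on the new 13 instance looks roughly like the following. A minimal sketch; the server name, port, database, schema, and credentials are placeholders for illustration:

-- On the new (13) instance: enable the foreign data wrapper
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Describe how to reach the old (9.6) instance (host, port, dbname are placeholders)
CREATE SERVER old_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', port '5433', dbname 'olddb');

-- Map the local user to credentials valid on the old instance
CREATE USER MAPPING FOR CURRENT_USER SERVER old_pg
    OPTIONS (user 'postgres', password 'secret');

-- Expose the old instance's public schema as foreign tables in a local schema
CREATE SCHEMA old_data;
IMPORT FOREIGN SCHEMA public FROM SERVER old_pg INTO old_data;

-- The old tables can now be queried (or copied) from the new instance, e.g.:
-- SELECT * FROM old_data.some_table LIMIT 10;

The pg_dump route, by contrast, is just two command-line calls (pg_dump from the 9.6 server, then restore into the 13 server) and avoids the cross-instance query overhead.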

How to recover to a different environment using the Cassy backup tool? [closed]

I want to know how to restore Scalar DB to another instance using a Cassy backup, because I need a new instance for testing based on the production environment.
There is no direct support in Cassy for loading backups taken in one cluster into another cluster.
Since Cassy only manages Cassandra snapshots, you can follow the documentation to do it.
For testing, I would recommend dumping some of the data from the current (possibly production) cluster and loading it into a new testing cluster.

Create data for testing MongoDB and PostgreSQL [closed]

I need to test the performance of MongoDB and PostgreSQL on a large amount of data (over 5 GB) for a college assignment.
How can I generate data for both databases?
Thanks
EDIT:
I found the website http://www.generatedata.com, where you can download a script to generate the data.
First, take a look at How to Generate Test Data on MongoDB. For MongoDB, mongoperf is a tool for measuring the database's performance on disk. You can also see MongoDB Benchmarks. For PostgreSQL you can use pgbench.
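For the PostgreSQL side, generate_series is also a quick way to synthesize several gigabytes of rows without any external tool. A minimal sketch; the table and column names are made up for illustration:

-- A throwaway table to fill with synthetic rows
CREATE TABLE bench_data (
    id         bigint PRIMARY KEY,
    label      text,
    amount     numeric,
    created_at timestamptz
);

-- Insert 10 million rows of pseudo-random data; raise the upper bound
-- of generate_series (or repeat the INSERT) until the target size is reached
INSERT INTO bench_data
SELECT g,
       md5(g::text),
       random() * 1000,
       now() - random() * interval '365 days'
FROM generate_series(1, 10000000) AS g;

-- Check how large the table has grown
SELECT pg_size_pretty(pg_total_relation_size('bench_data'));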

Scaling a database in the cloud and on local servers [closed]

I am considering using MongoDB (it could be PostgreSQL or any other database) as a data warehouse. My concern is that twenty or more users could be running queries at a time, and this could have serious implications in terms of performance.
My question is: what is the best approach to handle this in cloud-based and non-cloud-based environments? Do cloud-based databases handle this automatically? If so, would the data stay consistent across all instances when the data is refreshed? In a non-cloud environment, would the best approach be to load-balance all instances? Again, how would you ensure data integrity across all instances?
Thanks in advance.
I think auto-sharding is what I am looking for:
http://docs.mongodb.org/v2.6/MongoDB-sharding-guide.pdf

What is the difference between physical, main, secondary, and primary memory? [closed]

I am learning about operating systems, but these terms really confuse me. I tried searching on the internet but couldn't find the exact difference between them. Can anybody help me out and clear up my confusion?
Primary storage, main memory, and physical memory are terms generally used interchangeably to refer to the memory that is attached directly to the processor.
Secondary storage is storage that is not directly connected to the CPU. The most common case of secondary storage is the hard disk.
You say you searched on the internet without finding an explanation; however, Wikipedia has a lot to say about computer storage: http://en.wikipedia.org/wiki/Computer_data_storage