I am using Windows 10 64-bit and have just installed PostgreSQL here: C:\Program Files\PostgreSQL\13. I have an older installation, with lots of tables, here: F:\Program Files\PostgreSQL\9.6\data. Both drives are on the same computer. Can my new installation connect to my database on drive F:?
Does the data need to remain in two separate instances? If not, you could export the data from one with pg_dump, and import it into the other. Then decommission the old one.
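A minimal sketch of the dump-and-restore route, assuming the old 9.6 instance listens on port 5433 and the new 13 instance on 5432; the database name `mydb` and the superuser `postgres` are placeholders for your own. Running the version-13 `pg_dump` against the older server is the usual recommendation, since the newer client knows how to dump older servers.

```shell
# Dump the database from the old 9.6 instance using the new 13 tools
# (custom format -Fc allows selective, parallel restore).
"C:\Program Files\PostgreSQL\13\bin\pg_dump.exe" -h localhost -p 5433 -U postgres -Fc -f mydb.dump mydb

# Create the target database on the new 13 instance and restore into it.
"C:\Program Files\PostgreSQL\13\bin\createdb.exe" -h localhost -p 5432 -U postgres mydb
"C:\Program Files\PostgreSQL\13\bin\pg_restore.exe" -h localhost -p 5432 -U postgres -d mydb mydb.dump
```

Once you have verified the restored data, the 9.6 service can be stopped and removed.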
If you need to maintain separate instances, you could connect them together with postgres_fdw. This is very convenient to query across the two instances, but performance usually suffers, often dramatically.
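If you go the foreign-data-wrapper route instead, the setup looks roughly like this, run against the new instance. This is a hedged sketch: the server name `old96`, the ports, the password, and the table `accounts` are all placeholders.

```shell
# Minimal postgres_fdw setup, executed on the new 13 instance.
psql -p 5432 -U postgres -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Point a foreign server at the old 9.6 instance.
CREATE SERVER old96 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', port '5433', dbname 'mydb');

CREATE USER MAPPING FOR CURRENT_USER SERVER old96
    OPTIONS (user 'postgres', password 'secret');

-- Expose one remote table locally (or drop LIMIT TO for the whole schema).
IMPORT FOREIGN SCHEMA public LIMIT TO (accounts) FROM SERVER old96 INTO public;
SQL
```

Queries against the imported foreign tables then run transparently, but each one crosses the wire to the 9.6 server, which is where the performance penalty comes from.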
I am using managed Postgres from DigitalOcean. I have the cheapest instance, with 1 CPU, 1 GB RAM and 10 GB of disk space. I have a small database (approximately 25 tables), so the resources should be enough. I am using Postgres version 15.
However, even when the database is idle (no querying or inserting), the disk usage continues to go up. I suspect that logging might be the issue. Through their API I have set the temp_log_size property to a small value, but with no success.
Does anybody know what I can do? I don't think it is possible to access the configuration file directly. Thanks a lot.
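One way to narrow down where the space is actually going is to ask the server itself. A hedged sketch, assuming `psql` access with your DigitalOcean connection string (the host, port, user, and database below are placeholders; 25060 is merely DigitalOcean's usual port):

```shell
psql "host=your-db.db.ondigitalocean.com port=25060 user=doadmin dbname=defaultdb sslmode=require" <<'SQL'
-- Total on-disk size of each database:
SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;

-- Ten largest tables, indexes, and materialized views in this database:
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS size
FROM pg_class
WHERE relkind IN ('r', 'i', 'm')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
SQL
```

If the databases themselves are small and stable, the growth is likely outside them (server logs or WAL), which on a managed plan is the provider's side of the fence to configure.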
I want to know how to restore Scalar DB to another instance using a Cassy backup, because I need a new test instance cloned from the production environment.
Cassy has no direct support for loading backups taken in one cluster into another cluster.
Since Cassy only manages Cassandra snapshots, you can follow the Cassandra documentation on restoring snapshots to do it.
For testing, I would recommend dumping some of the data from the current (possibly production) cluster and loading it into a new testing cluster.
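A hedged sketch of the snapshot-to-new-cluster route using Cassandra's own `sstableloader`: the snapshot name, keyspace, table, and target node address below are placeholders, and the target cluster must already have the matching schema created.

```shell
# sstableloader expects the SSTables in a keyspace/table/ directory layout,
# so stage the snapshot files accordingly.
mkdir -p /tmp/restore/my_keyspace/my_table
cp /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/my_snapshot/* \
   /tmp/restore/my_keyspace/my_table/

# Stream the staged SSTables into the new test cluster.
sstableloader -d test-cluster-node1 /tmp/restore/my_keyspace/my_table
```

Repeat per table; `sstableloader` handles re-distributing the data to the new cluster's token ranges, so the node counts of the two clusters do not have to match.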
I recently noticed that my dashDB entry plan under a dedicated IBM Cloud environment has been sunset. I read an article that said so, but I had not been informed beforehand, so I lost my two databases (production and testing).
Does anyone know what I should do in this case? I have a lot of sensitive data inside them. I would have no problem changing the plan, but I don't know how to do it because I cannot get into the console (it doesn't work anymore). Is there any way to recover my databases? Thanks.
Please open a support ticket here: https://watson.service-now.com/wcp
The support team can temporarily re-enable your access so that you can download a copy of your data.
We are considering the Heroku Postgres Crane plan. Does anyone know how many databases can be created in one such plan? I could not find this information anywhere.
Each plan provides only one database. You can attach multiple Postgres add-ons to get multiple databases, but you will be billed per plan/add-on. I would recommend using multiple tables in one database, rather than multiple databases, for your app.
If you wish to use multiple plans, you can have an unlimited number of databases per account, but currently only roughly 30 databases per app before Heroku runs out of identifiers.
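A hedged sketch of the one-add-on-per-database approach via the Heroku CLI; the app name is a placeholder, and the plan names are historical and may since have changed.

```shell
# Attach two separate Postgres add-ons (two databases) to one app.
heroku addons:create heroku-postgresql:crane --app my-app
heroku addons:create heroku-postgresql:crane --app my-app

# Each add-on exposes its own connection URL as a config var,
# e.g. HEROKU_POSTGRESQL_<COLOR>_URL.
heroku config --app my-app | grep POSTGRESQL
```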
I'm looking for a multi-server big data sample application, which can be used (a) to experiment with installing and configuring a big data application, and (b) as an example starting point for developing such an application (editing the code, making some changes, etc.). In most technologies (e.g. Java EE), such applications are very common, and are very useful as a starting point.
If it can be used for benchmarking, even better.
If it uses one (or more) of Hadoop, Cassandra, HBase, MongoDB, Hive, Redis it would be great.
Thanks!
You can use TeraSort, the benchmarking test packaged with Hadoop. It sorts terabytes of data, and is used to stress test new Hadoop clusters. It's part of the hadoop-x.y.z-examples.jar file that comes with a Hadoop install.
To use it, generate data into HDFS using TeraGen, then run TeraSort.
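The two steps above look roughly like this on a running cluster; the exact path of the examples jar varies by Hadoop version and install layout, so treat it as a placeholder.

```shell
# Generate 10 million 100-byte rows (~1 GB) of input data into HDFS.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    teragen 10000000 /benchmarks/terasort-input

# Sort the generated data across the cluster.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    terasort /benchmarks/terasort-input /benchmarks/terasort-output

# Optionally verify that the output is globally sorted.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    teravalidate /benchmarks/terasort-output /benchmarks/terasort-validate
```

Scaling the row count up lets you use the same three commands to stress-test larger clusters.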