I am using the command below to create a restore point. However, I'd like to create multiple restore points without overwriting the first one. Is there a way to add a counter after 'RP' so that it gets a different number every time my shell script runs the query below?
select pg_create_restore_point('RP1');
pg_create_restore_point
----------------------------
F3/D988F590
There is no way to do that unless you store information about pre-existing restore points somewhere. The function just sets a marker with that name in the WAL; PostgreSQL doesn't remember restore points other than in the WAL.
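If you don't want to track the names yourself, one option is to keep the counter in the database as a sequence, or to avoid state entirely by embedding a timestamp in the name. A minimal sketch (the sequence name is illustrative, and IF NOT EXISTS needs PostgreSQL 9.5 or later):

-- Option 1: persist the counter in a sequence
CREATE SEQUENCE IF NOT EXISTS restore_point_seq;
SELECT pg_create_restore_point('RP' || nextval('restore_point_seq'));

-- Option 2: no state at all; a timestamp makes each name unique
SELECT pg_create_restore_point('RP_' || to_char(now(), 'YYYYMMDD_HH24MISS'));

Either way, as noted above, the server itself keeps the markers only in the WAL, so have your script record the names it generates if you need to look them up later.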
I want to store massive amounts of data, specifically an amount of text equivalent to a book. How can I go about this? Is there a type of data storage that is a good fit for this, i.e. makes it faster or easier?
There are limits, but they are nowhere near that small. A single database can hold (with the default configuration) over a billion tables, and each table can be up to 32 TB in size.
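For book-sized text specifically, a plain text column is more than enough: a single value can hold up to 1 GB, while a typical novel is around 1 MB. A minimal sketch (table and column names are illustrative):

CREATE TABLE books (
    id    bigserial PRIMARY KEY,
    title text NOT NULL,
    body  text NOT NULL  -- a single text value can hold up to 1 GB
);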
Good sirs.
I've just started planning a new project, and it seems that I should stick with a relational database (even though I want to play with Mongo). Tell me if I'm mistaken!
There will be box models, each of which can contain hundreds to thousands of items.
At any time, the user can move an item to another box.
For example, using some Railsy pseudocode...

item = Item.find(5676)
item.box_id             # => 24
item.update(box_id: 25)
item.box_id             # => 25
This sounds like a simple SQL join table to me, but like an expensive array-manipulation operation for MongoDB.
Or is removing an object from one (huge) array and inserting it into another (huge) array not a big problem for Mongo?
Thanks for any wisdom. I've only just started with Mongo.
If you want to use big arrays, stay away from MongoDB; I say this from personal experience. There are two big problems with arrays. First, if they grow, the document grows and has to be moved on disk, which is a very, very slow operation. Second, if you need to scan an array to reach the 10,000th element, that will be very slow, because the 9,999 elements before it have to be checked first.
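For contrast, the relational version of the pseudocode above doesn't even need a join table, just a foreign key on the item, and moving an item is a single one-row update. A sketch (names are illustrative):

CREATE TABLE boxes (
    id   bigserial PRIMARY KEY,
    name text
);

CREATE TABLE items (
    id     bigserial PRIMARY KEY,
    box_id bigint NOT NULL REFERENCES boxes (id)
);

-- Moving item 5676 into box 25 touches exactly one row:
UPDATE items SET box_id = 25 WHERE id = 5676;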
Is this a perpetually denied feature request or something specific to Postgres?
I've read that Postgres potentially has higher performance than InnoDB, but also potentially a greater chance of less serialization (I apologize that I don't have a source; please give that statement a wide berth, given my noobity and memory), and I wonder if it might have something to do with that.
Postgres is amazingly functional compared to MySQL, and that's why I've switched. Already I've cut down lines of code & unnecessary replication immensely.
This is just a small annoyance, but I'm curious whether it's considered unnecessary because of the UPDATE-then-INSERT workaround, or whether it's very difficult to develop (possibly versus the perceived added value), like boost::lockfree::queue's ability to pass "anything", or whether it's something else.
PostgreSQL committers are working on a patch to introduce "INSERT ... ON DUPLICATE KEY", which is functionally equivalent to an "upsert". MySQL and Oracle already have this functionality (in Oracle it is called "MERGE").
A link to the PostgreSQL archives where the functionality is discussed and a patch was introduced: http://www.postgresql.org/message-id/CAM3SWZThwrKtvurf1aWAiH8qThGNMZAfyDcNw8QJu7pqHk5AGQ#mail.gmail.com
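The syntax that eventually shipped (in PostgreSQL 9.5, as INSERT ... ON CONFLICT) looks like this; the table is illustrative:

CREATE TABLE counters (
    name text PRIMARY KEY,
    hits bigint NOT NULL
);

-- Insert a new counter, or bump the existing one:
INSERT INTO counters (name, hits)
VALUES ('home', 1)
ON CONFLICT (name) DO UPDATE
SET hits = counters.hits + EXCLUDED.hits;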
I have 2 computers connected to each other via serial communication.
The main computer holds a DB (about 10K words) and runs at a 20 Hz rate.
I need real-time synchronization of the DB to the other computer: if data is added, deleted, or updated, I want the other computer to see the changes in real time.
If I transfer the whole DB periodically, it will take about 5 seconds to update the other side, which is not acceptable.
Does someone have an idea?
As you said, the other computer has to get the changes (i.e. insert, delete, update) via the serial link.
The easiest way to do this (though maybe impossible, if you can't change certain things) is to extend the database-change methods (or, if that's not possible, every call) to send an insert/delete/update datagram with all required data over the serial link, which has to be robust against packet loss (i.e., error detection, retransmission, etc.).
On the other end you have to implement a semantically equivalent database where you replay all the received changes.
Of course you still have to synchronize the databases at startup/initialization or maybe periodically (e.g. once per day).
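If the data happens to live in an SQL database, one way to capture that stream of changes for the sender to drain in order is a trigger-fed change log. A rough sketch, assuming PostgreSQL and a single table data(id integer, word text), both of which are assumptions about your setup:

-- Change log the sender reads in seq order and ships over the serial link
CREATE TABLE changelog (
    seq     bigserial PRIMARY KEY,  -- replay order
    op      char(1)   NOT NULL,     -- 'I' = insert, 'U' = update, 'D' = delete
    word_id integer   NOT NULL,
    word    text                    -- NULL for deletes
);

CREATE FUNCTION log_change() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO changelog (op, word_id, word) VALUES ('D', OLD.id, NULL);
        RETURN OLD;
    END IF;
    INSERT INTO changelog (op, word_id, word)
    VALUES (left(TG_OP, 1), NEW.id, NEW.word);  -- 'I' or 'U'
    RETURN NEW;
END $$;

CREATE TRIGGER data_changes
AFTER INSERT OR UPDATE OR DELETE ON data
FOR EACH ROW EXECUTE PROCEDURE log_change();

At 20 Hz with ~10K words the log stays small, provided the sender deletes rows from changelog once the receiver acknowledges them.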
I want to select a certain amount of data from one table. Based on that data, I want to check another two tables and insert into 2 tables.
So I want to iterate over the resulting data. Which way is better (faster) and more reasonable: using a DataReader or a DataTable?
Thanks in advance
RedsDevils
You end up creating a reader to fill the table anyway; the reverse isn't true. So I would stick with the DataReader.
-Josh
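Depending on what the checks involve, it may also be possible to skip client-side iteration entirely and push the whole thing into the database as set-based statements; a sketch with purely illustrative table and column names:

-- The "check another two tables" becomes joins; each target table
-- gets one INSERT ... SELECT, so no rows travel to the client at all.
INSERT INTO target1 (id, payload)
SELECT s.id, s.payload
FROM   source s
JOIN   lookup1 l1 ON l1.id = s.id
JOIN   lookup2 l2 ON l2.id = s.id;

INSERT INTO target2 (id, payload)
SELECT s.id, s.payload
FROM   source s
JOIN   lookup1 l1 ON l1.id = s.id
JOIN   lookup2 l2 ON l2.id = s.id;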