I want to select a certain amount of data from one table. Based on that data, I want to check two other tables and insert into two tables.
So I want to iterate over the resulting data. Which way is better (faster) and more reasonable: a DataReader or a DataTable?
Thanks in advance,
RedsDevils
Filling a DataTable creates a reader under the hood anyway; the reverse isn't true. So I would stick with the DataReader.
-Josh
I am using the command below to create a restore point. However, I'd like to create multiple restore points and would like to know how not to overwrite the first one. Is there a way to add a counter after 'RP*' so that it gets a different number every time my shell script runs the query?
select pg_create_restore_point('RP1');
pg_create_restore_point
----------------------------
F3/D988F590
There is no way to do that unless you store information about pre-existing restore points somewhere yourself. The function just sets a marker with that name in the WAL; PostgreSQL doesn't remember restore points anywhere other than in the WAL.
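If unique names are all that's needed, one possible workaround (a sketch, not part of the answer above) is to build the name from the current timestamp rather than a stored counter, so every run of the script produces a distinct restore point name:
-- hypothetical naming scheme: append a timestamp so each run is unique
select pg_create_restore_point('RP_' || to_char(now(), 'YYYYMMDD_HH24MISS'));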
I want to store a large amount of data, specifically an amount of text equivalent to a book. How can I go about this? Is there a type of data storage that is a good fit and makes this faster/easier?
There are limits, but they are nowhere near that small. A single database can hold (with the default configuration) over a billion tables, and each table can be up to 32 TB in size.
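For a book's worth of text, a plain text column is usually enough. Assuming PostgreSQL (which the limits above describe), a minimal sketch might look like this, with table and column names invented for the illustration:
-- a single text field can hold up to 1 GB, far more than a typical book
CREATE TABLE books (
    id    bigserial PRIMARY KEY,
    title text NOT NULL,
    body  text NOT NULL  -- full text of the book
);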
I want to convert a number written out as a string (including ordinals) to a numeric value in Perl.
I have searched but haven't found an exact answer.
Example: if the input is
one, it should be 1.
five hundred, it should be 500.
three hundred, it should be 300.
Is there a module to do this?
One of the best parts of Perl is CPAN and, sure enough, a couple of minutes of poking around on MetaCPAN turned up the Lingua::EN::Words2Nums module:
use Lingua::EN::Words2Nums;
$num = words2nums("two thousand and one");                # 2001
$num = words2nums("twenty-second");                       # ordinals work too: 22
$num = words2nums("15 billion, 6 million, and ninteen");  # tolerates common misspellings: 15006000019
I have a problem with a stored procedure. The procedure gets the number of rows needed as an argument, but the following does not work in HANA:
SELECT TOP :NUM_OF_ROWS * FROM TABLE_NAME
I have read that TOP in HANA only accepts a literal number, not an expression. Is there another way to do this? My workaround for the moment is to select everything and discard the unneeded rows on the service side, but that's not very efficient.
Instead of TOP n you can use the LIMIT n clause, which does accept bind variables.
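Applied to the statement from the question, the suggested rewrite would look something like this (a sketch; TABLE_NAME and :NUM_OF_ROWS are the placeholders from the question):
-- LIMIT accepts a bound parameter where TOP does not
SELECT * FROM TABLE_NAME LIMIT :NUM_OF_ROWS;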
In PostgreSQL, the hstore and json data types seem to have very similar use cases. When would you choose one over the other? Initial thoughts:
You can nest with json; you can't with hstore
Functions for parsing json won't be available until 9.3
The json type is just a string. There are no built-in functions to parse it. The only thing to be gained by using it is the validity checking.
Edit for those downvoting: this was written when 9.3 didn't exist yet. It is correct for 9.2. Also, the question was different back then; check the edit history.
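To make the 9.2-era difference concrete, here is a small illustration (the table and column names are invented for the example):
-- hstore values can be queried by key; a json value is only
-- validity-checked on input, with no lookup operators until 9.3
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE docs (
    attrs hstore,  -- flat key/value pairs
    doc   json     -- nested, but opaque in 9.2
);

INSERT INTO docs VALUES
    ('author => "Poe", pages => "128"', '{"author": "Poe", "pages": 128}');

SELECT attrs -> 'author' FROM docs;  -- works in 9.2
-- doc -> 'author' only becomes possible with the 9.3 json operators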