Windows/NET/ODBC
I would like to get the query results into a new table in some handy way, so I can view them through a data adapter, but I can't find a way to do it.
There aren't many beginner-level examples around on this.
I don't know whether the table should be temporary or not, but after I've seen the results it is no longer needed, so I can either delete it 'by hand' or have it deleted automatically.
This is what I tried:
mCmd = New OdbcCommand("CREATE TEMP TABLE temp1 ON COMMIT DROP AS " & _
"SELECT dtbl_id, name, mystr, myint, myouble FROM " & myTable & " " & _
"WHERE myFlag='1' ORDER BY dtbl_id", mCon)
n = mCmd.ExecuteNonQuery
This runs without error, and in 'n' I get the correct number of matched rows!
But in pgAdmin I don't see that table anywhere, no matter whether I look inside the open transaction or after the transaction is closed.
Second, do I have to define the columns of the temp1 table first, or can they be created automatically from the query results (that would be nice!)?
Please give a minimal example, based on the code above, of what to do to get a new table filled with the query results.
A shorter way to do the same thing your current code does is with CREATE TEMPORARY TABLE AS SELECT ... . See the entry for CREATE TABLE AS in the manual.
Temporary tables are not visible outside the session ("connection") that created them, they're intended as a temporary location for data that the session will use in later queries. If you want a created table to be accessible from other sessions, don't use a TEMPORARY table.
Maybe you want UNLOGGED (9.2 or newer) for data that's generated and doesn't need to be durable, but must be visible to other sessions?
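For instance, a minimal sketch of the non-temporary variant, reusing the column names from your query; your_table and results1 are placeholder names, since the real table name lives in your myTable variable:
-- Visible from pgAdmin and other sessions until you drop it yourself.
-- On 9.2+ you can write CREATE UNLOGGED TABLE if the data need not be durable.
CREATE TABLE results1 AS
SELECT dtbl_id, name, mystr, myint, myouble
FROM your_table
WHERE myFlag = '1'
ORDER BY dtbl_id;
-- ... inspect results1 from any connection, then clean up:
DROP TABLE results1;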
See related: Is there a way to access temporary tables of other sessions in PostgreSQL?
For example, the following is my query:
WITH stkpos as (
select * from mytbl
),
updt as (
update stkpos set field=(select sum(fieldn) from stkpos)
)
select * from stkpos
ERROR: relation "stkpos" does not exist
Unlike MS-SQL and some other DBs, PostgreSQL's CTE terms are not treated like a view. They're more like an implicit temp table - they get materialized, the planner can't push filters down into them or pull filters up out of them, etc.
One consequence of this is that you can't update them, because they're a copy of the original data, not just a view of it. You'll need to find another way to do what you want.
If you want help with that, you'll need to post a new question with a clear and self-contained example (with create table statements, etc.) showing a cut-down version of your real problem: enough to actually let us understand what you're trying to accomplish and why.
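For illustration only, here is one hedged sketch of an alternative, assuming the intent is to write the sum into the real table mytbl rather than into the CTE copy: a data-modifying CTE (PostgreSQL 9.1 or newer) can update the base table and hand the changed rows back through RETURNING.
WITH updt AS (
    UPDATE mytbl
       SET field = (SELECT sum(fieldn) FROM mytbl)
    RETURNING *
)
SELECT * FROM updt;  -- the rows as written by the UPDATE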
I have a table BigTable and a table LittleTable. I want to move a copy of some records from BigTable into LittleTable and then (for those records) set BigTable.ExportedFlag to 'T' (indicating that a copy of the record has been moved to LittleTable).
Is there any way to do this in one statement?
I know I can use a transaction to:
copy the records from BigTable into LittleTable based on a where clause, and
update BigTable, setting ExportedFlag to 'T' based on that same where clause.
I've also looked into a MERGE statement, which does not seem quite right, because I don't want to change values in LittleTable, just move records into it.
I've looked into an OUTPUT clause after the update statement but can't find a useful example. I don't understand why Pinal Dave is using Inserted.ID, Inserted.TEXTVal, Deleted.ID, Deleted.TEXTVal instead of Updated.TextVal. Is the update considered an insertion or deletion?
I found this post TSQL: UPDATE with INSERT INTO SELECT FROM saying "AFAIK, you cannot update two different tables with a single sql statement."
Is there a clean single statement to do this? I am looking for a correct, maintainable SQL statement. Do I have to wrap two statements in a single transaction?
You can use the OUTPUT clause, as long as LittleTable meets the requirements to be the target of an OUTPUT ... INTO:
UPDATE BigTable
SET ExportedFlag = 'T'
OUTPUT inserted.Col1, inserted.Col2 INTO LittleTable(Col1,Col2)
WHERE <some_criteria>
It makes no difference whether you use INSERTED or DELETED here. The only column they differ on is the one you are updating (deleted.ExportedFlag has the before value, while inserted.ExportedFlag will be 'T').
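A minimal, self-contained sketch of the single-statement approach; the table definitions and column names below are made up for illustration, not taken from your schema:
CREATE TABLE BigTable    (Col1 int, Col2 varchar(20), ExportedFlag char(1) NULL);
CREATE TABLE LittleTable (Col1 int, Col2 varchar(20));

INSERT INTO BigTable (Col1, Col2, ExportedFlag)
VALUES (1, 'a', NULL), (2, 'b', NULL);

-- one statement: flag the matching rows and copy them into LittleTable
UPDATE BigTable
SET ExportedFlag = 'T'
OUTPUT inserted.Col1, inserted.Col2 INTO LittleTable (Col1, Col2)
WHERE Col1 = 1;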
I was trying to use SELECT ... FOR ALL ENTRIES IN in an ABAP SELECT statement, but I don't have a clear idea of what FOR ALL ENTRIES does. Does anyone know?
Kindly have a look at the statements below:
1.
select bukrs belnr xblnr budat
from bkpf
into table it_bkpf
where belnr in s_belnr.
2.
select bukrs belnr buzei gsber zuonr wrbtr kunnr
from bseg
into table it_bseg
for all entries in it_bkpf
where belnr = it_bkpf-belnr.
Please let me know the difference between the two statements.
Siva
Some obvious differences:
Different tables
Different target fields
The 2nd select had a syntax problem: you used 'form' instead of 'from' (I corrected it with my edit)
Other differences:
Selection 1.) uses in in the where clause, so it works with a select-options (or a range object).
for all entries in it_bkpf means that the internal table it_bkpf contains the list of values you want to select for. In other words: select all entries in bseg where the field belnr matches an entry in the internal table it_bkpf.
You will get a clear answer through the ST05 transaction.
Execute the ST05 transaction, choose SQL trace and activate the trace.
After that, run your code.
Enter ST05 again, choose deactivate trace, then view the trace result.
There you can see the exact SQL code that is forwarded to the database server. Because BSEG is a cluster table, you cannot use the intuitive header-item join to retrieve the financial movement information you need: several tables, BSEG among them, are stored in a single physical database table, so the database server technically cannot separate the BSEG rows and find the BSEG-specific fields to build a proper join.
So you do a join-like construction on the application server instead. First you retrieve all the header-related columns from the header table (BKPF). Then, when SELECT ... FOR ALL ENTRIES IN ... is executed, the application server takes small portions of the header rows (typically 5 at a time) and constructs SQL queries that retrieve the packs of items corresponding to those portions. Finally, all the portions are merged into a single internal table, so you end up with only the items of the desired documents, just as if you had been able to execute a normal join.
Here's what happens, the way I understand it. The two statements are probably executed one after the other:
The first statement selects a few entries from the bkpf table. These entries are stored in the internal table it_bkpf (say belnr 1, 2, 3).
Each of these entries is then used as part of the select #2. The "for all entries" matches the belnr in table bseg to those in the internal table it_bkpf from the first statement. The matching entries are then put into the internal table it_bseg.
With the example you've given, this is pretty much the same as if the where clause in SQL #2 were where belnr in s_belnr (instead of the whole for all entries). Doing it this way only makes sense if you need the contents of it_bkpf for some other purpose. Another typical situation is when you determine the contents of the internal table used in the for all entries clause with some program logic instead of reading it directly from the database.
One catch with "for all entries": make sure the internal table in the for all entries clause is not empty; if it is, the where condition is dropped and the whole table in the from clause gets selected.
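A small hedged sketch of that guard, reusing the tables from the question:
* Skip the FOR ALL ENTRIES select when the driver table is empty,
* otherwise the WHERE condition is dropped and all of bseg is read.
IF it_bkpf IS NOT INITIAL.
  SELECT bukrs belnr buzei gsber zuonr wrbtr kunnr
    FROM bseg
    INTO TABLE it_bseg
    FOR ALL ENTRIES IN it_bkpf
    WHERE belnr = it_bkpf-belnr.
ENDIF.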
INSERT INTO contacts_lists (contact_id, list_id)
SELECT contact_id, 67544
FROM plain_contacts
Here I want to use the COPY command instead of the INSERT command to reduce the time it takes to insert values. I fetched the data using a SELECT operation. How can I insert it into a table using the COPY command in PostgreSQL? Could you please give an example? Or any other suggestion for reducing the time it takes to insert the values.
As your rows are already in the database (because you apparently can SELECT them), then using COPY will not increase the speed in any way.
To be able to use COPY you first have to write the values into a text file, which is then read into the database. But if you can SELECT them, writing them to a text file is a completely unnecessary step that will slow down your insert, not speed it up.
Your statement is as fast as it gets. The only thing that might speed it up (apart from buying a faster hard disk) is to remove any index on contacts_lists that contains the column contact_id or list_id and re-create it once the insert is finished.
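A hedged sketch of that idea; the index name below is hypothetical, not read from your schema:
-- drop the index, do the bulk insert, then rebuild the index once
DROP INDEX IF EXISTS contacts_lists_contact_id_idx;

INSERT INTO contacts_lists (contact_id, list_id)
SELECT contact_id, 67544
FROM plain_contacts;

CREATE INDEX contacts_lists_contact_id_idx ON contacts_lists (contact_id);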
You can find the syntax described in many places, I'm sure. One of those is this wiki article.
It looks like it would basically be:
COPY (SELECT contact_id, 67544 FROM plain_contacts) TO 'some_file'
And
COPY contacts_lists (contact_id, list_id) FROM 'some_file'
But I'm just reading from the resources that Google turned up. Give it a try and post back if you need help with a specific problem.
I have a stored procedure which creates and works with a temporary #table.
Some of the queries would be tremendously faster if that temporary #table had an index created on it.
However, creating an index within the stored procedure fails:
create procedure test1 as
SELECT f1, f2, f3
INTO #table1
FROM main_table
WHERE 1 = 2
-- insert rows into #table1
create index my_idx on #table1 (f1)
SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11 -- "QUERY X"
When I call the above, the query plan for "QUERY X" shows a table scan.
If I simply run the code above outside the stored procedure, the messages show the following warning:
Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead.
This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches, adding "go" after the index creation:
create index my_idx on #table1 (f1)
go
Now, "QUERY X" query plan shows the use of index "my_idx".
QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do in the ad-hoc copy above. Please note that I'm aware of the solution of splitting "QUERY X" out into a separate stored procedure, and am looking for a solution that avoids that.
P.S. If it matters, this is on Sybase 12 (ASE 12.5.4)
UPDATE:
I have been seeing several references to "schema bumping" during my Googling before posing the question. But that doesn't seem to happen in my case.
You can create a table, populate it, create an index on it and select values from it in the same proc and have the optimizer fully cost it based on accurate information. This is called 'schema bumping' and has been in place since 11.5.1.
The Sybase documentation says that you can create and use an index on a temporary table in the same stored procedure:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20023_1251/html/optimizer/X26029.htm
I think to get around this you will need to split your stored procedure into at least two parts, one to create and populate the table then build the index, and then a second one to run the select query.
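A rough sketch of that split, with illustrative names only; ASE generally wants #table1 to exist when the inner procedure is created, so a throwaway copy is built first and dropped again:
-- throwaway #table1 (and index) so the inner procedure can be created
SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
create index my_idx on #table1 (f1)
go
create procedure test1_query as
    SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11
go
drop table #table1
go
create procedure test1 as
    SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
    -- insert rows into #table1 ...
    create index my_idx on #table1 (f1)
    exec test1_query  -- compiled on first execution, after my_idx exists
go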
I am not sure how you are getting this problem; it might occur in an older version of Sybase. However, with version 12.5.4 I tried executing the same thing you describe, and in my case the optimizer correctly chose the index created in the stored procedure. Usually in a stored procedure we do not need to break the SQL into batches; otherwise we would also have been required to put the create table command in a separate batch.
If we try to create an index within a single batch (not in a stored procedure), we do get the same warning you quote above, because we are creating an index on a table and then trying to use it within the same batch; the Sybase server compiles the whole batch in one go, hence the problem. But as far as stored procedures are concerned, in Sybase 12.5.4 there is no problem.