I have a question and I'm trying to find a code example to implement in my project. In PowerBuilder, I want to create a DataStore from a simple SQL SELECT and then fetch the stored values one by one. I want this because at the moment I'm using a CURSOR, which is very slow and has transaction size problems; I then tried ROW_NUMBER, which is also very slow. My application runs against both Oracle and SQL Server, with a lot of data. If you can provide a PB example it would be very helpful. Thank you.
Here is an example:
datastore lds_data
long ll_row
lds_data = CREATE datastore
lds_data.DataObject = "your datawindow"
lds_data.SetTransObject(SQLCA)
lds_data.Retrieve() // Put your parms in the parentheses
// Fetch the values row by row instead of using a cursor
FOR ll_row = 1 TO lds_data.RowCount()
    // Use GetItemString / GetItemNumber / GetItemDateTime to match the column type
    // ... process lds_data.GetItemString(ll_row, "your_column") here ...
NEXT
DESTROY lds_data // Optional
And if you want to build the DataStore dynamically from the SQL statement, replace the DataObject assignment with the following (ls_err being defined as a string variable that will contain any returned error):
lds_data.create(sqlca.SyntaxFromSQL('select col, you, want from your_table', 'Style(Type=Form)', ls_err))
I have a DB2 data source and an Oracle 12c target.
The Oracle side has a DB link to the DB2 defined, which works in general.
Now I have a huge table in the DB2 which has a timestamp column (let's call it ROW_CHANGED) for row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED > '2016-08-01 10:00:00'
on the DB2 returns exactly one row after about 90 seconds, which is fine.
Now I try the same query from Oracle via the DB link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends up in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local to a remote table, which is not my case.
In my desperation, I have tried the DRIVING_SITE hint, without effect (a sketch of that attempt follows below).
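For reference, a sketch of what that attempt would look like; the hint takes the alias of the remote table and asks Oracle to execute the statement on the remote side:
-- Sketch only: t is the alias of the remote DB2 table
SELECT /*+ DRIVING_SITE(t) */ *
FROM lib.tbl@dblink_name t
WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')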
Now I wonder when the WHERE part of the query is evaluated. Since I have to use Oracle syntax and not DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything that would help me in this direction.
The ROW_CHANGED is a hidden column in the DB2, if that matters.
Thanks for any hint in advance.
Update
Thanks @all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I expected to circumvent implicit conversions this way.
Without the explicit conversion I ran into ORA-28534: Heterogeneous Services preprocessing error, and I have no hope of touching the DB config within reasonable time.
The explain plan, by the way, did not bring much. It showed a FULL hint and no conversion on the predicates. It did show the ROW_CHANGED column as Date, though; I wonder why.
I tried Justin's suggestion to use a bind variable, however I got ORA-28534 again. The next thing I did was wrap it in a PL/SQL block (it will run in a stored procedure later anyway).
declare
    v_tmstmp TIMESTAMP := '01.08.16 10:00:00';  -- string implicitly converted via the session's default format
begin
    INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
    SELECT SRC_PK, ROW_CHANGED
    FROM lib.tbl@dblink_name
    WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as in DB2 itself. The date format is DD.MM.YY here since that is the session default, unfortunately.
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile, the DB2 operators have created an index on the ROW_CHANGED column, which I had requested earlier that day. This seems to have solved the problem in general. Even my original query finishes in no time now.
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between different databases; there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this were a numeric column, a bind variable would be almost certain to get pushed. In this case, it probably involves playing around a bit to figure out exactly what data type to use for your variable that works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
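A minimal sketch of the pass-through approach, reusing the table, link, and target names from the question. The SRC_PK type is an assumption, and l_row_changed is declared DATE because the explain plan above reported ROW_CHANGED as Date:
declare
    l_cursor      integer;
    l_src_pk      varchar2(100);  -- assumption: adjust to the real SRC_PK type
    l_row_changed date;           -- the gateway reported ROW_CHANGED as Date
begin
    l_cursor := dbms_hs_passthrough.open_cursor@dblink_name;
    -- The statement is sent to DB2 verbatim, so the predicate is
    -- guaranteed to be evaluated on the DB2 side
    dbms_hs_passthrough.parse@dblink_name(l_cursor,
        'SELECT SRC_PK, ROW_CHANGED FROM lib.tbl WHERE ROW_CHANGED > ''2016-08-01 10:00:00''');
    while dbms_hs_passthrough.fetch_row@dblink_name(l_cursor) > 0 loop
        dbms_hs_passthrough.get_value@dblink_name(l_cursor, 1, l_src_pk);
        dbms_hs_passthrough.get_value@dblink_name(l_cursor, 2, l_row_changed);
        INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
        VALUES (l_src_pk, l_row_changed);
    end loop;
    dbms_hs_passthrough.close_cursor@dblink_name(l_cursor);
end;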
New to EF... using 6.0. I have a stored proc which has a dynamically built SELECT query inside a string variable and outputs it using Execute(@StringQuery). This SELECT has around 20 columns.
After adding this SP in EF, the return type is INT (not sure why). I think I have to add all the columns manually as a complex type in the EDMX. I wanted to know whether there is a better way to handle this, since there are so many columns.
Please suggest.
Procedure Text:
DECLARE @StringQuery VARCHAR(MAX)
SET @StringQuery = 'SELECT AROUND 20 COLUMNS WITH LOT OF CONDITIONS ADDED'
EXECUTE(@StringQuery)
Open your model.
Go to View -> Other Windows -> Entity Data Model Browser.
In the browser, expand your Model -> Function Imports and double-click the stored proc.
Under "Returns a Collection Of", choose "Complex" and press "Get Column Information".
Click "Create New Complex Type".
OK, save.
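One hedged side note, not part of the steps above: Get Column Information may come back empty when the SELECT is built dynamically, since SQL Server cannot describe the result of EXECUTE(@StringQuery). A common workaround is a never-executed static SELECT of the same shape at the top of the procedure, so the tooling can read the metadata (dbo.MyDynamicProc and the column list are placeholders):
ALTER PROCEDURE dbo.MyDynamicProc
AS
BEGIN
    IF 1 = 0
    BEGIN
        -- Never runs at execution time; it only advertises the result shape
        -- to metadata tools (one CAST per returned column)
        SELECT CAST(NULL AS INT) AS Col1,
               CAST(NULL AS VARCHAR(50)) AS Col2
    END
    DECLARE @StringQuery VARCHAR(MAX)
    SET @StringQuery = 'SELECT AROUND 20 COLUMNS WITH LOT OF CONDITIONS ADDED'
    EXECUTE(@StringQuery)
END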
The question pretty much sums it up. I've got to replace text in a large number of stored procedures. It's not so many that doing it manually is impossible, but enough that I'm asking the question. I also prefer automation, as it reduces the chance of user error when we make the change in production.
I can identify them like this:
select OBJECT_DEFINITION(object_id), *
from sys.procedures
where OBJECT_DEFINITION(object_id) like '%''MyExampleLiteral''%'
order by name
Is there any way to mass update them all to change 'MyExampleLiteral' to 'MyOtherExampleLiteral'?
I'd even settle for a way to open all the stored procs. Just finding these stored procs in a larger list will take some time.
I thought about generating alter statements using the above select statements, but then I lose line breaks.
Thanks in advance,
This is a Microsoft SQL Server.
There are different tools to use depending on the database in question. For example, Microsoft SQL Server Data Tools integrates with Visual Studio, and allows you to do these types of operations fairly easily. The database is stored in your solution as scripts, which you can then search and replace any keyword you wish. I'm assuming there would be similar tools available for other platforms.
You could do this with dynamic SQL. Query the system tables to get all the SPs containing your "MyExampleLiteral":
SELECT [object_id] FROM sys.objects o
WHERE type_desc = 'SQL_STORED_PROCEDURE'
AND is_ms_shipped = 0
AND OBJECT_DEFINITION(o.[object_id]) LIKE '%<search string>%'
Then, write a WHILE loop to go through those object_ids. In the loop, get the OBJECT_DEFINITION() into a string, replace the "MyExampleLiteral", then replace CREATE PROCEDURE with ALTER PROCEDURE and execute the string using sp_executesql. A sketch follows below.
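A rough sketch of that loop, reusing the literals from the question. It assumes each definition spells out CREATE PROCEDURE (adjust for CREATE PROC), and note that OBJECT_DEFINITION() preserves the original line breaks, which addresses the concern above:
DECLARE @object_id INT, @sql NVARCHAR(MAX);

-- Collect the procedures containing the literal
SELECT o.[object_id]
INTO #procs
FROM sys.objects o
WHERE o.type_desc = 'SQL_STORED_PROCEDURE'
  AND o.is_ms_shipped = 0
  AND OBJECT_DEFINITION(o.[object_id]) LIKE '%''MyExampleLiteral''%';

WHILE EXISTS (SELECT 1 FROM #procs)
BEGIN
    SELECT TOP (1) @object_id = [object_id] FROM #procs;

    -- Swap the literal, then turn CREATE into ALTER so the definition can be re-run
    SET @sql = REPLACE(OBJECT_DEFINITION(@object_id), 'MyExampleLiteral', 'MyOtherExampleLiteral');
    SET @sql = REPLACE(@sql, 'CREATE PROCEDURE', 'ALTER PROCEDURE');
    EXEC sp_executesql @sql;

    DELETE FROM #procs WHERE [object_id] = @object_id;
END

DROP TABLE #procs;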
Doing something this crazy, make sure you back up the database first.
I have a generic code that is used to retrieve DDL information from a Firebird database (FB2.1). It generates SQL code like
SELECT * FROM MyTable where 'c' <> 'c'
I cannot change this code. Actually, if that matters, it is inside Report Builder 10.
The fact is that some tables in my database are becoming a little too populated (>1M records) and that query is starting to take too long to execute.
If I try to execute
SELECT * FROM MyTable where SomeIndexedField = SomeImpossibleValue
it will obviously use that index and run very quickly.
Well, it wouldn't be that hard for the database to find out that this is an impossible match and do some sort of optimization, avoiding the test against each row.
Is there any way to make my Firebird database optimize that search?
As the filter condition is a negative proposition (and also doesn't refer to a column to search, but only a value to compare to another value), Firebird needs to do a full table scan (without using any index) to confirm that there isn't any record that meets your criteria.
If you can't change the query, you need to wait for the upcoming 3.0 version, which will implement the BOOLEAN data type and therefore should start to evaluate "constant" fake comparisons in advance (maybe the client library will do this evaluation before sending the statement to the server?).
How can I do a bulk data insert from an array into a Sybase table using .NET? I don't want to use the BCP utilities.
It's a bit untidy:
You have to use sp_dboption to turn it on,
then you can use SELECT INTO to get the data in,
then you turn the option back off again, e.g. as sketched below.
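A hedged sketch of that sequence (your_db, source_table, and target_table are placeholders; on older ASE versions the option is called 'select into/bulkcopy' rather than 'select into/bulkcopy/pllsort'):
use master
go
exec sp_dboption 'your_db', 'select into/bulkcopy/pllsort', true
go
use your_db
go
checkpoint  -- makes the option change take effect
go
select * into target_table from source_table  -- minimally logged load
go
use master
go
exec sp_dboption 'your_db', 'select into/bulkcopy/pllsort', false
go
use your_db
go
checkpoint
go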
It's also recommended that you drop all triggers, indexes, etc. before and put them back after, for any, erm, lengthy operation...
How are you connecting? You might have a bit of fun if you are on ODBC, as it tends to blow up on proprietary stuff, unless you turn pass-through on.
Found this, after remembering similar troubles way back when with Delphi and Sybase:
Sybase Manual
You can look at this example to see how to execute the insert statement.
Then, you simply need to:
select each row of the Excel file at a time,
build the insert command,
execute it,
or (the best way)
build an INSERT INTO command with several rows (not all! maybe 50 each time), as sketched below,
execute the command.
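A hypothetical sketch of one such batched statement (my_table, id, and name are placeholders; this uses SELECT ... UNION ALL, which also works on ASE versions without multi-row VALUES support):
INSERT INTO my_table (id, name)
SELECT 1, 'first'
UNION ALL SELECT 2, 'second'
UNION ALL SELECT 3, 'third'
-- ... build up to ~50 rows, execute, then start the next batch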
One side note: this will take a lot more time than the simple bulk copy!
After much investigation, I found that the DataAdapter is able to do batched inserts. It has a batch size property (in ADO.NET it is UpdateBatchSize) with which we can specify the number of rows we want to insert in one trip. The DataAdapter's insert command must be specified.
There is an AseBulkCopy class in the namespace Sybase.Data.AseClient, in Sybase.AdoNet2.AseClient.dll:
DataTable dt = SourceDataSet.Tables[0];
using (AseBulkCopy bulkCopy = new AseBulkCopy((AseConnection)conn))
{
    bulkCopy.BatchSize = 10000;   // rows per batch sent to the server
    bulkCopy.NotifyAfter = 5000;  // raise AseRowsCopied every 5000 rows
    bulkCopy.AseRowsCopied += new AseRowsCopiedEventHandler(bc_AseRowsCopied);
    bulkCopy.DestinationTableName = DestTableName;
    bulkCopy.ColumnMappings.Add(new AseBulkCopyColumnMapping("id", "id"));
    bulkCopy.WriteToServer(dt);
}

static void bc_AseRowsCopied(object sender, AseRowsCopiedEventArgs e)
{
    Console.WriteLine(e.RowCopied + " Copied ....");
}