Checksum Validation after migration from Oracle to SQL Server - sql-server-2008-r2

I am migrating a large database from Oracle 11g to SQL Server 2008 R2 using SSIS. How can data integrity be validated for numeric data using a checksum?

In the past, I've always done this using a few application controls. It should be something that is easy to compute on both platforms.
Frequently, the end result is a query like:
select count(*)
, count(distinct col1) col1_dist_cnt
...
, count(distinct col99) col99_dist_cnt
, sum(col1) col1_sum
...
, sum(col99) col99_sum
from table
Spool the output to a file, Excel, or a database table and compare the outcomes. Save the results for project management and auditors.
I originally wrote this kind of application control for checks between various financial reporting systems for regulatory reporting, so this approach serves most cases.
If exactly one field value is wrong, it will always show up. Two errors might compensate for each other, for example when row 1 col 1 gets the value of row 2 col 1 and vice versa.
To detect that, multiply each value by something unique to the row. For instance, if you have a unique ID or identity column that is included in the migration too:
, sum(ID * col1) col1_sum
...
, sum(ID * col99) col99_sum
When you run into numeric overflows, first try the largest available precision (on SQL Server this can sometimes be difficult). If that is no longer feasible, use a modulo function. Only a few types of error are hidden by taking a modulus.
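A rough sketch of an overflow-safe variant, assuming an integer ID and hypothetical column names (apply the modulus per row, and identically on both sides; Oracle uses MOD(), SQL Server uses the % operator):
-- Oracle: overflow-safe checksum (hypothetical table and column names)
select count(*) row_cnt
, sum(mod(id * col1, 1000000007)) col1_chk
, sum(mod(id * col99, 1000000007)) col99_chk
from my_table;
-- SQL Server: the same checksum; cast to bigint so the product cannot overflow int
select count(*) row_cnt
, sum((cast(id as bigint) * col1) % 1000000007) col1_chk
, sum((cast(id as bigint) * col99) % 1000000007) col99_chk
from my_table;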
The icing on the cake is to auto-generate these statements. On Oracle, look at user_tables and user_tab_columns; on SQL Server, look at sys.columns and friends.
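As a rough sketch of that generation step (hypothetical table name, numeric columns only, type filters purely illustrative; paste the generated fragments into the checksum query):
-- Oracle: build the sum expressions from the data dictionary
select ', sum(id * ' || column_name || ') ' || column_name || '_sum'
from user_tab_columns
where table_name = 'MY_TABLE'
and data_type = 'NUMBER'
order by column_id;
-- SQL Server: the same list from the catalog views
select ', sum(id * ' + c.name + ') ' + c.name + '_sum'
from sys.columns c
where c.object_id = object_id('dbo.MY_TABLE')
and type_name(c.system_type_id) in ('int', 'bigint', 'decimal', 'numeric')
order by c.column_id;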

Related

IBM DB2 - Do not unload DECIMAL rows exceeding a certain length

I have an IBM DB2 table where I want to UNLOAD data from so that I can load it into another DB2 table.
Both tables have the same columns (and types), except one decimal field.
It is DECIMAL(6) in the source table and DECIMAL(5) in the destination table.
There are many entries in the source table which only use up to 5 digits in the DECIMAL field and only some use up all 6 of them.
What I am going to do is only copy the entries which go up to 5 digits in the source table and drop all others.
Can I do this using only the UNLOAD statement? That is, is there an option which tells the system "unload column 'id' as DECIMAL(5) (although it is DECIMAL(6) in the table itself), and if an entry of that column uses all 6 digits (>99999), do not unload that row"?
Also, how would you handle the case if it was the other way around, e.g. unload DECIMAL(5) from the source and LOAD it as DECIMAL(6) in the destination?
Why am I doing this? Because the destination table is an older version of the table used by older versions of the applications. We will drop support in 6 months, but until then we need to refresh the datasets in it.
With UNLOAD and LOAD I am talking about the UNLOAD and LOAD utilities for z/OS (?) described under e.g.
https://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/ugref/src/tpc/db2z_utl_unload.html
Without knowing your source environment: on Db2 for Linux, Unix, and Windows, the EXPORT command is used with a SELECT statement, so you'd apply whatever logic makes sense to turn a DECIMAL(6) into a DECIMAL(5), including skipping rows that need all 6 digits, since there's no way to make them fit into a 5-digit allocation. The preferred method in this environment is now external tables, but they work in a similar fashion.
export to 'myfile.csv' OF del select col1, dec_col6 from thetable where dec_col6 < 100000
OR
create external table 'myfile.csv' using (DELIMITER ',') as select col1, case when dec_col6 > 99999 then 99999 else dec_col6 end from thetable
(See the Db2 documentation on external tables and the EXPORT command.)
Since you used the word "unload" I suppose you're either using a tool such as High Performance Unload or you are on a different flavor of Db2.

SQL Server 2008 R2, "string or binary data would be truncated" error

In SQL Server 2008 R2, I am trying to insert 30 million records from a source table into a target table. Out of these 30 million records, a few have bad data that exceeds the length of the target field. Because of this bad data, the whole insert is aborted with the "string or binary data would be truncated" error, without loading any rows into the target table, and SQL Server does not specify which row had the problem. Is there a way to insert the rest of the rows and catch the bad-data rows without a big impact on performance (performance is the main concern in this case)?
You can use the len function in your where condition to filter out long values:
select ...
from ...
where len(yourcolumn) <= 42
gives you the "good" records
select ...
from ...
where len(yourcolumn) > 42
gives you the "bad" records. You can use such where conditions in an insert select syntax as well.
You can also truncate your string as well, like:
select
left(col, 42) col
from yourtable
In the examples I assumed that 42 is your character limit.
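Putting the two together, a rough sketch with hypothetical table and column names (still assuming a 42-character limit): load the good rows directly and divert the bad rows to a holding table for later inspection.
-- good rows go straight into the target
insert into dbo.TargetTable (id, yourcolumn)
select id, yourcolumn
from dbo.SourceTable
where len(yourcolumn) <= 42;
-- bad rows go to a holding table so they can be reviewed and fixed
insert into dbo.BadRows (id, yourcolumn)
select id, yourcolumn
from dbo.SourceTable
where len(yourcolumn) > 42;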
You did not mention how you are inserting the data, i.e. bulk insert or SSIS.
In this situation I prefer SSIS, where you have more control: you can insert the proper data as @Lajos suggests, and for the bad data you can redirect the rows to a temporary table and examine them there.
You can shape the flow of your logic via transformations and error handling. There is more to read on this here:
https://www.simple-talk.com/sql/reporting-services/using-sql-server-integration-services-to-bulk-load-data/
https://www.mssqltips.com/sqlservertip/2149/capturing-and-logging-data-load-errors-for-an-ssis-package/
http://www.techbrothersit.com/2013/07/ssis-how-to-redirect-invalid-rows-from.html

Getting a Result Set's Column Names via T-SQL

Is there a way to get the column names that an arbitrary query will return using just T-SQL that works with pre-2012 versions of Microsoft SQL Server?
What Doesn't Work:
sys.columns and INFORMATION_SCHEMA.COLUMNS work great for obtaining the column list for tables or views but don't work with arbitrary queries.
sys.dm_exec_describe_first_result_set would be perfect, except that this management function was added in SQL Server 2012. What I'm writing needs to be backwards compatible to SQL Server 2005.
A custom CLR function could easily provide this information but introduces deployment complexities on the server side. I'd rather not go this route.
Any ideas?
So long as the arbitrary query qualifies to be used as a nested query (i.e. no CTEs, unique column names, etc.), this can be achieved by loading the query's metadata into a temp table and then retrieving the column details from tempdb.sys.columns:
SELECT TOP 0 * INTO #t FROM (query goes here) q
SELECT name FROM tempdb.sys.columns WHERE object_id = OBJECT_ID('tempdb..#t')
DROP TABLE #t
Thanks to @MartinSmith for suggesting this approach!
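For example, with a hypothetical inner query (the column list returned is whatever that query projects):
SELECT TOP 0 * INTO #t FROM (SELECT o.object_id, o.name AS object_name FROM sys.objects o) q
SELECT name FROM tempdb.sys.columns WHERE object_id = OBJECT_ID('tempdb..#t') -- returns object_id, object_name
DROP TABLE #t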

SQL Server OpenQuery() behaving differently than a direct query from TOAD

The following query works efficiently when run directly against Oracle 11 using TOAD (with native Oracle drivers)
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date('12/31/9999','mm/dd/yyyy')
and rgn_nm = 'Boston'
) ...
;
The exact same query "never" returns if passed from SQL Server 2008 to the same Oracle database via openquery(). SQL Server has a link to the Oracle database using an Oracle Provider OLE DB driver.
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
The query doesn't return in a reasonable amount of time, and the user kills the query. I don't know if it would eventually return with the correct result.
This result where the direct TOAD query works efficiently and the openquery() version "never" returns is reproducible.
A small modification to the openquery() gives the correct efficient result: Change eff_endt to trunc(eff_endt).
That is well and good, but it doesn't seem like the change should be necessary.
openquery() is supposed to be pass-through, so how can there be a difference between the TOAD and openquery() behavior?
The reason we care is because we frequently develop complex queries with TOAD directly accessing Oracle. Once we have the query functioning and optimized, we convert it to an openquery() string for use in a SQL Server application. It is extremely aggravating to have a query suddenly fail with openquery() when we know it worked as a direct query. Then we have to search for a work-around through trial and error.
I would like to see the Oracle trace files for the two scenarios, but the Oracle server is within another organization, and we are not getting cooperation from the Oracle DBAs.
Does anyone know of any driver, or TOAD, or ??? issues that could account for the discrepancy? Is there any way to eliminate the problem such that both methods always give the same result?
I know you asked this a while ago but I just came across your question.
I agree, they should be the same. Obviously there is a difference. We need to find out where the difference is.
I am thinking out loud as I type...
What happens if you specify just a few columns instead of select * from openquery?
How many rows are supposed to be returned?
What if, in the oracle select, you limit the returned rows?
How quickly does the openquery timeout?
Are TOAD and SS on the same machine? Are you RDPing into the SS and running toad from there?
Are they using the same drivers, including bitness (32/64) and version?
Are they using the same account on oracle?
It is interesting that using the trunc() makes a difference. I assume [eff_endt] is one of the returned fields?
I am wondering if SS is getting all the rows back but is choking on the date conversions. The date type in Oracle may need to be converted to a SS date type before SS shows it to you.
What if you insert the rows from the openquery into a table where the date field is just an (n)varchar? I am thinking SS might just dump the date it gets back from Oracle into that text field without trying to convert it.
something like:
insert into mytable(f1,f2,f3,datetimeX)
select f1,f2,f3,datetimeX from openquery( servername, '
select f1,f2,f3,datetimeX from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
What if TOAD or SS is modifying the query statement before sending it to Oracle? You could fire up Wireshark and see what TOAD and SS are actually sending.
I would be very curious if you get this resolved. I link SS to Oracle often and have not run into this issue.
Here are basic things you can check for to see what the database is doing after it receives the query. First, check that the execution plans are the same in TOAD as when the query runs using openquery. You could plan the query yourself in TOAD using:
explain plan set statement_id = 'openquery_test' for <your query here>;
select *
from table(dbms_xplan.display(statement_id => 'openquery_test'));
then have someone initiate the query using openquery() and have someone with permission to view the v$ tables run:
select sql_id from v$session where username = '<user running the query>';
(If there's more than one connection with the same user, you'll have to find an additional attribute to isolate the row representing the session running the query.)
select *
from table(dbms_xplan.display_cursor('<value from query above>'));
If those look the same then I'd move on to checking database waits and see what it's stuck on.
select se.username
, sw.event
, sw.p1text
, sw.p2text
, sw.p3text
, sw.wait_time_micro/1000000 as seconds_in_wait
, sw.state
, sw.time_since_last_wait_micro/1000000 as seconds_since_last_wait
from v$session se
inner join
v$session_wait sw
on se.sid = sw.sid
where se.username = '<user running the query>'
;
(again, if there's more than one session with the same username, you'll need another attribute to whittle it down to the one you're interested in.)
If the plans are different, then you need to find out why, or if they're the same, look into what it's waiting on (e.g. SQL*Net message to client ?) and why.
I noticed a difference using OLEDB through MS Access (2013) connecting to Oracle 10g & 11g tables, in that it did not always recognize indexes or primary keys on the Oracle tables properly. The same query through an MS Access 2000 database (using odbc) worked fine / had no problem with indexes & keys. The only way I found to fix the OLEDB version was to include all of the key fields in the SELECT -- which was not a satisfying answer, but it's all I could find. This might be an option to try through SSMS / OpenQuery(...) as well.
Besides that... you can try some alternatives to OPENQUERY, such as:
4-part names: SELECT ... FROM Server..Schema.Table
Execute AT: EXEC ('select ...') AT linked server (see the sketch below)
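A rough sketch of both forms, with a hypothetical linked server and schema name (EXEC ... AT requires the linked server's RPC Out option to be enabled):
-- 4-part name through the linked server (Oracle object names are typically uppercase)
SELECT * FROM ORASRV..MYSCHEMA.RPTOFC
-- pass-through execution on the linked server with EXECUTE ... AT
EXEC ('select ocd from rptofc where rgn_nm = ''Boston''') AT ORASRV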
But as for why the OLEDB provider works differently than the native Oracle Provider -- the providers are not identical, and the native provider would be more likely to pave-over Oracle quirks than the more generic OLEDB provider would.

Faster way to transfer table data from linked server

After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL server.
I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:
with mbRemote as
(
select
*
from
openquery(someLinkedDb,'select * from someTable')
)
merge into someTable mbLocal
using mbRemote on mbLocal.id=mbRemote.id
when matched
/*edit*/
/*clause below really speeds things up when many rows are unchanged*/
/*can you think of anything else?*/
and not (mbLocal.field1=mbRemote.field1
and mbLocal.field2=mbRemote.field2
and mbLocal.field3=mbRemote.field3
and mbLocal.field4=mbRemote.field4)
/*end edit*/
then
update
set
mbLocal.field1=mbRemote.field1,
mbLocal.field2=mbRemote.field2,
mbLocal.field3=mbRemote.field3,
mbLocal.field4=mbRemote.field4
when not matched then
insert
(
id,
field1,
field2,
field3,
field4
)
values
(
mbRemote.id,
mbRemote.field1,
mbRemote.field2,
mbRemote.field3,
mbRemote.field4
)
WHEN NOT MATCHED BY SOURCE then delete;
After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL server).
A few questions about this approach:
is it sane?
it strikes me that an update will be run over all fields in local rows that haven't necessarily changed. The only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach/a way of constraining the merge statement to only update rows that have actually changed?
That looks like a reasonable method if you're not able or willing to use a tool like SSIS.
You could add in a check on the when matched line to check if changes have occurred, something like:
when matched and mbLocal.field1 <> mbRemote.field1 then
This may be unwieldy if you have more than a couple of columns to check, so you could add a check column (like LastUpdatedDate, for example) to make this easier; see the sketch below.
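For several columns, one common NULL-safe way to express "has anything changed" in T-SQL is the INTERSECT trick; a rough sketch against the field names used in the question (only the WHEN MATCHED branch is shown):
when matched and not exists
(
select mbLocal.field1, mbLocal.field2, mbLocal.field3, mbLocal.field4
intersect
select mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
)
then
update
set
mbLocal.field1=mbRemote.field1,
mbLocal.field2=mbRemote.field2,
mbLocal.field3=mbRemote.field3,
mbLocal.field4=mbRemote.field4
Unlike a plain <> comparison, the INTERSECT form treats two NULLs as equal, so rows where both sides are NULL are not needlessly updated.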