Best suitable HASHBYTES algorithm for SQL Server 2008 R2

I have a source table with 399 columns in SQL Server 2008 R2.
I want to add one more column holding a HASHBYTES value over the above table's columns, and store the result in a SQL Server 2016 database using an SSIS package.
PS - I can't ask the source owner to upgrade his/her SQL Server to 2016, hence I cannot use SHA2_256 on the source. A few more points:
1. There are a couple of tables having more than 200 columns, and one has exactly 399 columns.
2. I had used the MD5 algorithm, but I read that MD5 is deprecated in SQL Server 2016, and it is advisable to use only SHA2_256 or SHA2_512.
3. About the code: it is a simple truncate and load into destination tables, with some transformations.
4. I am using the hash to decide whether an entire row needs an update or an insert, as CHECKSUM is not reliable (see the sketch below).
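One workaround, sketched below, is to compute the hash on the SQL Server 2016 destination, where SHA2_256 is available, rather than on the 2008 R2 source. This is only a sketch: dbo.Destination and Col1..Col3 are hypothetical names, and the real expression would concatenate all 399 columns.

-- Computed on the 2016 side, where SHA2_256 exists and the old
-- 8000-byte HASHBYTES input limit no longer applies.
-- The '|' delimiter keeps ('ab','c') from hashing like ('a','bc').
ALTER TABLE dbo.Destination
    ADD RowHash AS HASHBYTES('SHA2_256',
        CONCAT(Col1, '|', Col2, '|', Col3));

The SSIS package can then compare RowHash against the incoming row's hash to choose between UPDATE and INSERT.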

Related

Inserting records from Firebird database table into a SQL Server table

I have an application which uses a Firebird (version 2.5) database. I want a trigger on one of its tables to push entries to another database table which is in SQL Server 2008 R2. When I commit, I get the following error:
ErrorCode: 335544569 (ErrorMessage: Dynamic SQL Error, SQL error code = -104).
Code:
CREATE TRIGGER "trig_INV"
FOR "INVA"
ACTIVE
AFTER UPDATE
POSITION 100
AS
BEGIN
  IF ((updating) AND (old.cold <> new.cold)) THEN
  BEGIN
    INSERT INTO 192.168.17.206/1043: [RBT].[dbo].[N_Inv] ([COLA], [COLB], [COLC], [COLD], [COLE])
    SELECT FIRST 1 "COLA", "COLB", "COLC", "COLD", "COLE"
    FROM "INVA"
    ORDER BY COLA DESC
  END
END
I am not sure a Firebird trigger allows pushing records to a SQL Server database. It would be great if anyone who has tried this could share some reference. Thanks in advance.
You get that error because the syntax you're using doesn't exist in Firebird. Firebird has no support for connecting to database systems other than Firebird (in theory you could write a provider that allows connecting to other database systems, but as far as I know, none exists in reality).
If you want to synchronize to a different database system, you will either need to write a UDF or UDR (the replacement for UDFs introduced in Firebird 3) to do this for you, write a custom application that provides the synchronization, or use third-party software to do it (for example, SymmetricDS).

How to COPY CSV file into table resolving foreign key values into ids

I'm an expert at MSSQL but a beginner with PostgreSQL, so all my assumptions are likely wrong.
My boss is asking me to get a 300 MB CSV file into PostgreSQL 12 (a quarter million rows and 100+ columns). The file has usernames in 20 foreign-key columns that would need to be looked up and converted to integer id values before being inserted into a table. The COPY command doesn't seem to handle joining a CSV to other tables before inserting. Am I going in the wrong direction? I want to test locally, but ultimately I am only allowed to hand the CSV to a DBA for importing into a Docker instance on a server. If only I could use pgAdmin and directly insert the rows!
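The usual pattern is to COPY into a staging table of text columns first, then resolve the foreign keys with a joined INSERT ... SELECT. A minimal sketch, with made-up names throughout (staging, target, users, username1, ...); the real version would repeat the join once per username column:

CREATE TEMP TABLE staging (
    username1 text,
    username2 text,
    amount    numeric
    -- ... the remaining CSV columns
);

COPY staging FROM '/path/to/file.csv' WITH (FORMAT csv, HEADER true);

INSERT INTO target (user1_id, user2_id, amount)
SELECT u1.id, u2.id, s.amount
FROM staging s
JOIN users u1 ON u1.username = s.username1
JOIN users u2 ON u2.username = s.username2;

Since only the DBA may load files server-side, the CSV plus this script is all that needs handing over; for local testing, psql's client-side \copy takes the same options.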

Write a Sybase proc to iterate through each table and truncate?

I am running Sybase Adaptive Server Enterprise 15.7. Please could anyone tell me how to write a procedure that iterates through each table within the database and truncates the data in each one?
Thanks
:-)
There are two ways:
(i) write it yourself by cycling over sysobjects and constructing a truncate table command for every table found, then executing it with exec(@cmd) - a sketch follows below;
(ii) download my stored procs from http://www.sypron.nl/new_ssp_dwn.html, install them, and then run:
sp_rv_findobject 'db=your_db_name', 'type=U', 'exec=immediate', 'execarg=truncate table OW.NM'
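A minimal sketch of option (i) as a stored procedure; it is untested, the procedure name is made up, and it assumes no referential constraints block the truncation:

create procedure truncate_all_tables
as
begin
    declare @name varchar(255), @cmd varchar(300)

    -- type 'U' marks user tables in the sysobjects catalog
    declare table_cur cursor for
        select name from sysobjects where type = 'U'

    open table_cur
    fetch table_cur into @name
    while @@sqlstatus = 0   -- 0 means the last fetch succeeded
    begin
        select @cmd = 'truncate table ' + @name
        exec (@cmd)
        fetch table_cur into @name
    end
    close table_cur
    deallocate cursor table_cur
end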

How to set table names and columns as case sensitive in Oracle 11g?

I have a .NET 4.0 application that uses Entity Framework 4 and connects to an MS SQL 2008 database. The naming convention used is, for example, table "Clients", fields "Id", "Id_Order". Now I need to switch from SQL Server to Oracle, so I migrated the MS SQL database to an Oracle database, but the problem is that all the table and column names are uppercased. After generating the EDMX for Oracle (using ODAC), I would have to change the code from "Clients" to "CLIENTS", "Id" to "ID", "Id_Client" to "ID_CLIENT", and that is a lot to change.
The migration was done using the built-in migration tool from Oracle SQL Developer 3.1.07.
A snippet from the generated script:
CREATE TABLE Clients (
I have read that in order to create case-sensitive identifiers you must use double quotes.
So I think the script should be something like this:
CREATE TABLE "Clients" (
Does anyone know of a migration tool that preserves name case, or at least a general option that I can switch on in the script?
Why do you need to change the code? The whole point of Oracle being case-insensitive is that you can refer to the table as clients, Clients, CLIENTS, or even clIeNtS, and it will work.
You only use the double-quotes if you want case-sensitivity for some reason, but unless you have table names that are the same apart from case (shudder), you shouldn't need it.
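A quick illustration of the folding behavior (the table name is hypothetical):

-- Unquoted identifiers are folded to upper case, so these all
-- reference the same table:
SELECT * FROM clients;
SELECT * FROM Clients;
SELECT * FROM CLIENTS;

-- A double-quoted identifier is case-sensitive and must then be
-- quoted the same way everywhere it is used:
CREATE TABLE "Clients" (id NUMBER);
SELECT * FROM "Clients";   -- works
SELECT * FROM clients;     -- ORA-00942: table or view does not exist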

T-SQL Select a bunch of data from another DB and Copy to DB2

Hi all,
First of all, thanks for reading this.
My question is: how can I select a bunch of data from ANOTHER database and insert it into my own database with the same column names and fields?
All I can think of is using a SELECT from DB1 and then an INSERT into DB2.
I plan to write this process inside a stored procedure.
Is there a better way to do so?
Development environment: SQL Server 2008 and VS2010 (using .NET C# to execute the stored proc).
Thank you, I appreciate it a lot.
And please don't hesitate to point out my errors or mistakes. I wish to learn from them.
LiangCk
You can do the following if it's the same database server:
INSERT INTO [DB].[UserName].[TableName]
SELECT * FROM [DB2].[UserName].[TableName]
[DB] is the name of database 1,
[DB2] is the name of database 2,
[UserName] is the schema/owner name (dbo, ...),
[TableName] is, of course, your table.
If you have different SQL Server instances, you can connect the two servers using a linked server; see the sketch below the link.
http://msdn.microsoft.com/en-us/library/ff772782.aspx
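A minimal sketch of the linked-server variant; SERVER2, RemoteDB, MyDB, and the table name are all hypothetical:

-- Register the remote instance as a linked server (run once).
EXEC sp_addlinkedserver @server = N'SERVER2', @srvproduct = N'SQL Server';

-- Four-part names then allow copying across servers directly.
INSERT INTO [MyDB].[dbo].[TableName]
SELECT * FROM [SERVER2].[RemoteDB].[dbo].[TableName];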