Azure Data Factory - ingesting data from a CSV file into a SQL table - copy activity SQL sink stored procedure - table type and table type parameter name - azure-data-factory

I am running an existing Azure Data Factory pipeline that inserts data into a SQL table, where the sink is a SQL Server stored procedure.
I supplied the table type and the table type parameter name, which map to my table.
I am getting the error: Failure happened on 'Sink' side.
Stored procedure:
CREATE PROCEDURE [ods].[Insert_EJ_Bing]
    @InputTable [ods].[EJ_Bing_Type] READONLY,
    @FileName varchar(1000)
AS
BEGIN
    INSERT INTO #workingtable
    (
        [ROWHASH],
        [Date]
    )
    SELECT [ROWHASH],
           [Date]
    FROM @InputTable
END
Error message:
Failure happened on 'Sink' side.
ErrorCode=InvalidParameter,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The value of the property '' is invalid:
'Invalid 3 part name format for TypeName.'.,Source=,''Type=System.ArgumentException,Message=Invalid 3 part name format for TypeName.,Source=System.Data
Can anyone please tell me where I am going wrong?

There is nothing wrong with your approach or code. You are getting this error because of the way you specified the Table type value in the sink.
I got the same error when I specified the table type that way.
Change the Table type from [dbo.][tabletype] to [dbo].[tabletype].
After that change, my copy activity ran successfully.
Rather than typing the stored procedure name and table type by hand, you can also pick them from the dropdowns in the sink settings, which avoids this kind of formatting mistake.
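If you want to double-check that the procedure and the table type themselves are fine, you can call the procedure directly from SSMS with a table variable of that type. This is only a sketch: it assumes [ods].[EJ_Bing_Type] has the [ROWHASH] and [Date] columns referenced in the procedure, and the sample values are made up.

DECLARE @rows [ods].[EJ_Bing_Type];
INSERT INTO @rows ([ROWHASH], [Date]) VALUES ('abc123', '2024-01-01');
-- the procedure writes to #workingtable, so that temp table must exist in the session
EXEC [ods].[Insert_EJ_Bing] @InputTable = @rows, @FileName = 'sample.csv';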

Related

Not able to do copy activity with bit value in Azure Data Factory without column mapping for sink as PostgreSQL

I have multiple CSV files in a folder, like employee.csv, student.csv, etc., all with headers.
I also have tables for all the files (the header names and the table column names are the same).
employee.csv
id|name|is_active
1|raja|1
2|arun|0
student.csv
id|name
1|raja
2|arun
Table Structure:
employee:
id INT, name VARCHAR, is_active BIT
student:
id INT, name VARCHAR
Now I am trying to run a copy activity over all the files using a ForEach activity.
The student table copied successfully, but the employee table did not; the pipeline throws an error while reading employee.csv.
Error Message:
{"Code":27001,"Message":"ErrorCode=TypeConversionInvalidHexLength,Exception occurred when converting value '0' for column name 'is_active' from type 'String' (precision:, scale:) to type 'ByteArray' (precision:0, scale:0). Additional info: ","EventType":0,"Category":5,"Data":{},"MsgId":null,"ExceptionType":"Microsoft.DataTransfer.Common.Shared.PluginRuntimeException","Source":null,"StackTrace":"","InnerEventInfos":[]}
Use a Data Flow activity instead.
In the data flow, select your Source.
Then add a Derived Column transformation and convert the is_active column to the datatype your sink expects.
In my own example, the Salary column came in as a string, so I changed it to an integer.
To modify the datatype, use the expression builder; conversion functions such as toString() or toInteger() do the work.
This way you can change the datatype before the Sink.
As a last step, set the Sink to PostgreSQL and run the pipeline.
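If you would rather keep the plain copy activity, another common workaround (not part of the answer above) is to land the rows in a staging table whose is_active column is plain text and cast it afterwards. The staging table below is hypothetical; the other names match the question.

-- hypothetical staging table with is_active as text
CREATE TABLE employee_stg (id INT, name VARCHAR, is_active VARCHAR);

-- point the copy activity at employee_stg, then cast while moving the rows across
-- (varchar -> integer -> bit; adjust the cast if your column is actually boolean)
INSERT INTO employee (id, name, is_active)
SELECT id, name, CAST(CAST(is_active AS INTEGER) AS BIT)
FROM employee_stg;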

Copy activity auto-creates nvarchar(max) columns

I have an Azure Data Factory copy activity that loads parquet files into Azure Synapse. The sink is configured to auto-create the staging table.
After the load completed, the staging table's string columns were created as nvarchar(4000).
I then create a temp table based on the staging one, and this worked fine until today, when newly created tables suddenly received the nvarchar(max) type instead of nvarchar(4000).
Temp table creation now fails with the obvious error:
Column 'currency_abbreviation' has a data type that cannot participate in a columnstore index.
Why has the auto-created table definition changed, and how can I return to the "normal" behavior without nvarchar(max) columns?
I've got exactly the same problem! I'm using a data factory to read CSV files into my Azure data warehouse; this used to result in nvarchar(4000) columns, but now they are all nvarchar(max). I also get the error
Column xxx has a data type that cannot participate in a columnstore index.
My solution for now is to change my SQL code and use a CAST to fix the column types, but there must be a setting in the data factory to get the former behavior back...
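For reference, the CAST workaround can look like this in a Synapse CTAS; the table names here are placeholders, and currency_abbreviation is the column from the error message:

CREATE TABLE dbo.temp_table
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT CAST(currency_abbreviation AS nvarchar(4000)) AS currency_abbreviation
       -- , the remaining columns ...
FROM dbo.stg_table;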

SparkSQL/JDBC error com.microsoft.sqlserver.jdbc.SQLServerException: Column, parameter, or variable #7: Cannot find data type BLOB

Saving a DataFrame to a table with VARBINARY columns throws this error:
com.microsoft.sqlserver.jdbc.SQLServerException: Column, parameter, or
variable #7: Cannot find data type BLOB
If I try to use VARBINARY in the createTableColumnTypes option, I get "VARBINARY not supported".
Our workaround is:
Change the TARGET schema to use VARCHAR (a sketch of that change follows below).
Add .option("createTableColumnTypes", "Col1 varchar(500), Col2 varchar(500)")
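A minimal sketch of that schema change on the SQL Server side, assuming the affected columns are the Col1 and Col2 placeholders used in the option above and the table name is hypothetical:

ALTER TABLE dbo.TargetTable ALTER COLUMN Col1 VARCHAR(500);
ALTER TABLE dbo.TargetTable ALTER COLUMN Col2 VARCHAR(500);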
While this workaround lets us go ahead and save the rest of the data, the actual binary data from the source table (from which the data is read) is not saved correctly for these two columns - we see NULL data.
We are using the MS SQL Server 2017 JDBC driver and Spark 2.3.2.
Any help or workaround that addresses this issue correctly, so that we don't lose data, is appreciated.

Passing schema name as parameter in db2 - unix

I have a stored proc in DB2 (LUW) on schema A which looks like the one below.
CREATE PROCEDURE A.GET_TOTAL (IN ID CHARACTER(23))
BEGIN
DECLARE CURSOR1 CURSOR WITH HOLD WITH RETURN TO CLIENT FOR
SELECT * FROM B.employee e where e.id=ID
END
This sproc, which exists on schema "A", runs a query against another schema "B". The schema name B may change based on application logic. How can I pass the schema name as a parameter to this sproc?
First, I do not think that stored procedure works as written, because the SELECT statement is not prepared as a dynamic statement and the cursor is never opened.
If you want to execute dynamic SQL in a stored procedure, you need to build the text in a statement variable, then prepare it and execute it (or open a cursor over it).
Let's suppose you pass the schema name as a parameter: to switch schemas you can dynamically execute "set current schema", or concatenate the schema name into your query.
For more information: http://www.toadworld.com/platforms/ibmdb2/w/wiki/7461.prepare-execute-and-parameter-markers.aspx
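A rough, untested sketch of what that could look like in DB2 SQL PL, with the schema passed in as an extra parameter (IN_SCHEMA is a name I made up; the query mirrors the one in the question, and you would need an alternate statement terminator when creating this from the CLP):

CREATE PROCEDURE A.GET_TOTAL (IN ID CHARACTER(23), IN IN_SCHEMA VARCHAR(128))
DYNAMIC RESULT SETS 1
LANGUAGE SQL
BEGIN
  DECLARE V_SQL VARCHAR(1000);
  DECLARE CURSOR1 CURSOR WITH HOLD WITH RETURN TO CLIENT FOR S1;
  -- the schema name is concatenated into the text, so validate IN_SCHEMA before using it
  SET V_SQL = 'SELECT * FROM ' || IN_SCHEMA || '.employee e WHERE e.id = ?';
  PREPARE S1 FROM V_SQL;
  OPEN CURSOR1 USING ID;
END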

Postgres pg_dump - stored procedure now fails because of a boolean

I have a stored procedure that has started to fail for no apparent reason. Well, there must be one, but I can't find it!
This is a process I have followed a number of times before with no problem, and the source server works fine.
I do a pg_dump of the database on the source server and import it onto another server - this is fine, I can see all the data and run updates.
Then, on the imported database, which has two identical schemas, I run a stored procedure that does the following:
For each table in schema1
Truncate table in schema2
INSERT INTO schema2."table" SELECT * FROM schema1."table" WHERE "Status" in ('A','N');
Next
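For context, a loop like that can be written in PL/pgSQL roughly as follows. This is my sketch of the described process, not the asker's actual procedure, and it assumes every table in schema1 has a "Status" column:

DO $$
DECLARE
    t record;
BEGIN
    FOR t IN SELECT table_name
             FROM information_schema.tables
             WHERE table_schema = 'schema1' AND table_type = 'BASE TABLE'
    LOOP
        EXECUTE format('TRUNCATE TABLE schema2.%I', t.table_name);
        EXECUTE format('INSERT INTO schema2.%I SELECT * FROM schema1.%I WHERE "Status" IN (''A'',''N'')',
                       t.table_name, t.table_name);
    END LOOP;
END $$;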
However, this now gives me an error when it did not before.
The error is
*** Error ***
ERROR: column "HBA" is of type boolean but expression is of type integer
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Why am I getting this? The only difference since the last time I followed this procedure is that the table in question now has an extra column, so the "HBA" boolean column is no longer the last field. But then why does it still work in the original database?
I have tried removing all the data and dropping and rebuilding the table; these all fail.
However, if I drop the column and add it back in, it works. Is there something about boolean fields that means they need to be the last field?
Any help greatly appreciated.
Using Postgres 9.1
The problem here: the tables in the two schemas have a different column order.
If you do not explicitly specify the column list and order in INSERT INTO table(...), or if you use SELECT *, you are relying on the column order of the tables (and now you see why that is a bad thing).
You were effectively doing something like
INSERT INTO schema2.table1(id, bool_column, int_column) -- based on the order of columns in schema2.table1
select id, int_column, bool_column -- based on the order of columns in schema1.table1
from schema1.table1;
And such a query causes a cast error because the column types mismatch.
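One way to fix it is to name the columns explicitly on both sides, so the physical column order no longer matters (the column names here are the illustrative ones from above):

INSERT INTO schema2.table1 (id, bool_column, int_column)
SELECT id, bool_column, int_column
FROM schema1.table1
WHERE "Status" IN ('A', 'N');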