Not able to do copy activity with bit value in Azure Data Factory without column mapping for sink as PostgreSQL

I have multiple CSV files in a folder, like employee.csv, student.csv, etc., all with headers.
I also have a table for each file (the header and the table column names are the same).
employee.csv
id|name|is_active
1|raja|1
2|arun|0
student.csv
id|name
1|raja
2|arun
Table Structure:
employee:
id INT, name VARCHAR, is_active BIT
student:
id INT, name VARCHAR
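For reference, a minimal PostgreSQL DDL sketch matching the structure above (the varchar lengths and the bit(1) width are assumptions, not given in the post):
-- Hypothetical DDL for the two sink tables described above;
-- varchar lengths and bit(1) width are assumed.
CREATE TABLE employee (
    id        integer,
    name      varchar(100),
    is_active bit(1)
);
CREATE TABLE student (
    id   integer,
    name varchar(100)
);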
Now I'm trying to run a copy activity over all the files using a ForEach activity.
The student table copied successfully, but the employee table did not; the copy throws an error while reading the employee.csv file.
Error Message:
{"Code":27001,"Message":"ErrorCode=TypeConversionInvalidHexLength,Exception occurred when converting value '0' for column name 'is_active' from type 'String' (precision:, scale:) to type 'ByteArray' (precision:0, scale:0). Additional info: ","EventType":0,"Category":5,"Data":{},"MsgId":null,"ExceptionType":"Microsoft.DataTransfer.Common.Shared.PluginRuntimeException","Source":null,"StackTrace":"","InnerEventInfos":[]}

Use a Data Flow activity.
In the data flow, select the Source.
After this, add a Derived Column transformation and change the datatype of the is_active column from BIT to String.
For example, in my case the Salary column had a string datatype, so I changed it to integer.
To modify the datatype, use the expression builder; you can use an expression such as toString(is_active).
This way you can change the datatype before the sink.
In the last step, provide PostgreSQL as the Sink and run the pipeline.

Related

MySQL Workbench 8.0 data types are deleted when I enter an int or datetime data type

I have a database named blog and a table named users. While creating this table, I want to make the data type of the id column int. In another table, I want to make the data type of the date column datetime, but if I choose either of these data types, the data type is reset when I press the Apply button. Can you help me?
I cannot select these columns when creating a foreign key because their data type appears empty.

Azure Data Factory - ingesting data from a CSV file to a SQL table - copy activity SQL sink stored procedure - table type and table type parameter name

I am running one of the existing Azure Data Factory pipelines that inserts into a SQL table, where the sink is a SQL Server stored procedure.
I supplied the table type and the table type parameter name, which map to my table.
I am getting the error: Failure happened on 'Sink' side.
Stored procedure:
CREATE PROCEDURE [ods].[Insert_EJ_Bing]
    @InputTable [ods].[EJ_Bing_Type] READONLY,
    @FileName varchar(1000)
AS
BEGIN
    INSERT INTO #workingtable
    (
        [ROWHASH],
        [Date]
    )
    SELECT [ROWHASH],
           [Date]
    FROM @InputTable
END
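For context, the READONLY parameter implies a user-defined table type already exists. A rough sketch of what [ods].[EJ_Bing_Type] could look like (the column names come from the procedure; the datatypes are assumptions):
-- Hypothetical definition of the table type used by the procedure above;
-- only the column names are from the post, the datatypes are guesses.
CREATE TYPE [ods].[EJ_Bing_Type] AS TABLE
(
    [ROWHASH] varchar(64),
    [Date]    date
);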
Error message:
Failure happened on 'Sink' side.
ErrorCode=InvalidParameter,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The value of the property '' is invalid:
'Invalid 3 part name format for TypeName.'.,Source=,''Type=System.ArgumentException,Message=Invalid 3 part name format for TypeName.,Source=System.Data
Can anyone please tell me where I am going wrong?
There is nothing wrong with your approach or code. You are getting this error because of the way you specified the Table type value in the sink.
I got the same error as you with the table type specified that way.
Change the Table type from [dbo.][tabletype] to [dbo].[tabletype].
With that change, the copy activity succeeds.
Rather than typing the stored procedure name and Table type names by hand, you can select them in the copy activity's sink settings.

Copy activity auto-creates nvarchar(max) columns

I have an Azure Data Factory copy activity which loads Parquet files into Azure Synapse. The sink is configured as shown below:
After data loading completed, I had a staging table structure like this:
Then I create a temp table based on the staging one. This had been working fine until today, when newly created tables suddenly received the nvarchar(max) type instead of nvarchar(4000):
Temp table creation now fails with the obvious error:
Column 'currency_abbreviation' has a data type that cannot participate in a columnstore index.'
Why has the auto-create table definition changed, and how can I return it to the "normal" behavior without nvarchar(max) columns?
I've got exactly the same problem! I'm using a data factory to read CSV files into my Azure data warehouse, and this used to result in nvarchar(4000) columns, but now they are all nvarchar(max). I also get the error
Column xxx has a data type that cannot participate in a columnstore index.
My solution for now is to change my SQL code and use a CAST to change the formats, but there must be a setting in the data factory to get the former results back...
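A rough sketch of that CAST workaround for a Synapse dedicated SQL pool (the table names are placeholders; only the currency_abbreviation column comes from the question):
-- Rebuild the table with an explicit nvarchar(4000) instead of the
-- auto-created nvarchar(max), so it can participate in a columnstore index.
-- Table names here are placeholders.
CREATE TABLE dbo.my_table_tmp
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT CAST(currency_abbreviation AS nvarchar(4000)) AS currency_abbreviation
FROM dbo.stg_my_table;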

Copy from S3 AVRO file to Table in Redshift Results in All Null Values

I am trying to copy an AVRO file that is stored in S3 to a table I created in Redshift and I am getting all null values. However, the AVRO file does not have null values in it. I see the following error when I look at the log: "Missing newline: Unexpected character 0x79 found at location 9415"
I did some research online and the only post I could find said that values would be null if the column name case in the target table did not match the source file. I have ensured the case for the column in the target table is the same as the source file.
Here is a mock snippet from the AVRO file:
Objavro.schemaĒ{"type":"record","name":"something","fields":[{"name":"g","type":["string","null"]},{"name":"stuff","type":["string","null"]},{"name":"stuff","type":["string","null"]}
Here is the SQL code I am using in Redshift:
create table schema.table_name (g varchar(max));
copy schema.table_name
from 's3://bucket/folder/file.avro'
iam_role 'arn:aws:iam::xxxxxxxxx:role/xx-redshift-readonly'
format as avro 'auto';
I am expecting to see a table with one column called g where each row has the value stuff.

NUMBER is automatically getting converted to DECFLOAT

I am new to DB2. I am trying to run an ALTER query on an existing table.
Suppose the EMP table already exists in the database and has the columns below:
id int
name varchar(50)
Now I am trying to add a new column, Salary. For that I am running the query below:
ALTER TABLE EMP ADD SALARY NUMBER
The above query runs successfully. After that I described the EMP table, and it gave me the result below:
ID INTEGER
NAME VARCHAR
SALARY DECFLOAT
As I am trying to add a column with the NUMBER datatype, I don't know how NUMBER is getting converted to DECFLOAT.
It would be helpful if anybody could explain this.
Db2 version details are as follows:
Service_Level: DB2 v11.1.2.2
Fixpack_num : 2
For Db2 for Linux/Unix/Windows, with the NUMBER data type enabled (it is not the default), this is the documented behaviour.
Specifically:
The effects of setting the number_compat database configuration parameter to ON are as follows. When the NUMBER data type is explicitly encountered in SQL statements, the data type is implicitly mapped as follows:
If you specify NUMBER without precision and scale attributes, it is mapped to DECFLOAT(16).
If you specify NUMBER(p), it is mapped to DECIMAL(p).
If you specify NUMBER(p,s), it is mapped to DECIMAL(p,s).
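To make the documented mapping concrete, a small sketch (SALARY comes from the question above; the other column names and precisions are made up for illustration):
-- With number_compat ON, these statements are implicitly mapped as commented:
ALTER TABLE EMP ADD SALARY NUMBER          -- maps to DECFLOAT(16)
ALTER TABLE EMP ADD BONUS NUMBER(10)       -- maps to DECIMAL(10)
ALTER TABLE EMP ADD RATE NUMBER(10,2)      -- maps to DECIMAL(10,2)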