Table storage showing data in only string format - azure-data-factory

I'm using ADF pipeline to copy data from data lake to blob storage and then from blob storage to table storage.
As you can see below, here are the column types in ADF Data Flow Sink - Blob Storage (integer, string, timestamp):
Here is the Mapping settings in Copy data activity:
On checking the output in table storage, I see all columns are of string type:
Why is table storage saving data in string values? How do I resolve this issue in table storage so that it will accept columns in the right type (integer, string, timestamp)? Please let me know. Thank you!

Usually, when loading data from Blob Storage in Data Factory, the default data type of every column in the blob file is String, and Data Factory converts the data types automatically for the sink.
But this automatic conversion cannot cover every scenario.
I tested copying data from Blob to Table Storage and found that if we don't specify the data types manually in the Source, then after the pipeline executes, all the data types in the Sink (Table Storage) will be String.
For example, this is my Source blob file:
If I don't change the source data types, everything seems fine in the Sink table:
But after the pipeline executes, the data types in Table Storage are all String:
If we change the data types in the Source blob manually, it works fine.
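For context, here is a minimal sketch (using the azure-data-tables Python SDK, not what the Copy activity runs internally) of why the declared types matter: Table Storage stores every property with an EDM type, and any value handed to it as text is persisted as Edm.String. The connection string and table name below are placeholders.

from datetime import datetime, timezone

from azure.data.tables import EdmType, EntityProperty, TableClient

table = TableClient.from_connection_string(
    "<storage-connection-string>", table_name="CopyDemo"  # placeholders
)

# If every value arrives as text (the default for a DelimitedText source),
# the stored entity ends up as all Edm.String properties:
table.upsert_entity({
    "PartitionKey": "demo", "RowKey": "1",
    "Quantity": "42",                    # stored as Edm.String
    "LoadedAt": "2023-01-01T00:00:00Z",  # stored as Edm.String
})

# Declaring the types (which is what fixing the source/mapping types achieves)
# stores properly typed properties instead:
table.upsert_entity({
    "PartitionKey": "demo", "RowKey": "2",
    "Quantity": EntityProperty(42, EdmType.INT32),          # Edm.Int32
    "LoadedAt": datetime(2023, 1, 1, tzinfo=timezone.utc),  # Edm.DateTime
})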
One other point that is a little confusing: judging from your screenshot, that looks like the UI of a Mapping Data Flow sink, but Mapping Data Flow doesn't support Table Storage as a sink.
Hope this helps.

Finally figured out the issue: I was using the DelimitedText format for Blob Storage. After converting to Parquet format, I can see the data being written to Table Storage with the correct types.

Related

Issue while updating copy activity in ADF

I want to update a source Excel column with a particular string.
My source contains n columns. I need to check whether the string "apple" exists in any of the columns. If the value exists in any column, I need to replace "apple" with the string "orange" and output the Excel file. How can I do this in ADF?
Note: I cannot use Data Flows since we are using a self-hosted VM.
Excel files have a lot of limitations in ADF; for example, Excel is not supported as a sink in the Copy activity or in Data Flow.
You can raise a feature request for that with the ADF team.
So, try the above operation with a CSV, copy the result to a CSV in Blob Storage, and then convert it to Excel on your local machine.
For operations like this, Data Flow is a better option than ordinary activities, since Data Flow is designed for transformations.
But Data Flow does not support a self-hosted linked service.
So, as a workaround, first copy the Excel file to Blob Storage as a CSV using a Copy activity, and create a Blob linked service for it to use in the Data Flow.
Then follow the process below in the Data Flow.
Source CSV from Blob:
Derived column transformation:
Give the condition for each column, e.g. case(col1=="apple", "orange", col1)
Sink:
In the Sink settings, specify Output to single file.
After the pipeline executes, a CSV will be generated in the blob container. You can convert it to Excel on your local machine.
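If it helps to sanity-check the logic outside ADF, here is a small pandas sketch of what the derived column condition does, applied to every column of the staged CSV; the file names are placeholders.

import pandas as pd

# Placeholder file names for the CSV staged in Blob Storage.
df = pd.read_csv("source_staged.csv")

# Equivalent of case(colN == "apple", "orange", colN) for each column:
# replace the exact cell value "apple" with "orange" wherever it appears.
df = df.replace("apple", "orange")

df.to_csv("transformed.csv", index=False)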

Azure Table Storage Sink in ADF Data Flow

Here is how my ADF Pipeline looks like. In Data Flow, I read some data from a source, perform filter & join and store data to a sink. My plan was to use Azure Table Storage as the sink. However, according to https://github.com/MicrosoftDocs/azure-docs/issues/34981, ADF Data Flow does not support Azure Table Storage as a sink. Is there an alternative to use Azure Table Storage as the sink in Data Flow?
No, it isn't possible. Azure Table Storage cannot be the sink of a data flow.
Only these six datasets are allowed:
And those aren't the only limits: when used as the sink of a data flow, Azure Blob Storage and Azure Data Lake Storage Gen1 & Gen2 only support four formats: JSON, Avro, Text, and Parquet.
At least for now, your idea is not a viable solution.
For more information, have a look at the official doc:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-sink#supported-sink-connectors-in-mapping-data-flow
Even today it isn't possible. One option (we are currently solving a similar case) is to use Blob Storage as a temporary destination.
The data flow stores its result in Blob Storage. The source data is processed by all the different transformations in the data flow and prepared for Table Storage, i.e. PartitionKey, RowKey, and all the other columns are there.
A subsequent Copy activity then moves the data from Blob Storage into Table Storage easily.
The marked part of the pipeline does exactly this:
Full Orders runs the data flow
to Table Storage copy activity moves the data into Table Storage
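Conceptually, that second hop is nothing more than reading the staged file and upserting each row as an entity. The following Python sketch (using azure-storage-blob and azure-data-tables, with placeholder connection string, container, blob and table names) shows roughly what the Copy activity does for you; the target table is assumed to already exist.

import csv
import io

from azure.data.tables import TableClient
from azure.storage.blob import BlobClient

conn = "<storage-connection-string>"  # placeholder

# Read the CSV the data flow staged in Blob Storage.
blob = BlobClient.from_connection_string(
    conn, container_name="staging", blob_name="full-orders.csv"
)
rows = csv.DictReader(io.StringIO(blob.download_blob().readall().decode("utf-8")))

# Upsert each row into Table Storage; the data flow already produced
# PartitionKey and RowKey columns, so each row maps directly to an entity.
table = TableClient.from_connection_string(conn, table_name="Orders")
for row in rows:
    table.upsert_entity(row)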

Can I force flush a Databricks Delta table, so the disk copy has latest/consistent data?

I am accessing Databricks Delta tables from Azure Data Factory, which does not have a native connector to Databricks tables. So, as a workaround, I create the tables with the LOCATION keyword to store them in Azure Data Lake. Then, since I know the table file location, I just read the underlying Parquet files from Data Factory. This works fine.
But... what if there is cached information in the Delta transaction log that has not yet been written to disk? Say, an application updated a row in the table, and the disk does not yet reflect this fact. Then my read from Data Factory will be wrong.
So, two questions...
Could this happen? Are changes held in the log for a while before being written out?
Can I force a transaction log flush, so I know the disk copy is updated?
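For context on the consistency concern: the Delta transaction log (the _delta_log folder under the table's LOCATION) defines which Parquet files belong to the current table version, so scanning every Parquet file in the folder can pick up files that were logically removed by an update or not yet committed. A small sketch using the open-source deltalake (delta-rs) package, with a placeholder path, shows how to read through the log instead of reading the raw files.

from deltalake import DeltaTable

# Placeholder path; in this setup it would be the Data Lake LOCATION
# the table was created with.
dt = DeltaTable("/mnt/datalake/sales_orders")

print("current version:", dt.version())

# Only the files referenced by the transaction log are part of the current
# snapshot; Parquet files in the same folder that are not listed here
# (e.g. ones rewritten by an UPDATE) should not be read.
for uri in dt.file_uris():
    print(uri)

# Reading through the log always yields a consistent snapshot.
df = dt.to_pandas()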
Azure Data Factory now has built-in Delta Lake support (this was not the case at the time the question was raised).
Delta is available as an inline dataset in an Azure Data Factory data flow activity. To get column metadata, click the Import schema button in the Projection tab. This will allow you to reference the column names and data types specified by the corpus (see also the docs here).
ADF supports Delta Lake format as of July 2020:
https://techcommunity.microsoft.com/t5/azure-data-factory/adf-adds-connectors-for-delta-lake-and-excel/ba-p/1515793
The Microsoft Azure Data Factory team is enabling .. and a data flow connector for data transformation using Delta Lake
Delta is currently available in ADF as a public preview in data flows as an inline dataset.

Use output data from Copy activity in subsequent activity

I have a ForEach activity which uses a Copy activity with an HTTP source and a blob storage sink to download a json file for each item. The HTTP source is set to Binary Copy whereas the blob storage sink is not, since I want to both copy the complete json file to blob storage and also extract some data from each json file and store this data in a database table in the following activity.
In the blob storage sink, I've added a column definition which extracts some data from the json file. The json files are stored in blob storage successfully, but how can I access the extracted data in the subsequent stored procedure activity?
You can try using a Lookup activity to extract the data from the blob and use its output in a later activity. Refer to https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity to see whether the Lookup activity can satisfy your requirements.

Azure Data factory: Copy Data from lookup to Blob

I have a Lookup activity which queries Salesforce. I want to store all of the results in Blob storage. Any idea how to do it?