Dataset overwrites database table - Azure Data Factory

I was working on a data flow inside Azure Data Factory. It had a derived column transformation, then a select transformation, and finally a sink.
At the sink I created a new dataset where my output should be stored, and here comes my problem:
All the data from my table, inside my database, is completely gone! Reviewing everything I did, I just can't find out why. The only detail that catches my attention is the dataset settings: in the "Table" field I have the name of my database table. Is it possible that the dataset overwrote my data? If so, how can I retrieve it?
Sink settings

The image you shared is not the Sink settings; it is the Dataset settings. Datasets do not, in their own right, "do" anything; they merely define the connection. The Data Flow, however, could absolutely be responsible.
Open the Data Flow and highlight the Sink activity. Then open the "Settings" tab and check out the "Table action" options:
If "Recreate table" or "Truncate table" is selected, that would cause the behavior you are seeing: "Truncate table" deletes every existing row before the load, and "Recreate table" drops and rebuilds the table entirely.
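The table action also shows up in the data flow script behind the designer (the Script button on the data flow canvas). A rough sketch of what a SQL sink with the truncate action looks like; the stream and sink names are placeholders, and truncate is the property name as I recall it, so treat this as an assumption and compare it against your own script:
IncomingStream sink(allowSchemaDrift: true,
    validateSchema: false,
    deletable: false,
    insertable: true,
    updateable: false,
    upsertable: false,
    truncate: true,
    format: 'table') ~> sink1
If you see truncate: true (or the recreate equivalent) on your sink, every run clears the existing rows before loading, which would explain the empty table.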

Related

Can a flowlet pass through all columns?

We are using Azure Data Factory and are exploring whether we could use Flowlets for transformations that occur in most of our Data flows.
Our first attempt was to create a flowlet that only adds some columns (using a "Derived Column" step) to a stream. So in the "Input" step we don't require any column to be present in the received stream, then the "Derived Column" step, followed by the "Output" step. And done... we thought.
When using this flowlet in a data flow, we go from 25 columns down to only the columns we added; all our original columns are no longer available.
Is it possible to have a flowlet work on only a selection of the available columns, while all columns in the stream are "passed through" and thus remain available in the sink of the original data flow?
Be sure to select the Allow Schema Drift option on your Flowlet input settings.

Azure Data Factory Overwrite Existing Folder/Partitions in ADLS Gen2

I have my ADF sink settings set to Clear the folder, but partitioned via an ID.
But this destination already has other existing partitions that I do not want to remove.
When an ID comes in, I just want to clear that specific folder/partition, but it is actually clearing the full folder rather than just that partition. Am I missing a setting?
To overwrite only the partitions that appear in the new data and keep the rest of the old partition data, you can make use of the pre commands available in the Settings tab of the data flow sink. Look at the following demonstration.
The following is my initial data which I have partitioned based on id.
Now let's say the following is the new data that you are going to write. Here, according to the requirement, you want to overwrite the partitions that are present and keep the rest as it is.
First, we need to get the distinct key column values (id in my case). Then use them in the pre commands of sink settings to remove files only from these partitions.
Take the above data (from the second image) as the source of dataflow1. Apply a derived column transformation to add a new column with a constant value, say 'xxx' (so that you can group on this column and apply the collect() aggregate function).
Group by this new column and use distinct(collect(id)) as the aggregate.
Now for the sink, choose Cache and check Write to activity output. When you run this dataflow in the pipeline, the debug output would be:
Send this array value to a parameter created in another dataflow, where you make the necessary changes and overwrite the partitions. Use the following dynamic content:
@activity('Data flow1').output.runStatus.output.sink1.value[0].val
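To recap the pieces of the first dataflow that this path refers to (grp is just a placeholder name I chose for the constant column; val is the aggregate column, which is why the path ends in .val; sink1 is the cache sink's name):
Derived column:  grp = 'xxx'
Aggregate:       group by grp, then val = distinct(collect(id))
Sink:            Cache, with Write to activity output enabled
The expression therefore hands the array of distinct ids from the new data to the second dataflow's parameter.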
Now in this second dataflow, the source is the same data used in the first dataflow. For the sink, instead of selecting the Clear the folder option, scroll down to the pre/post commands section and give the following dynamic content:
concat('rm /output/id=',toString($parts),'/*')
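Assuming the parameter resolves to a single id value, say 2 (the partition shown below), the expression builds the command rm /output/id=2/*, so only that partition's files are removed before the new rows are written; the other id= folders are left untouched.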
Now when you run this pipeline, it executes successfully and overwrites only the required partitions while keeping the other partitions intact.
The following is sample partition data (id=2) to show that the data is overwritten (only one part file, with the required data, will be available).
Why don't you specify the filename and write it to a single file?

How to configure a Synapse Mapping Data Flow for INSERT/UPDATE/DELETE when the destination table does not yet exist

I am trying to build a generic Mapping Data Flow to do some basic cleansing on tables in my Data Lake. I need it to work both on an ongoing basis, after data already exists in my cleansed tables, and when new tables are added (it would detect them automatically and create and populate the destination). Both the source and destination tables will be Delta tables.
The approach I have taken is to configure Sources for both my actual source and the target, and use either JOIN transformations or EXISTS transformations to identify the new, updated and removed rows.
This works fine for INSERTs and UPDATEs; however, my issue is dealing with DELETEs when there is no data currently in the destination. Obviously there will be nothing to DELETE - that is as expected. However, because I reference the key column that will exist once data is loaded to the table, I get an error on an initial run that states:
ERROR Dataflow AppManager: name=BatchJobListener.failed, opId=xxx, message=Job 'xxx failed due to reason: DF-SINK-007 at Sink 'cleansedTableWithDeletes': Sink results in 0 output columns. Please ensure at least one column is mapped.
The overall process looks as follows:
Has anyone developed a pattern that works for a generic flow (this one is parameter-driven and ensures schema drift is accommodated), or a way for the Data Flow to think that there IS a column in the destination it can refer to, so it can get past this issue?
In the Source options, check Allow no files found.
You can also provide the date dynamically in the Filter by last modified option.
Refer - https://learn.microsoft.com/en-us/azure/data-factory/data-flow-sink#sink-settings

ADF Mapping Data in Sink Table Error - Sink Table remains empty

I have a Data Flow in Azure Data Factory which I want to use to combine data from three sources and then sink it in a destination table (with some transformation in between). For the sink table I created a table in SQL, matching the column headers and data types from my Data Flow in Azure.
However, when I publish the data flow, the sink table remains empty. The only error I get is under Mapping: "At least one incoming column is mapped to a column in the sink dataset schema with a conflicting type, which can cause NULL values or runtime errors." This seems to be preventing me from enabling Auto Mapping, so I mapped the columns manually.
So here is where I am at the moment:
DataFlowLayout
I tried manually mapping the columns; the data types in my input and my sink tables match up with each other, but my sink table is still empty.
SinkTableDataMapping
Under Data Preview for both my source and my sink tables I am able to view a sample of my data, so they are not empty
SinkTableDataPreview
Anyone experience something similar?
The details provided above are not sufficient to give any input. Can you add more details, like the file formats of the three sources, where you are pulling from, and additional screenshots? That would really help in guiding you better. Thanks.
You can try the steps below:
Disable auto mapping of columns in the Sink transformation and manually map the columns.
Check that the Allow insert option is selected under the sink transformation settings.
Also make sure all column data types of the input and output of the Sink transformation match, to avoid nulls.
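If a type conflict remains after that, one option is to cast the offending column in a derived column transformation just before the sink so the incoming type matches the table. A minimal sketch, assuming a hypothetical column OrderId that arrives as a string but is stored as an int in SQL:
Column: OrderId    Expression: toInteger(OrderId)
toString(), toDecimal() and toDate() cover the other common mismatches in the data flow expression language.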
Thanks for the feedback. In the end, the issue was that the pipeline which runs the data flow and sinks the data into the destination table was not properly set up; that is why the data flow was not showing any errors but the sink table still remained empty. The data flow was essentially hanging in the air, with no instruction to actually perform the sink.

Azure Data Factory data sink seems to be hidden or collapsed so it can't be validated

I'm quite new to Data Factory, so perhaps this is something very obvious that I'm missing.
I removed one of my datasets from within the list of Factory Resources (the pane on the left) and created a new one so that I could name it correctly. Previously, the dataset I deleted was associated with a data flow sink, which was named crmConstituent - this can be seen at the bottom of my screenshot. I now realise I should have removed the sink before I deleted the dataset, but I got ahead of myself.
I then added a new/replacement sink to my data flow, named SinkConstituents, associated with my recreated dataset. However, it looks like the original sink has persisted in the data flow somewhere and I can't seem to remove it using the interface. To the right of NameColumns in the top row of the flow, I can see my fresh new sink (SinkConstituents) but in between the two, I can also see another blue line that is possibly part of my old sink that I wanted to remove, but it appears to be only partially visible, as though it is collapsed. If I right-click on the "collapsed sink", it comes up with the option to delete it, but this doesn't appear to actually do anything if I select delete; the blue line remains and so does the error. I can't click on the collapsed sink to view the configuration of it in the bottom pane.
I'm currently unable to validate my factory because the original sink named crmConstituent doesn't have a dataset any more (because I am a monster who deletes things without thinking ahead), but I don't seem to be able to remove the sink from my data flow or recreate the original dataset to soothe the sink into thinking nothing is missing after all.