Copy from master row broken when defaultCsvExportParams is provided (ag-grid #27, React) - ag-grid

Example on Plunker
Copy from the master row and paste when defaultCsvExportParams is provided.
Actual result: data copied from the current row + data from the detail component.
Expected result: data copied only for the current row.
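For reproduction context, a minimal grid setup that hits this path might look like the following sketch (the field names and export params here are illustrative, not taken from the report):

```javascript
// Minimal master/detail grid options that exercise clipboard copy together
// with defaultCsvExportParams. Field names are made up for illustration.
const gridOptions = {
  masterDetail: true,
  columnDefs: [
    { field: 'account', cellRenderer: 'agGroupCellRenderer' },
    { field: 'balance' },
  ],
  detailCellRendererParams: {
    detailGridOptions: {
      columnDefs: [{ field: 'txId' }, { field: 'amount' }],
    },
    getDetailRowData: (params) =>
      params.successCallback(params.data.transactions || []),
  },
  // Providing this is what triggers the reported bug: copying a master row
  // then also pulls in rows from the detail component.
  defaultCsvExportParams: { skipColumnHeaders: true },
};
```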

Related

Bad data is being copied during copy/paste in AG Grid

I have an AG Grid table. When I copy and paste a particular column cell value, bad data gets copied.
For example, we have a column with a blank value; if I select the cell, copy it, and paste it into another cell, bad characters are copied:
*H#$#$0
#$#$7
#$#$4
#$#$7
Could someone help with this issue?
Root cause of this issue
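While the root cause isn't pinned down here, a defensive workaround is to normalize copied values through AG Grid's processCellForClipboard callback. The sanitize helper below is a hypothetical sketch, not the library's fix:

```javascript
// Hypothetical sanitizer: blank cells become empty strings and control
// characters are stripped, so nothing unexpected lands on the clipboard.
function sanitizeClipboardValue(value) {
  if (value === null || value === undefined) return '';
  // Remove non-printable control characters that can corrupt pasted data.
  return String(value).replace(/[\u0000-\u001f\u007f]/g, '');
}

const gridOptions = {
  // processCellForClipboard is AG Grid's hook for transforming values
  // before they are written to the clipboard.
  processCellForClipboard: (params) => sanitizeClipboardValue(params.value),
};
```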

Is it possible to generate the space separated header row using data factory copy activity?

I am using Azure SQL as the source dataset and a delimited file as the sink dataset in the Copy activity.
I tried the Copy activity, but 'First row as header' gives comma-separated headers.
Is there a way to change the header output style?
Please note the spacing is unequal (h3...h4).
In this repro, I tried to give
1 space between 1st and 2nd column,
2 spaces between 2nd and 3rd column,
3 spaces between 3rd and 4th column.
Also, I tried to give the same column name to column2 and column3. The approach is as follows.
Data is copied from the Azure SQL database to the data lake in comma-delimited format as a staging file.
This staging file is taken as the source in a Data Flow activity.
In the source dataset, 'First row as header' is not checked.
Data preview of Source transformation:
Derived column transformation is added to change the column name of column2 and column3.
In this case, the 'date_col' value in Column_1 marks the header row. Thus, when Column_1 is 'date_col', replace the Column_2 and Column_3 data with the same column name.
column_2 = iif(Column_1=='date_col','ECIX',Column_2);
column_3 = iif(Column_1=='date_col','ECIX',Column_3);
A second Derived Column transformation is added to concat all the columns with spaces. The column name is given as 'concat', and the value for this column is:
concat(Column_1,' ',Column_2,' ',Column_3,' ',Column_4)
Select transformation is added and only concat column is selected here.
In the sink, a new delimited file is added as the sink dataset; here too, 'First row as header' is not checked.
Output file screenshot
After pipeline is run, the target file looks like this.
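Outside ADF, the two derived-column steps above (the iif replacements followed by the concat) amount to something like this JavaScript sketch, using the repro's column names:

```javascript
// Simulate the Data Flow logic: when Column_1 holds the header marker
// 'date_col', overwrite Column_2/Column_3 with the shared name 'ECIX',
// then concat all four columns. Single spaces mirror the expression above;
// the literal separators are where unequal spacing would go.
function toSpaceSeparated(rows) {
  return rows.map((r) => {
    const c2 = r.Column_1 === 'date_col' ? 'ECIX' : r.Column_2;
    const c3 = r.Column_1 === 'date_col' ? 'ECIX' : r.Column_3;
    return `${r.Column_1} ${c2} ${c3} ${r.Column_4}`;
  });
}
```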
Keeping the source as Azure SQL itself in the data flow, I created a single derived column 'OUTDC' and added all the columns from the source like this:
(h1)+' '+(h2)+' '+(h3)
Then fed the OUTDC to a delimited sink and kept the Headers option as single string like this:
['h1 h2 h3']

I have a problem getting data perfectly in ADF

Consider a CSV file with employee details and attendance marked with 0 and 1. For example, 1 indicates the employee is present and 0 indicates the employee is absent. My problem is that the working date is given only when the employee is present (1). Where the employee is absent (0), the working date should be filled in as the next working day, read from the previous row.
emp id  working  working day
123     1        11/14/2022
123     0        11/15/2022
123     1        11/14/2022
I have tried using a data flow in ADF, but I am not getting the expected result. Please provide a solution for me in Azure Data Factory.
To manipulate CSV data in Data Factory, we have to use the Data Flow activity.
I reproduced your scenario in my environment; please follow the steps below to resolve the issue:
I took one sample file as the source in the Data Flow activity, similar to the data you provided (assuming you don't have date values when the employee is absent).
Then I added a Window transformation partitioned over emp id and sorted by emp id, and created a window column 'working date' that updates the date based on the previous row's value with the expression:
lag(addDays({working day}, 1),1)
Window transformation data preview:
Next, I added a Derived Column transformation to get the working date of the employee whether they are present or absent: update the column 'working day' so that if 'working' is 0, the value is 'working date'; otherwise the value stays 'working day'.
iif(working==0,{working date},{working day})
Derived column transformation data preview:
Finally, use a Select transformation to delete unnecessary columns and store the data in the sink.
Select transformation data preview:
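Outside ADF, the window lag(addDays(...)) plus the derived-column iif amount to the following JavaScript sketch (assuming, as above, that absent rows arrive with no date):

```javascript
// Fill the working day for absent rows (working === 0) with the previous
// row's working day plus one day, mirroring lag(addDays({working day},1),1)
// followed by iif(working==0, {working date}, {working day}).
function fillWorkingDays(rows) {
  let prev = null; // previous row's original working day
  return rows.map((r) => {
    let lagged = null;
    if (prev) {
      const d = new Date(prev);
      d.setDate(d.getDate() + 1); // addDays({working day}, 1)
      lagged = `${d.getMonth() + 1}/${d.getDate()}/${d.getFullYear()}`;
    }
    const out = { ...r, workingDay: r.working === 0 ? lagged : r.workingDay };
    prev = r.workingDay;
    return out;
  });
}
```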

ADF map source columns startswith to sink columns in SQL table

I have an ADF data flow with many CSV files as a source and a SQL database as a sink. The data in the CSV files is similar, 170+ columns wide; however, not all of the files have the same columns. Additionally, some column names are different in each file, but each column name starts with the same corresponding 3 digits. Example: 203-student name, 644-student GPA.
Is it possible to map source columns using the first 3 characters?
Go back to the data flow designer and edit the data flow.
Click on the parameters tab
Create a new parameter and choose string array data type
For the default value, as per your requirement, enter ['203-student name','203-student grade','203-student-marks']
Add a Select transformation. The Select transformation will be used to map incoming columns to new column names for output.
We're going to change the first 3 column names to the new names defined in the parameter
To do this, add 3 rule-based mapping entries in the bottom pane
For the first column, the matching rule will be position==1 and the name will be $parameter1[1]
Follow the same pattern for column 2 and 3
Click on the Inspect and Data Preview tabs of the Select transformation to view the new column name.
Reference - https://learn.microsoft.com/en-us/azure/data-factory/tutorial-data-flow-dynamic-columns#parameterized-column-mapping
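For comparison, the prefix idea from the question (matching on the first three digits rather than the full column name) can be sketched outside ADF like this; the sink column names here are illustrative:

```javascript
// Map incoming source columns to canonical sink column names by their
// 3-digit prefix, so '203-student name' and '203-name' both land in the
// sink column registered under prefix '203'.
function mapByPrefix(sourceColumns, sinkByPrefix) {
  const mapping = {};
  for (const col of sourceColumns) {
    const prefix = col.slice(0, 3); // first 3 characters, e.g. '203'
    if (sinkByPrefix[prefix]) mapping[col] = sinkByPrefix[prefix];
  }
  return mapping;
}
```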

ADF Add Header to CSV Sink

Does anyone know how to add a header to a CSV sink? I have a data flow whose source is a database table. I used a Derived Column and concatenated the columns into one column, with the data split by commas (done in the source via a query). I then selected the concatenated column to export to CSV.
Data example:
Matt,Smith,10
Therefore I technically only have one column, however, I want to add a header for each section of the data.
Desired output:
FirstName,LastName,Age
Matt,Smith,10
You can add headers in the CSV file:
Select the Data Flow activity.
Select the Source and add a Select transformation.
Add the column names as shown in the screenshot below.
Finally, add the Sink and run the pipeline.
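What the Select-based fix accomplishes is equivalent to prepending a header line to the single concatenated column; as a plain JavaScript sketch:

```javascript
// Prepend a header row to already-concatenated CSV lines. The header
// names come from the desired output in the question.
function addCsvHeader(lines, header) {
  return [header, ...lines].join('\n');
}
```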