Azure Data Flow expression builder not able to add string

I get an error from an Azure Data Flow.
In the Data Flow expression builder, I have set the following on the sink (the value of $parameterdomain comes from a source column name):
File name option: Output to single file
File name: concat($parameterdomain,'.csv')
During debug I get the error below:
Failure type: User configuration issue
Details: Job failed due to reason: at Sink 'sinksource'(Line 17/Col 12): Column operands are not allowed in literal expressions
[Screenshot: source pipeline]

Make sure you have checked the Expression checkbox on the Parameters tab of the Data Flow activity.
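For reference, a minimal sketch of the two pieces involved; the parameter name comes from the question, everything else is an assumption:

    // Data flow Parameters tab: declare the parameter (illustrative syntax)
    parameterdomain as string

    // Sink settings -> File name option: Output to single file
    // File name expression; $parameterdomain must resolve to a literal string
    concat($parameterdomain, '.csv')

If the value passed in for $parameterdomain is itself derived from a column, the file name is no longer a literal, which appears to be what triggers "Column operands are not allowed in literal expressions".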

Related

Single data flow pipeline fails when different parameter values are passed

I am trying to schedule a single data flow that aggregates from the source to the destination environment. Our requirement is a single data flow pipeline that takes its parameters from the schedule trigger: for schedule trigger A I pass the source raw folder details and other necessary information, and for schedule trigger B I pass different parameter values. Is this possible? I am seeing failures. Please advise.
Trigger (A) source ---> Dataflow ----> stage folder ---> copy activity(dynamic) ---> sql(on premise)
Trigger (B) source ---> Dataflow ----> stage folder ---> copy activity(dynamic) ---> sql(on premise)
When you debug with, say, stage folder 1 as the parameter, are you setting the mapping? Since the sink is SQL, I am assuming you are. Now, in debug mode, pass the parameters for stage folder 2: unless the schema is the same in both cases, it will fail with the same error you mentioned.
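In principle the setup is supported: each schedule trigger can pass its own parameter values to the same pipeline. A minimal sketch of the relevant part of a trigger definition, with all names illustrative:

    "pipelines": [
        {
            "pipelineReference": { "referenceName": "AggregatePipeline", "type": "PipelineReference" },
            "parameters": { "sourceFolder": "raw/sourceA", "stageFolder": "stage/folder1" }
        }
    ]

Trigger B would reference the same pipeline but pass different values, e.g. "sourceFolder": "raw/sourceB". As noted above, this only works if the schema is compatible with the sink mapping in every case.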

Azure DevOps Pipeline using old connection string

I have an Azure DevOps pipeline which is failing to run because it seems to be using an old connection string.
The pipeline is for a C# project, where a FileTransform task updates an appsettings.json file with variables set on the pipeline.
The variables were recently updated to use a new connection string; however, printing the value with Console.WriteLine before using it and viewing the output on the pipeline shows an outdated value.
Many updates similar to this have been run in the past without issue.
I've also recently added a PowerShell task to echo the variable values loaded while the pipeline is running, and that does display the new value.
I've checked the order of precedence of variables and there shouldn't be any other variables being used.
There is no CacheTask being used in this pipeline.
Does anyone have any advice to remedy this? It seems that the pipeline itself is just ignoring the variables set on the pipeline.
There is a problem with the recent File Transform task version v1.208.0.
It shows a warning message and does not update the variable values correctly.
Warning example:
Resource file haven't been set, can't find loc string for key: JSONvariableSubstitution
Refer to this ticket: File transform task failing to transform files, emitting "Resource file haven't been set" warnings
The issue comes from the task itself rather than from the pipeline configuration; many users have hit the same problem.
Workaround:
You can switch to version 2 of the File Transform task to update the appsettings.json file.
To do so, remove the content of the XML Transformation rules field and set the JSON file path.
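For illustration, a minimal YAML sketch of the version 2 task configured for JSON substitution only; the folder and file patterns are assumptions to adjust to your project:

    - task: FileTransform@2
      inputs:
        folderPath: '$(System.DefaultWorkingDirectory)/**/*.zip'
        xmlTransformationRules: ''            # cleared, so no XML transform runs
        jsonTargetFiles: '**/appsettings.json'

With xmlTransformationRules left empty, the task only substitutes pipeline variables into matching keys of appsettings.json.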

Synapse suddenly started failing on a hash distribution column in a MERGE

I started getting an error on 2022-06-25 in my fact table flows. Before that there was no problem, and nothing had changed.
The Error is:
Operation on target Fact_XX failed: Operation on target Merge_XX failed: Execution fail against sql server. Sql error number: 100090. Error Message: Updating a distribution key column in a MERGE statement is not supported.
You got this error because updating a distribution key column through the MERGE command is not currently supported in Azure Synapse.
MERGE is currently in preview for Azure Synapse Analytics.
You can refer to the official documentation for more details:
https://learn.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true
It clearly states that the MERGE command in Azure Synapse Analytics, which is presently in preview, may under certain conditions leave the target table in an inconsistent state, with rows placed in the wrong distribution, causing later queries to return wrong results in some cases.
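To make the limitation concrete, here is a T-SQL sketch with illustrative table and column names; the fix is to leave the distribution key out of the UPDATE SET list:

    -- Assume: CREATE TABLE dbo.FactSales (...) WITH (DISTRIBUTION = HASH(CustomerKey));

    -- Fails with error 100090: the SET clause touches the distribution key
    MERGE dbo.FactSales AS t
    USING dbo.StageSales AS s
        ON t.SalesId = s.SalesId
    WHEN MATCHED THEN
        UPDATE SET t.CustomerKey = s.CustomerKey, t.Amount = s.Amount
    WHEN NOT MATCHED THEN
        INSERT (SalesId, CustomerKey, Amount)
        VALUES (s.SalesId, s.CustomerKey, s.Amount);

    -- Works: the distribution key is only inserted, never updated
    MERGE dbo.FactSales AS t
    USING dbo.StageSales AS s
        ON t.SalesId = s.SalesId
    WHEN MATCHED THEN
        UPDATE SET t.Amount = s.Amount
    WHEN NOT MATCHED THEN
        INSERT (SalesId, CustomerKey, Amount)
        VALUES (s.SalesId, s.CustomerKey, s.Amount);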

Azure Data Factory - source dataset fails with "path does not resolve to any file(s)" when sink to a different directory is in progress

We have an ADF pipeline with Copy activity to transfer data from Azure Table Storage to a JSON file in an Azure Blob Storage container. When the data transfer is in progress, other pipelines that use this dataset as a source fail with the following error "Job failed due to reason: Path does not resolve to any file(s)".
The dataset has a property that indicates the container directory. This property is populated by the trigger time of the pipeline copying the data, so it writes to a different directory in each run. The other failing pipelines use a directory corresponding to an earlier run of the pipeline copying the data and I have confirmed that the path does exist.
Does anyone know why this is happening and how to solve it?
Your expression in the directory and file text boxes inside the dataset is probably incorrect.
Check this link: Azure data flow not showing / in path to data source
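For comparison, a minimal sketch of a Blob dataset whose folder path is driven by a parameter; every name here is illustrative. If the pipeline that reads computes this parameter differently from the pipeline that writes (for example, from its own trigger time rather than the writer's), the source path will not resolve:

    {
        "name": "JsonBlobDataset",
        "properties": {
            "type": "Json",
            "linkedServiceName": { "referenceName": "AzureBlobStorage", "type": "LinkedServiceReference" },
            "parameters": { "runFolder": { "type": "string" } },
            "typeProperties": {
                "location": {
                    "type": "AzureBlobStorageLocation",
                    "container": "output",
                    "folderPath": { "value": "@dataset().runFolder", "type": "Expression" }
                }
            }
        }
    }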

Release Pipeline error when using Azure Dacpac Task

I'm new to using Azure release pipelines and have been fighting issues trying to deploy a database project to a new Azure SQL database. Currently the pipeline is giving me the following error...
TargetConnectionString argument cannot be used in conjunction with any other Target database arguments
I've tried deploying with and without the TargetConnectionString included in my publish profile. Any suggestions or something else to try? I'm out of ideas.
TargetConnectionString
Specifies a valid SQL Server/Azure connection string to the target database. If this parameter is specified it shall be used exclusively of all other target parameters. (short form /tcs)
So please remove all other TargetXXX arguments.
(If you don't have any other target arguments, can you show which arguments you pass inline and in the publish profile, without the sensitive data of course?)
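For reference, a sketch of a SqlPackage call that relies on the connection string alone; server, database, and credentials are placeholders. /tcs stands in for /TargetServerName, /TargetDatabaseName, /TargetUser, and /TargetPassword, so none of those may appear alongside it, whether inline or via the publish profile:

    sqlpackage /Action:Publish ^
        /SourceFile:"MyDatabase.dacpac" ^
        /TargetConnectionString:"Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=myuser;Password=<secret>"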