Synapse - Can't properly deploy SQL linked service - azure-devops

Hello, I'm having a problem when I try to deploy my Synapse workspace to another environment (e.g. Development to Test).
The problem is that the SQL linked service doesn't seem to deploy properly. The first screenshot is from the Development Synapse workspace and the second from Test. As you can see, the settings of the linked service are completely different.
Development
Test environment
I'm using the standard Synapse workspace deployment task in my DevOps pipeline:
- task: Synapse workspace deployment@1
  displayName: 'Synapse deployment task for workspace: syn$(name)$(environment)'
  inputs:
    TemplateFile: '$(System.DefaultWorkingDirectory)/$(cicd_synapse_workspace_origin)/TemplateForWorkspace.json'
    ParametersFile: '$(System.DefaultWorkingDirectory)/$(cicd_synapse_workspace_origin)/TemplateParametersForWorkspace.json'
    AzureSubscription: '${{ parameters.sercon }}'
    ResourceGroupName: 'rg-$(name)-$(environment)'
    TargetWorkspaceName: syn$(name)$(environment)
    OverrideArmParameters: '
      -ls_sq_mark_connectionString $(Connectionstring)
    '
Whereby I overwrite the linked service connection string with a variable from the DevOps library (it contains the following value: "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=databaseserver;Initial Catalog=database").
The thing I noticed when I looked into the JSON file (TemplateForWorkspace.json) is that the linked service parameter is defined as follows:
"ls_sq_mark_connectionString": {
"type": "secureString",
"metadata": "Secure string for 'connectionString' of 'ls_sq_mark'"
},
Maybe the problem is that it's suddenly a secureString? But I have no idea how to fix this issue.
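One thing that may be worth ruling out (a sketch, not a confirmed fix): since the parameter is a secureString and the connection string contains spaces, the override value has to reach ARM as a single token, so quoting it inside OverrideArmParameters is a common precaution:

# Illustrative variation on the task above: wrap the variable in quotes so the
# space-containing connection string is passed as one ARM parameter value.
OverrideArmParameters: '
  -ls_sq_mark_connectionString "$(Connectionstring)"
'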


Docker Compose task on Azure DevOps cannot start daemon

I'm unable to run the Docker Compose task on Azure DevOps, and every solution I've looked up online either makes no sense or does not work for my scenario.
The job output for the failure is:
This is a very simple process: artifacts are copied to a folder during build, and the docker-compose.yml and .dockerfile are added to this directory, which then needs to be run.
One article explained that if you add your docker-compose.yml to the same folder as the .dockerfile and the files the image will be hosting, it might cause the daemon to fall over and generate this generic error, so I've added a .dockerignore file, but the issue persists.
I'm using a Hosted Agent - Ubuntu-18.04.
My task looks like this:
steps:
- task: DockerCompose@0
  displayName: 'Run a Docker Compose command'
  inputs:
    azureSubscription: 'Test Dev Ops'
    azureContainerRegistry: '{"loginServer":"testdevops.azurecr.io", "id" : "/subscriptions/{subscription_key}/resourceGroups/Test.Devops/providers/Microsoft.ContainerRegistry/registries/testdevops"}'
    dockerComposeFile: '$(System.DefaultWorkingDirectory)/$(Release.PrimaryArtifactSourceAlias)/test.ng.$(Build.BuildNumber)/dist/testweb/docker-compose-build.yml'
    dockerComposeCommand: build
    arguments: '--build-arg azure_pat=$(System.AccessToken) --build-arg azure_username=Azure'
The idea here is that this container is composed and delivered straight to Azure's Container Registry.
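For what it's worth, the delivery to ACR would typically be a second DockerCompose task after the build (a sketch only; the push step below mirrors the build task above and is not from the original post):

# Hypothetical follow-up step: push the images built above to the same ACR.
- task: DockerCompose@0
  displayName: 'Push images to ACR'
  inputs:
    azureSubscription: 'Test Dev Ops'
    azureContainerRegistry: '{"loginServer":"testdevops.azurecr.io", "id" : "/subscriptions/{subscription_key}/resourceGroups/Test.Devops/providers/Microsoft.ContainerRegistry/registries/testdevops"}'
    dockerComposeFile: '$(System.DefaultWorkingDirectory)/$(Release.PrimaryArtifactSourceAlias)/test.ng.$(Build.BuildNumber)/dist/testweb/docker-compose-build.yml'
    dockerComposeCommand: push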
I have ensured that the user running this process has been granted permissions in that ACR, and I have added the user to the Administrative group in Azure DevOps.
A lot of responses talk about adding the user to the Docker group, but this is a Hosted Agent, not a private agent, so there is no such option.
I have even tried installing the Docker CLI before this task, but nothing's working.
Am I being daft to think that I can compose in Azure DevOps?
Edit
The contents of my artifacts folder look something like this:
This error message is extremely misleading. If anyone from Microsoft is looking at this question, please consider making the error more specific, if possible.
It turned out I had missed a semi-colon in a build task that replaced tokens before the build artifacts were pushed from the build output; because of that, the yaml file still had a #{..} token inside it, which caused docker-compose to fail.
It had nothing to do with permissions, nor with a .dockerignore file - very misleading.
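For illustration (the token name below is hypothetical, not taken from the original post), an unreplaced token leaves the compose file looking like this, and the resulting invalid image tag surfaces as a generic daemon error:

# Hypothetical compose fragment: the #{BuildTag}# placeholder should have been
# replaced during the build; left in place, it produces an invalid image tag.
services:
  web:
    image: testdevops.azurecr.io/testweb:#{BuildTag}#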

Azure DevOps pipeline for SSDT project cannot find the variables when it does not need to create objects

I am using Azure DevOps to deploy my SSDT project. I am trying to update my Azure SQL Data Warehouse, where I have a DATABASE SCOPED CREDENTIAL and an EXTERNAL DATA SOURCE.
I found this article and followed those steps: https://techcommunity.microsoft.com/t5/azure-synapse-analytics/how-to-securely-manage-load-credentials-with-ssdt-azure-key/bc-p/1397979
In my release pipeline I have this setting to deploy my SSDT project. As you can see, I am using values from my Azure Key Vault.
- task: AzureKeyVault@1
  inputs:
    azureSubscription: '<My Azure Subscription>'
    KeyVaultName: '<My Key Vault>'
    SecretsFilter: '*'
...
- task: SqlAzureDataWarehouseDacpacDeployment@1
  inputs:
    azureSubscription: '<My Azure Subscription>'
    AuthenticationType: 'server'
    ServerName: 'ABC.database.windows.net'
    DataWarehouse: '$(SynapseName)'
    SqlUsername: '$(SynapseSQLUsername)'
    SqlPassword: '$(SynapseSQLPassword)'
    deployType: 'DacpacTask'
    DeploymentAction: 'Publish'
    DacpacFile: 'SQL_ASynapse\bin\Release\SQL_ASynapse.dacpac'
    AdditionalArguments: '/p:IgnoreAnsiNulls=True /p:IgnoreComments=True /v:DatabaseScopeCredentialSecret=$(DatabaseScopeCredentialSecret) /v:DatabaseScopeCredentialIdentity=$(DatabaseScopeCredentialIdentity) /v:ExternalDataSourceMarineTrafficLocation=$(ExternalDataSourceMarineTrafficLocation)'
    IpDetectionMethod: 'AutoDetect'
I am passing three values for the three variables used in the two scripts below:
$(DatabaseScopeCredentialSecret)
$(DatabaseScopeCredentialIdentity)
$(ExternalDataSourceMarineTrafficLocation)
I have the below code in two separate SQL files.
ADLSCredential.sql:
CREATE MASTER KEY;
GO
CREATE DATABASE SCOPED CREDENTIAL ADLSCredential
WITH
    IDENTITY = '$(DatabaseScopeCredentialIdentity)',
    SECRET = '$(DatabaseScopeCredentialSecret)'
;
AzureDataLakeStoreMarineTraffic.sql:
CREATE EXTERNAL DATA SOURCE AzureDataLakeStoreMarineTraffic
WITH (
    TYPE = HADOOP,
    LOCATION = '$(ExternalDataSourceMarineTrafficLocation)',
    CREDENTIAL = ADLSCredential
);
When those objects don't exist on my DW (Synapse), my pipeline is able to take the values from Azure Key Vault, assign them to my parameters, and create both objects, but the next time it runs I get the below error.
##[error]*** Could not deploy package.
##[error]Warning SQL72013: The following SqlCmd variables are not defined in the target scripts: DatabaseScopeCredentialSecret DatabaseScopeCredentialIdentity ExternalDataSourceMarineTrafficLocation.
Error SQL72014: .Net SqlClient
It seems that when those scripts don't need to run, SQLCMD has a problem finding those variables, because they were never declared.
Is there any way to have a public variable somewhere, or to tell SQLCMD not to pass the values a second time?
I found the problem that I had.
I forgot to declare the variables in the SSDT project. After creating them, everything works well.
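In other words, the $(...) references in the .sql files are SQLCMD variables, and the /v: arguments in AdditionalArguments only supply values for variables the SSDT project itself declares (under the project's SqlCmd Variables properties). As a plain-SQLCMD illustration of the same idea (the default value below is a placeholder, not a real secret):

-- Illustrative only: in SQLCMD mode a variable is declared with :setvar;
-- in an SSDT project the declaration belongs in the SqlCmd Variables list.
:setvar DatabaseScopeCredentialIdentity "placeholder-identity"
SELECT '$(DatabaseScopeCredentialIdentity)' AS IdentityValue;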

Forcing data seeding during release stage of the pipeline

I am having trouble seeding data into the database using DevOps. I have a YAML pipeline with the following build step (I've stripped out irrelevant steps):
- task: CmdLine@2
  inputs:
    script: |
      dotnet tool install --global dotnet-ef --version 3.0
      dotnet tool restore
      dotnet ef migrations script -p $(Build.SourcesDirectory)/$(My.SQLProject)/$(My.SQLProject).csproj -o $(Build.ArtifactStagingDirectory)/migrations/script.sql -i
- task: PublishBuildArtifacts@1
This creates a migration SQL script just fine and pops it into drop.
During release I create the database using an ARM deployment task, and I then run the SQL script:
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-sub'
    ServerName: 'my-server.database.windows.net'
    DatabaseName: 'my-db'
    SqlUsername: 'my-sqluser'
    SqlPassword: 'my-password'
    deployType: SqlTask
    SqlFile: '$(Pipeline.Workspace)/drop/migrations/script.sql'
This works fine - the schema in the DB is created.
I then create the App Service with connection string and the App Service connects to the DB just fine.
The bit I can't seem to get to work is the data seeding. I've googled lots, and there are plenty of articles that talk about creating migrations, creating SQL scripts, and then running the script in DevOps. And there are plenty that talk about seeding data outside of DevOps, but the bit I'm struggling with is how to get it to seed the data in DevOps. One odd thing I have noticed is that if I re-run the build / deploy YAML, it then seeds the data without me having to tell it. So, I guess there are two questions:
Is data seeding something that MUST (or SHOULD) be done in the App Service code during App Service startup, or is it something that should be instigated during the release pipeline in DevOps? (I'm not the App Service developer. The dev says he thinks it should be happening at app startup. It doesn't, so my thinking is that if he's missing something in his code, perhaps I can say "don't worry, I can kick off the data seeding myself in DevOps".)
If it should be done in DevOps, how should it be done? I would have thought that "dotnet ef database update -p " ought to do it, but that doesn't seem to work in the release pipeline.
Many thanks
After some experimentation, I have an answer. It's not pretty, but it works. I don't know why the dev's seeding isn't working, but I can force it in DevOps like this:
Deploy the App Service (AzureWebApp@1)
Add the connection strings (AzureAppServiceSettings@1)
Re-deploy the App Service (AzureWebApp@1)
That seems to "force" the seeding. Job done.
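A rough sketch of those three steps (the subscription, app name, package path, and connection-string values below are placeholders, not from the original post):

- task: AzureWebApp@1                 # 1. initial deploy
  inputs:
    azureSubscription: 'my-sub'
    appName: 'my-app'
    package: '$(Pipeline.Workspace)/drop/MyApp.zip'
- task: AzureAppServiceSettings@1     # 2. add the connection string
  inputs:
    azureSubscription: 'my-sub'
    appName: 'my-app'
    connectionStrings: |
      [
        {
          "name": "DefaultConnection",
          "value": "$(MyDbConnectionString)",
          "type": "SQLAzure",
          "slotSetting": false
        }
      ]
- task: AzureWebApp@1                 # 3. re-deploy so startup runs with the connection string in place, triggering the seeding
  inputs:
    azureSubscription: 'my-sub'
    appName: 'my-app'
    package: '$(Pipeline.Workspace)/drop/MyApp.zip'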

New Azure DevOps pipeline using ASP.NET yaml template failing

I have a GitHub repository with a .NET Core 3.0 website solution in it. In Azure DevOps, I went through the wizard to create a new pipeline linked to that repository using the ASP.NET Core template on the Configure step of the wizard. This is what my YAML looks like:
# ASP.NET Core
# Build and test ASP.NET Core projects targeting .NET Core.
# Add steps that run tests, create a NuGet package, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- develop

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'

steps:
- script: dotnet build --configuration $(buildConfiguration)
  displayName: 'dotnet build $(buildConfiguration)'
When I try to manually run the pipeline to test it, this is the output I get every time:
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[error]Provisioning request delayed or failed to send 5 time(s). This is over the limit of 3 time(s).
Pool: Azure Pipelines
Image: ubuntu-latest
Started: Yesterday at 10:04 PM
Duration: 10h 54m 5s
Job preparation parameters
ContinueOnError: False
TimeoutInMinutes: 60
CancelTimeoutInMinutes: 5
Expand:
MaxConcurrency: 0
########## System Pipeline Decorator(s) ##########
Begin evaluating template 'system-pre-steps.yml'
Evaluating: eq('true', variables['system.debugContext'])
Expanded: eq('true', Null)
Result: False
Evaluating: resources['repositories']['self']
Expanded: Object
Result: True
Evaluating: not(containsValue(job['steps']['*']['task']['id'], '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Expanded: not(containsValue(Object, '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Result: True
Evaluating: resources['repositories']['self']['checkoutOptions']
Result: Object
Finished evaluating template 'system-pre-steps.yml'
********************************************************************************
Template and static variable resolution complete. Final runtime YAML document:
steps:
- task: 6d15af64-176c-496d-b583-fd2ae21d4df4@1
  inputs:
    repository: self
I thought maybe ubuntu-latest was no longer a valid vmImage, so I tried changing it to ubuntu-18.04 and got the same result. The Microsoft-hosted agents documentation says either should be valid.
Do I have something wrong with my yaml file? I have set up pipelines before with the old no-YAML interface with no issues, so I am a little confused.
I think nowadays it should look like this:
trigger:
- develop

jobs:
- job: buildjob
  variables:
    buildConfiguration: 'Release'
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: dotnet build --configuration $(buildConfiguration)
    displayName: 'dotnet build $(buildConfiguration)'
Although this says you can omit jobs if you only have a single job, I don't see anything wrong with your yaml other than the fact that you use steps directly (which, again, should be fine).
It looks like there is nothing wrong with your yaml file or its format.
Since you are using a GitHub repository with a .NET Core 3.0 website, please pay attention when you create the pipeline: make sure you have selected GitHub, not Azure Repos Git.
Also, as you have mentioned that you have set up pipelines before with the old no-YAML interface with no issues, you could set up your pipeline with the classic editor first.
There is also a View YAML option.
You could follow that format and content to create a yaml template, which may do the trick.
I was looking through my account and noticed that my Agent Pool settings looked a little suspect on the project. It showed I had 11 available agents online, even though I was on the free plan with a private repository, so there should only have been one.
I ended up deleting my Azure DevOps organization and creating a new one. Now the YAML configuration I initially posted works fine.

Azure Pipeline with SonarQube task over https

I added a SonarQube task to my Azure build pipeline; in order to log in to my SonarQube server I need to run a command that uses an SSL trust store.
My pipeline looks like this:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: abc-sonarqube
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'abc'
    cliProjectName: 'abc'
    cliSources: src
    extraProperties: |
      sonar.host.url=https://sonarqube.build.abcdef.com
      sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit
I am not sure if this property, sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit, is correct.
I got the error: API GET '/api/server/version' failed, error was: {"code":"UNABLE_TO_VERIFY_LEAF_SIGNATURE"}
PS: my project is an Angular project.
Any solutions?
This issue should be related to how the configure task works. So even if we add the certificate to the Java truststore, the task that sets the configuration uses a different runtime (not Java, at least) to communicate with the server; that's why you still get that certificate error.
To resolve this issue, you could try to set a global variable, NODE_EXTRA_CA_CERTS, and point it at a copy of the root cert stored locally in a directory, as sketched below. See this article.
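As a rough sketch (the certificate path below is a placeholder, not from the original answer), the variable can be set at pipeline level so the scanner's embedded Node.js runtime trusts the internal CA:

# Hypothetical example: point NODE_EXTRA_CA_CERTS at a PEM copy of the root CA
# available in the repository (or downloaded by an earlier step).
variables:
  NODE_EXTRA_CA_CERTS: '$(Build.SourcesDirectory)/certs/internal-root-ca.pem'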
Check the related ticket for some more details.
Hope this helps.