I have ADF pipelines that call an Azure Databricks notebook. I want to call an ADF pipeline in normal mode (high performance) and then in debug mode.
When in debug mode, I want to display some DataFrames in Databricks. But when run normally, the DataFrames should not be displayed.
To achieve this I am thinking of sending a parameter from ADF (debug=true) and letting the display happen inside an 'if' condition in the Databricks notebook. Is this the recommended approach, or are there built-in functionalities in Databricks or ADF?
If I understand the ask, you are trying to capture whether the pipeline was initiated in debug mode or by a scheduled trigger. I think you can use the expression
@pipeline().TriggerName
At least when I tested in debug mode it shows the value as "Sandbox", while for a scheduled run it shows the name of the trigger that fired the pipeline.
You can pass this as a parameter to the notebook and put an IF statement around your display logic.
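For reference, here is a minimal sketch of the notebook side, assuming the trigger name is passed in through a base parameter named trigger_name (the parameter name is just an assumption; use whatever you configure on the Databricks activity):

# Read the base parameter passed from the ADF Databricks Notebook activity.
# "trigger_name" is an assumed parameter name.
trigger_name = dbutils.widgets.get("trigger_name")

# ADF debug runs report the trigger name as "Sandbox";
# scheduled runs report the name of the real trigger.
is_debug = trigger_name == "Sandbox"

df = spark.table("some_table")  # example DataFrame; replace with your own

if is_debug:
    display(df)  # only show the DataFrame when the pipeline was run in debug mode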
HTH
I am currently integrating JMeter scripts into Azure DevOps using the BlazeMeter plugin. I have one issue: I am trying different combinations to pass Azure predefined variables to JMeter. Specifically, I want a JMeter User Defined Variable to pick up the value of an Azure predefined variable, but I am unable to do so, even after following the predefined variables documentation. The reason for doing this is so that, in the future, a non-technical person can run the script by simply changing the environment at the predefined-variable level in Azure, instead of editing the JMeter User Defined Variables or otherwise modifying the .jmx script. One of the combinations I tried is shared in the screenshot. If anyone has any idea, please let me know. One more thing: because I have checked Release.EnvironmentUri in the release, I am able to send those values at run time.
Let me elaborate more in this image. I have also shared the plugins I am using.
I think you need to set an environment variable like below:
And in JMeter's User Defined Variables, read the variable value using the __groovy() function:
${__groovy(System.getenv('Release.EnvironmentUri'),)}
More information: Testing via Azure DevOps Pipeline
I have a Databricks activity in ADF and I pass the output with the code below:
dbutils.notebook.exit(message_json)
Now, I want to use this output in the next Databricks activity.
From my research, I think I should add the previous output to the base parameters of the second activity. Am I right?
And another question: how can I use this output inside the Databricks notebook?
Edited: The output is JSON, as in the screenshot below.
As per the documentation, you can consume the output of the Databricks Notebook activity in Data Factory by using an expression such as @{activity('databricks notebook activity name').output.runOutput}.
If you are passing a JSON object, you can retrieve values by appending property names.
Example: @{activity('databricks notebook activity name').output.runOutput.PropertyName}.
I reproduced the issue and it's working fine.
Below is the sample notebook.
import json
dates = ['2017-12-11', '2017-12-10', '2017-12-09', '2017-12-08', '2017-12-07']
return_json = json.dumps(dates)
dbutils.notebook.exit(return_json)
This is how the Notebook2 activity settings look:
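Inside Notebook2, the value arrives as a base parameter, so it can be read back with a widget and parsed. A minimal sketch, assuming the base parameter is named "input" (the name is an assumption; use whatever you configured on the activity):

import json

# Read the base parameter that the pipeline populated with the
# @{activity('...').output.runOutput} expression shown above.
raw_value = dbutils.widgets.get("input")

# The first notebook exited with a JSON string, so parse it back into Python objects.
dates = json.loads(raw_value)
print(dates)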
Pipeline ran successfully.
I currently have an Azure DevOps install that I am configuring for automated build and testing. I would like to enable a Continuous Integration trigger for the build process; however, our check-in standards require different parts of our code to be checked in separately from each other.
For example: we are using nettiers auto-generated code, so whenever a ticket requires a database change, the nettiers code base gets updated. Because that is auto-generated code, it gets checked in separately from the manual modifications, with a comment indicating that it is an auto-generated check-in.
A build will fail if it does not have both the nettiers and the manual modifications checked in. However, with Continuous Integration turned on, the first check-in triggers a build that will be missing the second half of the changes, which are checked in a couple of minutes later.
The ideal way to fix this would be to implement a 5-minute delay between when the CI build is first triggered and when it actually begins its work. Even better would be if each successive check-in cancelled the pending build and started a new timer with its own build, to account for any subsequent check-ins.
An alternative way to solve the issue might be to have a gate on a work item query. However, I have been unsuccessful in figuring out how to implement either of these ideas, or in coming up with other options. Gates based on queries only seem to be available in Release pipelines, not Builds.
Has anyone out there solved a similar problem, or have thoughts on how to solve or work around this issue?
I am afraid there is no out-of-the-box setting/method for this specific continuous integration scenario.
As a workaround, we could have nettiers check the generated code in to a specific folder, like \NettiersGenerated.
Then we could exclude that folder with the Path filters under Enable continuous integration:
In this case, the generated code will not trigger the build pipeline.
Update:
It would require that the nettiers code always gets checked in first
(which would be hard to enforce)
Yes, I agree with you. If the build will fail when it does not have both the nettiers and the manual modifications checked in, my first workaround is indeed not reasonable enough.
As another workaround, we could use an Azure DevOps counter and check its remainder in a PowerShell script, letting the build continue only when the number is odd and cancelling it otherwise, like:
Counter expression:
variables:
  internalBuildNumber: 1
  semanticBuildNumber: $[counter(variables['internalBuildNumber'], 0)]
PowerShell script:
# $(semanticBuildNumber) is expanded by the pipeline before the script runs
$value = $(semanticBuildNumber)
switch ($value)
{
    # odd counter value: this is the second check-in, let the build continue
    {($_ % 2) -ne 0} { "Go on build pipeline" }
    # even counter value: this is the first check-in, cancel the build
    {($_ % 2) -eq 0}
    {
        Write-Host "##vso[task.setvariable variable=agent.jobstatus;]canceled"
        Write-Host "##vso[task.complete result=Canceled;]DONE"
    }
}
In this case, the pipeline will only actually build when it is triggered the second time (the counter starts at 0, which is even, so the first triggered build is cancelled).
Hope this helps.
I'm using serverless-stack-output to save my serverless output to a file with some custom values that I setup. Works well, but serverless has some other default outputs such as these:
FunctionQualifiedArn (one for each function)
ServiceEndpoint
ServerlessDeploymentBucketName
I don't want these to show up in my file. How can I stop serverless/CloudFormation from outputting them?
This is not possible at this stage.
I've dug through the code and there's no switch to suppress those outputs.
Unfortunate, as I have the exact same requirement.
I have spent a couple of hours searching for a solution to disable my Azure Service Bus Topics using PowerShell.
The background for this is we want to force a manual failover to our other region.
Obviously I could click in the Portal:
but I want to have a script to do this.
Here is my current attempt:
Any help would be great.
Assuming you're sure your $topic contains the full topic description, modify its Status property and then pass it back using the UpdateTopic method. I'm afraid I can't test this at present.
$topic.Status = "Disabled"
$topicdesc = $NamespaceManager.UpdateTopic($topic)
I don't think you'll need to set the entity type for the Status, nor do you require semi-colons after each line of code in your loop.
References
PowerShell Service Bus creation sample script (which this appears to be based on): https://blogs.msdn.microsoft.com/paolos/2014/12/02/how-to-create-service-bus-queues-topics-and-subscriptions-using-a-powershell-script/
UpdateTopic method: https://msdn.microsoft.com/en-us/library/azure/microsoft.servicebus.namespacemanager.updatetopic.aspx
Additional note: please don't screenshot the code - paste it in. I'd rather copy-and-paste than type things out.