LUIS Azure DevOps pipeline, how to create any samples?

There is a requirement to create an Azure DevOps pipeline that can version a LUIS application, add files to it, and train, test, and publish the application to staging and production.
How can this be done through an Azure DevOps pipeline? Any documentation or steps would be helpful.
Thanks
A.Prabhuram

I have done something similar through Azure CLI tasks (version 2.*, PowerShell script type), but it is not straightforward. I haven't done all of the steps you mention above, but hopefully this will give you what you need to build on.
As a baseline, the functions you need are outlined in the LUIS Programmatic API. You will need the LUIS authoring key and app ID for most requests, which you can get via:
# Fetch the authoring key via the Azure CLI, then look up the app ID
$LUISKEY= & az cognitiveservices account keys list -g "resourceGroupName" --name "LUISauthoringKeyName" --query key1 -o tsv
$header = @{"Ocp-Apim-Subscription-Key"="$LUISKEY"}
$res = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/?take=1" -Method 'Get' -Headers $header
$appid = $res.id
For brevity, I'm not repeating this code. But if you have separate tasks and/or agent jobs in your pipeline (as opposed to doing this as one script, which I do not recommend), you'll need to repeat these statements for each task. Do take note of the region and modify as needed for your authoring resource.
Obviously, to update your LUIS version, you need to have the model definition. I don't do this often, since we are set up to use the same LUIS app in QA and PROD; I just add a new version file to the project repo when I need to run it through DevOps, then add the repo as an artifact for the release pipeline. You should also be able to use the Export Application Version API to get it programmatically, though I haven't tried that myself (there's a rough sketch at the end of this answer). Here is what I did to add the new version:
# Import the model JSON from the repo artifact as a new version of the app
$body = Get-Content '$(System.DefaultWorkingDirectory)/_AveryCreek_OEM_CSC_Bot/models/luis/AveryCreek OEM_CSC Team.json' | Out-String
Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/versions/import" -Method 'Post' -Body $body -Headers $header
Note that this import is not additive and will completely replace your previous version (though you can revert). In other words, if you have changes in the previous version that are NOT incorporated into the version you are importing, they will be lost. This is one of the main reasons we do not use separate LUIS apps per environment (sidebar: you can use separate prediction resources so you don't consume production capacity while testing, but all of the endpoint utterances still go through to the one application).
Once the version is imported, you need to train and publish it. I personally don't have any testing built in, but I'm sure you could build some calls via the LUIS Prediction API and check for expected results. To train, you first need to grab the version, then call the training endpoint.
# Grab the newest version, then kick off training for it
$res = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/versions" -Method 'Get' -Headers $header
$version = $res[0].version
Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/versions/$version/train" -Method 'Post' -Headers $header
The trickiest part comes next. You can't publish the app until it is done training. To mitigate the risk of trying to publish before it is ready, I created a separate agent job in my "Train and Publish" stage just to delay it.
This is typically enough delay, but I also check the training status and throw an error if it is not ready. Here is the snippet to get and check the status and then publish:
$status = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/versions/$version/train" -Method 'Get' -Headers $header
if ($status.details[0].status -ne "Success" -and $status.details[0].status -ne "UpToDate") { throw }
# The publish endpoint expects a small JSON body naming the version and target slot
$publishBody = @{ "versionId" = "$version"; "isStaging" = $false } | ConvertTo-Json
Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/publish" -Method 'Post' -Body $publishBody -Headers $header
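As an aside, if a fixed delay feels fragile, you can poll instead. This is just a sketch under the same $appid, $header, and $version assumptions as above, with an illustrative retry count:
# Sketch: poll training status (up to ~5 minutes) instead of relying on a fixed delay
$trainUri = "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/versions/$version/train"
for ($i = 0; $i -lt 30; $i++) {
    $status = Invoke-RestMethod -Uri $trainUri -Method 'Get' -Headers $header
    # Each model reports its own status; keep waiting while any are queued or in progress
    if (($status.details.status -notcontains "InProgress") -and ($status.details.status -notcontains "Queued")) { break }
    Start-Sleep -Seconds 10
}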
And that should do it! As I mentioned, take care to define things like the LUIS key and app ID in each task, as I have not repeated all of those values here. You can also add tasks to programmatically export the version (make sure you get the right key for your source app) and test the model as desired; rough sketches of both follow.
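For completeness, here are hedged sketches of those last two pieces. Both are untested here: the export uses the Export Application Version API with the same $appid, $header, and $version as above, while the prediction check assumes a V3 prediction endpoint, a prediction key in $PREDKEY, and an illustrative intent name.
# Sketch: export an existing version's model JSON (Export Application Version API)
$model = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/$appid/versions/$version/export" -Method 'Get' -Headers $header
$model | ConvertTo-Json -Depth 99 | Set-Content "exported-model.json"
# Sketch: a minimal prediction smoke test against the published production slot
$predHeader = @{ "Ocp-Apim-Subscription-Key" = "$PREDKEY" }   # key from your prediction resource, not the authoring key
$pred = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/$appid/slots/production/predict?query=hello" -Method 'Get' -Headers $predHeader
if ($pred.prediction.topIntent -ne "Greeting") { throw "Unexpected top intent: $($pred.prediction.topIntent)" }   # "Greeting" is illustrative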

Related

How to create resources in Azure Portal using Devops CI/CD pipeline and ARM Template

I am working on creating a QnA Maker-based chatbot, and I have prepared an ARM template for creating and deploying the resources.
A PowerShell script is used to create the knowledge base once all the resources are created.
This process works just fine when I execute the PowerShell script from a local PowerShell session.
Now I want to set up CI/CD for this deployment process so that it can be automated.
Creating resources and deploying to Azure through a pipeline is quite possible with the Azure ARM template deployment task, but I am not sure how to execute the PowerShell script that creates the knowledge base once the resources are created, against the newly created QnA Maker service.
Any help greatly appreciated.
You can refer to Create a CI/CD pipeline for .NET with the Azure DevOps Project for a step-by-step lab on creating the pipelines.
If you need help with ARM templates (if you are stuck somewhere), you can check out A quick guide on writing your Azure Resource Manager (ARM) Templates to get started quickly.
To clarify: your issue is creating and training the QnA Maker KB after the actual resources have been created by ARM? There are probably multiple ways to do this, but here is what I have done. First of all, I am using the Azure CLI task version 2.*, which simplifies some of the initial steps; you could probably use regular PowerShell for some of the subsequent steps. I have set everything up as inline scripts with the PowerShell script type. I do each of these segments as a separate task, but you could probably do it all as one. I like being able to see which step fails if something happens, though.
First, you need to get the key for your Cognitive Services resource:
# Fetch the QnA Maker key and expose it to later tasks as a pipeline variable
$QNAKEY= & az cognitiveservices account keys list -g "YOUR_RESOURCE_GROUP" --name "YOUR_QNA_RESOURCE_NAME" --query key1 -o tsv
Write-Output("##vso[task.setvariable variable=QNA_KEY;]$QNAKEY")
Next, you need to create the KB. I have mine seeded with information via a JSON file in my project repository (a minimal example is sketched after this snippet). You can do this from wherever, or just create a blank KB (I think). Note that I'm checking here whether the KB already exists; KB names do not have to be unique, so if you skip this check you will end up creating a lot of duplicate KBs.
$header = @{
    "Content-Type"="application/json"
    "Ocp-Apim-Subscription-Key"="$(QNA_KEY)"
}
$res = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/knowledgebases" -Method 'Get' -Headers $header
$kb = $res.knowledgebases | Where-Object name -eq "YOUR_KB_NAME"
Write-Host $res.knowledgebases
Write-Host $kb
if (!$kb) {
    Write-Host "KB does not exist, so creating new KB"
    $body = Get-Content '$(System.DefaultWorkingDirectory)/PATH_TO_MY_MODEL.json' | Out-String
    Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/knowledgebases/create" -Method 'Post' -Body $body -Headers $header
}
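For reference, the seed file just needs to match the body shape of the Create Knowledgebase API. A minimal illustrative example (the name, questions, and answer are placeholders):
{
    "name": "YOUR_KB_NAME",
    "qnaList": [
        {
            "id": 0,
            "answer": "Hello! How can I help you?",
            "questions": [ "hi", "hello" ]
        }
    ],
    "urls": [],
    "files": []
}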
Finally, you will likely want to publish your KB. With LUIS, I think you need to train first (via a separate CLI task), but with QnA Maker you should be able to publish directly. I do this as a dependent stage to ensure the KB is created before I try to publish it.
$QNAKEY= & az cognitiveservices account keys list -g "YOUR_RESOURCE_GROUP" --name "YOUR_QNA_RESOURCE_NAME" --query key1 -o tsv
$header = @{
    "Content-Type"="application/json"
    "Ocp-Apim-Subscription-Key"="$QNAKEY"
}
$res = Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/knowledgebases" -Method 'Get' -Headers $header
$kb = $res.knowledgebases | Where-Object name -eq "YOUR_KB_NAME"
$qnaId = $kb.id
# An empty POST to the KB id publishes the current (test) version to production
Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/knowledgebases/$qnaId" -Method 'Post' -Headers $header
And that's it! At this point your QnA Maker knowledge base should be created, published, and ready to go.

Azure CLI: clone pipeline

Looking at the az pipelines documentation, it seems it's not possible to clone a pipeline using the CLI.
I've looked at getting the YAML (az pipelines show --name=x > x_orig.yaml) and then trying to modify it and create a pipeline from the modified YAML, but that feels like a lot of work that could break after the next update.
Is there a way to clone a pipeline without going through the Web UI?
Currently, there is indeed no Azure CLI command that can clone a pipeline, or export/import one to create a new pipeline.
I also searched and tried the Azure DevOps REST API for pipelines, but did not find a suitable API.
Ideally, az pipelines create would provide an input parameter that lets users specify an existing pipeline as a starting point for the new one.
If your projects really need this feature, I recommend reporting a feature request on the Azure/azure-cli repository asking for a parameter like the one mentioned above. That lets you interact directly with the appropriate engineering team and makes it easier for them to collect and categorize your suggestion.
As a workaround, we can clone a build definition via a PowerShell script that calls the REST API.
Note: we need to change the original build definition name.
REST API
Get build definition:
GET https://dev.azure.com/{organization}/{project}/_apis/build/definitions/{definitionId}?api-version=6.0
Create build definition
POST https://dev.azure.com/{organization}/{project}/_apis/build/definitions?api-version=6.0
PowerShell script
$connectionToken="{pat}"
$base64AuthInfo= [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
# Get the existing definition (386 is the id of the definition to clone)
$BuildDefinitionInfoURL = "https://dev.azure.com/{org name}/{project name}/_apis/build/definitions/386?api-version=6.0"
$BuildDefinitionInfo = Invoke-RestMethod -Uri $BuildDefinitionInfoURL -Headers @{authorization = "Basic $base64AuthInfo"} -Method Get
Write-Host $BuildDefinitionInfo.name
# Rename before re-posting, since the clone needs a different name
$BuildDefinitionInfo.name = $BuildDefinitionInfo.name + " clone"
Write-Host $BuildDefinitionInfo.name
$body = $BuildDefinitionInfo | ConvertTo-Json -Depth 99
$createBuildDefinitionURL = "https://dev.azure.com/{org name}/{project name}/_apis/build/definitions?api-version=6.0"
$response = Invoke-RestMethod -Uri $createBuildDefinitionURL -ContentType "application/json" -Body $body -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method POST
Write-Host $response.id

Using semaphore-like conditions for pipeline job queueing

We use floating licenses for our more expensive compilers/tools, so that we can do local development as well as our production builds. The license manager (FlexLM) has an API we can query, so we can check license availability. However, I cannot find a mechanism by which I can make my pipeline queue based on the state of an auxiliary variable or the return value of a script or something like that.
This means that I can launch the build on any machine on which the compiler is installed, but it will then fail if the license is not available and I will have to relaunch the pipeline. If I did that automatically, I would effectively just block that machine until the license becomes available.
Is there anything I have missed that could achieve a "queue until a license becomes available" kind of behavior?
Thank you,
Manuel
You can add a PowerShell task as the first task in the pipeline definition and define a new variable in the Variables tab (with a default value such as true). In the script, check the license status: if a license is available, set the variable to true; if not, set it to false. Then add the condition eq(variables['{variable name}'], '{variable value}') to the second task, as in the sketch below.
After configuring this, the pipeline will run through successfully whenever your license is available.
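A minimal sketch of that first PowerShell task, assuming an illustrative variable name LicenseAvailable and a hypothetical check-license.ps1 helper that wraps your FlexLM query (e.g. lmutil lmstat) and prints true or false:
# Hypothetical helper script: replace with your actual FlexLM availability check
$available = & "$(System.DefaultWorkingDirectory)/scripts/check-license.ps1"
# Expose the result as a pipeline variable for later tasks
Write-Output "##vso[task.setvariable variable=LicenseAvailable]$available"
The second task would then carry the condition eq(variables['LicenseAvailable'], 'true').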
Alternatively, we could check the license first and then call the script below to queue the build pipeline.
$token = "$(pat)"
$url = "https://dev.azure.com/{Org name}/{project name}/_apis/build/builds?api-version=6.1-preview.6"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$JSON = @"
{
    "definition": {
        "id": {Build definition ID}
    }
}
"@
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -ContentType application/json -body $JSON
write-host $response

Retrieving info from Azure DevOps with PowerShell

I'm trying to automate the deployment of releases in Azure DevOps using the Azure DevOps API with PowerShell. I'm able to retrieve the last release definition, but I'm not able to extract a release for a specific environment.
I'm looking to retrieve the last 'prod' release in order to obtain the release ID, and then deploy that release to UAT or DEV using an HTTP PATCH request.
I can hit a URL manually (example below), which gives me the last release for DEV (definitionEnvironmentId=1), and I know I could easily change the environment ID to '12' for production, but I'd like to get this via an API call.
https://{redacted url}/{redacted}/{redacted}/_apis/release/deployments?api-version=6.0&definitionId=1&definitionEnvironmentId=1&$top=1
Any ideas if this is possible as I can't find any info on it?
You were using the correct Deployments - List REST API. The reason you did not get any info from the API call via PowerShell is that you did not escape the special character $ in the $top parameter of the URL.
$ is a special character in PowerShell. You should escape it by adding a backtick (`) in the URL. See below:
# Add ` before the $top parameter
$url = "https://{redacted url}/_apis/release/deployments?definitionId=3&definitionEnvironmentId=5&`$top=1&api-version=6.0"
$connectionToken="PAT"
$base64AuthInfo= [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
$response= Invoke-RestMethod -Uri $url -Headers @{authorization = "Basic $base64AuthInfo"} -Method Get
$releaseId = $response.value[0].release.id
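For the deployment half of your question, a hedged sketch of the follow-up call: the Update Release Environment API takes a PATCH against the target environment of that release. The {targetEnvironmentId} below is a placeholder, and the api-version may differ on your server.
# Trigger deployment of the retrieved release to a target environment
$patchUrl = "https://{redacted url}/_apis/release/releases/$releaseId/environments/{targetEnvironmentId}?api-version=6.0-preview.6"
$patchBody = '{ "status": "inProgress" }'
Invoke-RestMethod -Uri $patchUrl -Headers @{authorization = "Basic $base64AuthInfo"} -Method Patch -ContentType "application/json" -Body $patchBody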

Check previous build information in VSTS (VSTS API)

Are previous build variables accessible during execution of a VSTS build? For example, can I get $(Build.SourceVersion) or $(Build.QueuedBy) of the previous build?
I can get current build information through the build variables like $(Build.SourceVersion) but can I get something like $(Build.Previous.SourceVersion)?
There are no built-in variables for previous build information. The workaround is to call the Builds REST API (which can filter by status, such as completed or inProgress) through PowerShell during the current build; the first item of the result is the newest build.
# Username can be anything when a PAT or OAuth token is used as the password (see note below)
$base64authinfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $User, $Password)))
# $Uri is the Builds REST API, e.g. filtered to completed builds with `$top=1
$responseFromGet = Invoke-RestMethod -Method Get -ContentType application/json -Uri $Uri -Headers @{Authorization=("Basic {0}" -f $base64authinfo)}
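From there, a sketch of pulling the previous build's details out of the response (property names as returned by the Builds API; the URI is assumed to be filtered to completed builds, newest first):
# The first item is the most recent completed build
$previousBuild = $responseFromGet.value[0]
Write-Host "Previous source version: $($previousBuild.sourceVersion)"
Write-Host "Previous queued by: $($previousBuild.requestedBy.displayName)"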
Some articles about calling the REST API: Calling VSTS APIs with PowerShell, and VSTS/TFS REST API: The basics and working with builds and releases.
You can use the value of the System.AccessToken variable as the password (check the "Allow scripts to access OAuth token" option in the Options tab); the username can be anything.
No. "Previous" is a nebulous concept when you're talking about things that can run in parallel. What if you have 3 builds running concurrently?