Tagging Azure Resources from .csv - PowerShell

Is there an easy way to read a .csv in a VSTS pipeline from a PowerShell script?
I have a script that tags Azure resources, and it gets the key-value pairs from a .csv file. It works like a charm when running locally with:
$csv = Import-Csv "d:\tagging\tags.csv"
But I'm struggling to find a way to reference the .csv in VSTS (Azure DevOps Services). I've put the .csv in the same repo/folder as the script, and I've created an Azure PowerShell script task.
I need to know what the Import-Csv call should look like when it runs in VSTS. Do I need to add additional steps so that the agent downloads the .csv when running the script?
This is the current error:
The hosted agent can't find the file and reports: "Could not find file 'D:\a_tasks\AzurePowerShell_72s1a1931b-effb-4d2e-8fd8-f8472a07cb62\3.1.6\tags.csv'."

Let's say you put the file in your repo at /AwesomeCSV/MyCSV.csv. From a build perspective, your CSV's location would be $(Build.SourcesDirectory)/AwesomeCSV/MyCSV.csv.
So basically, either pass $(Build.SourcesDirectory)/AwesomeCSV/MyCSV.csv to the script as an argument, or reference the sources directory inside the script as the environment variable $env:BUILD_SOURCESDIRECTORY. A minimal sketch of the second option is below.
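For example (a sketch only, assuming the /AwesomeCSV/MyCSV.csv layout above; the column names Name and Value are placeholders for whatever headers your file actually uses):

    # BUILD_SOURCESDIRECTORY is set automatically on the build agent.
    $csvPath = Join-Path $env:BUILD_SOURCESDIRECTORY "AwesomeCSV\MyCSV.csv"
    $csv = Import-Csv $csvPath

    foreach ($row in $csv) {
        # Replace Name/Value with your real column headers.
        Write-Output "Tag '$($row.Name)' = '$($row.Value)'"
    }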

Related

How to delete specific files from the source folder using the Delete task in Azure DevOps

I am trying to add a task that deletes files of a specific type from the source folder and all its subfolders, using the Delete task in an Azure DevOps pipeline.
The Delete task documentation (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/delete-files?view=azure-devops) doesn't seem to provide any information on the patterns.
I have tried the following combinations, but none of them worked:
(.xyz)
*.xyz
*.xyz\
My expectation is to delete files with the .xyz extension from all the subfolders.
Try setting:
**/*.xyz
as the value of the Contents variable.
The full range of pattern filters is described in the documentation here.
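If you want to sanity-check what that recursive pattern will match before letting the task delete anything, a rough local equivalent in PowerShell (a sketch only, not the task itself) is:

    # Lists the files a **/*.xyz pattern would cover under the sources directory.
    Get-ChildItem -Path $env:BUILD_SOURCESDIRECTORY -Recurse -Filter *.xyz |
        ForEach-Object { $_.FullName }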

How to delete empty rows from a CSV file using PowerShell

I need to delete multiple empty rows (;;;;;;;) from a CSV file using PowerShell.
I wrote a PowerShell script that I use to create new users in Active Directory. The CSV file is created using Excel, and sometimes it contains empty rows. The script is deployed automatically through Task Scheduler, so I can't check it every day; that's why I really need PowerShell to clean the CSV automatically.
I also want PowerShell to delete all the contents of the CSV file once the users have been created (at the end of the script).
Thanks for the help.
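One possible approach (a sketch only, assuming a semicolon-delimited file as described; the path is hypothetical) is to drop any line that contains nothing but delimiters and whitespace, and to clear the file at the end of the run:

    $csvPath = "C:\scripts\users.csv"   # hypothetical path

    # Keep only lines that still contain data after stripping semicolons and whitespace.
    (Get-Content $csvPath) |
        Where-Object { ($_ -replace '[;\s]', '') -ne '' } |
        Set-Content $csvPath

    # ... create the users here ...

    # At the end of the script, empty the file so the next run starts clean.
    Clear-Content $csvPath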

Jenkins Pipeline - Create file in workspace (Windows Slave)

For a number of reasons, it would be really useful if I could create a file from a Jenkins pipeline and put it in my workspace. If I could do this, I could avoid pulling in some repositories that I'm currently pulling in for just one or two files, keep those files in a maintainable place, and also use this to create temporary PowerShell scripts, working around a limitation of the solution described in https://stackoverflow.com/a/42576572.
This might be possible through a pipeline utility, although https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/ doesn't list any such utility; or it might be possible using a batch script, as long as that can be passed in as a string.
You can do something like this:
node('') {
    stage('test') {
        // Write a file into the workspace from a batch step.
        bat 'echo something > file.txt'
        // Read the file back into a Groovy variable.
        String out = readFile('file.txt').trim()
        print out            // prints the variable, Groovy style
        bat "echo ${out}"    // the Groovy variable is interpolated into the batch command
        // If the file contained Groovy code, the built-in 'load' step could be used
        // to load it and call its functions.
    }
}
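As an aside, the pipeline writeFile step (writeFile file: 'file.txt', text: '...') can also create a file in the workspace directly without shelling out, if it is available in your Jenkins installation.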

Azure Data Factory pipelines are failing when no files available in the source

Currently we do our data loads from a Hadoop on-premises server to SQL DW [via ADF staged copy and DMG on the on-premises server]. We noticed that ADF pipelines fail when there are no files in the Hadoop on-premises server location [we do not expect our upstreams to send files every day, so having ZERO files in that location is a valid scenario].
Do you have a solution for this kind of scenario?
The error message is given below:
Failed execution Copy activity encountered a user error:
ErrorCode=UserErrorFileNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot find the 'HDFS' file.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The remote server returned an error: (404) Not Found.,Source=System,'.
Thanks,
Aravind
This requirement can be solved by using the ADFv2 Get Metadata activity to check for file existence and then skipping the copy activity if the file or folder does not exist:
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-get-metadata-activity
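The Get Metadata activity can return an exists field; an If Condition activity with an expression along the lines of @activity('Get Metadata1').output.exists (the activity name here is just an example) can then decide whether to run the copy or skip it.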
You can change the File Path Type to Wildcard, add the name of the file, and add a "*" at the end of the name (or wherever else suits you).
This is a simple way to stop the pipeline failing when there is no file.
Do you have an Input Dataset for your pipeline? See if you can skip the Input Dataset dependency.
Mmmm, this is a tricky one. I'll upvote the question, I think.
A couple of options that I can think of here...
1) I would suggest the best way would be to create a custom activity ahead of the copy to check the source directory first. This could handle the behaviour when there isn't a file present, rather than just throwing an error, and you could code it to return a little more gracefully and not block the downstream ADF activities.
2) Use some PowerShell to inspect the ADF activity for the missing-file error, then simply set the dataset slice to either Skipped or Ready using the cmdlet to override the status.
For example:
Set-AzureRmDataFactorySliceStatus `
    -ResourceGroupName $ResourceGroup `
    -DataFactoryName $ADFName.DataFactoryName `
    -DatasetName $Dataset.OutputDatasets `
    -StartDateTime $Dataset.WindowStart `
    -EndDateTime $Dataset.WindowEnd `
    -Status "Ready" `
    -UpdateType "Individual"
This of course isn't ideal, but would be quicker to develop than a custom activity using Azure Automation.
Hope this helps.
I know I'm late to the party, but if you're like me and running into this issue: it looks like they made an update a while back that allows for no files being found.

Jenkins Powershell Output

I would like to capture the output of some variables to be used elsewhere in the job using Jenkins Powershell plugin.
Is this possible?
My goal is to build the latest tag somehow, and the PowerShell script was meant to achieve that. Outputting to a text file would not help, and environment variables can't be used because the process is seemingly forked, unfortunately.
Besides EnvInject, another common approach for sharing data between build steps is to store results in files located in the job workspace.
The idea is to skip environment variables altogether and just write/read files.
It seems that the only solution is to combine this with the EnvInject plugin. You can create a text file with key-value pairs from PowerShell, then export them into the build using the EnvInject plugin. A sketch of the PowerShell side follows.
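For example (a minimal sketch; the file name build.props and the variable name are assumptions, and EnvInject would need to be pointed at that properties file):

    # Collect the value you want to share with later build steps.
    $latestTag = git describe --tags --abbrev=0

    # EnvInject expects simple KEY=VALUE lines.
    @(
        "LATEST_TAG=$latestTag"
    ) | Set-Content (Join-Path $env:WORKSPACE 'build.props')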
You should make the workspace persistent for this job; then you can save the data you need to a file. Other jobs can then access this persistent workspace, or use it as their own, as long as they are on the same node.
Another option would be to use Jenkins' built-in artifact retention: at the end of the job's configure page there is an option to retain files specified by a pattern (e.g. *.xml or last_build_number). These are then given a specific address that can be used by other jobs regardless of which node they are on; the address can be on the master or the node, IIRC.
For the simple case of wanting to read a single object from PowerShell, you can convert it to a JSON string in PowerShell and then convert it back in Groovy. Here's an example:
// ConvertTo-Json on the PowerShell side; readJSON (Pipeline Utility Steps plugin) on the Groovy side.
def pathsJSON = powershell(returnStdout: true, script: "ConvertTo-Json ((Get-ChildItem -Path *.txt) | select -Property Name)")
def paths = []
if (pathsJSON != '') {
    paths = readJSON text: pathsJSON
}