Use variable in PowerShell command line [duplicate] - powershell

This question already has answers here:
Using --% in PowerShell
(3 answers)
Closed 2 years ago.
I have a small PowerShell script that uploads a file to blob storage. The idea is to run this as an inline script inside DevOps once a build has been completed, instead of having the path/file hardcoded like now.
If I run this in a PowerShell command prompt on my computer, it works fine.
az storage blob upload --% --container-name mycontainer --account-name myaccount --name "File.zip" --file "c:\Projects\File.zip" --sas-token "{my token}"
However, I want to exchange the hardcoded path+file for a variable that my build pipeline can set. But here is where I haven't figured out how to actually use the variable.
The following does not work when I try to run it locally. To test, I created a variable and then made the call.
# Create the variable
New-Variable -Name "TestFile" -Value "c:\projects\File.zip"

# Try to upload it
az storage blob upload --% --container-name mycontainer --account-name myaccount --name "File.zip" --file $TestFile --sas-token "{my token}"
Results in:
FileOperationError: [WinError 2] The system cannot find the file
specified: '$TestFile'
I am assuming that I need to declare it in a different way or pipe it to make it work, but how?

Remove the stop-parsing token --% (it tells PowerShell to pass everything after it verbatim, which is why the literal string $TestFile reached az) and put quotation marks around the variable, for example "$TestFile".
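A minimal sketch of the corrected call, using the same placeholder names as above:

az storage blob upload --container-name mycontainer --account-name myaccount --name "File.zip" --file "$TestFile" --sas-token "{my token}"

Without --%, PowerShell expands $TestFile to the full path before az sees the arguments.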

Related

File path from within Azure CLI task

I have an Azure CLI task which references a PowerShell script (via build artifact) running az commands. Most of these commands work successfully, but when attempting to execute the following command:
az appconfig kv import --name $resourceName -s file --path appconfig.json --format json
I've noticed that the information is not present on the Azure resource, and the log file says "File is not available".
I must be referencing the file incorrectly from the build artifact, but if anyone could provide some clarity around this, that would be great.
I must be referencing the file incorrectly from the build artifact
You can try to add $(System.ArtifactsDirectory) to the json file path. For example: --path $(System.ArtifactsDirectory)/appconfig.json.
System.ArtifactsDirectory: The directory to which artifacts are downloaded during deployment of a release. Example: C:\agent\_work\r1\a
For details, please refer to predefined variables.
This can be a little tricky to figure out.
System.ArtifactsDirectory is the default variable that indicates the directory to which artifacts are downloaded during deployment of a release.
However, to use a default variable in your script, you must first replace the . in the default variable names with _. For example, to print the value of artifact variable System.ArtifactsDirectory in a PowerShell script, you would have to use $env:SYSTEM_ARTIFACTSDIRECTORY.
I have a similar setup and do it this way within my PowerShell script:
# Define the path to the file
$appSettingsFile="$env:SYSTEM_ARTIFACTSDIRECTORY\<rest_of_the_path>\appconfig.json"
# Pass it to the Azure CLI command
az appconfig kv import -n $appConfigName -s file --path $appSettingsFile --format json --separator . --yes
It is also helpful to view the current values of all variables to see what they contain before using them.
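For example, a quick way to dump every environment variable available to a PowerShell step (plain PowerShell, nothing pipeline-specific):

# List all environment variables and their current values
Get-ChildItem Env: | Sort-Object Name | Format-Table Name, Value -AutoSize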
References:
Default variables - System
Using default variables

Dockerfile RUN powershell wget and see progress

In my dockerfile I want to use the following sequence of commands to download and extract a large zip file:
RUN powershell -Command \
wget http://my_server/big_huge.zip \
-OutFile C:\big_huge.zip ; \
Expand-Archive -Path C:\big_huge.zip \
-DestinationPath C:\big_huge ; \
Remove-Item C:\big_huge.zip -Force
I don't want to use ADD, because the zip file isn't going to change and I want this step to be cached.
What I have above seems to work, but I do not get any indication of the progress of the download like I normally would. That's a bummer, because this is a large download. The progress of the download is obscured, I suppose, because wget is an alias for the Invoke-WebRequest cmdlet. Is there any way to pipe the output of a cmdlet to stdout so I can see it when I am running docker build?
I gave up on trying to do the download from the Dockerfile and instead wrote a separate script that pre-downloads the files I need and expands their archives if the files aren't already present. This script then calls docker build, docker run, etc. In the Dockerfile I am copying the directory where I expanded the archives.
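A minimal PowerShell sketch of that wrapper approach; the URL comes from the question, the image name is hypothetical:

# build.ps1 - pre-download and extract once, then build the image
$zipUrl  = "http://my_server/big_huge.zip"
$zipPath = "big_huge.zip"
$destDir = "big_huge"   # the Dockerfile COPYs this directory
if (-not (Test-Path $destDir)) {
    if (-not (Test-Path $zipPath)) {
        # run outside docker build, the progress bar displays as usual
        Invoke-WebRequest $zipUrl -OutFile $zipPath
    }
    Expand-Archive -Path $zipPath -DestinationPath $destDir
    Remove-Item $zipPath -Force
}
docker build -t my_image .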
I don't know Docker, but maybe you can pipe the output through the PowerShell cmdlet Out-Host. Type help Out-Host for more information.

Configuration of Owasp Zap on Azure Container Instances

I am trying to create an OWASP ZAP instance on Azure Container Instances using the following code:
$containerGroupName = "EW-owaspzap"
$containerDnsName = "EW-owaspzap"
$imageName = "owasp/zap2docker-stable"
$myIpAddress = (Invoke-WebRequest ifconfig.me/ip).Content.Trim()
$environmentVars = @{"api.key"="myreallysecureapikey";"api.addrs.addr.name"=$myIpAddress}
$containerGroup = Get-AzureRmContainerGroup -ResourceGroupName $resourceGroupName -Name $containerGroupName -ErrorAction SilentlyContinue
if (!$containerGroup) {
New-AzureRmContainerGroup -ResourceGroupName $resourceGroupName -Name $containerGroupName -Image $imageName -Command zap-webswing.sh -Port 8080,8090 `
-IpAddressType Public -DnsNameLabel $containerDnsName -RestartPolicy OnFailure -Location WestEurope -AzureFileVolumeShareName $storageShareName `
-AzureFileVolumeMountPath '/output' -AzureFileVolumeAccountCredential $storageCredentials -EnvironmentVariable $environmentVars
}
However I get the error:
The environment variable name in container 'EW-owaspzap' of container group 'EW-owaspzap' is invalid. A valid environment variable
name must start with an alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName')
According to https://github.com/zaproxy/zaproxy/wiki/Docker I have the format of the environment variables correct. Is there anything else I have missed?
This is an ACI limitation; see the naming limitations for environment variables:
Environment variable names: 1-63 characters, case insensitive, alphanumeric and underscore (_) allowed anywhere except the first or last character.
This is not an issue with ZAP, but with ACI.
This can be solved with a script that reads the env vars in their Azure-safe format and converts them to ZAP's format (e.g. api_key to api.key). This is pseudo-code (I did not test it), just to give you an idea:
#!/bin/sh
# read the ACI-safe variable and hand it to ZAP in its dotted form
./zap.sh -config api.key="$API_KEY"
Create a new docker image based on Zap's official image, copy the script and use it to start Zap instead of the regular Zap's command.
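A hedged sketch of what that wrapper image could look like; the script name and layout are hypothetical and untested:

# Dockerfile based on ZAP's official image, starting via the conversion script
FROM owasp/zap2docker-stable
COPY start-zap.sh /zap/start-zap.sh
CMD ["sh", "/zap/start-zap.sh"]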
For your issue, I think there is something you have misunderstood. Look at the command in the link you posted: docker run -u zap -p 8080:8080 -i owasp/zap2docker-stable zap-x.sh -daemon -host 0.0.0.0 -port 8080 -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true. If you check docker run itself, it has no parameter like -config.
So I think everything from zap-x.sh to the end is a single command run inside the container via the script zap-x.sh. You can check the parameter definitions in the script zap-x.sh.
Also, the -EnvironmentVariable parameter of the PowerShell command takes a hashtable; you can get more details here. And there are some limitations on naming conventions in Azure Container Instances.
Not sure if you got this working, but I used your PowerShell script and was able to create the ZAP container by replacing "." with "_" in the $environmentVars hashtable.
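Applied to the snippet above, that fix would look like this (same placeholder values; whether ZAP picks up the underscore-named variables depends on the image):

$environmentVars = @{"api_key"="myreallysecureapikey";"api_addrs_addr_name"=$myIpAddress}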

Creating CSV file as an object

I have got a robust script which gets, parses, and uses some data from a .csv file. To run the script I can use
.\script.ps1 -d data_file.csv
The thing is, I cannot modify the script itself, which is why I need to create some kind of wrapper that will create a new CSV file and call script.ps1 with the newly made file. I am wondering if there is a possibility to create the CSV file as an object which can be passed directly to the command like this
.\script.ps1 -d csv_file_as_object.csv
without creating a file in some directory on disk.
What you'd need in this case is the equivalent of Bash's process substitution (<(...)), which, in a nutshell, would allow you to present a command's output as the content of a temporary file whose path is output:
.\script.ps1 -d <(... | ConvertTo-Csv) # !! does NOT work in PowerShell
Note: ... | ConvertTo-Csv stands for whatever command is needed to transform the original CSV in-memory.
No such feature exists in PowerShell as of Windows PowerShell v5.1 / PowerShell Core v6.1, but it has been proposed.
If .\script.ps1 happens to also accept stdin input (via the pseudo-path -, which indicates stdin), you could try:
... | ConvertTo-Csv | .\script.ps1 -d -
Otherwise, your only option (sketched after this list) is to:
save your modified CSV data to a temporary file
pass that temporary file's path to .\script.ps1
remove the temporary file.
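A minimal sketch of that temporary-file fallback; the Import-Csv/Export-Csv step stands in for whatever in-memory transformation is actually needed:

# write the modified CSV to a temp file, run the script, then clean up
$tmp = New-TemporaryFile
try {
    Import-Csv .\data_file.csv | Export-Csv $tmp.FullName -NoTypeInformation
    .\script.ps1 -d $tmp.FullName
}
finally {
    Remove-Item $tmp.FullName -Force
}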

AWS S3, Deleting files from local directory after upload

I have backup files in different directories on one drive. Files in those directories can be quite big, up to 800 GB or so. So I have a batch file with a set of scripts which upload/sync files to S3.
See example below:
aws s3 sync R:\DB_Backups3\System s3://usa-daily/System/ --exclude "*" --include "*/*/Diff/*"
The upload time can vary but so far so good.
My question is: how do I edit the script, or create a new one, so that it checks in the S3 bucket that the files have been uploaded, and ONLY if they have been uploaded deletes them from the local drive; if not, it leaves them on the drive?
(Ideally it would check each file)
I'm not familiar with an aws s3 or aws cli command that can do that. Please let me know if I made myself clear or if you need more details.
Any help will be very appreciated.
Best would be to use mv with the --recursive parameter for multiple files.
When passed with the parameter --recursive, the following mv command recursively moves all files under a specified directory to a specified bucket and prefix while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg:
aws s3 mv myDir s3://mybucket/ --recursive --exclude "*.jpg"
Output:
move: myDir/test1.txt to s3://mybucket/test1.txt
Hope this helps.
As the answer by @ketan shows, the Amazon aws client cannot do a batch move.
You can use WinSCP put -delete command instead:
winscp.com /log=S3.log /ini=nul /command ^
"open s3://S3KEY:S3SECRET#s3.amazonaws.com/" ^
"put -delete C:\local\path\* /bucket/" ^
"exit"
You need to URL-encode special characters in the credentials. WinSCP GUI can generate an S3 script template, like the one above, for you.
Alternatively, since WinSCP 5.19, you can use -username and -password switches, which do not need any encoding:
"open s3://s3.amazonaws.com/ -username=S3KEY -password=S3SECRET" ^
(I'm the author of WinSCP)