I uploaded a .cert certificate as a secure file in Azure DevOps.
We are using Classic pipelines.
My pipeline has two additional tasks:
1. Download Secure File
2. Azure CLI task to import the .cert file, with the below script as an inline script:
$certFilePath = $(Agent.TempDirectory)\mycert.com.crt
az keyvault certificate import --vault-name "keyvaultname" -n "mycert.com.crt" -f $certFilePath
I am getting the below error:
D:\agent_work_temp\mycert.com.crt : The term 'D:\agent_work_temp\mycert.com.crt' is not recognized as the name
of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included,
verify that the path is correct and try again.
At D:\agent_work_temp\azureclitaskscript1675572660483_inlinescript.ps1:1 char:12
$inFile = D:\agent_work_temp\mycert.com.crt
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CategoryInfo : ObjectNotFound: (D:\agent_work_temp\mycert.com.crt:String) [], ParentContainsErrorRecordException
FullyQualifiedErrorId : CommandNotFoundException
I have tried to reproduce the same in my lab environment and got the below results.
Firstly, I would like to inform you that the certificate file to be imported must have a .pfx or .pem extension. Kindly refer to this link for more details.
I have followed the below steps to upload a .pfx file to Azure Key Vault.
Step 1: Create a .pfx certificate.
Step 2: Upload the file to Secure Files in Azure DevOps.
Step 3: Add the below-mentioned tasks to the pipeline.
Command: az keyvault certificate import --vault-name "key-vault-vijay" --file "$(Agent.TempDirectory)\test05.pfx" --name "cert0104" --password "passw0rd#123"
Pass the password parameter if you set a password when creating the certificate file.
Step 4: Verify the access policies on the key vault assigned to the service principal that Azure DevOps uses.
Step 5: Run the pipeline and check for the certificate in the Azure portal.
Kindly refer to this link, Manage Azure Key Vault using CLI - Azure Key Vault | Microsoft Learn, for more details.
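For completeness, here is a minimal sketch of the inline script for the Azure CLI task, assuming the Download Secure File task has already placed test05.pfx in $(Agent.TempDirectory); the path is assigned as a quoted string so PowerShell treats it as a value rather than as a command to run:
# Build the path to the downloaded secure file (quoted so PowerShell treats it as a string)
$certFilePath = "$(Agent.TempDirectory)\test05.pfx"
# Import the .pfx into the key vault (names and password are the examples from the steps above)
az keyvault certificate import --vault-name "key-vault-vijay" --name "cert0104" --file $certFilePath --password "passw0rd#123"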
Related
I need to create a Greengrass group and core in AWS IoT (Windows).
I have referred to the document https://docs.aws.amazon.com/cli/latest/reference/greengrass/create-group.html
I have tried the PowerShell script aws greengrass create-group \ --name ggawsgreen
I get an error when executing the above PowerShell script. Error => aws : The term 'aws' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again
How do I create a Greengrass group and core (AWS IoT) in Windows PowerShell?
The error appears because you haven't installed the AWS Command Line Interface (AWS CLI for short),
or aws is not in your PATH.
You can download it here.
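Once the AWS CLI is installed and on your PATH, a minimal PowerShell sketch (using the group name from your question) would be:
# Throws immediately if aws is still not found on the PATH
Get-Command aws -ErrorAction Stop
# Create the Greengrass group; note PowerShell uses a backtick for line continuation, not a backslash
aws greengrass create-group `
    --name ggawsgreen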
I am attempting to use the Azure CLI to create a user in my Active Directory. I am using this command:
az login
az ad user create --display-name "John Doe" --password "dfdfsd34!234" --user-principal-name "john#mydomain.com" --force-change-password-next-login true --mail-nickname "Jonny"
(I have obfuscated the UPN name)
If I run that from my command line, it runs exactly how I wish and the user appears in my Active Directory. If I place that command inside a PowerShell script, it fails, saying the UPN is invalid.
az : ERROR: Property userPrincipalName is invalid.
My version of PowerShell is 5.1.14409.1005.
Any ideas what I am missing? I originally assumed it was logging into Azure correctly but then returning to the original shell?
When using the Azure CLI command az ad user create to create an Azure AD user, please ensure you provide a validated Azure AD domain in the UPN. If the domain is not validated, you will get the above error. The default domain is ***.onmicrosoft.com. For more details, please refer to here and here.
For example
az ad user create --display-name "John Doe" --password "dfdfsd34!234" --user-principal-name "john#hanxia.onmicrosoft.com" --force-change-password-next-login true --mail-nickname "Jonny"
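If you are not sure which domains are verified in your tenant, here is a sketch of one way to check (assuming the signed-in account is allowed to read domains via Microsoft Graph):
# Lists the verified domain names in the tenant; use one of them in the UPN
az rest --method get --url "https://graph.microsoft.com/v1.0/domains" --query "value[?isVerified].id" --output tsv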
Below are the detailed steps I am performing for CI/CD. I created a build pipeline to produce the spfx.sppkg package, which is successful. Now in the release I am doing the below steps:
Connect to SharePoint App Catalog (Successful)
o365 login https://naxis007.sharepoint.com/sites/Wipro%20App%20Catalog/AppCatalog --authType password --userName $(username) --password $(password)
Add Solution Package to App Catalog (getting an access denied error in this step)
o365 spo app add -p $(System.DefaultWorkingDirectory)/Test/drop/drop/sharepoint/solution/spfx.sppkg --overwrite --appCatalogUrl https://naxis007.sharepoint.com/sites/Wipro%20App%20Catalog/AppCatalog --scope sitecollection
It looks like you were running the o365 login command and the o365 spo app add command in two separate command line tasks.
Each command line task opens a new terminal window, which is closed when the task finishes. So the login information from the first command line task cannot persist into the second command line task. That's why you got the access denied error.
You should run the o365 login command and the o365 spo app add command in the same command line task, so that both commands are executed in the same terminal.
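For example, a single command line task (reusing the URLs, path and pipeline variables from your question) would look like this:
o365 login https://naxis007.sharepoint.com/sites/Wipro%20App%20Catalog/AppCatalog --authType password --userName $(username) --password $(password)
o365 spo app add -p $(System.DefaultWorkingDirectory)/Test/drop/drop/sharepoint/solution/spfx.sppkg --overwrite --appCatalogUrl https://naxis007.sharepoint.com/sites/Wipro%20App%20Catalog/AppCatalog --scope sitecollection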
Alternatively, you can use the SharePoint Files Uploader task to upload the spfx.sppkg file.
I have an Azure DevOps build pipeline that deploys to AWS using the AWS CodeDeploy task. I want to report on the details of this deployment by using the deployment ID output variable from the AWS CodeDeploy task as an input to query the deployment in the next task, an AWS CLI command.
Here is the AWS CodeDeploy step, and the configuration of the output variable.
Here is the subsequent step, using that variable.
Here is the output error from the build pipeline.
Code Deploy task:
Started deployment of new revision to deployment group VSTSEc2Targets for application VSTSTestApp, deployment ID d-PN4UXHVJO
Setting output variable deployment_id with the ID of the deployment
Waiting for deployment to complete
AWS CLI task:
[command]C:\windows\system32\cmd.exe /D /S /C "C:\hostedtoolcache\windows\Python\3.6.8\x64\Scripts\aws.cmd deploy get-deployment --deployment-id "$(codedeploy.deployment_id)""
An error occurred (InvalidDeploymentIdException) when calling the GetDeployment operation: Specified DeploymentId is not in the valid format: $(codedeploy.deployment_id)
##[error]Error: The process 'C:\hostedtoolcache\windows\Python\3.6.8\x64\Scripts\aws.cmd' failed with exit code 255
It appears not to be converting the variable to its actual value. Can anyone assist?
I tested outputting the variable via PowerShell and got this error:
variable check
codedeploy.deployment_id : The term 'codedeploy.deployment_id' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is
correct and try again.
At D:\a\_temp\4985f146-ca74-46a3-aed2-aa67cdc2e01a.ps1:5 char:14
+ Write-Host $(codedeploy.deployment_id)
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (codedeploy.deployment_id:String) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : CommandNotFoundException
##[error]PowerShell exited with code '1'.
##[section]Finishing: PowerShell Script
Using script:
# Write your PowerShell commands here.
Write-Host "variable check"
Write-Host $(codedeploy.deployment_id)
Solved this. The issue was my syntax. I assumed you needed to reference the variable using the reference name plus the variable name, $(reference.variable), when in fact you only need to use $(variable). The reference name, while required in the task definition, is not used when referencing the value elsewhere. I was following the Microsoft documentation, which states the following:
Use outputs in the same job
In the Output variables section, give the producing task a reference name. Then, in a downstream step, you can use the form $(ReferenceName.VariableName) to refer to output variables.
From: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=classic%2Cbatch
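So, with the output variable on the CodeDeploy task named deployment_id, the downstream steps only need $(deployment_id). A sketch of the corrected scripts from above:
# PowerShell task: the variable now resolves to the actual deployment ID
Write-Host "variable check"
Write-Host "$(deployment_id)"
# AWS CLI task command line
aws deploy get-deployment --deployment-id "$(deployment_id)"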
AWS CLI commands in PowerShell like ec2 and s3api work just fine, but deploy commands always throw a usage error. Why is this?
EDIT
If you've worked with the AWS CLI, you probably are aware of which commands I am referencing... But if you need example commands:
This works just fine in a PowerShell script, as does aws s3api list-objects, etc.:
aws ec2 describe-instance-status --instance-id $ec2_instance_id.Trim() --region us-east-1 --query "InstanceStatuses[].InstanceState[].Name"
This does not work:
aws deploy create-deployment --application-name MyWebApp --s3-location bucket=sthreebucketname,bundleType=zip,eTag=abac12345jkjdafdafdf,key=MyWebApp.zip --deployment-group-name DepGroup --deployment-config-name CodeDeployDefault.OneAtATime --description "MyApp Deployment"
I have set the user environment variables under my administrator account. I have also set up PowerShell with my AWS credentials when I installed the CodeDeploy agent. So I should have no issue with access or permissions, but evidently it doesn't like the deploy suite of commands.
EDIT 2
Here is the error message. I know it says usage, but the same command works on the command prompt just fine.
aws : usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
At C:\scripts\deploy-to-instance.ps1:1 char:11
+ $result = aws deploy create-deployment --application-name MyWebApp --s3-location ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (usage: aws [opt....] [parameters]:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: eTag=abac12345jkjdafdafdf, key=MyWebApp.zip, bundleType=zip
The New-CDDeployment cmdlet from the AWSPowerShell module does the same thing.
What other permissions need to be set then?
It is not a permissions error at all.
The answer is:
The syntax PowerShell expects is different from (and more finicky than) what the command prompt expects. You need to have quotation marks and commas in exactly the right spots for it to work properly.
If you are using the --query option, make sure your queries use the exact syntax. It seems the command prompt CLI can accept slightly different syntax without throwing errors.
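As a concrete sketch, the create-deployment call from the question runs in PowerShell once the comma-separated --s3-location value is wrapped in quotes, so PowerShell passes it as a single argument instead of splitting it into an array at the commas (all values are the placeholders from the question):
# Quote the comma-separated key=value pairs so PowerShell does not treat them as an array
$result = aws deploy create-deployment `
    --application-name MyWebApp `
    --s3-location "bucket=sthreebucketname,bundleType=zip,eTag=abac12345jkjdafdafdf,key=MyWebApp.zip" `
    --deployment-group-name DepGroup `
    --deployment-config-name CodeDeployDefault.OneAtATime `
    --description "MyApp Deployment"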