Securely Get Access Key / Secret from CloudFormation AccessKey Creation - aws-cloudformation

I've created a CloudFormation template that successfully creates an IAM user and an AccessKey and assigns that AccessKey to the IAM user. Right now I am getting the AccessKey's secret by outputting it in the Outputs section of the CloudFormation template.
I'd like to know if there is a more secure way to create the AccessKey and fetch its corresponding secret without spitting it out in plain text in the Outputs section.
I'm a bit confused because AWS doesn't have much info on doing this, and what little documentation it does have directly contradicts itself. Here, AWS suggests doing exactly what I've described above: "One way to retrieve the secret key is to put it into an Output value". This seems like a security issue, which is confirmed by another AWS doc here, where it says "We strongly recommend you don't use this section to output sensitive information, such as passwords or secrets".
Am I misunderstanding their documentation, or is this a direct contradiction? I've seen an S/O comment here suggesting the use of AWS Secrets Manager, but I'm having trouble figuring out how to get the AccessKey secret into Secrets Manager, where it can be stored and fetched more securely by something like boto3. Any example of this would be super helpful. My CloudFormation template is below for reference.
{
    "Description": "My CloudFormation Template",
    "Outputs": {
        "UserAccessKeyId": {
            "Description": "The value for the User's access key id.",
            "Value": {
                "Ref": "PrimaryUserAccessKey"
            }
        },
        "UserSecretKey": {
            "Description": "The value for the User's secret key.",
            "Value": {
                "Fn::GetAtt": [
                    "PrimaryUserAccessKey",
                    "SecretAccessKey"
                ]
            }
        }
    },
    "Resources": {
        "User": {
            "Properties": {
                "UserName": "myNewUser"
            },
            "Type": "AWS::IAM::User"
        },
        "PrimaryUserAccessKey": {
            "DependsOn": "User",
            "Properties": {
                "Status": "Active",
                "UserName": "myNewUser"
            },
            "Type": "AWS::IAM::AccessKey"
        }
    }
}
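
To illustrate the concern: anything in the Outputs section is readable by anyone who can describe the stack, for example with boto3. A minimal sketch ("my-stack" is a placeholder stack name):

import boto3

# Anyone with cloudformation:DescribeStacks permission can read the
# secret straight out of the stack outputs, which is why Outputs are a
# poor place for it. "my-stack" is a placeholder stack name.
cfn = boto3.client("cloudformation")
stack = cfn.describe_stacks(StackName="my-stack")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["UserSecretKey"])  # the secret access key, in plain text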

I would recommend putting it in a Secret. You can have CloudFormation write the value to Secrets Manager within the stack, and then access it from code. That gives you a secret that no person ever has to see or touch in order to use it.
I think something like this should work (note: I haven't actually tried this).
AccessKey:
  Type: AWS::IAM::AccessKey
  Properties:
    Serial: 1
    Status: Active
    UserName: 'joe'
AccessKeySecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: JoeAccessKey
    Description: Joe's Access Key
    SecretString: !Sub '{"AccessKeyId":"${AccessKey}","SecretAccessKey":"${AccessKey.SecretAccessKey}"}'
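
Once the stack is up, code can then fetch the key pair from Secrets Manager instead of the stack outputs. A minimal boto3 sketch, assuming the secret name JoeAccessKey from the template above and default credentials/region:

import json

import boto3

# Fetch the access key pair that the stack wrote to Secrets Manager.
# "JoeAccessKey" is the Name given to the secret in the template above.
client = boto3.client("secretsmanager")
response = client.get_secret_value(SecretId="JoeAccessKey")

# SecretString holds the JSON document written by Fn::Sub in the template.
secret = json.loads(response["SecretString"])
access_key_id = secret["AccessKeyId"]
secret_access_key = secret["SecretAccessKey"]

This way the secret never appears in the Outputs section, and access to it can be controlled with an IAM policy on the secret itself.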

Related

ADF: linkedService template function not defined

I am currently trying to add some parameterised linked services. I have two services at the moment: a key vault and a data lake. The configurations are:
// Key vault
{
    "name": "Logical Key Vault",
    "properties": {
        "parameters": {
            "environment": {
                "type": "String"
            }
        },
        "annotations": [],
        "type": "AzureKeyVault",
        "typeProperties": {
            "baseUrl": "https://kv-@{linkedService().environment}.vault.azure.net"
        }
    }
}
// Data lake
{
    "name": "Logical Data Lake",
    "properties": {
        "type": "AzureBlobFS",
        "parameters": {
            "environment": {
                "type": "String"
            }
        },
        "annotations": [],
        "typeProperties": {
            "url": "https://sa@{replace(linkedService().environment, '-', '')}.dfs.core.windows.net",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "secretName": "storageAccountKey",
                "store": {
                    "referenceName": "Logical Key Vault",
                    "type": "LinkedServiceReference",
                    "parameters": {
                        "environment": {
                            "value": "@linkedService().environment",
                            "type": "Expression"
                        }
                    }
                }
            }
        }
    }
}
Both linked services are parameterised by an environment parameter, and I have confirmed that the Key Vault works fine and is able to correctly retrieve secrets. The problem happens when I attempt to retrieve the storage key from the key vault. I get the following error:
Error code: FailToResolveParametersInExploratoryController
Details: The parameters and expression cannot be resolved for schema operations.
Error Message: {
    "message": "ErrorCode=InvalidTemplate, ErrorMessage=The template function 'linkedService' is not defined or not valid."
}
My attempts at debugging this have identified the use of @linkedService in the store parameters to be the issue: this is where the Data Lake passes its own environment parameter on to the Key Vault so that it may obtain the storage key. If I remove this use of @linkedService().environment and replace it with a hard-coded value, the linked service successfully connects to the data lake.
The expression is trivially simple, and the web interface itself offers the option.
As a result, I am unsure why the use of @linkedService fails here. The web interface and the ability to use expressions suggest it should work, yet @linkedService is undefined for some reason.
While debugging this, I did find that the expression
@string(linkedService().environment)
does indeed work, but this seems rather odd, as the environment is itself a string and thus its conversion into a string should be a no-op. I have also looked into removing the @ entirely and trying
linkedService().environment
and while this does correctly resolve to the environment, it still results in an error, as the resulting parameter contains the surrounding quotation marks; the linked service then fails to connect to the key vault at https://kv-'foobar'.vault.azure.net, which is clearly invalid (assuming my environment was foobar).

"Select all" option for Azure DevOps variable group linked to KeyVault

I am running several YAML pipelines and use variable groups for this. I have a number of variable groups and each needs to be linked to a keyvault, and all secrets in that keyvault need to be added to the variable group.
The documentation (variable group documentation) only gives the option of clicking the "+Add" button and selecting each secret by hand. This takes up a lot of time, since there are a lot of secrets that need to be added.
Does anyone know of an option to select all secrets in a keyvault to add to a variable group?
Preferably from a powershell or CLI script.
Thanks for your help!
[EDIT]: I apologize for the confusing use of the word "secrets" before. I am looking to link the variable group to a KeyVault and add the secrets that are in there.
You can use the REST API Variablegroups - Update to batch-set the variables in a variable group.
PUT https://dev.azure.com/{organization}/_apis/distributedtask/variablegroups/{groupId}?api-version=6.0-preview.2
P.S. The id of a variable group is an integer that increases from 1 in the order the groups were created. You can also get the id of a variable group using the REST API Variablegroups - Get Variable Groups
GET https://dev.azure.com/{organization}/{project}/_apis/distributedtask/variablegroups?api-version=6.0-preview.2
to list all variable groups in your project and find the id of the specific variable group you want.
Here is an example of the REST API's request body:
{
    "name": "{variable group name}",
    "variables": {
        "A": {
            "isSecret": false,
            "value": "a"
        },
        "B": {
            "isSecret": true,
            "value": "b"
        },
        "C": {
            "isSecret": true,
            "value": "c"
        }
    },
    "variableGroupProjectReferences": [
        {
            "description": "",
            "name": "{variable group name}",
            "projectReference": {
                "id": "",
                "name": "{project name}"
            }
        }
    ]
}
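
To get the "select all" behaviour, you could script the whole thing: enumerate the secret names in the Key Vault, then PUT them all into the group in one call. A rough, untested Python sketch follows; the organization, project id, group id, PAT, vault URL, and group name are all placeholders, and note that a Key Vault-linked group may require additional provider fields in the body that are not shown here:

import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

ORG = "myorg"                          # placeholder organization
PROJECT_ID = "<project-guid>"          # placeholder project id
GROUP_ID = 42                          # placeholder variable group id
PAT = "<personal-access-token>"        # placeholder PAT
VAULT_URL = "https://kv-example.vault.azure.net"  # placeholder vault URL

# Enumerate every secret name in the vault; only the names are needed here.
client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
variables = {
    prop.name: {"isSecret": True, "value": ""}
    for prop in client.list_properties_of_secrets()
}

body = {
    "name": "my-variable-group",  # placeholder group name
    "variables": variables,
    "variableGroupProjectReferences": [
        {
            "description": "",
            "name": "my-variable-group",
            "projectReference": {"id": PROJECT_ID, "name": "my-project"},
        }
    ],
}

# Basic auth with an empty username and the PAT as the password.
resp = requests.put(
    f"https://dev.azure.com/{ORG}/_apis/distributedtask/variablegroups/{GROUP_ID}",
    params={"api-version": "6.0-preview.2"},
    json=body,
    auth=("", PAT),
)
resp.raise_for_status()
print(f"Updated variable group {GROUP_ID} with {len(variables)} variables")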

Add/remove pipeline checks using REST API

I have a requirement to dynamically add/remove or enable/disable approvals and checks on an Azure DevOps pipeline environment.
Is there a REST API for this?
Is there a REST API for this?
Yes, there is. But, as @Krysztof said, these APIs haven't been documented publicly as of today. That is because the feature you are looking for (configuring checks and approvals on environments) is only supported for YAML pipelines, and the corresponding REST API docs for it are still in development and haven't been published yet.
As a workaround, you can capture these APIs with the browser's developer tools (F12): perform the action in the UI, then capture and analyze the corresponding API calls to find what you are looking for.
Here is a summary.
The API used to add/delete approvals and checks on an environment is:
https://dev.azure.com/{org name}/{project name}/_apis/pipelines/checks/configurations
The corresponding HTTP methods for add and delete are POST and DELETE.
First, here are the sample request bodies for adding approvals and checks (a Python sketch of posting one of these bodies follows the three samples below).
1) Add approval to this environment:
{
    "type": {
        "id": "8C6F20A7-A545-4486-9777-F762FAFE0D4D", // Fixed value for Approval
        "name": "Approval"
    },
    "settings": {
        "approvers": [
            {
                "id": "f3c88b9a-b49f-4126-a4fe-3c99ecbf6303" // User id
            }
        ],
        "executionOrder": 1,
        "instructions": "",
        "blockedApprovers": [],
        "minRequiredApprovers": 0,
        "requesterCannotBeApprover": false // false allows the pipeline requester to approve it
    },
    "resource": {
        "type": "environment",
        "id": "1", // Environment id
        "name": "Deployment" // Environment name
    },
    "timeout": 43200 // How long the approval may stay pending, in minutes (43200 minutes = 30 days)
}
2) Add a task check (Azure Function, Invoke REST API, and similar tasks):
{
    "type": {
        "id": "fe1de3ee-a436-41b4-bb20-f6eb4cb879a7", // Fixed value for a task check
        "name": "Task Check" // Fixed value
    },
    "settings": {
        "definitionRef": {
            "id": "537fdb7a-a601-4537-aa70-92645a2b5ce4", // Task id
            "name": "AzureFunction", // Task name
            "version": "1.0.10" // Task version
        },
        "displayName": "Invoke Azure Function", // The task display name configured
        "inputs": {
            "method": "POST",
            "waitForCompletion": "false",
            "function": "csdgsdgsa",
            "key": "436467543756" // These are all task inputs
        },
        "retryInterval": 5, // The retry interval specified
        "linkedVariableGroup": "AzKeyGroup" // The variable group this task is linked with
    },
    "resource": {
        "type": "environment",
        "id": "2",
        "name": "Development"
    },
    "timeout": 43200
}
For this request body, you can find the corresponding task id in the public source code: just check the task.json file of the corresponding task.
3) Add template check:
{
    "type": {
        "id": "4020E66E-B0F3-47E1-BC88-48F3CC59B5F3", // Fixed value for a template check
        "name": "ExtendsCheck" // Fixed value
    },
    "settings": {
        "extendsChecks": [
            {
                "repositoryType": "git", // "github" for a GitHub source, "bitbucket" for a Bitbucket source
                "repositoryName": "MonnoPro",
                "repositoryRef": "refs/heads/master",
                "templatePath": "tem.yml"
            }
        ]
    },
    "resource": {
        "type": "environment",
        "id": "6",
        "name": "development"
    }
}
In this body, if the template source comes from GitHub or Bitbucket, the value of repositoryName should look like {org name}/{repo name}.
Hope this helps.
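
A rough sketch of how posting one of these bodies might look from Python; untested, and the organization, project, PAT, user id, and environment details are placeholders. Since these endpoints are undocumented, the api-version and the DELETE path with the check id appended are assumptions:

import requests

ORG, PROJECT = "myorg", "myproject"  # placeholders
PAT = "<personal-access-token>"      # placeholder PAT

# Request body 1) from above: add an approval to an environment.
approval_body = {
    "type": {"id": "8C6F20A7-A545-4486-9777-F762FAFE0D4D", "name": "Approval"},
    "settings": {
        "approvers": [{"id": "<user-guid>"}],  # placeholder user id
        "executionOrder": 1,
        "instructions": "",
        "blockedApprovers": [],
        "minRequiredApprovers": 0,
        "requesterCannotBeApprover": False,
    },
    "resource": {"type": "environment", "id": "1", "name": "Deployment"},
    "timeout": 43200,
}

resp = requests.post(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/checks/configurations",
    params={"api-version": "7.1-preview.1"},  # assumption
    json=approval_body,
    auth=("", PAT),  # basic auth: empty username, PAT as password
)
resp.raise_for_status()
check_id = resp.json()["id"]

# Removing the check again is a DELETE against the same configurations
# endpoint with the check configuration id appended (assumption).
requests.delete(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/checks/configurations/{check_id}",
    params={"api-version": "7.1-preview.1"},  # assumption
    auth=("", PAT),
).raise_for_status()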
This is an older question, but we had a similar need. There does not appear to be a direct API to query this, but this GitHub project pointed us in the right direction:
# GET ENVIRONMENT CHECKS (stored under .fps.dataProviders.data['ms.vss-pipelinechecks.checks-data-provider'].checkConfigurationDataList)
GET https://dev.azure.com/{{organization}}/{{project}}/_environments/{{environment_id}}/checks?__rt=fps&__ver=2
As mentioned above, the list of who is authorized is provided under .fps.dataProviders.data['ms.vss-pipelinechecks.checks-data-provider'].checkConfigurationDataList.
The officially documented APIs can tell you that there are checks in place. For example:
GET https://dev.azure.com/{organization}/{project}/_apis/pipelines/checks/configurations?resourceType=environment&resourceId={id}
can tell you that you have checks enabled (including an Approval check), but this isn't super useful, as it does not give a list of who can approve.
Note that you can get the list of environments (to get their resource ids) using this documented API:
GET https://dev.azure.com/{organization}/{project}/_apis/distributedtask/environments?api-version=7.1-preview.1
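
For example, a minimal Python sketch of that documented list call (untested; the organization, project, PAT, and environment id are placeholders):

import requests

ORG, PROJECT = "myorg", "myproject"  # placeholders
PAT = "<personal-access-token>"      # placeholder PAT
ENV_ID = 1                           # placeholder environment resource id

resp = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/checks/configurations",
    params={
        "resourceType": "environment",
        "resourceId": ENV_ID,
        "api-version": "7.1-preview.1",
    },
    auth=("", PAT),  # basic auth: empty username, PAT as password
)
resp.raise_for_status()
for check in resp.json()["value"]:
    print(check["id"], check["type"]["name"])  # e.g. Approval, Task Check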
This is not supported at the moment. You can upvote the feature request here to show your interest.

Azure Automation Registration Endpoint is corrupted when used to pull DSC configuration

For some reason, I keep getting these weird issues...
In this case, I have a Key and an Endpoint URL for the Automation Account stored as Secrets in a KeyVault (I don't know of a way to extract them natively from the Automation Account using ARM).
I can extract these values perfectly, and they are published to the Template that runs a PowerShell extension to pull a DSC Configuration.
For example, as seen as an Input when deploying the Template:
"RegistrationUrl":"https://ase-agentservice-prod-1.azure-automation.net/accounts/e0799801-a8da-8934-b0f3-9a43191dd7e6"
However, I receive the following error (note the URL in the error with 3 forward slashes):
"code": "VMExtensionProvisioningError",
"message": "VM has reported a failure when processing extension 'dscLcm'.
Error message: "DSC Configuration 'ConfigureLCMforAAPull' completed with error(s). Following are the first few: The attempt to 'get an action' for AgentId 11A5A267-6D00-11E7-B07F-000D3AE0FB1B from server URL https://ase-agentservice-prod-1.azure-automation.net///accounts/e0799801-a8da-8934-b0f3-9a43191dd7e6/Nodes(AgentId='11A5A267-6D00-11E7-B07F-000D3AE0FB1B')/GetDscAction failed with server error 'ResourceNotFound(404)'.
For further details see the server error message below or the DSC debug event log with ID 4339.
ServerErrorMessage:- 'No NodeConfiguration was found for the agent.'\"."
The Endpoint URL is passed as a Secure String. I tried passing it as a normal string - same problem.
The Key and the Endpoint are fed into the Template as Parameters:
"dscKeySecret": {
"type": "securestring",
"metadata": {
"description": "Key for PowerShell DSC Configuration."
}
},
"dscUrlSecret": {
"type": "securestring",
"metadata": {
"description": "Url for PowerShell DSC Configuration."
}
},
These values are used to create a parameter to be passed to the next template that runs the VM Extension.
"extn-settings": {
"value": {
"configuration": {
"url": "[concat(variables('urls').dscScripts, '/', 'lcm-aa-pull', '/', 'lcm-aa-pull', '.zip')]",
"script": "[concat('lcm-aa-pull', '.ps1')]",
"function": "ConfigureLCMforAAPull"
},
"configurationArguments": {
"registrationKey": {
"username": "dsckeySecret",
"password": "[parameters('dscKeySecret')]"
},
"registrationUrl": "[parameters('dscUrlSecret')]",
"configurationMode": "ApplyAndMonitor",
"configurationModeFrequencyMins": 15,
"domain": "[variables('names').domain]",
"name": "dscLcm",
"nodeConfigurationName": "[variables('names').config.ad]",
"rebootNodeIfNeeded": true,
"refreshFrequencyMins": 30
},
"protectedSettings": null,
}
}
The next template receives the Parameters and uses them in the Properties of the VM's Resources section:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.22",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": "[parameters('extn-settings').configuration]",
"configurationArguments": "[parameters('extn-settings').configurationArguments]"
},
"protectedSettings": "[parameters('extn-settings').protectedSettings]"
}
So why is the URL being corrupted, with the first '/' being changed to '///'?
I don't know why the Endpoint URL has 3 x '/', but that wasn't the issue... I wish I had found the real issue before I posted this question.
The Node Configuration Name was wrong due to a spelling mistake (hangs head in shame).
Thanks anyway!

How to dynamically name an ECS cluster with cloudformation?

It's easy to create the cluster MyCluster with a hardcoded name:
"MyCluster": {
"Type": "AWS::ECS::Cluster"
}
However, I want a dynamic name while still being able to reference the named resource. Something like this, where the cluster name would be the stack name:
"NamedReferenceButNotClusterName": {
"Type": "AWS::ECS::Cluster",
"Properties": {
"Name": {"Ref": "AWS::StackName"} <-- Name property isnt allowed
}
},
"ecsService": {
"Type": "AWS::ECS::Service",
"DependsOn": [
{"Ref": "NamedReferenceButNotClusterName"} <-- not sure if I can even do this
],
"Properties": {
"Cluster": {
"Ref": "NamedReferenceButNotClusterName" <-- I really want this part
},
"DesiredCount": 2,
"TaskDefinition": {
"Ref": "EcsTask"
}
}
}
Is there any way to do this?
It's not possible with AWS CloudFormation.
"MyCluster": {
"Type": "AWS::ECS::Cluster"
}
The above CloudFormation script will generate an ECS cluster with a name of the format <StackName>-MyCluster-<RandomSequence>.
The stack name is provided as input when the CloudFormation script is executed. The random sequence is generated by CloudFormation and cannot be made deterministic.
At this point, the best bet for creating a cluster with a desired naming convention is to use the AWS CLI or a small program built on the AWS SDK, as sketched below.
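
If you go the SDK route, the call is a one-liner. A minimal boto3 sketch ("my-stack" is a placeholder for whatever name you want):

import boto3

ecs = boto3.client("ecs")

# Create the cluster with an exact, caller-chosen name instead of the
# <StackName>-MyCluster-<RandomSequence> name CloudFormation generates.
desired_name = "my-stack"  # placeholder, e.g. your stack name
response = ecs.create_cluster(clusterName=desired_name)
print(response["cluster"]["clusterArn"])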