I want to install a public key certificate into an Azure VM scale set using ARM, but I am having trouble getting the local path to the certificate right.
It is possible to install a certificate using the PowerShell DSC extension for VM scale sets together with the xCertificate DSC module.
I am using this code sample:
Configuration Example
{
    param
    (
        [Parameter()]
        [string[]]
        $NodeName = 'localhost'
    )

    Import-DscResource -ModuleName xCertificate

    Node $NodeName
    {
        # Import the public key certificate into the machine's trusted root store
        xCertificateImport MyTrustedRoot
        {
            Thumbprint = 'c81b94933420221a7ac004a90242d8b1d3e5070d'
            Location   = 'LocalMachine'
            Store      = 'Root'
            Path       = '.\Certificate\MyTrustedRoot.cer'
        }
    }
}
I am using the Publish-AzureRmVMDscConfiguration cmdlet to package and upload the DSC script along with the public key certificate to an Azure storage account so it can be used as part of the ARM deployment process.
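For reference, the packaging step looks roughly like the sketch below; the resource group and storage account names are placeholders, and it assumes your Azure PowerShell version supports the -AdditionalPath parameter for bundling the Certificate folder next to the configuration script:

# Sketch: package the configuration together with the .\Certificate folder and upload it
# to the storage account (resource group and account names are placeholders).
Publish-AzureRmVMDscConfiguration -ConfigurationPath .\InstallRootCaCert.ps1 `
    -ResourceGroupName 'my-rg' `
    -StorageAccountName 'mystorageaccount' `
    -AdditionalPath .\Certificate `
    -Force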
But I cannot figure out how to resolve the local path to the certificate; I get an error when using
.\Certificate\MyTrustedRoot.cer or $PSScriptRoot\Certificate\MyTrustedRoot.cer
I would think it should be possible either to resolve the file path in DSC or to use relative paths, so that the DSC configuration stays simple and is packaged together with the certificate.
UPDATE: Publish-AzureRmVMDscConfiguration zips and uploads the DSC script and the public key certificate to an Azure storage account. The VMSS DSC extension downloads the zip, unzips it locally on the VM and runs DSC, so the certificate is present locally on all VMs in the scale set.
But the path to the certificate is not deterministic, because it contains the version number of the DSC extension being used.
Well, the certificate doesn't exist locally, right? It is only in the storage account. So you would need to pass the path to the certificate in storage as a parameter, using the artifactsLocation and artifactsLocationSasToken values to build the URI.
But these are values that you would get from an ARM template deployment. As you are just using PowerShell to publish your DSC plus resources, you should be able to determine what the URI of your cert is. You specify the storage account as a parameter of the publish cmdlet, so you should be able to build the URI from that information, along the lines of the sketch below. Double-check the URI of the cert in the storage account.
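Purely as an illustration (the container and blob names are assumptions, and this presumes the .cer has been uploaded as its own blob rather than only inside the DSC zip):

# Sketch: build a SAS-protected URI for a certificate blob (all names are placeholders).
# Note: on older AzureRM versions the object returned by Get-AzureRmStorageAccountKey has a slightly different shape.
$storageAccountName = 'mystorageaccount'
$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName 'my-rg' -Name $storageAccountName)[0].Value
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey

$sasToken = New-AzureStorageBlobSASToken -Container 'artifacts' -Blob 'MyTrustedRoot.cer' `
    -Permission r -Context $context
$certUri  = "https://$storageAccountName.blob.core.windows.net/artifacts/MyTrustedRoot.cer$sasToken"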
Alternatively, use a Custom Script Extension on the VM to download the cert from storage to the VM; then the cert will exist locally.
Are you doing this all in PowerShell? You may want to look at using an ARM template for this, e.g. the Azure Quickstart Templates.
Locally on the Azure VMs, the path to the DSC file and the certificate is somewhere under the DSC extension work folder, extracted from the zip package created by the Publish-AzureRmVMDscConfiguration cmdlet, e.g.
C:\Packages\Plugins\Microsoft.Powershell.DSC\2.23.0.0\DSCWork\InstallRootCaCert.ps1.0\Certificate\MyTrustedRoot.cer
The best solution I could come up with was to use the absolute path to the certificate and lock the DSC extension version by explicitly setting typeHandlerVersion to 2.23 in the ARM template and setting autoUpgradeMinorVersion to false.
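If you build the scale set model from PowerShell rather than editing the ARM template directly, the same pinning can be expressed with the corresponding cmdlet parameters. This is only a sketch with placeholder names, mirroring the typeHandlerVersion and autoUpgradeMinorVersion properties mentioned above:

# Sketch: pin the DSC extension version on a VM scale set model (names are placeholders;
# the DSC configuration settings themselves are omitted here).
$vmss = Get-AzureRmVmss -ResourceGroupName 'my-rg' -VMScaleSetName 'my-vmss'

$vmss = Add-AzureRmVmssExtension -VirtualMachineScaleSet $vmss `
    -Name 'Microsoft.Powershell.DSC' `
    -Publisher 'Microsoft.Powershell' `
    -Type 'DSC' `
    -TypeHandlerVersion '2.23' `
    -AutoUpgradeMinorVersion $false

Update-AzureRmVmss -ResourceGroupName 'my-rg' -VMScaleSetName 'my-vmss' -VirtualMachineScaleSet $vmss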
I was recently ordered by our IT team to disable the NAT pools on my Service Fabric cluster due to security risks. The only way I could do this was to deploy a new cluster with all its components.
Because this is a test environment, I opted to use a self-signed cert without a password for my cluster; the certificate is in my vault and the cluster is up and running.
The issue I have now is that when I try to deploy my application from an Azure DevOps release pipeline, I get the following message:
An error occurred attempting to import the certificate. Ensure that your service endpoint is configured properly with a correct certificate value and, if the certificate is password-protected, a valid password. Error message: Exception calling "Import" with "3" argument(s): "The specified network password is not correct.
I generated the self-signed certificate in Key Vault, downloaded the certificate, and used PowerShell to get the Base64 string for the service connection.
Should I create the certificate myself, with a password?
With the direction of the two comments supplied, I ended up generating a certificate on my local machine using the PowerShell script included with Service Fabric's local runtime.
A small caveat here is to change the key size in the script to a larger key size than the default, because Key Vault does not support 1024-bit keys.
I then exported the PFX from my user certificate store, added a password (this is required for the service connection) and imported the new PFX into my Key Vault.
Redeployed my cluster and it worked.
That's it. Plain and simple.
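For anyone who prefers not to dig the script out of the Service Fabric runtime, roughly the same result can be sketched with stock cmdlets; the DNS name, paths, password and vault name below are all placeholders:

# Sketch: create a 2048-bit self-signed cert, export it as a password-protected PFX,
# and upload it to Key Vault (all names, paths and the password are placeholders).
$cert = New-SelfSignedCertificate -DnsName 'mytestcluster.westeurope.cloudapp.azure.com' `
    -CertStoreLocation 'Cert:\CurrentUser\My' -KeyLength 2048

$pfxPassword = ConvertTo-SecureString -String 'P@ssw0rd!' -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath 'C:\temp\cluster.pfx' -Password $pfxPassword

# Requires the AzureRM.KeyVault module and an authenticated session (Login-AzureRmAccount).
Import-AzureKeyVaultCertificate -VaultName 'my-keyvault' -Name 'cluster-cert' `
    -FilePath 'C:\temp\cluster.pfx' -Password $pfxPassword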
The first step in my pipeline is to remove services that are no longer supported. To do that I need to use Connect-ServiceFabricCluster to connect to the cluster, but that requires a certificate installed on the local machine. I won't have a local machine in a hosted pipeline, and I have a problem with installing the certificate on the hosted VM for security reasons.
So how do I connect?
1.
I don't know if you have tried the Azure CLI command sfctl cluster select, which allows you to specify a certificate; check here for more information.
In order to use the certificate in your pipeline, go to Library under Pipelines, click Secure files and add your certificate from your local machine. Make sure Authorize for use in all pipelines is checked when adding your certificate.
Then you can add a Download secure file task to download your certificate in your pipeline.
Then you can consume it in your next task by referring to the download location "$(Agent.TempDirectory)\yourcertificatefilename"; check here for more information.
sfctl cluster select --endpoint https://testsecurecluster.com:19080 --cert "$(Agent.TempDirectory)\yourcertificatefilename" --key ./keyfile.key
2.
If the above sfctl cluster select approach is not working, you can install the certificate that was already uploaded onto the hosted agent with a PowerShell task:
Import-Certificate -FilePath "$(Agent.TempDirectory)\yourcertificatefilename" -CertStoreLocation cert:\LocalMachine\Root
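With the certificate in a local store, the connection from the same PowerShell task could then look roughly like this; the endpoint, thumbprint and store are placeholders, so adjust them to your cluster and to wherever the certificate was imported:

# Sketch: connect to the cluster with the imported certificate (endpoint and thumbprint are placeholders).
Connect-ServiceFabricCluster -ConnectionEndpoint 'testsecurecluster.com:19000' `
    -X509Credential `
    -ServerCertThumbprint '0123456789ABCDEF0123456789ABCDEF01234567' `
    -FindType FindByThumbprint `
    -FindValue '0123456789ABCDEF0123456789ABCDEF01234567' `
    -StoreLocation LocalMachine `
    -StoreName My   # adjust the store to match where the certificate was installed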
3.
If the hosted agent raises security concerns, you can create your own self-hosted agent on a machine you control. You can then install the certificate on that on-premises agent.
To create a self-hosted agent:
You need to get a PAT and assign it the Agent Pools scope. Click here for detailed steps. You will need the PAT to configure your self-hosted agent later.
Then go to Project Settings, select Agent pools under Pipelines, create a self-defined agent pool if you do not have one, then select your agent pool, click New agent and follow the steps to create your own agent.
Hope the above is helpful to you!
I recently installed the AWS .NET SDK, which came with the PowerShell for AWS CLI enhancements.
I went ahead and added an IAM user, generated a key pair, and installed it into the SDK Store:
Set-AWSCredentials -AccessKey AAAAAAAAAAAAAA -SecretKey AAAAAAAAAA/AAAA -StoreAs default
I then tested my credentials by making a request that I knew I didn't have access to:
Get-EC2Instance
... Then I was surprised to see it print out three EC2 instances. Instances I don't own! I tried this as well:
Get-EC2Instance -Profile default
Which produced the desired result: insufficient access. To continue testing, I added EC2FullAccess to my user and repeated the last command. It correctly printed my personal-use EC2 instance:
GroupNames : {}
Groups : {}
Instances : {aws_personal}
OwnerId : 835586800000
RequesterId :
ReservationId : r-0e625fd77d0000000
However, whenever I attempt a command without -Profile default, I am accessing another account. Without going into too much detail, I disabled my access to that account in the AWS Dashboard. Now commands produce this output:
Get-EC2Instance : AWS was not able to validate the provided access credentials
At line:1 char:1
+ Get-EC2Instance
I do not have a .AWS directory in my %UserProfile%. Searching my computer for .aws or credentials fails to find a credential file which would explain this.
I can't explain why you are seeing different behavior between specifying the -ProfileName parameter and not, but I can shed light on where credentials are coming from.
The PowerShell tools can read from two credential locations (as well as environment variables and EC2 instance metadata when running on an EC2 instance).
Firstly, there is the encrypted SDK credential store file, located at C:\Users\userid\AppData\Local\AWSToolkit\RegisteredAccounts.json - this one is shared between the PowerShell tools, the AWS SDK for .NET and the AWS Toolkit for Visual Studio. The tools can also read from the ini-format shared credentials file (shared with the AWS CLI and other AWS SDKs). Note that although the shared credentials file can be moved between accounts and machines, the encrypted SDK file can be used only by the owning user and only on that single machine.
The PowerShell tools currently only write to one store though - the encrypted file used by the .NET tools exclusively. So when you set up credentials and used the -StoreAs option, the profile would have been written to the RegisteredAccounts.json file. If you open this file in a text editor you should see your profile named 'default' along with two encrypted blobs that are your access and secret keys.
When a profile name is given with a command, the tools look for a profile with that name first in RegisteredAccounts.json and, if it is not found there, attempt to read the ini-format file in %USERPROFILE%\.aws\credentials (to bypass the encrypted store, you can use the -ProfilesLocation parameter to point at the ini-format file you want to load credentials from, if it's not at its default location under your user profile).
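As a concrete (hypothetical) example of that -ProfilesLocation override, pointing a command at the default shared credentials file location would look something like this:

# Sketch: read the 'default' profile from the ini-format shared credentials file,
# bypassing the encrypted RegisteredAccounts.json store.
Get-EC2Instance -ProfileName default -ProfilesLocation "$env:USERPROFILE\.aws\credentials"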
If no profile name is given, the tools probe to find the closest set of credentials - the search 'path' is described in a blog post at https://blogs.aws.amazon.com/net/post/Tx2HQ4JRYLO7OC4/. Where you see references to loading a profile, remember that the tools check for the profile first in RegisteredAccounts.json and then in the shared credentials file.
Hope this helps you track down where the tools are finding credentials.
I need to install a certificate in a Service Fabric cluster that I created using an ARM template. I was able to install a certificate with the private key using the following helper PowerShell command:
> Invoke-AddCertToKeyVault
https://github.com/ChackDan/Service-Fabric/tree/master/Scripts/ServiceFabricRPHelpers
Once this certificate is in Azure Key Vault I can modify my ARM template to install the certificate automatically on the nodes in the cluster:
"osProfile": {
"secrets": [
{
"sourceVault": {
"id": "[parameters('vaultId')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "https://mykeyvault.vault.azure.net:443/secrets/fabrikam/9d1adf93371732434"
}
]
}
]
}
The problem is that Invoke-AddCertToKeyVault expects me to provide a PFX file, assuming I have the private key.
The script is creating the following JSON blob:
$jsonBlob = @{
    data     = $base64
    dataType = 'pfx'
    password = $Password
} | ConvertTo-Json
I modified the script to remove the password and change the dataType to 'cer', but when I deployed the template in Azure it said the dataType was not valid.
How can I deploy a certificate to a service fabric cluster that does not include the private key?
1) SF does not really care whether you use .cer or .pfx. All SF needs is for the certificate to be available in the local certificate store on the VM.
2) The issue you are running into is that the CRP agent, which installs the cert from Key Vault into the local certificate store on the VM, supports only .pfx today.
So now you have two options:
1) Create a PFX file without a private key and use it.
Here is how to do it via C# (or PowerShell):
Load the certificate into an X509Certificate2 object.
Then use the Export method with X509ContentType = Pfx:
https://msdn.microsoft.com/en-us/library/24ww6yzk(v=vs.110).aspx
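A rough PowerShell sketch of those two steps (the file paths are placeholders):

# Sketch: load a public-key-only .cer and re-export it as a PFX that contains no private key
# (file paths are placeholders).
$cert  = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList 'C:\temp\MyTrustedRoot.cer'
$bytes = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx)
[System.IO.File]::WriteAllBytes('C:\temp\MyTrustedRoot.pfx', $bytes)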
2) Deploy the .cer using a custom VM extension. Since a .cer contains only the public key, there should be no privacy requirements. You can just upload the cert to a blob, and have a Custom Script Extension download it and install it on the machine.
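The script that such a Custom Script Extension runs could be as small as the sketch below; the blob URL is a placeholder, and it would need a SAS token appended if the container is not publicly readable:

# Sketch of a script a Custom Script Extension could run on each node
# (the blob URL is a placeholder; append a SAS token if the container is private).
Invoke-WebRequest -Uri 'https://mystorageaccount.blob.core.windows.net/certs/MyTrustedRoot.cer' `
    -OutFile "$env:TEMP\MyTrustedRoot.cer"
Import-Certificate -FilePath "$env:TEMP\MyTrustedRoot.cer" -CertStoreLocation 'Cert:\LocalMachine\Root'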
I am using VSTS (Visual Studio Team Services, formerly known as Visual Studio Online) for continuous deployment to an Azure VM, using an Azure File Copy task in my build definition.
The problem I am having is that I have an ACL set up on the Azure VM that only allows connections from my office for Remote PowerShell.
With the ACL in place, the Azure File Copy task fails with an error like "WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that the firewall exception for the WinRM service is enabled and allows access from this computer." With the ACL removed, everything works.
To be clear, this is not a problem with WinRM configuration or firewalls or anything like that. It is specifically the ACL on the VM that is blocking the activity.
So the question is: how can I get this to work without completely removing the ACL from my VM? I don't want to open up the VM's PowerShell endpoint to the world, but I need the Azure File Copy task in my build to succeed.
You can have an on-premises build agent that lives within your office's network and configure things so that the build only uses that agent.
https://msdn.microsoft.com/library/vs/alm/release/getting-started/configure-agents#installing
The Azure File Copy task needs to use the WinRM HTTPS protocol, so when you enable the ACL the hosted build agent can no longer reach WinRM on the Azure VM, and that causes the Azure File Copy task to fail.
When copying the files from the blob container to the Azure VMs, Windows Remote Management (WinRM) HTTPS protocol is used. This requires that the WinRM HTTPS service is properly setup on the VMs and a certificate is also installed on the VMs.
There isn't any easy workaround for this as far as I know. I would recommend setting up your own build agent in your network that can access WinRM on the Azure VM.