Install a certificate in a Service Fabric Cluster without a private key

I need to install a certificate in a Service Fabric cluster that I created using an ARM template. I was able to install a certificate with its private key using the following helper PowerShell command:
> Invoke-AddCertToKeyVault
https://github.com/ChackDan/Service-Fabric/tree/master/Scripts/ServiceFabricRPHelpers
Once this certificate is in Azure Key Vault I can modify my ARM template to install the certificate automatically on the nodes in the cluster:
"osProfile": {
"secrets": [
{
"sourceVault": {
"id": "[parameters('vaultId')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "https://mykeyvault.vault.azure.net:443/secrets/fabrikam/9d1adf93371732434"
}
]
}
]
}
The problem is that Invoke-AddCertToKeyVault expects me to provide a .pfx file, on the assumption that I have the private key.
The script is creating the following JSON blob:
$jsonBlob = @{
    data     = $base64
    dataType = 'pfx'
    password = $Password
} | ConvertTo-Json
I modified the script to remove the password and change dataType to 'cer', but when I deployed the template in Azure it said the dataType was not valid.
How can I deploy a certificate to a service fabric cluster that does not include the private key?

1) SF does not really care whether you used .cer or .pfx. All SF needs is for the certificate to be available in the local certificate store on the VM.
2) The issue you are running into is that the CRP agent, which installs the cert from Key Vault into the local certificate store on the VM, supports only .pfx today.
So now you have two options:
1) Create a .pfx file without a private key and use it.
Here is how to do it via C# (or PowerShell):
Load the certificate into an X509Certificate2 object,
then use the Export method with X509ContentType = Pfx:
https://msdn.microsoft.com/en-us/library/24ww6yzk(v=vs.110).aspx
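A minimal PowerShell sketch of that export (the file paths are illustrative):
# Load the public-only .cer and re-export it as a .pfx container,
# then Base64-encode the bytes for the Key Vault JSON blob.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\fabrikam.cer")
$pfxBytes = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx)
[System.IO.File]::WriteAllBytes("C:\certs\fabrikam.pfx", $pfxBytes)
$base64 = [System.Convert]::ToBase64String($pfxBytes)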
2) Deploy the .cer using a custom VM extension. Since a .cer holds only the public key, there should be no privacy requirements: you can just upload the cert to a blob and have a custom script extension download it and install it on the machine.
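A rough sketch of such an install script (the blob URL and target store are assumptions; Import-Certificate comes from the built-in PKI module on Windows Server):
# Download the public cert from blob storage and install it into the machine store.
$blobUrl = "https://mystorage.blob.core.windows.net/certs/fabrikam.cer"  # hypothetical blob URL
Invoke-WebRequest -Uri $blobUrl -OutFile "$env:TEMP\fabrikam.cer"
Import-Certificate -FilePath "$env:TEMP\fabrikam.cer" -CertStoreLocation Cert:\LocalMachine\My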

Related

Is it possible to create a tls kubernetes secret using Azure Key Vault data resources in Terraform?

I have a certificate file and a private key file that I am using to implement TLS-encrypted traffic for several different k8s pods running under an NGINX ingress load balancer. This works fine (i.e. the web apps are visible and show as secure in a browser) if I create the kubernetes.io/tls secret in either of these ways:
Use kubectl: kubectl create secret tls my-tls-secret --key <path to key file> --cert <path to cert file>.
Reference those files locally in Terraform:
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = file("${path.module}/certfile.cer"),
"tls.key" = file("${path.module}/keyfile.key")
}
}
However, neither of these methods is ideal, because #1 turns my terraform plan/apply steps into a 2-step process, and for #2 I don't want to commit the key file to source control for security reasons.
So, my question is: is there a way to do this by using some combination of Azure Key Vault data resources (i.e. keys, secrets or certificates)?
I have tried the following:
Copy/pasting the cert and key into Key Vault secrets (I also tried base64-encoding the values before pasting them into the Key Vault, and wrapping the tls.crt and tls.key values in base64decode() in the Terraform):
data "azurerm_key_vault_secret" "my_private_key" {
name = "my-private-key"
key_vault_id = data.azurerm_key_vault.mykv.id
}
data "azurerm_key_vault_secret" "my_certificate" {
name = "my-certificate"
key_vault_id = data.azurerm_key_vault.mykv.id
}
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = data.azurerm_key_vault_secret.my_certificate.value,
"tls.key" = data.azurerm_key_vault_secret.my_private_key.value
}
}
Tried importing the cert as an Azure Key Vault certificate and accessing its attributes like so:
data "azurerm_key_vault_certificate_data" "my_certificate_data" {
name = "my-certificate"
key_vault_id = data.azurerm_key_vault.mykv.id
}
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = data.azurerm_key_vault_certificate_data.my_certificate_data.pem,
"tls.key" = data.azurerm_key_vault_certificate_data.my_certificate_data.key
}
}
which results in an error in the NGINX ingress log of:
[lua] certificate.lua:253: call(): failed to convert private key from PEM to DER: PEM_read_bio_PrivateKey() failed, context: ssl_certificate_by_lua*, client: xx.xx.xx.xx, server: 0.0.0.0:443
Both of these attempts resulted in failure, and the sites ended up using the default/fake/acme Kubernetes certificate, so they show as insecure in a browser.
I could potentially store the files in a storage container and wrap my terraform commands in a script that pulls the cert/key from the storage container first, and then use working method #2 from above, but I'm hoping there's a way to avoid that which I am just missing. Any help would be greatly appreciated!
Method #1 from the original post works - the key point I was missing was how I was getting the cert/key into Azure Key Vault. As mentioned in the post, I was copy/pasting the text from the files into the web portal's secret creation UI, and something (most likely the PEM line breaks) was getting lost in translation that way. The right way to do it is to use the Azure CLI, like so:
az keyvault secret set --vault-name <vault name> --name my-private-key --file <path to key file>
az keyvault secret set --vault-name <vault name> --name my-certificate --file <path to cert file>
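As an optional sanity check, you can read a secret back and confirm the PEM line breaks survived the round trip:
az keyvault secret show --vault-name <vault name> --name my-certificate --query value -o tsv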

Service Fabric, Azure Devops Deployment fails : The specified network password is not correct

I was recently ordered by our IT team to disable the NAT pools on my Service Fabric cluster due to security risks. The only way I could do this was to deploy a new cluster with all its components.
Because this is a test environment, I opted to use a self-signed certificate without a password for my cluster; the certificate is in my vault and the cluster is up and running.
The issue I have now is that when I try to deploy my application from an Azure DevOps release pipeline, I get the following message:
An error occurred attempting to import the certificate. Ensure that your service endpoint is configured properly with a correct certificate value and, if the certificate is password-protected, a valid password. Error message: Exception calling "Import" with "3" argument(s): "The specified network password is not correct.
I generated the self-signed certificate in Key Vault, downloaded the certificate, and used PowerShell to get the Base64 string for the service connection.
Should I create the certificate myself, with a password?
With the direction of the two comments supplied, I ended up generating a certificate on my local machine using the PowerShell script included with Service Fabric's local runtime.
A small caveat here is to change the key size in the script to a larger key size than the default, because Key Vault does not support 1024-bit keys.
I then exported the .pfx from my user certificates, added a password (this is required for the service connection), and imported the new .pfx into my Key Vault.
Redeployed my cluster and it worked.
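For reference, a minimal sketch of the same flow using New-SelfSignedCertificate directly (the DNS name, file path and password are illustrative; the 2048-bit key length is what keeps Key Vault happy):
# Generate a 2048-bit self-signed cert, then export it as a password-protected .pfx.
$cert = New-SelfSignedCertificate -DnsName "mycluster.westus.cloudapp.azure.com" `
    -CertStoreLocation Cert:\CurrentUser\My -KeyLength 2048
$password = ConvertTo-SecureString -String "SomePfxPassword!" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath .\cluster.pfx -Password $password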

Importing a client certificate (with chain) on all service fabric cluster nodes for end user communication

I need to import my partners' X509 client certificates (along with the complete chain) on all of my Service Fabric cluster nodes so that I can validate each incoming request and authenticate each partner based on the client certificate. This means that when I import a client certificate, I want the related intermediate certificate (that signed the client certificate) and the related root certificate (that signed the intermediate certificate) to be installed automatically into the appropriate cert stores, such as 'Intermediate Certification Authorities' and 'Trusted Root Certification Authorities', in the Local Machine store.
The reason why I want the entire chain stored in the appropriate locations in the certificate store is that I intend to validate incoming client certificates using X509Chain (in the System.Security.Cryptography.X509Certificates namespace) in my service authentication pipeline component. X509Chain seems to depend on the 'Trusted Root Certification Authorities' store for complete root certificate validation.
There is a lot of information on how to secure (a) node-to-node and (b) client-to-cluster communication, such as this: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security. However, there is not much information on securing the communication between services (hosted in a Service Fabric cluster) and end-user consumers using client certificates. If I missed this information, please let me know.
I don't have a lot of partner client certificates to configure; the number of partners is well within a manageable range. Also, I cannot recreate the cluster every time there is a new partner client certificate to add.
Do I need to leverage the /ServiceManifest/CodePackage/SetupEntryPoint element in the ServiceManifest.xml file and write custom code to import partner certificates (that are stored in the key vault or elsewhere)? What are the pros and cons of this approach?
Or is there any other easy way to import partner certificates that satisfies all of my requirements? If so, please give detailed steps on how to achieve this.
Update:
I tried the suggested method of adding client certificates under the osProfile section, as described in the link above. This seemed pretty straightforward.
To be able to do this, I first needed to push the related certificates (as secrets) into the associated key vault, as described at this link. In the section "Formatting certificates for Azure resource provider use", that article describes how to format the certificate information as JSON before storing it as a secret in key vault. The JSON has the following format for uploading pfx file bytes:
{
  "dataType": "pfx",
  "data": "base64-encoded-cert-bytes-go-here",
  "password": "pfx-password"
}
However, since I am dealing with the public portion of client certificates, I have no .pfx files, only Base64-encoded .cer files on Windows (which are apparently the same as .pem files elsewhere). And there is no password for the public portion of a certificate. So I changed the JSON to the following:
{
  "dataType": "pem",
  "data": "base64-encoded-cert-bytes-go-here"
}
When I invoked New-AzureRmResourceGroupDeployment with the related ARM template (with the appropriate changes under the osProfile section), I got the following error:
New-AzureRmResourceGroupDeployment : 11:08:11 PM - Resource Microsoft.Compute/virtualMachineScaleSets 'nt1vm' failed with message '{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "CertificateImproperlyFormatted",
        "message": "The secret's JSON representation retrieved from https://xxxx.vault.azure.net/secrets/ClientCert/ba6855f9866644ccb4c436bb2b7675d3 has data type pem which is not an accepted certificate type."
      }
    ]
  }
}'
I also tried using the 'cer' data type, as shown below:
{
  "dataType": "cer",
  "data": "base64-encoded-cert-bytes-go-here"
}
It also resulted in the same error.
What am I doing wrong?
I'd consider importing the certificates on all nodes as described here (step 5). You can add multiple certificates to specified stores by using ARM templates that reference Azure Key Vault. Use durability level Silver/Gold to keep the cluster running during redeployment.
Be careful with adding certificates to the trusted root store. If a certificate is issued by a trusted CA, there's no direct need to put anything in the Trusted Root Certification Authorities store (the CA roots are already there).
Validate client certificates using X509Certificate2.Verify, unless every client has its own service instance to communicate with.
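A minimal sketch of that validation (in a real pipeline the certificate would come from the incoming request; the file path here is just for illustration):
# Verify() builds the chain against the machine's trusted roots and
# checks validity period and revocation.
$clientCert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\partner-client.cer")
if ($clientCert.Verify()) {
    Write-Output "Client certificate is valid."
} else {
    Write-Output "Client certificate failed chain validation."
}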

How to install public key certificate in Azure VM using DSC

I want to install a public key certificate into an Azure VM scale set using ARM, but I am having issues getting the local path to the certificate correct.
It is possible to install a certificate using the PowerShell DSC extension for VM scale sets together with the xCertificate DSC module.
I am using this code sample:
Configuration Example
{
    param
    (
        [Parameter()]
        [string[]]
        $NodeName = 'localhost'
    )

    Import-DscResource -ModuleName xCertificate

    Node $NodeName
    {
        xCertificateImport MyTrustedRoot
        {
            Thumbprint = 'c81b94933420221a7ac004a90242d8b1d3e5070d'
            Location   = 'LocalMachine'
            Store      = 'Root'
            Path       = '.\Certificate\MyTrustedRoot.cer'
        }
    }
}
I am using the Publish-AzureRmVMDscConfiguration cmdlet to package and upload the DSC script, along with the public key certificate, to an Azure storage account so it can be used as part of the ARM deployment process.
But I cannot figure out how to resolve the local path to the certificate; I get an error when using either
.\Certificate\MyTrustedRoot.cer or $PSScriptRoot\Certificate\MyTrustedRoot.cer
I would think it is possible to either resolve the file in DSC or use relative paths, to keep the DSC configuration simple and packaged together with the certificate.
UPDATE: Publish-AzureRmVMDscConfiguration zips and uploads the DSC script and the public key certificate to an Azure storage account. The VMSS DSC extension downloads the zip, unzips it locally on the VM, and runs DSC, so the certificate is present locally on all VMs in the scale set.
But the path to the certificate is not deterministic, due to the version number of the DSC extension appearing in the path.
Well, the certificate doesn't exist locally, right? It is only in the storage account. So you would need to pass the path to the certificate in storage as a parameter, using the artifactsLocation and artifactsLocationSasToken values to make the URI.
But these are values that you would get from an ARM template deployment. As you are just using PowerShell to publish your DSC plus resources, you should be able to determine what the URI of your cert is. You specify the storage account as a param of the publish cmdlet, so you should be able to build the URI with that information. Double-check the URI of the cert in the storage account.
Alternatively, use a custom script extension on the VM to download the cert from storage to the VM; then the cert will exist locally.
Are you doing this all in PowerShell? You may want to look at using an ARM template for this, e.g. the Azure Quickstart Templates.
Locally on the Azure VMs, the path to the DSC file and the certificate is somewhere under the DSC extension's work folder, extracted from the zip package created by the Publish-AzureRmVMDscConfiguration cmdlet, e.g.
C:\Packages\Plugins\Microsoft.Powershell.DSC\2.23.0.0\DSCWork\InstallRootCaCert.ps1.0\Certificate\MyTrustedRoot.cer
The best solution I could come up with was to use the absolute path to the certificate and lock the DSC extension version by explicitly setting typeHandlerVersion to 2.23 in the ARM template and setting autoUpgradeMinorVersion to false.
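With the version pinned like that, the xCertificateImport resource from the question can reference the absolute path; a sketch (the DSCWork subfolder name follows the published configuration's name, so it will differ per deployment):
xCertificateImport MyTrustedRoot
{
    Thumbprint = 'c81b94933420221a7ac004a90242d8b1d3e5070d'
    Location   = 'LocalMachine'
    Store      = 'Root'
    Path       = 'C:\Packages\Plugins\Microsoft.Powershell.DSC\2.23.0.0\DSCWork\InstallRootCaCert.ps1.0\Certificate\MyTrustedRoot.cer'
}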

Azure Service Fabric, KeyVault, SSL Certificates

I want to secure my own HTTPS endpoint (a Node.js Express server) with a certificate which I have deployed to the cluster (that is, it exists in Cert:\LocalMachine\My).
I of course want to avoid having my certificate in source control. I can't use an EndpointBindingPolicy in the ServiceManifest because, as far as I'm aware, that is just for http.sys(?)-based systems, which this isn't.
What I thought I could perhaps do is run a SetupEntryPoint script to:
grab the certificate from the store
export it as a pfx with a random passphrase (or some appropriate format)
copy it to {pkgroot}/certs/ssl_cert.pfx
replace some sort of token in serverinit.js with the random passphrase
This way the code base doesn't need to have the certificate present; it just needs to trust that it will be there when the service is run.
However, I don't think I can do this, even if it is a sensible idea, as the certificates in the store are marked such that the private key is non-exportable! Or, at least, they are with my RDP account!
Is there a way to export the certificate with its private key?
What are my options here?
I ended up writing a PowerShell script which runs in my release pipeline; its arguments are clientId, clientSecret and certificateName. clientSecret is stored as a protected environment variable for my agent.
1) Create a new application registration under the same subscription as the Key Vault (which should be the same as the SF cluster), e.g. in portal.azure.com
2) Note down the app ID
3) Create an app secret
4) Modify the Key Vault ACL with the app as principal; grant get only on secrets
5) Use the REST API with the client ID and secret: https://learn.microsoft.com/en-us/rest/api/keyvault/getsecret
I chose this over grabbing the certificate in the SetupEntryPoint, for example, as it hides the client secret better from the open world (e.g. developers who shouldn't/don't need access to it).
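A sketch of step 5 in PowerShell ($tenantId, $vaultName and the other variables are the script's assumed inputs; the token request uses the v1 AAD endpoint with the Key Vault resource URI):
# Acquire an AAD token using the app's client credentials...
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -Body @{
        grant_type    = 'client_credentials'
        client_id     = $clientId
        client_secret = $clientSecret
        resource      = 'https://vault.azure.net'
    }
# ...then call the Key Vault getsecret REST API with it.
$secret = Invoke-RestMethod -Method Get `
    -Uri "https://$vaultName.vault.azure.net/secrets/$($certificateName)?api-version=7.0" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }
$secret.value  # Base64-encoded .pfx when the certificate was created in Key Vault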