Using self-signed certificate for code signing software

Currently our company uses a digital certificate from Verisign/Symantec for code signing our software.
We have someone in our company trying to persuade us to use a self-signed certificate instead of one purchased from Verisign/Symantec, partly as a cost-cutting measure (even though they're pretty damn cheap for a 2-3 year renewal) and partly to make things easier in a patching sense, since the systems our software runs on (industrial machines) have installed software with a non-Windows certificate store in which our certificate also needs to be managed. Apparently they want us to use the Windows Root CA to generate our certificate so we don't have to keep patching new certificates in, and our certificate will essentially last as long as the Windows Root CA is valid ...
Everywhere I've looked, I've found people using self-signed certificates for things like website identity verification over the net. In a code-signing context there are plenty of examples of certificate generation, and people say you can use them for testing in an environment that's close to production (which I have done in the past), but I can't find any hard reasons why you should not use a self-signed certificate for code-signing production software.
It's been a while since I've had to look at the certificate side of things, but this just feels wrong.
It's possible that I'm just not experienced enough with certificates to see why this is a good idea. Does anyone have any input to help me understand the full implications of this?

Using a self-signed certificate should not work. The idea is that someone trustworthy (not you, but Verisign or some other party that is supposed to check your credentials) confirms that something is certified.
I'm not sure exactly how this works in Windows; it might be that they didn't implement something properly.

There is no problem using a self-signed certificate in Windows. Just put the root CA certificate and the signing certificate in the Windows certificate store of the client machines that will run the signed application and/or driver.
Managing self-signed certs within an organization is a PITA, which is why people pay good money to have somebody else do it.
If you are going to distribute your signed code outside your organization it is even more painful, as you will need to persuade your customers to accept your CA certificate, and nobody should ever accept a root certificate from an unknown source or one sent via insecure or unverifiable means.
See this answer for instructions on creating a self-signed CA and signing certificates with it.
The same is possible (but in my opinion more complicated) using Windows PowerShell; the sequence is the same but the commands change.
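As a rough illustration of the PowerShell route, a minimal sketch (the subject name and file names are placeholders, and it assumes the PKI module on a recent Windows version) of creating a self-signed code-signing certificate and trusting it on a client machine:
# Create a self-signed code-signing certificate in the current user's personal store.
$cert = New-SelfSignedCertificate -Type CodeSigningCert -Subject "CN=Contoso Code Signing" -CertStoreLocation Cert:\CurrentUser\My
# Export the public certificate so it can be distributed to the client machines.
Export-Certificate -Cert $cert -FilePath .\ContosoCodeSigning.cer
# On each client machine (elevated prompt): trust it both as a root and as a publisher.
Import-Certificate -FilePath .\ContosoCodeSigning.cer -CertStoreLocation Cert:\LocalMachine\Root
Import-Certificate -FilePath .\ContosoCodeSigning.cer -CertStoreLocation Cert:\LocalMachine\TrustedPublisher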

Related

How to sign correctly a Powershell script for AllSigned ExecutionPolicy?

We have an application that uses several PowerShell scripts. We received a complaint that the scripts aren't signed and can't be run under the strictest execution policy, AllSigned.
I signed them with our certificate, issued by a well-known CA, via signtool, just as we do for our DLLs and EXEs, but even after that there is an issue: when I try running the script I get this warning:
Do you want to run software from this untrusted publisher?
It's signed by a certificate issued by a known CA (Sectigo). The only way I can get rid of this warning is to add the certificate to Trusted Publishers. It's not great to make customers do those steps (though maybe it's a necessary security measure). Note: with the same certificate we sign our EXE and it works fine; Windows doesn't complain. (It looks like the PowerShell policies are stricter.)
Is it possible to somehow avoid this warning on the customer side without manually adding our certificate to Trusted Publishers? It looks to me like it is not possible.
What I've found out so far:
I've searched across the internet and it looks like there is no solution for this. Even with a PowerShell script signed by Microsoft Corporation I get the same warning unless I add the certificate to the Trusted Publishers store.
Also, HP, for example, directly recommends adding the certificate manually to the cert store.
The documentation about execution policies says this in the AllSigned section: "Prompts you before running scripts from publishers that you haven't yet classified as trusted or untrusted."
From all this information I take it that there is no way to avoid this warning on the customer side without adding the certificate to the cert store. I just want to make sure I'm right.
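If it comes to that, the Trusted Publishers import can at least be scripted on the customer side rather than done through the MMC; a minimal sketch, assuming the exported public certificate is named OurCodeSigning.cer:
# Run from an elevated prompt on the customer machine; the .cer file name is a placeholder.
Import-Certificate -FilePath .\OurCodeSigning.cer -CertStoreLocation Cert:\LocalMachine\TrustedPublisher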

Is there a way to generate a Microsoft Serialized Certificate Store without Windows?

Our company uses exclusively Apple devices. At the same time we use Microsoft 365. Using S/MIME on the desktop works, but using S/MIME with Outlook for iPhone leads to Outlook for iPhone complaining about the certs not being valid. Earlier, certs were rejected with the error message "unable to build chain" (or similar). It hints at Outlook for iPhone not being able to build the chain of trust because of missing root and intermediate CA certs.
I tried importing those as PEM or DER, without success. I built the trust chain by concatenating the certs and converting them all together into a P12/PFX, but to no avail.
Reading encrypted mails does work, by the way; sending does not.
Microsoft's support now suggests exporting the trust chain as a Microsoft Serialized Certificate Store (.sst extension), but that requires an MMC with the certificates snap-in. For my own cert I could do that using a VM, but we have more employees than just me. I found several hints at using PowerShell for this, but all the guides online only explain how to do it using the MMC.
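For reference, one PowerShell route is roughly the sketch below; it still needs a Windows machine (the PKI module is Windows-only), but it avoids clicking through the MMC for each user. The store paths and subject filter are placeholders:
# Gather the root and intermediate CA certificates and write them all into one .sst file.
Get-ChildItem Cert:\CurrentUser\Root, Cert:\CurrentUser\CA |
    Where-Object { $_.Subject -like '*Example CA*' } |
    Export-Certificate -Type SST -FilePath .\TrustChain.sst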

How to bootstrap certificates for LCM to reference for signature verification settings?

WMF 5.1 includes new functionality to allow signing of MOF documents and DSC Resource modules (reference). However, this seems very difficult to implement in reality -- or I'm making it more complicated than it is...
My scenario is VMs in Azure, and I'd like to leverage Azure Automation as the DSC pull server; however, I see this applying on premises too. The problem is that the certificate used to sign the MOF configurations and/or modules needs to be placed on the machine before fetching and applying the configuration, otherwise the configuration will fail because the certificate isn't trusted or present on the machine.
I tried using Azure KeyVault to bootstrap the certificate (just the public key because that's my understanding of how signing works) and that fails using Add-AzureRmVMSecret because the CertificateUrl parameter expects a full certificate with the public/private key pair to install. In an ideal world, this would be the solution but that's not the case...
Other ideas, again in this context, would be to upload the cert to blob storage, use a CustomScriptExtension to pull down the cert and install into the LocalMachine store but that feels nasty as well because, ideally, that script should be signed as well and that puts us back in the same spot.
I suppose another idea would be to first PUSH a configuration that downloaded and installed certificates only but that doesn't sound great either.
Last option would be to rely on an AD GPO or something similar to potentially push the certificate first...but, honestly, trying to move away from much of that if/when possible...
Am I off-base on this? It seems like this should be a solvable problem -- just looking for at least one "good" way of doing it.
Thanks
David Jones has quite a bit of experience dealing with this issue in an on-premises environment, but as you stated, the same concepts should apply to Azure. Here is a link to his blog. This is a link to his GitHub site with a PKITools module that he created. If all else fails you can reach out to him on Twitter.
While it's quite easy to populate a pre-booted image with public certificates, it's not possible (as far as I have found) to populate the private key.
DSC would require the private key to decrypt the passwords.
The most common tactic people blog about is to use the unattend file to script the import of a PFX. The issue there is that you have to leave the password for the PFX in plain text. Perhaps that is OK in your environment.
The other option requires a more complicated setup. Use a simple DSC configuration or a GPO to auto-enroll a unique certificate. Then have the system, via a first-boot script or a DSC custom resource, tickle an API (like Polaris) that triggers a DSC script which uses PKITools or another script to get the machine's public certificate. Then have that API push a new DSC config (or pull settings) to the machine.
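As a minimal sketch of the PFX-import tactic mentioned above (file path and password are placeholders, and the plain-text password caveat still applies):
# Import the PFX, private key included, into the machine store so DSC can decrypt credentials.
$pfxPassword = ConvertTo-SecureString -String 'PlaceholderPfxPassword' -AsPlainText -Force
Import-PfxCertificate -FilePath C:\Bootstrap\DscEncryption.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword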

When .net says "certificate valid", what is it checking?

I'm using the SignedXml.CheckSignature(X509Certificate2, boolean) method. I would like to know what checks are performed when deciding the validity of the certificate. I have verified that the Current User/Not Trusted list is checked. The documentation says it will use the "address book" store, searching by subject key identifier, to build the certificate chain. I imagine this means the Local Machine and Current User certificate stores?
Am I right to think that certificate revocation and signature timestamp are not checked? To do an OCSP check for certificate revocation, am I obliged to use Bouncy Castle?
In the remarks of the MSDN article you link to, one finds:
In version 1.1 of the .NET Framework, the X.509 certificate is not verified.
In version 2.0 and later, the X.509 certificate is verified.
In version 2.0 and later of the .NET Framework, the CheckSignature method will search the "AddressBook" store for certificates suitable for the verification. For example, if the certificate is referenced by a Subject Key Identifier (SKI), the CheckSignature method will select certificates with this SKI and try them one after another until it can verify the certificate.
Thus, first of all, the behavior of that method has changed across .NET Framework versions, so for reproducible results you had better not count on that method checking the certificate at all.
Furthermore, the formulation "try them one after another until it can verify the certificate" sounds like there might be nothing more than the mathematical test of whether or not the certificate is signed by its alleged issuer.
https://referencesource.microsoft.com/#System.Security/system/security/cryptography/xml/signedxml.cs,b9518cc2212419a2
It checks that:
The certificate has no Key Usage extension, or the Key Usage extension has either the Digital Signature or Non-Repudiation usage enabled.
The certificate chains up to a trusted root authority.
The certificate has not been revoked.
The certificate was not expired when you called this method (it doesn't know when the document was signed, so it doesn't answer that question).
None of the certificates in the chain are explicitly prohibited by the user or system configuration.
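A rough PowerShell sketch of the chain-building part of those checks, using X509Chain directly (it covers trust, revocation, and expiry, not the Key Usage check; the certificate selection is a placeholder):
# Pick a certificate to test (placeholder: first code-signing cert in the user store).
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
$chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
$chain.ChainPolicy.RevocationMode = [System.Security.Cryptography.X509Certificates.X509RevocationMode]::Online
# Build() returns $true only if the chain reaches a trusted root and nothing is revoked, expired, or distrusted.
$chain.Build($cert)
$chain.ChainStatus | ForEach-Object { "$($_.Status): $($_.StatusInformation)" }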

How Can I Prevent Needing to Re-sign My Code Every 1 or 2 Years?

I was reading What happens when a code signing certificate expires - Stack Overflow and wondering about a more solid answer. The answer provided was more about setting up your own CA. Even with your own CA, you will still need to deal with expiring code-signing certificates.
If you signed the code without using a time stamping service, after the certificate expires your code will no longer be trusted, and depending on security settings it may not be allowed to run. You will need to re-sign all of your code with a new certificate, or with a renewed certificate, every 1 or 2 years.
Trusted (digital) timestamping allows the digital signature to be valid even after the certificate itself has expired. You would need to re-sign code with the new certificate only if you have made changes.
Does this all sound correct? If so, I need recommendations on what timestamping service to use, preferably from someone who has actually used one. I'd also like to know if there are any in-house solutions, similar to being your own CA.
Right now this applies to PowerShell scripts, but I will eventually have the same issue with other code.
Update: Sample of how to sign a PS script with a timestamp (you can make a script for this):
Set-AuthenticodeSignature -FilePath "D:\Projects\A Sample\MyFile.ps1" `
    -Certificate (Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert |
        Where-Object { $_.FriendlyName -eq "Thawte Code Signing" }) `
    -IncludeChain All `
    -TimestampServer "http://timestamp.verisign.com/scripts/timstamp.dll"
Then, to see the Signer Certificate and TimeStamper Certificate, you can do this:
Get-AuthenticodeSignature MyFile.ps1 | fl *
It gives you the Subject (CN, OU, etc.), Issuer, Before/After Dates, and Thumbprints for both your cert and the timestamper's cert. You also get a message indicating the status of the signature.
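If you just want the timestamper's certificate, the Signature object returned by Get-AuthenticodeSignature exposes it directly; for example (assuming the script above was timestamped):
# Show who countersigned the timestamp and when that certificate expires.
(Get-AuthenticodeSignature MyFile.ps1).TimeStamperCertificate | fl Subject, NotBefore, NotAfter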
You're better off selecting one of the trusted certificate providers (Verisign, Thawte, Comodo, etc.). This allows you to sign your software without the user having to explicitly trust your private root CA. We've used Verisign, Thawte, Comodo, and even GoDaddy with timestamping, without any issues with the software becoming invalid even years after the certificate expired.
Time stamping is a free service -- it's really only a trusted provider verifying that you signed the file at a given time. Verisign's timestamp service is the standard one. The final example in the help for Set-AuthenticodeSignature demonstrates how to use it.
Lee Holmes [MSFT]
Windows PowerShell Development
Microsoft Corporation
You can't really escape having to re-sign code eventually. The advantage of running your own CA is that you can choose to issue your code-signing certs with longer lifetimes than the default, allowing you to wait longer before having to re-sign anything. The downside, of course, is having another service or server (your CA) to deal with.
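If you do go that route, note that the certificate lifetime is under your control; as a minimal sketch (the subject name is a placeholder, and with a real private CA you would issue from the CA rather than self-sign), PowerShell can create a long-lived code-signing certificate like this:
# Ten-year code-signing certificate; client machines still need to trust it explicitly.
New-SelfSignedCertificate -Type CodeSigningCert -Subject "CN=Internal Code Signing" `
    -NotAfter (Get-Date).AddYears(10) -CertStoreLocation Cert:\CurrentUser\My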