This may sound like a stupid question, but it is the first time I'm researching this topic. Is it possible to create a chain of certificates?
So currently we have this structure:
Root CA --> Intermediate CA --> Issues certificates
This is the structure we would want:
Root CA --> Intermediate CA --> Another Intermediate CA --> Issue certs
--> Another Intermediate CA --> Issue certs
--> Another Intermediate CA --> Issue certs
I have done a little research, but I can't find out whether this chaining structure is possible.
We want to have a Root CA, then an intermediate for a division, and then other intermediates for projects within the division. It would help compartmentalize any damage if a compromise occurs.
CA hierarchy organization is similar to folder organization, with its own specific rules. Every extra CA increases management costs, and every new tier increases certificate chain validation time. So keep as few CAs and as short a chain as is reasonable.
The minimum recommended configuration is two-tier:
Root CA --> Policy/Issuing CA --> End Entities
The Root CA should be offline: not connected to any network, backed by an HSM, and kept in a secure room. Loss or compromise of the root CA brings down the entire PKI with no way to revoke it. This is why a root CA usually issues certificates only to other CAs, not to end entities. Most of the time it is turned off and is powered on only for certificate renewal and CRL publication.
The Policy/Issuing CA sits below the root and works directly with end entities (certificate consumers or subscribers). Logically it is installed close to most clients, and it is enabled and operating 24/7. Physical security should match the Root CA: secure room, HSM (individual or networked), and strict physical access to the device. The compromise of an issuing CA is still bad, but recoverable: only part of the PKI is compromised (a particular chain), and you can revoke the compromised CA certificate without having to replace the root everywhere.
If you need separate CAs for divisions, do it:
Root CA --> Policy/Issuing CA 1 --> End Entities
--> Policy/Issuing CA 2 --> End Entities
--> Policy/Issuing CA 3 --> End Entities
There is nothing wrong with such configuration.
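For illustration, here is a minimal sketch of building such a hierarchy with OpenSSL. The file names, subject names, and validity periods are placeholders, and a real deployment would add proper CA configuration (CRL distribution points, AIA, HSM-backed keys) rather than bare commands:

# Root CA (kept offline): private key plus self-signed CA certificate
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
    -subj "/CN=Example Root CA" -out rootCA.crt

# Issuing CA for one division: key and CSR, signed by the root with CA extensions
openssl genrsa -out issuingCA1.key 4096
openssl req -new -key issuingCA1.key -subj "/CN=Example Issuing CA 1" -out issuingCA1.csr
openssl x509 -req -in issuingCA1.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
    -days 1825 -sha256 \
    -extfile <(printf "basicConstraints=critical,CA:TRUE,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign") \
    -out issuingCA1.crt

# End-entity certificates are then issued by issuingCA1; clients only need rootCA.crt
# in their trust store to validate the whole chain.

Repeat the issuing-CA steps per division to get the fan-out shown above.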
I am trying to test an API on my site. The tests work just fine from one machine, but running the code from a different machine results in SSLCertVerificationError, which is odd because the site has an SSL cert and is NOT self-signed.
Here is the core of my code:
import asyncio
import aiohttp

# SERVER_URL is defined elsewhere in the question's code
async def device_connect(basename, start, end):
    url = SERVER_URL
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        for x in range(start, end):
            myDevice = {'test': 'this'}
            post_tasks.append(do_post(session, url, myDevice))
        # now execute them all at once
        await asyncio.gather(*post_tasks)

async def do_post(session, url, data):
    async with session.post(url, data=data) as response:
        x = await response.text()
I tried (just for testing) to set verify=False or trust_env=True, but I continue to get the same error. On the other computer, this code runs fine with no trust issue.
That error text is somewhat misleading. OpenSSL, which Python uses, has dozens of error codes that indicate different ways certificate validation can fail, including:
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN -- the peer's cert can't be chained to a root cert in the local truststore; the chain received from the peer includes a root cert, which is self-signed (because root certs must be self-signed), but that root is not locally trusted
Note this is not talking about the peer/leaf cert; if that is self signed and not trusted, there is a different error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT which displays as just 'self signed certificate' without the part about 'in certificate chain'.
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY (displays in text as 'unable to get local issuer certificate') -- the received chain does not contain a self-signed root and the peer's cert can't be chained to a locally trusted root
In both these cases the important info is the peer's cert doesn't chain to a trusted root; whether the received chain includes a self-signed root is less important. It's kind of like if you go to your doctor and after examination in one case s/he tells you "you have cancer, and the weather forecast for tomorrow is a bad storm" or in another case "you have cancer, but the weather forecast for tomorrow is sunny and pleasant". While these are in fact slightly different situations, and you might conceivably want to distinguish them, you need to focus on the part about "you have cancer", not tomorrow's weather.
So, why doesn't it chain to a trusted root? There are several possibilities:
the server is sending a cert chain with a root that SHOULD be trusted, but machine F is using a truststore that does not contain it. Depending on the situation, it might be appropriate to add that root cert to the default truststore (affecting at least all Python apps unless specifically coded otherwise, and often other types of programs like C/C++ and Java also), or it might be better to customize the truststore for your application(s) only; or it might be that F is already customized wrongly and just needs to be fixed.
the server is sending a cert chain that actually uses a bad CA, but machine W's truststore has been wrongly configured (again either as a default or customized) to trust it.
machine F is not actually getting the real server's cert chain, because its connection is 'transparently' intercepted by something. This might be something authorized by an admin of the network (like an IDS/IPS/DLP or captive portal) or machine F (like antivirus or other 'endpoint security'), or it might be something very bad like malware or a thief or spy; or it might be in a gray area like some ISPs (try to) intercept connections and insert advertisements (at least in data likely to be displayed to a person like web pages and emails, but these can't always be distinguished).
the (legit) server is sending different cert chains to F (bad) and W (good). This could be intentional, e.g. because W is on a business' internal network while F is coming in from the public net; however you describe this as 'my site' and I assume you would know if it intended to make distinctions like this. OTOH it could be accidental; one fairly common cause is that many servers today use SNI (Server Name Indication) to select among several 'certs' (really cert chains and associated keys); if F is too old it might not be sending SNI, causing the server to send a bad cert chain. Or, some servers use different configurations for IPv4 vs IPv6; F could be connecting over one of these and W the other.
To distinguish these, and determine what (if anything) to fix, you need to look at what certs are actually being received by both machines.
If you have (or can get) OpenSSL on both, do openssl s_client -connect host:port -showcerts. For OpenSSL 1.1.1 and up (now common) to omit SNI add -noservername; for older versions to include SNI add -servername host. Add -4 or -6 to control the IP version, if needed. This will show subject and issuer names (s: and i:) for each received cert; if any are different, and especially the last, look at #3 or #4. If the names are the same, compare the whole base64 blobs to make sure they are entirely the same (it could be a well-camouflaged attacker). If they are the same, look at #1 or #2.
Alternatively, if policy and permissions allow, get network-level traces with Wireshark or a more basic tool like tcpdump or snoop. In a development environment this is usually easy; if either or both machine(s) is production, or in a supplier, customer/client, or partner environment, maybe not. Check SNI in ClientHello, and in TLS1.2 (or lower, but nowadays lower is usually discouraged or prohibited) look at the Certificate message received; in wireshark you can drill down to any desired level of detail. If both your client(s) and server are new enough to support TLS1.3 (and you can't configure it/them to downgrade) the Certificate message is encrypted and wireshark won't be able to show you the contents unless you can get at least one of your endpoints to export the session secrets in SSLKEYLOGFILE format.
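As a concrete illustration of that first check (the host name below is a placeholder; add or drop the SNI and IP-version options exactly as described above):

# Run on both machine F and machine W and compare the output
openssl s_client -connect myserver.example.com:443 -showcerts </dev/null 2>/dev/null |
    grep -E ' (s|i):'    # subject and issuer of every certificate received

Drop the grep when you need to compare the full base64 blobs byte for byte.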
I am trying to set up OpenShift 4.9 and running into issues configuring the mirror registry. I have narrowed down the issue to a cert error with quay.io.
$ wget "https://quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64"
--2021-10-25 16:57:27-- https://quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64
Resolving quay.io (quay.io)... 35.172.159.14, 34.224.196.162, 3.216.152.103, ...
Connecting to quay.io (quay.io)|35.172.159.14|:443... connected.
ERROR: The certificate of ‘quay.io’ is not trusted.
ERROR: The certificate of ‘quay.io’ has been revoked.
I have downloaded the cert chain from quay.io and copied it to
/etc/pki/ca-trust/source/anchors/
Then I ran update-ca-trust as well as update-ca-trust extract
I checked the bundle and certs are present.
/etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
However, I keep getting the error that the cert for quay.io is not trusted.
Any pointers to fix this would be appreciated.
Two things may help:
First of all, make sure you added the right CA file to the anchors folder:
DigiCert High Assurance EV Root CA Self-signed
Fingerprint SHA256: 7431e5f4c3c1ce4690774f0b61e05440883ba9a01ed00ba6abd7806ed3b118cf
Pin SHA256: WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=
Then check the result in /etc/pki/tls/certs/ca-bundle.crt
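To see what quay.io actually sends and whether the extract step picked up the anchor, something like the following may help (the grep patterns are just a convenience):

# Show the chain quay.io actually serves: subject and issuer of each certificate
openssl s_client -connect quay.io:443 -showcerts </dev/null 2>/dev/null | grep -E ' (s|i):'

# After copying the root CA into /etc/pki/ca-trust/source/anchors/ and running
# update-ca-trust extract, confirm it is present in the extracted trust store
trust list | grep -i "DigiCert High Assurance EV Root CA"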
In the process of setting up an SSL certificate for my site, several different files were created:
server.csr
server.key
server.pass.key
site_name.crt
Should these be added to .gitignore before pushing my site to github?
Apologies in advance if this is a dumb question.
Should these be added to .gitignore before pushing my site to github?
They should not be in the repo at all, meaning stored outside of the repo.
That way:
you don't need to manage a .gitignore,
you can store those keys somewhere safe.
GitHub actually had to change its search feature back in 2013 after seeing users storing keys and passwords in public repositories. See the full story.
The article includes this quote:
The mistakes may reflect the overall education problem among software developers.
When you have expedited programs—"6 weeks and you'll be a real software developer"—to teach developing, security becomes an afterthought. And considering "90 percent of development is copying stuff you don't understand, I'd bet most of them simply don't know what id_rsa is"
In 2016, a joke "book" cover circulated that reflects the same point.
The OP adds:
I think Heroku requires putting the files into the repo in order to run ">heroku certs:add server.crt server.key" and setup the cert.
"Configuration and Config Vars" is one illustration on that topic:
A better solution is to use environment variables, and keep the keys out of the code. On a traditional host or working locally you can set environment vars in your bashrc file. On Heroku, you use config vars.
The article "Heroku: SSL Endpoint" does not force you to have those key and certificate in your code. They can be generated directly on Heroku and saved anywhere else for safekeeping. Just not in a git repo.
I would like to add to VonC's answer, as it is in fact more complicated:
The files have different contents and, accordingly, require different access controls:
server.csr: This is a certificate signing request file. It is generated from the key (server.key in your case) and used to create the certificate (site_name.crt in your case). It can be deleted once the certificate has been created. It should not be shared with untrusted parties.
server.key: This is the private key. Under no circumstances may this file be shared outside of the server, and it must never end up in a code repository. On the system it must be stored with 0600 permissions (readable and writable only by its owner), owned by either root or the web server user. That applies to Linux; on Windows access rights are handled differently, but the key has to be locked down similarly.
site_name.crt: This is the signed certificate. It is considered public: the certificate essentially carries the public key and is sent to everyone who connects to the server. It can be stored in the repository. (The hint from VonC is correct, code and data should be separated, but it can live e.g. in a separate repository used for deployment.)
server.pass.key: Don't know what this is, but it seems to contain the password needed to unlock the key. If that is the case, the same rules as for the key apply: never share it with anyone.
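A minimal sketch of how these files are typically produced and protected on Linux; the subject below is a placeholder and the file names follow the question:

# Generate the private key and the CSR that gets sent to the CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=www.site_name.example" -out server.csr

# Lock the key down: owner read/write only, owned by root (or the web server user)
chown root:root server.key
chmod 0600 server.key

# The CA returns site_name.crt; that file is public and safe to keep with deployment data.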
I'm trying to deploy a VSTO solution, which are 2 addins for Word and for Outlook, using ClickOnce. Due to our deployment infrastructure/practices, I cannot publish it using Visual Studio, it is instead built on a build server and deployed via a deployment server.
For local development, a self-signed certificate is used. The deployment worked with this self-signed certificate (if the self-signed certificate was installed on the machine), but now I want to add a real company certificate so that the application can be deployed to the users.
During deployment, after the configuration files are poked, they are updated and re-signed with the real certificate. However, this produces the following error during installation:
System.Security.SecurityException: Customized functionality in this application will not work because the certificate used to sign the deployment manifest for <app name> or its location is not trusted. Contact your administrator for further assistance.
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustPromptKeyInternal(ClickOnceTrustPromptKeyValue promptKeyValue, DeploymentSignatureInformation signatureInformation, String productName, TrustStatus status)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustUsingPromptKey(Uri manifest, DeploymentSignatureInformation signatureInformation, String productName, TrustStatus status)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustUsingPromptKey(Uri manifest, DeploymentSignatureInformation signatureInformation, String productName)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.ProcessSHA1Manifest(ActivationContext context, DeploymentSignatureInformation signatureInformation, PermissionSet permissionsRequested, Uri manifest, ManifestSignatureInformationCollection signatures, AddInInstallationStatus installState)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.VerifySecurity(ActivationContext context, Uri manifest, AddInInstallationStatus installState)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.InstallAddIn()
The Zone of the assembly that failed was:
MyComputer
The only lead I have is that, after re-signing, the values in publisherIdentity element are not changed (both .vsto and .manifest), only the Signature element has values corresponding to the new certificate.
The following commands are used to sign the .vsto and .manifest files (as far as I can see from the deployment scripts):
mage.exe -Update "[path to .vsto/.manifest]"
mage.exe -Sign "[path to .vsto/.manifest]" -CertHash [certificateHash]
where [certificateHash] is the thumbprint of the real certificate and is used to look up the certificate in certificates stores. I'm told this is security measure so that the certificate file doesn't have to be distributed along with the deployment package.
After signing, the files have their Signature values changed, but the publisherIdentity still has the name and issuerKeyHash of the self-signed certificate.
I tried poking these two values prior to re-signing, but I don't know how to calculate the issuerKeyHash.
Any advice on how to proceed would be much appreciated!
Edit:
I was trying out other mage.exe parameters, like '-TrustLevel FullTrust' (which didn't have any effect) or '-UseManifestForTrust True' along with the Name and Publisher parameters, which yielded the error message below (different from the one mentioned above).
************** Exception Text **************
System.InvalidOperationException: You cannot specify a <useManifestForTrust> element for a ClickOnce application that specifies a custom host.
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.GetManifests(TimeSpan timeout)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.InstallAddIn()
The certificate that the app is signed with isn't trusted by Windows. As a workaround:
Right-click on setup.exe,
Select Properties, then the Digital Signatures tab,
Select the signer (Vellaichamy in this case), then click Details,
Click View Certificate, then click Install Certificate.
Do not let it automatically choose where to store the cert; install the certificate in the Trusted Root Certification Authorities store. Once the cert is installed, the app should install...
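The same trust can be set up from an elevated command prompt with certutil instead of the GUI steps above; the .cer file name is a placeholder for the exported signing certificate:

REM Add the publisher's certificate to the machine stores that ClickOnce/VSTO checks
certutil -addstore -f Root company-codesign.cer
certutil -addstore -f TrustedPublisher company-codesign.cer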
Take a look at the Granting Trust to Office Solutions article which states the following:
If you sign the solution with a known and trusted certificate, the solution will automatically be installed without prompting the end user to make a trust decision. After a certificate is obtained, the certificate must be explicitly trusted by adding it to the Trusted Publishers list.
For more information, see How to: Add a Trusted Publisher to a Client Computer for ClickOnce Applications.
Also you may find the Deploying an Office Solution by Using ClickOnce article helpful.
We have found what the problem was. We used a version of the mage.exe tool from the Windows SDK, from a folder named 7A (I don't remember the full paths, sorry). A colleague then found another folder with versions 7A, 8 and 8A. Once we took the .exe from the 8A folder, the installation worked as expected.
Try copying all the necessary files to the client computer then install. If you can avoid installing from the network drive you might be able to avoid this exception.
Does anyone know if it's possible to create my own wildcard certificate under Ubuntu? For instance, I want the following domains to use one certificate:
https://a.example.com
https://b.example.com
https://c.example.com
Just follow one of the many step-by-step instructions for creating your own certificate with OpenSSL, but replace the "Common Name" www.example.com with *.example.com.
(From a commercial CA, a wildcard certificate usually costs a bit more.)
> openssl req -new -x509 -keyout cert.pem -out cert.pem -days 365 -nodes
Country Name (2 letter code) [AU]:DE
State or Province Name (full name) [Some-State]:Germany
Locality Name (eg, city) []:nameOfYourCity
Organization Name (eg, company) [Internet Widgits Pty Ltd]:nameOfYourCompany
Organizational Unit Name (eg, section) []:nameOfYourDivision
Common Name (eg, YOUR name) []:*.example.com
Email Address []:webmaster@example.com
(Sorry, my favorite howto is a German text that I don't have readily available and can't find currently, thus the 'many' links.)
Edit in 2017: The original answer to this question is from 2009, when the choice for certificates did not include fully automated and free options like Let's Encrypt. Nowadays (if the "domain-validated" certification level of Let's Encrypt is enough for your purpose) it's trivial to obtain individual certificates for each and every subdomain. In case you need a higher trust level than domain-validated, wildcard certificates are still an option.
Also from 2017, note the comment below, by ha9u63ar:
According to RFC 2818 sec. 3, using CN for host name identification is not recommended anymore (deprecated). Subject Alternative Name (SAN) seems to be the way to go.
My answer to this comment: I trust that nowadays any CAs that issue Wildcard certs will have a proper set of instructions. For a self-signed quick fix, I'd not worry. On the other hand, with LetsEncrypt being around these days, it's been a long time since I've created a self-signed certificate. Gee, this answer really shows its age.
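For completeness: with OpenSSL 1.1.1 or later, a self-signed wildcard certificate that also carries the SAN can be created in one command (a sketch; the domain is a placeholder):

openssl req -x509 -newkey rsa:2048 -nodes -keyout wildcard.key -out wildcard.crt -days 365 \
    -subj "/CN=*.example.com" \
    -addext "subjectAltName=DNS:*.example.com,DNS:example.com"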