I am using a certificate authority on a Windows 2003 machine. It issues certificates with the SHA-1 algorithm, and these certificates are used for my URLs. I have learned that all the major browsers (IE, Chrome, Mozilla Firefox, etc.) are going to stop supporting SHA-1. Can you advise me what impact this will have on my URLs?
Will these URLs only display a security warning like "The URL which you are trying to access is not from a trusted source", so that after adding an exception I can still view the web page? Or will I be unable to view the page at all, even after adding the exception to the browser? Thanks
Google Statement
Step 2: Blocking all SHA-1 certificates Starting January 1, 2017 at
the latest, Chrome will completely stop supporting SHA-1 certificates.
At this point, sites that have a SHA-1-based signature as part of the
certificate chain (not including the self-signature on the root
certificate) will trigger a fatal network error. This includes
certificate chains that end in a local trust anchor as well as those
that end at a public CA. In line with Microsoft Edge and Mozilla
Firefox, the target date for this step is January 1, 2017, but we are
considering moving it earlier to July 1, 2016 in light of ongoing
research. We therefore urge sites to replace any remaining SHA-1
certificates as soon as possible. Note that Chrome uses the
certificate trust settings of the host OS where possible, and that an
update such as Microsoft’s planned change will cause a fatal network
error in Chrome, regardless of Chrome’s intended target date.
According to Google's statement, Chrome will block all network traffic to your site.
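A quick way to audit for this is to check each certificate's signature hash algorithm. Below is a minimal sketch, assuming the third-party `cryptography` package is installed (`pip install cryptography`); the hostname in the commented example is a placeholder:

```python
# Sketch: flag SHA-1 signatures on a certificate (assumes the third-party
# 'cryptography' package is installed). Remember that the self-signature on
# the root certificate is exempt from the browsers' SHA-1 block, so only
# leaf and intermediate certificates need this check.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def uses_sha1(pem_cert: str) -> bool:
    """Return True if the certificate's own signature uses SHA-1."""
    cert = x509.load_pem_x509_certificate(pem_cert.encode())
    return isinstance(cert.signature_hash_algorithm, hashes.SHA1)

# Example (requires network; hostname is a placeholder):
#   import ssl
#   pem = ssl.get_server_certificate(("example.com", 443))
#   print("SHA-1 signed leaf:", uses_sha1(pem))
```

Running this over the leaf and every intermediate in the served chain tells you whether the site will trip Chrome's fatal network error.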
With the KB5018410 Windows update recently installed on Windows 10, my Delphi REST applications have stopped working. It seems that TLS 1.2 is turned off. Insomnia, Firefox, etc. can access the URL below, but a "default" set of TRESTClient/TRESTRequest/TRESTResponse components dropped on a form with the minimal required property modifications cannot.
https://yams.ked.co.za/version
Checking boxes under TRESTClient.SecureProtocols also does not seem to make any difference.
How can I get my (very large) REST application going again!?
Check out this conversation on Reddit - Global Protect TLS issue after install of KB5018410
https://www.reddit.com/r/paloaltonetworks/comments/y21chi/some_of_our_users_are_having_issues_connecting_to/
Check your SSL cert and make sure it is not valid for more than 1 year (365 days). If it was issued for longer than a year, try switching to a cert that is only good for 1 year and see if that solves it. That fixed my Palo Alto GlobalProtect VPN issue.
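The validity period can be read straight off the served certificate. A stdlib-only sketch (the hostname in the commented example is a placeholder):

```python
# Sketch: compute a served certificate's validity period in days, using
# only the Python standard library.
import socket
import ssl

def cert_lifetime_days(host: str, port: int = 443) -> float:
    """Connect to the server and return its certificate's lifetime in days."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    start = ssl.cert_time_to_seconds(cert["notBefore"])
    end = ssl.cert_time_to_seconds(cert["notAfter"])
    return (end - start) / 86400.0

# Example (requires network; hostname is a placeholder):
#   print(cert_lifetime_days("yams.ked.co.za"))  # flag if well over 365
```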
My final solution was to use another, 3rd party component (Chilkat) to carry out the REST functionality.
There is also the option of rolling back (and then blocking re-installation of) the Windows 10 KB5018410 update.
The problem with the Embarcadero REST components was reported on the Issues site, and has now been elevated from "Reported" to "Open" status.
To the people who close-vote this post: it doesn't help if you don't comment why. We're all trying to learn here.
I want to have wildcard certificates for 2 domains of mine using Let's Encrypt. Here's what I did:
In Chrome it all works. In Firefox I get the error below:
So I tested here: https://www.ssllabs.com/ssltest/analyze.html?d=gamegorilla.net
I also checked this other post.
There's talk of making sure that the server supplies the full certificate chain to the client, not only the domain certificate. I found how to validate the certificate chain here.
I then took these steps found here:
Open the Certificates Microsoft Management Console (MMC) snap-in.
On the File menu, click Add/Remove Snap-in.
In the Add or Remove Snap-ins dialog box, click the Certificates snap-in in the Available snap-ins list, click Add, and
then click OK.
In the Certificates snap-in dialog box, click Computer account, and then click Next.
In the Select computer dialog box, click Finish.
I already see "Let's Encrypt Authority X3" in the Intermediate Certification Authorities. So that should already be handling things correctly I'd presume.
How can I ensure the Let's Encrypt certificate chain is supplied to the client so it works in Firefox too?
UPDATE 1
Based on @rfkortekaas's suggestion I used "all binding identifiers" instead of supplying the search pattern. When win-acme asked "Please pick the main host, which will be presented as the subject of the certificate", I selected gamegorilla.net. After this, gamegorilla.net works in Firefox; however, on www.karo-elektrogroothandel.nl I now get an insecure certificate.
UPDATE 2
Alright, that seems to fix it. I do see that bindings for smtp/mail (e.g. smtp.gamegorilla.net) are now also added to IIS automatically:
Should I leave those or delete those mail+smtp records here?
Also, the certificate is now [Manual]; does that mean I need to renew manually? (Which would be weird, since nowhere during the certificate creation steps did I see an option for auto-renewal.)
The issue is that you only generated the certificate for www.gamegorilla.net and not gamegorilla.net. If you select all binding identifiers instead of supplying the search pattern, I think it should work.
To also get certificates for other names that are not hosted by IIS you cannot use the import from IIS function. You need to supply them all, starting with the common name.
After starting wacs, select M for a new request, then option 2 for manual input. After that, enter the comma-separated list with the common name first: gamegorilla.net,www.gamegorilla.net,smtp.gamegorilla.net,karo-elektrogroothandel.nl,www.karo-elektrogroothandel.nl,smtp.karo-elektrogroothandel.nl (without any spaces). Or, when you want to generate a wildcard certificate, you can use: gamegorilla.net,*.gamegorilla.net,karo-elektrogroothandel.nl,*.karo-elektrogroothandel.nl.
Please be aware that for generating wildcard certificates you need to be able to use the DNS-01 challenge. The HTTP-01 challenge doesn't support wildcard certificates.
For the certificate renewal, you should run wacs --renew from time to time (for example via a scheduled task).
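To confirm the server actually supplies the full chain (the original Firefox problem), you can count the certificates sent in the handshake. A sketch that shells out to the openssl CLI (assumed to be on PATH; the hostname in the commented example is a placeholder):

```python
# Sketch: count how many certificates a server sends during the TLS
# handshake, by parsing 'openssl s_client -showcerts' output.
# Assumes the 'openssl' CLI is installed.
import subprocess

def served_chain_length(host: str, port: int = 443) -> int:
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", f"{host}:{port}",
         "-servername", host, "-showcerts"],
        input="", capture_output=True, text=True, timeout=30,
    )
    # Each certificate in the handshake appears as one PEM block.
    return proc.stdout.count("-----BEGIN CERTIFICATE-----")

# Example (requires network; hostname is a placeholder):
#   print(served_chain_length("gamegorilla.net"))
#   # 1 would mean only the leaf is sent, with no intermediates.
```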
I have to buy a code-signing certificate, for signing Win32 applications, and I was considering whether to pick an EV one.
The advantages of EV certificates I was able to find are:
Immediate SmartScreen reputation establishment (instead of waiting for ~3k downloads? [source])
Maintenance of SmartScreen reputation across certificate renewals [source] (probably a moot point if point 1 applies anyway)
Option for delivery on a hardware token, often not available for normal certificates
I wonder if they bring other advantages, for example if applications signed with them are more trusted than applications signed with non-EV certificates by antivirus, firewalls and other security applications (they get less blocked, provoke more favourable warnings, etc.).
I restate the case I'm most interested in: are you aware of differences in treatment by some specific antivirus/firewall/security application of applications signed with EV certificates, vs. applications signed with standard certificates?
Disclosure: I work for an AV vendor.
I wonder if they bring other advantages, for example if applications
signed with them are more trusted than applications signed with non-EV
certificates by antivirus, firewalls and other security applications
This depends on the vendor making the security application and on their current(*) policy. Both security vendors I have worked for ignored the presence of the certificate when scanning for malware. There are several reasons for this:
Just because the code is signed doesn't mean it is not malicious. It only means it has not been modified after it was signed. For example, a relatively large number of adware applications are signed.
Malware writers have used stolen certificates in the past, and thus we cannot be truly sure the certificate was used by the original author. This is why I mentioned "current policy" above, as this could change overnight.
Verifying a certificate is a complex and relatively slow process which requires reading the whole file from disk - an expensive operation on non-SSD storage. It also requires performing some public-key cryptography operations, which are CPU-intensive. Thus, for some large executable files, checking the certificate might take longer than scanning the file for malware.
And since we generally don't look at certificate at all, it doesn't matter whether it is standard or EV.
I have a different experience than @George Y. Our EV code-signing certificate from Sectigo did help to avoid false positives in Norton 360. I don't know about other antivirus software - to be tested.
Note: My different experience from @George Y.'s doesn't imply that he is wrong. The difference can be due to many factors, such as antivirus software company policies, ... Also, my experience is based on the positive results I get today from the code signing. More tests in the future (and experiences from our users) will show whether these positive results are temporary or permanent.
1. Before code signing
Before the code signature, our users got warnings like this:
Even worse, Norton 360 would simply remove a lot of executables and .pyd files automatically - thereby breaking our software completely:
It was a complete disaster.
2. After code signing
Today, I signed our application for the first time with our new EV certificate. I signed not only the .exe files, but also the .dll, .so and .pyd files. When signing these files, I first check if they already have a signature, to avoid double-signing .dll files from third-party open-source binaries that we include in our build. Here is my Python script that automates this procedure:
import os, subprocess

# 'exefiles' is a Python list of filepaths
# to .exe, .dll, .so and .pyd files. Each
# filepath in this list is an absolute path
# with forward slashes.
quote = '"'
for f in exefiles:
    cmd = f"signtool verify /pa {quote}{f}{quote}"
    result = subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        cwd=os.getcwd(),
        encoding='utf-8',
    )
    if result.returncode:
        # Verification failed, so the file is not yet signed
        cmd = f"signtool sign /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /a {quote}{f}{quote}"
        result = subprocess.run(
            cmd,
            stdin=subprocess.DEVNULL,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            cwd=os.getcwd(),
            encoding='utf-8',
        )
        if result.returncode:
            # Code signing failed!
            print(f"Sign: '{f.split('/')[-1]}' failed")
        else:
            # Code signing succeeded
            print(f"Sign: '{f.split('/')[-1]}'")
    else:
        # Verification succeeded, so the file was already signed
        print(f"Already signed: '{f.split('/')[-1]}'")
The results are promising so far. Windows SmartScreen no longer generates warnings. Norton 360 neither. I've tried on both my laptop and a desktop with a clean Norton 360 install - both of them trust the application (unlike before the code signature).
Fingers crossed it will stay this way. Let's also hope other Antivirus software will trust our application.
Note:
As of writing this post, our signed application is only available for testers on https://new.embeetle.com
It will be available soon on our public website https://embeetle.com as well - but not yet today.
I have a p12 file. This was generated from a DigiCert p7b.
When I import this into my personal store on one machine (Windows Server, using the Certificates MMC), it shows me one chain when I view the path.
Using the same file, I import it into my personal store on a different machine (also Windows, using the Certificates MMC). On this one I see a different path (and in this case it has an expired hop).
Specifically, two hops above my cert the divergence occurs.
Why does this happen? Is there anything I can do to influence that chain (remember its the same p12 that is creating different paths)?
I should also say, I am no expert in this area. I'm a developer that muddles through these security issues when needed.
I had the same issue. Two different Windows 2008 R2 servers, same certificate. After standard OS patching, one of the servers was sending only the first layer of the certificate trust chain (number 0), so the openssl client was failing with the message:
verify error:num=21:unable to verify the first certificate
No idea what the root cause was. I tried to:
reassign certificate in IIS
reimport certificate
restart IIS
with no success. What finally helped to fix the issue was the server reboot...
Closing this out.
I'm still a little foggy on why things were working the way they did but some things made sense.
It seems the .p12 was created from a p7b that included some of the intermediate certs. One of the included intermediates was the bad one. This explains why the chain was bad on one machine.
Still not sure how I was able to see a good chain on different machine but I understand why I saw the bad one. It seems the good chain was the fluke and the bad chain should have been expected (I originally assumed the opposite).
I created a new .p12 without the intermediates. Cleaned up all the bad intermediates that were previously imported from the first .p12 in both service user and local machine stores. All seems to be working as expected now with same valid chain on all machines.
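When cleaning up stray intermediates like this, it helps to inventory what each machine actually holds in its Intermediate Certification Authorities store and diff the two lists. A sketch using Python's `ssl.enum_certificates`, which is Windows-only (on other platforms this simply returns an empty list):

```python
# Sketch: list SHA-1 fingerprints of the certificates in the Windows
# intermediate CA store ("CA"). ssl.enum_certificates exists only on
# Windows, so other platforms get an empty list.
import hashlib
import ssl
import sys

def intermediate_fingerprints() -> list:
    if sys.platform != "win32":
        return []
    fingerprints = []
    for der, encoding, trust in ssl.enum_certificates("CA"):
        if encoding == "x509_asn":  # plain DER-encoded certificate
            fingerprints.append(hashlib.sha1(der).hexdigest())
    return sorted(fingerprints)

# Comparing this list between the two machines shows which intermediates
# differ and can explain why the same .p12 builds different chains.
```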
I need the certificate from my smart card to be in the Windows service local store. I opened the store with mmc -> snap-in -> certificates.
I used various little tools to see information (ATR etc.) about my smart card, and they all worked.
I can see a lot of certificates there, but the one from my smart card is missing from the store. The folder 'Smart Card Trusted Roots' is empty. Windows gets the .cer/.pfx data from smart cards automatically, right?
Or is there no way I can do this without low-level programming (APDU commands etc.)?
First read this:
http://technet.microsoft.com/en-us/library/ff404288(v=WS.10).aspx
As it's written there:
A logged-on user inserts a smart card.
CertPropSvc is notified that a smart card was inserted.
CertPropSvc reads all certificates from all inserted smart cards. The certificates are written to the user's personal certificate store
So yes, generally certificates should pop up in the user's Personal certificate store automatically.
The first thing to check is that the CertPropSvc service is running.
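That check can be scripted. A sketch using the built-in Windows `sc` command via subprocess (on non-Windows platforms it just returns False):

```python
# Sketch: check whether the CertPropSvc (Certificate Propagation) service
# is running, using the built-in Windows 'sc' command. The service exists
# only on Windows, so other platforms return False.
import subprocess
import sys

def certpropsvc_running() -> bool:
    if sys.platform != "win32":
        return False
    out = subprocess.run(
        ["sc", "query", "CertPropSvc"],
        capture_output=True, text=True,
    ).stdout
    return "RUNNING" in out
```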
Another thing I have seen is that some smart card drivers don't work with the Windows API. One example I know of was old RSA tokens. We changed them to Gemalto .NET cards and USB readers because of this.
Note: The article I linked says this is valid for Windows 7 and 2008, but it worked for me on XP and Vista.