I am new to the OPC UA world and Eclipse Milo, and I do not understand how the security works here. Looking at the client examples provided by Eclipse Milo, I see a few security-related properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
ExampleServer is configured to run with an Anonymous and a UsernamePassword provider. It is also bound to accept SecurityPolicy.None, Basic128Rsa15, Basic256, and Basic256Sha256 with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None, where the MessageSecurityMode is None too.
The problem is that with AnonymousProvider I could connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without providing client certificates).
But I could not do the same with UsernameProvider: only the SecurityPolicy/MessageSecurityMode pair None/None connects successfully.
All other pairs throw a security-checks-failed exception (when a certificate is provided) or a user-access-denied error (when no client certificate is provided). How do I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation except the example code, and it is not documented.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider is used to provide the credentials that identify the user of the session, if any.
When the ExampleServer starts up it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a folder named rejected, where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.
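The move described above can be scripted. A hedged sketch: the folder names ("rejected", "trusted") follow the answer, but the exact layout under Milo's temp security dir may differ by version, so check what is actually on disk first.

```python
# Sketch: promote rejected client certificates into the server's trusted
# folder so the next secure connection succeeds. Folder names are assumed
# from the answer above; verify them against your Milo security directory.
import shutil
from pathlib import Path

def trust_rejected_certs(security_dir):
    sec = Path(security_dir)
    rejected = sec / "rejected"
    trusted = sec / "trusted"
    trusted.mkdir(parents=True, exist_ok=True)
    moved = []
    for cert in sorted(rejected.glob("*.der")):
        shutil.move(str(cert), str(trusted / cert.name))
        moved.append(cert.name)
    return moved
```

Run it once after the first failed secure connection, then reconnect the client.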
I am trying to test an API on my site. The tests work just fine from one machine, but running the code from a different machine results in an SSLCertVerificationError - which is odd because the site has an SSL cert and it is NOT self-signed.
Here is the core of my code:
import asyncio
import aiohttp

async def device_connect(basename, start, end):
    url = SERVER_URL
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        for x in range(start, end):
            myDevice = {'test': 'this'}
            post_tasks.append(do_post(session, url, myDevice))
        # now execute them all at once
        await asyncio.gather(*post_tasks)

async def do_post(session, url, data):
    async with session.post(url, data=data) as response:
        x = await response.text()
I tried (just for testing) to set 'verify=False' or trust_env=True, but I continue to get the same error. On the other computer, this code runs fine and no trust issue results.
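A note on the attempted fix: aiohttp doesn't accept a requests-style verify= keyword, so setting it has no effect. Per-request TLS behavior is controlled via the ssl argument. A sketch of both options (disabling verification is for testing only):

```python
import ssl

async def do_post(session, url, data):
    # aiohttp has no requests-style verify= parameter; pass ssl= instead:
    #   ssl=False            -> skip certificate verification (testing only)
    #   ssl=<ssl.SSLContext> -> verify against a truststore you control
    ctx = ssl.create_default_context()  # a normal, verifying context
    async with session.post(url, data=data, ssl=ctx) as response:
        return await response.text()
```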
That error text is somewhat misleading. OpenSSL, which python uses, has dozens of error codes that indicate different ways certificate validation can fail, including
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN -- the peer's cert can't be chained to a root cert in the local truststore; the chain received from the peer includes a root cert, which is self-signed (because root certs must be self-signed), but that root is not locally trusted
Note this is not talking about the peer/leaf cert; if that is self signed and not trusted, there is a different error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT which displays as just 'self signed certificate' without the part about 'in certificate chain'.
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY (displays in text as 'unable to get local issuer certificate') -- the received chain does not contain a self-signed root and the peer's cert can't be chained to a locally trusted root
In both these cases the important info is the peer's cert doesn't chain to a trusted root; whether the received chain includes a self-signed root is less important. It's kind of like if you go to your doctor and after examination in one case s/he tells you "you have cancer, and the weather forecast for tomorrow is a bad storm" or in another case "you have cancer, but the weather forecast for tomorrow is sunny and pleasant". While these are in fact slightly different situations, and you might conceivably want to distinguish them, you need to focus on the part about "you have cancer", not tomorrow's weather.
So, why doesn't it chain to a trusted root? There are several possibilities:
the server is sending a cert chain with a root that SHOULD be trusted, but machine F is using a truststore that does not contain it. Depending on the situation, it might be appropriate to add that root cert to the default truststore (affecting at least all python apps unless specifically coded otherwise, and often other types of programs like C/C++ and Java also) or it might be better to customize the truststore for your application(s) only; or it might be that F is already customized wrongly and just needs to be fixed.
the server is sending a cert chain that actually uses a bad CA, but machine W's truststore has been wrongly configured (again either as a default or customized) to trust it.
machine F is not actually getting the real server's cert chain, because its connection is 'transparently' intercepted by something. This might be something authorized by an admin of the network (like an IDS/IPS/DLP or captive portal) or machine F (like antivirus or other 'endpoint security'), or it might be something very bad like malware or a thief or spy; or it might be in a gray area like some ISPs (try to) intercept connections and insert advertisements (at least in data likely to be displayed to a person like web pages and emails, but these can't always be distinguished).
the (legit) server is sending different cert chains to F (bad) and W (good). This could be intentional, e.g. because W is on a business' internal network while F is coming in from the public net; however you describe this as 'my site' and I assume you would know if it intended to make distinctions like this. OTOH it could be accidental; one fairly common cause is that many servers today use SNI (Server Name Indication) to select among several 'certs' (really cert chains and associated keys); if F is too old it might not be sending SNI, causing the server to send a bad cert chain. Or, some servers use different configurations for IPv4 vs IPv6; F could be connecting over one of these and W the other.
To distinguish these, and determine what (if anything) to fix, you need to look at what certs are actually being received by both machines.
If you have (or can get) OpenSSL on both, do openssl s_client -connect host:port -showcerts. For OpenSSL 1.1.1 and up (now common), to omit SNI add -noservername; for older versions, to include SNI add -servername host. Add -4 or -6 to control the IP version, if needed. This will show subject and issuer names (s: and i:) for each received cert; if any are different, and especially the last, look at #3 or #4. If the names are the same, compare the whole base64 blobs to make sure they are entirely the same (it could be a well-camouflaged attacker). If they are the same, look at #1 or #2.
Alternatively, if policy and permissions allow, get network-level traces with Wireshark or a more basic tool like tcpdump or snoop. In a development environment this is usually easy; if either or both machine(s) is production, or in a supplier, customer/client, or partner environment, maybe not. Check SNI in ClientHello, and in TLS1.2 (or lower, but nowadays lower is usually discouraged or prohibited) look at the Certificate message received; in wireshark you can drill down to any desired level of detail. If both your client(s) and server are new enough to support TLS1.3 (and you can't configure it/them to downgrade) the Certificate message is encrypted and wireshark won't be able to show you the contents unless you can get at least one of your endpoints to export the session secrets in SSLKEYLOGFILE format.
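If the diagnosis points at possibility #1, a per-application fix in Python avoids touching the system truststore. A minimal sketch (the extra CA path is a placeholder you would supply):

```python
import ssl

def make_context(extra_ca=None):
    """Default, verifying TLS context; optionally trust one extra root CA."""
    ctx = ssl.create_default_context()  # loads the system/default truststore
    if extra_ca:
        ctx.load_verify_locations(cafile=extra_ca)  # add your root, app-only
    return ctx
```

Pass the resulting context to your HTTP library (e.g. aiohttp's ssl= argument) instead of disabling verification.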
In PostgreSQL, whenever I execute an API URL over a secure connection with a query like the one below
select *
from http_get('https://url......');
I get an error
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation at the following path:
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there an SSL extension available, or do I need configuration changes or some other work?
Please let me know the possible solutions for it.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPT values (see: https://github.com/pramsey/pgsql-http#curl-options), which are settable with http_set_curlopt().
For endpoints using self-signed certificates, the CURLOPT you'll want to include support for, in order to ignore SSL errors, is CURLOPT_SSL_VERIFYPEER.
If there are other issues like SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that can be patched-in, but those also are not available without customization of the contrib module.
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the maintainer's GitHub page and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.
In the process of setting up an SSL certificate for my site, several different files were created,
server.csr
server.key
server.pass.key
site_name.crt
Should these be added to .gitignore before pushing my site to github?
Apologies in advance if this is a dumb question.
Should these be added to .gitignore before pushing my site to github?
They should not be in the repo at all, meaning stored outside of the repo.
That way:
you don't need to manage a .gitignore,
you can store those keys somewhere safe.
GitHub actually had to change its search feature back in 2013 after seeing users storing keys and passwords in public repositories. See the full story.
The article includes this quote:
The mistakes may reflect the overall education problem among software developers.
When you have expedited programs—"6 weeks and you'll be a real software developer"—to teach developing, security becomes an afterthought. And considering "90 percent of development is copying stuff you don't understand, I'd bet most of them simply don't know what id_rsa is"
In 2016, this "book" (as a joke) reflects that:
The OP adds:
I think Heroku requires putting the files into the repo in order to run ">heroku certs:add server.crt server.key" and setup the cert.
"Configuration and Config Vars" is one illustration on that topic:
A better solution is to use environment variables, and keep the keys out of the code. On a traditional host or working locally you can set environment vars in your bashrc file. On Heroku, you use config vars.
The article "Heroku: SSL Endpoint" does not force you to have those key and certificate in your code. They can be generated directly on Heroku and saved anywhere else for safekeeping. Just not in a git repo.
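As an illustration of the config-vars approach, the paths can come from the environment rather than the repo. The variable names below are made up for the example, not a Heroku convention:

```python
import os

# Read key/cert locations from environment variables (shell profile,
# or Heroku config vars) so the files themselves never enter the repo.
key_path = os.environ.get("SSL_KEY_PATH", "/etc/ssl/private/server.key")
crt_path = os.environ.get("SSL_CERT_PATH", "/etc/ssl/certs/site_name.crt")
```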
I would like to add to @VonC's answer, as the situation is in fact more complicated:
The files have different content, and depending on that they require a different access control:
server.csr: This is a certificate signing request file. It is generated from the key (server.key in your case) and used to create the certificate (site_name.crt in your case). This should be deleted when the certificate has been created. It should not be shared with untrusted parties.
server.key: This is the private key. Under no circumstances can this file be shared outside of the server. It must not end up in a code repository. On the system it must be stored with 0600 permissions (i.e. readable and writable only by its owner), owned by either root or the web server user. At least on Linux; on Windows, user access rights are handled differently, but something similar has to be done.
site_name.crt: This is the signed certificate. This is considered to be public. The certificate is essentially the public key. It is sent out to everyone that connects to the server. It can be stored in the repository. (The hint from @VonC is correct, code and data should be separated, but it can be e.g. in a separate repository for the deployment.)
server.pass.key: Don't know what this is, but it seems to contain the password to get access to the key. If this is the case the same rules as for the key apply: Never share with anyone.
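The 0600 rule above can be applied (or checked) from Python as well; a small sketch:

```python
import os
import stat

def lock_down(path):
    """Restrict a private key file to owner read/write (0600)."""
    os.chmod(path, 0o600)

def is_locked_down(path):
    """True if no group/other permission bits are set on the file."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0
```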
In WSO2 Enterprise Store 1.0.0 there is some inconsistency in the hostname used to make connections.
You can set HostName and MgtHostName in carbon.xml. But there are files with fixed names, like
sso-idp-config.xml: (AssertionConsumerService) https://localhost:9443/store/acs
jaggeryapps\store\controllers\ login.jag: (postUrl) "https://" + process.getProperty('carbon.local.ip') + ":" ...
localhost breaks every remote connection. An IP address breaks SAML authentication and is not consistent with third-party certificates.
Is there an easy way to set the hostname all over the ES?
I tried this scenario by updating only the AssertionConsumerService within sso-idp-config.xml, and it works for me.
So you only have to update the AssertionConsumerService within sso-idp-config.xml.
To work properly, the full list of files I had to modify is:
repository\conf\sso-idp-config.xml
repository\deployment\server\jaggeryapps\publisher\controllers\login.jag
repository\deployment\server\jaggeryapps\publisher\controllers\logout.jag
repository\deployment\server\jaggeryapps\social\controllers\login.jag
repository\deployment\server\jaggeryapps\social\controllers\logout.jag
repository\deployment\server\jaggeryapps\store\controllers\login.jag
repository\deployment\server\jaggeryapps\store\controllers\logout.jag
repository\deployment\server\jaggeryapps\store\themes\store\js\asset.js
The login/logout files use the IP address (a bad choice when working with third-party certificates; it also breaks SAML authentication).
I lost a lot of time locating the files with IP and localhost references. I think this should be reviewed and documented in future versions of the product.
When trying to hit an environment with improperly configured SSL certificates, I get the following error:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:352)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:390)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:562)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:776)
at dispatch.BlockingHttp$class.dispatch$BlockingHttp$$execute(Http.scala:45)
at dispatch.BlockingHttp$$anonfun$execute$1$$anonfun$apply$3.apply(Http.scala:58)
at dispatch.BlockingHttp$$anonfun$execute$1$$anonfun$apply$3.apply(Http.scala:58)
at scala.Option.getOrElse(Option.scala:108)
at dispatch.BlockingHttp$$anonfun$execute$1.apply(Http.scala:58)
at dispatch.Http.pack(Http.scala:25)
at dispatch.BlockingHttp$class.execute(Http.scala:53)
at dispatch.Http.execute(Http.scala:21)
at dispatch.HttpExecutor$class.x(executor.scala:36)
at dispatch.Http.x(Http.scala:21)
at dispatch.HttpExecutor$class.when(executor.scala:50)
at dispatch.Http.when(Http.scala:21)
at dispatch.HttpExecutor$class.apply(executor.scala:60)
at dispatch.Http.apply(Http.scala:21)
at com.secondmarket.cobra.lib.delegate.UsersBDTest.tdsGet(UsersBDTest.scala:130)
at com.secondmarket.cobra.lib.delegate.UsersBDTest.setup(UsersBDTest.scala:40)
I would like to ignore the certificates entirely.
Update: I understand the technical concerns regarding improperly configured SSL certs and the issue isn't with our boxes but a service we're using. It happens mostly on test boxes rather than prod/stg so we're investigating but needed something to test the APIs.
You can't 'ignore the certificates entirely' for the following reasons:
The problem in this case is that the client didn't even provide one.
If you don't want security why use SSL at all?
I have no doubt whatsoever that many, perhaps most, of these alleged workarounds 'for development' have 'leaked' into production. There is a significant risk of deploying an insecure system if you build an insecure system. If you don't build the insecurity in, you can't deploy it, so the risk vanishes.
The following allowed unsafe SSL certs:
Http.postData(url, payload).options(HttpOptions.allowUnsafeSSL, HttpOptions.readTimeout(5000))
For the newest version of Dispatch (0.13.2), you can use the following to create an http client that accepts any certificate:
val myHttp = Http.withConfiguration(config => config.setAcceptAnyCertificate(true))
Then you can use it for GET requests like this:
myHttp(url("https://www.host.com/path").GET OK as.String)
(Modify accordingly for POST requests...)
I found this out here: Why does dispatch throw "java.net.ConnectException: General SSLEngine ..." and "unexpected status" exceptions for a particular URL?
And to create an Http client that does verify the certificates, I found some sample code here: https://kevinlocke.name/bits/2012/10/03/ssl-certificate-verification-in-dispatch-and-asynchttpclient/.