I have just built a Vault server that works correctly, but on each connection to the web UI I am asked to validate a certificate:
Do you know why I get this message? Is it possible to bypass this problem?
For information, I use a wildcard certificate *.mydomain.com.
Best regards,
M.
Vault supports mutual TLS by default: it asks you to present your own certificate to authenticate, and continues with other authentication methods if no client certificate is provided.
You can turn it off by setting tls_disable_client_certs = true in your server's configuration, under the tcp listener stanza (restart required).
You can find more details in this knowledge base article by @Zam.
Fixing this issue involves making a tweak to your TCP listener's config stanza. For the TCP listener, Vault includes a parameter called tls_disable_client_certs which allows you to toggle this functionality. By default, the value of this parameter is false and Vault will request client certificates when available.
To disable this behavior, simply update the TCP listener stanza in your Vault configuration file to include the following line.
tls_disable_client_certs = "true"
Below is an example of how this would look in a Vault configuration file.
...
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/opt/vault/tls/vault-cert.crt"
tls_key_file = "/opt/vault/tls/vault-key.key"
tls_client_ca_file = "/opt/vault/tls/vault-ca.crt"
tls_disable_client_certs = "true"
}
...
If you'd like to read more, I wrote a knowledge base article detailing how to handle this.
Thank you for your answers.
By adding this line:
tls_disable_client_certs = true
I don't have to submit a certificate anymore.
Best regards,
M.
Been trying to figure out how to do this for a while. Essentially, Vault does not have a secure option for its REST calls. I want these REST calls to be encrypted over as much of the path between point A and point B as possible. My thoughts have been the following:
Use an SSH tunnel
Use a TLS tunnel like Stunnel
I currently have Vault in a Docker container, so that’s something else to mention. Has anyone encountered this situation, and how did you deal with it?
UPDATE: So, using the Python API (HVAC), I am getting the following error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='0.0.0.0',
port=8200): Max retries exceeded with url: /v1/secret (Caused by
SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_record',
'wrong version number')],)",),))
Using the following commands:
import os
import hvac
client = hvac.Client(url='https://0.0.0.0:8200', token='my-token-here')
Vault has TLS enabled by default, thus all your REST calls are already encrypted. If you are having trouble using https, have a look at the documentation of the VAULT_CACERT and VAULT_CAPATH environment variables.
From Vault's documentation:
VAULT_CACERT
Path to a PEM-encoded CA certificate file on the local disk. This file
is used to verify the Vault server's SSL certificate. This environment
variable takes precedence over VAULT_CAPATH.
VAULT_CAPATH Path to a directory of PEM-encoded CA certificate files
on the local disk. These certificates are used to verify the Vault
server's SSL certificate.
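If you are calling Vault from Python with hvac, as in the snippet above, the equivalent is the client's verify argument. A minimal sketch, assuming a hypothetical CA bundle at /vault/certs/vault-ca.crt and an example server address:
import hvac

# Point hvac at the CA certificate that signed the Vault server's TLS
# certificate so the handshake can be verified (URL and path are examples).
client = hvac.Client(
    url='https://vault.example.com:8200',
    token='my-token-here',
    verify='/vault/certs/vault-ca.crt',
)
print(client.is_authenticated())
Passing verify=False instead would skip verification entirely, which defeats much of the point of using TLS.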
You can use tools like tcpdump or wireshark to make sure that your requests are indeed encrypted.
To elaborate for Vault running in a container, you need to create a configuration file for Vault that contains something similar to this (Chef/Ruby code):
config_content = %(
{
  "storage": {
    ...
  },
  "default_lease_ttl": "768h",
  "max_lease_ttl": "8766h",
  "listener": [
    {"tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 0,
      "tls_cert_file": "/vault/certs/my-cert-combined.pem",
      "tls_key_file": "/vault/certs/my-cert.key"
    }}],
  "log_level": "info"
}
)
Especially the listener portion. Make your backend storage whatever you want to use (not the Dev default of in-memory!).
Note that you will also need a valid certificate and its private key in the volume bound into the container.
Store this configuration file in a directory that gets bound inside the container to the path /vault/config. I use /var/vault/config on my host. For example (more Ruby/Chef):
docker_container 'vault' do
container_name 'vault'
tag 'latest'
port '8200:8200'
cap_add ['IPC_LOCK']
restart_policy 'always'
volumes ['/var/vault:/vault']
command 'vault server -config /vault/config'
action :run_if_missing
end
That command tells Vault to look in /vault/config, where it should find your config file with a .json extension. Note it is important for the config file's listener->tcp->address to be 0.0.0.0 rather than 127.0.0.1, because bound only to 127.0.0.1 inside the container, Vault will not accept connections from outside the container.
Then Vault will start up with TLS encryption on all transactions. Set VAULT_ADDR to https://your-host.com:8200 and away you go.
In my case, I had been testing it in my local environment, so instead of calling the secured https URL (https://localhost:8200) I called the regular http one (http://localhost:8200).
This solved the error.
I am new to the OPC-UA world and Eclipse Milo.
I do not understand how the security works here.
Looking at the client examples provided by eclipse-milo,
I see a few security-related properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
ExampleServer is configured to run with the Anonymous and UsernamePassword providers. It is also bound to accept SecurityPolicy.None, Basic128Rsa15, Basic256, and Basic256Sha256, with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None where the MessageSecurityMode is None too.
The problem is that with AnonymousProvider I could connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without client certificates provided).
But I could not do the same for UsernameProvider: only the SecurityPolicy/MessageSecurityMode pair None/None runs successfully.
All other pairs throw a security checks failed exception (when a certificate is provided) or user access denied (when no client certificate is provided). How do I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation except the example code, which is not documented either.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider is used to provide the credentials that identify the user of the session, if any.
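Putting these together when building the client configuration might look roughly like the following. This is only a minimal sketch modeled on the Milo client examples: the endpoint, clientCertificate, and clientKeyPair variables are assumed to already exist, and the endpoint must be one the server advertises with the SecurityPolicy and MessageSecurityMode you intend to use.
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.config.OpcUaClientConfig;
import org.eclipse.milo.opcua.sdk.client.api.identity.UsernameProvider;

// endpoint: an EndpointDescription whose security policy URI and security mode
// match what you intend to use (e.g. Basic256Sha256 / SignAndEncrypt).
// clientCertificate (X509Certificate) and clientKeyPair (KeyPair) are only
// needed when the security policy is not None.
OpcUaClientConfig config = OpcUaClientConfig.builder()
    .setEndpoint(endpoint)
    .setCertificate(clientCertificate)
    .setKeyPair(clientKeyPair)
    .setIdentityProvider(new UsernameProvider("user", "password"))
    .build();

// Depending on the Milo version this may be new OpcUaClient(config) instead.
OpcUaClient client = OpcUaClient.create(config);
client.connect().get();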
When the ExampleServer starts up it logs that its using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a folder rejected where rejected client certificates are stored. If you move the certificate(s) to the trusted folder the client should then be able to connect using security.
How do I change the monitoring-agent.config to go out via a proxy with authentication?
The change log states...
Monitoring Agent 2.3.1.89-1
Released 2014-07-08
Added support for HTTP proxy configuration in the agent configuration file.
But I can't see how to do this.
Following wdberkeley's link I can add this value to the monitoring-agent.config file.
httpProxy=http://"pxproxy01":3128
But this gives...
Failure getting conf. Op: Get Err: Proxy Authentication Required
Is there any way to set the authentication user/password?
Edit file:
C:\MMSData\Monitoring\monitoring-agent.config
Add line...
httpProxy=http://<insert_server_address>:<insert_port>
e.g.
httpProxy=http://PROXY01.server.com:3128
Then get the proxy control team, whoever they may be, to exclude the following from requiring authentication:
https://mms.mongodb.com 80
https://mms.mongodb.com 443
This has worked for me. I now have the MMS Agent on Windows sending stats to the MMS service.
Thanks to @wdberkeley for starting me off on this route.
wdberkeley, the page you linked to does not exist, and the classic page's PDF and HTTP versions state 'HTTP_PROXY' rather than 'httpproxy' (in the OSX and tar.gz sections); only section '6.6 Monitoring Agent Configuration' states the correct property name, 'httpproxy'.
I'm trying to read from (and later maybe even write to) a Google Spreadsheet with Net::Google::Spreadsheets.
The most boilerplate script dies with "Login failed" and no further detail:
use Net::Google::Spreadsheets;
my $service = Net::Google::Spreadsheets->new(
username => 'myusername@googlemail.com',
password => 'mypassword'
);
All I'm getting is
Net::Google::AuthSub login failed
Sadly, I don't know how one would diagnose or fix this issue. Anyone?
Thanks so much!
It may be because of SSL certificate checking. You can skip the check with:
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;
Though really you should set the certificate authorities correctly, as per the message returned by the Net::Google::AuthSub module:
Can't verify SSL peers without knowing which Certificate Authorities
to trust
This problem can be fixed by either setting the PERL_LWP_SSL_CA_FILE
environment variable or by installing the Mozilla::CA module.
To disable verification of SSL peers set the
PERL_LWP_SSL_VERIFY_HOSTNAME environment variable to 0. If you do
this you can't be sure that you communicate with the expected peer.
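For example, the safer route might look like this; a minimal sketch, assuming the Mozilla::CA module is installed (a path to your own CA bundle works just as well):
use Mozilla::CA;

# Tell LWP which CA bundle to trust instead of disabling verification.
$ENV{PERL_LWP_SSL_CA_FILE} = Mozilla::CA::SSL_ca_file();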
When trying to hit an environment with improperly configured SSL certificates, I get the following error:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:352)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:390)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:562)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:776)
at dispatch.BlockingHttp$class.dispatch$BlockingHttp$$execute(Http.scala:45)
at dispatch.BlockingHttp$$anonfun$execute$1$$anonfun$apply$3.apply(Http.scala:58)
at dispatch.BlockingHttp$$anonfun$execute$1$$anonfun$apply$3.apply(Http.scala:58)
at scala.Option.getOrElse(Option.scala:108)
at dispatch.BlockingHttp$$anonfun$execute$1.apply(Http.scala:58)
at dispatch.Http.pack(Http.scala:25)
at dispatch.BlockingHttp$class.execute(Http.scala:53)
at dispatch.Http.execute(Http.scala:21)
at dispatch.HttpExecutor$class.x(executor.scala:36)
at dispatch.Http.x(Http.scala:21)
at dispatch.HttpExecutor$class.when(executor.scala:50)
at dispatch.Http.when(Http.scala:21)
at dispatch.HttpExecutor$class.apply(executor.scala:60)
at dispatch.Http.apply(Http.scala:21)
at com.secondmarket.cobra.lib.delegate.UsersBDTest.tdsGet(UsersBDTest.scala:130)
at com.secondmarket.cobra.lib.delegate.UsersBDTest.setup(UsersBDTest.scala:40)
I would like to ignore the certificates entirely.
Update: I understand the technical concerns regarding improperly configured SSL certs and the issue isn't with our boxes but a service we're using. It happens mostly on test boxes rather than prod/stg so we're investigating but needed something to test the APIs.
You can't 'ignore the certificates entirely' for the following reasons:
The problem in this case is that the client didn't even provide one.
If you don't want security why use SSL at all?
I have no doubt whatsoever that many, perhaps most, of these alleged workarounds 'for development' have 'leaked' into production. There is a significant risk of deploying an insecure system if you build an insecure system. If you don't build the insecurity in, you can't deploy it, so the risk vanishes.
The following was able to allow unsafe SSL certs (note this is the scalaj-http API rather than Dispatch):
Http.postData(url, payload).options(HttpOptions.allowUnsafeSSL,
HttpOptions.readTimeout(5000))
For the newest version of Dispatch (0.13.2), you can use the following to create an http client that accepts any certificate:
val myHttp = Http.withConfiguration(config => config.setAcceptAnyCertificate(true))
Then you can use it for GET requests like this:
myHttp(url("https://www.host.com/path").GET OK as.String)
(Modify accordingly for POST requests...)
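For instance, a POST with a string body might look like this; a minimal sketch using the same myHttp client, with a hypothetical URL and payload (in Dispatch, << with a String sets the body and switches the request to POST):
val payload = """{"example": "data"}"""  // hypothetical request body
myHttp(url("https://www.host.com/path") << payload OK as.String)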
I found this out here: Why does dispatch throw "java.net.ConnectException: General SSLEngine ..." and "unexpected status" exceptions for a particular URL?
And to create an Http client that does verify the certificates, I found some sample code here: https://kevinlocke.name/bits/2012/10/03/ssl-certificate-verification-in-dispatch-and-asynchttpclient/.