Securing a REST call to Vault secrets management

Been trying to figure out how to do this for a while. Essentially, Vault does not seem to have a secure option for its REST calls. I want these REST calls to be encrypted along as much of the path between point A and point B as possible. My thoughts have been the following:
Use an SSH tunnel
Use a TLS tunnel like Stunnel
I currently have Vault in a Docker container, so that’s something else to mention. Has anyone encountered this situation, and how did you deal with it?
UPDATE: So, using the Python API (hvac), I am getting the following error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='0.0.0.0',
port=8200): Max retries exceeded with url: /v1/secret (Caused by
SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_record',
'wrong version number')],)",),))
Using the following code:
import os
import hvac

# Connect to Vault over HTTPS using a client token
client = hvac.Client(url='https://0.0.0.0:8200', token='my-token-here')

Vault has TLS enabled by default, so all your REST calls are already encrypted. If you are having trouble using HTTPS, have a look at the documentation for the VAULT_CACERT and VAULT_CAPATH environment variables. From Vault's documentation:
VAULT_CACERT
Path to a PEM-encoded CA certificate file on the local disk. This file is used to verify the Vault server's SSL certificate. This environment variable takes precedence over VAULT_CAPATH.
VAULT_CAPATH
Path to a directory of PEM-encoded CA certificate files on the local disk. These certificates are used to verify the Vault server's SSL certificate.
You can use tools like tcpdump or wireshark to make sure that your requests are indeed encrypted.
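If you are connecting through hvac, as in the update above, you can also point the client at your CA bundle directly instead of relying on environment variables. A minimal sketch, assuming the CA certificate lives at /vault/certs/ca.pem (a hypothetical path); hvac's verify parameter behaves like the one in requests:

import hvac

# verify takes a path to a CA bundle used to validate the server certificate
client = hvac.Client(
    url='https://your-host.com:8200',
    token='my-token-here',
    verify='/vault/certs/ca.pem',
)
print(client.is_authenticated())  # True once TLS and the token check out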

To elaborate for Vault running in a container: you need to create a configuration file for Vault that contains something similar to this (Chef/Ruby code):
config_content = %(
{
  "storage": {
    ...
  },
  "default_lease_ttl": "768h",
  "max_lease_ttl": "8766h",
  "listener": [
    {"tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 0,
      "tls_cert_file": "/vault/certs/my-cert-combined.pem",
      "tls_key_file": "/vault/certs/my-cert.key"
    }}],
  "log_level": "info"
}
)
Especially note the listener portion. Make your backend storage whatever you want to use (not the dev default of in-memory!).
Note that you will also need a valid certificate and its private key in the volume bound into the container.
Store this configuration file in a directory that gets bound inside the container to the path /vault/config. I use /var/vault/config on my host. For example (more Ruby/Chef):
docker_container 'vault' do
  container_name 'vault'
  tag 'latest'
  port '8200:8200'
  cap_add ['IPC_LOCK']
  restart_policy 'always'
  volumes ['/var/vault:/vault']
  command 'vault server -config /vault/config'
  action :run_if_missing
end
That command tells Vault to look in /vault/config, where it should find your config file (with a .json extension). Note it is important for the config file's listener->tcp->address to be 0.0.0.0 rather than 127.0.0.1, because otherwise Vault will not accept connections arriving from outside the container.
Vault will then start up with TLS encryption on all transactions. Set VAULT_ADDR to https://your-host.com:8200 and away you go.
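To sanity-check the listener from outside the container, you can hit Vault's unauthenticated health endpoint over HTTPS. A short sketch using Python's requests (the hostname and CA path are placeholders):

import requests

# /v1/sys/health reports initialization and seal status without a token
resp = requests.get(
    'https://your-host.com:8200/v1/sys/health',
    verify='/path/to/ca.pem',  # CA that signed the server certificate
)
print(resp.status_code, resp.json())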

In my case, I was testing in my local environment, where TLS was not enabled. So instead of calling the secured HTTPS URL, https://localhost:8200, I called regular HTTP: http://localhost:8200.
This solved the error; the "wrong version number" handshake failure above is what you get when a client speaks TLS to a listener that is serving plain HTTP.

Related

Does server -dev mode store the data in Windows?

I tried running the Vault server locally with the dev mode option. I got a root token, which I exported to the environment variables. But once I stopped the server and started it again, it said Invalid Request, unable to start the server with my token.
Also, does the in-memory Vault server store its secrets? If so, where does it store them on my Windows machine? I have exported VAULT_DEV_ROOT_TOKEN_ID to my environment variables with the value s.WC4LYVf6oOyllP6HjR0A3nvo.
I tried restarting the server several times:
C:\Users\user>vault server -dev
==> Vault server configuration:
Api Address: http://127.0.0.1:8200
Cgo: disabled
Cluster Address: https://127.0.0.1:8201
Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: false, enabled: false
Storage: inmem
Version: Vault v1.2.2
Error initializing Dev mode: failed to create root token with ID "s.WC4LYVf6oOyllP6HjR0A3nvo": 1 error occurred:
* invalid request
The problem here was that whenever we start the Vault server in dev mode on Windows, it generates a new root access token. If we export VAULT_DEV_ROOT_TOKEN_ID to the environment variables, Vault tries to start the server with that token. But since the token was already used by a previous run, the server is not allowed to start.
Because you are running in dev mode, you'll need to unset the token: whenever you restart the server, it generates a new token and invalidates the previous one. This is how to get it done on macOS, specifically Big Sur.
Do these:
Open the .zshrc file. Issue this command in the terminal: open ~/.zshrc.
In the previous setup, the Vault URL and token were added to .zshrc; comment these lines out. These are the lines:
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_DEV_ROOT_TOKEN_ID=s.a009oZnrl78a9h1vlRw1kGqL
Issue the startup command in the terminal again: vault server -dev
This time around it will generate a new token. Copy it, paste it in, and replace the previous token. This is the line to edit:
export VAULT_DEV_ROOT_TOKEN_ID=s.a009oZnrl78a9h1vlRw1kGqL
Once you've added it, uncomment the Vault lines in the .zshrc file.
Save the file and close it, then in your terminal issue source ~/.zshrc to refresh the .zshrc file.
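To confirm the new token actually works against the running dev server, a quick check with hvac (the Python client used earlier on this page) can help; the token value below is a placeholder:

import hvac

# Dev mode listens on plain HTTP at 127.0.0.1:8200 with TLS disabled
client = hvac.Client(url='http://127.0.0.1:8200', token='s.your-new-dev-token')
print(client.is_authenticated())  # True if the token matches the running server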

How to fix "Your JWT secret key is not set up, you will not be able to log into the JHipster" during the startup of jhipster-registry container

I am trying to launch a microservice application with JHipster. Each of my services runs in a Docker container. When jhipster-registry is starting up, I receive this error:
2019-06-18 18:58:39.066 INFO 1 --- [ main] i.g.j.r.security.jwt.TokenProvider : The JWT key used is not Base64-encoded. We recommend using the `jhipster.security.authentication.jwt.base64-secret` key for optimum security.
2019-06-18 18:58:39.067 ERROR 1 --- [ main] i.g.j.r.security.jwt.TokenProvider :
----------------------------------------------------------
Your JWT secret key is not set up, you will not be able to log into the JHipster.
Please read the documentation at https://www.jhipster.tech/jhipster-registry/
This causes the jhipster-registry service to exit with a code of 1.
However, my application.yml file currently contains a Base64-encoded JWT secret key:
jhipster:
  security:
    authentication:
      jwt:
        base64-secret: MjNiZjdiNDk5MGM4MjE4ODI4YzRiNjZkOTRhNTU3YmNkMWRmMWYxMzkzYjAzMzI5OWI0MzNjNzVmZjg0ZDRkNDkwOTNkNjlmNjU4Zjc0NmEyYTQ3NzViMWIzZTliYjNkNjI5ZQ==
I am currently using the docker image jhipster/jhipster-registry:v5.0.1. I have tried using v5.0.2 and the error persists. I have also tried changing my application.yml to include an empty secret parameter like so, but this didn't result in any change.
secret:
base64-secret: MjNiZjdiNDk5MGM4MjE4ODI4YzRiNjZkOTRhNTU3YmNkMWRmMWYxMzkzYjAzMzI5OWI0MzNjNzVmZjg0ZDRkNDkwOTNkNjlmNjU4Zjc0NmEyYTQ3NzViMWIzZTliYjNkNjI5ZQ==
I also tried the solution suggested in How to fix Invalid JWT with JHipster Registry [Docker]? and it did not work for me. My docker-compose.yml and application.yml are exactly the same as those of the other people on my team, and the registry service launches fine for them. How do I resolve this error?
EDIT: This started happening after I changed my Windows password.
Probably your Docker installation doesn't have access to the filesystem where the config lies.
In my case the firewall was blocking the access.
Check your Docker Desktop installation:
Docker Desktop -> Settings -> Shared Drives -> Reset credentials -> re-enter your new credentials.
Go to your Docker Desktop settings and, under Shared Drives, check that you've selected the drives you want to share with Docker.

Allow a self-signed certificate in ownCloud on a Synology

I have ownCloud version 9.1.8 running on a Synology. Now I have installed ONLYOFFICE on a local server with a self-signed certificate. It is important to know that the ONLYOFFICE server runs locally on the network, so I cannot get a certificate from e.g. Let's Encrypt: I only have a local server name, not a public one, so Let's Encrypt cannot verify the server. However, the server itself can access the internet, if that helps with a solution.
Now I have the problem that ownCloud gives me the following error message
"Error while downloading the document file to be converted."
when I want to save the URL in the ONLYOFFICE configuration in ownCloud. I guess the problem is that I am using a self-signed certificate. Do you know what I can do? Google does not really help me.
"Error while downloading the document file to be converted."
means that DocumentServer cannot validate your storage's self-signed certificate (OC in your case)
There are 2 possible workarounds:
1) Change "rejectUnauthorized" to false in the /etc/onlyoffice/documentserver/default.json config file
2) Change the default Node.js CAstore:
Edit the files:
/etc/supervisor/conf.d/onlyoffice-documentserver-converter.conf
/etc/supervisor/conf.d/onlyoffice-documentserver-docservice.conf
Add the flag --use-openssl-ca to the parameters in this line.
Then you need to add your certificate to the default CA store and restart the ONLYOFFICE services:
supervisorctl restart all
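For option 1, one way to flip the setting is a small Python script that edits the JSON in place. The nesting under services.CoAuthoring.requestDefaults is an assumption from memory and may differ between DocumentServer versions, so check your own default.json first:

import json

path = '/etc/onlyoffice/documentserver/default.json'
with open(path) as f:
    cfg = json.load(f)

# Assumed location of the setting; verify against your own default.json
cfg['services']['CoAuthoring']['requestDefaults']['rejectUnauthorized'] = False

with open(path, 'w') as f:
    json.dump(cfg, f, indent=2)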

Understanding OPC-UA Security using Eclipse Milo

I am new to the OPC UA world and Eclipse Milo.
I do not understand how security works here.
Looking at the client-examples provided by eclipse-milo, I see a few security properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample. This example internally runs the ExampleServer.
The ExampleServer is configured to run with Anonymous and UsernamePassword providers. It is also bound to accept SecurityPolicy.None, Basic128Rsa15, Basic256, and Basic256Sha256, with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None, where MessageSecurityMode is None too.
The problem is that with AnonymousProvider I could connect to the server with every SecurityPolicy/MessageSecurityMode pair mentioned above (without client certificates provided).
But I could not do the same with UsernameProvider: for UsernameProvider, only the SecurityPolicy/MessageSecurityMode pair None/None runs successfully.
All other pairs throw a security checks failed exception (when a certificate is provided) or user access denied (when no client certificate is provided). How do I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation besides the example code, which itself is not documented.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider supplies the credentials that identify the user of the session, if any.
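To make the relationship concrete, here is how the same knobs line up when configuring a client. This sketch uses the python-opcua client rather than Milo (the endpoint URL and file names are placeholders), but the mapping of policy, mode, certificate/key, and user identity is the same:

from opcua import Client

client = Client("opc.tcp://localhost:12686/milo")  # placeholder endpoint
# SecurityPolicy, MessageSecurityMode, client certificate, and private key in one string
client.set_security_string("Basic256Sha256,SignAndEncrypt,client-cert.der,client-key.pem")
# The IdentityProvider equivalent: user credentials for the session
client.set_user("user1")
client.set_password("password")
client.connect()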
When the ExampleServer starts up, it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a rejected folder where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.

Installing Kubernetes on CoreOS with rkt and automated scripts

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and I have some questions.
My etcd2 is using TLS keys; I can't see anywhere in the scripts where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually, I needed to start the rkt api-service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
thanks!
update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2. I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it repeats the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
so I was guessing it's because I didn't define the etcd-related TLS certificates in the controller script, and that's why it's stuck in that phase.
On my MacBook Pro laptop I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
so when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
so I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!
My etcd2 is using TLS keys; I can't see anywhere in the scripts where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
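Since the scripts expect an externally managed etcd, one quick way to confirm that the controller host can actually reach your TLS-enabled etcd endpoints is a direct health check. A sketch using Python's requests, reusing the certificate file names from your etcdctl alias (adjust the paths to wherever the certs live on the controller host):

import requests

# etcd exposes an unauthenticated /health endpoint on its client port
resp = requests.get(
    'https://coreos-2.tux-in.com:2379/health',
    verify='/path/to/ca.pem',  # CA that signed the etcd server certificates
    cert=('/path/to/etcd1.pem', '/path/to/etcd1-key.pem'),  # client cert and key
)
print(resp.json())  # a healthy member reports {"health": "true"}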
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
When I tried to install Kubernetes manually, I needed to start the rkt api-service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
These scripts include and start the rkt API service. As you can see below, the unit also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service