HashiCorp Vault CLI "bad certificate" error after successful login

I'm trying to issue Vault commands with the CLI from my local machine to my remote Vault server, but I keep getting a bad certificate error.
On the remote Vault server I
created an admin policy as outlined here in admin.hcl
wrote it with vault policy write admin admin.hcl
enabled certificate authentication with vault auth enable cert
associated the admin policy just created with a client admin certificate admin-cert.crt:
vault write auth/cert/certs/user display_name=admin policies=admin certificate=@vault/admin-cert.crt ttl=3600
Then on my local machine I was able to successfully login with this command
vault login -method=cert -ca-cert=CA.crt -client-cert=admin-cert.crt -client-key=client.key.pem name=user
which gave back a token. The output:
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token s.Q2NPAIRwhjNRJqvY8LscgSPy
token_accessor bQSI8zGJM4zspnlgvu2XEV1z
token_duration 1h
token_renewable true
token_policies ["admin" "default"]
identity_policies []
policies ["admin" "default"]
token_meta_authority_key_id n/a
token_meta_cert_name user
token_meta_common_name localhost.com
token_meta_serial_number 4285812225508508199151930131872257251014974781
token_meta_subject_key_id n/a
However, any subsequent Vault CLI commands from my local machine get back a tls: bad certificate error. I don't think my certs are incorrect, since I wouldn't have been able to complete the initial login in the first place. Rather, it looks like I need to turn off certificate authentication and use the token for my requests with the Vault CLI, because I am able to authenticate to the Vault UI with the token.

The -ca-cert argument should be the CA certificate for the Vault TLS listener, not the CA that issued the client authentication certificate. Your -client-cert is correct, and your -client-key is probably also correct, but your -ca-cert value should not be the CA associated with the cert auth method itself.
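For example, the login would look something like this (listener-CA.crt is an assumed file name for the CA that signed the Vault listener's TLS certificate):
# -ca-cert points at the listener CA, not the client-auth CA configured on the cert auth method
vault login -method=cert \
    -ca-cert=listener-CA.crt \
    -client-cert=admin-cert.crt \
    -client-key=client.key.pem \
    name=user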

The problem was in the configuration file.
listener "tcp" {
address = "0.0.0.0:8200"
/*
* Configuration required for mutual TLS
*/
tls_min_version = "tls12"
tls_cert_file = "/home/ubuntu/vault/vault-cert.crt" // path to pem encoded server certificate
tls_key_file = "/home/ubuntu/vault/server.key.pem" // path to pem encoded server private key
tls_require_and_verify_client_cert = "true" // require client certificate from inbound requests
tls_client_ca_file = "/home/ubuntu/vault/client-CA.crt" // path to client CA cert used to validate client certs
}
The tls_require_and_verify_client_cert needed to be false instead of true. I guess this made requests go through mTLS authentication even after logging in and obtaining the Vault token. However, the Vault CLI commands other than login don't provide parameters to pass in the certificates needed for mTLS, and so the requests failed with the tls: bad certificate error. Turning the mTLS requirement off allows for token authentication of the Vault requests after login.
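As a rough sanity check after flipping that setting and restarting Vault (the server address below is an assumption), subsequent CLI calls should authenticate with the stored token over ordinary server-side TLS:
# Only the listener CA is needed now; no client certificate on each request
export VAULT_ADDR="https://your-vault-server:8200"
export VAULT_CACERT="CA.crt"
vault token lookup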

Related

Should the k8s Cluster Certificate Authority be kept secret?

I have an Azure AKS cluster and a local kubeconfig:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
cluster:
certificate-authority-data: LS0...0tCg==
server: https://api-server:443
contexts:
- name: my-context
context:
cluster: my-cluster
namespace: samples
user: my-context-user
current-context: my-context
users:
- name: my-context-user
user:
token: ey...jI
that is used for connecting to the cluster, listing pods etc.
From what I understand it's important that the token in the kubeconfig is kept secret/private. But what about the certificate-authority-data?
Since it's just used to verify the API server certificate, I guess it has the same status as a public key and can be made publicly available, at least to internal team members.
Is there any documentation that confirms this?
I did not find any info regarding that here or here.
All clients (pods, normal users using a kubeconfig file, service accounts, component clients such as kubelet talking to kube-apiserver, etc.) use ca.crt in order to recognize self-signed certificates.
As we can see in the docs
Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to API server. The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the API server. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships using the certificate's organization fields. To include multiple group memberships for a user, include multiple organization fields in the certificate.
In a k8s cluster bootstrapped using kubeadm, kube-apiserver is by default configured with --client-ca-file=/etc/kubernetes/pki/ca.crt.
As you can see in the docs, the certificate-authority ca.crt should be referenced in the config file of every client used to securely connect to the k8s cluster.
Sometimes you may want to use Base64-encoded data embedded here instead of separate certificate files; in that case you need to add the suffix -data to the keys, for example, certificate-authority-data, client-certificate-data, client-key-data
By default this value is Base64-encoded and embedded in the kubeconfig file.
When your workload is accessing the k8s API from within a Pod, you can also find the CA referenced there:
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
By default, ca.crt is located in /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
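A hedged sketch of how a Pod typically uses those mounted files (this follows the standard in-cluster access pattern; kubernetes.default.svc is the default in-cluster API endpoint):
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
CACERT=${SERVICEACCOUNT}/ca.crt          # public: only verifies the API server's certificate
TOKEN=$(cat ${SERVICEACCOUNT}/token)     # secret: authenticates the service account
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" https://kubernetes.default.svc/api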
Why is ca.crt included in all kubeconfig files? As we can see in the docs:
A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed CA certificate to all clients and refresh the local list for valid certificates.
Regarding your last statement:
Since it's just used to verify the API server certificate, I guess it has the same status as a public key and can be made publicly available, at least to internal team members.
certificate-authority-data should be included in the kubeconfig files of all internal team members, while client-key-data or tokens should be kept secret between different clients.
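To illustrate, a hedged sketch (cluster name taken from the question; assumes base64 and openssl are available) that decodes certificate-authority-data and shows it contains only the CA certificate, no private key material:
kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="my-cluster")].cluster.certificate-authority-data}' \
  | base64 -d | openssl x509 -noout -subject -issuer -dates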

Service Fabric, Azure DevOps deployment fails: The specified network password is not correct

I was recently ordered by our IT team to disable the NAT pools on my Service Fabric cluster due to security risks. The only way I could do this was to deploy a new cluster with all its components.
Because this is a test environment, I opted to use a self-signed cert without a password for my cluster; the certificate is in my Key Vault and the cluster is up and running.
The issue I have now is when I try to deploy my application from an Azure Devops Release Pipeline I get the following message:
An error occurred attempting to import the certificate. Ensure that your service endpoint is configured properly with a correct certificate value and, if the certificate is password-protected, a valid password. Error message: Exception calling "Import" with "3" argument(s): "The specified network password is not correct.
I generated the self-signed certificate in Key Vault, downloaded the certificate, and used PowerShell to get the Base64 string for the service connection.
Should I create the certificate myself, with a password?
With the direction of the two comments supplied, I ended up generating a certificate on my local machine using the PowerShell script included with Service Fabric's local runtime.
A small caveat here is to change the key size in the script to a larger key size than the default, because Key Vault does not support 1024-bit keys.
I then exported the pfx from my user certificates, added a password (this is required for the service connection), and imported the new pfx into my Key Vault.
Redeployed my cluster and it worked.
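For reference, a hedged, roughly equivalent sketch of the export-and-encode step using openssl instead of the PowerShell tooling (file names and the password are assumptions):
# Export a password-protected PFX, then Base64-encode it for the Azure DevOps service endpoint
openssl pkcs12 -export -in cluster-cert.pem -inkey cluster-key.pem -out cluster-cert.pfx -passout pass:ChangeMe123
base64 -w0 cluster-cert.pfx > cluster-cert.pfx.b64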

Keycloak Gatekeeper always fails to validate the 'iss' claim value

Adding match-claims to the configuration file doesn't seem to do anything. Gatekeeper always throws the same error when opening a resource (with or without the property).
My Keycloak server is inside a Docker container, accessible from the internal network as http://keycloak:8080 and from the external network as http://localhost:8085.
I have Gatekeeper connecting to the Keycloak server on the internal network. The request comes from the external one; therefore, the discovery-url will not match the 'iss' token claim.
Gatekeeper is trying to use the discovery-url as the 'iss' claim. To override this, I'm adding the match-claims property as follows:
discovery-url: http://keycloak:8080/auth/realms/myRealm
match-claims:
iss: http://localhost:8085/auth/realms/myRealm
The logs look like:
On startup
keycloak-gatekeeper_1 | 1.5749342705316222e+09 info token must contain
{"claim": "iss", "value": "http://localhost:8085/auth/realms/myRealm"}
keycloak-gatekeeper_1 | 1.5749342705318246e+09 info keycloak proxy service starting
{"interface": ":3000"}
On request
keycloak-gatekeeper_1 | 1.5749328645243566e+09 error access token failed verification
{ "client_ip": "172.22.0.1:38128",
"error": "oidc: JWT claims invalid: invalid claim value: 'iss'.
expected=http://keycloak:8080/auth/realms/myRealm,
found=http://localhost:8085/auth/realms/myRealm."}
This ends up in a 403 Forbidden response.
I've tried it on Keycloak-Gatekeeper 8.0.0 and 5.0.0, both with the same issue.
Is this supposed to work the way I'm trying to use it?
If not, what am I missing? How can I validate the iss claim or bypass this validation (preferably the former)?
It is failing during discovery data validation - your setup violates the OIDC specification:
The issuer value returned MUST be identical to the Issuer URL that was directly used to retrieve the configuration information. This MUST also be identical to the iss Claim value in ID Tokens issued from this Issuer.
It is a MUST, so you can't disable it (unless you want to hack the source code - it should be in the coreos/go-oidc library). Configure your infrastructure properly (e.g. use the same DNS name for Keycloak on the internal and external networks, content rewrite for internal network requests, ...) and you will be fine.
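As a quick hedged check of that requirement (realm name taken from the question), compare the issuer in the discovery document against the discovery-url Gatekeeper is configured with:
# The issuer returned here must be identical to the configured discovery-url
curl -s http://keycloak:8080/auth/realms/myRealm/.well-known/openid-configuration | grep -o '"issuer":"[^"]*"'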
Change the DNS name to host.docker.internal:
token endpoint: http://host.docker.internal/auth/realms/example-realm/protocol/openid-connect/token
issuer URL in your property file: http://host.docker.internal/auth/realms/example-realm
In this way both outside-world access and internal calls to Keycloak can be achieved.

Spring throwing error after SAML cert update

I have a perfectly working Spring Security web application that uses SAML SSO. The client (IdP) changed their certs. I updated the cert and the CA certs in my keystore.jks. I am getting redirected properly to the IdP; I log in and get properly redirected back to my app. At that point I see these in the logs:
Attempting to validate signature using key from supplied credential (validate) (SignatureValidator.java:54)
Creating XMLSignature object (buildSignature) (SignatureValidator.java:90)
Validating signature with signature algorithm URI: http://www.w3.org/2000/09/xmldsig#rsa-sha1 (validate) (SignatureValidator.java:64)
Validation credential key algorithm 'RSA', key instance class 'sun.security.rsa.RSAPublicKeyImpl' (validate) (SignatureValidator.java:65)
Signature validated with key from supplied credential (validate) (SignatureValidator.java:70)
SSL negotiation with xxxxxx using candidate credential was successful (verifySignature) (BaseSignatureTrustEngine.java:148)
Successfully verified signature using KeyInfo-derived credential (validate) (BaseSignatureTrustEngine.java:101)
Server certificate verify failed: signer not found
Attempting to establish trust of KeyInfo-derived credential (validate)
Connected to HTTPS on 34.196.133.252
Failed to validate untrusted credential against trusted key (validate) (ExplicitKeyTrustEvaluator.java:95)
org.opensaml.xml.validation.ValidationException: Signature did not validate against the credential's key
So it looks like something is being validated, but I do not understand why it's failing. I double-checked with keytool and all the CAs are there.

Kerberos test using kinit with no password (cert auth)

I did an extensive search before posting this question.
We have a Kerberos setup working fine for most users for our internal portal. For a few users we are getting the following error:
"Failed to create delegated GSSAPI token on behalf of
HTTP/ssologon.xxx.xxx.xx.com@XXX.XXX.XX.COM for
service@hostname.xxx.xxx.xx.com: Minor Status=-1765328230, Major
Status=851968, Message=Cannot find KDC for requested realm]"
I can test the Kerberos setup fine from the server side using kinit with a keytab file, etc.
The issue/question is how to test the same from the workstation/client PCs which are exhibiting the above error.
I could use kinit or kinit principal-name, but it prompts for a password. We have disabled password authentication and use X.509 certs/access cards to log in to our PCs/domain.
So, how do we use kinit or an equivalent to test Kerberos from a domain workstation
using the CLI and cert authentication?
I have seen the kinit -X option, but it is not available in the JDK 1.7/1.8 kinit on Windows 7, it seems. Is pkinit (MIT Kerberos) an option? It seems to be used more for web server to KDC authentication.
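If MIT Kerberos for Windows is available, a hedged sketch of a PKINIT smoke test from a workstation (the principal, realm, and certificate path are assumptions):
# Request a TGT with a certificate instead of a password, then list the ticket cache
kinit -X X509_user_identity=PKCS12:C:\certs\jdoe.p12 jdoe@XXX.XXX.XX.COM
klist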
Thank you in advance and appreciate the community's time and effort.
---- Additional Info 1----
Btw, I had the user purge all their tickets (klist purge) and then try accessing the SSO site (protected using IWA Kerberos), and verified they are issued a Kerberos ticket:
#5>  Client: xxjdoe @ XXX.XX.XXX
     Server: HTTP/ssologon.xxx.xxx.xx.xx @ XXX.XXX.XX.XXX
     KerbTicket Encryption Type: RSADSI RC4-HMAC(NT)
     Ticket Flags 0x40a40000 -> forwardable renewable pre_authent ok_as_delegate
     Start Time: 4/7/2017 13:54:59 (local)
     End Time: 4/7/2017 23:54:48 (local)
     Renew Time: 4/14/2017 13:54:48 (local)
     Session Key Type: RSADSI RC4-HMAC(NT)
-------- End 1 ---------------