Vault reports missing client token when using postgres storage backend

I am using Vault with the Postgres storage backend along with the kv secrets engine. I am using the Kubernetes auth method to get the Vault token. I followed the documentation below to set up Vault with Kubernetes:
https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
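The setup steps I ran are roughly the following sketch (the role, policy, and service account names are the ones from the tutorial, not necessarily my exact values):
vault auth enable kubernetes
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="https://$K8S_HOST:8443" \
    kubernetes_ca_cert="$SA_CA_CRT"
vault write auth/kubernetes/role/example \
    bound_service_account_names=vault-auth \
    bound_service_account_namespaces=default \
    policies=myapp-kv-ro \
    ttl=24h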
When I start the web application for the first time and try to retrieve the tokens, it works. But when I delete the webapp deployment, deploy the webapp again, and try to retrieve the Vault token again with the API
v1/auth/kubernetes/login
I get the following error:
error: 400 Bad Request: [{"errors":["missing client token"]}]
But the request does include the JWT of the service account.
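For reference, the login call the app makes looks roughly like this (the role name and the Vault address are placeholders):
curl --request POST \
    --data '{"jwt": "<service account JWT>", "role": "example"}' \
    https://<vault-address>/v1/auth/kubernetes/login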
Due to this error the pod keeps restarting, and then after some time Vault suddenly honours the request and returns the Vault token.
This looks strange. Is there any reason for such behavior?
UPDATE:
This issue does not happen with the Consul backend.

Related

Identity Server 4 API JWT, Load Balancing, Data Protection, Kubernetes,

Running into issues with multiple instances of IdentityServer4 on Kubernetes exposed by the load balancer. I don't think there is an issue with credential login; my issues are around JWT tokens. It works fine when there is only one instance.
Overview:
IdentityServer4
MongoDB Data Storage
PersistedGrantStore
Data Protection setup on Redis
Multiple .NET Core 3.1 Web APIs. Using AddIdentityServerAuthentication in startup, passing in the connection and the API name. I am running multiple instances of the API; reducing down to one, I still get the same issue. It works fine if there is only one instance of the Identity Server, but with multiple instances I get the following error on the API:
"Bearer" was not authenticated. Failure message: "IDX10501: Signature validation failed. Unable to match key:
I am not getting any errors or failed authentications on the IdentityServer logs.
So the question going around in my head is: with a JWT token, I believe the request should be validated by the token itself, i.e. the API should not be requesting info from the Identity Server? Identity Server has Data Protection set up with Redis as its store, and I can see it has dropped info in there. I have a persisted grant store, but tokens are not added to it.
Do I need to switch to resource vs JWT? What is likely overhead for that?
Are the tokens not getting shared between the API instances via Data Protection?
Thanks for any advice / suggestions.
In case anyone else comes across this: it was down to mistakenly leaving the developer signing credential in the Identity Server config. Replacing it with a certificate solved the issue.
builder.AddDeveloperSigningCredential();
to
builder.AddSigningCredential(rsaCertificate);

How to create basic authentication in kubernetes?

I want to create basic authentication in Kubernetes. Every document says that I should create a CSV file and then enter the username and password in it, but I do not want to use a file; I want a database or Kubernetes itself to handle it.
What can I do for basic authentication?
You can base your authentication on tokens if you don't want to use a static password file.
First option:
Service Account Tokens
A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
The plugin uses two optional flags: --service-account-key-file and --service-account-lookup.
Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server. Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec.
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
There are some drawbacks: because service account tokens are stored in secrets, any user with read access to those secrets can authenticate as the service account. Be careful when granting permissions to service accounts and read capabilities for secrets.
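As a quick sketch of the flow described above (the account name is an example, and on newer clusters the token secret is no longer created automatically):
kubectl create serviceaccount demo-user
kubectl get secret $(kubectl get serviceaccount demo-user -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 --decode
The decoded value is the signed JWT that can be sent as a bearer token.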
Second option:
Install OpenID Connect (full documentation you can find here: oidc).
OpenID Connect (OIDC) is a superset of OAuth2 supported by some service providers, notably Azure Active Directory, Salesforce, and Google. The protocol’s main addition on top of OAuth2 is a field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well known fields, such as a user’s email, signed by the server.
To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token.
Since all of the data needed to validate who you are is in the id_token, Kubernetes doesn’t need to “phone home” to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication.
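As an example, once your identity provider has issued an id_token, you can hand it to kubectl (all values below are placeholders; the oidc auth-provider flags are the ones from the OIDC documentation linked below):
kubectl config set-credentials oidc-user \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://accounts.google.com \
    --auth-provider-arg=client-id=<client id> \
    --auth-provider-arg=client-secret=<client secret> \
    --auth-provider-arg=id-token=<id_token> \
    --auth-provider-arg=refresh-token=<refresh_token>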
Kubernetes has no “web interface” to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
There’s no easy way to authenticate to the Kubernetes dashboard without using the kubectl proxy command or a reverse proxy that injects the id_token.
More information you can find here: kubernetes-authentication.

Hashicorp Vault CLI returns 403 when trying to use kv

I set up vault backed by a consul cluster. I secured it with https and am trying to use the cli on a separate machine to get and set secrets in the kv engine. I am using version 1.0.2 of both the CLI and Vault server.
I have logged in with the root token so I should have access to everything. I have also set my VAULT_ADDR appropriately.
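For completeness, my environment setup is just this (address redacted):
export VAULT_ADDR=https://{my-vault-address}
vault login <root token>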
Here is my request:
vault kv put secret/my-secret my-value=yea
Here is the response:
Error making API request.
URL: GET https://{my-vault-address}/v1/sys/internal/ui/mounts/secret/my-secret
Code: 403. Errors:
* preflight capability check returned 403, please ensure client's policies grant access to path "secret/my-secret/"
I don't understand what is happening here. I am able to set and read secrets in the kv engine no problem from the vault ui. What am I missing?
This was a result of me not reading documentation.
The request was failing because there was no secret engine mounted at that path.
You can check your secret engine paths by running vault secrets list -detailed
This showed that my kv secrets engine was mounted at the path kv, not secret as I was trying.
Therefore running vault kv put kv/my-secret my-value=yea worked as expected.
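Alternatively, if you want the secret/ path that most examples assume, you could mount a kv engine there yourself; a minimal sketch (version 2 is just an example choice):
vault secrets enable -path=secret -version=2 kv
vault kv put secret/my-secret my-value=yea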
You can enable a secrets engine for a specific path:
vault secrets enable -path=kv kv
https://www.vaultproject.io/intro/getting-started/secrets-engines
You need to change secret/my-secret to whichever path you mounted when you enabled the kv secrets engine.
For example, if you enable the secret engine like this:
vault secrets enable -version=2 kv-v2
You should write to kv-v2 instead of secret:
vault kv put kv-v2/my-secret my-value=yea
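To confirm the write, you can read the value back from the same path:
vault kv get kv-v2/my-secret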

Using KeyCloak Gateway in a K8S Cluster

I have KeyCloak Gateway running successfully locally providing Google OIDC authentication for the Kubernetes dashboard. However using the same settings results in an error when the app is deployed as a pod in the cluster itself.
The error I see when the Gateway is running in a K8S pod is:
unable to exchange code for access token {"error": "invalid_request: Credentials in post body and basic Authorization header do not match"}
I'm calling the gateway with the following options:
--enable-logging=true
--enable-self-signed-tls=true
--listen=:443
--upstream-url=https://mydashboard
--discovery-url=https://accounts.google.com
--client-id=<client id goes here>
--client-secret=<secret goes here>
--resources=uri=/*
With these settings applied to a container in a pod I can browse to the Gateway, am redirected to Google to log in, and then am redirected back to the Gateway where the error above is generated.
What could account for the difference between running the application locally and running it in a pod that would generate the above error?
This turned out to be a copy/paste fail in the end, with the client secret being incorrect. The error message wasn't much help here, but at least it was a simple fix.

Client token generated after logging into the AppRole auth backend in HashiCorp Vault doesn't allow reading secrets

I'm integrating HashiCorp Vault into my Node.js application using the node-vault-js npm package. I wanted to have multiple app roles defined, such as dev, stag, and prod, on my Vault server, so I used the AppRole auth backend. I followed all the steps in the AppRole documentation and obtained the role_id and secret_id needed for the role to perform a login. After that I was able to perform a login and obtained the client_token required for connecting with the Vault engine. But when using that generated client_token as the Vault token, I get a permission denied error.
The same behavior occurs even when I follow the same flow from the getting-started example in the official Vault API documentation, so it's not an issue related to the Node package.
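For reference, the flow I'm following is roughly this on the CLI (the secret path is an example, not my real one):
vault write auth/approle/login role_id=<role_id> secret_id=<secret_id>
VAULT_TOKEN=<client_token from the login response> vault kv get secret/my-app/config
It is the final read that comes back with permission denied.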