Can't connect Spring Cloud Vault to HCP Vault in Dev Mode Using Token Auth - spring-cloud

What I have:
HCP Vault in Development mode
Simple Spring Client using Spring Cloud Vault
CLI Client
Desired Outcome:
Inject data into a variable using HCP Vault
What is working:
Run Vault locally (in dev mode) and inject data into the variable
Get the data from HCP Vault using the Vault CLI
Notes:
I'm generating the admin token to avoid the use of policies
The secrets in HCP Vault and my local Vault are identical
This is my application.yml file:
spring.application.name: my-spring-boot-app
spring.cloud.vault:
  host: vault-cluster-public-vault-fjkdsahfdjksa.hfjksdfhdsajk.hashicorp.cloud
  port: 8200
  scheme: https
  authentication: TOKEN
  token: hvs.fdsjfhdsakjfhdasjkfhdasjkfhdasjkfhdasjkfhdasjkfhdasjkfhdsakj
spring.config.import: vault://
logging.level.org: INFO
logging.level.com: INFO

The problem was that I was missing the namespace header: HCP Vault places everything under the admin namespace, so it has to be set explicitly.
spring.cloud.vault:
  host: vault-cluster-public-vault-fjkdsahfdjksa.hfjksdfhdsajk.hashicorp.cloud
  port: 8200
  scheme: https
  namespace: admin
  authentication: TOKEN
  token: ${VAULT_TOKEN}
spring.config.import: vault://
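With the namespace in place, spring.config.import: vault:// pulls properties from the KV backend for the application name. As a minimal sketch (the backend and default-context values below are assumptions for illustration, not taken from my cluster), the KV mount and path can also be set explicitly:

spring.cloud.vault:
  kv:
    enabled: true
    backend: secret                       # name of the KV v2 mount (assumed default)
    default-context: my-spring-boot-app   # path under the mount that holds the secrets

Any key stored under that path can then be injected like a normal property, e.g. with @Value("${my.secret}"), where my.secret is a hypothetical key name.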

Related

How to access a HashiCorp Vault secret in Kubernetes

Hi, I have added a secret in my HashiCorp Vault at the below path:
cep-kv/dev/sqlpassword
I am trying to access the secret in my manifest as below:
spec:
  serviceAccountName: default
  containers: # List
  - name: cep-container
    image: myinage:latest
    env:
    - name: AppSettings__Key
      value: vault:cep-kv/dev/sqlpassword#sqlpassword
This is throwing the error below:
failed to inject secrets from vault: failed to read secret from path: cep-kv/dev/sqlpassword: Error making API request.\n\nURL: GET https://vaultnet/v1/cep-kv/dev/sqlpassword?version=-1\nCode: 403. Errors:\n\n* 1 error occurred:\n\t* permission denied\n\n" app=vault-env
Is the path I am trying to access the correct value?
vault:cep-kv/dev/sqlpassword#sqlpassword
I tried with the below path too:
value: vault:cep-kv/dev/sqlpassword
This says the secret was not found at the respective path. Can someone help me get the secret from HashiCorp Vault? Any help would be appreciated. Thanks.
Since you are getting a 403 permission denied error, you need to configure Kubernetes authentication. You can set it up with the following steps:
Enable the Kubernetes auth method:
vault auth enable kubernetes
Configure the Kubernetes authentication method to use the location of the Kubernetes API
vault write auth/kubernetes/config \
kubernetes_host=https://192.168.99.100:<your TCP port or blank for 443>
Create a named role:
vault write auth/kubernetes/role/demo \
bound_service_account_names=myapp \
bound_service_account_namespaces=default \
policies=default \
ttl=1h
Write a "myapp" policy that grants the "read" capability on the secret path:
vault policy write myapp - <<EOF
path "cep-kv/dev/*" {
  capabilities = ["read"]
}
EOF
For more information, follow the Kubernetes auth method Configuration docs. There is also a blog explaining the usage of Vault secrets in Kubernetes.
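One more detail worth checking: the role above is bound to the myapp service account in the default namespace, while the pod spec in the question runs with serviceAccountName: default. A rough sketch of aligning the pod with the role (myapp is just the example name from the role; use whatever service account you actually bind) would be:

spec:
  serviceAccountName: myapp   # must match bound_service_account_names in the Vault role
  containers:
  - name: cep-container
    image: myinage:latest
    env:
    - name: AppSettings__Key
      value: vault:cep-kv/dev/sqlpassword#sqlpassword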

Istio enabled GKE cluster not reliably communicating with Google Service Infrastructure APIs

I have been unable to reliably get my Istio-enabled Google Kubernetes Engine cluster to connect to Google Cloud Endpoints (the Service Management API) via the Extensible Service Proxy. When I deploy my Pods, the proxy always fails to start up, causing the Pod to be restarted, and outputs the following error:
INFO:Fetching an access token from the metadata service
WARNING:Retrying (Retry(total=0, connect=None, read=None, redirect=0, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fea4abece90>: Failed to establish a new connection: [Errno 111] Connection refused',)': /computeMetadata/v1/instance/service-accounts/default/token
ERROR:Failed fetching metadata attribute: http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token
However, after restarting, the proxy reports that everything is fine: it was able to grab an access token, and I am able to make requests to the Pod successfully:
INFO:Fetching an access token from the metadata service
INFO:Fetching the service config ID from the rollouts service
INFO:Fetching the service configuration from the service management service
INFO:Attribute zone: europe-west2-a
INFO:Attribute project_id: my-project
INFO:Attribute kube_env: KUBE_ENV
nginx: [warn] Using trusted CA certificates file: /etc/nginx/trusted-ca-certificates.crt
10.154.0.5 - - [23/May/2020:21:19:36 +0000] "GET /domains HTTP/1.1" 200 221 "-" "curl/7.58.0"
After about an hour, presumably because the access token has expired, the proxy logs indicate that it was again unable to fetch an access token and I can no longer make requests to my Pod.
2020/05/23 22:14:04 [error] 9#9: upstream timed out (110: Connection timed out)
2020/05/23 22:14:04 [error] 9#9: Failed to fetch service account token
2020/05/23 22:14:04 [error] 9#9: Fetch access token unexpected status: INTERNAL: Failed to fetch service account token
I have a ServiceEntry resource in place that should allow the proxy to make requests to the metadata server on the GKE node:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google-metadata-server
spec:
  hosts:
  - metadata.google.internal # GCE metadata server
  addresses:
  - 169.254.169.254 # GCE metadata server
  location: MESH_EXTERNAL
  ports:
  - name: http
    number: 80
    protocol: HTTP
  - name: https
    number: 443
    protocol: HTTPS
I have confirmed this is working by exec'ing into one of the containers and running:
curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token
How can I prevent this behaviour and reliably have the proxy communicate with the Google Service Infrastructure APIs?
Although I am not entirely convinced this is the solution, it appears that using a dedicated service account to generate access tokens within the Extensible Service Proxy container prevents the behaviour reported above, and I am able to reliably make requests to the proxy and upstream service, even after an hour.
The service account I am using has the following roles:
roles/cloudtrace.agent
roles/servicemanagement.serviceController
Assuming this is a stable solution to the problem, I am much happier with this as an outcome, because I am not 100% comfortable using the metadata server since it relies on the service account associated with the GKE node. That service account is often more powerful than it needs to be for ESP to do its job.
I will however be continuing to monitor this just in case the proxy upstream becomes unreachable again.
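For reference, this is roughly how the dedicated service account can be wired into the ESP container. It is only a sketch with placeholder names (the secret name and --service value are not from my real manifests), based on ESP's --service_account_key flag, which makes it read a key file instead of calling the metadata server:

containers:
- name: esp
  image: gcr.io/endpoints-release/endpoints-runtime:1
  args:
  - --http_port=8081
  - --backend=127.0.0.1:8080
  - --service=my-api.endpoints.my-project.cloud.goog
  - --rollout_strategy=managed
  - --service_account_key=/etc/esp/creds/service-account.json   # key for the dedicated SA
  volumeMounts:
  - name: esp-sa-creds
    mountPath: /etc/esp/creds
    readOnly: true
volumes:
- name: esp-sa-creds
  secret:
    secretName: esp-service-account-creds   # created from the downloaded JSON key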

aws-iam-authenticator & EKS

I've deployed a test EKS cluster with the appropriate configMap, and users that are SSO'd in can access the cluster by exporting session creds (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, etc.) and having the aws-iam-authenticator client installed in their terminal. The problem comes in when users attempt to use an AWS SSO profile stored in ~/.aws/config with the aws-iam-authenticator. The error that's received when running any kubectl command is the following:
$ kubectl get all
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've tested this on my local machine (AWS CLI v2) and I haven't had any success. I've exported an AWS profile found in the ~/.aws/config file via export AWS_PROFILE=User1, and running aws sts get-caller-identity correctly shows the profile being exported. I've switched between multiple named profiles and each one gets the correct identity and permissions; however, when running any kubectl command I get the above error. I've also tried symlinking config -> credentials, but no luck. The only way it works is if I export the access key, secret key, and session token as environment variables.
I suppose I can live with having to paste in the dynamic creds that come from AWS SSO, but my need to solve problems won't let me give up :(. I was following the thread found in this GitHub issue but had no luck. The kube config file that I have set up follows AWS's documentation.
I suspect there may be something off with the aws-iam-authenticator server deployment, but nothing shows in the pod logs. Here's a snippet from the tool's GitHub page, which I think I followed correctly, though I did skip step 3 for reasons that I forgot:
The Kubernetes API integrates with AWS IAM Authenticator for
Kubernetes using a token authentication webhook. When you run
aws-iam-authenticator server, it will generate a webhook configuration
file and save it onto the host filesystem. You'll need to add a single
additional flag to your API server configuration:
Kube Config File
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-cluster-name"
        - "-r"
        - "EKS-ADMIN-ROLE:ARN:::::::"
      env:
        - name: AWS_PROFILE
          value: "USER"
The AWS CLI v2 now supports AWS SSO, so I decided to update my kube config file to leverage the aws command instead of aws-iam-authenticator. Authentication via SSO is now a breeze! It looks like AWS wanted to get away from requiring an additional binary to authenticate to EKS clusters, which is fine by me! Hope this helps.
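For anyone looking for the concrete change, this is roughly what the users section of the kube config looks like with the aws CLI (the cluster name, region, and profile below are placeholders, and the exec apiVersion depends on your CLI and kubectl versions):

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--region"
        - "us-east-1"
        - "--cluster-name"
        - "eks-cluster-name"
      env:
        - name: AWS_PROFILE
          value: "User1"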

How do I forward headers to different services in Kubernetes (Istio)

I have a sample application (web-app, backend-1, backend-2) deployed on minikube, all under a JWT policy, and they all have proper destination rules, Istio sidecars, and mTLS enabled in order to secure east-west traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: oidc
spec:
  targets:
  - name: web-app
  - name: backend-1
  - name: backend-2
  peers:
  - mtls: {}
  origins:
  - jwt:
      issuer: "http://myurl/auth/realms/test"
      jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
When I run the following command I receive a 401 Unauthorized response when requesting the data from the backend, which is due to $TOKEN not being forwarded to backend-1 and backend-2 in the headers of the HTTP request.
$> curl http://minikubeip/api -H "Authorization: Bearer $TOKEN"
Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?
Edit:
This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get
{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}
Note that when I curl backend-1 or backend-2 with the same auth token, I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my Istio version is 1.1.15.
This is the policy I am applying:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  # peers:
  # - mtls: {}
  origins:
  - jwt:
      issuer: "http://10.148.199.140:8080/auth/realms/test"
      jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
Should the token be propagated to backend-1 and backend-2 without any other changes?
Yes, the policy should forward the token to both backend-1 and backend-2.
There is a GitHub issue where users had the same issue as you.
A few pieces of information from there:
The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth
Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config); you can find the code for that in pilot/pkg/security
And there is another question about this problem on Stack Overflow, where the accepted answer is:
The problem was resolved with two options: 1. Replace Service Name and port by external server ip and external port (for issuer and jwksUri) 2. Disable the usage of mTLS and its policy (Known issue: https://github.com/istio/istio/issues/10062).
From istio documentation
For each service, Istio applies the narrowest matching policy. The order is: service-specific > namespace-wide > mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.
To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.
If a service has no matching policies, both transport authentication and origin authentication are disabled.
Istio now supports header propagation; it probably didn't when this thread was created.
You can allow the original header to be forwarded by using forwardOriginalToken: true in JWTRules or forward a valid JWT payload using outputPayloadToHeader in JWTRules.
Reference: ISTIO JWTRule documentation
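For newer Istio releases that use RequestAuthentication instead of the v1alpha1 Policy, a minimal sketch might look like the following (the resource name and selector label are made up for illustration; the issuer and jwksUri reuse the values from the question):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-forward
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-app
  jwtRules:
  - issuer: "http://myurl/auth/realms/test"
    jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
    forwardOriginalToken: true              # keep the Authorization header on requests forwarded upstream
    # outputPayloadToHeader: x-jwt-payload  # alternatively, emit the verified claims in this header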

Keycloak provides invalid signature with Istio and JWT

I'm using Keycloak (latest) for OAuth 2.0, to validate authentication and provide a token (JWT); with the token provided, it allows access to the application URLs based on the permissions.
Keycloak is currently running in Kubernetes, with Istio as the gateway. For Keycloak, this is the policy being used:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: application-auth-policy
spec:
  targets:
  - name: notification
  origins:
  - jwt:
      issuer: http://<service_name>http.<namespace>.svc.cluster.local:8080/auth/realms/istio
      jwksUri: http://<service_name>http.<namespace>.svc.cluster.local:8080/auth/realms/istio/protocol/openid-connect/certs
  principalBinding: USE_ORIGIN
A client was registered in this Keycloak and an RSA key was created for it.
The issuer can generate a token normally and the policy was applied successfully.
Problem:
Even with everything set, the token provided by Keycloak has an invalid signature according to the JWT validator.
This token doesn't allow any access to the URLs as it should, returning a 401 code.
Anyone else had a similar issue?
The problem was resolved with two options:
1. Replace Service Name and port by external server ip and external port (for issuer and jwksUri)
2. Disable the usage of mTLS and its policy (Known issue: https://github.com/istio/istio/issues/10062).
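To illustrate option 1, here is a sketch of the same policy with the cluster-local service addresses swapped for an externally reachable address (the IP and port below are placeholders, not real values):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: application-auth-policy
spec:
  targets:
  - name: notification
  origins:
  - jwt:
      # external address of Keycloak instead of <service_name>http.<namespace>.svc.cluster.local
      issuer: http://203.0.113.10:30080/auth/realms/istio
      jwksUri: http://203.0.113.10:30080/auth/realms/istio/protocol/openid-connect/certs
  principalBinding: USE_ORIGIN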