What does "Everything is path based" mean in HashiCorp Vault?

In Vault documentation, specifically the policies page, there is this phrase:
Everything in Vault is path based, and policies are no exception
I wonder about this phrase: does it mean that, in Vault's architecture and internals, everything really is a path, similar to "Everything is a file" in Linux, which applies to processes, files, directories, sockets, pipes, etc.?
What makes me relate to this phrase is that secrets engines are defined by paths, and I assume Vault infers their types and which one to use from the given paths. Policies are also relatable, as you have to define exact paths for each policy, but what about other components like auth methods, audit devices, tokens, etc.?
I just want to understand what is meant by "path based" in the phrase "Everything in Vault is path based".

In Vault, everything is path based. This means that every operation performed in Vault is addressed through a path. The path determines both where the operation is routed and which permissions are required to execute it.
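Policies illustrate this directly: a policy is just a set of rules attached to paths. A minimal sketch (the policy name and path here are made up for illustration):
vault policy write read-passwords - <<EOF
path "mysecrets/passwords/*" {
  capabilities = ["read", "list"]
}
EOF
A token that carries this policy can then only perform the listed operations on requests that resolve to matching paths.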

Whether you're using the vault binary or hitting the HTTP API endpoints, secrets and configs are written to and read from a path.
For example, via the CLI:
VAULT_ADDR=https://myvault.example.com VAULT_TOKEN=xxxxxxxx-xxxxxxx-xxxxxx vault kv get mysecrets/passwords/root
would correspond to this HTTP endpoint (for a KV version 1 mount):
curl \
-H "X-Vault-Token: xxxxxxx-xxxxxx-xxxxxxx" \
-X GET \
https://myvault.example.com/v1/mysecrets/passwords/root
Here's another example:
Enabling the GCP secrets engine with a custom path:
vault secrets enable -path="my-project-123" gcp
If you wanted to enable a secrets engine from the HTTP API, the endpoint (path) is /sys/mounts. Details are in the sys/mounts API documentation.
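As a sketch, enabling the same mount over HTTP (reusing the hostname and mount path from this example) would look something like:
curl \
--header "X-Vault-Token: ..." \
--request POST \
--data '{"type": "gcp"}' \
https://myvault.example.com/v1/sys/mounts/my-project-123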
Writing a config:
vault write my-project-123/config credentials=@/path/to/creds.json ttl=3600 max_ttl=21600
Notice how the config is written to a path, and if you were to use the HTTP API endpoint to do this, then it would look something like this:
curl \
--header "X-Vault-Token: ..." \
--request POST \
--data @payload.json \
https://myvault.example.com/v1/my-project-123/config
where payload.json would contain your credentials (as text), ttl, and max_ttl.
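A rough sketch of such a payload.json (values are placeholders; the credentials field holds the contents of the service account key file as a string):
{
  "credentials": "<contents of /path/to/creds.json>",
  "ttl": 3600,
  "max_ttl": 21600
}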
Hence why Vault says everything is path based.
EDIT: TL;DR - being path based means there is parity between the HTTP API and the CLI (and any SDKs too). Compare this to a gcloud or aws command and its HTTP API endpoint counterpart, where there isn't much parity.

Related

How to access Kubernetes API from node directly

From a Kubernetes node, how can I access the API server? How can I find out the API endpoint and handle authentication? It is a Windows node, by the way.
I'm surprised there is not much information I could find on the Internet about this; is accessing the Kubernetes API directly from a node a bad design?
"From the node" sounds like a fringe use case, like add-ons, which is usually covered by the "admin.conf" file that was deployed during node attachment and contains whatever you need to connect to the API server.
A more usual approach would be to deploy your workload in a Pod whose service account has the proper role binding to access the API server.
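As a rough sketch of that approach, from inside a Pod you can use the service account token that Kubernetes mounts by default (assuming the service account has been bound to a role that allows the request):
# run inside the Pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/nodes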
How to access Kubernetes API from node directly?
There are multiple ways; one of them is from the master node:
# Get API Server URL:
kubectl cluster-info
# access it using curl
curl https://<api-server-ip>:6443/api/v1/nodes --cacert /etc/srv/kubernetes/pki/ca-certificates.crt --cert /var/lib/kubelet/pki/kubelet-client.crt --key /var/lib/kubelet/pki/kubelet-client.key
how can I find out the API endpoint and handle authentication?
One technique I use is passing --v=11 to kubectl commands; it will print the API endpoints of the Kubernetes resources being accessed.
# example:
kubectl get pods --v=11 2>&1 | grep GET
I1229 10:20:41.098241 42907 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.19.4 (linux/amd64) kubernetes/d360454" 'https://10.157.160.165:6443/api/v1/namespaces/default/pods?limit=500'
I1229 10:20:41.116964 42907 round_trippers.go:443] GET https://<apiserver>:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 18 milliseconds
It is a Windows node by the way
Ideally the above steps should work. You may need to find equivalent commands for grep and curl, and change the certificate paths to the appropriate locations for your node; you can find the location of the certs in the admin.conf file.
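On a recent Windows node, a rough equivalent might look like this (curl.exe ships with current Windows builds and findstr can stand in for grep; the certificate paths below are placeholders, not the actual locations):
kubectl cluster-info
curl.exe https://<api-server-ip>:6443/api/v1/nodes --cacert C:\path\to\ca.crt --cert C:\path\to\kubelet-client.crt --key C:\path\to\kubelet-client.key
kubectl get pods --v=11 2>&1 | findstr GET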

How can I represent an authorization bearer token in YAML

I have generated the access token and placed it in the mount path mentioned below, and this token needs to be included in the Authorization header when making a request against the retrieve-secret endpoint.
How can we achieve this in YAML?
volumeMounts:
- mountPath: /run/test
name: conjur-access-token
readOnly: true
This question is referencing CyberArk's Conjur Secrets Manager's Kubernetes authenticator. It uses a sidecar authenticator client to keep an authenticated session token for Conjur's API refreshed in a shared volume mount with an application container running within a Kubernetes pod. This allows the application container to request secret values Just-in-Time (JiT) from the Conjur API with a single API call.
There is a file located at /run/test/conjur-access-token (according to the manifest snippet you provided) that contains the authenticated session token to use to connect to the Conjur API. Your application container needs to read /run/test/conjur-access-token and use it in the Authorization header as a Token-based authorization. To use curl, this would look like:
curl -H "Authorization: Token token='$(cat /run/test/conjur-access-token)'" https://conjur.example.com/secrets/myorg/variable/prod%2Fdb%2Fpassword
Where:
/run/test/conjur-access-token is the path to the shared volume mount of the application container and sidecar Kubernetes authenticator client.
conjur.example.com is the Base URL for your Conjur Follower in the Kubernetes cluster (or outside, if that's the deployment method).
myorg is the organization account configured at the time of Conjur deployment and configuration.
prod%2Fdb%2Fpassword is the URL-encoded secret variable path in Conjur. It would otherwise be referenced as prod/db/password, but since forward slashes have special meaning in a URL/URI, each / needs to be encoded as %2F.
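If you need to wire this into the pod manifest itself (as the question asks), a rough sketch of the application container side could look like the following; the image name is a placeholder and the volume mount matches the snippet in the question:
containers:
- name: app
  image: my-app:latest
  command: ["/bin/sh", "-c"]
  args:
    - >
      curl -H "Authorization: Token token='$(cat /run/test/conjur-access-token)'"
      https://conjur.example.com/secrets/myorg/variable/prod%2Fdb%2Fpassword
  volumeMounts:
  - mountPath: /run/test
    name: conjur-access-token
    readOnly: true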
If the file containing your token in the mount path is called token, then you can simply do (assuming that you use curl):
curl -H "Authorization: Bearer $(cat /run/test/token)" ...

Vault (HashiCorp) - curl equivalent of "vault read"

Small question regarding HashiCorp Vault, please.
I have a secret in Vault, under cubbyhole/mytestkey
If I log in to the web UI, I can see the key mytestkey and its value under cubbyhole
If I use the Vault CLI, running vault read /cubbyhole/mytestkey, I do get the result.
vault read /cubbyhole/mytestkey
Key Value
--- -----
mytestkey mytestvalue
However, when I try it via curl (the token should be correct, since I used it to connect to the Vault web UI), I get:
curl -vik -H "X-Vault-Token: token" https://remote-vault/cubbyhole/mytestkey
HTTP 404
May I ask what is the issue with my curl command? A path issue? And the correct one would be?
Thank you
Your REST API endpoint is missing the port and the version of the API. You can update it to:
curl -vik -H "X-Vault-Token: token" https://remote-vault:8200/v1/cubbyhole/mytestkey
and modify the port if Vault is running on something other than the default 8200.
You can find more information in the relevant documentation.
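As a side note, newer versions of the Vault CLI can print the curl command they would have run, which makes it easy to check the exact path and API version:
vault read -output-curl-string cubbyhole/mytestkey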

Accessing Concourse REST API from resource

I am trying to write a custom Concourse resource (in Python) that accesses the Concourse instance's REST API for information. I'm stuck at obtaining the bearer token at login. The issue is that when I follow the gist of this shell script
#!/bin/bash
## Variables required #need to update these to take inputs for getting token per team and target.
CONCOURSE_URL="http://localhost:8080"
CONCOURSE_USER="test"
CONCOURSE_PASSWORD="test"
CONCOURSE_TEAM="test"
CONCOURSE_TARGET="my-concourse"
function get_token() {
## Create a file named token that will be used to read and write tokens
touch token
## extract the LDAP authentication url and write to token file
LOCAL_AUTH_URL=$CONCOURSE_URL$(curl -b token -c token -L "$CONCOURSE_URL/sky/login" -s | grep "/sky/issuer/auth/local" | awk -F'"' '{print $4}')
echo "url is $LOCAL_AUTH_URL"
# login using username and password while writing to the token file
curl -s -o /dev/null -b token -c token -L --data-urlencode "login=$CONCOURSE_USER" --data-urlencode "password=$CONCOURSE_PASSWORD" "$LOCAL_AUTH_URL"
ATC_BEARER_TOKEN=`grep 'Bearer' token | cut -d ' ' -f2 | sed 's/"$//'`
echo $ATC_BEARER_TOKEN
}
there are many redirects involved, and at least some of them refer to the Concourse instance as being at http://localhost:8080, which does not work from inside the Docker container of the resource.
So I wanted to parametrize the external base url and explicitly give it in resource config. Manually handling the redirects and rewriting the local IP into the URL fails at the last "approval" step with a code 400, probably because it looks like some kind of a cross-domain attack.
The environment variable ATC_EXTERNAL_URL is always localhost:8080, and I suspect that this is also used when forming the redirect URLs. Can this be set somewhere?
I'm bad at Golang, but it seems to me that https://github.com/concourse/concourse-pipeline-resource calls the fly binary to achieve some kind of login from inside a resource. I can't say I understand what it does or how.
All help appreciated...
The env var $ATC_EXTERNAL_URL most likely corresponds to the external URL specified when you start Concourse, so yes, it can (and if you're using external auth like GitHub or OAuth, must) be changed. You're correct in assuming that it's used to construct callback URLs.
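If you run the web node yourself, the external URL is normally set when starting it, for example (hostname is a placeholder, other flags omitted):
concourse web --external-url https://concourse.example.com ...
# or, equivalently, via the environment:
CONCOURSE_EXTERNAL_URL=https://concourse.example.com concourse web ...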
Also, I don't want to be That Guy (TM), but the Concourse REST API is not public and is subject to change at any time. What are you trying to do that you can't get from the fly CLI? Your resource could call the ATC_EXTERNAL_URL to download the fly CLI when it's needed and then execute commands that way.
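A rough sketch of that approach from inside a resource container (the CLI download endpoint and login flags are the commonly used ones; the target name is arbitrary):
curl -sL "$ATC_EXTERNAL_URL/api/v1/cli?arch=amd64&platform=linux" -o /usr/local/bin/fly
chmod +x /usr/local/bin/fly
fly -t local login -c "$ATC_EXTERNAL_URL" -n "$CONCOURSE_TEAM" -u "$CONCOURSE_USER" -p "$CONCOURSE_PASSWORD"
fly -t local pipelines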

"Access Denied. Provided scope(s) are not authorized" error when trying to make objects public using the REST API

I am attempting to set permissions on individual objects in a Google Cloud Storage bucket to make them publicly viewable, following the steps indicated in Google's documentation. When I try to make these requests using our application service account, it fails with HTTP status 403 and the following message:
Access denied. Provided scope(s) are not authorized.
Other requests work fine. When I try to do the same thing but by providing a token for my personal account, the PUT request to the object's ACL works... about 50% of the time (the rest of the time it is a 503 error, which may or may not be related).
Changing the IAM policy for the service account to match mine - it normally has Storage Admin and some other incidental roles - doesn't help, even if I give it the overall Owner IAM role, which is what I have.
Neither the XML API nor the JSON API makes a difference. That the request sometimes works with my personal credentials suggests the request is not incorrectly formed, but there must be something else I've overlooked so far. Any ideas?
Check the scopes of the service account in case you are using the default Compute Engine service account. By default the scopes are restricted, and for GCS the access is read-only. If needed, run rm -r ~/.gsutil to clear gsutil's cached credentials.
When trying to access GCS from a GCE instance and getting this error message, note that the default scope is devstorage.read_only, which prevents all write operations.
It is not clear that scope https://www.googleapis.com/auth/cloud-platform is required when scope https://www.googleapis.com/auth/devstorage.read_only is given by default (e.g. to read startup scripts); the scope should rather be https://www.googleapis.com/auth/devstorage.read_write.
And one can use gcloud beta compute instances set-scopes to edit the scopes of an instance:
gcloud beta compute instances set-scopes $INSTANCE_NAME \
--project=$PROJECT_ID \
--zone=$COMPUTE_ZONE \
--scopes=https://www.googleapis.com/auth/devstorage.read_write \
--service-account=$SERVICE_ACCOUNT
One can also pass the known alias names for scopes, e.g. --scopes=cloud-platform. The command must be run outside of the instance (because of permissions), and the instance must be shut down in order to change the service account.
Follow the documentation you provided, taking into account these points:
The access control system for the bucket has to be fine-grained (not uniform).
In order to make objects publicly available, make sure the bucket does not have public access prevention enabled; check the public access prevention documentation for further information.
Grant the service account the appropriate permissions on the bucket. The Storage Legacy Object Owner role (roles/storage.legacyObjectOwner) is needed to edit object ACLs. This role can be granted for individual buckets, not for projects.
Create the json file as indicated in the documentation (for making an object public it would typically contain the entity allUsers with the role READER).
Use gcloud auth application-default print-access-token to get an authorization access token and use it in the API call. The API call should look like:
curl -X POST --data-binary @JSON_FILE_NAME.json \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
"https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME/acl"
You need to add the OAuth scope cloud-platform when you create the instance. See: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#--scopes
Either select "Allow full access to all Cloud APIs" or take the fine-grained approach.
So, years later, it turns out the problem is that "scope" is used by the Google Cloud API to refer to two subtly different things. One is the set of access scopes granted to the service account, which is what I (and most of the other people who answered the question) kept focusing on, but the problem turned out to be something else. The Python class google.auth.credentials.Credentials, used by various Google Cloud client classes to authenticate, also carries OAuth permission scopes. You can see where this is going: the client I was using was being created with a default OAuth scope of 'https://www.googleapis.com/auth/devstorage.read_write', but making something public requires the scope 'https://www.googleapis.com/auth/devstorage.full_control'. Adding this scope to the OAuth credential request means that setting public permissions on objects works.
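As an illustration of that fix, requesting the broader scope when building the credentials looks roughly like this in Python (the key file, bucket and object names are placeholders):
from google.cloud import storage
from google.oauth2 import service_account

# request full_control instead of the default read_write scope
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/devstorage.full_control"],
)
client = storage.Client(credentials=creds)
# make_public() edits the object ACL, which the default read_write scope does not allow
client.bucket("my-bucket").blob("my-object").make_public()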