How can I represent authorization bearer token in YAML - kubernetes

I have generated the access token and placed it in the mount path shown below. This token needs to be included in the Authorization header when making a request against the retrieve-secret endpoint.
How can we achieve this in YAML?
volumeMounts:
  - mountPath: /run/test
    name: conjur-access-token
    readOnly: true

This question is referencing CyberArk's Conjur Secrets Manager's Kubernetes authenticator. It uses a sidecar authenticator client to keep an authenticated session token for Conjur's API refreshed in a shared volume mount with an application container running within a Kubernetes pod. This allows the application container to request secret values Just-in-Time (JiT) from the Conjur API with a single API call.
There is a file located at /run/test/conjur-access-token (according to the manifest snippet you provided) that contains the authenticated session token to use to connect to the Conjur API. Your application container needs to read /run/test/conjur-access-token and use it in the Authorization header as a Token-based authorization. To use curl, this would look like:
curl -H "Authorization: Token token='$(cat /run/test/conjur-access-token)'" https://conjur.example.com/secrets/myorg/variable/prod%2Fdb%2Fpassword
Where:
/run/test/conjur-access-token is the path to the shared volume mount of the application container and sidecar Kubernetes authenticator client.
conjur.example.com is the Base URL for your Conjur Follower in the Kubernetes cluster (or outside, if that's the deployment method).
myorg is the organization account configured at the time of Conjur deployment and configuration.
prod%2Fdb%2Fpassword is the URL-encoded secret variable path in Conjur. It would otherwise be referenced as prod/db/password, but since forward slashes are reserved characters in a URL/URI, they need to be encoded as %2F.
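Putting this together, a minimal end-to-end sketch (assuming the same example host conjur.example.com, account myorg, and variable path as above) that captures the secret value in a shell variable could look like:
# Read the access token written by the sidecar authenticator client
CONJUR_TOKEN="$(cat /run/test/conjur-access-token)"
# Fetch the secret value from the Conjur API and store it in a variable
DB_PASSWORD="$(curl -s \
  -H "Authorization: Token token='${CONJUR_TOKEN}'" \
  https://conjur.example.com/secrets/myorg/variable/prod%2Fdb%2Fpassword)"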

If the file containing your token in the mount path is called token, then you can simply do (assuming that you use curl):
curl -H "Authorization: Bearer $(cat /run/test/token)" ...

Related

How to retrieve secret data from vault API using AppRole?

My HashiCorp Vault instance is running properly on CentOS 7. I enabled AppRole authentication, created a policy and a role, enabled a secret engine, and created a secret for a client application.
I can retrieve the secret data using the root CLI, but I can't figure out how to get the secret data from the HTTP API with my application role using curl. I tried a few endpoint combinations without success. Retrieving the client token works, but I can't get the secret data itself.
I wonder if the API endpoint is correct or if there is another setting in play.
Authentication method
vault auth enable approle
Policy
# File: /etc/vault/my_app.hcl
path "kv/data/foo/*" {
  capabilities = ["read", "list"]
}
# Command line
vault policy write my_app /etc/vault/my_app.hcl
Role
vault write auth/approle/role/my_app policies="my_app"
Secret creation
vault kv put kv/data/foo/user#domain.tld password=1234
API call token request
curl --request POST --data '{"role_id": "xxxxxxxxxxxxxxxxx", "secret_id": "xxxxxxxxxxxxxxxxxxxx"}' http://127.0.0.1:8200/v1/auth/approle/login | jq
Result: Token is properly retrieved
API call for secret data request
export VAULT_CLIENT_TOKEN=XXXXXXX
curl --header "X-Vault-Token: $VAULT_CLIENT_TOKEN" --request GET "http://127.0.0.1:8200/v1/kv/data/foo/user#domain.tld"
Result : No secret data retrieved
Output:
{"errors":[]}
CLI call for secret data
vault kv get -field=password kv/data/foo/user#domain.tld
Output:
1234
Global settings
vault secrets list
Path          Type         Accessor              Description
----          ----         --------              -----------
cubbyhole/    cubbyhole    cubbyhole_xxxxxxxx    per-token private secret storage
identity/     identity     identity_xxxxxxxx     identity store
kv/           kv           kv_xxxxxxxx           n/a
sys/          system       system_xxxxxxxx       system endpoints used for control, policy and debugging

How to create basic authentication in Kubernetes?

I want to create basic authentication in Kubernetes. Every document says that I should create a CSV or other file and then enter the username and password in it, but I do not want to use a file; I want a database, or Kubernetes itself, to handle it.
What can I do for basic authentication?
You can base your authentication on tokens if you don't want to use a static password file.
First option:
Service Account Tokens
A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
The plugin uses two optional flags, --service-account-key-file and --service-account-lookup.
Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server. Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec.
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
There are some drawbacks: because service account tokens are stored in secrets, any user with read access to those secrets can authenticate as the service account. Be careful when granting permissions to service accounts and read capabilities for secrets.
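A minimal sketch of that flow, assuming a hypothetical service account named build-robot in the default namespace (the secret lookup shown here applies to clusters where the API server still auto-creates a token secret for the account):
# Create the service account; an associated token secret is created automatically
kubectl create serviceaccount build-robot
# Read the signed JWT out of the associated secret
SECRET=$(kubectl get serviceaccount build-robot -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)
# Use it as a bearer token against the API server, from inside or outside the cluster
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/api/v1/namespaces/default/pods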
Second:
Install OpenID Connect (the full documentation can be found here: oidc).
OpenID Connect (OIDC) is a superset of OAuth2 supported by some service providers, notably Azure Active Directory, Salesforce, and Google. The protocol’s main addition on top of OAuth2 is a field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well known fields, such as a user’s email, signed by the server.
To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token.
Since all of the data needed to validate who you are is in the id_token, Kubernetes doesn’t need to “phone home” to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication.
Kubernetes has no “web interface” to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
There’s no easy way to authenticate to the Kubernetes dashboard without using the kubectl proxy command or a reverse proxy that injects the id_token.
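As a minimal sketch, once you have obtained an id_token from your identity provider (how you obtain it depends on the provider; the credential name oidc-user below is just an example), you can use it as a bearer token:
# Store the id_token as kubectl credentials
kubectl config set-credentials oidc-user --token="$ID_TOKEN"
# Or send it directly as a bearer token
curl -k -H "Authorization: Bearer $ID_TOKEN" https://<api-server>:6443/api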
More information can be found here: kubernetes-authentication.

OpenShift 3.11: How to set up a permanent token for pulling from the integrated docker registry

I'm using OpenShift 3.11 and I have a very hard time figuring out how to set up a permanent token for image pull and push.
After I do docker login it is ok, but eventually that token expires.
According to the documentation it seems that the service accounts default and builder should have access.
As you can see, each of them has a default dockercfg secret:
Labels:
Annotations:
Image pull secrets:  default-dockercfg-ttjml
Mountable secrets:   default-token-q4x4w
                     default-dockercfg-ttjml
Tokens:              default-token-729xq
                     default-token-q4x4w
Events:
The default-dockercfg-ttjml secret has a really weird username and password. I have read the documentation many times and I still can't understand how to set up a permanent token. Can someone explain the procedure to me in plain terms?
AFAIK, a service account token does not expire unless you recreate it. See [0] for details. If you want to create a docker authentication secret against an external docker registry, refer to [1] for details.
[0]Managing Service Accounts
The generated API token and registry credentials do not expire, but they can be revoked by deleting the secret.
[1]Allowing Pods to Reference Images from Other Secured Registries
$ oc create secret generic <pull_secret_name> \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson
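As a sketch of how the non-expiring service account token can be used against the integrated registry (the registry route docker-registry-default.apps.example.com and the project name myproject are assumptions; adjust them to your environment):
# Print the service account's long-lived token
TOKEN=$(oc serviceaccounts get-token builder -n myproject)
# Log in to the integrated registry with it (the username is not checked)
docker login -u unused -p "$TOKEN" docker-registry-default.apps.example.com
# If needed, link a pull secret created as above to the default service account
oc secrets link default <pull_secret_name> --for=pull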

Secret management through vault in docker containers

I am implementing Vault for storing and accessing the secrets for my docker containers on another VM, by installing Vault in those docker containers as well.
Is there any way I could access secrets from my docker containers on another machine without installing hashicorp-vault in those containers?
In order to access secrets from Vault you will need to authenticate, retrieve a Vault token, and access the relevant secrets.
There are multiple authentication methods (user/pass, LDAP, JWT...). Read about them here and decide which method fits your needs.
Vault exposes a REST API, which means that you don't need to install anything in order to access it. Just send the relevant HTTP request.
For example, here is the KV HTTP API (and an example, to list secrets):
$ curl \
    --header "X-Vault-Token: ..." \
    --request LIST \
    https://127.0.0.1:8200/v1/secret/metadata/my-secret
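A minimal sketch of the whole flow over plain HTTP, assuming the userpass auth method is enabled, a user myuser exists, and a KV v2 secret lives at secret/my-secret (jq is only used here to extract the token from the JSON response):
# 1. Authenticate and capture the client token
VAULT_TOKEN=$(curl -s --request POST \
  --data '{"password": "mypassword"}' \
  https://127.0.0.1:8200/v1/auth/userpass/login/myuser | jq -r .auth.client_token)
# 2. Read the secret with that token
curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
  https://127.0.0.1:8200/v1/secret/data/my-secret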
You can add multiple listeners in the Vault config.
listener "tcp" {
  address = "127.0.0.1:8200"
}

listener "tcp" {
  address       = "<your_server_ip>:8200"
  tls_cert_file = "path/to/certfile"
  tls_key_file  = "path/to/keyfile"
}
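Once the second listener is up, other machines can talk to Vault directly over the network; for example (a sketch, replace <your_server_ip> and the token with your own values):
curl --header "X-Vault-Token: ..." \
  https://<your_server_ip>:8200/v1/secret/data/my-secret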
References
https://www.vaultproject.io/docs/configuration/listener/tcp.html

How to properly authorize request to Google Cloud Storage API?

I am trying to use the Google Cloud Storage JSON API to retrieve files from a bucket using HTTP calls.
I am curling from a container in GCE within the same project as the storage bucket, and the service account has read access to the bucket.
Here is the pattern of the requests:
https://storage.googleapis.com/{bucket}/{object}
According to the API console, I don't need anything in particular, since the service account provides Application Default Credentials. However, I keep getting this error:
Anonymous caller does not have storage.objects.get
I also tried to create an API key for the project and appended it to the URL (https://storage.googleapis.com/{bucket}/{object}?key={key}), but I still got the same 401 error.
How can I authorize requests to query this API?
The URL that you are using is not correct. The APIs use a URL that starts with https://www.googleapis.com/storage/v1/b.
Using API keys is not recommended. Instead you should use a Bearer token (an OAuth 2.0 access token). I will show both methods.
To get an access token for the gcloud default configuration:
gcloud auth print-access-token
Then use the token in your curl request. Replace TOKEN with the token from the gcloud command.
To list buckets:
curl -s -H "Authorization: Bearer TOKEN" https://www.googleapis.com/storage/v1/b
curl https://www.googleapis.com/storage/v1/b?key=APIKEY
To list objects:
curl -s -H "Authorization: Bearer TOKEN" https://www.googleapis.com/storage/v1/b/examplebucket/o
curl https://www.googleapis.com/storage/v1/b/examplebucket/o?key=APIKEY
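To actually download an object's contents (a sketch, assuming the same examplebucket and a hypothetical object named data.txt; alt=media requests the object data rather than its metadata):
curl -s -H "Authorization: Bearer TOKEN" \
  "https://www.googleapis.com/storage/v1/b/examplebucket/o/data.txt?alt=media"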
API Reference: List Buckets
If you are able to create another cluster, you can obtain permission like this:
Click on "Advanced edit"
Next, click on "Allow full access to all Cloud APIs"
And that's it :D