HashiCorp Vault: Too many arguments error when tuning a secret

Could you please help me? When I run the command
.\vault.exe secrets tune -default-lease-ttl=720h -max-lease-ttl=720h auth/token -address=http://192.168.10.10:8200
I get the error Too many arguments (expected 1, got 2).

The vault secrets tune command only accepts one argument, for example:
$ vault secrets tune -default-lease-ttl=72h pki/
You've passed two: auth/token and -address=http://192.168.10.10:8200. Because the -address flag comes after the path, Vault stops parsing flags there and counts it as a second positional argument.
In other words, you want to place your -address flag after tune and before the path, for example:
.\vault.exe secrets tune -default-lease-ttl=720h -max-lease-ttl=720h -address=http://192.168.10.10:8200 auth/token
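As a side note, if you would rather not pass the address on every call, Vault also reads it from the VAULT_ADDR environment variable; in PowerShell (using the address from your question) that would look like:
$env:VAULT_ADDR = "http://192.168.10.10:8200"
.\vault.exe secrets tune -default-lease-ttl=720h -max-lease-ttl=720h auth/token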

Running kubectl commands in parallel with different credentials

I'm currently running two Kubernetes clusters, one on Google Cloud and one on IBM Cloud. To manage them I use kubectl. I've made a script that executes some commands on one of the clusters, then switches to the other and does some other work there.
This works fine as long as the script runs in a single process; however, when run in parallel, the credentials are sometimes overwritten by one process while in use by another, which obviously causes issues.
I therefore want to know whether I can supply kubectl with a credentials file for every call, instead of storing it in an environment variable with kubectl config set-credentials.
Any help/solution is much appreciated.
If I need to work with multiple clusters using kubectl, I split my terminal and set KUBECONFIG separately in each split.
For my first split:
export KUBECONFIG=~/.kube/cluster1
For the second split:
export KUBECONFIG=~/.kube/cluster2
This works pretty well, but the approach has one issue:
If you are using a prompt that shows the current Kubernetes context, each split will show a different context, which can be misleading.
For scripts, I just change the value of KUBECONFIG in a for loop to iterate over the clusters.
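A minimal sketch of that loop, using the kubeconfig paths from above (kubectl get nodes is just a placeholder command):
for cfg in ~/.kube/cluster1 ~/.kube/cluster2; do
  KUBECONFIG="$cfg" kubectl get nodes
done
If you prefer not to touch environment variables at all, kubectl also accepts a per-call --kubeconfig flag, e.g. kubectl --kubeconfig ~/.kube/cluster1 get nodes, which sidesteps the overwriting problem for parallel scripts.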
Another option is to use Kubefed to manage multiple clusters.
It takes one cluster as the host and propagates the same resources to the member clusters.

Secrets management across environments in HashiCorp Vault

Problem: there are Vault instances in dev/stage/production (each independent). If I need to create or modify a key/value pair (kv secrets engine), I have to do so manually in each environment. This leads to environments being out of sync, and as part of a deployment we need to remember to write values to Vault (I'm sure you can guess what happens).
What I would like to be able to do is externalize the paths/keys so they can be applied to each environment. This could potentially be a file in Git that we write to Vault via a script in each environment, but then where do we get the values from? Furthermore, these values differ per environment and need a secure place to live.
I have searched for this, and while there is plenty about managing Vault itself, I have not found any solution for managing the data within Vault. Any guidance would be much appreciated.

Security: YAML bomb: user can restart kube-apiserver by sending a ConfigMap

Create a yaml-bomb.yaml file:
apiVersion: v1
data:
  a: &a ["web","web","web","web","web","web","web","web","web"]
  b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
  c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
  d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
  e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
  f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
  g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
  h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
  i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
kind: ConfigMap
metadata:
  name: yaml-bomb
  namespace: default
Send the ConfigMap creation request to the Kubernetes API with kubectl apply -f yaml-bomb.yaml.
kube-apiserver CPU/memory usage becomes very high, and it even gets restarted later.
How do we prevent such a YAML bomb?
This is a billion laughs attack and can only be fixed in the YAML processor.
Note that Wikipedia is wrong here when it says:
A "Billion laughs" attack should exist for any file format that can contain references, for example this YAML bomb:
The problem is not that the file format contains references; it is the processor expanding them. This is against the spirit of the YAML spec which says that anchors are used for nodes that are actually referred to from multiple places. In the loaded data, anchors & aliases should become multiple references to the same object instead of the alias being expanded to a copy of the anchored node.
As an example, compare the behavior of the online PyYAML parser and the online NimYAML parser (full disclosure: my work) when you paste your code snippet. PyYAML won't respond because of the memory load from expanding aliases, while NimYAML doesn't expand the aliases and therefore responds quickly.
It's astonishing that Kubernetes suffers from this problem; I would have assumed that, since it's written in Go, it would handle references properly. You'll have to file a bug with them to get this fixed.
There are a couple of possible mitigations I can think of, although as @flyx says, the real fix here would be in the YAML parsing library used by Kubernetes.
Interestingly, running this against a Kubernetes cluster on my local machine showed the CPU spike to be client-side (it's the kubectl process churning CPU) rather than server-side.
If the issue were server-side, then possible mitigations would be to use RBAC to minimize access to ConfigMap creation, and potentially to use an admission controller like OPA to review manifests before they are applied to the cluster.
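On the RBAC side, a quick way to check whether a given principal is allowed to create ConfigMaps is kubectl auth can-i with impersonation (the service account name here is only an example):
kubectl auth can-i create configmaps --namespace default --as system:serviceaccount:default:untrusted-sa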
This should probably be raised with the Kubernetes security vulnerability response team so that a proper fix can be implemented.
EDIT - I think where the problem manifests might come down to the cluster version used. Server-side apply graduated to beta (and should be enabled by default) in 1.16, so on a 1.16 cluster perhaps this would hit the server side instead of the client side.
EDIT - Just set up a 1.16 cluster; it still shows the CPU usage as client-side in kubectl...
EDIT - I've filed an issue for this and have also confirmed that the DoS can be achieved server-side by using curl instead of kubectl.
Final EDIT - This got assigned a CVE (CVE-2019-11253) and is being fixed in Kubernetes 1.13+. The fix has also been applied to the underlying YAML parsing library, so any other Go programs should be OK as long as they're using an up-to-date version.
There was a TrustCom 2019 paper studying vulnerabilities in YAML parsers for different languages. It found that most parsers have some issues, so this is common and there are several recent CVEs in this space (details in the paper: Laughter in the Wild: A Study into DoS Vulnerabilities in YAML Libraries, TrustCom 2019).
Preprint: https://www.researchgate.net/publication/333505459_Laughter_in_the_Wild_A_Study_into_DoS_Vulnerabilities_in_YAML_Libraries

Handling OpenShift secrets in a safe way after extraction into environment variables

So I have configured an OpenShift 3.9 build configuration such that environment variables are populated from an OpenShift secret at build time. I am using these environment variables to set up passwords for PostgreSQL roles in the image's ENTRYPOINT script.
Apparently these environment variables are baked into the image: not just the build image, but also the resulting database image. (I can see their values when issuing set inside the running container.) On one hand this seems necessary, because the ENTRYPOINT script needs access to them and it executes only at image run time (not build time). On the other hand it is somewhat disconcerting, because as far as I know anyone who obtained the image could extract those passwords. Unsetting the environment variables after use would not change that.
So is there a better way (or even a best practice) for handling such situations more securely?
UPDATE: At this stage I see two possible ways forward (better choice first):
Configure the DeploymentConfig so that it mounts the secret as a volume (rather than having the BuildConfig populate environment variables from it); a sketch of this is given below.
Store PostgreSQL password hashes (rather than verbatim passwords) in the secret.
As was suggested in a comment, what made sense was to shift the provision of environment variables from the secret from BuildConfig to DeploymentConfig. For reference:
oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
oc explain dc.spec.template.spec.containers.env.valueFrom.secretKeyRef
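For reference, a rough sketch of the first option using oc set volume (the DeploymentConfig name, secret name, and mount path below are placeholders):
oc set volume dc/postgresql --add --name=db-credentials --type=secret --secret-name=postgresql-credentials --mount-path=/run/secrets/postgresql
The ENTRYPOINT script can then read the passwords from files under the mount path instead of from environment variables.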

Passing long configuration file to Kubernetes

I like the working methodology of Kubernetes: use a self-contained image and pass the configuration in a ConfigMap, mounted as a volume.
Now this worked great until I tried to do the same thing with a Liquibase container. The SQL is very long (~1.5K lines), and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath volume, but as I understand it, the hostPath content is probably not going to be there on the node where the pod runs.
Is there any other way to pass configuration from the K8s directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
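In practice that workflow looks something like this, with foo.yaml standing in for your ConfigMap manifest:
kubectl create -f foo.yaml    # initial creation
kubectl replace -f foo.yaml   # subsequent updates replace the whole object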
Since 1.18 you can use server-side apply to circumvent the problem.
kubectl apply --server-side=true -f foo.yml
where server-side=true runs the apply command on the server instead of the client.
This will properly show conflicts with other actors, including client-side apply, and thus fail:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended, you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3 or really any location you can read from, and have an init container download them into a directory on a shared volume. The semantics of init containers guarantee that the Liquibase container will only be launched after the config files have been downloaded.
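A minimal sketch of what such an init container could run, assuming the SQL is hosted at a URL you control and that both containers share an emptyDir volume mounted at /liquibase/changelog (the URL and paths are placeholders):
# runs inside the init container before Liquibase starts
mkdir -p /liquibase/changelog
wget -O /liquibase/changelog/changelog.sql https://example.com/changelog/changelog.sql
The Liquibase container mounts the same volume and picks the file up from there, so the ConfigMap size limit never comes into play.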