Can you tell when the secret_id will expire in Vault

I recently updated an AppRole secret_id using the following command
vault write -tls-skip-verify auth/approle/role/my-super-role-name/secret-id secret_id_ttl=4320h
How can I know when that secret-id will expire?
Since I ran the command, I know it will expire in 4320 hours, but is there a way to check the expiration if you didn't create it?
I know you can check secret_id_ttl using
vault read -tls-skip-verify auth/approle/role/my-super-role-name/secret-id-ttl
Key Value
--- -----
secret_id_ttl 4320h
But that only shows the TTL the role was initially configured with; it doesn't serve as a countdown.

This will print info about the creation_time, expiration_time, and last_updated_time of the specified secret-id:
https://www.vaultproject.io/api/auth/approle#read-approle-secret-id
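For example, via the HTTP API (a sketch; <role-name> and <secret-id> are placeholders, and curl's -k mirrors the -tls-skip-verify flag used above):
curl -k --header "X-Vault-Token: $VAULT_TOKEN" --request POST --data '{"secret_id": "<secret-id>"}' $VAULT_ADDR/v1/auth/approle/role/<role-name>/secret-id/lookup
The data block of the response includes creation_time and expiration_time.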

You can call the lookup path API:
vault write auth/approle/role/<role-name>/secret-id/lookup secret_id=<secret-id>
Key Value
--- -----
cidr_list <value>
creation_time <value>
expiration_time <value>
last_updated_time <value>
metadata <value>
secret_id_accessor <value>
secret_id_num_uses <value>
secret_id_ttl <value>
token_bound_cidrs <value>
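If you only stored the secret_id_accessor rather than the secret_id itself, the accessor lookup endpoint returns the same fields:
vault write auth/approle/role/<role-name>/secret-id-accessor/lookup secret_id_accessor=<secret-id-accessor>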


User with assigned policy can't access secrets

I have created a kv (version 2) secrets engine, mounted on /secret:
$ vault secrets list
Path Type Accessor Description
---- ---- -------- -----------
cubbyhole/ cubbyhole cubbyhole_915b3383 per-token private secret storage
identity/ identity identity_9736df92 identity store
secret/ kv kv_8ba16621 n/a
sys/ system system_357a0e34 system endpoints used for control, policy and debugging
I have created a policy that should give admin access to everything in myproject:
$ vault policy read myproject
path "secret/myproject/*" {
capabilities = ["create","read","update","delete","list"]
}
I have created a secret in the appropriate path (with root token):
$ vault kv put secret/myproject/entry1 pass=pass
Key Value
--- -----
created_time 2022-05-11T15:06:49.658185443Z
deletion_time n/a
destroyed false
version 1
I have created a user that has been assigned the given policy:
$ vault token lookup
Key Value
--- -----
accessor CBnMF4i2cgadYoMNAX1YHaX6
creation_time 1652281774
creation_ttl 168h
display_name userpass-myproject
entity_id ad07640c-9440-c4a1-b668-ab0b8d07fe93
expire_time 2022-05-18T15:09:34.799969629Z
explicit_max_ttl 0s
id s.FO7PrOBdvC3KB85N46E05msi
issue_time 2022-05-11T15:09:34.799982017Z
meta map[username:myproject]
num_uses 0
orphan true
path auth/userpass/login/myproject
policies [default myproject]
renewable true
ttl 167h53m36s
type service
However, when I try to access anything (list, get), I get a 403 error:
$ vault kv list secret/myproject
Error listing secret/metadata/myproject: Error making API request.
URL: GET https://example.vault/v1/secret/metadata/myproject?list=true
Code: 403. Errors:
* 1 error occurred:
* permission denied
$ vault kv get secret/myproject/entry1
Error reading secret/data/myproject/entry1: Error making API request.
URL: GET https://vault.private.gsd.sparkers.io/v1/secret/data/myproject/entry1
Code: 403. Errors:
* 1 error occurred:
* permission denied
When I change the policy to this (change path to secret/*), I get access to everything:
$ vault policy read myproject
path "secret/*" {
capabilities = ["create","read","update","delete","list"]
}
$ vault kv get secret/myproject/entry1
====== Metadata ======
Key Value
--- -----
created_time 2022-05-11T15:06:49.658185443Z
deletion_time n/a
destroyed false
version 1
==== Data ====
Key Value
--- -----
pass pass
What am I doing wrong?
It turns out that you need to define your policy like this:
path "secret/metadata/myproject/*" {
capabilities = ["list"]
}
path "secret/data/myproject/*" {
capabilities = ["create","read","update","delete"]
}
because with the KV v2 engine, kv list prepends metadata/ to your path, and kv get prepends data/ to your path.
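A quick way to see the prefixing in action (a sketch using the paths from this example): vault token capabilities prints what the current token may do on a given API path, so under the original policy both of these report deny, while under the fixed policy they report the granted capabilities:
$ vault token capabilities secret/data/myproject/entry1
$ vault token capabilities secret/metadata/myproject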
No idea how I missed the documentation on this here: https://www.vaultproject.io/docs/secrets/kv/kv-v2:
Writing and reading versions are prefixed with the data/ path.
Thank you @Matt Schuchard

Rundeck 4.0.0 - Remote node command execution using ssh

I am having an issue with the most basic of Rundeck functions - namely, running a command over SSH on a remote node. I have generated an RSA key and added it via the Key Storage function. I have also created a YAML file for node definitions:
root@rundeck:/var/lib/rundeck# cat nodes.yml
mynode:
  nodename: mynode
  hostname: mynode
  description: 'Some description'
  ssh-authentication: privateKey # added - unsure if really required
  ssh-keypath: /var/lib/rundeck/.ssh/id_rsa # added - unsure if really required
  username: rundeck
  osFamily: linux
The node is showing up correctly, and command-line SSH works just fine:
root@rundeck:/var/lib/rundeck/.ssh# ssh -i id_rsa rundeck@mynode date
Mon Apr 4 16:19:33 UTC 2022
The project settings are as below:
#Mon Apr 04 16:23:36 UTC 2022
#edit below
project.description=someproject
project.disable.executions=false
project.disable.schedule=false
project.execution.history.cleanup.batch=500
project.execution.history.cleanup.enabled=false
project.execution.history.cleanup.retention.days=60
project.execution.history.cleanup.retention.minimum=50
project.execution.history.cleanup.schedule=0 0 0 1/1 * ? *
project.jobs.gui.groupExpandLevel=1
project.label=somelabel
project.name=someproject
project.nodeCache.enabled=true
project.nodeCache.firstLoadSynch=true
project.output.allowUnsanitized=false
project.ssh-authentication=privateKey
project.ssh-command-timeout=0
project.ssh-connect-timeout=0
project.ssh-key-storage-path=keys/project/someproject/rundeck_id_rsa
resources.source.1.config.file=/var/lib/rundeck/nodes.yml
resources.source.1.config.format=resourceyaml
resources.source.1.config.requireFileExists=true
resources.source.1.config.writeable=true
resources.source.1.type=file
service.FileCopier.default.provider=jsch-scp
service.NodeExecutor.default.provider=jsch-ssh
Yet, when I try to run a command from the UI, it fails:
Failed: SSHProtocolFailure: invalid privatekey: [B@7d7d0b2d
What am I doing incorrectly, and how do I successfully run a command over ssh on a remote node?
Your node definition needs the ssh-key-storage-path attribute pointing to the Rundeck user private key (created earlier in Rundeck Key Storage). Also, the osFamily attribute must be set to unix, not linux; Rundeck only accepts two values there: unix and windows.
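Applied to the nodes.yml from the question, a minimal sketch (assuming the key was uploaded to the project.ssh-key-storage-path shown in the project settings above):
mynode:
  nodename: mynode
  hostname: mynode
  description: 'Some description'
  username: rundeck
  osFamily: unix
  ssh-authentication: privateKey
  ssh-key-storage-path: keys/project/someproject/rundeck_id_rsa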
To add an SSH node follow these steps:
If you're using a WAR-based installation, execute ssh-keygen -t rsa -b 4096. That generates two keys (private and public) in the .ssh directory of the user that launches Rundeck. If you're using an RPM/DEB installation, these keys are already created under the /var/lib/rundeck path.
Go to the remote SSH node (the account that you want to connect to from Rundeck) and add the Rundeck server user's public key to the authorized_keys file. Then you can test the connection with ssh user@xxx.xxx.xxx.xxx from the Rundeck server user account.
Launch Rundeck and add the rundeck user's private key to the Rundeck key storage (remember to include the first and last lines, "-----BEGIN RSA PRIVATE KEY-----" and "-----END RSA PRIVATE KEY-----"). In my case I used the path keys/rundeck.
Create a new project and then create the resources.xml file with the remote node information. To generate that file, go to Project Settings > Edit Nodes > click the "Configure Nodes" button > click "Add Sources +" > select the "+ File" option. In the "Format" field select resourcexml, fill in the "File Path" field (put the file name, usually "resources.xml", at the end), select the "Generate", "Include Server Node", and "Writeable" checkboxes, and click the "Save" button.
Now you can edit that file to include the remote node, which in my case is "node00" (a Vagrant test image). In the ssh-key-storage-path attribute I used the same path created in step 3:
<?xml version="1.0" encoding="UTF-8"?>
<project>
  <node name="hyperion" description="Rundeck server node" tags="" hostname="hyperion" osArch="amd64" osFamily="unix" osName="Linux" osVersion="4.15.0-66-generic" username="ruser"/>
  <node name="node00" description="Node 00" tags="" hostname="192.168.33.20" osArch="amd64" osFamily="unix" osName="Linux" osVersion="3.10.0-1062.4.1.el7.x86_64" username="vagrant" ssh-key-storage-path="keys/rundeck"/>
</project>
On the Rundeck GUI, go to the sidebar and check your nodes in the "Nodes" section.
Go to "Commands" (sidebar) and put the SSH remote node name as a filter and launch any command like this.
You can follow an entire guide here.
Alternatively, you can rewrite the existing private key in PEM format with the following command: ssh-keygen -p -f /var/lib/rundeck/.ssh/id_rsa -m pem.
The key storage saves the private key with CRLF line endings, and this was the issue I observed with version 4.2.1.
A dirty fix for ssh-exec.sh: after the line
echo "$RD_CONFIG_SSH_KEY_STORAGE_PATH" > "$SSH_KEY_STORAGE_PATH"
insert these lines (the sed strips the carriage returns before the key is used):
sed -i 's/\r$//' "$SSH_KEY_STORAGE_PATH"
SSHOPTS="$SSHOPTS -i $SSH_KEY_STORAGE_PATH"

vault: How to reduce the lease duration of ssh otp?

I am using the following command to generate a one-time password:
$ vault write ssh/creds/otp_key_role ip=172.31.47.83
Key Value
--- -----
lease_id ssh/creds/otp_key_role/TqKAoY2kWLN058cRIzJab5qY
lease_duration 768h
lease_renewable false
ip 172.31.47.83
key ec90e030-f126-ae76-c989-177f33401536
key_type otp
port 22
username test-user
The lease_duration of the OTP is 768h. I want to reduce the lease_duration to 1h; how can I do that?
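One approach, as a sketch (assuming the engine is mounted at ssh/): the 768h default comes from the mount's lease TTL settings, which you can tune down so newly generated OTPs get a shorter lease:
$ vault secrets tune -default-lease-ttl=1h -max-lease-ttl=1h ssh/
After that, repeating the vault write above should show lease_duration 1h.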

NIFI - Disable Client Certificate Request

I'm using NiFi 1.11.4. I have HTTPS and simple LDAP set up on NiFi; however, it still asks for a client certificate when navigating to the page. If I select a certificate it fails, which is understandable, since client certificates were never set up. If I cancel, it goes to the login screen.
Is there any way to make it not check for client certificates, since I am using LDAP to log in?
I saw some properties about turning this off, but those properties seem to be gone. I checked the documentation, and it seems to say that NiFi will ask for a client certificate unless another authentication method is set up. However, with LDAP set up, it still asks for a certificate.
login-identity-providers.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<loginIdentityProviders>
  <provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">CN=blah,OU=USERS,OU=LAW,DC=na,DC=ad,DC=test,DC=com</property>
    <property name="Manager Password">secret</property>
    <property name="TLS - Keystore"></property>
    <property name="TLS - Keystore Password"></property>
    <property name="TLS - Keystore Type"></property>
    <property name="TLS - Truststore"></property>
    <property name="TLS - Truststore Password"></property>
    <property name="TLS - Truststore Type"></property>
    <property name="TLS - Client Auth"></property>
    <property name="TLS - Protocol"></property>
    <property name="TLS - Shutdown Gracefully"></property>
    <property name="Referral Strategy">IGNORE</property>
    <property name="Connect Timeout">10 secs</property>
    <property name="Read Timeout">10 secs</property>
    <property name="Url">ldaps://ldapserver.na.ad.test.com</property>
    <property name="User Search Base">OU=USERS,OU=LAW,DC=na,DC=ad,DC=test,DC=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Identity Strategy">USE_USERNAME</property>
    <property name="Authentication Expiration">12 hours</property>
  </provider>
</loginIdentityProviders>
nifi.properties file:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.encryption.key.provider.implementation=
nifi.flowfile.repository.encryption.key.provider.location=
nifi.flowfile.repository.encryption.key.id=
nifi.flowfile.repository.encryption.key=
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.content.repository.encryption.key.provider.implementation=
nifi.content.repository.encryption.key.provider.location=
nifi.content.repository.encryption.key.id=
nifi.content.repository.encryption.key=
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.debug.frequency=1_000_000
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Respository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.host=lawdev1
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
nifi.web.https.host=lawdev1
nifi.web.https.port=9443
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=secret
nifi.security.keyPasswd=secret
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=secret
nifi.security.user.authorizer=managed-authorizer
nifi.security.user.login.identity.provider=ldap-provider
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
nifi.security.needClientAuth=false
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1#$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance#(.*?)$
# nifi.security.identity.mapping.value.kerb=$1#$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=true
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=false
nifi.cluster.node.address=lawdev1
nifi.cluster.node.protocol.port=11443
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
Thanks,
Dusty Ryba
nifi.security.needClientAuth=false
works for old versions of NiFi. In newer versions:
NiFi's web server will REQUIRE certificate based client authentication for users accessing the User Interface when not configured with an alternative authentication mechanism which would require one way SSL (for instance LDAP, OpenId Connect, etc). Enabling an alternative authentication mechanism will configure the web server to WANT certificate based client authentication. https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#security_configuration
So I think that disabling it is impossible.
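As a quick check (a sketch; lawdev1:9443 is the HTTPS host and port from the nifi.properties above), you can inspect the TLS handshake to see the server asking for, but not requiring, a client certificate:
openssl s_client -connect lawdev1:9443 </dev/null
An "Acceptable client certificate CA names" section in the output means the server sent a CertificateRequest (the WANT behavior); since the certificate is not required, cancelling the browser prompt still gets you to the LDAP login screen.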

Code: 403. Errors: permission denied - while making API call to Hashicorp Vault

I'm following the Vault Configuration example from https://spring.io/guides/gs/vault-config/. I've started the server on a Windows machine:
vault server --dev --dev-root-token-id="00000000-0000-0000-0000-000000000000"
Then I set two environment variables to point the Vault CLI to the Vault endpoint and provide an authentication token:
set VAULT_TOKEN="00000000-0000-0000-0000-000000000000"
set VAULT_ADDR=http://127.0.0.1:8200
I am getting the below error:
C:\Softwares\vault_1.0.1_windows_amd64>vault write secret/gs-vault-config example.username=demouser example.password=demopassword
Error writing data to secret/gs-vault-config: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/secret/gs-vault-config
Code: 403. Errors:
* permission denied
On Windows:
Step 1: set VAULT_TOKEN and VAULT_ADDR, without quotes around the values (cmd.exe's set keeps the quotes as part of the value, which makes the token invalid):
SET VAULT_TOKEN=00000000-0000-0000-0000-000000000000
SET VAULT_ADDR=http://127.0.0.1:8200
Step 2: put the secret key and password using kv put:
vault kv put secret/gs-vault-config example.username=hello example.password=world
I was able to solve this by simply using set VAULT_TOKEN=00000000-0000-0000-0000-000000000000 (no quotes).
There is a change in how key-value secrets are created in HashiCorp Vault now: use kv put instead of write.
>vault kv put secret/gs-vault-config example.username=demouser example.password=demopassword
Key Value
--- -----
created_time 2018-12-26T14:25:07.5400739Z
deletion_time n/a
destroyed false
version 1
>vault kv put secret/gs-vault-config/cloud example.username=clouduser example.password=cloudpassword
Key Value
--- -----
created_time 2018-12-26T14:25:53.0980305Z
deletion_time n/a
destroyed false
version 1
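You can then verify the stored values with the matching read command:
>vault kv get secret/gs-vault-config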