Not able to unseal data from Vault - hashicorp-vault

I am experimenting with HashiCorp Vault. The version I am using is Vault v0.11.0.
The startup log is below:
Api Address: https://ldndsr000004893:8200
Cgo: disabled
Cluster Address: https://ldndsr000004893:8201
Listener 1: tcp (addr: "ldndsr000004893:8200", cluster address: "10.75.40.30:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v0.11.0
Version Sha: 87492f9258e0227f3717e3883c6a8be5716bf56
The server configuration is as below:
listener "tcp" {
  address     = "ldndsr000004893:8200"
  scheme      = "http"
  tls_disable = 1
}

#storage "inmem" {
#}

#storage "zookeeper" {
#  address = "localhost:2182"
#  path    = "vault/"
#}

storage "file" {
  path = "/app/iag/phoenix/vault/data"
}

# Advertise the non-loopback interface
api_addr = "https://ldndsr000004893:8200"

disable_mlock = true
ui = true
I have written a number of key-value pairs into Vault and was able to retrieve them normally using the Vault command line. But suddenly it stopped working, and I am not able to unseal Vault from either the UI or the command line.
UI error: (screenshot)
Any advice on this issue? I am going to use Vault for storing all credential information.

Turns out it was a problem with the Vault UI running in the Chrome browser.
I had to open a new incognito window; it then showed the sign-in window, and after I keyed in the token, Vault got unsealed.
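For anyone hitting the same thing, unsealing through the CLI or the HTTP API sidesteps browser state entirely. A minimal sketch, assuming the address from the config above and that you still hold the unseal keys (the key values below are placeholders):

export VAULT_ADDR="http://ldndsr000004893:8200"

# Unseal from the CLI; repeat until the key threshold is reached
vault operator unseal <unseal-key-1>

# Or unseal through the HTTP API
curl --request PUT --data '{"key": "<unseal-key-1>"}' "$VAULT_ADDR/v1/sys/unseal"

# Confirm the seal status
vault status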

Want to integrate argo server with keycloak

I tried in incognito as well, but the same issue exists.
Currently I have added this in server-deployment.yaml:
args:
  - server
  - --auth-mode
  - sso
And in values.yaml:
sso:
  # SSO configuration when SSO is specified as a server auth mode.
  # All the values are required. SSO is activated by adding --auth-mode=sso
  # to the server command line.
  #
  # The root URL of the OIDC identity provider.
  issuer: http://<keycloak_ip>/auth/realms/demo
  # Name of a secret and a key in it to retrieve the app OIDC client ID from.
  clientId:
    name: argo
    key: client-id
  # Name of a secret and a key in it to retrieve the app OIDC client secret from.
  clientSecret:
    name: "argo-server-sso"
    key: client-secret
  # The OIDC redirect URL. Should be in the form <hostname>/oauth2/callback.
  redirectUrl: http:///argo/oauth2/callback
And in the Keycloak UI, I have created the client and client credentials.
kubectl create secret generic "argo-server-sso" --from-literal=client-secret=9a9c60ba-647d-480c-b6fa-82c19caad26a
kubectl create secret generic "argo" --from-literal=client-id=argo
After hitting the Argo server URL, I manually need to click on the login option; after that the Keycloak page appears, but then a popup comes up: "Failed to login: Unauthorized".
Server logs:
kubectl logs argo-server-5c7f8c5cbb-9fcqk
time="2021-01-20T12:06:26.876Z" level=info authModes="[sso]" baseHRef=/ managedNamespace= namespace=default secure=false
time="2021-01-20T12:06:26.877Z" level=warning msg="You are running in insecure mode. Learn how to enable transport layer security: https://argoproj.github.io/argo/tls/"
time="2021-01-20T12:06:26.877Z" level=info msg="config map" name=argo-workflow-controller-configmap
time="2021-01-20T12:06:28.318Z" level=info msg="SSO configuration" clientId="{{argo} client-id }" issuer="http://10.xx.xx.xx:xxxx/auth/realms/demo" redirectUrl="http://xx/argo/oauth2/callback"
time="2021-01-20T12:06:28.318Z" level=info msg="SSO enabled"
time="2021-01-20T12:06:28.322Z" level=info msg="Starting Argo Server" instanceID= version=v2.12.2
time="2021-01-20T12:06:28.322Z" level=info msg="Creating event controller" operationQueueSize=16 workerCount=4
time="2021-01-20T12:06:28.323Z" level=info msg="Argo Server started successfully on http://localhost:2746"
time="2021-01-20T12:07:21.990Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=GetVersion grpc.service=info.InfoService grpc.start_time="2021-01-20T12:07:21Z" grpc.time_ms=0.379 span.kind=server system=grpc
time="2021-01-20T12:07:22.009Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=ListWorkflowTemplates grpc.service=workflowtemplate.WorkflowTemplateService grpc.start_time="2021-01-20T12:07:22Z" grpc.time_ms=0.075 span.kind=server system=grpc
I integrated ArgoCD with Keycloak successfully.
You have one clear, visible issue: the YAML indentation is wrong.
Make sure you keep the right indentation, as per the default values in the Helm chart:
https://github.com/argoproj/argo-helm/blob/1aea2c41798972ff0077108f926bb9095f3f9deb/charts/argo/values.yaml#L255-L283
Accordingly, your values should be (assuming your Argo is serving with hostname workflows.company.com):
server:
  extraArgs:
    - --auth-mode=sso
  sso:
    issuer: http://<keycloak_ip>/auth/realms/demo
    clientId:
      name: argo
      key: client-id
    clientSecret:
      name: "argo-server-sso"
      key: client-secret
    redirectUrl: https://workflows.company.com/argo/oauth2/callback
Now, from the Keycloak side, under your client, make sure you fill in the Valid Redirect URIs field as per your ingress hostname.
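To roll out the corrected values, a minimal sketch, assuming the release is named argo and uses the argo/argo chart (release and repo names are assumptions):

helm upgrade argo argo/argo -f values.yaml

# Watch the server pod restart and re-read its SSO flags
kubectl get pods -w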

Spring config server with Vault token doesn't respect the ACL defined in Vault

I have a Spring config server with Vault as the backend. I created a token in Vault with an ACL policy. When I use the token in spring.cloud.config.token, it doesn't respect the ACL.
My Spring config client has these bootstrap properties:
spring:
  application:
    name: app1
  cloud:
    config:
      uri: https://config-server-ur:port
      token: token-associated-to-acl-policy
I created an ACL policy named "app1" which allows only "app1" to be read by the token in Vault.
path "secret/app1" {
capabilities = ["read", "list"]
}
./vault token create -display-name="app1" -policy="app1"
I used the generated token in my client and it doesn't work.
When I changed the ACL policy to the below, it works:
path "secret/*" {
capabilities = ["read", "list"]
}
However, when I access Vault directly with the X-Vault-Token header, it works perfectly as expected.
I found the solution: set spring.cloud.config.server.vault.defaultKey to empty, like this in the config server's bootstrap.yml:
spring.profiles.active=git, vault
spring.cloud.config.server.git.uri=properties-git-repo-url
spring.cloud.config.server.git.username=user
spring.cloud.config.server.git.password=password
spring.cloud.config.server.git.searchPaths=/{application}/xyz
spring.cloud.config.server.git.force-pull=true
spring.cloud.config.server.git.timeout=10
spring.cloud.config.server.git.order=2
spring.cloud.config.server.vault.host=vault-hostname
spring.cloud.config.server.vault.port=8200
spring.cloud.config.server.vault.scheme=https
spring.cloud.config.server.vault.backend=secret
spring.cloud.config.server.vault.defaultKey=
spring.cloud.config.server.vault.profileSeparator=/
spring.cloud.config.server.vault.skipSslValidation=true
spring.cloud.config.server.vault.order=1
spring.cloud.config.server.vault.kvVersion=1
By default, spring.cloud.config.server.vault.defaultKey is set to "application".
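That also explains the original behavior: with the default key in place, the config server reads secret/application in addition to secret/app1, and the narrow "app1" policy denies that path, so the request fails. If you would rather keep the default key, a sketch of a policy covering both paths (an assumption based on those defaults, not something from the original answer):

vault policy write app1 - <<EOF
# The application's own secrets
path "secret/app1" {
  capabilities = ["read", "list"]
}
# The shared default key the config server also reads
path "secret/application" {
  capabilities = ["read", "list"]
}
EOF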

Kibana is not running on FreeBSD

I've been fighting with Kibana for a few days, and I can't manage to start it on my FreeBSD server.
This is my environment:
FreeBSD 11.1-STABLE
ElasticSearch 5.3.0
Kibana 5.3.0
Logstash 5..
ElasticSearch and Logstash work fine, but I can't manage to start the Kibana service.
These are the files related to Kibana:
kibana.yml file:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are
# both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
server.basePath: "/qual/kibana"
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
logging.dest: /var/log/kibana.log
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
/usr/local/etc/rc.d/kibana:
#!/bin/sh
#
# $FreeBSD: head/textproc/kibana5/files/kibana.in 462830 2018-02-24 14:17:41Z feld $
#
# PROVIDE: kibana
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
name=kibana
rcvar=kibana_enable
load_rc_config $name
: ${kibana_enable:="NO"}
: ${kibana_config:="/usr/local/etc/kibana.yml"}
: ${kibana_user:="www"}
: ${kibana_group:="www"}
: ${kibana_log:="/var/log/kibana.log"}
required_files="${kibana_config}"
pidfile="/var/run/${name}/${name}.pid"
start_precmd="kibana_precmd"
procname="/usr/local/bin/node"
command="/usr/sbin/daemon"
command_args="-f -p ${pidfile} env BABEL_DISABLE_CACHE=1 ${procname} /usr/local/www/kibana5/src/cli serve --config ${kibana_config} --log-file ${kibana_log}"
kibana_precmd()
{
    if [ ! -d $(dirname ${pidfile}) ]; then
        install -d -o ${kibana_user} -g ${kibana_group} $(dirname ${pidfile})
    fi
    if [ ! -f ${kibana_log} ]; then
        install -o ${kibana_user} -g ${kibana_group} -m 640 /dev/null ${kibana_log}
    fi
    if [ ! -d /usr/local/www/kibana5/optimize ]; then
        install -d -o ${kibana_user} -g ${kibana_group} /usr/local/www/kibana5/optimize
    fi
}
run_rc_command "$1"
/etc/rc.conf:
kibana_enable="YES"
But when I execute: service kibana start
I get:
root@server:/var/log # service kibana start
Starting kibana.
root@server:/var/log # service kibana status
kibana is not running.
I don't know why.
Start the service in debug mode:
sh -x /usr/local/etc/rc.d/kibana start
and find which command is used to start the Kibana service. For Kibana, the command should be something like /usr/local/bin/node /usr/local/www/kibana6/src/cli serve --config /usr/local/etc/kibana/kibana.yml
Then start the process in the foreground. It is possible that node is not properly installed, or that there is a permission issue.
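A minimal sketch of doing that, running the same command the rc script builds, as the same user (paths and the www user are taken from the rc script above; adjust if your install differs):

# Run Kibana in the foreground as the www user so errors print to the terminal
su -m www -c 'env BABEL_DISABLE_CACHE=1 /usr/local/bin/node /usr/local/www/kibana5/src/cli serve --config /usr/local/etc/kibana.yml'

Permission problems on the log file, the pid directory, or the optimize directory, as well as a broken node install, will then show up directly instead of the service silently exiting.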

Access to Vault from another server

I have installed Vault on my first server, but I cannot access my Vault from another server.
I created a config file config.hcl, and in the file I put this:
listener "tcp" {
address = "127.0.0.1:8200"
}
listener "tcp" {
address = "myIP:8200"
}
# Advertise the non-loopback interface
api_addr = "https://myIP:8200"
cluster_addr = "https://myIp:8201"
as in the HashiCorp docs.
But when I start the server like this: vault server -config=./config.hcl
I get this error:
Error initializing listener of type tcp: listen tcp4 myIp:8200: bind: address already in use
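That bind error usually means something is already listening on port 8200 — often a Vault server still running from an earlier attempt. As a sketch, you could first check the port, then collapse the two listeners into one bound to all interfaces (the 0.0.0.0 listener and tls_disable are assumptions, not from the original config):

# Find what currently holds port 8200
sockstat -l | grep 8200    # FreeBSD
ss -ltnp | grep 8200       # Linux

# config.hcl: a single listener reachable both locally and remotely
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}
api_addr = "http://myIP:8200"

Stop the old Vault process before starting the server again.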

Error creating TLS config after updating Traefik to v1.3.6

I'm attempting to update from Traefik v1.2.3 to v1.3.6 on Kubernetes. I have my TLS certificates mounted inside the pods from secrets. Under v1.2.3, everything works as expected. When I try to apply my v1.3.6 deployment (the only change being the new Docker image), the pods fail to start with the following message:
time="2017-08-22T20:27:44Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input"
time="2017-08-22T20:27:44Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input"
Below is my traefik.toml file:
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/wildcard.foo.mydomain.com.crt"
        KeyFile = "/ssl/wildcard.foo.mydomain.com.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/wildcard.mydomain.com.crt"
        KeyFile = "/ssl/wildcard.mydomain.com.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/wildcard.local.crt"
        KeyFile = "/ssl/wildcard.local.key"

[kubernetes]
labelselector = "expose=internal"
My initial impression of the errors produced by the pods was that the keys in the secret are not valid. However, I am able to base64-decode the contents of the secret and see that the values match those of the certificate files I have stored locally. Additionally, I would expect to see this error on any version of Traefik if these were, in fact, invalid. In reviewing the changelog for Traefik, I see that the SSL library was updated, but the related PR indicates that this only added ciphers and did not remove any previously supported ones.
Edit with additional info:
Running with --logLevel=DEBUG provides this additional information (provided in full below in case it's helpful):
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=debug msg="Global configuration loaded {"GraceTimeOut":10000000000,"Debug":false,"CheckNewVersion":true,"AccessLogsFile":"","TraefikLogsFile":"","LogLevel":"DEBUG","EntryPoints":{"http":{"Network":"","Address":":80","TLS":null,"Redirect":{"EntryPoint":"https","Regex":"","Replacement":""},"Auth":null,"Compress":false},"https":{"Network":"","Address":":443","TLS":{"MinVersion":"","CipherSuites":null,"Certificates":[{"CertFile":"/ssl/wildcard.foo.mydomain.com.crt","KeyFile":"/ssl/wildcard.foo.mydomain.com.key"},{"CertFile":"/ssl/wildcard.mydomain.com.crt","KeyFile":"/ssl/wildcard.mydomain.com.key"},{"CertFile":"/ssl/wildcard.local.crt","KeyFile":"/ssl/wildcard.local.key"}],"ClientCAFiles":null},"Redirect":null,"Auth":null,"Compress":false}},"Cluster":null,"Constraints":[],"ACME":null,"DefaultEntryPoints":["http","https"],"ProvidersThrottleDuration":2000000000,"MaxIdleConnsPerHost":200,"IdleTimeout":180000000000,"InsecureSkipVerify":false,"Retry":null,"HealthCheck":{"Interval":30000000000},"Docker":null,"File":null,"Web":{"Address":":8080","CertFile":"","KeyFile":"","ReadOnly":false,"Statistics":null,"Metrics":{"Prometheus":{"Buckets":[0.1,0.3,1.2,5]}},"Path":"","Auth":null},"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":null,"Boltdb":null,"Kubernetes":{"Watch":true,"Filename":"","Constraints":[],"Endpoint":"","Token":"","CertAuthFilePath":"","DisablePassHostHeaders":false,"Namespaces":null,"LabelSelector":"expose=internal"},"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":null}"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc42060d800 Redirect:<nil> Auth:<nil> Compress:false}"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input"
This issue turned out to be new validation logic in the crypto/tls library in Go 1.8. It now validates that the certificate blocks end in ----- whereas before it did not. The private key for one of my certificate files ended in ---- (missing a hyphen). Adding the missing character fixed the issue.
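A quick sketch for catching this kind of malformed PEM file before deploying (file names follow the traefik.toml above):

# BEGIN/END markers must be wrapped in exactly five hyphens
tail -n1 /ssl/wildcard.local.key

# openssl refuses to parse a key with damaged PEM markers
openssl pkey -in /ssl/wildcard.local.key -noout && echo "key parses OK"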