HashiCorp Vault reading creds - failed to find entry for connection with name: db_name

I don't know if I did something wrong or not, but here is my configuration.
// payload.json
{
  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "*",
  "connection_url": "postgresql://{{username}}:{{password}}@for-testing-vault.rds.amazonaws.com:5432/test-app",
  "username": "test",
  "password": "testtest"
}
Then run this command:
curl --header "X-Vault-Token: ..." --request POST --data @payload.json http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/config/postgresql
Role configuration:
// readonlypayload.json
{
  "db_name": "test-app",
  "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"],
  "default_ttl": "1h",
  "max_ttl": "24h"
}
Then run this command:
curl --header "X-Vault-Token: ..." --request POST --data @readonlypayload.json http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/roles/readonly
Then I created a policy:
path "database/creds/readonly" {
capabilities = [ "read" ]
}
path "/sys/leases/renew" {
capabilities = [ "update" ]
}
and run this to get the token:
curl --header "X-Vault-Token: ..." --request POST --data '{"policies": ["db_creds"]}' http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/auth/token/create | jq
Then I executed this command to render the values:
VAULT_TOKEN=... consul-template.exe -template="config.yml.tpl:config.yml" -vault-addr "http://ip_add.us-west-1.compute.amazonaws.com:8200" -log-level debug
Then I receive this error:
URL: GET http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/creds/readonly
Code: 500. Errors:
* 1 error occurred:
* failed to find entry for connection with name: "test-app"
Any suggestions will be appreciated, thanks!
EDIT: I also tried this command on the server:
vault read database/creds/readonly
It still returns:
* 1 error occurred:
* failed to find entry for connection with name: "test-app"

For those coming to this page by Googling this error message, this might help:
Unfortunately, the Vault database role's db_name parameter is a bit misleading. The value needs to match the name of a database/config/ entry, not an actual database name per se. The GRANT statement itself is where the database name is relevant; db_name is just a reference to the config name, which may or may not match the database name. (In my case, the configs have other data, such as environment, prefixing the DB name.)
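Applied to the setup in the question, db_name should reference the connection name used in the config write (the last path segment of database/config/postgresql), not the test-app database. A sketch of the corrected role payload, with everything else unchanged from the question:
// readonlypayload.json
{
  "db_name": "postgresql",
  "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"],
  "default_ttl": "1h",
  "max_ttl": "24h"
}
# re-write the role, then read credentials again
curl --header "X-Vault-Token: ..." --request POST --data @readonlypayload.json http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/roles/readonly
vault read database/creds/readonly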

In case this issue is not yet resolved:
Vault is not able to find the database 'test-app' in Postgres, or authentication to the database 'test-app' with the given credentials fails, so the connection failure happens.
Log in to Postgres and check whether the database 'test-app' exists by running \l.
For creating roles in Postgres you should use the default database 'postgres'. Try changing the name from 'test-app' to 'postgres' and check.
Change connection_url in payload.json:
"connection_url": "postgresql://{{username}}:{{password}}#for-testing-vault.rds.amazonaws.com:5432/postgres",

Related

Cannot purge deleted entity in Apache Atlas

I tried to purge deleted entities in Apache Atlas and I keep getting the following error:
"error":"Cannot deserialize instance of java.util.HashSet<java.lang.Object> out of START_OBJECT token\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 1]"
I am using the following Python code. How should I format my JSON request?
import json
import requests
from requests.auth import HTTPBasicAuth

def purgeEntity(guid):
    endpoint = 'http://localhost:21000/api/atlas/admin/purge'
    response = requests.post(endpoint,
                             data=guid,
                             auth=HTTPBasicAuth('admin', 'password'),
                             headers={"Content-Type": "application/json"})

data = json.dumps({"guid": ["0f8aad54-7275-483e-90ca-8b1c09b061bc"]})
purgeEntity(data)
curl -iv -u admin:admin -X DELETE http://localhost:21000/api/atlas/v2/entity/guid/3f62e45b-5e0b-4431-be1f-b5c77808f29b
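As for the deserialization error itself: "Cannot deserialize instance of java.util.HashSet<java.lang.Object> out of START_OBJECT token" means the purge endpoint expected a JSON array where a JSON object was sent. A hedged sketch that keeps the question's endpoint and credentials but posts a bare array of GUIDs:
# assumption: same endpoint, method and credentials as in the question; only the body shape changes
curl -u admin:password -X POST -H "Content-Type: application/json" -d '["0f8aad54-7275-483e-90ca-8b1c09b061bc"]' http://localhost:21000/api/atlas/admin/purge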

Mongoexport auth error using mechanism "SCRAM-SHA-1"

I have taken over an undocumented MongoDB 4.4.8 cluster (PSA). I am trying to tidy it up and test it thoroughly.
An original connection string:
MONGODB_URI=mongodb://${USER}:${PASS}@10.0.0.3:27017,10.0.0.6:27017,10.0.0.2:27017/bud?replicaSet=bud-replica&authSource=admin
I have enabled localhost and socket connections. I can log in from the command line with
mongo -u ${USER} -p ${PASS}
MongoDB shell version v4.4.8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("492e331b-417e-458a-83c7-9db6eaae0869") }
MongoDB server version: 4.4.8
I can switch db to bud and perform the queries. But if I run just
mongo
then the authentication with the same credentials does not work:
bud-replica:PRIMARY> db.auth('admin','admin');
Error: Authentication failed.
0
I tried to search for users but it shows there aren't any:
bud-replica:PRIMARY> db.getUsers()
[ ]
bud-replica:PRIMARY> use bud
switched to db bud
bud-replica:PRIMARY> db.getUsers()
[ ]
This is the security part of mongod.conf:
security:
  authorization: enabled
  keyFile: "/etc/bud-rs"
Finally, I need to export my data before doing experiments. Though the command line interface looks similar, mongoexport cannot fetch the data, regardless of whether I set user/password or skip these arguments.
mongoexport -h localhost --db=bud -u ${USER} -p ${PASS} -c=accidents --jsonArray > accidents.json
2021-08-25T19:30:30.631+0200 could not connect to server: connection() error occured during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
mongoexport -h localhost --db=bud -u ${USER} -p ${PASS} -c=accidents --jsonArray --authenticationDatabase “admin” > accidents.json
2021-08-25T19:36:18.738+0200 could not connect to server: connection() error occured during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
root@10:~# mongoexport -h localhost --db=bud -u ${USER} -p ${PASS} -c=accidents --jsonArray --authenticationDatabase “bud” > accidents.json
2021-08-25T19:38:21.174+0200 could not connect to server: connection() error occured during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
I am really confused and I failed to find a solution on Google or SO.
Second relevant question:
If I need to create a new user, shall I do it on all replicas or is it automatically synchronized?
1st update
This is the workaround, but my questions are still valid. I want to understand.
root@10:~# mongoexport --db=bud -u ${USER} -p ${PASS} -c=accidents --jsonArray "mongodb://admin:admin@10.0.0.3:27017/bud?authSource=admin" > accidents.json
2021-08-25T20:46:54.777+0200 connected to: mongodb://[**REDACTED**]#10.0.0.3:27017/bud?authSource=admin
2021-08-25T20:46:55.778+0200 [........................] bud.accidents 0/4379 (0.0%)
2021-08-25T20:46:56.497+0200 [########################] bud.accidents 4379/4379 (100.0%)
2021-08-25T20:46:56.497+0200 exported 4379 records
2nd update
bud-replica:PRIMARY> use admin
bud-replica:PRIMARY> show collections
system.keys
system.users
system.version
bud-replica:PRIMARY> db.system.users.find()
{ "_id" : "admin.admin", "userId" : UUID("769e4f5c-6f46-4153-857e-47d7d8730066"), "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "32/AP4019eome36j8n
The user credential was created in the admin database.
When connecting with the mongo shell, switch with use admin before running db.auth
The mongoexport command that worked used authSource=admin in the connection string.
Add --authenticationDatabase=admin to the other command line to direct it to use the admin database for auth as well.
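For example, the failing mongoexport from the question should then authenticate; a sketch reusing the question's placeholders (note the plain ASCII quotes - the curly quotes around “admin” in the question would be passed to the server as part of the database name):
# same host, db, collection and credentials as in the question; only the auth database option differs
mongoexport -h localhost --db=bud -u ${USER} -p ${PASS} -c=accidents --jsonArray --authenticationDatabase=admin > accidents.json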
The whole example command below worked for me.
MongoDB version: 5.x.x (also for MongoDB version 8.x.x):
mongodump --authenticationDatabase=admin --uri "mongodb://username:password@mongodb-host/db-name?ssl=false&authSource=admin"

Confluent Schema Registry: POST simple JSON schema with object having single property

OS: Ubuntu 18.x
docker image (from dockerhub.com, as of 2020-09-25): confluentinc/cp-schema-registry:latest
I am exploring the HTTP API for the Confluent Schema Registry. First off, is there a definitive assertion somewhere about which version of the JSON Schema specification the registry assumes? For now, I am assuming Draft v7.0. More broadly, I believe the API that returns the supported schema types should list versions. E.g., instead of:
$ curl -X GET http://localhost:8081/schemas/types
["JSON","PROTOBUF","AVRO"]
you would have:
$ curl -X GET http://localhost:8081/schemas/types
[{"flavor": "JSON", "version": "7.0"}, {"flavor": "PROTOBUF", "version": "1.2"}, {"flavor": "AVRO", "version": "3.5"}]
so at least programmers would know for sure what the Schema Registry assumes.
This issue aside, I cannot seem to POST a rather trivial JSON schema to the registry:
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{ "schema": "{ \"type\": \"object\", \"properties\": { \"f1\": { \"type\": \"string\" } } }" }' http://localhost:8081/subjects/mytest-value/versions
{"error_code":42201,"message":"Either the input schema or one its references is invalid"}
Here I am POSTing the schema to the mytest subject. The schema, incidentally, I scraped from Confluent documentation, and then escaped it accordingly.
Can you tell why this schema is not POSTing to the registry? And more generally, can I assume full support for Draft v7.0 of the JSON Schema definition?
You need to pass the schemaType field: "If no schemaType is supplied, schemaType is assumed to be AVRO." https://docs.confluent.io/current/schema-registry/develop/api.html#post--subjects-(string-%20subject)-versions
'{"schemaType": "JSON", "schema": "{\"type\": \"object\", \"properties\": {\"f1\": {\"type\": \"string\"}}}"}'
I agree that output of the supported versions would be helpful.
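Putting that together with the question's own curl call, a sketch of the full request against the same subject (only schemaType is added; everything else is unchanged from the question):
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schemaType": "JSON", "schema": "{ \"type\": \"object\", \"properties\": { \"f1\": { \"type\": \"string\" } } }"}' http://localhost:8081/subjects/mytest-value/versions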

Access denied to S3 when using COPY command with IAM role

I have the following copy command:
copy ink.contact_trace_records
from 's3://mybucket/mykey'
iam_role 'arn:aws:iam::accountnum:role/rolename'
json 'auto';
Where the role has the full-access policies for both S3 and Redshift attached (I know this is not a good idea, but I'm just losing it here :) ). The cluster has the role attached. The cluster has enhanced VPC routing. I get the following:
[2019-05-28 14:07:34] [XX000][500310] [Amazon](500310) Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 370F75618922AFC0,ExtRid dp2mcnlofFzt4dz00Lcm188/+ta3OEKVpuFjnZSYsC0pJPMiULk7I6spOpiZwXc04VVRdxizIj4=,CanRetry 1
[2019-05-28 14:07:34] Details:
[2019-05-28 14:07:34] -----------------------------------------------
[2019-05-28 14:07:34] error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 370F75618922AFC0,ExtRid dp2mcnlofFzt4dz00Lcm188/+ta3OEKVpuFjnZSYsC0pJPMiULk7I6spOpiZwXc04VVRdxizIj4=,CanRetry 1
[2019-05-28 14:07:34] code: 8001
[2019-05-28 14:07:34] context: S3 key being read : s3://redacted
[2019-05-28 14:07:34] query: 7794423
[2019-05-28 14:07:34] location: table_s3_scanner.cpp:372
[2019-05-28 14:07:34] process: query3_986_7794423 [pid=13695]
[2019-05-28 14:07:34] -----------------------------------------------;
What am I missing here? The cluster has full access to S3; it is not even in a custom VPC, it is in the default one. Thoughts?
Check the object owner. The object is likely owned by another account. This is the reason for S3 403s in many situations where objects cannot be accessed by a role or account that has full permissions. The typical indicator is that you can list the object(s) but cannot get or head them.
In the following example I'm trying to access a Redshift audit log from a 2nd account. The account 012345600000 is granted access to the bucket owned by 999999999999 using the following policy:
{"Version": "2012-10-17",
"Statement": [
{"Action": [
"s3:Get*",
"s3:ListBucket*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-audit-logs",
"arn:aws:s3:::my-audit-logs/*"
],
"Principal": {
"AWS": [
"arn:aws:iam::012345600000:root"
]}}]
}
Now I try to list (s3 ls) then copy (s3 cp) a single log object:
aws --profile 012345600000 s3 ls s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/
# 2019-05-28 14:49:46 376 …connectionlog_2019-05-25T19:03.gz
aws --profile 012345600000 s3 cp s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz ~/Downloads/…connectionlog_2019-05-25T19:03.gz
# fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Then I check the object ownership from the account that owns the bucket.
aws --profile 999999999999 s3api get-object-acl --bucket my-audit-logs --key AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz
# "Owner": {
# "DisplayName": "aws-cm-prod-auditlogs-uswest2", ## Not my account!
# "ID": "b2b456ce30a967fb1877b3c9594773ae0275fee248e3ebdbff43d66907b89144"
# },
I then copy the object in the same bucket with --acl bucket-owner-full-control. This makes me the owner of the new object. Note the changed SharedLogs/ prefix.
aws --profile 999999999999 s3 cp \
s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz \
s3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz \
--acl bucket-owner-full-control
Now I can download (or load to Redshift!) the new object that is shared from the same bucket.
aws --profile 012345600000 s3 cp s3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz ~/Downloads/…connectionlog_2019-05-25T19:03.gz
# download: …
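To close the loop with the original question: the same ownership fix applies to s3://mybucket/mykey, i.e. re-copy the object with --acl bucket-owner-full-control (or have the owning account grant access), after which the original COPY should run. A minimal sketch via psql, with the cluster endpoint and credentials as placeholders:
# assumption: the object at s3://mybucket/mykey is now owned by (or fully readable to) the cluster's account
PGPASSWORD='<password>' psql "host=<cluster-endpoint> port=5439 dbname=<db> user=<user>" -c "
copy ink.contact_trace_records
from 's3://mybucket/mykey'
iam_role 'arn:aws:iam::accountnum:role/rolename'
json 'auto';"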

What is the client property of knexfile.js?

In the knex documentation of the knexfile.js configuration for PostgreSQL, there is a property called client, which looks like this:
...
client: 'pg'
...
However, going through some other projects that use PostgreSQL, I noticed that they have a different value there, which looks like this:
...
client: 'postgresql'
...
Does this string correspond to the name of some sort of command line tool that is being used with the project, or am I misunderstanding something?
PostgreSQL is based on a client-server model, as described in 'Architectural Fundamentals'.
psql is the standard CLI client of Postgres, as mentioned in the docs.
A client may also be a GUI such as pgAdmin, or a Node package such as 'pg' - here's a list.
The client parameter is required and determines which client adapter will be used with the library.
You should also read the docs of 'Server Setup and Operation'
To initialize the library you can do the following (in this case on localhost):
var knex = require('knex')({
  client: 'mysql',
  connection: {
    host: '127.0.0.1',
    user: 'your_database_user',
    password: 'your_database_password',
    database: 'myapp_test'
  }
})
The default user of the Postgres daemon is 'postgres' - which you can use of course, but it's highly advisable to create a new user as stated in the docs and/or set a password for the default user 'postgres'.
On Debian Stretch, for example:
# su - postgres
$ psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'SecretPasswordHere';"
Make sure you delete the command line history so nobody can read out your password:
rm ~/.psql_history
Now you can add a new user (e.g. foobar) on the system and for Postgres:
# adduser foobar
and
# su - postgres
$ createuser --pwprompt --interactive foobar
Let's look at the following setup:
module.exports = {
  development: {
    client: 'xyz',
    connection: { user: 'foobar', database: 'my_app' }
  },
  production: { client: 'abc', connection: process.env.DATABASE_URL }
};
This basically tells us the following:
In dev - use the client xyz to connect to PostgreSQL's database my_app as the user foobar (in this case without a password).
In prod - read the database server's URL from the DATABASE_URL environment variable and connect via the client abc.
Here's an example of how Node's pg client package opens a connection pool:
const { Pool } = require('pg')

const pool = new Pool({
  user: 'foobar',
  host: 'someUrl',
  database: 'someDataBaseName',
  password: 'somePWD',
  port: 5432,
})
If you could clarify or elaborate on your setup or what you'd like to achieve a little more, I could give you some more detailed info - but I hope that helped anyway.