Grafana does not read credential_process

I want to create a CloudWatch data source on a Grafana server. I can successfully create the CloudWatch data source by setting Auth Provider to "Credentials file" if the credentials file at /usr/share/grafana/.aws/credentials looks like this:
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = XXX
But if I set Auth Provider to "Credentials file" and the credentials file instead looks like this, the CloudWatch data source cannot be created:
[default]
credential_process = /usr/local/bin/script_fetch_access_key parameter1
The Grafana GUI shows the error "HTTP Error Internal Server Error", and /var/log/grafana/grafana.log shows this error:
msg="Metric request error" ... error="Failed to call cloudwatch:ListMetrics, NoCredentialProviders: no valid providers in chain. Deprecated.
/usr/local/bin/script_fetch_access_key is a Python script I created to fetch the access key. If I SSH into the Grafana server and execute /usr/local/bin/script_fetch_access_key parameter1 directly, it prints the JSON output correctly, as below:
{"Version": 1, "AccessKeyId": "XXX", "SecretAccessKey": "XXX"}
My guess is that either Grafana does not know how to run the credential_process command from the credentials file, or it does not recognize the JSON output above. In any case, how do I set up Grafana with a credentials file that uses credential_process?
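(For reference, a credential_process helper only needs to print a single JSON object of that shape to stdout and exit 0. A minimal sketch, with the actual key lookup left as a placeholder and the function name fetch_keys purely illustrative:

#!/usr/bin/env python3
# Minimal credential_process helper sketch: print the credentials JSON to stdout.
import json
import sys

def fetch_keys(parameter):
    # Placeholder: replace with the real lookup that uses the parameter.
    return {"AccessKeyId": "XXX", "SecretAccessKey": "XXX"}

if __name__ == "__main__":
    parameter = sys.argv[1] if len(sys.argv) > 1 else None
    keys = fetch_keys(parameter)
    print(json.dumps({"Version": 1, **keys}))
)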

I cannot log in to the Chainlink GUI

I am using this Helm chart:
https://artifacthub.io/packages/helm/vulcanlink/chainlink
I managed to launch the Chainlink node and connect it to Postgres with these values:
config:
  # Login Info
  ROOT: /chainlink
  API_LOGIN: |
    API_EMAIL=admin@admin.com
    API_LOGIN=admin
  WALLET_PASSWORD: "9xMR9PN7CTk6Axs" # a random test password based on chainlink's demands
  # HTTP Security
  ALLOW_ORIGINS: "*"
  SECURE_COOKIES: "false"
  CHAINLINK_PORT: "6688"
  CHAINLINK_TLS_PORT: "0"
  # Database
  DATABASE_TIMEOUT: "0"
  DATABASE_URL: postgresql://chainlink:chainlink@pgdb-postgresql:5432/chainlink?sslmode=disable
  # Ethereum
  ETH_URL: wss://rinkeby.infura.io/ws/v3/somerandomnumber # ws://geth:8546
  ETH_CHAIN_ID: "4"
  LINK_CONTRACT_ADDRESS: 0x514910771af9ca656af840dff83e8264ecf986ca # this was here ...
I port-forward the k8s service and I can see the Chainlink UI.
But which combination of the above values should I use to log in?
I have tried them all.
EDIT
In order to change the env vars, I ended up destroying the whole minikube env. Insane, and I have no idea why...
Now I get this in the logs
There are no accounts, creating a new account with the specified password
There are no P2P keys; creating a new key encrypted with given password
There are no OCR keys; creating a new key encrypted with given password
2022-09-02T10:22:50Z [INFO] API exposed for user API_EMAIL=admin@admin.com cmd/local_client.go:122
2022-09-02T10:23:32Z [INFO] POST /sessions web/router.go:433 body={"email":"admin@admin.com","password":"*REDACTED*"} clientIP=127.0.0.1 errors=Error #01: Invalid email
latency=4.918708ms method=POST path=/sessions servedAt=2022-09-02 10:23:32 status=401
... so I still cannot log in to the GUI. It is frustrating.
EDIT
This is what happens when the instructions are not clear...
The username was API_EMAIL=admin@admin.com and the password was API_LOGIN=admin.
Now I can log in... but I will certainly be changing them...
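In hindsight this makes sense if the chart writes the API_LOGIN block verbatim into Chainlink's API credentials file, whose first line is read as the email and second line as the password. Assuming that is the behaviour, values along these lines (hypothetical credentials) should produce a clean login:

config:
  API_LOGIN: |
    admin@example.com
    SomeStrongApiPassword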

Azure DevOps variables and Terraform

I am trying to create an Azure Key Vault with the help of Terraform, and I want to keep my DB password as a variable in my Azure DevOps pipeline, because obviously I cannot hardcode it in my tfvars file.
As you can see, I am creating an empty job and saving my password variable, with its value, in the pipeline,
but I do not understand why my terraform plan is waiting in the console as if it were asking the user to enter the password.
Below is a snapshot of the log:
Can you please help me see what I am missing here?
Also, when I pass my password on the command line, I get the error below:
2022-05-13T05:11:00.5948619Z │ Error: building account: getting authenticated object ID: Error listing Service Principals: autorest.DetailedError{Original:adal.tokenRefreshError{message:"adal: Refresh request failed. Status Code = '401'. Response body: {"error":"invalid_client","error_description":"AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app 'a527faff-6956-4b8a-93ad-d9a14ab41610'.\r\nTrace ID: 81c1b1e8-1b0c-4f21-ad90-baf277d43801\r\nCorrelation ID: c77d437b-a6e8-4a74-8342-1508de00fa3a\r\nTimestamp: 2022-05-13 05:11:00Z","error_codes":[7000215],"timestamp":"2022-05-13 05:11:00Z","trace_id":"81c1b1e8-1b0c-4f21-ad90-baf277d43801","correlation_id":"c77d437b-a6e8-4a74-8342-1508de00fa3a","error_uri":"https://login.microsoftonline.com/error?code=7000215"} Endpoint https://login.microsoftonline.com/*/oauth2/token?api-version=1.0", resp:(http.Response)(0xc00143c000)}, PackageType:"azure.BearerAuthorizer", Method:"WithAuthorization", StatusCode:401, Message:"Failed to refresh the Token for request to https://graph.windows.net//servicePrincipals?%24filter=appId+eq+%27a527faff-6956-4b8a-93ad-d9a14ab41610%27&api-version=1.6", ServiceError:[]uint8(nil), Response:(*http.Response)(0xc00143c000)}
2022-05-13T05:11:00.5952404Z │
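For the interactive prompt during terraform plan: Terraform prompts for any variable that has no value, and Azure DevOps secret variables are not exposed to scripts unless they are mapped explicitly. A minimal sketch, assuming the Terraform variable is named db_password and the secret pipeline variable is named dbPassword (both names are assumptions):

steps:
- script: terraform plan -input=false -out=tfplan
  displayName: terraform plan
  env:
    TF_VAR_db_password: $(dbPassword)  # map the secret pipeline variable to Terraform's db_password variable

With -input=false the plan fails fast instead of waiting for console input if the variable is still missing.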

How to create/start a Databricks cluster from an ADF Web Activity by invoking the Databricks REST API

I have 2 requirements:
1: I have a cluster ID. I need to start the cluster from a Web Activity in ADF. The activity parameters look like this:
url: https://XXXX.azuredatabricks.net/api/2.0/clusters/start
body: {"cluster_id":"0311-004310-cars577"}
Authentication: Azure Key Vault Client Certificate
Upon running this activity, I encounter the error below:
"errorCode": "2108",
"message": "Error calling the endpoint
'https://xxxxx.azuredatabricks.net/api/2.0/clusters/start'. Response status code: ''. More
details:Exception message: 'Cannot find the requested object.\r\n'.\r\nNo response from the
endpoint. Possible causes: network connectivity, DNS failure, server certificate validation or
timeout.",
"failureType": "UserError",
"target": "GetADBToken",
"GetADBToken" is my activity name.
The above security mechanism is working for other Databricks related activity such a running jar which is already installed on my databricks cluster.
2: I want to create a new cluster with the settings below:
url: https://XXXX.azuredatabricks.net/api/2.0/clusters/create
body: {
  "cluster_name": "my-cluster",
  "spark_version": "5.3.x-scala2.11",
  "node_type_id": "i3.xlarge",
  "spark_conf": {
    "spark.speculation": true
  },
  "num_workers": 2
}
Upon calling this API, if cluster creation succeeds I would like to capture the cluster ID in the next activity.
So what would the output of the above activity be, and how can I access it in the immediately following ADF activity?
For #2) Can you please check whether changing the version
"spark_version": "5.3.x-scala2.11"
to
"spark_version": "6.4.x-scala2.11"
helps?
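Regarding capturing the cluster ID in #2: on success, the /api/2.0/clusters/create call returns a small JSON body such as {"cluster_id": "..."}, and ADF exposes that response as the Web Activity's output. A sketch, assuming the Web Activity is named CreateCluster (the name is an assumption), of how a subsequent activity could reference it:

@activity('CreateCluster').output.cluster_id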

Hashicorp Vault: "Code: 400. Errors" Error Message

When using Vault Agent with a secret ID file, I received the following error message:
$ ./vault agent --config auth_config.hcl
==> Vault server started! Log data will stream in below:
==> Vault agent configuration:
Api Address 1: http://127.0.0.1:8300
Cgo: disabled
Log Level: info
Version: Vault v1.3.0
2020-02-04T14:08:28.352-0800 [INFO] auth.handler: starting auth handler
2020-02-04T14:08:28.352-0800 [INFO] auth.handler: authenticating
2020-02-04T14:08:28.352-0800 [INFO] sink.server: starting sink server
2020-02-04T14:08:28.352-0800 [INFO] template.server: starting template server
2020-02-04T14:08:28.352-0800 [INFO] template.server: no templates found
2020-02-04T14:08:28.352-0800 [INFO] template.server: template server stopped
2020-02-04T14:08:28.354-0800 [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT http://127.0.0.1:8200/v1/auth/approle/login
Code: 400. Errors:
* invalid secret id" backoff=2.190384035
The command I executed was:
vault agent --config auth_config.hcl
The contents of my auth_config.hcl file are:
vault {
  address = "http://127.0.0.1:8200"
}
auto_auth {
  method "approle" {
    config {
      role_id_file_path = "./role_id"
      secret_id_file_path = "./secret_id"
      remove_secret_id_file_after_reading = false
    }
  }
}
cache {
  use_auto_auth_token = true
}
listener "tcp" {
  address = "127.0.0.1:8300"
  tls_disable = true
}
My secret ID was generated using the following command:
vault write -f auth/approle/role/payments_service/secret-id -format=json | sed -E -n 's/.*"secret_id": "([^"]*).*/\1/p' > secret_id
Why is this error happening?
I found that the usual reason this happens is that the secret ID file wasn't generated correctly in the first place. See this GitHub thread for an example. Unfortunately, in my case, the file was generated: the secret_id file referenced in auth_config.hcl contained the secret ID.
In my case, the problem was that after I generated the secret_id file, I executed the command vault write -f auth/approle/role/payments_service/secret-id a second time. This second command did not overwrite the original file with a new secret ID; it simply issued a new secret ID, which invalidated the previous secret ID that had been written to the secret_id file.
My solution was to rerun the command that writes the secret ID to the secret_id file, and then immediately run the Vault Agent. Problem solved.
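A sketch of that regenerate-and-retry sequence (payments_service is the role from the question; the standalone login call is only an optional sanity check that the role_id/secret_id pair is currently valid):

vault write -f auth/approle/role/payments_service/secret-id -format=json | sed -E -n 's/.*"secret_id": "([^"]*).*/\1/p' > secret_id
# optional: confirm the role_id/secret_id pair works before starting the agent
vault write auth/approle/login role_id=@role_id secret_id=@secret_id
./vault agent --config auth_config.hcl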
In my case it was because the app (kes) was trying to use HTTP instead of HTTPS to connect to Vault, while TLS was enabled both in Vault and in the app (kes). Once the URL was updated, the app could connect to Vault without any issue:
Error: failed to connect to Vault: Error making API request.
URL: PUT http://vault.vault:8200/v1/auth/approle/login
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
Authenticating to Hashicorp Vault 'http://vault.vault:8200'

HAProxy exporter unable to fetch data

I am using haproxy_exporter with Prometheus; I have added Prometheus as a data source in Grafana and installed the HAProxy plugin (which uses Prometheus as its data source) in order to fetch HAProxy stats and show them in Grafana. But I am not able to get any output from it.
When I run the command below, I get an "invalid URL port" error.
./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://user:$(cat pwfile)192.168.1.10:10000/haproxy/stats;csv"
OUTPUT:
INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7) source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06) source=haproxy_exporter.go:496
INFO[0000] Listening on :9101 source=haproxy_exporter.go:521
ERRO[0013] Can't scrape HAProxy: Get http://admin:abEDokA("192.168.1.10:10000/haproxy/stats;csv: invalid URL port abEDokA("192.168.1.10:10000" source=haproxy_exporter.go:315
And when I place an @ sign between the password and the IP address, such as ./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv"
it gives the error below:
INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7) source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06) source=haproxy_exporter.go:496
FATA[0000] parse http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv: net/url: invalid userinfo source=haproxy_exporter.go:500
And my Prometheus settings are:
- job_name: 'haproxy'
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ['localhost:9101']
You need the @ in there, and you might need to get rid of the " in your password. Maybe simply escaping it (\") could work, but the second error message suggests haproxy_exporter correctly receives the URL as http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv but is then unable to parse it.
Yup, according to http://www.ietf.org/rfc/rfc1738.txt, " is not a valid character in a URL. You may get around it by using its escape, %22.
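Putting the two fixes together, the corrected invocation would look something like this (assuming the password really is abEDokA(" with a literal trailing double quote, which becomes %22 when percent-encoded):

./haproxy_exporter --no-haproxy.ssl-verify \
  --haproxy.scrape-uri='http://admin:abEDokA(%22@192.168.1.10:10000/haproxy/stats;csv'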