Cannot provision password in Grafana datasource - PostgreSQL

I've been on this problem for a while but can't get it solved.
I'm trying to provision my Grafana datasources. Prometheus is working, but I also need a PostgreSQL datasource, which requires a password.
Every setting except the password gets filled in.
I've tried both a YAML file and curl. Sensitive values have been replaced with placeholders.
curl:
curl 'http://admin:admin@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"name":"NameOfDataSource","type":"postgres","url":"172.17.0.4:5432","access":"proxy","isDefault":false,"database":"database","user":"username","password":"passwordOfUser","typeLogoUrl":"public/app/plugins/datasource/postgres/img/postgresql_logo.svg","basicAuth":false,"jsonData":{"keepCookies":[],"sslmode":"disable"},"readOnly":false}'
YAML file (comments removed):
apiVersion: 1

deleteDatasources:
  - name: NameOfDataSource
    orgId: 1

datasources:
  - name: "NameOfDataSource"
    type: "postgres"
    access: "proxy"
    url: "172.17.0.4:5432"
    user: "username"
    password: "passwordOfUser"
    database: "database"
    basicAuth: false
    isDefault: false
    jsonData: {sslmode: "disable"}
    readOnly: false
    editable: true
Is there someone who can help me out?
Thanks in advance

It turns out that if you check the Grafana API, the password shows up among the other settings (as in my question above), but for provisioning it needs to go under secureJsonData instead.
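In provisioning-file terms, the fix is to move the password out of the top level and under secureJsonData (all values as in the question):

```yaml
apiVersion: 1

datasources:
  - name: "NameOfDataSource"
    type: "postgres"
    access: "proxy"
    url: "172.17.0.4:5432"
    user: "username"
    database: "database"
    basicAuth: false
    isDefault: false
    jsonData: {sslmode: "disable"}
    secureJsonData:
      password: "passwordOfUser"   # moved here from the top level
    readOnly: false
    editable: true
```

The curl call needs the same change: drop the top-level "password" field from the JSON body and add "secureJsonData": {"password": "passwordOfUser"} instead.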

Related

Setting up connection to PostgreSQL in Grafana on start up

I need to set up a PostgreSQL datasource in Grafana on startup. If I understood the website correctly, I just had to add a YAML file to the provisioning/datasources directory. I have set it up, but looking at the logs it doesn't even seem like any connection attempt was made: I see no errors, and the datasource is not listed when I look for it. Does anyone know how to do this correctly? Here is my YAML file.
apiVersion: 1
datasources:
  - name: YOLOv5
    type: PostgreSQL
    access: proxy
    orgId: 1
    uid: my_unique_uid
    url: http://localhost:5432
    user: user
    database: test
    basicAuth:
    basicAuthUser: user
    withCredentials:
    isDefault:
    jsonData:
      postgresVersion: '15.1'
      tlsAuth: false
      tlsAuthWithCACert: false
    secureJsonData:
      password: password
      basicAuthPassword: password
    version: 1
    editable: false
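No answer was recorded for this one, but comparing it with the working PostgreSQL provisioning file in the first question on this page, two details stand out: the type value is a case-sensitive plugin id (postgres, not PostgreSQL), and the url is expected as host:port without a scheme. A corrected sketch, keeping everything else from the question (whether this was the actual cause here is an assumption):

```yaml
apiVersion: 1
datasources:
  - name: YOLOv5
    type: postgres        # lowercase plugin id, not "PostgreSQL"
    access: proxy
    orgId: 1
    uid: my_unique_uid
    url: localhost:5432   # host:port, no http://
    user: user
    database: test
    jsonData:
      postgresVersion: '15.1'
      tlsAuth: false
      tlsAuthWithCACert: false
    secureJsonData:
      password: password
    version: 1
    editable: false
```

It is also worth confirming that the file is actually mounted into the container's provisioning/datasources directory; a file Grafana never sees produces no log output at all, which would match the symptoms described.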

How to use external database (AWS RDS) with Bitnami Keycloak

I'm trying to use the Bitnami Keycloak Helm chart, which has an internal dependency on Bitnami PostgreSQL that I cannot use. I have to use our existing RDS instance as an external DB, which seems possible, but the instructions on that page are completely empty. Unfortunately, I can only use the Bitnami chart for Keycloak, FYI. Can anyone point me in the right direction or show what and where to change in the stock chart to make it happen? Not getting much luck with Google at the moment.
Thanks in advance!
You need to use a sidecar container which will handle authorization and proxy the DB calls from Keycloak to your managed database:
[keycloak] --localhost:XXXX-> [sidecar container] -> [Aws RDS]
You'll find documentation for this on the bitnami chart github repo : https://github.com/bitnami/charts/tree/master/bitnami/keycloak#use-sidecars-and-init-containers
On the stock chart, you can set these properties:
postgresql:
  enabled: false

externalDatabase:
  host: ${DB_URL}
  port: ${DB_PORT}
  user: ${DB_USERNAME}
  database: ${DB_NAME}
  password: ${DB_PASSWORD}
If you need high availability, i.e. you will be running Keycloak with multiple replicas, add the below as well:
cache:
  enabled: true
extraEnvVars:
  - name: KC_CACHE
    value: ispn
  - name: KC_CACHE_STACK
    value: kubernetes
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: "JDBC_PING"
Sources:
https://github.com/bitnami/charts/tree/master/bitnami/keycloak/#keycloak-cache-parameters
https://github.com/bitnami/charts/tree/master/bitnami/keycloak/#database-parameters
https://www.keycloak.org/server/caching#_relevant_options
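As an aside, if you would rather not put the DB password in values.yaml at all, recent versions of the Bitnami chart can read it from a pre-created Kubernetes Secret. The parameter and secret names below are assumptions; verify them against the database-parameters table for your chart version:

```yaml
postgresql:
  enabled: false

externalDatabase:
  host: mydb.xxxxxx.eu-west-1.rds.amazonaws.com   # example RDS endpoint
  port: 5432
  user: keycloak
  database: keycloak
  # Assumed parameter names -- check your chart version:
  existingSecret: keycloak-db-secret
  existingSecretPasswordKey: db-password
```

This keeps credentials out of your values file and lets you rotate the secret independently of Helm releases.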

Datasource config for Azure Monitor datasource in kube-prometheus-stack

I'm trying to figure out how to configure an Azure Monitor datasource for Grafana.
What works so far is that the datasource is listed in Grafana when I deploy the stack via Helm.
This is the respective config from my values.yml:
grafana:
  additionalDataSources:
    - name: Azure Monitor
      type: grafana-azure-monitor-datasource
      version: 1
      id: 2
      orgId: 1
      typeLogoUrl: public/app/plugins/datasource/grafana-azure-monitor-datasource/img/logo.jpg
      url: /api/datasources/proxy/2
      access: proxy
      isDefault: false
      readOnly: false
      editable: true
      jsonData:
        timeInterval: 30s
        azureLogAnalyticsSameAs: true
        cloudName: azuremonitor
        clientId: $GF_AZURE_CLIENT_ID
        tenantId: $GF_AZURE_TENANT_ID
        subscriptionId: $GF_AZURE_SUBSCRIPTION_ID
Now, every time Grafana restarts, I need to set the client secret again.
Is there any way to configure it directly for the startup of Grafana, along with the default subscription to use?
I finally found the missing key:
grafana:
  additionalDataSources:
    - name: Azure Monitor
      ...
      jsonData:
        ...
      secureJsonData: # the missing piece
        clientSecret: $GF_AZURE_CLIENT_SECRET
The client secret has to be passed via secureJsonData.
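For $GF_AZURE_CLIENT_SECRET (and the other $GF_AZURE_* placeholders) to survive restarts, the values have to reach the Grafana pod as environment variables. The grafana subchart of kube-prometheus-stack exposes an envFromSecret value for this; the secret name below is an assumption:

```yaml
grafana:
  # Every key of the named Secret becomes an env var in the Grafana pod,
  # which Grafana's provisioning then interpolates into $GF_AZURE_* placeholders.
  envFromSecret: grafana-azure-credentials   # secret name assumed
```

Create the secret beforehand, e.g. with `kubectl create secret generic grafana-azure-credentials --from-literal=GF_AZURE_CLIENT_SECRET=<secret>` plus entries for the client, tenant, and subscription ids.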

Creating credential using Ansible Tower REST API

In my Ansible Tower, I have a custom credential named Token in which we store a token, so that using this credential we do not have to log in; we can use it in various jobs.
These are the required fields:
Name:
Credential Type: (where we select this custom credential type)
API Token Value: (where the token is entered; it is also exposed as an extra variable, my_token)
Below is the YAML file I am using to do the needful:
---
# Required info
tasks:
  - name: Create credential
    uri:
      url: "https://ans........../api/v1/credentials/"
      method: "POST"
      kind: SecureCloud
      name: Token
      body:
        extra_vars:
          my_token: "{ key }"
      body_format: json
I am confused as to how to enter the values for the Name and Credential Type fields in the above playbook. Do I need any other fields as well? Also, is the url in the uri module correct?
There are two ways of creating a custom credential (I prefer the second one):
First Option: Your Approach - URI Module
- name: Create Custom Credential
  uri:
    url: "https://endpoint/api/v2/credentials/"
    method: POST
    user: admin
    password: password
    headers:
      Content-Type: "application/json"
    body: '{"name":"myfirsttoken","description":"","organization":34,"credential_type":34,"inputs":{"token":"MyToken"}}'
    force_basic_auth: true
    validate_certs: false
    status_code: 200, 201
  no_log: false
But be careful: this is not idempotent. You should first fetch the credentials with method: GET, register the result, and look for your credential in the registered json.results variable.
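A sketch of that check-before-create pattern, reusing the endpoint and admin credentials from the task above (the ?name= query filter is standard Tower API syntax):

```yaml
- name: Look up the credential first
  uri:
    url: "https://endpoint/api/v2/credentials/?name=myfirsttoken"
    method: GET
    user: admin
    password: password
    force_basic_auth: true
    validate_certs: false
    return_content: true
  register: existing_creds

- name: Create Custom Credential
  uri:
    url: "https://endpoint/api/v2/credentials/"
    method: POST
    user: admin
    password: password
    headers:
      Content-Type: "application/json"
    body: '{"name":"myfirsttoken","description":"","organization":34,"credential_type":34,"inputs":{"token":"MyToken"}}'
    force_basic_auth: true
    validate_certs: false
    status_code: 200, 201
  # Only POST when the GET found no credential with that name
  when: existing_creds.json.count == 0
```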
Second Option: My Preferred Approach - tower-cli
You can do exactly the same, easier and idempotent with:
- name: Add Custom Credential
  command: tower-cli credential create --name="{{ item }}" --credential-type "{{ credential_type }}" --inputs "{'token':'123456'}" -h endpoint -u admin -p password --organization Default
  no_log: true
  with_items:
    - MyCustomToken
You will get something like:
== ============= ===============
id name          credential_type
== ============= ===============
46 MyCustomToken 34
== ============= ===============
The cool stuff is that you can fully automate your tokens and even autogenerate them with:
token: "{{ lookup('password', '/dev/null length=20 chars=ascii_letters,digits') }}"
And then:
---
- name: Create Custom Credential Token
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    token: "{{ lookup('password', '/dev/null length=20 chars=ascii_letters,digits') }}"
    credential_type: MyCustom
  tasks:
    - name: Create Credential Type
      tower_credential_type:
        name: "{{ credential_type }}"
        description: Custom Credentials type
        kind: cloud
        inputs: {"fields":[{"secret":true,"type":"string","id":"token","label":"token"}],"required":["token"]}
        state: present
        tower_verify_ssl: false
        tower_host: endpoint
        tower_username: admin
        tower_password: password
    - name: Add Custom Credential
      command: tower-cli credential create --name="{{ item }}" --credential-type "{{ credential_type }}" --inputs "{'token':'{{ token }}'}" -h endpoint -u admin -p password --organization Default
      no_log: true
      with_items:
        - MyCustomToken

How to automatically connect Grafana (with PostgreSQL instead of SQLite 3) to Prometheus when using Helm

I am using Helm for Kubernetes deployments, specifically Grafana and Prometheus. I have specified values.yaml files for both of them, and it works amazingly.
Since I changed Grafana's storage backend from the default sqlite3 to PostgreSQL, the datasource configuration is now stored in the PostgreSQL database.
Well, the problem is that in my values.yaml file for Grafana I have specified the datasource as follows:
datasources: {}
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: on-premis
        type: prometheus
        url: http://prom-helmf-ns-monitoring-prometheus-server
        access: direct
        isDefault: true
...
...
grafana.ini:
  paths:
    data: /var/lib/grafana/data
    logs: /var/log/grafana
    plugins: /var/lib/grafana/plugins
  analytics:
    check_for_updates: true
  log:
    mode: console
  grafana_net:
    url: https://grafana.net
  database:
    ## You can configure the database connection by specifying type, host, name, user and password
    ## as separate properties or as one string using the url property.
    ## Either "mysql", "postgres" or "sqlite3", it's your choice
    type: postgres
    host: qa.com:5432
    name: grafana
    user: grafana
    # If the password contains # or ; you have to wrap it with triple quotes. Ex: """#password;"""
    password: passwd
    ssl_mode: disable
Unfortunately this does not take effect, and I have to configure the connection manually in the Grafana web interface, which is not what I need. How do I specify this section correctly?
datasources: {}
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: on-premis
        type: prometheus
        url: http://prom-helmf-ns-monitoring-prometheus-server
        access: direct
        isDefault: true
Remove the '{}' after the datasources key, like this:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server
        access: proxy
        isDefault: true
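The reason the braces matter: `{}` is an explicit empty flow mapping, so `datasources` is already complete and nothing can nest under it; removing the braces lets the indented block become its value. A minimal sketch with the third-party PyYAML package (key names follow the Grafana chart's values layout):

```python
# Demonstrates why `datasources: {}` swallows the provisioning config.
import yaml

broken = "datasources: {}"
# With the braces, the value is a complete, empty mapping --
# Helm finds no provisioning files under it.
assert yaml.safe_load(broken)["datasources"] == {}

fixed = """
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server
        access: proxy
        isDefault: true
"""
# Without the braces, the indented block is parsed as the value:
# a mapping of provisioning filename -> file content.
files = yaml.safe_load(fixed)["datasources"]
assert "datasources.yaml" in files
assert files["datasources.yaml"]["datasources"][0]["type"] == "prometheus"
```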