Datasource config for Azure Monitor datasource in kube-prometheus-stack - grafana

I'm trying to figure out how to configure an Azure Monitor datasource for Grafana.
What works so far: the datasource is listed in Grafana when I deploy the stack via Helm.
This is the relevant config from my values.yml:
grafana:
  additionalDataSources:
    - name: Azure Monitor
      type: grafana-azure-monitor-datasource
      version: 1
      id: 2
      orgId: 1
      typeLogoUrl: public/app/plugins/datasource/grafana-azure-monitor-datasource/img/logo.jpg
      url: /api/datasources/proxy/2
      access: proxy
      isDefault: false
      readOnly: false
      editable: true
      jsonData:
        timeInterval: 30s
        azureLogAnalyticsSameAs: true
        cloudName: azuremonitor
        clientId: $GF_AZURE_CLIENT_ID
        tenantId: $GF_AZURE_TENANT_ID
        subscriptionId: $GF_AZURE_SUBSCRIPTION_ID
Now, every time Grafana restarts, I need to set the client secret again.
Is there any way to configure it directly at Grafana startup, as well as the default subscription to use?

I finally found the missing key:
grafana:
  additionalDataSources:
    - name: Azure Monitor
      ...
      jsonData:
        ...
      secureJsonData: # the missing piece
        clientSecret: $GF_AZURE_CLIENT_SECRET
The client secret has to be passed via secureJsonData.
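For reference, the $GF_AZURE_* variables used above still have to be present in the Grafana container's environment, otherwise the provisioned values resolve to nothing. One way to do that with the bundled Grafana chart is its envFromSecret value; this is only a sketch, assuming you keep the credentials in a Kubernetes Secret you create yourself (the name azure-monitor-credentials is made up for the example):
grafana:
  # Expose every key of the secret as an environment variable in the Grafana pod
  envFromSecret: azure-monitor-credentials
with a secret along these lines:
apiVersion: v1
kind: Secret
metadata:
  name: azure-monitor-credentials
type: Opaque
stringData:
  GF_AZURE_CLIENT_ID: "<client id>"
  GF_AZURE_TENANT_ID: "<tenant id>"
  GF_AZURE_SUBSCRIPTION_ID: "<subscription id>"
  GF_AZURE_CLIENT_SECRET: "<client secret>"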

Related

Setting up connection to PostgreSQL in Grafana on start up

I need to set up a PostgreSQL datasource in Grafana upon startup. If I understood the website correctly, I just have to add a YAML file to the provisioning/datasources directory. I have set it up, but when I look at the logs it doesn't even seem like any connection attempt was made, since I do not see any errors, and when I look for the data source it is not listed. Does anyone know how to do this correctly? Here is my YAML file.
apiVersion: 1
datasources:
  - name: YOLOv5
    type: PostgreSQL
    access: proxy
    orgId: 1
    uid: my_unique_uid
    url: http://localhost:5432
    user: user
    database: test
    basicAuth:
    basicAuthUser: user
    withCredentials:
    isDefault:
    jsonData:
      postgresVersion: '15.1'
      tlsAuth: false
      tlsAuthWithCACert: false
    secureJsonData:
      password: password
      basicAuthPassword: password
    version: 1
    editable: false
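No answer is attached to this excerpt, but two details in the file differ from what the Grafana provisioning docs show for PostgreSQL and are worth double-checking: the type should be the lowercase plugin id postgres, and url is given as host:port without a scheme. A minimal sketch, reusing the names from the question (the sslmode setting is an assumption for a local non-TLS database):
apiVersion: 1
datasources:
  - name: YOLOv5
    type: postgres          # lowercase plugin id, not "PostgreSQL"
    access: proxy
    url: localhost:5432     # host:port, no http:// prefix
    user: user
    database: test
    jsonData:
      sslmode: disable
    secureJsonData:
      password: password
    editable: false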

Serverless: create api key from SecretsManager value

I have a Serverless stack deploying an API to AWS. I want to protect it using an API key stored in Secrets Manager. The idea is to have the value of the key in SSM, pull it on deploy, and use it as my API key.
serverless.yml
service: my-app
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
  ...
  apiKeys:
    - name: apikey
      value: ${ssm:myapp-api-key}
As far as I can tell, the deployed API Gateway key should be the same as the SSM secret, yet when I look in the console the two values are different. What am I overlooking? There are no error messages either.
I ran into the same problem a while ago and resorted to the serverless-add-api-key plugin, because it was never clear to me when Serverless creates a new API Gateway key and when it reuses an existing one.
With this plugin your serverless.yml would look something like this:
service: my-app
frameworkVersion: '2'
plugins:
  - serverless-add-api-key
custom:
  apiKeys:
    - name: apikey
      value: ${ssm:myapp-api-key}
functions:
  your-function:
    runtime: ...
    handler: ...
    name: ...
    events:
      - http:
          ...
          private: true
You can also use a stage-specific configuration:
custom:
  apiKeys:
    dev:
      - name: apikey
        value: ${ssm:myapp-api-key}
This worked well for me:
custom:
  apiKeys:
    - name: apikey
      value: ${ssm:/aws/reference/secretsmanager/dev/user-api/api-key}
      deleteAtRemoval: false # Retain key after stack removal
functions:
  getUserById:
    handler: src/handlers/user/by-id.handler
    events:
      - http:
          path: user/{id}
          method: get
          cors: true
          private: true
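As a side note, the ${ssm:/aws/reference/secretsmanager/...} lookup works because SSM Parameter Store can resolve Secrets Manager entries through that reserved path, so the framework pulls the secret value at deploy time. If you want to confirm that the deployed API Gateway key really matches the secret, something along these lines should do it (the key name apikey is taken from the example above):
aws apigateway get-api-keys --include-values --query "items[?name=='apikey'].value"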

How to wait until env for appid is created in jelastic manifest installation?

I have the following manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  baseUrl: https://raw.githubusercontent.com/shopozor/services/dev
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: version
        type: string
        caption: Version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - enableSubDomains
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cmd
          cmd: |-
            curl -fsSL ${baseUrl}/scripts/install_k8s.sh | /bin/bash
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.version}
          jaeger: false
    enableSubDomains:
      - jelastic.env.binder.AddDomains[cp]:
          domains: staging,api-staging,assets-staging,api,assets
Unfortunately, when I run that manifest, the k8s cluster gets installed, but the subdomains cannot be created (yet), because:
[15:26:28 Shopozor.cluster:3]: enableSubDomains: {"action":"enableSubDomains","params":{}}
[15:26:29 Shopozor.cluster:4]: api [cp]: {"method":"jelastic.env.binder.AddDomains","params":{"domains":"staging,api-staging,assets-staging,api,assets"},"nodeGroup":"cp"}
[15:26:29 Shopozor.cluster:4]: ERROR: api.response: {"result":2303,"source":"JEL","error":"env for appid [5ce25f5a6988fbbaf34999b08dd1d47c] not created."}
What jelastic API methods can I use to perform the necessary waiting until subdomain creation is possible?
My current workaround is to split that manifest into two manifests: one cluster installation manifest and one update manifest creating the subdomains. However, I'd like to have everything in the same manifest.
Please change this:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      domains: staging,api-staging,assets-staging,api,assets
to:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      envName: ${settings.envName}
      domains: staging,api-staging,assets-staging,api,assets
With the explicit envName, the call is addressed to the environment created by installKubernetes; without it, the action apparently runs against the manifest's own appid, for which no environment exists, which is what the "env for appid [...] not created" error is complaining about. No separate waiting step is needed.

k8s scdf2: how to config volumeMount in a task (no freetext)

When deploying a task, as a user I need to configure k8s params the way I do using "freetext".
The k8s config is the following:
Secret: "kind": "Secret","apiVersion": "v1","metadata": {"name": "omni-secret","namespace": "default",
bootstrap.yml:
spring:
  application:
    name: mk-adobe-analytics-task
  cloud:
    kubernetes:
      config:
        enabled: false
      secrets:
        enabled: true
        namespace: default
        paths:
          - /etc/secret-volume
log.info(AdobeAnalyticsConstants.LOG_RECOVERING_SECRET, env.getProperty("aws.bucketname"));
Deploying task:
task launch test-007 --properties "deployer.*.kubernetes.volumeMounts=[{name: secret-volume, mountPath: '/etc/secret-volume'}], deployer.* .kubernetes.volumes=[{name: 'secret-volume', secret: {secretName: 'omni-secret' }}]"
Result:
2019-06-10 10:32:50.852 INFO 1 --- Recovering property "aws.bucketname": null
How can I map the k8s volumes into a task? With a simple k8s deployment it is OK, and it works when using streams.
It's not entirely clear where to start with your issue, but please take a look at the Kubernetes PropertySource implementations.
Inside "Secrets PropertySource - Table 3.2. Properties" you can find other settings like:
- spring.cloud.kubernetes.secrets.name
- spring.cloud.kubernetes.secrets.labels
- spring.cloud.kubernetes.secrets.enableApi
So please refer to the documentation.
It's also possible that the aws.bucketname property simply wasn't configured properly.
Hope this helps.
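For illustration only, here is a bootstrap.yml sketch that uses the properties listed above to read the secret through the Kubernetes API instead of a mounted volume; the secret name omni-secret comes from the question, while switching to the API-based lookup is an assumption, not something the answer prescribes:
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        name: omni-secret     # spring.cloud.kubernetes.secrets.name
        namespace: default
        enableApi: true       # spring.cloud.kubernetes.secrets.enableApi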

How to automatically connect Grafana (with PostgreSQL instead of SQLite 3) to Prometheus when using Helm

I am using Helm for Kubernetes deployments, specifically Grafana and Prometheus. I have specified values.yaml files for both of them, and it works nicely.
Since I have changed the Grafana backend database from the default sqlite3 to PostgreSQL, the data-source configuration is now stored in the PostgreSQL database.
The problem is that in my values.yaml file for Grafana I have specified the datasource as follows:
datasources: {}
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: on-premis
        type: prometheus
        url: http://prom-helmf-ns-monitoring-prometheus-server
        access: direct
        isDefault: true
...
...
grafana.ini:
  paths:
    data: /var/lib/grafana/data
    logs: /var/log/grafana
    plugins: /var/lib/grafana/plugins
  analytics:
    check_for_updates: true
  log:
    mode: console
  grafana_net:
    url: https://grafana.net
  database:
    ## You can configure the database connection by specifying type, host, name, user and password
    ## as separate properties or as one string using the url property.
    ## Either "mysql", "postgres" or "sqlite3", it's your choice
    type: postgres
    host: qa.com:5432
    name: grafana
    user: grafana
    # If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
    password: passwd
    ssl_mode: disable
Unfortunately this does not take effect, and I have to configure the connection in the Grafana web interface manually, which is not what I need. How do I specify this section correctly?
datasources: {}
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: on-premis
        type: prometheus
        url: http://prom-helmf-ns-monitoring-prometheus-server
        access: direct
        isDefault: true
Remove the '{}' after the datasources section: with datasources: {} the key is already set to an empty map, so the provisioning block underneath it never takes effect. It should look like this:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server
        access: proxy
        isDefault: true
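Note that the working example also switches access from direct to proxy; when Grafana and Prometheus both run inside the cluster, proxy access routes queries through the Grafana backend rather than the user's browser, which is usually what you want since the Prometheus service is not reachable from outside.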