InfluxDB datasource added using ConfigMap shows error "reading InfluxDB. Status Code: 401" with Kubernetes version 1.24.x - grafana

What happened:
After updating from Kubernetes version 1.23.x to 1.24.x,
the InfluxDB data source that is added using a ConfigMap shows the error
reading InfluxDB. Status Code: 401.
Strangely enough, the same datasource configuration added manually
on the Grafana dashboard using the admin role works fine and does NOT show any such error.
The InfluxDB datasource added via ConfigMap works fine with K8s version 1.23.x.
**Environment**:
- Grafana version: 9.1.7
- Data source type & version: InfluxDB
- OS Grafana is installed on: Ubuntu
- User OS & Browser: Chrome
- K8s version : 1.24.x
- InfluxDB version: v2.4.0
ConfigMap to add the InfluxDB datasource:
apiVersion: v1
kind: ConfigMap
metadata:
  xxxxxx
data:
  datasource_influx.yaml: |-
    apiVersion: 1
    datasources:
      - name: InfluxDB
        type: influxdb
        access: proxy
        database: dcs
        url: http://monitoring-influxdb:8086
        basicAuth: true
        basicAuthUser: admin
        user: admin
        secureJsonData:
          password: ${admin-user-password}
          basicAuthPassword: ${admin-user-password}
How to reproduce it (as minimally and precisely as possible):
Add an InfluxDB datasource using a ConfigMap on K8s version 1.24.x.
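For context, a minimal sketch of how such a ConfigMap is typically wired into the Grafana pod, assuming the deployment mounts it into Grafana's provisioning directory; the deployment and volume names are illustrative, and the ConfigMap name stands in for the one elided above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:9.1.7
          volumeMounts:
            - name: influxdb-datasource
              # Grafana loads datasource provisioning files from this directory at startup
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: influxdb-datasource
          configMap:
            name: <configmap-name-from-above>   # metadata.name of the ConfigMap shown earlier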

Related

Flink kubernetes deployment - how to provide S3 credentials from Hashicorp Vault?

I'm trying to deploy a Flink stream processor to a Kubernetes cluster with the help of the official Flink kubernetes operator.
The Flink app also uses Minio as its state backend. Everything worked fine until I tried to provide the credentials from Hashicorp Vault in the following way:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-app
  namespace: default
spec:
  serviceAccount: sa-example
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      serviceAccountName: default:sa-example
      containers:
        - name: flink-main-container
          # ....
  flinkVersion: v1_14
  flinkConfiguration:
    presto.s3.endpoint: https://s3-example-api.dev.net
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    high-availability.storageDir: s3p://example-flink/example-1/high-availability/
    high-availability.cluster-id: example-1
    high-availability.namespace: example
    high-availability.service-account: default:sa-example
    # presto.s3.access-key: *
    # presto.s3.secret-key: *
    presto.s3.path-style-access: "true"
    web.upload.dir: /opt/flink
  jobManager:
    podTemplate:
      apiVersion: v1
      kind: Pod
      metadata:
        name: job-manager-pod-template
        annotations:
          vault.hashicorp.com/namespace: "/example/dev"
          vault.hashicorp.com/agent-inject: "true"
          vault.hashicorp.com/agent-init-first: "true"
          vault.hashicorp.com/agent-inject-secret-appsecrets.yaml: "example/Minio"
          vault.hashicorp.com/role: "example-serviceaccount"
          vault.hashicorp.com/auth-path: auth/example
          vault.hashicorp.com/agent-inject-template-appsecrets.yaml: |
            {{- with secret "example/Minio" -}}
            presto.s3.access-key: {{.Data.data.accessKey}}
            presto.s3.secret-key: {{.Data.data.secretKey}}
            {{- end }}
When I comment out the presto.s3.access-key and presto.s3.secret-key config values in the flinkConfiguration, replace them with the Hashicorp Vault annotations listed above, and try to provide them programmatically at runtime:
val configuration: Configuration = getSecretsFromFile("/vault/secrets/appsecrets.yaml")
val env = org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(configuration)
I receive the following error message:
java.io.IOException: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName, com.amazonaws.auth.profile.ProfileCredentialsProvider#5331f738: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#bc0353f: Failed to connect to service endpoint: ]
at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3OutputStream.uploadObject(PrestoS3FileSystem.java:1278) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3OutputStream.close(PrestoS3FileSystem.java:1226) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.flink.fs.s3presto.common.HadoopDataOutputStream.close(HadoopDataOutputStream.java:52) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:80) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:72) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobUtils.moveTempFileToStore(BlobUtils.java:385) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServer.moveTempFileToStore(BlobServer.java:680) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:350) [flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:110) [flink-dist_2.12-1.14.2.jar:1.14.2]
I initially also tried to append the secrets to flink-conf.yaml in the docker-entrypoint.sh, based on this documentation - Configure Access Credentials:
if [ -f '/vault/secrets/appsecrets.yaml' ]; then
  (echo && cat '/vault/secrets/appsecrets.yaml') >> $FLINK_HOME/conf/flink-conf.yaml
fi
The question is how to provide the S3 credentials at runtime, since the Flink operator mounts flink-conf.yaml from a ConfigMap and appending to it fails with flink-conf.yaml: Read-only file system.
Thank you
There is no support for this from the Kubernetes operator. In fact, this is not a limitation of the Flink Kubernetes operator; it comes from the lack of support in Flink's native Kubernetes integration. There is a separate story for this on the Kubernetes operator side - FLINK-27491.
As a workaround, you can set up an init container that reads the secrets from Vault and updates the ConfigMap through the Kubernetes API. The updated ConfigMap then has the secrets filled in by the init container, and those values are visible to the job manager and all of its task managers. The whole Flink cluster journey starts only after the init container has updated the ConfigMap, so the change is visible to the Flink cluster.
A simple example to update the config map from the init container can be found here. In this example, the config map is updated with a simple CURL command. In theory, you can use any lightweight client to update the config map like this.
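A minimal sketch of that workaround, assuming the operator-managed ConfigMap is named flink-config-flink-app, the pod's ServiceAccount is allowed to get and patch ConfigMaps, and the Vault agent init container has already written /vault/secrets/appsecrets.yaml; all names here are assumptions, and the linked example uses plain curl against the API instead of kubectl:
# Hypothetical init container added to the JobManager pod template
initContainers:
  - name: inject-s3-credentials
    image: bitnami/kubectl:1.27        # any image with kubectl and a shell works
    command: ["/bin/sh", "-c"]
    args:
      - |
        set -e
        CM=flink-config-flink-app      # assumed name of the operator-managed ConfigMap
        # 1. Pull the current flink-conf.yaml out of the ConfigMap and append the
        #    credentials that the Vault agent wrote to /vault/secrets.
        kubectl get cm "$CM" -o jsonpath='{.data.flink-conf\.yaml}' > /tmp/flink-conf.yaml
        printf '\n' >> /tmp/flink-conf.yaml
        cat /vault/secrets/appsecrets.yaml >> /tmp/flink-conf.yaml
        # 2. Build a JSON merge patch (escape backslashes, quotes and newlines).
        BODY="$(sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' /tmp/flink-conf.yaml | awk '{printf "%s\\n", $0}')"
        printf '{"data":{"flink-conf.yaml":"%s"}}' "$BODY" > /tmp/patch.json
        # 3. Merge-patch only that key; other keys in the ConfigMap stay untouched.
        kubectl patch cm "$CM" --type merge --patch-file /tmp/patch.json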
A side note: if possible, I would suggest using an AWS IAM role rather than plain IAM secrets, as a role is more secure than static credentials.
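For completeness, on EKS that usually means IAM Roles for Service Accounts (IRSA), where the ServiceAccount carries a role annotation and the SDK's web-identity provider in the credential chain picks it up automatically; a sketch with a placeholder role ARN, assuming AWS S3/EKS rather than the MinIO endpoint used above:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-example
  namespace: default
  annotations:
    # Placeholder ARN; the role's trust policy must allow the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/flink-s3-access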

how to use postgres.db.name for multiple databases in kubernetes configMaps

So I want to create multiple PostgreSQL databases in the Kubernetes deployment.
I tried with the below ConfigMap configurations but the databases are not being created. I tried to log into the postgres db pod with one of the database names I used in the ConfigMap, but it says the database doesn't exist.
method 1:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra-kratos-postgres-config
  labels:
    app: hydra-kratos-db
data:
  postgres.db.user: pguser
  postgres.db.password: secret
  postgres.db.name:
    - postgredb1
    - postgredb2
    - postgredb3
method 2:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra-kratos-postgres-config
  labels:
    app: hydra-kratos-db
data:
  POSTGRES_USER: pguser
  POSTGRES_PASSWORD: secret
  POSTGRES_MULTIPLE_DATABASES:
    - kratos
    - hydra
Would appreciate any suggestions on this. Thank you.
I assume you are using the official Postgres image - by default it doesn't support declaring multiple databases on init. You could try building your own Postgres image like in this repo. If you create a k8s deployment based on such an image, I think there is a chance that your POSTGRES_MULTIPLE_DATABASES variable could work.
Let me know if you decide to try this.
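For reference, a minimal sketch of the init-script approach such images rely on, assuming the official image's /docker-entrypoint-initdb.d hook; the ConfigMap and script names are illustrative, not from the question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-scripts
data:
  create-multiple-dbs.sh: |
    #!/bin/bash
    # Runs once on first init because it is mounted into /docker-entrypoint-initdb.d
    set -e
    for db in $(echo "$POSTGRES_MULTIPLE_DATABASES" | tr ',' ' '); do
      echo "Creating database: $db"
      psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d postgres \
        -c "CREATE DATABASE \"$db\";" \
        -c "GRANT ALL PRIVILEGES ON DATABASE \"$db\" TO \"$POSTGRES_USER\";"
    done
To try it, mount this ConfigMap at /docker-entrypoint-initdb.d in the postgres container (with defaultMode: 0755) and pass POSTGRES_MULTIPLE_DATABASES as a plain comma-separated string such as "kratos,hydra"; ConfigMap values must be strings, which is also why the YAML lists in method 1 and method 2 are rejected by the API server.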

How to add new cluster in ArgoCD (use config file of Rancher)? - the server has asked for the client to provide credentials

I want to add a new cluster in addition to the default cluster on ArgoCD but when I add it, I get an error:
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials
I use the command argocd cluster add cluster-name
with the k8s config file downloaded from Rancher.
Thanks!
I solved my problem but welcome other solutions from everyone :D
First, create a secret with the following content:
apiVersion: v1
kind: Secret
metadata:
  namespace: argocd # same namespace of argocd-app
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: cluster-name # Get from clusters - name field in config k8s file.
  server: https://mycluster.com # Get from clusters - name - cluster - server field in config k8s file.
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
bearerToken - Get from users - user - token field in config k8s file.
caData - Get from clusters - name - cluster - certificate-authority-data field in config k8s file.
Then, apply this yaml file and the new cluster will be automatically added to ArgoCD.
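For example, a short sketch of applying the manifest and verifying the result, assuming the Secret above is saved as mycluster-secret.yaml and the argocd CLI is already logged in:
kubectl apply -f mycluster-secret.yaml
# The new cluster should now appear alongside the in-cluster entry
argocd cluster list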
I found the solution on github:
https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80

stable/prometheus-operator - adding persistent grafana dashboards

I am trying to add a new dashboard to the below helm chart
https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear.
I have added a ConfigMap to the namespace like the below -
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}
According to the documentation, this should just be "picked up" and added, but it's not.
https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like -
grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enable.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #     - grafana.example.com
  #dashboardsConfigMaps:
  #  sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
I have also tried adding this to the values.yml:
dashboardsConfigMaps:
  - sample-grafana-dashboard
which doesn't work.
Does anyone have any experience with adding your own dashboards to this helm chart, as I really am at my wits' end?
To sum up:
For the sidecar you need only one option set to true - grafana.sidecar.dashboards.enabled
Install prometheus-operator with the sidecar enabled:
helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring
Add a new dashboard, for example MongoDB_Overview:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Now the tricky part: you have to set a correct label for your configmap. By default grafana.sidecar.dashboards.label is set to grafana_dashboard, so:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
Now you should find your newly added dashboard in Grafana; moreover, every configmap with the label grafana_dashboard will be processed as a dashboard.
The dashboard is persisted and safe, stored in a configmap.
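As a quick check (not part of the original answer), you can list the ConfigMaps the sidecar will pick up by that label:
kubectl -n monitoring get configmaps -l grafana_dashboard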
UPDATE:
January 2021:
The Prometheus operator chart was migrated from the stable repo to the Prometheus Community Kubernetes Helm Charts, and Helm v3 was released, so:
Create namespace:
kubectl create namespace monitoring
Install prometheus-operator from helm chart:
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
Add the MongoDB dashboard as an example:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Lastly, label the dashboard:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
You have to:
define your dashboard json as a configmap (as you have done, but see below for an easier way)
define a provider, to tell Grafana where to load the dashboard from
map the two together
from values.yml:
dashboardsConfigMaps:
  application: application
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: application
        orgId: 1
        folder: "Application Metrics"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/application
Now the application ConfigMap should create files in this directory in the pod, and as has been discussed these should be loaded into an Application Metrics folder, visible in the GUI.
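For reference, a minimal sketch of the ConfigMap that the application entry above would point at; the dashboard key name and JSON body are placeholders, assuming the chart mounts dashboardsConfigMaps entries under the provider path:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application
  namespace: monitoring
data:
  # Each key becomes a file under /var/lib/grafana/dashboards/application
  api01.json: |-
    {JSON}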
That probably answers your issue as written, but as long as your dashboards aren't too big, using kustomize means you can have the json on disk without needing to include the json in another file, thus:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# May choose to enable this if need to refer to configmaps outside of kustomize
generatorOptions:
  disableNameSuffixHash: true
namespace: monitoring
configMapGenerator:
  - name: application
    files:
      - grafana-dashboards/application/api01.json
      - grafana-dashboards/application/api02.json
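Generating and applying that ConfigMap is then a single command (assuming the kustomization.yaml above sits in the current directory):
kubectl apply -k .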
For completeness' sake, you can also load dashboards from a url or from the Grafana site, although I don't believe mixing methods in the same folder works.
So:
dashboards:
  kafka:
    kafka01:
      url: https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/latest/resources/grafana-dashboard.json
      folder: "KUDO Kafka"
      datasource: Prometheus
  nginx:
    nginx1:
      gnetId: 9614
      datasource: Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: kafka
        orgId: 1
        folder: "KUDO Kafka"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/kafka
      - name: nginx
        orgId: 1
        folder: Nginx
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/nginx
This creates two new folders, each containing one dashboard pulled from an external source; or you could point this at your own git repo and de-couple your dashboard commits from your deployment.
If you do not change the settings in the helm chart, the default user/password for Grafana is:
user: admin
password: prom-operator

How to enable logging for third party containers in Kubernetes?

Similar to Docker, where the snippet below is used in a compose file to configure logging for third-party containers (mariadb, opentsdb, ...) so that their logs show up in Kibana:
logging:
  driver: fluentd
  options:
    fluentd-address: "0.0.0.0:24224"
    tag: "docker.{{.ID}}"
How do I configure this for Kubernetes?
Basically, you can use fluentd to collect logs and push them to third-party log storage (StackDriver or Elasticsearch). To ensure that fluentd is running on every cluster node, we can use a DaemonSet object.
As an example let’s see a part of the file content:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  ...
spec:
  ...
    spec:
      containers:
        - name: fluentd
          image: quay.io/fluent/fluentd-kubernetes-daemonset
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch-logging"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
  ...
This article describes most important steps to get everything set up.
Get Fluentd DaemonSet sources
We have created a Fluentd DaemonSet that has the proper rules and container image ready to get started:
https://github.com/fluent/fluentd-kubernetes-daemonset
Please grab a copy of the repository from the command line using git:
$ git clone https://github.com/fluent/fluentd-kubernetes-daemonset
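From there, deployment is typically a single kubectl apply of one of the prebuilt manifests; a sketch assuming the Elasticsearch variant shipped in that repository (check the repo for the exact file and label names):
$ cd fluentd-kubernetes-daemonset
$ kubectl apply -f fluentd-daemonset-elasticsearch.yaml
# Verify that one fluentd pod is scheduled per node
$ kubectl -n kube-system get pods -l k8s-app=fluentd-logging -o wide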