How do I map one variable's values to another variable in Grafana?

In my Grafana dashboard (with Prometheus as a data source), I have a custom $tier variable, which allows the user to pick the tier from a dropdown. It's defined as:
Values separated by comma: production, stage, development
I need to filter a Prometheus metric by a label which contains a shortened version of the tier name:
"foo-dev"
"foo-stage"
"foo-prod"
I was thinking that I'd create a hidden variable $shortened_tier so I could use that in my query filter, like this:
my_label=~"foo-$shortened_tier"
I'd like to define it based on the value of $tier:
"development" -> "dev"
"stage" -> "stage"
"production" -> "prod"
How do I do that?

I figured out a workaround for this, but it is suuuuper hacky:
Name: shortened_tier
Type: Query
Data Source: Prometheus
Query: label_values(up{env="$tier"}, env)
Regex: (dev|stage|prod).*
What I wanted to do was simply Query: $tier, but since Grafana wouldn't let me do that, I had to use a completely different metric (up) where I could pass in $tier and get back the same exact value as a string. Then I use regex to just look for dev|stage|prod at the beginning of the string, capture that part, and throw away the rest.
This has the result that I'm looking for, with the value of $shortened_tier dynamically changing based on the value that's selected and assigned to $tier. But man I wish Grafana had a less hacky way to do this.
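For the record, a less hacky sketch, assuming a Grafana version whose Custom variables support key/value pairs (the text : value syntax mentioned in the custom-variable docs linked further down): fold the short names into $tier itself.
Name: tier
Type: Custom
Values separated by comma: production : prod, stage : stage, development : dev
The dropdown then displays the long names while the variable resolves to the short ones, so the filter becomes simply:
my_label=~"foo-$tier"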

Related

How to select object attribute in ADF using variable

I'm trying to parametrize a pipeline in Azure Data Factory in order to enable certain functionality for multiple environments. The idea is that the current environment is always available through a global parameter. I'd like to use this parameter to look up an array of environments to process data for. Example:
targetEnvs = [{ "dev": ["dev"], "test": ["dev", "test"], "acc": [], "prod": ["acc", "prod"] }]
Then one should be able to select the targetEnv array with something like targetEnvs[environment] or targetEnvs.environment. Subsequently a ForEach is used to execute some logic on these target environments.
I tried setting this up with targetEnvs as a pipeline parameter (with a default value mapping each env directly to targetEnv, as follows: {"dev": ["dev"], "test": ["test"]}). Then I have a Set variable step that takes its value from the targetEnvs parameter.
I'm now looking for a way to use the current environment (stored in a global parameter) instead of hardcoding "dev" in the Set Variable expression, but I'm not sure how to do this.
The expressions I tried won't even start the pipeline.
Question: how do I select this attribute of the object? Any other suggestions on how to tackle this problem are welcome as well!
(Python analogy would be to have a dictionary target_envs and taking a value from it by using the key "current_env": target_envs[current_env].)
When I tried to access the object the same way you did, the same error occurred. I took the parameter targetEnv (the given array) and a global parameter environment with the value dev.
You can use the following dynamic content to access the key value.
@pipeline().parameters.targetEnv[0][pipeline().globalParameters.environment]
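A sketch of how this might be wired up from there (the activity and variable names are hypothetical): write the expression into an Array variable with a Set variable activity, then iterate it with a ForEach.
Set variable "targetEnvList" (type Array), Value:
@pipeline().parameters.targetEnv[0][pipeline().globalParameters.environment]
ForEach, Items:
@variables('targetEnvList')
Inside the ForEach, @item() yields one target environment name per iteration.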

What's the meaning of CHANGEME in K8s Helm charts?

While checking the values of YAML files for a Helm chart, one often encounters
changeme passed as a value. E.g.:
rabbitmq.conf: |-
  ## username and password
  default_user={{ .Values.rabbitmq.username }}
  default_pass=CHANGEME
or:
config:
  accumuloSite:
    instance.secret: "changeme"
  userManagement:
    rootPassword: "changeme"
What is the meaning of "changeme"?
Is it just a word that needs to be replaced? If so, what will happen if it is not? A security hole, or hopefully an error?
Or is it a keyword that lets the system replace this with a secure password? If so, how does the system know what type of password to produce?
In either case, how does the chart connect this value with other places where it might be needed? (E.g., if this is a password that another service, dependent on the first, needs, how is the manually assigned or derived password propagated to the second service?)
(Mainly interested in Helm v3, if that matters.)
I'd almost always expect this to be just a placeholder that needs to be filled in. In many cases YAML can wind up having inconsistent types if a value is actually absent, so it can be useful to have some value in the chart's values.yaml, but for things like passwords there's not a "right default value" you could include.
Nothing will automatically replace these for you or warn you that you're using the default values. Nothing will obviously break if you deploy with these values, but I'm sure changeme is up there with passw0rd on the short list of default passwords to try if you're actively trying to break into a system.
If you were writing your own chart, you could also test if a value is present using required and explain what's missing, and this approach might be more secure than having a well-known default password.
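A minimal sketch of that required approach, applied to the rabbitmq.conf template above (.Values.rabbitmq.password is an assumed values key, not one from the original chart):
default_pass={{ required "rabbitmq.password must be set" .Values.rabbitmq.password }}
Rendering then fails with that message unless the user supplies the value (e.g. --set rabbitmq.password=...), instead of silently shipping a well-known default.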

Grafana - Is it possible to use variables in Loki-based dashboard query?

I am working on a Loki-based dashboard in Grafana. I have one panel for searching text in the Loki trace logs; the current query is:
{job="abc-service"}
|~ "searchTrace"
|json
|line_format "{{if .trace_message}} Message: \t{{.trace_message}} {{end}}"
Where searchTrace is a variable of type "Text box" for the user to input search text.
I want to include another variable skipTestLog to skip logs created by some test cron tasks. skipTestLog is a custom variable of two options: Yes,No.
Suppose the logs created by test cron tasks contain the text CronTest in the field trace_message after the json parser, are there any ways to filter them out based on the selected value of skipTestLog?
Create a key/value custom variable and use it in the query's label filter; a sketch of both pieces follows.
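A sketch under stated assumptions (the regex values are illustrative; the No option just has to be a pattern that never occurs in trace_message):
Name: skipTestLog
Type: Custom
Values separated by comma: Yes : .*CronTest.*, No : ^NEVER_MATCHES$
{job="abc-service"}
|~ "searchTrace"
| json
| trace_message !~ "$skipTestLog"
| line_format "{{if .trace_message}} Message: \t{{.trace_message}} {{end}}"
With Yes selected, the label filter drops any line whose trace_message matches .*CronTest.*; with No selected, the pattern matches nothing, so every line passes.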

Grafana: Is a dashboard-wide constant map/dictionary possible?

I have a Grafana dashboard which has 3 different global variables for the user to choose from: cloud (aws, azure, gcp), environment (dev, stage, prod), location (eastus2, westus2, westeurope, northeurope, etc.)
The user can choose a specific dashboard with the combination of these three variables.
I want to add a string constant (say, a UUID) which is unique for each combination of these three variables, such that:
aws-dev-eastus2 => b3207989-162c-4be6-a3d0-3a17444cff7d
azure-stage-westeurope => 5340aad8-ea3d-416b-8ab2-1cafd7c301ca
gcp-prod-westus2 => 2f2b3a9c-c179-4b70-b688-36d9f3548bc2
...
I wonder if it is possible to have a constant-type variable, in the format of a map/dictionary, so that when the three variables cloud/environment/location are fixed, this variable returns the corresponding UUID.
Any ideas?
1.) Global variables - no, they can't be "global". Grafana has only a few built-in variables, which are global - doc: https://grafana.com/docs/grafana/latest/variables/variable-types/global-variables/. Users can define only dashboard variables (so the scope of a variable is one particular dashboard, not the whole Grafana instance).
2.) Key => value variable - yes and no: only some SQL datasources (e.g. PostgreSQL, MySQL) support it, and the Custom variable type supports it as well (make sure your Grafana version has support for that). Doc: https://grafana.com/docs/grafana/latest/variables/variable-types/add-custom-variable/#enter-custom-options
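To illustrate the SQL flavor, a sketch assuming a PostgreSQL datasource and a hypothetical lookup table env_uuid_map: define a Query variable uuid with
SELECT uuid AS __value
FROM env_uuid_map
WHERE combo = '${cloud}-${environment}-${location}';
Because the query references the three dashboard variables, uuid re-resolves whenever any of them changes.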

How to get the id of the run from within a component?

I'm doing some experimentation with Kubeflow Pipelines and I'm interested in retrieving the run id to save along with some metadata about the pipeline execution. Is there any way I can do so from a component like a ContainerOp?
You can use kfp.dsl.EXECUTION_ID_PLACEHOLDER and kfp.dsl.RUN_ID_PLACEHOLDER as arguments for your component. At runtime they will be replaced with the actual values.
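A minimal sketch using the kfp v1 SDK (log_run_id_op is a hypothetical component defined here only to show the wiring):
from kfp import dsl

def log_run_id_op(run_id: str) -> dsl.ContainerOp:
    # Just echoes whatever id it is handed
    return dsl.ContainerOp(
        name='log-run-id',
        image='alpine:3.18',
        command=['echo', run_id],
    )

@dsl.pipeline(name='run-id-demo')
def run_id_demo():
    # dsl.RUN_ID_PLACEHOLDER is substituted with the actual run id at runtime
    log_run_id_op(dsl.RUN_ID_PLACEHOLDER)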
I tried to do this using the Python DSL, but it seems that isn't possible right now.
The only option that I found is to use the method that they used in this sample code. You basically declare a string containing {{workflow.uid}}. It will be replaced with the actual value during execution time.
You can also do this to get the pod name; it would be {{pod.name}}.
Since Kubeflow Pipelines relies on Argo, you can use Argo variables to get what you want.
For example,
from kfp.components import func_to_container_op
import kfp.dsl as dsl

@func_to_container_op
def dummy(run_id: str, run_name: str) -> str:
    # The Argo placeholders passed in below are substituted at execution time
    return f'run_id={run_id}, run_name={run_name}'

@dsl.pipeline(
    name='test_pipeline',
)
def test_pipeline():
    dummy('{{workflow.labels.pipeline/runid}}', '{{workflow.annotations.pipelines.kubeflow.org/run_name}}')
You will find that the placeholders will be replaced with the correct run_id and run_name.
For more argo variables: https://github.com/argoproj/argo-workflows/blob/master/docs/variables.md
To know what is recorded in the labels and annotations of a Kubeflow pipeline run, just get the corresponding workflow from k8s:
kubectl get workflow/XXX -oyaml
Alternatively, create_run_from_pipeline_func returns a RunPipelineResult, which has a run_id attribute:
client = kfp.Client(host)
result = client.create_run_from_pipeline_func(…)
result.run_id
Your component's container should have an environment variable called HOSTNAME that is set to its unique pod name, from which you can derive all necessary metadata.
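For instance, a tiny sketch of reading it from inside a Python-based component:
import os

# Kubernetes sets HOSTNAME to the pod name inside the container
pod_name = os.environ.get('HOSTNAME', '')
print(pod_name)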