Error while checking for a service account using the lookup function - kubernetes

{{- if not (lookup "v1" "ServiceAccount" "{{.Release.Namespace}}" "{{ .Release.preinstall }}" ) }}
<< another service account >>
{{- end }}
I am using the lookup function to check whether the service account is already present, so that another service account with the same functionality is not created; earlier both were being created simultaneously. But even after using the lookup function they are still created simultaneously, and if I use the index function, or provide the name through variables such as $namespace := {{ .Release.Namespace }} and $service := {{ .Release.preinstall }}, it gives a nil pointer error.
Can anyone please help with the use of the lookup function and point out what is wrong with how I am using it?
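For reference, here is a minimal sketch of how the lookup call is usually written: inside a template action the namespace and name are passed as plain expressions or literal strings, not wrapped in another set of quotes and braces. The ServiceAccount name sa-preinstall below is a hypothetical placeholder, since .Release.preinstall is not a standard .Release field; also note that lookup returns an empty result during helm template and --dry-run, so the object is always rendered in those modes.
{{- /* Render the ServiceAccount only if one with this name does not already exist.
       "sa-preinstall" is a hypothetical name used for illustration. */}}
{{- if not (lookup "v1" "ServiceAccount" .Release.Namespace "sa-preinstall") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-preinstall
  namespace: {{ .Release.Namespace }}
{{- end }}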

Related

Using Elastic Search Filters in Grafana Alert Labels

I have configured an Elastic data source for Grafana, and I am filtering the error count in the logs of a Kubernetes deployment. It works as expected except for the labels in the message template.
I want to print the value of kubernetes.deployment.name, which I get from the Elastic data source.
It is shown in the labels as follows:
[ var='A' labels={kubernetes.deployment.name=api-controller} value=271 ], [ var='B' labels={kubernetes.deployment.name=api-controller} value=0 ]
Following is the message template I am printing:
Error Count for {{ $labels.kubernetes.deployment.name }} has crossed the threshold of 5000 errors in 15 minutes
But when it is rendered in the description it gives me:
Error Count for <no value> has crossed the threshold of 5000 errors in 15 minutes
Another way I tried was
{{ $labels["kubernetes.deployment.name"] }}
But it prints the whole expression as it is.
Error Count for {{ $labels["kubernetes.deployment.name"] }} has crossed the threshold of 5000 errors in 15 minutes
Try to use:
{{ index $labels "kubernetes.deployment.name" }}
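The dots in the label name are parsed by Go's template engine as nested field lookups, so $labels.kubernetes.deployment.name looks for a field chain that does not exist; index looks up the literal key instead. Applied to the description from the question, the template would read:
Error Count for {{ index $labels "kubernetes.deployment.name" }} has crossed the threshold of 5000 errors in 15 minutes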

Invalid label selector for kubernetes.core.k8s_info ansible inside operator

I am trying to filter out the deployments which are not of my current version, using Ansible.
- name: Filter and get old deployment
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Deployment
    namespace: "my_namespace"
    label_selectors:
      - curr_version notin (1.1.0)
  register: old_deployments
I expected the output to give the list of deployments whose curr_version is not equal to 1.1.0.
But I am getting this error:
{"level":"error","ts":1665557104.5018141,"logger":"proxy","msg":"Unable to convert label selectors for the client","error":"invalid selector: [curr_version notin (1.1.0)]","stacktrace":"net/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/lib/golang/src/net/http/server.go:1930"}
I referenced the pattern matching from here - https://github.com/abikouo/kubernetes.core/blob/08596fd05ba7190a04e7112270a38a0ce32095dd/plugins/module_utils/selector.py#L39
According to the pattern the above selector looks fine.
I even tried changing the selector line to this (for testing purposes):
- curr_version notin ("1.1.0")
But I get the errors below:
{"level":"error","ts":1665555657.2939646,"logger":"requestfactory","msg":"Could not parse request","error":"unable to parse requirement: values[0][curr_version]: Invalid value: \"\\\"1.1.0\\\"\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')","stacktrace":"net/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/lib/golang/src/net/http/server.go:1930"}
{"level":"error","ts":1665555657.2940943,"logger":"proxy","msg":"Unable to convert label selectors for the client","error":"invalid selector: [curr_version notin (\"1.1.0\")]","stacktrace":"net/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/lib/golang/src/net/http/server.go:1930"}
I am not sure where I am wrong. I tried to find a possible workaround but was not able to find one anywhere.
Although I am guessing the second error occurs simply because the label selector is a string and the pattern does not allow quotes inside the string, which is understandable.
Other information that might be useful:
kubernetes.core.k8s version - 2.2.3
operator-sdk version - 1.23.0
ansible version - 2.9.27
python - 3.6.8
EDIT
I am using the ose-ansible-operator image v4.10 to build the operator. I cannot reproduce the error locally, but it does occur when running inside the operator.
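As a side note (not from the original post), one way to sanity-check the selector itself is to run it directly against the cluster with kubectl, which accepts the same set-based selector syntax; if this returns the expected Deployments, the selector is valid and the problem lies in how the operator proxy converts it:
kubectl get deployments -n my_namespace -l 'curr_version notin (1.1.0)'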

Problem with k8s reading keys "ValueError: Could not deserialize key data."

I have deployed an app on a GKE private cluster and I have a backend service, but the problem is that the backend service can't read the GOOGLE_ACCOUNT_PRIVATE_KEY. I am getting the following error:
line 1526, in _handle_key_loading_error
raise ValueError("Could not deserialize key data.")
ValueError: Could not deserialize key data.
This is part of the backend deployment where this env is found:
env:
  - name: GOOGLE_ACCOUNT_PRIVATE_KEY
    valueFrom:
      configMapKeyRef:
        name: gapk
        key: GOOGLE_ACCOUNT_PRIVATE_KEY
I also have other env variables that are read successfully, and I don't get any error for them.
This is how I store the GOOGLE_ACCOUNT_PRIVATE_KEY env variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gapk
data:
  GOOGLE_ACCOUNT_PRIVATE_KEY: '-----BEGIN PRIVATE KEY-----\private key\n-----END PRIVATE KEY-----\n'
with " " instead of ' ' is interpreted \n like new row but when i put the key in ' ' I have the serialize error, in both ways i got it wrong :(
Am I doing some mistake while decoding, also i put the original value of the key, not encoded in base64, and still getting the error ValueError: Could not deserialize key data.
Have you tried replacing \n with \\n?
Another thing to try is simply to remove the \n escapes and insert real newlines over multiple lines; as long as the string is quoted it should be fine. You could also try removing the trailing newline, since private-key regexes are not always consistent about it.
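A minimal sketch of that suggestion, storing the key as a YAML block scalar so that real newlines are preserved (the key material shown is a placeholder):
apiVersion: v1
kind: ConfigMap
metadata:
  name: gapk
data:
  GOOGLE_ACCOUNT_PRIVATE_KEY: |
    -----BEGIN PRIVATE KEY-----
    <placeholder key material>
    -----END PRIVATE KEY-----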

Is there any placeholder notation in mta.yaml that removes spaces from the CF org name parameter?

We are using MTA to structure our application and are deploying it using the SAP Cloud SDK Pipeline and the Transport Management landscape.
In the mta.yaml, we are referencing the org (organization) parameter value using the placeholder notation ${org}.
The issue is that the org name contains spaces (e.g. Sample Org Name), and that causes an error during application deployment to Cloud Foundry.
We do not want to rename the org name.
Is there any other placeholder notation that removes the spaces between the characters?
We have observed that ${default-host} removes the spaces from the organization name but its scope is limited to only modules and not resources.
We need the substitution variable in the resources scope.
We would appreciate it if someone could help us resolve this issue.
Please find a snippet of the mta.yaml and the error message below.
resources:
  - name: uaa_test_app
    parameters:
      path: ./xs-security.json
      service-plan: application
      service: xsuaa
      config:
        xsappname: 'test-app-${org}-${space}'
    type: org.cloudfoundry.managed-service
Error Message:
Service operation failed: Controller operation failed: 502 Updating service "uaa_test_app" failed: Bad Gateway: Service broker error: Service broker xsuaa failed with: org.springframework.cloud.servicebroker.exception.ServiceBrokerException: Error updating application null (Error parsing xs-security.json data: Inconsistent xs-security.json: Invalid xsappname "Test-App-Sample Org Name-test": May only include characters 'a'-'z', 'A'-'Z', '0'-'9', '_', '-', '', and '/'.)

How to write Prometheus alert rules for Mesos and HAProxy process down?

I am working on a task where I need to configure and validate Prometheus Alertmanager. The user should get an alert when the Mesos process or the HAProxy process is down. I tried to find alert rules for these on the internet but did not find proper ones. Can anyone tell me how to write the alert rules for these? I basically need the condition clause.
This depends on how you are monitoring things. Let's use HAProxy as an example and say you are using the HAProxy Exporter (https://github.com/prometheus/haproxy_exporter) to monitor it. The HAProxy Exporter includes a metric named haproxy_up, which indicates whether it successfully scraped HAProxy (when Prometheus in turn scraped the exporter). If HAProxy couldn't be scraped, haproxy_up will have a value of 0 and you can alert on that. Let's say your HAProxy Exporter has a Prometheus job name of haproxy-exporter. You could then write an alerting rule like this:
ALERT HAProxyDown
  IF haproxy_up{job="haproxy-exporter"} == 0
  FOR 5m
  LABELS {
    severity = "page"
  }
  ANNOTATIONS {
    summary = "HAProxy {{ $labels.instance }} down",
    description = "HAProxy {{ $labels.instance }} could not be scraped."
  }
This will send an alert if any HAProxy instance could not be scraped for more than 5 minutes.
If you wanted to know whether the exporter (instead of HAProxy itself) was down, you could instead use the expression up{job="haproxy-exporter"} == 0 to find any down HAProxy Exporter instances. Probably you'll want to check both actually.
I can't say much about Mesos and its exporter since I don't have any experience with them, but I imagine it would be something similar.
Also, to export Mesos metrics you should use mesos-exporter: https://github.com/prometheus-junkyard/mesos_exporter
https://hub.docker.com/r/prom/mesos-exporter/
It also has a mesos_up metric. Your alerts should look the same as the HAProxy alert:
ALERT MesosMasterDown
  IF mesos_up{job="mesos-master-exporter"} == 0
  FOR 5m
  LABELS {
    severity = "page"
  }
  ANNOTATIONS {
    summary = "Mesos master {{ $labels.instance }} down",
    description = "Mesos master {{ $labels.instance }} could not be scraped."
  }

ALERT MesosSlaveDown
  IF mesos_up{job="mesos-slave-exporter"} == 0
  FOR 5m
  LABELS {
    severity = "page"
  }
  ANNOTATIONS {
    summary = "Mesos slave {{ $labels.instance }} down",
    description = "Mesos slave {{ $labels.instance }} could not be scraped."
  }
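Note (not part of the original answers): the ALERT syntax above is the legacy Prometheus 1.x rule format, which was removed in Prometheus 2.0. On a 2.x release the same HAProxy alert would be written in a YAML rules file roughly like this, and the Mesos alerts translate the same way:
groups:
  - name: haproxy
    rules:
      - alert: HAProxyDown
        expr: haproxy_up{job="haproxy-exporter"} == 0
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "HAProxy {{ $labels.instance }} down"
          description: "HAProxy {{ $labels.instance }} could not be scraped."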