.Net Confluent.Kafka.SchemaRegistry ValueSerializer error - apache-kafka

I manually registered a schema in the Schema Registry using a curl command. The registered schema is:
'{ "schema": "{ \"type\": \"record\", \"name\": \"Person\", \"namespace\": \"com.xxx\", \"fields\": [ { \"name\": \"firstName\", \"type\": \"string\" }, { \"name\": \"lastName\", \"type\": \"string\" }, { \"name\": \"age\", \"type\": \"long\" } ]}" }'
I created code in .NET following https://github.com/confluentinc/confluent-kafka-dotnet/blob/master/examples/JsonSerialization/Program.cs but I am getting the error below:
One or more errors occurred. (Local: Value serialization error)
The JSON schema corresponding to the written data:
{"type":"record","name":"Person","namespace":"com.xxx","fields":[{"name":"firstName","type":"string"},{"name":"lastName","type":"string"},{"name":"age","type":"long"}]}

Related

Get specific value out of Google Fit API in REST

I am calling the Google Fit API for the activity between two dates, but how do I get just the activity integers? Is there a direct way to get them from the HTTP request, or do I have to filter the JSON data myself?
import json

import requests
from allauth.socialaccount.models import SocialToken  # django-allauth token model

# Look up the stored Google OAuth access token for user 2.
social_token = SocialToken.objects.get(account__user=2)
token = social_token.token

url = "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate"
headers = {
    "Authorization": "Bearer {}".format(token),
    "Content-Type": "application/json"
}
body = {
    "aggregateBy": [{
        "dataTypeName": "com.google.activity.segment",
    }],
    "startTimeMillis": 1634767200000,
    "endTimeMillis": 1634853600000
}
respo = requests.post(url, data=json.dumps(body), headers=headers)
The JSON response
{
  "bucket": [
    {
      "startTimeMillis": "1634767200000",
      "endTimeMillis": "1634853600000",
      "dataset": [
        {
          "dataSourceId": "derived:com.google.activity.segment:com.google.android.gms:merge_activity_segments",
          "point": [
            {
              "startTimeNanos": "1634818320000000000",
              "endTimeNanos": "1634820120000000000",
              "dataTypeName": "com.google.activity.segment",
              "originDataSourceId": "raw:com.google.activity.segment:com.google.android.apps.fitness:user_input",
              "value": [
                {
                  "intVal": 97,
                  "mapVal": []
                }
              ]
            },
            {
              "startTimeNanos": "1634820120000000000",
              "endTimeNanos": "1634820292573000000",
              "dataTypeName": "com.google.activity.segment",
              "value": [
                {
                  "intVal": 7,
                  "mapVal": []
                }
              ]
            },
            {
              "startTimeNanos": "1634823157245000000",
              "endTimeNanos": "1634823301721000000",
              "dataTypeName": "com.google.activity.segment",
              "value": [
                {
                  "intVal": 7,
                  "mapVal": []
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
To get all individual entries for a data type within a time range, you need to use the Users.dataSources.datasets.get API.
The request method will be GET and the endpoint will be
https://www.googleapis.com/fitness/v1/users/me/dataSources/{dataSourceId}/datasets/{datasetId}
where datasetId is of the format StartTimeInNanoseconds-EndTimeInNanoseconds and dataSourceId is the ID of the data source against which you want the individual entries. For example, for body temperature, it will be derived:com.google.body.temperature:com.google.android.gms:merged.
To get the list of all the available data source IDs your application has access to, call the API
GET https://www.googleapis.com/fitness/v1/users/me/dataSources
Sample request to get body temperature
GET https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.body.temperature:com.google.android.gms:merged/datasets/1655750425675000000-1655991535675000000
Response
{
  "minStartTimeNs": "1655750425675000000",
  "maxEndTimeNs": "1655991535675000000",
  "dataSourceId": "derived:com.google.body.temperature:com.google.android.gms:merged",
  "point": [
    {
      "modifiedTimeMillis": "1655961443186",
      "startTimeNanos": "1655944200000000000",
      "endTimeNanos": "1655944200000000000",
      "value": [
        {
          "mapVal": [],
          "fpVal": 30
        },
        {
          "mapVal": []
        }
      ],
      "dataTypeName": "com.google.body.temperature",
      "originDataSourceId": "raw:com.google.body.temperature:com.google.android.apps.fitness:user_input"
    },
    {
      "modifiedTimeMillis": "1655961443186",
      "startTimeNanos": "1655947800000000000",
      "endTimeNanos": "1655947800000000000",
      "value": [
        {
          "mapVal": [],
          "fpVal": 34
        },
        {
          "mapVal": []
        }
      ],
      "dataTypeName": "com.google.body.temperature",
      "originDataSourceId": "raw:com.google.body.temperature:com.google.android.apps.fitness:user_input"
    },
    {
      "modifiedTimeMillis": "1655961443186",
      "startTimeNanos": "1655955000000000000",
      "endTimeNanos": "1655955000000000000",
      "value": [
        {
          "mapVal": [],
          "fpVal": 38
        },
        {
          "mapVal": []
        }
      ],
      "dataTypeName": "com.google.body.temperature",
      "originDataSourceId": "raw:com.google.body.temperature:com.google.android.apps.fitness:user_input"
    }
  ]
}
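For completeness, a minimal Python sketch of the same call plus a simple extraction of the readings (the data source, time range, and token below are placeholders; the token would normally come from the question's code):
import requests

token = "<OAuth access token>"  # e.g. social_token.token from the question's code
data_source_id = "derived:com.google.body.temperature:com.google.android.gms:merged"
dataset_id = "1655750425675000000-1655991535675000000"  # StartTimeNanos-EndTimeNanos

url = ("https://www.googleapis.com/fitness/v1/users/me/dataSources/"
       + data_source_id + "/datasets/" + dataset_id)
resp = requests.get(url, headers={"Authorization": "Bearer {}".format(token)})
resp.raise_for_status()

# Each point carries its reading in value[]; use fpVal for float data types
# (e.g. body temperature) or intVal for integer ones (e.g. activity segments).
for point in resp.json().get("point", []):
    for value in point.get("value", []):
        reading = value.get("fpVal", value.get("intVal"))
        if reading is not None:
            print(point["startTimeNanos"], reading)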

Get 500 error on update Pulsar schema of type "JSON" using admin API

I am trying to update a Pulsar schema of type "JSON" using the admin API.
I have a Pulsar namespace "lol" and a topic "sdf" with a single schema version.
List of topic schema versions
I tried to update this schema by posting another JSON schema but received a 500 error.
Post request and response
The Pulsar log file is empty, and the Pulsar stack trace has nothing informative.
09:41:36.930 [BookKeeperClientWorker-OrderedExecutor-0-0] INFO org.eclipse.jetty.server.RequestLog - 127.0.0.1 - - [20/Jul/2021:09:41:36 +0300] "POST /admin/v2/schemas/public/lol/sdf/schema HTTP/1.1" 500 565 "-" "PostmanRuntime/7.26.8" 159
When I try to update a schema of type "AVRO" in the same way, everything works fine and the schema version increases.
Can anybody help me find the cause of this weird behavior?
Here is the request body:
{"type": "JSON", "schema": "{ \"$id\": \"https://example.com/person.schema.json\", \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\": \"Person\", \"type\": \"object\", \"properties\": { \"firstName\": { \"type\": \"string\", \"description\": \"The person's first name.\" }, \"lastName\": { \"type\": \"string\", \"description\": \"The person's last name.\" }, \"age\": { \"description\": \"Age in years which must be equal to or greater than zero.\", \"type\": \"integer\", \"minimum\": 0 } }}", "properties": {} }
Here is the current schema definition (GET /admin/v2/schemas/public/lol/sdf/schema):
{
  "version": 0,
  "type": "JSON",
  "timestamp": 0,
  "data": "{ \"$id\": \"https://example.com/person.schema.json\", \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\": \"Person\", \"type\": \"object\", \"properties\": { \"firstName\": { \"type\": \"string\", \"description\": \"The person's first name.\" }, \"surname\": { \"type\": \"string\", \"description\": \"The person's last name.\" }, \"age\": { \"description\": \"Age in years which must be equal to or greater than zero.\", \"type\": \"integer\", \"minimum\": 0 } }}",
  "properties": {}
}

Invalid story format failed to parse story while posting Rasa X Http API

I am trying to create a story using a POST request in Postman, and below is my story format.
I am using this format because a GET request returned the story in the same format.
{
  "id": 65,
  "name": "interactive_story_65",
  "story": "## interactive_story_65\n* emp_info\n - utter_employee",
  "annotation": {
    "user": "me",
    "time": 1597919151.8836874962
  },
  "filename": "data\\stories.md"
}
But I am getting the error below:
{
  "version": "0.31.0",
  "status": "failure",
  "message": "Failed to parse story.",
  "reason": "StoryParseError",
  "details": "Invalid story format. Failed to parse '## {\r\n \"id\": 65,\r\n \"name\": \"interactive_story_65\",\r\n \"story\": \"## interactive_story_65\\n* emp_info\\n - utter_employee\",\r\n \"annotation\": {\r\n \"user\": \"me\",\r\n \"time\": 1597919151.8836874962\r\n },\r\n \"filename\": \"data\\\\stories.md\"\r\n }'",
  "help": null,
  "code": 400
}
Please help.
This endpoint is actually expecting plain markdown, with text/x-markdown as the content-type header. If you look closely at the docs, you'll see that you're using the response schema as the request schema - I did that too at first. The request schema is just a markdown string e.g.
curl --request PUT \
--url http://localhost:5002/api/stories \
--header 'authorization: Bearer <Token>' \
--header 'content-type: text/x-markdown' \
--data '## greet
* greet
- utter_greet\n'
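If you are scripting this instead of using curl, the same request can be made from Python roughly like this (the host, port, and token are placeholders):
import requests

story_md = "## greet\n* greet\n - utter_greet\n"

resp = requests.put(
    "http://localhost:5002/api/stories",
    headers={
        "Authorization": "Bearer <Token>",
        "Content-Type": "text/x-markdown",  # plain markdown, not JSON
    },
    data=story_md,
)
print(resp.status_code, resp.text)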

How to extract Kubernetes pod MAC addresses from the annotations object

I am trying to extract the MAC or IP addresses under metadata.annotations, using either a kubectl get po JSON filter or jq. Other objects are easy to query for those values.
kubectl get po -o json -n multus|jq -r .items
Under annotations there is duplicated CNI info, but that is OK. I would like to extract those MAC addresses using jq; it seems to be tricky in this case.
[
  {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
      "annotations": {
        "k8s.v1.cni.cncf.io/network-status": "[{\n \"name\": \"eps-cni\",\n \"ips\": [\n \"172.31.83.216\"\n ],\n \"default\": true,\n \"dns\": {}\n},{\n \"name\": \"ipvlan1-busybox1\",\n \"interface\": \"net1\",\n \"ips\": [\n \"172.31.230.70\"\n ],\n \"mac\": \"0a:2d:40:c6:f8:ea\",\n \"dns\": {}\n},{\n \"name\": \"ipvlan2-busybox1\",\n \"interface\": \"net2\",\n \"ips\": [\n \"172.31.232.70\"\n ],\n \"mac\": \"0a:52:8a:62:5d:f4\",\n \"dns\": {}\n}]",
        "k8s.v1.cni.cncf.io/networks": "ipvlan1-busybox1, ipvlan2-busybox1",
        "k8s.v1.cni.cncf.io/networks-status": "[{\n \"name\": \"eps-cni\",\n \"ips\": [\n \"172.31.83.216\"\n ],\n \"default\": true,\n \"dns\": {}\n},{\n \"name\": \"ipvlan1-busybox1\",\n \"interface\": \"net1\",\n \"ips\": [\n \"172.31.230.70\"\n ],\n \"mac\": \"0a:2d:40:c6:f8:ea\",\n \"dns\": {}\n},{\n \"name\": \"ipvlan2-busybox1\",\n \"interface\": \"net2\",\n \"ips\": [\n \"172.31.232.70\"\n ],\n \"mac\": \"0a:52:8a:62:5d:f4\",\n \"dns\": {}\n}]",
        "kubernetes.io/psp": "eps.privileged"
      },
      "creationTimestamp": "2020-05-24T17:09:10Z",
      "generateName": "busybox1-f476958bd-",
      "labels": {
        "app": "busybox",
        "pod-template-hash": "f476958bd"
      },
      "name": "busybox1-f476958bd-hds4w",
      "namespace": "multus",
      "ownerReferences": [
        {
          "apiVersion": "apps/v1",
          "blockOwnerDeletion": true,
          "controller": true,
          "kind": "ReplicaSet",
          "name": "busybox1-f476958bd",
          "uid": "5daf9b52-e1b3-4df7-b5a1-028b48e7fcc0"
        }
      ],
      "resourceVersion": "965176",
      "selfLink": "/api/v1/namespaces/multus/pods/busybox1-f476958bd-hds4w",
      "uid": "0051b85d-9774-4f89-8658-f34065222bf0"
    },
For a basic jq query:
[root@ip-172-31-103-214 ~]# kubectl get po -o json -n multus|jq -r '.items[] | .spec.volumes'
[
  {
    "name": "test-busybox1-token-f6bdj",
    "secret": {
      "defaultMode": 420,
      "secretName": "test-busybox1-token-f6bdj"
    }
  }
]
I can switch the get pod output to YAML format and then use a normal grep command.
kubectl get po -o yaml -n multus|egrep 'mac'|sort -u
"mac": "0a:2d:40:c6:f8:ea",
"mac": "0a:52:8a:62:5d:f4",
Thanks
Starting with the original JSON and using jq's -r command-line option, the following jq filter yields the output shown below:
.[]
| .metadata.annotations[]
| (fromjson? // empty)
| .[]
| select(has("mac"))
| {mac}
Output:
{"mac":"0a:2d:40:c6:f8:ea"}
{"mac":"0a:52:8a:62:5d:f4"}
{"mac":"0a:2d:40:c6:f8:ea"}
{"mac":"0a:52:8a:62:5d:f4"}
Please try the command below and you should get the expected output.
cat abc.json | jq -r '.metadata.annotations."k8s.v1.cni.cncf.io/networks-status" | fromjson | .[].mac '
where abc.json is your JSON file.
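If jq is not a hard requirement, a small Python sketch does the same extraction (it assumes the output of kubectl get po -o json -n multus has been saved to pods.json):
import json

# pods.json is assumed to hold the full `kubectl get po -o json -n multus` output.
with open("pods.json") as f:
    pods = json.load(f)

for pod in pods.get("items", []):
    annotations = pod["metadata"].get("annotations", {})
    status = annotations.get("k8s.v1.cni.cncf.io/networks-status")
    if not status:
        continue
    # The annotation value is itself a JSON document: a list of attached networks.
    for network in json.loads(status):
        if "mac" in network:
            print(pod["metadata"]["name"], network["name"], network["mac"])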

Prometheus alertmanager fails to send notifications due to "context deadline exceeded"

I configured the prometheus-operator chart with prometheus-msteams for monitoring and alerting on a k8s cluster.
But not all notifications are correctly delivered to the MS Teams channel. If I have 6 alerts firing, I can see them in Alertmanager's UI, but only one or two of them are sent to the MS Teams channel.
I can see this log in the alertmanager pod:
C:\monitoring>kubectl logs alertmanager-monitor-prometheus-operato-alertmanager-0 -c alertmanager
level=info ts=2019-11-04T09:16:47.358Z caller=main.go:217 msg="Starting Alertmanager" version="(version=0.19.0, branch=HEAD, revision=7aa5d19fea3f58e3d27dbdeb0f2883037168914a)"
level=info ts=2019-11-04T09:16:47.358Z caller=main.go:218 build_context="(go=go1.12.8, user=root#587d0268f963, date=20190903-15:01:40)"
level=warn ts=2019-11-04T09:16:47.553Z caller=cluster.go:228 component=cluster msg="failed to join cluster" err="1 error occurred:\n\t* Failed to resolve alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc:9094: lookup alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc on 169.254.25.10:53: no such host\n\n"
level=info ts=2019-11-04T09:16:47.553Z caller=cluster.go:230 component=cluster msg="will retry joining cluster every 10s"
level=warn ts=2019-11-04T09:16:47.553Z caller=main.go:308 msg="unable to join gossip mesh" err="1 error occurred:\n\t* Failed to resolve alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc:9094: lookup alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc on 169.254.25.10:53: no such host\n\n"
level=info ts=2019-11-04T09:16:47.553Z caller=cluster.go:623 component=cluster msg="Waiting for gossip to settle..." interval=2s
level=info ts=2019-11-04T09:16:47.597Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2019-11-04T09:16:47.598Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2019-11-04T09:16:47.601Z caller=main.go:466 msg=Listening address=:9093
level=info ts=2019-11-04T09:16:49.554Z caller=cluster.go:648 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000149822s
level=info ts=2019-11-04T09:16:57.555Z caller=cluster.go:640 component=cluster msg="gossip settled; proceeding" elapsed=10.001110685s
level=error ts=2019-11-04T09:38:02.472Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded"
level=error ts=2019-11-04T09:38:02.472Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=4 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager"
level=error ts=2019-11-04T09:43:02.472Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded"
level=error ts=2019-11-04T09:43:02.472Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=5 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager"
level=error ts=2019-11-04T09:48:02.473Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded"
level=error ts=2019-11-04T09:48:02.473Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=5 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager"
level=error ts=2019-11-04T09:53:02.473Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded"
level=error ts=2019-11-04T09:53:02.473Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=5 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager"
How can I solve this error?
EDIT:
The setup uses prometheus-msteams as a webhook to redirect alert notifications from Alertmanager to an MS Teams channel.
The prometheus-msteams container logs also have some errors:
C:\> kubectl logs prometheus-msteams-564bc7d99c-dpzsm
time="2019-11-06T06:45:14Z" level=info msg="Version: v1.1.4, Commit: d47a7ab, Branch: HEAD, Build Date: 2019-08-04T17:17:06+0000"
time="2019-11-06T06:45:14Z" level=info msg="Parsing the message card template file: /etc/template/card.tmpl"
time="2019-11-06T06:45:15Z" level=warning msg="If the 'config' flag is used, the 'webhook-url' and 'request-uri' flags will be ignored."
time="2019-11-06T06:45:15Z" level=info msg="Parsing the configuration file: /etc/config/connectors.yaml"
time="2019-11-06T06:45:15Z" level=info msg="Creating the server request path \"/alertmanager\" with webhook \"https://outlook.office.com/webhook/00ce0266-7013-4d53-a20f-115ece04042d#9afb1f8a-2192-45ba-b0a1-6b193c758e24/IncomingWebhook/43c3d745ff5e426282f1bc6b5e79bfea/8368b12d-8ac9-4832-b7b5-b337ac267220\""
time="2019-11-06T06:45:15Z" level=info msg="prometheus-msteams server started listening at 0.0.0.0:2000"
time="2019-11-06T07:01:07Z" level=info msg="/alertmanager received a request"
time="2019-11-06T07:01:07Z" level=debug msg="Prometheus Alert: {\"receiver\":\"prometheus-msteams\",\"status\":\"firing\",\"alerts\":[{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubeDeploymentReplicasMismatch\",\"deployment\":\"storagesvc\",\"endpoint\":\"http\",\"instance\":\"10.233.108.72:8080\",\"job\":\"kube-state-metrics\",\"namespace\":\"fission\",\"pod\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"service\":\"monitor-kube-state-metrics\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=kube_deployment_spec_replicas%7Bjob%3D%22kube-state-metrics%22%7D+%21%3D+kube_deployment_status_replicas_available%7Bjob%3D%22kube-state-metrics%22%7D\\u0026g0.tab=1\"},{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubePodNotReady\",\"namespace\":\"fission\",\"pod\":\"storagesvc-5bff46b69b-vfdrd\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=sum+by%28namespace%2C+pod%29+%28kube_pod_status_phase%7Bjob%3D%22kube-state-metrics%22%2Cphase%3D~%22Failed%7CPending%7CUnknown%22%7D%29+%3E+0\\u0026g0.tab=1\"}],\"groupLabels\":{\"namespace\":\"fission\",\"severity\":\"critical\"},\"commonLabels\":{\"namespace\":\"fission\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"commonAnnotations\":{},\"externalURL\":\"http://monitor-prometheus-operato-alertmanager.monitoring:9093\",\"version\":\"4\",\"groupKey\":\"{}:{namespace=\\\"fission\\\", severity=\\\"critical\\\"}\"}"
time="2019-11-06T07:01:07Z" level=debug msg="Alert rendered in template file: \r\n{\r\n \"#type\": \"MessageCard\",\r\n \"#context\": \"http://schema.org/extensions\",\r\n \"themeColor\": \"8C1A1A\",\r\n \"summary\": \"\",\r\n \"title\": \"Prometheus Alert (firing)\",\r\n \"sections\": [ \r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubeDeploymentReplicasMismatch\"\r\n },\r\n {\r\n \"name\": \"deployment\",\r\n \"value\": \"storagesvc\"\r\n },\r\n {\r\n \"name\": \"endpoint\",\r\n \"value\": \"http\"\r\n },\r\n {\r\n \"name\": \"instance\",\r\n \"value\": \"10.233.108.72:8080\"\r\n },\r\n {\r\n \"name\": \"job\",\r\n \"value\": \"kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n \"value\": \"monitor-kube-state-metrics-856bc9455b-7z5qx\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"service\",\r\n \"value\": \"monitor-kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n },\r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubePodNotReady\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n \"value\": \"storagesvc-5bff46b69b-vfdrd\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n }\r\n ]\r\n}\r\n"
time="2019-11-06T07:01:07Z" level=debug msg="Size of message is 1714 Bytes (~1 KB)"
time="2019-11-06T07:01:07Z" level=info msg="Created a card for Microsoft Teams /alertmanager"
time="2019-11-06T07:01:07Z" level=debug msg="Teams message cards: [{\"#type\":\"MessageCard\",\"#context\":\"http://schema.org/extensions\",\"themeColor\":\"8C1A1A\",\"summary\":\"\",\"title\":\"Prometheus Alert (firing)\",\"sections\":[{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},{\"name\":\"alertname\",\"value\":\"KubeDeploymentReplicasMismatch\"},{\"name\":\"deployment\",\"value\":\"storagesvc\"},{\"name\":\"endpoint\",\"value\":\"http\"},{\"name\":\"instance\",\"value\":\"10.233.108.72:8080\"},{\"name\":\"job\",\"value\":\"kube-state-metrics\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"service\",\"value\":\"monitor-kube-state-metrics\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true},{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},{\"name\":\"alertname\",\"value\":\"KubePodNotReady\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"storagesvc-5bff46b69b-vfdrd\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true}]}]"
time="2019-11-06T07:01:07Z" level=info msg="Microsoft Teams response text: 1"
time="2019-11-06T07:01:07Z" level=info msg="A card was successfully sent to Microsoft Teams Channel. Got http status: 200 OK"
time="2019-11-06T07:01:07Z" level=info msg="Microsoft Teams response text: Summary or Text is required."
time="2019-11-06T07:01:07Z" level=error msg="Failed sending to the Teams Channel. Teams http response: 400 Bad Request"
time="2019-11-06T07:01:08Z" level=info msg="/alertmanager received a request"
time="2019-11-06T07:01:08Z" level=debug msg="Prometheus Alert: {\"receiver\":\"prometheus-msteams\",\"status\":\"firing\",\"alerts\":[{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubeDeploymentReplicasMismatch\",\"deployment\":\"storagesvc\",\"endpoint\":\"http\",\"instance\":\"10.233.108.72:8080\",\"job\":\"kube-state-metrics\",\"namespace\":\"fission\",\"pod\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"service\":\"monitor-kube-state-metrics\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=kube_deployment_spec_replicas%7Bjob%3D%22kube-state-metrics%22%7D+%21%3D+kube_deployment_status_replicas_available%7Bjob%3D%22kube-state-metrics%22%7D\\u0026g0.tab=1\"},{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubePodNotReady\",\"namespace\":\"fission\",\"pod\":\"storagesvc-5bff46b69b-vfdrd\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=sum+by%28namespace%2C+pod%29+%28kube_pod_status_phase%7Bjob%3D%22kube-state-metrics%22%2Cphase%3D~%22Failed%7CPending%7CUnknown%22%7D%29+%3E+0\\u0026g0.tab=1\"}],\"groupLabels\":{\"namespace\":\"fission\",\"severity\":\"critical\"},\"commonLabels\":{\"namespace\":\"fission\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"commonAnnotations\":{},\"externalURL\":\"http://monitor-prometheus-operato-alertmanager.monitoring:9093\",\"version\":\"4\",\"groupKey\":\"{}:{namespace=\\\"fission\\\", severity=\\\"critical\\\"}\"}"
time="2019-11-06T07:01:08Z" level=debug msg="Alert rendered in template file: \r\n{\r\n \"#type\": \"MessageCard\",\r\n \"#context\": \"http://schema.org/extensions\",\r\n \"themeColor\": \"8C1A1A\",\r\n \"summary\": \"\",\r\n \"title\": \"Prometheus Alert (firing)\",\r\n \"sections\": [ \r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubeDeploymentReplicasMismatch\"\r\n },\r\n {\r\n \"name\": \"deployment\",\r\n \"value\": \"storagesvc\"\r\n },\r\n {\r\n \"name\": \"endpoint\",\r\n \"value\": \"http\"\r\n },\r\n {\r\n \"name\": \"instance\",\r\n \"value\": \"10.233.108.72:8080\"\r\n },\r\n {\r\n \"name\": \"job\",\r\n \"value\": \"kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n \"value\": \"monitor-kube-state-metrics-856bc9455b-7z5qx\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"service\",\r\n \"value\": \"monitor-kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n },\r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubePodNotReady\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n \"value\": \"storagesvc-5bff46b69b-vfdrd\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n }\r\n ]\r\n}\r\n"
time="2019-11-06T07:01:08Z" level=debug msg="Size of message is 1714 Bytes (~1 KB)"
time="2019-11-06T07:01:08Z" level=info msg="Created a card for Microsoft Teams /alertmanager"
time="2019-11-06T07:01:08Z" level=debug msg="Teams message cards: [{\"#type\":\"MessageCard\",\"#context\":\"http://schema.org/extensions\",\"themeColor\":\"8C1A1A\",\"summary\":\"\",\"title\":\"Prometheus Alert (firing)\",\"sections\":[{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},{\"name\":\"alertname\",\"value\":\"KubeDeploymentReplicasMismatch\"},{\"name\":\"deployment\",\"value\":\"storagesvc\"},{\"name\":\"endpoint\",\"value\":\"http\"},{\"name\":\"instance\",\"value\":\"10.233.108.72:8080\"},{\"name\":\"job\",\"value\":\"kube-state-metrics\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"service\",\"value\":\"monitor-kube-state-metrics\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true},{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},{\"name\":\"alertname\",\"value\":\"KubePodNotReady\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"storagesvc-5bff46b69b-vfdrd\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true}]}]"
time="2019-11-06T07:01:08Z" level=info msg="Microsoft Teams response text: Summary or Text is required."
time="2019-11-06T07:01:08Z" level=error msg="Failed sending to the Teams Channel. Teams http response: 400 Bad Request"
Probably because of this 400 Bad Request error from prometheus-msteams, Alertmanager was returning the unexpected status code 500.
An issue with the file https://github.com/bzon/prometheus-msteams/blob/master/chart/prometheus-msteams/card.tmpl caused these errors.
The problem was that the summary field was empty. A slight change to the file, made as described in this tutorial, solved the errors.
You can use the new modified card template by overriding the default one.
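For context, the Teams Incoming Webhook rejects cards whose summary and text are both empty, which is exactly what the "Summary or Text is required." log line above shows. A minimal Python sketch of a card that passes that check (the webhook URL is a placeholder):
import requests

webhook_url = "https://outlook.office.com/webhook/<your-webhook-id>"  # placeholder

card = {
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "8C1A1A",
    # A non-empty "summary" (or "text") is required; an empty string is what
    # triggers the 400 Bad Request seen in the prometheus-msteams logs.
    "summary": "Prometheus Alert (firing)",
    "title": "Prometheus Alert (firing)",
    "sections": [
        {
            "activityTitle": "KubePodNotReady",
            "facts": [{"name": "severity", "value": "critical"}],
            "markdown": True,
        }
    ],
}

resp = requests.post(webhook_url, json=card)
print(resp.status_code, resp.text)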