How to do multi-graph time series in Grafana with Kusto

I want to do this: https://grafana.com/docs/grafana/v9.0/basics/timeseries-dimensions/, but with Kusto.
The problem is that I don't know where I'm going wrong; the only clue I have is the following warning:
Detected long formatted time series but failed to convert from long
frame: long series must be sorted ascending by time to be converted.
This is my query:
let test = datatable (Timestamp: datetime, Id: string, Value: dynamic)
[
datetime(2022-11-09 11:39:25), "machineA", "True",
datetime(2022-11-09 11:39:30), "machineA", "True",
datetime(2022-11-09 11:39:35), "machineA", "False",
datetime(2022-11-09 11:39:36), "machineA", "False",
datetime(2022-11-09 11:40:03), "machineA", "True",
datetime(2022-11-09 11:40:03), "machineA", "True",
datetime(2022-11-09 11:40:04), "machineA", "True",
datetime(2022-11-09 11:40:05), "machineA", "True",
datetime(2022-11-09 11:40:25), "machineA", "False",
datetime(2022-11-09 11:40:25), "machineA", "False",
datetime(2022-11-09 11:40:26), "machineA", "False",
datetime(2022-11-09 11:40:27), "machineA", "False",
datetime(2022-11-09 11:40:37), "machineA", "True",
datetime(2022-11-09 11:40:47), "machineA", "False",
datetime(2022-11-09 11:40:57), "machineA", "True",
datetime(2022-11-09 11:40:59), "machineA", "True",
datetime(2022-11-09 11:40:25), "machineB", "True",
datetime(2022-11-09 11:40:30), "machineB", "True",
datetime(2022-11-09 11:40:35), "machineB", "False",
datetime(2022-11-09 11:40:36), "machineB", "False",
datetime(2022-11-09 11:41:03), "machineB", "True",
datetime(2022-11-09 11:41:03), "machineB", "True",
datetime(2022-11-09 11:41:04), "machineB", "True",
datetime(2022-11-09 11:41:05), "machineB", "True",
datetime(2022-11-09 11:41:25), "machineB", "False",
datetime(2022-11-09 11:41:25), "machineB", "False",
datetime(2022-11-09 11:41:26), "machineB", "False",
datetime(2022-11-09 11:41:27), "machineB", "False",
datetime(2022-11-09 11:41:37), "machineB", "True",
datetime(2022-11-09 11:41:47), "machineB", "False",
datetime(2022-11-09 11:41:57), "machineB", "True",
datetime(2022-11-09 11:41:59), "machineB", "True",
datetime(2022-11-09 11:42:25), "machineC", "True",
datetime(2022-11-09 11:42:30), "machineC", "True",
datetime(2022-11-09 11:42:35), "machineC", "False",
datetime(2022-11-09 11:42:36), "machineC", "False",
datetime(2022-11-09 11:43:03), "machineC", "True",
datetime(2022-11-09 11:43:03), "machineC", "True",
datetime(2022-11-09 11:43:04), "machineC", "True",
datetime(2022-11-09 11:43:05), "machineC", "True",
datetime(2022-11-09 11:43:25), "machineC", "False",
datetime(2022-11-09 11:43:25), "machineC", "False",
datetime(2022-11-09 11:43:26), "machineC", "False",
datetime(2022-11-09 11:43:27), "machineC", "False",
datetime(2022-11-09 11:43:37), "machineC", "True",
datetime(2022-11-09 11:43:47), "machineC", "False",
datetime(2022-11-09 11:43:57), "machineC", "False",
datetime(2022-11-09 11:43:59), "machineC", "False",
];
let tiemposCicloBruto = test
| where Timestamp > ago(100d)
| partition hint.strategy=native by Id
(
    order by Timestamp asc // sort ascending
    | extend prev_Timestamp = prev(Timestamp) // previous timestamp
    | extend prev_Value = prev(Value) // previous value
    | extend duration =
        iif( // ternary condition
            prev_Value == "True" and Value == "False" // if it was running before and the current value is stopped, it counts as cycle time
            or prev_Value == "True" and Value == "True", // if the previous value was running and the current one is too, the machine keeps running
            Timestamp - prev_Timestamp, // in that case take the time difference
            time(null) // otherwise return null
        )
    | project Id, Timestamp, duration, Value, prev_Value
);
tiemposCicloBruto // the query over a full 1d takes around 1-1.5s
| where isnotnull(duration)
| partition hint.strategy=native by Id ( // partition by Id
    order by Timestamp asc // must always be ascending, otherwise the logic breaks
    | scan declare (y:timespan=time(null), x:timespan=time(null)) with ( // declare the scan
        step s1: true => // declare the step
            x=iif(s1.Value == "True" and Value == "True", iif(isnull(s1.x), duration, s1.x)+s1.duration, time(null)), // for consecutive True-True rows, add the previous duration to the current one and assign it to x
            y=iif(s1.Value=="False" and Value=="False", duration, // for False-False, start from the current duration
                iif(s1.Value=="True" and Value=="False", s1.x+duration, time(null))); // if the machine was running and then stops, sum the consecutive durations while it was running
    )
    | extend next_Id=next(Id)
    | extend tiempoCiclo=iif(isempty(next_Id), duration, iif(isnull(y) and prev_Value == "True" and Value=="False", duration+prev(duration), iif(isnotnull(y), y, time(null)))) / 1s // for machine changes, or rows where the cases above produced no value, assign the duration or duration + previous duration
    | where isnotnull(tiempoCiclo) // filter out nulls
    | project-away prev_Value, x, y, duration, Value, next_Id
)
But I cannot see those machine groupings on my KQL Grafana time series panel.
At the very least, it should show as many series as there are machines in my environment.

The answer is contained in your question:
Detected long formatted time series but failed to convert from long frame: long series must be sorted ascending by time to be converted.
You need to add sorting at the end of your query. It is as easy as this:
...
| order by Timestamp asc
Also make sure to select "Time series" in the "Format as" drop-down.
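For example, the last lines of the query from the question would become the following (only the final order by is new; everything else is exactly as posted):
    | where isnotnull(tiempoCiclo)
    | project-away prev_Value, x, y, duration, Value, next_Id
)
| order by Timestamp asc // long frames must be sorted ascending by time for Grafana to convert them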

Generally, I have found that Grafana expects multiple series to be distinct columns rather than a single discriminator column. One option would be to use the pivot plugin.
Adding
| evaluate pivot(Id, max(tiempoCiclo))
to the end of your query yields a dataset with one column per Id, which results in a Grafana graph with one series per machine.
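Putting both answers together, the query would then end with something like this (a sketch; tiempoCiclo and Id come from the question, and pivot creates one column per distinct Id value):
    | project-away prev_Value, x, y, duration, Value, next_Id
)
| evaluate pivot(Id, max(tiempoCiclo)) // Timestamp plus one column each for machineA, machineB, machineC
| order by Timestamp asc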

Related

MongoDbConnector: publish multiple collections to only one Kafka topic

Below is my MongoDbConnector configuration:
{
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
    "collection.include.list": "dbname.messages,dbname.comments",
    "mongodb.password": "mongodbpassword",
    "tasks.max": "1",
    "database.history.kafka.topic": "dev.dbhistory.unwrap_with_key_id_8",
    "mongodb.user": "mongodbuser",
    "heartbeat.interval.ms": "90000",
    "mongodb.name": "analytics",
    "snapshot.delay.ms": "120000",
    "key.converter.schemas.enable": "false",
    "poll.interval.ms": "3000",
    "value.converter.schemas.enable": "false",
    "mongodb.authsource": "admin",
    "errors.tolerance": "all",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "mongodb.hosts": "rs0/ip:27017",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "database.include.list": "dbname",
    "snapshot.mode": "initial"
}
I need this to publish to a single topic, but it creates two topics: analytics.dbname.messages and analytics.dbname.comments. How can I do this? Thanks!
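(As an aside, not from the original post.) One common way to collapse the per-collection topics into a single topic is a RegexRouter single message transform on the connector; a minimal sketch of the extra properties, where the target topic name analytics.dbname.all is an assumption:
"transforms": "RerouteToOne",
"transforms.RerouteToOne.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.RerouteToOne.regex": "analytics\\.dbname\\..*",
"transforms.RerouteToOne.replacement": "analytics.dbname.all"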

Keycloak SAML Client is not showing the login screen

I have created a realm in Keycloak named SAML-Demo-Py.
In the realm I have created a SAML client with this configuration:
{
"clientId": "http://localhost:8081/python-app",
"name": "test-client-saml",
"description": "",
"rootUrl": "",
"adminUrl": "http://localhost:8081/cgi-bin/saml-consumer.py",
"baseUrl": "",
"surrogateAuthRequired": false,
"enabled": true,
"alwaysDisplayInConsole": false,
"clientAuthenticatorType": "client-secret",
"redirectUris": [
"http://localhost:8081/cgi-bin/saml-consumer.py"
],
"webOrigins": [],
"notBefore": 0,
"bearerOnly": false,
"consentRequired": false,
"standardFlowEnabled": true,
"implicitFlowEnabled": false,
"directAccessGrantsEnabled": true,
"serviceAccountsEnabled": false,
"publicClient": true,
"frontchannelLogout": true,
"protocol": "saml",
"attributes": {
"saml_assertion_consumer_url_redirect": "http://localhost:8081/cgi-bin/saml-consumer.py",
"saml.force.post.binding": "true",
"saml.server.signature.keyinfo.ext": "false",
"saml.signing.certificate": "MIICzzCCAbcCBgGDSplRlDANBgkqhkiG9w0BAQsFADArMSkwJwYDVQQDDCBodHRwOi8vbG9jYWxob3N0OjgwODEvcHl0aG9uLWFwcDAeFw0yMjA5MTcwODM2NDVaFw0zMjA5MTcwODM4MjVaMCsxKTAnBgNVBAMMIGh0dHA6Ly9sb2NhbGhvc3Q6ODA4MS9weXRob24tYXBwMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAve0sgrdPWvcKiRJ5UPcpa301aiIO7F7KDKnMDtxgY2EE47z8XS5TUvvySa75L0phxZQv+iqYSV1HlUkgJQiax3fg7JgxJ+BY3ss9Dz/m9x7stErFoMXeH+Bhdk9H7hUojlRqxn8qU7ZwInENvw8hlzTgouc5hqqBaQybXqtlmJi6HWAH/3Ck6jCd+P6pjIaGrQpYxpmpmPKbhRnovZWgL6KqdnoEl1thEcPAEbXE7HiB6h4z9HdF2EWx/8U7HE/qCq2m2lPsKn2OnJk2ejDqa5SyTBXRqeqS8sG70VXrotEtJVVuPJCFvGy8r7mq0bVX/83y+PyEpppsJMMtockdgwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQBEDptd2Goi1XFKur3dpRHyOW9zxs96gGGllrcnQZCtZjVEb31Jibr0HFDftHR22hQV1mbQLokNk8k0on7tJvsOIdXv3459gkZyouYSmYpDTvfCi3YUgdxVrDiHzwaN6uIjqXeDeI08LmH2xDZwac46vqJyqx3QI/r84qmWR2Nil/7DnJegVWyr961GWm0HH6BpweZZ5QbEw0UHihqWDthgX4BAmVnRPm9fHebIa3/7AUqodh8hoRDJcaDiyLB+V8nUPD6vLwgFgiLSqLSPaJ0OSzBvz3cpNLL1PVkFVaoQZ7W3FX6Ruvz0Bj/piQ+h2fnX4NMkj33f6CyhJTGSfbnk",
"saml$server$signature$keyinfo$ext": "false",
"saml.signature.algorithm": "RSA_SHA256",
"saml.client.signature": "false",
"saml.force.name.id.format": "false",
"saml.signing.private.key": "MIIEowIBAAKCAQEAve0sgrdPWvcKiRJ5UPcpa301aiIO7F7KDKnMDtxgY2EE47z8XS5TUvvySa75L0phxZQv+iqYSV1HlUkgJQiax3fg7JgxJ+BY3ss9Dz/m9x7stErFoMXeH+Bhdk9H7hUojlRqxn8qU7ZwInENvw8hlzTgouc5hqqBaQybXqtlmJi6HWAH/3Ck6jCd+P6pjIaGrQpYxpmpmPKbhRnovZWgL6KqdnoEl1thEcPAEbXE7HiB6h4z9HdF2EWx/8U7HE/qCq2m2lPsKn2OnJk2ejDqa5SyTBXRqeqS8sG70VXrotEtJVVuPJCFvGy8r7mq0bVX/83y+PyEpppsJMMtockdgwIDAQABAoIBABjRKm1ELarZt/c0QkTpnvBsMnQVUjThp+4iq8bPVgr2TPDDK4izenDP+hdVtTrQMdli5Sf/s9l2RlnD7d7Y8nyY9fuEYXvv7Tzjeq2I8JGe6VgfoxZAdKdepu2SK3h5LEz4y+D3Ed1Ra/KcKisqe32qC6ZNp28ozXMgEhc7NzHKnOxKA0IXFzuT6cDJgJ8TQfa2K9dGgjEE9GTCCxnZEvdf17sEc4zVMJDDcE4Rorv0mZeN1QRXVSpnW1q0vFh1CvH1H9oBCUey8cElQybARTDbMJJWgHCdd29EbilmlyB4TFI34tcaKZUlvL3sT8fvRyV6yLL0M0mqVWYxhu24naUCgYEA/WKH8G6SsM+LhC2oVJWUSDicpDNNEi2Av2FZOSg7Vbqj53/t69kSPWv0Rp0o7Bz55xwxH+n6PjUEvHyiS3UBIW6kS81av1H8Gle/1XpopHEY4aX47KCdF1Wc0yq8l5ii+oFZt3GcS1egDFPiQgyo2lhlrrQ4mckcSZF3pdd8UTUCgYEAv+L6qdXxPZvyzRgsRfhmVl8FKdKPYFqBWQgh2Lx21J16T8jRhjFfJP6NNi2JupWrQB2JdA5sivx81GMxExo5j8P1VQm/g/c3I2bxb1kYDE77x6KJzmr0i7hK0SEGCcP51EdVpjtPzcdG3st3GT5UClvZ+rTfO87e6n4pdlMUgtcCgYBUre8cTPe9Gz9XByMwUWTi1fiTb4mcP5S9YL0+utFJjzxji39pyHuuBzv1tWQNtIlX0TYhokI9M97HVyet7AZas+04YAKp2a5U52p235fFDP7xulP8UJjvSW9Fqwyn5RzidwQSqGdBTqFwPUBqLmznu48P2a7oxisr8u93fxJO2QKBgG0fkdl/139n7n6AXr0z9E7uHquYGP18us5893KgSxvCqsowtCcScL9DG99Rql+3ufnuUjrz8PpheEP4XPI2GcIOeLhxoL5Vmr/BTVA7ZJerWzg+0QvYe1Xx6mpf02U+VBdKsgSk+k9WIpGVOBfdAEIb1izjK4iBrve/46hsut9lAoGBAKeKxXlH45BTuRg/Rcp60TTV4i+Aed0ApTmvYU22XasXRfrFsauLaxUkCP0x0hvdVx+cR37Jo9Nfw9DmplzHLVMI6STzracLuhjf0HKVS6XSJa5l4VpsmkJaWxyqdrm91OXnOOizTf6pOoRFoFI21u7ZKqMoEcaqrCOIwqv21qpa",
"saml.allow.ecp.flow": "false",
"saml.server.signature.keyinfo.xmlSigKeyInfoKeyNameTransformer": "NONE",
"saml.assertion.signature": "true",
"saml.encrypt": "false",
"login_theme": "keycloak",
"saml_assertion_consumer_url_post": "http://localhost:8081/cgi-bin/saml-consumer.py",
"saml$server$signature": "true",
"saml.server.signature": "true",
"saml.artifact.binding.identifier": "90y02x5M6keohgaOI7l3D0ZpdfU=",
"saml$server$signature$keyinfo$xmlSigKeyInfoKeyNameTransformer": "NONE",
"saml.artifact.binding": "false",
"saml_force_name_id_format": "false",
"saml.authnstatement": "true",
"display.on.consent.screen": "false",
"saml_name_id_format": "username",
"saml.onetimeuse.condition": "false",
"saml_signature_canonicalization_method": "http://www.w3.org/2001/10/xml-exc-c14n#"
},
"authenticationFlowBindingOverrides": {},
"fullScopeAllowed": true,
"nodeReRegistrationTimeout": -1,
"defaultClientScopes": [
"role_list"
],
"optionalClientScopes": [],
"access": {
"view": true,
"configure": true,
"manage": true
}
}
I have a running Python dummy app which will act as the ACS.
This is the SAML request which will be sent from the browser in the form of
<IDP-sso-url>?SAMLRequest=<encoded-saml-request>
which results in
http://localhost:8080/realms/SAML-Demp-Py?SAMLRequest=lVLLTsMwEPyVyHfXSZpCayWVSiNEJEBRKRy4INdZqIVjB6%2FD4%2B9xUpDKBcHNK8%2FMzow2R9Hqjq96vzcbeOkBffTeaoN8%2FChI7wy3AhVyI1pA7iW%2FWV1d8nQS885Zb6XV5ED5HSwQwXllDTla8GfK6vu5tgb7FtwNuFcl4XZzWZC99x1nTFsp9N6i5%2FN4HjMHQrfIBjVaQtvR%2BoPtnH0Gx4bVVDUd9SEvA9N0VhlPojKMyohhzz9USXRunYSxw4I8Co1AoqosSFU%2BxMluOltAQuOTpKEZwIyKTIYxyzKxkDJtxGkAI%2FZQGfTC%2BIKkcZrSeEGT022S8GnM05PJdDG%2FJ1H9VfiZMo0yT7%2B3tzuAkF9stzXdQKMcyBDyDhyOAQOILPOhCz4acMs%2FR87ZMS0%2FHNF1sFCVtdVKfkQrre3bOlA9FMS7HsaWWhHy1YMB9BAaZ8uD0s8LXH4C&RelayState=9UITFKRCV-uUX7vdyNF6xIEybOVX2uDNZVohLi5OenU.q8U2Lp-_Lg8.http%3A%2F%2Flocalhost%3A8081%2Fpython-app
This is the request before encoding
<?xml version="1.0" encoding="UTF-8"?>
<saml2p:AuthnRequest
xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"
AssertionConsumerServiceURL="http://localhost:8081/cgi-bin/saml-consumer.py"
Destination="http://localhost:8080/realms/SAML-Demp-Py/protocol/saml"
ID="1234"
IssueInstant="2021-09-17T18:41:20.295Z"
ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
Version="2.0">
<saml2:Issuer
xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">http://localhost:8081/python-app
</saml2:Issuer>
</saml2p:AuthnRequest>
Now, once I paste this in the URL, I don't see the Keycloak login screen. Instead, the URL in the browser stays the same as above and I see this JSON in the body of the page:
{"realm":"SAML-Demp-Py","public_key":"MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvvoRDqFxcnJAmISv4qequAJ4w1dubE47Z2RStqVQ7oUZaUPO2magxvfa8CyaEwHuySkn42NjrafVMhZp/0gBVtsVW4uwNJCMXjvv8Tfbd4qUtVo50mEgGIpJRsm5RPXUJX2q1Yz/jX+U2vTfutEIDpRZGF6PtHt/tavuGt+qohpRycTWFvcEU7LuVu7Sj7AaOBNpgGSITUEYc09+JwrWcbenhrBZq9mJ5hgCw/0TU+HcwCIi1XgUVrzZ6OG0DwE702ezuDP2Nbs78U0w6tHUj+B1+p/TAntIg39CiwHHL3kIcwwJpHiJmeLNTlwlQm+Ny3w9LmP09dPAjMm2lw1qywIDAQAB","token-service":"http://localhost:8080/realms/SAML-Demp-Py/protocol/openid-connect","account-service":"http://localhost:8080/realms/SAML-Demp-Py/account","tokens-not-before":0}
You are using the wrong endpoint. It should be:
http://localhost:8080/realms/SAML-Demp-Py/protocol/saml?SAMLRequest=<encoded-saml-request>
instead of:
http://localhost:8080/realms/SAML-Demp-Py?SAMLRequest=<encoded-saml-request>

What kind of data got routed to a dead letter queue topic?

I have implemented dead letter queue error handling in Kafka Connect. It works and the data is sent to the DLQ topics, but I do not understand what kind of data gets routed to the DLQ topics.
The first picture is the data that got routed into the DLQ topics and the second one is the normal data that got sunk into the database.
Does anyone have any idea how that key got changed, given that I used id as the key?
Here are my source and sink properties:
"name": "jdbc_source_postgresql_analytics",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url": "jdbc:postgresql://192.168.5.40:5432/abc",
"connection.user": "abc",
"connection.password": "********",
"topic.prefix": "test_",
"mode": "timestamp+incrementing",
"incrementing.column.name": "id",
"timestamp.column.name": "updatedAt",
"validate.non.null": true,
"table.whitelist": "test",
"key.converter": "org.apache.kafka.connect.converters.IntegerConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": false,
"value.converter.schemas.enable": false,
"catalog.pattern": "public",
"transforms": "createKey,extractInt",
"transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
"transforms.createKey.fields": "id",
"transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
"transforms.extractInt.field": "id",
"errors.tolerance": "all"
}
}
sink properties:
{
    "name": "es_sink_analytics",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "type.name": "_doc",
        "key.converter.schemas.enable": "false",
        "topics": "TEST",
        "topic.index.map": "TEST:te_test",
        "value.converter.schemas.enable": "false",
        "connection.url": "http://192.168.10.40:9200",
        "connection.username": "******",
        "connection.password": "********",
        "key.ignore": "false",
        "errors.tolerance": "all",
        "errors.deadletterqueue.topic.name": "dlq-error-es",
        "errors.deadletterqueue.topic.replication.factor": "1",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter": "org.apache.kafka.connect.converters.IntegerConverter",
        "schema.ignore": "true",
        "error.tolerance": "all"
    }
}
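(An aside, not from the original post.) One way to see why a record ended up in the DLQ is to enable the error context headers on the sink connector; failed records then carry headers describing the exception and the failing stage. A minimal sketch of the extra sink properties:
"errors.deadletterqueue.context.headers.enable": "true",
"errors.log.enable": "true",
"errors.log.include.messages": "true"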

Multiple replication slots for Debezium connector

I want to create multiple Debezium connectors with different replication slots, but I am unable to create multiple replication slots for the Postgres Debezium connector.
I am using Docker containers for Postgres and Kafka. I tried setting max_replication_slots = 2 in postgresql.conf and also a different slot.name per connector, but it still did not create 2 replication slots for me.
{
    "config": {
        "batch.size": "49152",
        "buffer.memory": "100663296",
        "compression.type": "lz4",
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.dbname": "Db1",
        "database.hostname": "DBhost",
        "database.password": "dbpwd",
        "database.port": "5432",
        "database.server.name": "serve_name",
        "database.user": "usename",
        "decimal.handling.mode": "double",
        "hstore.handling.mode": "json",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "name": "debezium-702771",
        "plugin.name": "wal2json",
        "schema.refresh.mode": "columns_diff_exclude_unchanged_toast",
        "slot.drop_on_stop": "true",
        "slot.name": "debezium1",
        "table.whitelist": "tabel1",
        "time.precision.mode": "adaptive_time_microseconds",
        "transforms": "Reroute",
        "transforms.Reroute.topic.regex": "(.*).public.(.*)",
        "transforms.Reroute.topic.replacement": "$1.$2",
        "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081"
    },
    "name": "debezium-702771",
    "tasks": [],
    "type": "source"
}
{
    "config": {
        "batch.size": "49152",
        "buffer.memory": "100663296",
        "compression.type": "lz4",
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.dbname": "Db1",
        "database.hostname": "DBhost",
        "database.password": "dbpwd",
        "database.port": "5432",
        "database.server.name": "serve_name",
        "database.user": "usename",
        "decimal.handling.mode": "double",
        "hstore.handling.mode": "json",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "name": "debezium-702772",
        "plugin.name": "wal2json",
        "schema.refresh.mode": "columns_diff_exclude_unchanged_toast",
        "slot.drop_on_stop": "true",
        "slot.name": "debezium2",
        "table.whitelist": "tabel1",
        "time.precision.mode": "adaptive_time_microseconds",
        "transforms": "Reroute",
        "transforms.Reroute.topic.regex": "(.*).public.(.*)",
        "transforms.Reroute.topic.replacement": "$1.$2",
        "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081"
    },
    "name": "debezium-702772",
    "tasks": [],
    "type": "source"
}
It creates multiple connectors but not multiple replication slots, even after giving each connector a different slot name. Do I need to do anything else here?
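(Not from the original post.) A quick way to confirm which slots Postgres actually created is to query the catalog inside the database container, for example:
SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;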

MongoDB query for the most similar array

I have a bunch of documents in a MongoDB collection, each with a fingerprint made up of 8 booleans in a sequence. I would like to construct a query that gives me the documents where 7 or 8 of the booleans match my query in the same positions.
So my query would be something along the lines of: find all documents that have
fingerPrint = ["true", "false", "true", "true", "true", "true", "false", "true" ]
The query would return both of the documents below, since the first document has all the booleans in the sequence correct and the second document has 7 of the 8 booleans correct in the sequence.
{
"_id" : ObjectId("5538e75c3cea103b25ff94a3"),
"name" : "document1",
"fingerPrint" : [
"true",
"false",
"true",
"true",
"true",
"true",
"false",
"true"
]
},
{
"_id" : ObjectId("5538e75c3cea103b25ff94a4"),
"name" : "document2",
"fingerPrint" : [
"true",
"false",
"true",
"true",
"false",
"true",
"false",
"true"
]
}
How would I go about doing this?
Alternatively: is there a better way of storing the bit array that would let me query the collection more optimally?
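A minimal sketch of one possible aggregation (field name and sample values are taken from the question; the collection name "documents" and the 7-match threshold are assumptions):
var query = ["true", "false", "true", "true", "true", "true", "false", "true"];
db.documents.aggregate([
  { $addFields: {
      // pair up stored and queried values position by position, then count equal pairs
      matches: {
        $size: {
          $filter: {
            input: { $zip: { inputs: ["$fingerPrint", query] } },
            as: "pair",
            cond: { $eq: [ { $arrayElemAt: ["$$pair", 0] }, { $arrayElemAt: ["$$pair", 1] } ] }
          }
        }
      }
  } },
  { $match: { matches: { $gte: 7 } } } // keep documents with 7 or 8 matching positions
]);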