ActiveMQ Artemis: Zombie replica (slave) instance on Kubernetes

We have deployed ActiveMQ Artemis v2.19.0 in an HA + cluster configuration, hosted on Kubernetes (non-cloud), and use JGroups KUBE_PING for broker discovery. During regular operation we have 2 primary and 2 replica brokers, and everything looks fine.
For testing, we now remove the replica instances (no Pods left) and end up with a weird cluster state: 2 primaries, plus 1 zombie replica still shown as connected to primary 1. The replica instances were shut down gracefully by scaling the corresponding StatefulSet to zero, i.e. no hard kill.
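For reference, the scale-down is an ordinary StatefulSet scale (the StatefulSet name here is illustrative):
kubectl scale statefulset artemis-replica --replicas=0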
Restarting the replicas brings the cluster back to a normal state, but only sometimes.
According to the docs, the missing broker instances should be removed:
If it has not received a broadcast from a particular server for a length of time it will remove that server's entry from its list.
So the questions are: Why do we see the zombie broker (even after hours)? And how can we get back to a clean state without shutting down all instances?
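For context, the broker-side broadcast/discovery configuration follows the usual Artemis pattern (a simplified sketch, not our literal broker.xml; the channel, group and connector names match those in the logs below, and refresh-timeout is the "length of time" the quoted passage refers to):
<broadcast-groups>
  <broadcast-group name="cluster-broadcast-group">
    <jgroups-file>jgroups.xml</jgroups-file>
    <jgroups-channel>active_broadcast_channel</jgroups-channel>
    <broadcast-period>2000</broadcast-period>
    <connector-ref>artemis-tls-connector</connector-ref>
  </broadcast-group>
</broadcast-groups>
<discovery-groups>
  <discovery-group name="cluster-discovery-group">
    <jgroups-file>jgroups.xml</jgroups-file>
    <jgroups-channel>active_broadcast_channel</jgroups-channel>
    <!-- entries not refreshed within this timeout should be dropped from the list -->
    <refresh-timeout>10000</refresh-timeout>
  </discovery-group>
</discovery-groups>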
Here is our jgroups.xml:
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
    <TCP
        enable_diagnostics="true"
        bind_addr="match-interface:eth0,lo"
        bind_port="7800"
        recv_buf_size="20000000"
        send_buf_size="640000"
        max_bundle_size="64000"
        max_bundle_timeout="30"
        sock_conn_timeout="300"
        thread_pool.enabled="true"
        thread_pool.min_threads="2"
        thread_pool.max_threads="8"
        thread_pool.keep_alive_time="5000"
        thread_pool.queue_enabled="true"
        thread_pool.queue_max_size="10000"
        thread_pool.rejection_policy="run"
        oob_thread_pool.enabled="true"
        oob_thread_pool.min_threads="1"
        oob_thread_pool.max_threads="8"
        oob_thread_pool.keep_alive_time="5000"
        oob_thread_pool.queue_enabled="true"
        oob_thread_pool.queue_max_size="100"
        oob_thread_pool.rejection_policy="run"
    />
    <TRACE/>
    <org.jgroups.protocols.kubernetes.KUBE_PING
        namespace="${kubernetesNamespace:default}"
        labels="artemis-cluster=${clusterName:activemq-artemis}"
    />
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <BARRIER/>
    <pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="true"/>
    <UNICAST3
        xmit_table_num_rows="100"
        xmit_table_msgs_per_row="1000"
        xmit_table_max_compaction_time="30000"
    />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
    <!-- <FC max_credits="2000000" min_threshold="0.10"/> -->
    <MFC max_credits="2M" min_threshold="0.4"/>
    <FRAG2 frag_size="60000"/>
    <pbcast.STATE_TRANSFER/>
    <!-- <pbcast.FLUSH timeout="0"/> -->
</config>
Update
Configured logging as advised by Domenico. This time, when we shut down the replica brokers, both of them remain in the topology as zombie instances.
Here are the logs (shutdown of the replica instances started at 2021-12-09T13:03:43Z):
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: <null>, src: <null> (1 headers), size=0 bytes, flags=OOB|INTERNAL, transient_flags=DONT_LOOPBACK] (headers=NAKACK2: [HIGHEST_SEQNO, seqno=1727])
--------------------------------------------------------
---------------- TRACE (received) ----------------------
MSG, arg=[dst: <null>, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=0 bytes, flags=DONT_BUNDLE|INTERNAL] (headers=MERGE3: INFO: view_id=[ha-asa-activemq-artemis-primary-1-53544|5], logical_name=ha-asa-activemq-artemis-primary-1-53544, physical_addr=172.30.20.216:7800, TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------
------------------- TRACE (sent) -----------------------
SET_PHYSICAL_ADDRESS, arg=ha-asa-activemq-artemis-primary-1-53544 : 172.30.20.216:7800
--------------------------------------------------------
---------------- TRACE (received) ----------------------
MSG, arg=[dst: <null>, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=606 bytes] (headers=NAKACK2: [MSG, seqno=1733], TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------
{"timestamp":"2021-12-09T13:07:21.256Z","sequence":11249,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"receiving 606","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.256Z","sequence":11248,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"receiving 606","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.256Z","sequence":11252,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"Received nodeID caec362d-58dc-11ec-9bf0-d2725171aa2d with originatingID = caad7f99-58dc-11ec-867d-ce446123ae5c","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.256Z","sequence":11254,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"Received nodeID d444f140-58dc-11ec-9bf0-d2725171aa2d with originatingID = caad7f99-58dc-11ec-867d-ce446123ae5c","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.256Z","sequence":11256,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"Received 1 discovery entry elements","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.256Z","sequence":11258,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"Received 1 discovery entry elements","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11261,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"DiscoveryEntry[nodeID=caad7f99-58dc-11ec-867d-ce446123ae5c, connector=TransportConfiguration(name=artemis-tls-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?trustStorePassword=****&tcpReceiveBufferSize=1048576&port=61617&sslEnabled=true&host=ha-asa-activemq-artemis-primary-1-ha-asa-activemq-artemis-default-svc-bbscluster-hemisphere-local&trustStorePath=/var/lib/artemis/certs/truststore-jks&useEpoll=true&tcpSendBufferSize=1048576, lastUpdate=1639055241256]","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11260,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"DiscoveryEntry[nodeID=caad7f99-58dc-11ec-867d-ce446123ae5c, connector=TransportConfiguration(name=artemis-tls-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?trustStorePassword=****&tcpReceiveBufferSize=1048576&port=61617&sslEnabled=true&host=ha-asa-activemq-artemis-primary-1-ha-asa-activemq-artemis-default-svc-bbscluster-hemisphere-local&trustStorePath=/var/lib/artemis/certs/truststore-jks&useEpoll=true&tcpSendBufferSize=1048576, lastUpdate=1639055241256]","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11264,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"changed = false","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11266,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"changed = false","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11268,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"Calling notifyAll","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11270,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.core.cluster.DiscoveryGroup","level":"DEBUG","message":"Calling notifyAll","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11272,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.api.core.JGroupsBroadcastEndpoint","level":"TRACE","message":"Receiving Broadcast: clientOpened=true, channelOPen=true","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-892093608)","threadId":78,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
{"timestamp":"2021-12-09T13:07:21.257Z","sequence":11274,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.api.core.JGroupsBroadcastEndpoint","level":"TRACE","message":"Receiving Broadcast: clientOpened=true, channelOPen=true","threadName":"activemq-discovery-group-thread-cluster-discovery-group0 (DiscoveryGroup-176376157)","threadId":91,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
---------------- TRACE (received) ----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-0-4048, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat, TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-1-53544, src: <null> (1 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat ack)
--------------------------------------------------------
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-1-53544, src: <null> (1 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat)
--------------------------------------------------------
---------------- TRACE (received) ----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-0-4048, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat ack, TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------
{"timestamp":"2021-12-09T13:07:22.908Z","sequence":11276,"loggerClassName":"java.util.logging.Logger","loggerName":"org.apache.activemq.artemis.api.core.JGroupsBroadcastEndpoint","level":"TRACE","message":"Broadcasting: BroadCastOpened=true, channelOPen=true","threadName":"Thread-1 (ActiveMQ-scheduled-threads)","threadId":85,"mdc":{},"ndc":"","hostName":"ha-asa-activemq-artemis-primary-0","processName":"Artemis","processId":303}
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: <null>, src: ha-asa-activemq-artemis-primary-0-4048 (1 headers), size=606 bytes, transient_flags=DONT_LOOPBACK] (headers=NAKACK2: [MSG, seqno=1728])
--------------------------------------------------------
---------------- TRACE (received) ----------------------
MSG, arg=[dst: <null>, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=0 bytes, flags=OOB|INTERNAL] (headers=NAKACK2: [HIGHEST_SEQNO, seqno=1733], TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: <null>, src: <null> (1 headers), size=0 bytes, flags=OOB|INTERNAL, transient_flags=DONT_LOOPBACK] (headers=NAKACK2: [HIGHEST_SEQNO, seqno=1728])
--------------------------------------------------------
---------------- TRACE (received) ----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-0-4048, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat, TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-1-53544, src: <null> (1 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat ack)
--------------------------------------------------------
------------------- TRACE (sent) -----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-1-53544, src: <null> (1 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat)
--------------------------------------------------------
---------------- TRACE (received) ----------------------
MSG, arg=[dst: ha-asa-activemq-artemis-primary-0-4048, src: ha-asa-activemq-artemis-primary-1-53544 (2 headers), size=0 bytes, flags=INTERNAL] (headers=FD: heartbeat ack, TP: [cluster_name=active_broadcast_channel])
--------------------------------------------------------

Related

How to optimize EmbeddedKafka and Mongo logs in Spring Boot

How do I properly keep only the relevant logs when using MongoDB and Kafka in a Spring Boot application? Example of undesired Kafka logs:
2022-08-02 11:14:58.148 INFO 363923 --- [ main] kafka.server.KafkaConfig : KafkaConfig values:
advertised.listeners = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
...
2022-08-02 11:15:11.005 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Changed partition test_cfr_prv_customeragreement_event_disbursement_ini-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0)
2022-08-02 11:15:11.005 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Changed partition test_cfr_prv_customeragreement_event_receipt_ini-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0)
2022-08-02 11:15:11.017 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Sending LeaderAndIsr request to broker 0 with 2 become-leader and 0 become-follower partitions
2022-08-02 11:15:11.024 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet(0) for 2 partitions
2022-08-02 11:15:11.026 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions
2022-08-02 11:15:11.028 INFO 363923 --- [quest-handler-0] state.change.logger : [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 for 2 partitions
Example of undesired MongoDB logs:
2022-08-02 11:15:04.578 INFO 363923 --- [ Thread-3] o.s.b.a.mongo.embedded.EmbeddedMongo : {"t":{"$date":"2022-08-02T11:15:04.578+02:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
2022-08-02 11:15:04.579 INFO 363923 --- [ Thread-3] o.s.b.a.mongo.embedded.EmbeddedMongo : {"t":{"$date":"2022-08-02T11:15:04.578+02:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"127.0.0.1","port":34085},"replication":{"oplogSizeMB":10,"replSet":"rs0"},"security":{"authorization":"disabled"},"storage":{"dbPath":"/tmp/embedmongo-db-66eab1ce-d099-40ec-96fb-f759ef3808a4","syncPeriodSecs":0}}}}
2022-08-02 11:15:04.585 INFO 363923 --- [ Thread-3] o.s.b.a.mongo.embedded.EmbeddedMongo : {"t":{"$date":"2022-08-02T11:15:04.585+02:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
Please find here a link to a sample project: github.com/smaillns/springboot-mongo-kafka
If we run a test we get a bunch of logs! What's wrong with the current configuration?
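Presumably the noisy loggers can be silenced via Spring Boot's logging.level properties; a sketch using the logger names visible in the output above, e.g. in application.yml:
logging:
  level:
    kafka: WARN
    org.apache.kafka: WARN
    state.change.logger: WARN
    org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongo: WARN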

How to collate pytest logging output to console?

I'd like to collate logging output to the console such that the repeated "---- ClassName.TestName ----" and "-- Captured log call --" lines are removed or limited to a single entry. The simplified example below, with output, demonstrates the problem.
desired output:
2020-08-14 13:51:50 INFO test[test_01]
2020-08-14 13:51:50 INFO test[test_02]
2020-08-14 13:51:50 INFO test[test_03]
========= short test summary info =====================================
PASSED tests/test_logging.py::Test_Logging::test_01
PASSED tests/test_logging.py::Test_Logging::test_02
PASSED tests/test_logging.py::Test_Logging::test_03
source code:
import logging
import pytest

@pytest.mark.testing
class Test_Logging:
    _logger = None

    def setup_method(self):
        self._logger = logging.getLogger('Test Logger')

    def test_01(self, request):
        self._logger.info(f"test[{request.node.name}]")

    def test_02(self, request):
        self._logger.info(f"test[{request.node.name}]")

    def test_03(self, request):
        self._logger.info(f"test[{request.node.name}]")
current output:
_____________ Test_Logging.test_01 ____________________________________
-------- Captured log call --------------------------------------------
2020-08-14 13:51:50 INFO test[test_01]
_____________ Test_Logging.test_02 ____________________________________
-------- Captured log call --------------------------------------------
2020-08-14 13:51:50 INFO test[test_02]
_____________ Test_Logging.test_03 ____________________________________
-------- Captured log call --------------------------------------------
2020-08-14 13:51:50 INFO test[test_03]
========= short test summary info =====================================
PASSED tests/test_logging.py::Test_Logging::test_01
PASSED tests/test_logging.py::Test_Logging::test_02
PASSED tests/test_logging.py::Test_Logging::test_03
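A possible direction (a sketch; these are standard pytest live-logging options, and the exact effect depends on how the captured sections are being produced) is to enable live log output and suppress the per-test captured-log blocks via pytest.ini:
[pytest]
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s %(levelname)s %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
addopts = -rA --show-capture=no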

Tuning ReactiveElasticsearchClient due to ReadTimeoutException

We've been experimenting with the ReactiveElasticsearchRepository; however, we're running into issues where, after the service has been idle for several hours, the first attempts to retrieve data from Elasticsearch time out.
What we're seeing when making those first few requests is:
2019-11-06 17:31:35.858 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
When I enable DEBUG for reactor.netty, I can see that it goes through the motions of trying each connection in the pool:
2019-11-06 17:31:30.841 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 2 inactive connections
2019-11-06 17:31:35.858 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:31:35.881 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:31:35.891 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/1192.168.1.100:60745 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 2 inactive connections
2019-11-06 17:32:21.249 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 1 inactive connections
2019-11-06 17:32:26.251 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x38e99d68, L:/192.168.1.100:60744 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:32:26.255 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:32:26.256 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 1 inactive connections
2019-11-06 17:32:32.592 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/1192.168.1.100:60746 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 0 inactive connections
2019-11-06 17:32:37.597 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0xdee3a211, L:/192.168.1.100:60746 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:32:37.600 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:32:37.600 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 0 inactive connections
Eventually, once all the active/inactive connections have been cleaned, it creates new connections, which then work.
Is there a way to tune things behind the scenes to limit how long a connection can remain in the pool before being re-created? Or is there an alternative way to handle these timeouts?
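One option (a sketch based on Spring Data Elasticsearch's ClientConfiguration builder; the host is illustrative and method availability depends on the Spring Data Elasticsearch version in use) is to set explicit connect and socket timeouts when building the reactive client, so stale pooled connections fail fast instead of hanging:
// Sketch: reactive Elasticsearch client with explicit timeouts
import java.time.Duration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.reactive.ReactiveElasticsearchClient;
import org.springframework.data.elasticsearch.client.reactive.ReactiveRestClients;

@Configuration
public class ReactiveElasticsearchClientConfig {

    @Bean
    public ReactiveElasticsearchClient reactiveElasticsearchClient() {
        ClientConfiguration clientConfiguration = ClientConfiguration.builder()
                .connectedTo("elastic.internal.com:9200")
                .withConnectTimeout(Duration.ofSeconds(5))   // fail fast when connecting
                .withSocketTimeout(Duration.ofSeconds(10))   // bound how long a read may hang
                .build();
        return ReactiveRestClients.create(clientConfiguration);
    }
}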

Spring Cloud Stream Kafka Stream application shows Resetting offset for partition event-x to offset 0 on every restart

I have a Spring Cloud Stream Kafka Stream application that reads from a topic (event) and performs a simple processing:
@Configuration
class EventKStreamConfiguration {

    private val logger = LoggerFactory.getLogger(javaClass)

    @StreamListener
    fun process(@Input("event") eventStream: KStream<String, EventReceived>) {
        eventStream.foreach { key, value ->
            logger.info("--------> Processing Event {}", value)
            // Save in DB
        }
    }
}
This application is using a Kafka environment from Confluent Cloud, with an event topic with 6 partitions. The full configuration is:
spring:
  application:
    name: events-processor
  cloud:
    stream:
      schema-registry-client:
        endpoint: ${schema-registry-url:http://localhost:8081}
      kafka:
        streams:
          binder:
            brokers: ${kafka-brokers:localhost}
            configuration:
              application:
                id: ${spring.application.name}
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              schema:
                registry:
                  url: ${spring.cloud.stream.schema-registry-client.endpoint}
              value:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              processing:
                guarantee: exactly_once
          bindings:
            event:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
      bindings:
        event:
          destination: event
  data:
    mongodb:
      uri: ${mongodb-uri:mongodb://localhost/test}
server:
  port: 8085
logging:
  level:
    org.springframework.kafka.config: debug
---
spring:
  profiles: confluent-cloud
  cloud:
    stream:
      kafka:
        streams:
          binder:
            autoCreateTopics: false
            configuration:
              retry:
                backoff:
                  ms: 500
              security:
                protocol: SASL_SSL
              sasl:
                mechanism: PLAIN
                jaas:
                  config: xxx
              basic:
                auth:
                  credentials:
                    source: USER_INFO
              schema:
                registry:
                  basic:
                    auth:
                      user:
                        info: yyy
Messages are being correctly processed by the KStream. If I restart the application they are not reprocessed. Note: I don’t want them to be reprocessed, so this behaviour is ok.
However the startup logs show some strange bits:
First it displays the creation of a restore consumer client with auto.offset.reset = none:
2019-07-19 10:20:17.120 INFO 82473 --- [ main] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] Creating restore consumer client
2019-07-19 10:20:17.123 INFO 82473 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = none
Then it creates a consumer client with auto.offset.reset = earliest:
2019-07-19 10:20:17.235 INFO 82473 --- [ main] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] Creating consumer client
2019-07-19 10:20:17.241 INFO 82473 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
The final traces of the startup log show an offset reset to 0. This happens on every restart of the application:
2019-07-19 10:20:31.577 INFO 82473 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] State transition from PARTITIONS_ASSIGNED to RUNNING
2019-07-19 10:20:31.578 INFO 82473 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f] State transition from REBALANCING to RUNNING
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-3 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-0 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-1 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-5 to offset 0.
2019-07-19 10:20:31.670 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-4 to offset 0.
Why are there two consumers configured?
Why does the second one have auto.offset.reset = earliest when I haven't configured it explicitly and the Kafka default is latest?
I want the default (auto.offset.reset = latest) behaviour and it seems to be working fine. However, doesn't it contradict what I see in the logs?
UPDATE:
I would rephrase the third question like this: Why do the logs show that the partitions are being reset to 0 on every restart, and why, despite that, are no messages redelivered to the KStream?
UPDATE 2:
I've simplified the scenario, this time with a native Kafka Streams application. The behaviour is exactly the same as observed with Spring Cloud Stream. However, after inspecting the consumer group and the partitions, I've found that it kind of makes sense.
KStream:
fun main() {
    val props = Properties()
    props[StreamsConfig.APPLICATION_ID_CONFIG] = "streams-wordcount"
    props[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "localhost:9092"
    props[StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG] = 0
    props[StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG] = Serdes.String().javaClass.name
    props[StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG] = Serdes.String().javaClass.name

    val builder = StreamsBuilder()
    val source = builder.stream<String, String>("streams-plaintext-input")
    source.foreach { key, value -> println("$key $value") }

    val streams = KafkaStreams(builder.build(), props)
    val latch = CountDownLatch(1)

    // attach shutdown handler to catch control-c
    Runtime.getRuntime().addShutdownHook(object : Thread("streams-wordcount-shutdown-hook") {
        override fun run() {
            streams.close()
            latch.countDown()
        }
    })

    try {
        streams.start()
        latch.await()
    } catch (e: Throwable) {
        exitProcess(1)
    }
    exitProcess(0)
}
This is what I've seen:
1) With an empty topic, the startup shows a resetting of all partitions to offset 0:
07:55:03.885 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-2 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-4 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0
2) I put one message in the topic and inspect the consumer group, seeing that the record is in partition 4:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
streams-plaintext-input 0 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 5 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 1 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 2 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 3 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 4 1 1 0 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
3) I restart the application. Now the resetting only affects the empty partitions (0, 1, 2, 3, 5):
07:57:39.477 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-2 to offset 0.
07:57:39.478 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
07:57:39.478 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
07:57:39.479 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
07:57:39.479 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0.
4) I insert another message, inspect the consumer group state and the same thing happens: the record is in partition 2 and when restarting the application it only resets the empty partitions (0, 1, 3, 5):
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
streams-plaintext-input 0 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 5 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 1 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 2 1 1 0 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 3 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 4 1 1 0 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
08:00:42.313 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0.
Why are there two consumers configured?
The restore consumer client is a dedicated consumer for fault tolerance and state management. It is responsible for restoring state from the changelog topics, and it is displayed separately from the application consumer client. You can find more information here:
https://docs.confluent.io/current/streams/monitoring.html#kafka-restore-consumer-client-id
Why does the second one have auto.offset.reset = earliest when I haven't configured it explicitly and the Kafka default is latest?
You are right: the default value of auto.offset.reset is latest for a Kafka consumer. But in Spring Cloud Stream the default value for the consumer startOffset is earliest, which is why the second consumer shows earliest. It also depends on the spring.cloud.stream.bindings.<channelName>.group binding: if the group is set explicitly, startOffset defaults to earliest; otherwise it is set to latest for an anonymous consumer.
Reference: Spring Cloud Stream Kafka Consumer Properties
I want the default (auto.offset.reset = latest) behaviour and it seems to be working fine. However, doesn't it contradict what I see in the logs?
In the case of an anonymous consumer group, the default value for startOffset is latest.
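As an illustration of that dependency (a sketch using the binding name from the question; group is a standard Spring Cloud Stream binding property), setting the group explicitly would look like:
spring:
  cloud:
    stream:
      bindings:
        event:
          destination: event
          group: events-processor
With an explicit group the consumer is durable and startOffset defaults to earliest; without it, an anonymous consumer starting from latest is used.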

IntelliJ scala worksheet: Reduce debug logging

When using the worksheet with Slick, it logs so much debug that I don't see the actual result of what I'm doing. I've been trying to figure out how to disable the debug-logging for hours now, but I can't figure it out.
How do I disable the (slick)/(worksheet) logging?
from the worksheet:
db.run(
  countries.take(5)
    .map(_.country)
    .result
)
Which outputs ~200 lines of:
countries: slick.lifted.TableQuery[Country] = Rep(TableExpansion)
z: slick.lifted.Query[Country,Country#TableElementType,Seq] = Rep(Filter #968224334)
res0: java.sql.Connection = org.postgresql.jdbc4.Jdbc4Connection#40d24bbd
BEFORE
22:17:58.192 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - Source:
| Bind
| from s2: Take
| from: TableExpansion
| table s3: Table country
| columns: ProductNode
| 1: Path s3.id : String'
| 2: Path s3.name : String'
| count: LiteralNode 5 (volatileHint=false)
| select: Pure t4
| value: Path s2.name : String'
22:17:58.193 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.AssignUniqueSymbols - Detected features: UsedFeatures(false,false,false,false)
22:17:58.194 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase assignUniqueSymbols:
| Bind
| from s5: Take
| from: TableExpansion
| table s6: Table country
| columns: ProductNode
| 1: Path s6.id : String'
| 2: Path s6.name : String'
| count: LiteralNode 5 (volatileHint=false)
| select: Pure t8
| value: Path s5.name : String'
22:17:58.195 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase inferTypes: (no change)
22:17:58.196 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.ExpandTables - Found Selects for NominalTypes: #t7
22:17:58.197 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.ExpandTables - With correct table types:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.197 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.ExpandTables - Table expansions: #t7 -> (s6,ProductNode)
22:17:58.198 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase expandTables:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.199 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase forceOuterBinds:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.200 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase removeMappedTypes: (no change)
22:17:58.200 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase expandSums: (no change)
22:17:58.201 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase expandRecords:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.202 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Flattening projection t8
22:17:58.202 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Analyzing s5.name with symbols
| Path s5.name : String'
22:17:58.203 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Translated s5.name to:
| Path s5.name
22:17:58.203 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Flattening node at Path
| Path s5.name
22:17:58.204 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Adding definition: s9 -> Path s5.name
22:17:58.204 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Adding translation for t8: (Map(List() -> s9), UnassignedType)
22:17:58.204 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Flattened projection to
| Pure t8
| value: StructNode
| s9: Path s5.name
22:17:58.205 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase flattenProjections:
| Bind : Vector[t8<{s9: String'}>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<{s9: String'}>]
| value: StructNode : {s9: String'}
| s9: Path s5.name : String'
(and it goes on and on and on...)
So how do I turn off this logging?
Assuming that you are using the default sbt directory structure, you need to configure logback to log only what you want. To do so, add a file src/main/resources/logback.xml with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="slick.lifted" level="INFO" />
  <root level="debug">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
Of course, you can configure other log levels besides INFO, and also set levels for specific Slick packages (most of the output shown above comes from slick.compiler).
And finally, if you are using the Play Framework directory structure, put the logback.xml file inside the conf directory instead.
I had a similar problem with a Scala worksheet with Spark code where it wrote literally hundreds of DEBUG messages to the log - all starting with [NGSession 241: 127.0.0.1: compile] DEBUG ...
The project uses several modules and no matter where I put the adjusted logback.xml file, it didn't get picked up.
I noticed that this NGSession relates to the Scala compile server. By specifically telling the server which logback.xml to use, it was picked up:
Add the parameter...
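The parameter is presumably logback's standard configuration override, passed as a JVM option to the Scala compile server (the path is illustrative):
-Dlogback.configurationFile=/path/to/logback.xml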
After that I restarted the compile server (Stop/Start) and it worked.