After configuring kubelet reserved resources, kubelet cannot handle the cgroup - kubernetes

I have a few questions about kubelet reserved resources. I don't know whether my configuration is actually working or not.
Step 1:
I create the cgroup directories using the following commands:
for i in `ls -L /sys/fs/cgroup`; do mkdir -p /sys/fs/cgroup/$i/kube-reserved.slice; done
for i in `ls -L /sys/fs/cgroup`; do mkdir -p /sys/fs/cgroup/$i/system-reserved.slice; done
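To double-check step 1, the new directories can be listed under every controller (a quick sanity check assuming cgroup v1, which the per-controller layout above implies):
for i in `ls /sys/fs/cgroup`; do ls -d /sys/fs/cgroup/$i/kube-reserved.slice /sys/fs/cgroup/$i/system-reserved.slice; done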
Step 2:
I add the following kubelet args:
--enforce-node-allocatable=pods,kube-reserved,system-reserved \
--kube-reserved=cpu=1,memory=1Gi \
--kube-reserved-cgroup=/kube-reserved.slice \
--system-reserved=cpu=1,memory=1Gi \
--system-reserved-cgroup=/system-reserved.slice \
--cgroup-root=/ --v=4
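Once kubelet is running with these flags, a rough way to see whether the kube-reserved/system-reserved enforcement actually landed on those cgroups is to read back the limits kubelet should have written there. This is only a sketch for cgroup v1 (the file names differ on cgroup v2); if enforcement worked, cpu=1 should show up as roughly 1024 cpu.shares and memory=1Gi as 1073741824 bytes:
cat /sys/fs/cgroup/cpu/kube-reserved.slice/cpu.shares
cat /sys/fs/cgroup/memory/kube-reserved.slice/memory.limit_in_bytes
cat /sys/fs/cgroup/cpu/system-reserved.slice/cpu.shares
cat /sys/fs/cgroup/memory/system-reserved.slice/memory.limit_in_bytes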
After kubelet started, I can see the node Allocatable changed as I expected:
Capacity:
  cpu:                8
  ephemeral-storage:  9480420Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16414252Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  8737155058
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             14214700Ki
  pods:               110
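The numbers also add up: Allocatable cpu is 8 - 1 - 1 = 6, and Allocatable memory is 16414252Ki - 1048576Ki (kube-reserved) - 1048576Ki (system-reserved) - 102400Ki (presumably the default memory.available<100Mi hard-eviction threshold) = 14214700Ki, so the reservations themselves were clearly accepted.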
Kubelet logs for system-reserved.slice:
root@k8s-node02:~# journalctl -xeu kubelet |grep -v ignoring |grep system-reserved.slice
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.712207 7117 factory.go:177] Factory "docker" was unable to handle container "/system-reserved.slice"
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.712214 7117 factory.go:166] Error trying to work out if we can handle /system-reserved.slice: /system-reserved.slice not handled by systemd handler
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.712218 7117 factory.go:177] Factory "systemd" was unable to handle container "/system-reserved.slice"
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.712225 7117 factory.go:177] Factory "containerd" was unable to handle container "/system-reserved.slice"
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.715222 7117 factory.go:177] Factory "docker" was unable to handle container "/system-reserved.slice"
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.715229 7117 factory.go:166] Error trying to work out if we can handle /system-reserved.slice: /system-reserved.slice not handled by systemd handler
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.715232 7117 factory.go:177] Factory "systemd" was unable to handle container "/system-reserved.slice"
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.715238 7117 factory.go:177] Factory "containerd" was unable to handle container "/system-reserved.slice"
Kubelet logs for kube-reserved.slice:
root@k8s-node02:~# journalctl -xeu kubelet |grep -v ignoring |grep kube-reserved.slice
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.711765 7117 factory.go:177] Factory "docker" was unable to handle container "/kube-reserved.slice"
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.711772 7117 factory.go:166] Error trying to work out if we can handle /kube-reserved.slice: /kube-reserved.slice not handled by systemd handler
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.711776 7117 factory.go:177] Factory "systemd" was unable to handle container "/kube-reserved.slice"
Jan 20 11:23:46 k8s-node02 kubelet[7117]: I0120 11:23:46.711783 7117 factory.go:177] Factory "containerd" was unable to handle container "/kube-reserved.slice"
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.713871 7117 factory.go:177] Factory "docker" was unable to handle container "/kube-reserved.slice"
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.713877 7117 factory.go:166] Error trying to work out if we can handle /kube-reserved.slice: /kube-reserved.slice not handled by systemd handler
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.713880 7117 factory.go:177] Factory "systemd" was unable to handle container "/kube-reserved.slice"
Jan 20 11:24:46 k8s-node02 kubelet[7117]: I0120 11:24:46.713886 7117 factory.go:177] Factory "containerd" was unable to handle container "/kube-reserved.slice"
Is this normal when setting reserved resources, or is my configuration wrong?

Related

NullPointerException: ha.BootstrapStandby.run(BootstrapStandby.java:549)

$HADOOP_HOME/bin/hdfs --config $HADOOP_CONF_DIR namenode -bootstrapStandby
Getting the below error in Hadoop HA 3.3.3 on the standby node for the above command.
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853529569,"date":"2022-08-07 06:25:29,569","level":"INFO","thread":"main","message":"registered UNIX signal handlers for [TERM, HUP, INT]"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853529682,"date":"2022-08-07 06:25:29,682","level":"INFO","thread":"main","message":"createNameNode [-bootstrapStandby]"}
{"name":"org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby","time":1659853530068,"date":"2022-08-07 06:25:30,068","level":"INFO","thread":"main","message":"Found nn: apache-hadoop-namenode-0.apache-hadoop-namenode.backend.svc.cluster.local, ipc: hdfs:8020"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853530069,"date":"2022-08-07 06:25:30,069","level":"ERROR","thread":"main","message":"Failed to start namenode.","exceptionclass":"java.io.IOException","stack":["java.io.IOException: java.lang.NullPointerException","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:549)","\tat org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1741)","\tat org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1834)","Caused by: java.lang.NullPointerException","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.parseConfAndFindOtherNN(BootstrapStandby.java:435)","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:114)","\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)","\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:544)","\t... 2 more"]}{"name":"org.apache.hadoop.util.ExitUtil","time":1659853530074,"date":"2022-08-07 06:25:30,074","level":"INFO","thread":"main","message":"Exiting with status 1: java.io.IOException: java.lang.NullPointerException"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853530083,"date":"2022-08-07 06:25:30,083","level":"INFO","thread":"shutdown-hook-0","message":"SHUTDOWN_MSG: \n\nSHUTDOWN_MSG: Shutting down NameNode at apache-hadoop-namenode-1.apache-hadoop-namenode.backend.svc.cluster.local"}

Converting a Storm 1 Kafka Topology to Heron, have a few questions

Been experimenting with switching a Storm 1.0.6 topology to Heron. Taking a baby step by removing all but the Kafka spout to see how things go. Have a main method as follows (modified from the original Flux version):
import org.apache.heron.eco.Eco;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KafkaTopology {
    public static void main(String[] args) throws Exception {
        List<String> argList = new ArrayList<String>(Arrays.asList(args));
        String file = KafkaTopology.class.getClassLoader().getResource("topology.yaml").getFile();
        argList.add("local");
        argList.add("--eco-config-file");
        argList.add(file);
        file = KafkaTopology.class.getClassLoader().getResource("dev.properties").getFile();
        argList.add("--props");
        argList.add(file);
        argList.add("--sleep");
        argList.add("36000000");
        String[] ecoArgs = argList.toArray(new String[argList.size()]);
        Eco.main(ecoArgs);
    }
}
YAML is this:
name: "kafkaTopology-XXX_topologyVersion_XXX"
type: "storm"
config:
topology.workers: ${workers.config}
topology.max.spout.pending: ${max.spout.pending}
topology.message.timeout.secs: 120
topology.testing.always.try.serialize: true
storm.zookeeper.session.timeout: 30000
storm.zookeeper.connection.timeout: 30000
storm.zookeeper.retry.times: 5
storm.zookeeper.retry.interval: 2000
properties:
kafka.mapper.zkServers: ${kafka.mapper.zkServers}
kafka.mapper.zkPort: ${kafka.mapper.zkPort}
bootstrap.servers: ${bootstrap.servers}
kafka.mapper.brokerZkStr: ${kafka.mapper.brokerZkStr}
kafka.topic.name: ${kafka.topic.name}
components:
- id: "zkHosts"
className: "org.apache.storm.kafka.ZkHosts"
constructorArgs:
- ${kafka.mapper.brokerZkStr}
- id: "rawMessageAndMetadataScheme"
className: "org.acme.storm.spout.RawMessageAndMetadataScheme"
- id: "messageMetadataSchemeAsMultiScheme"
className: "org.apache.storm.kafka.MessageMetadataSchemeAsMultiScheme"
constructorArgs:
- ref: "rawMessageAndMetadataScheme"
- id: "kafkaSpoutConfig"
className: "org.apache.storm.kafka.SpoutConfig"
constructorArgs:
# brokerHosts
- ref: "zkHosts"
# topic
- ${kafka.topic.name}
# zkRoot
- "/zkRootKafka.kafkaSpout.builder"
# id
- ${kafka.topic.name}
properties:
- name: "scheme"
ref: "messageMetadataSchemeAsMultiScheme"
- name: zkServers
value: ${kafka.mapper.zkServers}
- name: zkPort
value: ${kafka.mapper.zkPort}
# Retry Properties
- name: "retryInitialDelayMs"
value: 60000
- name: "retryDelayMultiplier"
value: 1.5
- name: "retryDelayMaxMs"
value: 14400000
- name: "retryLimit"
value: 0
# spout definitions
spouts:
- id: "kafka-spout"
className: "org.apache.storm.kafka.KafkaSpout"
parallelism: ${kafka.spout.parallelism}
constructorArgs:
- ref: "kafkaSpoutConfig"
Relevant POM entries:
<dependency>
    <groupId>org.apache.heron</groupId>
    <artifactId>heron-api</artifactId>
    <version>0.20.3-incubating</version>
</dependency>
<dependency>
    <groupId>org.apache.heron</groupId>
    <artifactId>heron-storm</artifactId>
    <version>0.20.3-incubating</version>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.6</version>
</dependency>
Main method seems to run fine:
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.parser.EcoParser loadTopologyFromYaml
INFO: Parsing eco config file
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.parser.EcoParser loadTopologyFromYaml
INFO: Performing property substitution.
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.parser.EcoParser loadTopologyFromYaml
INFO: Performing environment variable substitution.
topology type is Storm
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildConfig
INFO: Building topology config
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: ---------- TOPOLOGY DETAILS ----------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: Topology Name: kafkaTopology-XXX_topologyVersion_XXX
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: --------------- SPOUTS ---------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: kafka-spout [1] (org.apache.storm.kafka.KafkaSpout)
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: ---------------- BOLTS ---------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: --------------- STREAMS ---------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: --------------------------------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building components
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building spouts
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building bolts
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building streams
Process finished with exit code 0
Question 1: The topology exits immediately. Is there an Eco flag equivalent to Flux's '--sleep' to keep it running for a while (to debug, etc.)?
Question 2: I was a little surprised that I needed to pull storm-kafka in (I thought there would be a Heron equivalent). Is this correct (or should it be some other artifact?), and if so, is 1.0.6 an OK version to use, or does Heron work better with another version?
Question 3: The above was with type: "storm" in the YAML, trying type: "heron" gives the following error:
INFO: Building spouts
Exception in thread "main" java.lang.ClassCastException: class org.apache.storm.kafka.KafkaSpout cannot be cast to class org.apache.heron.api.spout.IRichSpout (org.apache.storm.kafka.KafkaSpout and org.apache.heron.api.spout.IRichSpout are in unnamed module of loader 'app')
at org.apache.heron.eco.builder.heron.SpoutBuilder.buildSpouts(SpoutBuilder.java:42)
at org.apache.heron.eco.builder.heron.EcoBuilder.buildTopologyBuilder(EcoBuilder.java:70)
at org.apache.heron.eco.Eco.submit(Eco.java:125)
at org.apache.heron.eco.Eco.main(Eco.java:161)
at KafkaTopology.main(KafkaTopology.java:26)
Process finished with exit code 1
Is this just the way it is when using Kafka (the type needs to be "storm" and not "heron"), or is there some workaround here?
Question 1: I'm not sure why the topology would shut down on you. Try running your submit with the --verbose flag. At this time the functionality of the --sleep argument does not exist; it could be added as a feature if you need it.
Question 2: There is a Heron equivalent. After Heron was donated to Apache, quite a lot of work had to be done to get binary releases out. Most of that work has been done; with the next release, I would hope that all binary artifacts will be distributed appropriately.
Question 3: This issue occurs because, based on the type specified, Eco looks for bolts/spouts in a certain package. When "storm" is given, it expects the classes they implement or extend to come from "org.apache.storm"; when "heron" is given, it expects them to come from "org.apache.heron". If you use the storm-kafka dependency, the type will need to be "storm". The Heron equivalents can be found here: https://search.maven.org/search?q=heron-kafka
There are several Kafka spouts for Heron. I use a clone of Storm's storm-kafka-client-2.1 and use it in production.
https://search.maven.org/artifact/com.github.thinker0.heron/heron-kafka-client/1.0.4.1/jar

What parameters should I pass for the schema-registry to run on non-master mode?

I want to run the schema-registry in non-master mode in Kubernetes. I passed the environment variable master.eligibility=false; however, it is still electing a master.
Please point me to where else I should change the configuration. There are no errors suggesting the environment value is wrong.
cmd:
helm install helm-test-0.1.0.tgz --set env.name.SCHEMA_REGISTRY_KAFKASTORE_BOOTSERVERS="PLAINTEXT://xx.xx.xx.xx:9092\,PLAINTEXT://xx.xx.xx.xx:9092\,PLAINTEXT://xx.xx.xx.xx:9092" --set env.name.SCHEMA_REGISTRY_LISTENERS="http://0.0.0.0:8083" --set env.name.SCHEMA_REGISTRY_MASTER_ELIGIBILITY=false
Details:
replicaCount: 1

image:
  repository: confluentinc/cp-schema-registry
  tag: "5.0.0"
  pullPolicy: IfNotPresent

env:
  name:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092"
    SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8883"
    SCHEMA_REGISTRY_HOST_NAME: localhost
    SCHEMA_REGISTRY_MASTER_ELIGIBILITY: false
Pod - schema-registry properties:
root@test-app-788455bb47-tjlhw:/# cat /etc/schema-registry/schema-registry.properties
master.eligibility=false
listeners=http://0.0.0.0:8883
host.name=xx.xx.xxx.xx
kafkastore.bootstrap.servers=PLAINTEXT://xx.xx.xx.xx:9092,PLAINTEXT://xx.xx.xx.xx:9092,PLAINTEXT://xx.xx.xx.xx:9092
echo "===> Launching ... "
+ echo '===> Launching ... '
exec /etc/confluent/docker/launch
+ exec /etc/confluent/docker/launch
===> Launching ...
===> Launching schema-registry ...
[2018-10-15 18:52:45,993] INFO SchemaRegistryConfig values:
resource.extension.class = []
metric.reporters = []
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.ssl.trustmanager.algorithm = PKIX
inter.instance.protocol = http
authentication.realm =
ssl.keystore.type = JKS
kafkastore.topic = _schemas
metrics.jmx.prefix = kafka.schema.registry
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.topic.replication.factor = 3
ssl.truststore.password = [hidden]
kafkastore.timeout.ms = 500
host.name = xx.xxx.xx.xx
kafkastore.bootstrap.servers = [PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092]
schema.registry.zk.namespace = schema_registry
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.kerberos.service.name =
schema.registry.resource.extension.class = []
ssl.endpoint.identification.algorithm =
compression.enable = false
kafkastore.ssl.truststore.type = JKS
avro.compatibility.level = backward
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.truststore.location =
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
kafkastore.ssl.keystore.type = JKS
authentication.skip.paths = []
ssl.truststore.type = JKS
kafkastore.ssl.truststore.password = [hidden]
access.control.allow.origin =
ssl.truststore.location =
ssl.keystore.password = [hidden]
port = 8081
kafkastore.ssl.keystore.location =
metrics.tag.map = {}
master.eligibility = false
Logs of the schema-registry pod:
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2018-10-15 18:52:48,571] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:48,571] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:48,599] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:48,602] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2018-10-15 18:52:48,605] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2018-10-15 18:52:48,715] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.100.4.189-8083] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2018-10-15 18:52:48,721] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:48,775] INFO Wait to catch up until the offset of the last message at 228 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2018-10-15 18:52:49,831] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2018-10-15 18:52:49,852] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:49,852] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:49,909] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:49,915] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Discovered group coordinator ip-10-150-4-5.ec2.internal:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:49,919] INFO [Schema registry clientId=sr-1, groupId=schema-registry] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:52,975] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Successfully joined group with generation 92 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:52,980] INFO Finished rebalance with master election result: Assignment{version=1, error=0, master='sr-1-abcd4cf2-8a02-4105-8361-9aa82107acd8', masterIdentity=version=1,host=ip-xx-xxx-xx-xx.ec2.internal,port=8083,scheme=http,masterEligibility=true} (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector)
[2018-10-15 18:52:53,088] INFO Adding listener: http://0.0.0.0:8083 (io.confluent.rest.Application)
[2018-10-15 18:52:53,347] INFO jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b01 (org.eclipse.jetty.server.Server)
[2018-10-15 18:52:53,428] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
[2018-10-15 18:52:53,429] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
[2018-10-15 18:52:53,432] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.
[2018-10-15 18:52:54,364] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version)
[2018-10-15 18:52:54,587] INFO Started o.e.j.s.ServletContextHandler#764faa6{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2018-10-15 18:52:54,619] INFO Started o.e.j.s.ServletContextHandler#14a50707{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2018-10-15 18:52:54,642] INFO Started NetworkTrafficServerConnector#62656be4{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector)
[2018-10-15 18:52:54,644] INFO Started #9700ms (org.eclipse.jetty.server.Server)
[2018-10-15 18:52:54,644] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
I checked and your configs look good. I believe it is, in fact, starting as a follower, and the logs are simply displaying who the master is in this case:
Assignment{version=1, error=0, master='sr-1-abcd4cf2-8a02-4105-8361-9aa82107acd8', masterIdentity=version=1,host=ip-xx-xxx-xx-xx.ec2.internal,port=8083,scheme=http,masterEligibility=true}
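A quick way to confirm which instance actually won the election is to grep the pod's log for that rebalance line and compare the reported masterIdentity host/port against this pod's own (pod name taken from your prompt above):
kubectl logs test-app-788455bb47-tjlhw | grep 'master election result'
If the masterIdentity points at another host, as it does in your log, this instance joined the group as a follower, which is exactly what master.eligibility=false is meant to achieve.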

Not able to load data in to MySQL DB using spring batch

I am using org.springframework.batch.item.database.JdbcBatchItemWriter to write the files into the DB and org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper to map the columns. Data is not getting inserted into the DB, and I am not getting any error in the logs.
<bean id="ItemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
insert into Student_Details(Name,Id,ClassId,Rank) values (:Name, :Id, :ClassId, :Rank)
]]>
</value>
</property>
<property name="itemSqlParameterSourceProvider">
<bean class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
</property>
2016-04-28 05:45:59,904 INFO [com.sam.test.mine.scheduler.SchedulerService] [fileFirmsChannelPool-2] INFO - <Ok file received: Student_details_20160116.OK>
Apr 28, 2016 5:45:59 AM org.springframework.batch.core.launch.support.SimpleJobLauncher run
INFO: Job: [FlowJob: [name=StudentDetailsJob]] launched with the following parameters: [{groupId=0, size=0,filename=file:/app/data/Student_details_20160116.txt, filenames=file:/app/data/Student_details_20160116.txt, now=1461836759909,type=STUDENT_DET}]
Apr 28, 2016 5:46:00 AM org.springframework.batch.core.job.SimpleStepHandler handleStep
INFO: Executing step: [cleanStudentDetails]
2016-04-28 05:46:00,362 INFO [com.sam.test.mine.batch.JdbcUpdateTasklet] [fileFirmsChannelPool-2] INFO - <Deleted table Student_Details successfully.>
Apr 28, 2016 5:46:00 AM org.springframework.batch.core.job.SimpleStepHandler handleStep
INFO: Executing step: [studentDetailsStep]
Apr 28, 2016 5:46:00 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
Apr 28, 2016 5:46:00 AM org.springframework.jdbc.support.SQLErrorCodesFactory <init>
INFO: SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase, Hana]
Apr 28, 2016 5:46:00 AM org.springframework.batch.core.job.SimpleStepHandler handleStep
INFO: Executing step: [archiveStudentDetails]
2016-04-28 05:46:00,894 INFO [com.sam.test.mine.batch.FileArchiverTasklet] [fileFirmsChannelPool-2] INFO - <Archiving ... >
2016-04-28 05:46:00,902 INFO [com.sam.test.mine.batch.FileArchiverTasklet] [fileFirmsChannelPool-2] INFO - <success moving file to archive: /app/data/Student_details_20160116.txt to /app/archive/20160428/Student_details_20160116.txt.execution#33912>
Apr 28, 2016 5:46:00 AM org.springframework.batch.core.launch.support.SimpleJobLauncher run
INFO: Job: [FlowJob: [name=StudentDetailsJob]] completed with the following parameters: [{groupId=0, size=0, filename=file:/app/data/Student_details_20160116.txt, filenames=file:/app/data/Student_details_20160116.txt, now=1461836759909, type=STUDENT_DET}] and the following status: [COMPLETED]
2016-04-28 05:46:00,975 INFO [com.sam.test.mine.scheduler.SchedulerService] [fileFirmsChannelPool-2] INFO - <finish deleting Ok file /app/data/Student_details_20160116.OK>>
I am seeing READ_COUNT values in the BATCH_STEP_EXECUTION table, but WRITE_COUNT is 0 and WRITE_SKIP_COUNT is the same as the read count.
-Amuthan
When your READ_COUNT matches your WRITE_SKIP_COUNT, it implies that an exception is being thrown in your JdbcBatchItemWriter and that it has been registered as a skippable exception via the skippable-exception-classes property, e.g.:
<step id="step1">
<tasklet>
<chunk reader="soemReader" writer="jdbcWriter"
commit-interval="10" skip-limit="10">
<skippable-exception-classes>
<include class="com.package.YourException"/>
</skippable-exception-classes>
</chunk>
</tasklet>
</step>
I'd remove any skippable exceptions unless you have a real business reason to swallow errors like this.

MongoDB getting killed on read load

A standalone mongodb under heavy read load is getting killed.
I see no instance of mongo being OOM-killed in kern.log. Is there any way I can debug the root cause?
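For reference, this is roughly the kind of check I ran against the kernel log (log paths vary by distro):
grep -iE 'oom|killed process' /var/log/kern.log
dmesg | grep -iE 'oom|killed process'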
I looked at the mongo logs (db version 2.6.1):
mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117720b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f82e5391182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f82e469630d]
2014-08-27T08:37:29.584+0000 [conn28] 808.autoComplete Assertion failure a <= 512*1024*1024 src/mongo/util/alignedbuilder.cpp 104
2014-08-27T08:37:29.601+0000 [conn28] 808.autoComplete 0x11c0e91 0x1163109 0x1146f4e 0x1144f9d 0xa6370f 0xa639c4 0xa563f7 0xa56a9a 0xa53e8f 0xa53f46 0xc3e557 0xc4a0f6 0xb90169 0xb9a388 0x76b6af 0x117720b 0x7f82e5391182 0x7f82e469630d mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x9f) [0x76b6af]
mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117720b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f82e5391182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f82e469630d]
2014-08-27T08:37:29.652+0000 [conn28] dbexception in groupCommit causing immediate shutdown: 0 assertion src/mongo/util/alignedbuilder.cpp:104
2014-08-27T08:37:29.652+0000 [conn28] SEVERE: gc1
2014-08-27T08:37:29.661+0000 [conn28] SEVERE: Got signal: 6 (Aborted).
Backtrace:0x11c0e91 0x11c026e 0x7f82e45d1ff0 0x7f82e45d1f79 0x7f82e45d5388 0xb8c1bb 0xa5683d 0xa56a9a 0xa53e8f 0xa53f46 0xc3e557 0xc4a0f6 0xb90169 0xb9a388 0x76b6af 0x117720b 0x7f82e5391182 0x7f82e469630d