I see that tarmk-coldstandby.log keeps growing, as shown in the log snippet below.
08.04.2016 15:20:08.984 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment ddfdd5d1-c66d-4a15-bbe4-554a02698dda to tracker cache (1024 bytes)
08.04.2016 15:20:09.005 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentId Loading segment 7a81e4a1-0164-41b3-bade-765ad98e3892
08.04.2016 15:20:09.005 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment 7a81e4a1-0164-41b3-bade-765ad98e3892 to tracker cache (1024 bytes)
08.04.2016 15:20:09.062 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentId Loading segment 9ceb0986-cf88-44de-bc5d-cf3c1c564d17
08.04.2016 15:20:09.062 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment 9ceb0986-cf88-44de-bc5d-cf3c1c564d17 to tracker cache (1024 bytes)
08.04.2016 15:20:09.093 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentId Loading segment 9499975f-f741-40fe-b93f-fddb3c744c3e
08.04.2016 15:20:09.093 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment 9499975f-f741-40fe-b93f-fddb3c744c3e to tracker cache (1024 bytes)
08.04.2016 15:20:09.112 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentId Loading segment 15b7c96d-3d9c-4a9e-b52f-c3d3daa99cb0
08.04.2016 15:20:09.112 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment 15b7c96d-3d9c-4a9e-b52f-c3d3daa99cb0 to tracker cache (1024 bytes)
08.04.2016 15:20:09.143 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentId Loading segment 1512b2f6-9c58-4804-bb5a-3dd6d1c17379
08.04.2016 15:20:09.143 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment 1512b2f6-9c58-4804-bb5a-3dd6d1c17379 to tracker cache (1024 bytes)
08.04.2016 15:20:09.172 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentId Loading segment 8b336377-ba60-4268-bd7e-646deecf01cb
08.04.2016 15:20:09.172 DEBUG [defaultEventExecutorGroup-5-1] org.apache.jackrabbit.oak.plugins.segment.SegmentTracker Added segment 8b336377-ba60-4268-bd7e-646deecf01cb to tracker cache (1024 bytes)
Could someone please help us understand this log and resolve the problem?
Thanks in advance!
The Primary and Standby instances sync segments from the Primary to the Standby; these log entries are produced by that process.
You can change the logging level for tarmk-coldstandby.log in the OSGi console, or, if you have created a custom Sling OSGi logging configuration, you can change it there as well.
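Since the entries above are all written at DEBUG level, raising the logger's level is enough to stop the file from growing so quickly. A sketch of what the custom Sling OSGi logging configuration could look like (the file path and logger name here are assumptions; match them to your existing tarmk-coldstandby appender):

```
# Factory configuration: org.apache.sling.commons.log.LogManager.factory.config
org.apache.sling.commons.log.level="info"
org.apache.sling.commons.log.file="logs/tarmk-coldstandby.log"
org.apache.sling.commons.log.names=["org.apache.jackrabbit.oak.plugins.segment"]
```

With the level at info, the per-segment "Loading segment" / "Added segment to tracker cache" lines are no longer written.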
I have tried to install the nuxeo-platform-login-keycloak plugin on Nuxeo 7.10 to connect to Keycloak 19.0.3, following the instructions in the README:
https://github.com/nikes/nuxeo-platform-login-keycloak
I modified the pom.xml to point to Nuxeo 10.10, as 10.2-SNAPSHOT is no longer available, and added the Keycloak Tomcat adapters for 19.0.3 from here:
https://www.keycloak.org/archive/downloads-19.0.3.html
with a basic config file (realm, certificate, etc.).
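For reference, the adapter config file follows the standard keycloak.json layout; roughly like this (all values below are placeholders, not the real realm or URL):

```json
{
  "realm": "my-realm",
  "auth-server-url": "https://keycloak.example.com/",
  "resource": "nuxeo",
  "public-client": true,
  "ssl-required": "external"
}
```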
The problem is that Nuxeo does not start when I upload the plugin and configuration to /nxserver/.
This is the server-error.log:
======================================================================
= Starting Nuxeo Framework
======================================================================
* Server home = /opt/nuxeo/server
* Runtime home = /opt/nuxeo/server/nxserver
* Data Directory = /var/lib/nuxeo/data
* Log Directory = /var/log/nuxeo
* Configuration Directory = /opt/nuxeo/server/nxserver/config
* Temp Directory = /opt/nuxeo/server/tmp
======================================================================
2022-11-26 14:21:04,652 WARN [localhost-startStop-1] [org.nuxeo.runtime.model.impl.ComponentManagerImpl] Component org.nuxeo.runtime.trackers.files.threadstracking.config was blacklisted. Ignoring.
2022-11-26 14:21:05,187 INFO [localhost-startStop-1] [org.nuxeo.elasticsearch.ElasticSearchComponent] Registering local embedded configuration: EsLocalConfig(nuxeoCluster, /var/lib/nuxeo/data/elasticsearch, true, mmapfs), loaded from service:org.nuxeo.elasticsearch.defaultConfig
2022-11-26 14:21:05,188 INFO [localhost-startStop-1] [org.nuxeo.elasticsearch.ElasticSearchComponent] Registering index configuration: EsIndexConfig(nuxeo, default, doc), loaded from service:org.nuxeo.elasticsearch.defaultConfig
2022-11-26 14:21:05,188 INFO [localhost-startStop-1] [org.nuxeo.elasticsearch.ElasticSearchComponent] Registering index configuration: EsIndexConfig(nuxeo-audit, null, entry), loaded from service:org.nuxeo.elasticsearch.index.audit.contrib
2022-11-26 14:21:05,188 INFO [localhost-startStop-1] [org.nuxeo.elasticsearch.ElasticSearchComponent] Registering index configuration: EsIndexConfig(nuxeo-uidgen, null, seqId), loaded from service:org.nuxeo.elasticsearch.index.sequence.contrib
2022-11-26 14:21:06,487 WARN [localhost-startStop-1] [org.nuxeo.runtime.model.impl.ComponentManagerImpl] Component org.nuxeo.runtime.trackers.files.threadstracking.config was blacklisted. Ignoring.
2022-11-26 14:21:06,666 WARN [localhost-startStop-1] [org.nuxeo.automation.scripting.internals.ScriptingFactory] Class Filter is not available. jdk8u40 is required to activate Automation Javascript imports security.
2022-11-26 14:21:07,261 ERROR [localhost-startStop-1] [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/nuxeo]] Exception sending context initialized event to listener instance of class org.nuxeo.runtime.deployment.NuxeoStarter
java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonParser.getReadCapabilities()Lcom/fasterxml/jackson/core/util/JacksonFeatureSet;
at com.fasterxml.jackson.databind.DeserializationContext.<init>(DeserializationContext.java:212)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.<init>(DefaultDeserializationContext.java:50)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext$Impl.<init>(DefaultDeserializationContext.java:391)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext$Impl.createInstance(DefaultDeserializationContext.java:413)
at com.fasterxml.jackson.databind.ObjectMapper.createDeserializationContext(ObjectMapper.java:4737)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4666)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3666)
at org.keycloak.adapters.KeycloakDeploymentBuilder.loadAdapterConfig(KeycloakDeploymentBuilder.java:196)
at org.keycloak.adapters.KeycloakDeploymentBuilder.build(KeycloakDeploymentBuilder.java:187)
at org.nuxeo.ecm.platform.ui.web.keycloak.KeycloakAuthenticationPlugin.initPlugin(KeycloakAuthenticationPlugin.java:87)
at org.nuxeo.ecm.platform.ui.web.auth.service.PluggableAuthenticationService.registerContribution(PluggableAuthenticationService.java:142)
at org.nuxeo.runtime.model.DefaultComponent.registerExtension(DefaultComponent.java:46)
at org.nuxeo.runtime.model.impl.ComponentInstanceImpl.registerExtension(ComponentInstanceImpl.java:193)
at org.nuxeo.runtime.model.impl.ComponentManagerImpl.registerExtension(ComponentManagerImpl.java:254)
at org.nuxeo.runtime.model.impl.RegistrationInfoImpl.activate(RegistrationInfoImpl.java:358)
at org.nuxeo.runtime.model.impl.RegistrationInfoImpl.resolve(RegistrationInfoImpl.java:436)
at org.nuxeo.runtime.model.impl.ComponentRegistry.resolveComponent(ComponentRegistry.java:177)
at org.nuxeo.runtime.model.impl.ComponentRegistry.addComponent(ComponentRegistry.java:125)
at org.nuxeo.runtime.model.impl.ComponentManagerImpl.register(ComponentManagerImpl.java:154)
at org.nuxeo.runtime.model.impl.DefaultRuntimeContext.deploy(DefaultRuntimeContext.java:121)
at org.nuxeo.runtime.model.impl.DefaultRuntimeContext.deploy(DefaultRuntimeContext.java:96)
at org.nuxeo.runtime.osgi.OSGiRuntimeService.loadComponents(OSGiRuntimeService.java:224)
at org.nuxeo.runtime.osgi.OSGiRuntimeService.createContext(OSGiRuntimeService.java:168)
at org.nuxeo.runtime.osgi.OSGiComponentLoader.bundleChanged(OSGiComponentLoader.java:100)
at org.nuxeo.osgi.OSGiAdapter.fireBundleEvent(OSGiAdapter.java:260)
at org.nuxeo.osgi.BundleImpl.setStarting(BundleImpl.java:394)
at org.nuxeo.osgi.BundleImpl.start(BundleImpl.java:290)
at org.nuxeo.osgi.BundleRegistry.doRegister(BundleRegistry.java:177)
at org.nuxeo.osgi.BundleRegistry.register(BundleRegistry.java:125)
at org.nuxeo.osgi.BundleRegistry.install(BundleRegistry.java:98)
at org.nuxeo.osgi.OSGiAdapter.install(OSGiAdapter.java:186)
at org.nuxeo.osgi.application.loader.FrameworkLoader.install(FrameworkLoader.java:278)
at org.nuxeo.osgi.application.loader.FrameworkLoader.doStart(FrameworkLoader.java:234)
at org.nuxeo.osgi.application.loader.FrameworkLoader.start(FrameworkLoader.java:126)
at org.nuxeo.runtime.deployment.NuxeoStarter.start(NuxeoStarter.java:118)
at org.nuxeo.runtime.deployment.NuxeoStarter.contextInitialized(NuxeoStarter.java:91)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5003)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5517)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:677)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1912)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
2022-11-26 14:21:07,484 WARN [localhost-startStop-1] [org.nuxeo.ecm.platform.ui.web.application.config.JSFAnnotationProvider] container scanned classes unavailable, applying default scanning
2022-11-26 14:21:09,807 WARN [localhost-startStop-1] [org.jboss.seam.security.permission.PersistentPermissionResolver] no permission store available - please install a PermissionStore with the name 'org.jboss.seam.security.jpaPermissionStore' if persistent permissions are required.
2022-11-26 14:21:09,841 ERROR [localhost-startStop-1] [org.apache.catalina.core.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2022-11-26 14:21:09,845 ERROR [localhost-startStop-1] [org.apache.catalina.core.StandardContext] Context [/nuxeo] startup failed due to previous errors
2022-11-26 14:21:09,876 WARN [localhost-startStop-1] [org.nuxeo.runtime.deployment.NuxeoStarter] Deregister JDBC driver: org.h2.Driver#1784cb97
2022-11-26 14:21:09,876 WARN [localhost-startStop-1] [org.nuxeo.runtime.deployment.NuxeoStarter] Deregister JDBC driver: org.apache.derby.jdbc.AutoloadedDriver40#208ed2cf
2022-11-26 14:21:09,885 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deployment of configuration descriptor /opt/nuxeo/server/conf/Catalina/localhost/nuxeo.xml has finished in 6,057 ms
2022-11-26 14:21:09,886 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deploying web application directory /opt/nuxeo/server/webapps/manager
2022-11-26 14:21:10,053 INFO [localhost-startStop-1] [org.apache.catalina.startup.TldConfig] At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
2022-11-26 14:21:10,073 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deployment of web application directory /opt/nuxeo/server/webapps/manager has finished in 187 ms
2022-11-26 14:21:10,073 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deploying web application directory /opt/nuxeo/server/webapps/ROOT
2022-11-26 14:21:10,197 INFO [localhost-startStop-1] [org.apache.catalina.startup.TldConfig] At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
2022-11-26 14:21:10,199 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deployment of web application directory /opt/nuxeo/server/webapps/ROOT has finished in 126 ms
2022-11-26 14:21:10,199 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deploying web application directory /opt/nuxeo/server/webapps/host-manager
2022-11-26 14:21:10,322 INFO [localhost-startStop-1] [org.apache.catalina.startup.TldConfig] At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
2022-11-26 14:21:10,325 INFO [localhost-startStop-1] [org.apache.catalina.startup.HostConfig] Deployment of web application directory /opt/nuxeo/server/webapps/host-manager has finished in 126 ms
2022-11-26 14:21:10,326 INFO [main] [org.apache.coyote.http11.Http11Protocol] Starting ProtocolHandler ["http-bio-0.0.0.0-8080"]
2022-11-26 14:21:10,335 INFO [main] [org.apache.coyote.ajp.AjpProtocol] Starting ProtocolHandler ["ajp-bio-0.0.0.0-8009"]
2022-11-26 14:21:10,336 INFO [main] [org.apache.catalina.startup.Catalina] Server startup in 7370 ms
When starting PySpark from the command line using pyspark, everything works as expected. However, when using Livy, it doesn't.
I made the connection using Postman. First, I POST this to the sessions endpoint:
{
"kind": "pyspark",
"proxyUser": "spark"
}
The session spins up, I can see Spark getting started on YARN. However, I get this error in my container log:
18/09/12 15:53:00 ERROR repl.PythonInterpreter: Process has died with 1
18/09/12 15:53:00 ERROR repl.PythonInterpreter: Traceback (most recent call last):
File "/yarn/nm/usercache/livy/appcache/application_1535188013308_0051/container_1535188013308_0051_01_000001/tmp/3015653701235928503", line 643, in <module>
sys.exit(main())
File "/yarn/nm/usercache/livy/appcache/application_1535188013308_0051/container_1535188013308_0051_01_000001/tmp/3015653701235928503", line 533, in main
exec('from pyspark.shell import sc', global_dict)
File "<string>", line 1, in <module>
File "/opt/cloudera/parcels/SPARK2-2.3.0.cloudera3-1.cdh5.13.3.p0.458809/lib/spark2/python/lib/pyspark.zip/pyspark/shell.py", line 38, in <module>
File "/opt/cloudera/parcels/SPARK2-2.3.0.cloudera3-1.cdh5.13.3.p0.458809/lib/spark2/python/lib/pyspark.zip/pyspark/context.py", line 292, in _ensure_initialized
File "/opt/cloudera/parcels/SPARK2-2.3.0.cloudera3-1.cdh5.13.3.p0.458809/lib/spark2/python/lib/pyspark.zip/pyspark/java_gateway.py", line 47, in launch_gateway
File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'PYSPARK_GATEWAY_SECRET'
The output from sessions/XYZ/log is:
{
"id": 16,
"from": 0,
"total": 46,
"log": [
"stdout: ",
"\nstderr: ",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/livy/rsc/livy-api-0.4.0-SNAPSHOT.jar.",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/livy/rsc/livy-rsc-0.4.0-SNAPSHOT.jar.",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/livy/rsc/netty-all-4.0.29.Final.jar.",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/livy/repl/commons-codec-1.9.jar.",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/livy/repl/livy-core_2.11-0.4.0-SNAPSHOT.jar.",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/livy/repl/livy-repl_2.11-0.4.0-SNAPSHOT.jar.",
"Warning: Skip remote jar hdfs://master1.lama.nuc:8020/lama/lama.main-assembly-0.9.0-spark2.3.0-hadoop2.6.5-SNAPSHOT.jar.",
"18/09/12 15:52:50 INFO client.RMProxy: Connecting to ResourceManager at master1.lama.nuc/192.168.42.100:8032",
"18/09/12 15:52:51 INFO yarn.Client: Requesting a new application from cluster with 6 NodeManagers",
"18/09/12 15:52:51 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)",
"18/09/12 15:52:51 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead",
"18/09/12 15:52:51 INFO yarn.Client: Setting up container launch context for our AM",
"18/09/12 15:52:51 INFO yarn.Client: Setting up the launch environment for our AM container",
"18/09/12 15:52:51 INFO yarn.Client: Preparing resources for our AM container",
"18/09/12 15:52:51 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/livy/rsc/livy-api-0.4.0-SNAPSHOT.jar",
"18/09/12 15:52:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/livy/rsc/livy-rsc-0.4.0-SNAPSHOT.jar",
"18/09/12 15:52:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/livy/rsc/netty-all-4.0.29.Final.jar",
"18/09/12 15:52:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/livy/repl/commons-codec-1.9.jar",
"18/09/12 15:52:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/livy/repl/livy-core_2.11-0.4.0-SNAPSHOT.jar",
"18/09/12 15:52:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/livy/repl/livy-repl_2.11-0.4.0-SNAPSHOT.jar",
"18/09/12 15:52:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master1.lama.nuc:8020/lama/lama.main-assembly-0.9.0-spark2.3.0-hadoop2.6.5-SNAPSHOT.jar",
"18/09/12 15:52:52 INFO yarn.Client: Uploading resource file:/tmp/spark-37413ebc-9427-44d8-8a01-c4222eb899f8/__spark_conf__7516701035111969209.zip -> hdfs://master1.lama.nuc:8020/user/livy/.sparkStaging/application_1535188013308_0051/__spark_conf__.zip",
"18/09/12 15:52:53 INFO spark.SecurityManager: Changing view acls to: livy",
"18/09/12 15:52:53 INFO spark.SecurityManager: Changing modify acls to: livy",
"18/09/12 15:52:53 INFO spark.SecurityManager: Changing view acls groups to: ",
"18/09/12 15:52:53 INFO spark.SecurityManager: Changing modify acls groups to: ",
"18/09/12 15:52:53 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(livy); groups with view permissions: Set(); users with modify permissions: Set(livy); groups with modify permissions: Set()",
"18/09/12 15:52:57 INFO yarn.Client: Submitting application application_1535188013308_0051 to ResourceManager",
"18/09/12 15:52:57 INFO impl.YarnClientImpl: Submitted application application_1535188013308_0051",
"18/09/12 15:52:57 INFO yarn.Client: Application report for application_1535188013308_0051 (state: ACCEPTED)",
"18/09/12 15:52:57 INFO yarn.Client: ",
"\t client token: N/A",
"\t diagnostics: N/A",
"\t ApplicationMaster host: N/A",
"\t ApplicationMaster RPC port: -1",
"\t queue: root.users.livy",
"\t start time: 1536760377659",
"\t final status: UNDEFINED",
"\t tracking URL: http://master1.lama.nuc:8088/proxy/application_1535188013308_0051/",
"\t user: livy",
"18/09/12 15:52:57 INFO util.ShutdownHookManager: Shutdown hook called",
"18/09/12 15:52:57 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-795d9b05-a5ad-4930-ad8b-77034022bc17",
"18/09/12 15:52:57 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-37413ebc-9427-44d8-8a01-c4222eb899f8",
"\nYARN Diagnostics: "
]
}
What is wrong here? I am using CDH 5.15.0 with Parcels and Spark2. Using Scala works with no problems.
Follow-up
I set the deploy mode from cluster to client. The KeyError goes away, but when I try to run even a simple sc.version, I get "Interpreter died" with no traceback or error whatsoever.
I faced the same issue and solved it by upgrading to Livy 0.5.0.
Apparently CDH 5.15.0 includes a fix for a security vulnerability (CVE-2018-1334) that introduced an incompatibility with Livy < 0.5.0. Credit goes to Marcelo Vanzin for posting this on the livy-user mailing list.
Yesterday I sent a lot of messages to Kafka and consumed them, and everything worked fine. But today, when I start the server, it shuts down. I have no idea why this is happening, as I am still learning Kafka. This is what appears in the server console (I notice the lines repeating for offsets 0 to 49):
INFO [Log partition=Ranking-0, dir=C:\kafka-logs\kafka-logs-0] Completed load of log with 1 segments, log start offset 0 and log end offset 5 in 120 ms (kafka.log.Log)
[2018-06-17 12:18:08,379] WARN [Log partition=__consumer_offsets-0, dir=C:\kafka-logs\kafka-logs-0] Found a corrupted index file corresponding to log file C:\kafka-logs\kafka-logs-0\__consumer_offsets-0\00000000000000000000.log due to Corrupt index found, index file (C:\kafka-logs\kafka-logs-0\__consumer_offsets-0\00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-17 12:18:08,383] INFO [Log partition=__consumer_offsets-0, dir=C:\kafka-logs\kafka-logs-0] Recovering unflushed segment 0 (kafka.log.Log)
[2018-06-17 12:18:08,394] INFO [Log partition=__consumer_offsets-0, dir=C:\kafka-logs\kafka-logs-0] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2018-06-17 12:18:08,396] INFO [Log partition=__consumer_offsets-0, dir=C:\kafka-logs\kafka-logs-0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 23 ms (kafka.log.Log)
And some lines like:
[2018-06-17 12:18:09,170] INFO [ThrottledRequestReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:09,170] INFO [ThrottledRequestReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:09,170] INFO [ThrottledRequestReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
And ending with:
[2018-06-17 12:18:10,173] INFO [ThrottledRequestReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:10,173] INFO [ThrottledRequestReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:10,173] INFO [ThrottledRequestReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:11,175] INFO [ThrottledRequestReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:11,175] INFO [ThrottledRequestReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-06-17 12:18:11,181] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2018-06-17 12:18:11,181] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-06-17 12:18:11,183] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
Below is my server.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
host=9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=C:/kafka-logs/kafka-logs-0
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
I have Tomcat 7.0.42 and ActiveMQ 5.10, and have added the following resource to my context.xml file:
<Resource
auth="Container"
brokerName="MyActiveMQBrokerXML"
description="JMS Connection Factory"
factory="org.apache.activemq.jndi.JNDIReferenceFactory"
name="jms/ConnectionFactory"
type="org.apache.activemq.ActiveMQConnectionFactory"
useEmbeddedBroker="true"
brokerURL="vm://localhost?brokerConfig=xbean:activemq.xml"
/>
When starting Tomcat via its built-in startup script, I get the following in the console output:
2015-01-27 09:49:24,064 [localhost-startStop-1] INFO org.apache.activemq.store.kahadb.plist.PListStoreImpl- PListStore:[C:\tomcat\apache-tomcat-7.0.57\bin\activemq-data\MyActiveMQBroker\tmp_storage] started
2015-01-27 09:49:24,068 [localhost-startStop-1] INFO org.apache.activemq.broker.BrokerService- Using Persistence Adapter: KahaDBPersistenceAdapter[C:\tomcat\apache-tomcat-7.0.57\bin\activemq-data\MyActiveMQBroker\KahaDB]
2015-01-27 09:49:24,471 [localhost-startStop-1] INFO org.apache.activemq.store.kahadb.MessageDatabase- KahaDB is version 5
2015-01-27 09:49:24,491 [localhost-startStop-1] INFO org.apache.activemq.store.kahadb.MessageDatabase- Recovering from the journal ...
2015-01-27 09:49:24,492 [localhost-startStop-1] INFO org.apache.activemq.store.kahadb.MessageDatabase- Recovery replayed 3 operations from the journal in 0.01 seconds.
2015-01-27 09:49:24,663 [localhost-startStop-1] INFO org.apache.activemq.broker.BrokerService- Apache ActiveMQ 5.10.0 (MyActiveMQBroker, ID:Jacob-PC-55865-1422373764525-0:1) is starting
2015-01-27 09:49:24,707 [localhost-startStop-1] INFO org.apache.activemq.broker.TransportConnector- Connector vm://localhost?brokerConfig=xbean:activemq.xml started
2015-01-27 09:49:24,707 [localhost-startStop-1] INFO org.apache.activemq.broker.BrokerService- Apache ActiveMQ 5.10.0 (MyActiveMQBroker, ID:Jacob-PC-55865-1422373764525-0:1) started
2015-01-27 09:49:24,708 [localhost-startStop-1] INFO org.apache.activemq.broker.BrokerService- For help or more information please see: http://activemq.apache.org
2015-01-27 09:49:24,711 [localhost-startStop-1] ERROR org.apache.activemq.broker.BrokerService- Memory Usage for the Broker (1024 mb) is more than the maximum available for the JVM: 247 mb - resetting to 70% of maximum available: 173 mb
2015-01-27 09:49:24,728 [localhost-startStop-1] WARN org.apache.activemq.broker.BrokerRegistry- Broker localhost not started so using MyActiveMQBroker instead
Jan 27, 2015 9:49:24 AM org.apache.catalina.startup.HostConfig deployDirectory
It appears to me that Tomcat is not looking for the activemq.xml file, or is at least not using it. This exact configuration works if I start Tomcat through Eclipse, but that is not a viable option for a production system.
I found that when deploying to production you have to use the absolute path to the activemq.xml file to get Tomcat to find it. Although the relative path is supposed to work, I could never get it to.
brokerURL="vm://localhost?brokerConfig=xbean:activemq.xml"
This supposedly only works if you've added activemq.xml to the classpath. Note the presence of 'file' in 'xbean:file:/some/path/activemq.xml', as in this TomEE resource definition:
<Resource id="MyJmsResourceAdapter" type="ActiveMQResourceAdapter">
BrokerXmlConfig = xbean:file:/usr/share/apache-tomee-plus-8.0.0-M1/conf/activemq.xml
ServerUrl = tcp://localhost:61616
</Resource>
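Applying the same idea to plain Tomcat, the original context.xml entry would use an absolute xbean:file: URL instead. A sketch (the path shown is an example; point it at your actual activemq.xml):

```xml
<Resource
    auth="Container"
    brokerName="MyActiveMQBrokerXML"
    description="JMS Connection Factory"
    factory="org.apache.activemq.jndi.JNDIReferenceFactory"
    name="jms/ConnectionFactory"
    type="org.apache.activemq.ActiveMQConnectionFactory"
    useEmbeddedBroker="true"
    brokerURL="vm://localhost?brokerConfig=xbean:file:/opt/tomcat/conf/activemq.xml"
/>
```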
I'm having a problem where mkdir() keeps returning false even though I have done everything necessary. The following is my Activity code:
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    File folder = new File(Environment.getExternalStorageDirectory().toString() + "/myappfolder");
    boolean maked = folder.mkdir();
    System.out.println("the folder is maked or not? " + maked);
    setContentView(R.layout.activity_main);
}
folder.mkdir() returns false.
Yes, I have included
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
in my manifest.
I have also made sure the folder does not already exist on my SD card.
In one of my projects it displays:
06-22 01:23:58.130: W/ActivityManager(143): No content provider found for permission revoke: file:///data/local/tmp/simulation.apk
I even tried unplugging the device from my computer in case the SD card was mounted on my desktop.
I have also tried on a Samsung Galaxy S3.
I'm mainly using an Acer A100 tablet running ICS after a factory data reset.
I suspect something is blocking my tablet from writing to the SD card.
I would really appreciate it if someone could come up with a solution.
The following is my logcat output:
06-22 04:33:55.850: D/AndroidRuntime(18556): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<<
06-22 04:33:55.850: D/AndroidRuntime(18556): CheckJNI is OFF
06-22 04:33:56.030: D/AndroidRuntime(18556): Calling main entry com.android.commands.pm.Pm
06-22 04:33:56.030: D/AndroidRuntime(18556): Shutting down VM
06-22 04:33:56.040: D/dalvikvm(18556): GC_CONCURRENT freed 100K, 83% free 454K/2560K, paused 0ms+0ms
06-22 04:33:56.040: D/dalvikvm(18556): Debugger has detached; object registry had 1 entries
06-22 04:33:56.590: D/AndroidRuntime(18570): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<<
06-22 04:33:56.590: D/AndroidRuntime(18570): CheckJNI is OFF
06-22 04:33:56.760: D/AndroidRuntime(18570): Calling main entry com.android.commands.pm.Pm
06-22 04:33:56.770: W/ActivityManager(143): No content provider found for permission revoke: file:///data/local/tmp/myapp.apk
06-22 04:33:56.780: W/ActivityManager(143): No content provider found for permission revoke: file:///data/local/tmp/myapp.apk
06-22 04:33:56.850: I/PackageManager(143): Running dexopt on: com.android.myapp
06-22 04:33:57.150: D/dalvikvm(18581): DexOpt: load 41ms, verify+opt 183ms
06-22 04:33:57.230: I/ActivityManager(143): Force stopping package com.android.myapp uid=10071
06-22 04:33:57.320: D/PackageManager(143): New package installed in /data/app/com.android.myapp-1.apk
06-22 04:33:57.320: W/PackageManager(143): Unknown permission android.permission.READ_EXTERNAL_STORAGE in package com.android.myapp
06-22 04:33:57.440: D/PackageManager(143): generateServicesMap(android.accounts.AccountAuthenticator): 3 services unchanged
06-22 04:33:57.450: I/Launcher(1776): setLoadOnResume
06-22 04:33:57.470: D/dalvikvm(18159): GC_FOR_ALLOC freed 227K, 8% free 6680K/7239K, paused 17ms
06-22 04:33:57.490: D/SyncManager(143): setIsSyncable: Account {name=chinchyekoo#gmail.com, type=com.google}, provider com.google.android.gallery3d.GooglePhotoProvider -> 0
06-22 04:33:57.490: D/SyncManager(143): setIsSyncable: already set to 0, doing nothing
06-22 04:33:57.520: D/PackageManager(143): generateServicesMap(android.content.SyncAdapter): 18 services unchanged
06-22 04:33:57.570: D/BackupManagerService(143): Received broadcast Intent { act=android.intent.action.PACKAGE_ADDED dat=package:com.android.myapp flg=0x10000010 (has extras) }
06-22 04:33:57.570: V/BackupManagerService(143): addPackageParticipantsLocked: com.android.myapp
06-22 04:33:57.660: D/dalvikvm(143): GC_EXPLICIT freed 1056K, 24% free 11640K/15239K, paused 6ms+7ms
06-22 04:33:57.670: D/AndroidRuntime(18570): Shutting down VM
06-22 04:33:57.680: D/dalvikvm(18570): GC_CONCURRENT freed 101K, 83% free 459K/2560K, paused 0ms+0ms
06-22 04:33:57.680: D/jdwp(18570): Got wake-up signal, bailing out of select
06-22 04:33:57.680: D/dalvikvm(18570): Debugger has detached; object registry had 1 entries
06-22 04:33:57.680: I/AndroidRuntime(18570): NOTE: attach of thread 'Binder Thread #3' failed
06-22 04:33:57.830: D/dalvikvm(1776): GC_CONCURRENT freed 2458K, 47% free 10173K/19079K, paused 2ms+6ms
06-22 04:33:57.840: D/dalvikvm(17407): GC_CONCURRENT freed 276K, 6% free 7177K/7559K, paused 16ms+3ms
06-22 04:33:57.930: D/dalvikvm(1776): GC_FOR_ALLOC freed 994K, 47% free 10233K/19079K, paused 32ms
06-22 04:33:58.020: D/dalvikvm(1776): GC_FOR_ALLOC freed 769K, 45% free 10581K/19079K, paused 21ms
06-22 04:33:58.080: D/AndroidRuntime(18593): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<<
06-22 04:33:58.080: D/AndroidRuntime(18593): CheckJNI is OFF
06-22 04:33:58.090: D/dalvikvm(1776): GC_CONCURRENT freed 5K, 38% free 11836K/19079K, paused 2ms+3ms
06-22 04:33:58.180: D/dalvikvm(1776): GC_CONCURRENT freed 1398K, 37% free 12143K/19079K, paused 1ms+4ms
06-22 04:33:58.290: D/dalvikvm(1776): GC_FOR_ALLOC freed 2316K, 42% free 11191K/19079K, paused 25ms
06-22 04:33:58.370: D/AndroidRuntime(18593): Calling main entry com.android.commands.am.Am
06-22 04:33:58.370: I/ActivityManager(143): START {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.android.myapp/.MainActivity} from pid 18593
06-22 04:33:58.420: D/AndroidRuntime(18593): Shutting down VM
06-22 04:33:58.420: I/AndroidRuntime(18593): NOTE: attach of thread 'Binder Thread #3' failed
06-22 04:33:58.420: D/dalvikvm(18593): GC_CONCURRENT freed 102K, 81% free 489K/2560K, paused 0ms+1ms
06-22 04:33:58.430: D/jdwp(18593): Got wake-up signal, bailing out of select
06-22 04:33:58.430: D/dalvikvm(18593): Debugger has detached; object registry had 1 entries
06-22 04:33:58.470: D/dalvikvm(18604): Late-enabling CheckJNI
06-22 04:33:58.490: I/ActivityManager(143): Start proc com.android.myapp for activity com.android.myapp/.MainActivity: pid=18604 uid=10071 gids={1015}
06-22 04:33:58.570: D/OpenGLRenderer(18507): Flushing caches (mode 1)
06-22 04:33:58.580: D/OpenGLRenderer(18507): Flushing caches (mode 0)
06-22 04:33:58.660: I/System.out(18604): folder newed going to make folder
06-22 04:33:58.660: I/System.out(18604): here/mnt/sdcard/myappfolder
06-22 04:33:58.660: I/System.out(18604): the folder is maked or not? false
06-22 04:33:58.680: D/dalvikvm(1776): GC_CONCURRENT freed 2487K, 47% free 10154K/19079K, paused 2ms+47ms
06-22 04:33:58.710: D/dalvikvm(18239): GC_CONCURRENT freed 339K, 6% free 6935K/7367K, paused 3ms+4ms
06-22 04:33:58.760: D/libEGL(18604): loaded /system/lib/egl/libEGL_tegra.so
06-22 04:33:58.780: D/libEGL(18604): loaded /system/lib/egl/libGLESv1_CM_tegra.so
06-22 04:33:58.810: D/dalvikvm(1776): GC_FOR_ALLOC freed 675K, 46% free 10318K/19079K, paused 35ms
06-22 04:33:58.830: D/libEGL(18604): loaded /system/lib/egl/libGLESv2_tegra.so
06-22 04:33:58.870: D/OpenGLRenderer(1776): Flushing caches (mode 1)
06-22 04:33:58.870: D/OpenGLRenderer(18604): Enabling debug mode 0
06-22 04:33:58.910: D/dalvikvm(1776): GC_FOR_ALLOC freed 642K, 46% free 10474K/19079K, paused 46ms
06-22 04:33:58.940: D/OpenGLRenderer(1776): Flushing caches (mode 0)
06-22 04:33:58.970: I/ActivityManager(143): Displayed com.android.myapp/.MainActivity: +516ms
06-22 04:33:59.000: D/dalvikvm(1776): GC_FOR_ALLOC freed 941K, 47% free 10204K/19079K, paused 41ms
06-22 04:33:59.010: I/dalvikvm-heap(1776): Grow heap (frag case) to 11.236MB for 1286224-byte allocation
06-22 04:33:59.050: D/dalvikvm(1776): GC_CONCURRENT freed 31K, 41% free 11429K/19079K, paused 2ms+4ms
06-22 04:34:00.210: E/DigitalAppWidgetUpdateService(17962): onReceive():948: screen mode = ORIENTATION_PORTRAIT
06-22 04:34:00.250: D/dalvikvm(17962): GC_CONCURRENT freed 321K, 8% free 6809K/7367K, paused 1ms+2ms
Two things.
Your code reads
File folder = new File(Environment.getExternalStorageDirectory().toString() + "/myappfolder" );
and you'd be better off using this syntax
File folder = new File(Environment.getExternalStorageDirectory().toString() , "myappfolder" );
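The two-argument constructor joins the parent and child with the platform's separator for you, so you can't accidentally drop or double a slash. A plain-Java illustration of the equivalence (paths are examples, not your device's actual mount point):

```java
import java.io.File;

public class FileCtorDemo {
    public static void main(String[] args) {
        // String concatenation: you must supply the separator yourself
        File concatenated = new File("/mnt/sdcard" + "/myappfolder");
        // Two-arg constructor: File inserts the separator as needed
        File joined = new File("/mnt/sdcard", "myappfolder");
        System.out.println(concatenated.getPath().equals(joined.getPath())); // true
    }
}
```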
If you're debugging your code, it's possible that the external SD card has been dismounted. Go to Settings -> Storage and verify that your SD card is accessible to apps on your device.
After seeing your logcat, one thing seems to be causing you trouble. Your logcat has this line in it:
06-22 04:33:57.320: W/PackageManager(143): Unknown permission
android.permission.READ_EXTERNAL_STORAGE in package com.android.myapp
which means you're actually having trouble with your manifest: you're targeting an Android version that didn't have that permission implemented yet. I suspect that if you remove that permission, your troubles will cease. According to the permission's description you don't need it anyway, since you're already requesting WRITE_EXTERNAL_STORAGE.
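One more pitfall worth ruling out, since it is a frequent cause of mkdir() returning false: File.mkdir() only creates the final path element and fails if any parent directory is missing, whereas mkdirs() creates the whole chain. A plain-Java sketch using a temporary directory (names are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirDemo {
    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("mkdir-demo").toFile();
        File nested = new File(base, "parent/child");

        // mkdir() fails because "parent" does not exist yet
        System.out.println("mkdir:  " + nested.mkdir());   // false
        // mkdirs() creates every missing directory on the path
        System.out.println("mkdirs: " + nested.mkdirs());  // true
    }
}
```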