I've just installed orion-psb-image-R3.3 using the image available in FI-LAB. When I start the Orion instance, I get a message suggesting there is some problem with contextBroker.pid:
"Starting contextBroker... cat: /var/log/contextBroker/contextBroker.pid: No existe el fichero o el directorio" (Spanish locale for "No such file or directory")
Despite this message, the context broker starts properly. I've just deployed a new machine with the aforementioned image; is there anything I need to do before running /etc/init.d/contextBroker start?
Thanks
This 'error message' is harmless.
When the broker starts, a check is made to see whether a broker is already running (and the PID file is needed for that check). As the PID file doesn't exist yet, this unfortunate message pops up. We are working on improving the start/stop script, probably to be included in our next release (0.13.0).
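For the curious, the check is roughly equivalent to the following (a simplified sketch, not the actual init script):

PID_FILE=/var/log/contextBroker/contextBroker.pid
PID=$(cat "$PID_FILE")             # on a fresh machine this cat fails and prints the message
if [ -n "$PID" ] && kill -0 "$PID" 2>/dev/null; then
    echo "contextBroker is already running (pid $PID)"
    exit 1
fi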
I have installed Cassandra on Kubernetes (9 pods). All the pods are up and running except for one pod, which shows the error below.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Encountered bad header at position 47137 of commit log /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log, with bad position but valid CRC
at org.apache.cassandra.db.commitlog.CommitLogReplayer.shouldSkipSegmentOnError(CommitLogReplayer.java:438)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleUnrecoverableError(CommitLogReplayer.java:452)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:109)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:84)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:236)
at org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:134)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:154)
at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:213)
at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:194)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:527)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:702)
at com.datastax.bdp.DseModule.main(DseModule.java:96)
Caused by: org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: Encountered bad header at position 47137 of commit log /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log, with bad position but valid CRC
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:111)
... 12 more
ERROR [main] 2021-09-06 06:19:08,990 JVMStabilityInspector.java:251 - JVM state determined to be unstable. Exiting forcefully due to:
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Encountered bad header at position 47137 of commit log /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log, with bad position but valid CRC
at org.apache.cassandra.db.commitlog.CommitLogReplayer.shouldSkipSegmentOnError(CommitLogReplayer.java:438)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleUnrecoverableError(CommitLogReplayer.java:452)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:109)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:84)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:236)
at org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:134)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:154)
at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:213)
at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:194)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:527)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:702)
at com.datastax.bdp.DseModule.main(DseModule.java:96)
Can someone help me out, please?
For whatever reason, one of the commit log segments got corrupted on the node.
You can work around the issue by manually deleting this file on the pod:
/var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log
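For example, assuming the affected pod is named cassandra-0 (substitute your own pod name and namespace):

kubectl exec -n <namespace> cassandra-0 -- rm /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log
kubectl delete pod -n <namespace> cassandra-0   # let the StatefulSet recreate the pod so Cassandra replays cleanly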
Interestingly, that commit log segment was created on September 2 (1630582314923) but the log entry you posted was from September 6. This indicates something happened to the pod which resulted in the corrupted file.
You'll need to review the Cassandra logs on the pod (not the pod logs themselves) to determine the root cause and address it; one way to get at them is sketched below. Cheers!
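Assuming the same hypothetical pod name and the default DSE log location:

kubectl exec -n <namespace> cassandra-0 -- tail -n 200 /var/log/cassandra/system.log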
I took the latest version of Druid, 0.16.0-incubating.
I have two questions:
1) As mentioned in the quickstart, micro-quickstart doesn't work: it complains about missing jvm.config and main.config files under conf/druid/single-server/micro-quickstart/coordinator-overlord.
2) Since micro-quickstart failed, I tried single-server-small.
I was trying to import data from Kafka in single-server-small but was unable to do so, as it says the extension is not loaded, even though I can see it being loaded in the logs.
But I think my main problem is that whenever I land on the 'Load data' section of the Druid web page at localhost:8888, it keeps giving me the error below:
"Failed to get overlord modules : Unable to determine destination for [/proxy/overlord/status]; is your coordinator/overlord running ?"
I can see that my coordinator-overlord process is up.
Any suggestions ?
Thanks
Delete your Kafka logs and Druid logs (the locations set up in common.runtime.properties) and then try again.
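For example, assuming the default quickstart layout (adjust the paths to whatever common.runtime.properties actually points at), and with Druid and Kafka stopped first:

rm -rf var/                  # Druid's default quickstart state and log directory, relative to the install dir
rm -rf /tmp/kafka-logs       # Kafka's default log.dirs
rm -rf /tmp/zookeeper        # ZooKeeper's default dataDir, if you run the bundled one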
We have an application on Solaris. During a specific test case we generate a heap dump, which is written to the server at a specific path. During this test we get the following error in the trace file:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /ossrc/upgrade/JREheapdumps/java_pid16092.hprof ...
Dump file is incomplete: I/O error
and in /var/adm/messages we can see:
Oct 28 13:00:10 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /ossrc/upgrade]NFS server mashost not
responding; still trying
Oct 28 13:02:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /usr/local]NFS server mashost not
responding; still trying
Oct 28 13:04:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /etc/opt/ericsson]NFS server mashost not
responding; still trying
Can anyone please explain why we are getting this problem, and can an application cause this impact on mashost?
First things first, check the state of the NFS services with svcs. When it crashes, run:
# svcs -x nfs/client
on the client, and
# svcs -x nfs/server
on the server. I would expect one or both to be in the "maintenance" state (you may see that it fails to start properly at all). If it is in maintenance mode, you should see a row marked "Reason:" that says why.
You might see "offline" -- in that case, svc.startd will attempt to restart the service multiple times and, if it fails after five attempts or hangs indefinitely, will place it into the "maintenance" state and stop restarting it.
Check the logs in
/var/svc/log/<service-name FMRI>.log
There will be one on your client machine under "network-nfs-client:default" (probably; it may have a name other than 'default' if it's been changed manually), and one on the server under "network-nfs-server:default".
See what you can glean from those.
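For example, on the client (the instance name may differ from 'default'):

# tail -50 /var/svc/log/network-nfs-client:default.log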
SMF automatically takes snapshots of service configuration as backups, so you can try reverting to one of those with svccfg:
# svccfg -s nfs/server:default
svc:/network/nfs/server:default> listsnap
svc:/network/nfs/server:default> revert [name_of_snapshot]
svc:/network/nfs/server:default> quit
# svcadm refresh nfs/server:default
# svcadm restart nfs/server:default
Make sure to include the ":default" tag (or, if you saw a different tag from "svcs nfs/server", include that one); the name identifies an instance of the service, and every running service is an instance.
If the process is failing to start, you might have to look at the XML manifest under /lib/svc/manifest/network/nfs/ -- inside, you'll see dependencies (and services dependent on this one), then "exec_method" elements, which define how the service starts, stops and restarts.
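An exec_method element looks roughly like this (an illustrative sketch, not the verbatim NFS manifest):

<exec_method
    type='method'
    name='start'
    exec='/lib/svc/method/nfs-server start'
    timeout_seconds='3600' />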
Instead of snapshots, you can also restore it to default: use svccfg -s <FMRI> delete to clear it, then svcadm refresh <FMRI> and svcadm enable <FMRI>.
If the service is in maintenance state, once you've isolated and fixed the problem, you can manually clear that state by running svcadm clear <FMRI>.
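For example, if the server instance was the one stuck in maintenance:

# svcadm clear nfs/server:default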
I'm trying to deploy a project which uses CometD 2.9.1 in WildFly 8.1.0.
The project worked in GlassFish 4.0; however, it won't work in WildFly. It can be deployed with no problem, but it gets stuck after I visit http://localhost:8080/Cometd3/ (yes, the project name is Cometd3, while I ended up using CometD 2.9.1).
It's weird because there is no error or exception message, even though I enabled debug logging.
Because my log is too long to post here and I have no idea what the problem is, please download my log file here.
Please note that the log just stops at:
2014-09-17 14:57:09,893 FINE [javax.enterprise.resource.webcontainer.jsf.application] (default task-4) servletPath /faces
2014-09-17 14:57:09,900 FINE [javax.enterprise.resource.webcontainer.jsf.application] (default task-5) servletPath /faces
2014-09-17 14:57:09,907 FINE [javax.enterprise.resource.webcontainer.jsf.application] (default task-6) servletPath /faces
2014-09-17 14:58:44,505 DEBUG [org.jboss.ejb.client.txn] (Periodic Recovery) Send recover request for transaction origin node identifier 1 to EJB receiver with node name 972909-a43sv
And there are no other log entries after those messages.
What's more, when I say it is stuck, I mean not only that the server side is stuck, but also the browser: the page stays totally white and keeps loading when I open it.
Any thoughts? Thanks!
I can answer only from the CometD side of things.
The logs show some server-side CometD activity (a LocalSession called "echo" handshakes), but after that it's as if there are no requests to the server, so CometD has nothing to do and just sits idle.
From your description, this looks more like a server problem than a CometD problem.
CometD typically runs very well in Jetty; I would try to deploy your application in Jetty and see how it goes.
I have a web application running in JBoss AS 4.2.2.
I observed that the JBoss server automatically shuts down, and the following messages appear in server.log:
14:20:38,048 INFO [Server] Runtime shutdown hook called, forceHalt: true
14:20:38,049 INFO [Server] JBoss SHUTDOWN: Undeploying all packages
I want to enable TRACE for org.jboss.system.server.Server in jboss-log4j.xml, to hopefully get some more info when the server shuts down. Please let me know how to do this.
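I'm guessing it needs a category entry along these lines (adapted from the commented-out examples already in jboss-log4j.xml), but I'm not sure:

<category name="org.jboss.system.server.Server">
   <priority value="TRACE" class="org.jboss.logging.XLevel"/>
</category>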
Update: I was able to add TRACE for the server logger, and I could see the following output when JBoss AS shuts down automatically:
2010-06-09 19:07:46,631 DEBUG [org.jboss.wsf.stack.jbws.RequestHandlerImpl] END handleRequest: jboss.ws:context=hpnp_lqs,endpoint=APIWebService
2010-06-09 19:07:46,631 DEBUG [org.jboss.ws.core.soap.MessageContextAssociation] popMessageContext: org.jboss.ws.core.jaxws.handler.SOAPMessageContextJAXWS#3290a11e (Thread http-0.0.0.0-8080-1)
2010-06-09 19:07:55,895 INFO [org.jboss.system.server.Server] Runtime shutdown hook called, forceHalt: true
2010-06-09 19:07:55,895 TRACE [org.jboss.system.server.Server] Shutdown caller:
java.lang.Throwable: Here
at org.jboss.system.server.ServerImpl$ShutdownHook.shutdown(ServerImpl.java:1017)
at org.jboss.system.server.ServerImpl$ShutdownHook.run(ServerImpl.java:996)
2010-06-09 19:07:55,895 INFO [org.jboss.system.server.Server] JBoss SHUTDOWN: Undeploying all packages
If anybody has any clue on what might be the cause of the automatic shutdown, please help me.
Thanks!
There's a JBoss wiki page listing log output for various shutdown causes. It looks like yours was caused by a Ctrl-C. I assume you would have known if you hit Ctrl-C, though.
On Unix-type servers, Ctrl-C generates an INT signal; the same shutdown hook also runs on a TERM signal, which could come from someone or some script running as your jboss user or as root executing "kill <jboss pid>". If you're on Linux, I'd take a look at this question about the OOM killer.
One possible cause for this behaviour is console logout. We have observed this with our own server.
In brief, by default the Sun JVM listens for the console user logging out and shuts itself down automatically when that happens. To disable this, start the JVM with the -Xrs parameter.
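For JBoss, a typical place to add it is the JAVA_OPTS variable in the startup configuration (a sketch; the exact file depends on your setup):

# in $JBOSS_HOME/bin/run.conf, or wherever JAVA_OPTS is set:
JAVA_OPTS="$JAVA_OPTS -Xrs"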
See here for more details (look for Mysterious shutdowns).
One possible cause for a forced shutdown is the virtual machine running out of memory.
I had this problem several years ago when a colleague implemented some very nasty bulk loading of objects from a database, which caused JBoss to shut down on certain requests.
Try searching for "memory" or similar keywords in the log file and/or monitor the memory usage of the server.