JBoss server log file

I am trying to run JBoss, but I get the following error:
[javac] C:\Program Files\jbpm-5.0-try3\jbpm-installer\build.xml:518: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[java] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[java] SLF4J: Defaulting to no-operation (NOP) logger implementation
[java] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[java] Task service started correctly !
[java] Task service running ...
Also, it does not run on port 8080.
What could be the problem?
How can I see the log file?

About the log files:
JBoss keeps its log files in %JBOSS%\server\default\log (where %JBOSS% is your JBoss installation directory).
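For example, to dump the main log from a Windows console (a sketch, assuming %JBOSS% is set as an environment variable; server.log is the default log file, and boot.log covers startup before logging is fully initialized):
type "%JBOSS%\server\default\log\server.log"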
About port 8080:
I would check whether some other server is already configured to listen on 8080. Especially if you are doing a lot of experimental installations, you can end up with several JBoss, Tomcat, and GlassFish instances listening on the same port, so most of the servers won't receive their requests. It has happened to me at least once (but don't tell anybody).
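For example, on Windows you can check what is already bound to the port (the PID in the last column identifies the process):
netstat -ano | findstr :8080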

It could also be that logging failed to initialize and subsequent errors prevented your application from deploying correctly.
Check this first:
Failed to load class org.slf4j.impl.StaticLoggerBinder
This error is reported when the org.slf4j.impl.StaticLoggerBinder class could not be loaded into memory. This happens when no appropriate SLF4J binding could be found on the class path. Placing one (and only one) of slf4j-nop.jar, slf4j-simple.jar, slf4j-log4j12.jar, slf4j-jdk14.jar or logback-classic.jar on the class path should solve the problem.
As of SLF4J version 1.6, in the absence of a binding, SLF4J will default to a no-operation (NOP) logger implementation.
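The jbpm-installer build is Ant-based, so the advice above applies directly: drop exactly one binding jar onto the class path. If your own project is built with Maven, the equivalent is one extra dependency, e.g. slf4j-simple (the version here is only an example; match it to your slf4j-api version):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.6.1</version>
</dependency>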

JBoss 7 (EAP 6) CLI configuration: 'queue-address' is not found among the supported properties: [selector, entries, durable]

I am on JBoss EAP 6 and my task is to migrate a server to the Cloud.
I get JBoss to start, but then some queue fails with:
[echo] try to connect to local JBoss...
Checking for listener at 127.0.0.1:17545
Checking for listener at 127.0.0.1:17545
waitfor: condition was met
Property "jboss.not.started" has not been set
[echo] ...connection is available.
[antcall] Exiting /app/project/app/jboss-6.4-inst1/pi-deploy/tools/extension/configure.xml.
[echo] env.JBOSS_HOME=/opt/inet/jboss-6.4
[java] Executing '/opt/dbsinfra/zst/jdk-1.8.0_161/jre/bin/java' with arguments:
[java] '-jar'
[java] '/opt/inet/jboss-6.4/jboss-modules.jar'
[java] '-mp'
[java] '/opt/inet/jboss-6.4/modules'
[java] 'org.jboss.as.cli'
[java] '--file=/app/project/app/jboss-6.4-inst1/pi-deploy/../bin/configure.cli'
[java]
[java] The ' characters around the executable and arguments are
[java] not part of the command.
[java] INFO [org.jboss.modules] JBoss Modules version 1.3.10.Final-redhat-1
[java] INFO [org.xnio] XNIO Version 3.0.16.GA-redhat-1
[java] INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.16.GA-redhat-1
[java] INFO [org.jboss.remoting] JBoss Remoting version 3.3.12.Final-redhat-2
[java] INFO [org.jboss.as.cli.CommandContext] The batch executed successfully
[java] The batch executed successfully
[java] ERROR [org.jboss.as.cli.CommandContext] 'queue-address' is not found among the supported properties: [selector, entries, durable]
[java] 'queue-address' is not found among the supported properties: [selector, entries, durable]
You can see that the server gets configured via the command-line interface (CLI):
[java] '--file=/app/project/app/jboss-6.4-inst1/pi-deploy/../bin/configure.cli'
The offending part of the configure.cli script is:
#######################################################################
#
# JMS Queues
#
#######################################################################
# JMS Queue for business events
/subsystem=messaging:add()
/subsystem=messaging/hornetq-server=default:add()
/subsystem=messaging/hornetq-server=default/jms-queue=BusinessEventQueue:add(\
entries=["/queue/BusinessEventQueue"],\
queue-address="jms.queue.BusinessEventQueue"\ <------ HERE
)
/subsystem=messaging/hornetq-server=default/in-vm-connector=in-vm:add(server-id="0")
/subsystem=messaging/hornetq-server=default/in-vm-acceptor=in-vm:add(server-id="0")
/subsystem=messaging/hornetq-server=default/pooled-connection-factory=InVmJMSConnectionFactory:add(\
entries=["java:/InVmJMSConnectionFactory"],\
connector={"in-vm" => undefined}\
)
/subsystem=ejb3:write-attribute(name="default-resource-adapter-name", value="InVmJMSConnectionFactory")
/subsystem=ejb3:write-attribute(name=default-mdb-instance-pool, value="mdb-strict-max-pool")
What I don't get here is:
We're moving from JBoss EAP 6.4 to JBoss EAP 6.4, and the old/previous server is running fine.
I have never dealt with anything JMS-related before...
Question:
What is queue-address="jms.queue.BusinessEventQueue" here? Is it some kind of name?
And how would one fix this? Replace it with a name parameter?
Thanks
PS: The situation is a little more complex, as I cannot just change a local file. The files are pulled from an SVN repo, so any change involves a commit... etc.
You cannot use queue-address as an attribute of the jms-queue add operation, because it is not among the supported properties. Try the command below; it should work:
/subsystem=messaging/hornetq-server=default/jms-queue=BusinessEventQueue:add(\
entries=["/queue/BusinessEventQueue"])
When a new JMS queue is created, queue-address is set to "jms.queue.BusinessEventQueue" by default. You can use the CLI command below to check the value:
/subsystem=messaging/hornetq-server=default/jms-queue=BusinessEventQueue:read-attribute(name=queue-address)
If you want to set queue-address while adding a new JMS queue, you have to use something like the following:
jms-queue add --queue-address=BusinessEventQueue --entries=/queue/BusinessEventQueue
The jms.queue prefix is added by default, so there is no need to pass that part.
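To double-check the result afterwards, you can also read the whole queue resource back in the same CLI session:
/subsystem=messaging/hornetq-server=default/jms-queue=BusinessEventQueue:read-resource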

Apache-Beam exception while running WordCount example in eclipse

I downloaded the Maven dependencies in Eclipse using:
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-direct-java</artifactId>
    <version>0.2.0-incubating</version>
</dependency>
When I download and run the WordCount example after changing the gs:// path to C://examples//misc.txt, I get the exception below. I did not pass any runner. How do I pass the runner option and output params when running from Eclipse?
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.IllegalStateException: Failed to validate C://examples//misc.txt
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:288)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:195)
at org.apache.beam.sdk.runners.PipelineRunner.apply(PipelineRunner.java:76)
at org.apache.beam.runners.direct.DirectRunner.apply(DirectRunner.java:205)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:401)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:324)
at org.apache.beam.sdk.values.PBegin.apply(PBegin.java:59)
at org.apache.beam.sdk.Pipeline.apply(Pipeline.java:174)
at org.apache.beam.examples.WordCount.main(WordCount.java:206)
Caused by: java.io.IOException: Unable to find handler for C://examples//misc.txt
at org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:188)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:283)
... 8 more
I believe Apache Beam parses the syntax C://directory/file like http://domain/file -- it thinks that C is a protocol name and that directory is a domain. The inner exception is saying that C is an unknown protocol.
Please avoid the :// separator when referring to local files. I'd suggest the regular Windows form: C:\directory\file.
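As for the second part of the question: the example reads its options from the program arguments, so in Eclipse you can set them under Run Configurations -> Arguments. A sketch, assuming the stock WordCount options interface (the exact flag names, e.g. inputFile, may differ in your copy of the example):
--inputFile=C:\examples\misc.txt --output=C:\examples\counts --runner=DirectRunner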

Apache phoenix not getting started

I have installed HBase 1.1.3 in a multi-node cluster configuration and want to run Apache Phoenix on top of it. I downloaded Phoenix 4.7 and installed it as per the guidelines mentioned here: https://phoenix.apache.org/installation.html
But when I run the following command, it hangs at the point shown below:
hadoop#hostname:~$ sqlline.py hostname
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/05/10 13:06:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Well, it seems that the Phoenix client is unable to connect to the HBase znode in the ZooKeeper cluster. Please do the following:
Check that ZooKeeper is up.
Check under what name you have registered HBase in ZooKeeper. If the name is not hbase, you need to tell the client; in that case the command would look like sqlline.py hostname:2181:/znode-for-hbase-name.
Check that you have added phoenix-[version]-server.jar to the lib folder on all HBase nodes, and try again.
You need to add the following jars to the HBase lib directory:
phoenix-spark-4.7.0-HBase-1.1.jar
phoenix-4.7.0-HBase-1.1-server.jar
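A minimal sketch of that step, assuming PHOENIX_HOME and HBASE_HOME point at your installations (repeat the copy on every HBase node, then restart HBase):
cp $PHOENIX_HOME/phoenix-spark-4.7.0-HBase-1.1.jar $HBASE_HOME/lib/
cp $PHOENIX_HOME/phoenix-4.7.0-HBase-1.1-server.jar $HBASE_HOME/lib/
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh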

typesafe activator ui not launching

play.api.Application$$anon$1: Execution exception[[IllegalArgumentException: requirement failed: Source file 'C:\Users\shriv_000\.activator\1.3.6\templates\index.db_e25b80033130c08.tmp' is a directory.]]
    at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.9.jar:2.3.9]
    at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.9.jar:2.3.9]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [play_2.11-2.3.9.jar:2.3.9]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [play_2.11-2.3.9.jar:2.3.9]
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) [scala-library.jar:0.13.8]
Caused by: java.lang.IllegalArgumentException: requirement failed: Source file 'C:\Users\shriv_000\.activator\1.3.6\templates\index.db_e25b80033130c08.tmp' is a directory.
    at scala.Predef$.require(Predef.scala:219) ~[scala-library.jar:0.13.8]
    at sbt.IO$.copyFile(IO.scala:584) ~[client-all-2-11-0.3.5.jar:0.3.5]
    at sbt.IO$.move(IO.scala:786) ~[client-all-2-11-0.3.5.jar:0.3.5]
    at activator.package$RichIO$.createViaTemporary$extension(package.scala:30) ~[activator-templates-cache-1.0-a0afb008ea619bf9d87dc010156cddffa8a6f880.jar:1.3.6]
    at activator.templates.repository.UriRemoteTemplateRepository$$anonfun$resolveIndexTo$1.apply(UriRemoteTemplateRepository.scala:228) ~[activator-templates-cache-1.0-a0afb008ea619bf9d87dc010156cddffa8a6f880.jar:1.0-a0afb008ea619bf9d87dc010156cddffa8a6f880]
[info] application - onStop received closing down the app
I am on Activator 1.3.6.
I have seen TypeSafe Activator installation error, but it might be different from my problem: I have been able to create and launch a Play project.
The solution provided there did not work for me.
I see this problem frequently.
I found a workaround: try executing Activator as an administrator.
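For example, open a Command Prompt with "Run as administrator" and start the UI from there (assuming activator is on your PATH):
activator ui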

Scalding Tutorial with HDFS: Data is missing from one or more paths in: List(tutorial/data/hello.txt)

After configuring ssh and rsync, when I try to run the Scalding tutorial (https://github.com/Cascading/scalding-tutorial/) with the command:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
I get the following error:
com.twitter.scalding.InvalidSourceException: [com.twitter.scalding.TextLineWrappedArray(tutorial/data/hello.txt)] Data is missing from one or more paths in: List(tutorial/data/hello.txt)
This error happens even though the file tutorial/data/hello.txt really exists.
How can I fix this?
Stdout:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
dkondratev@hadoop-n002.maxus.lan's password:
dkondratev@hadoop-n002.maxus.lan's password:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/phoenix/phoenix-4.0.0.2.1.2.1-471-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/07/07 19:05:45 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/07/07 19:05:45 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
Exception in thread "main" java.lang.Throwable: GUESS: Data is missing from the path you provided.
If you know what exactly caused this error, please consider contributing to GitHub via following link.
https://github.com/twitter/scalding/wiki/Common-Exceptions-and-possible-reasons#comtwitterscaldinginvalidsourceexception
at com.twitter.scalding.Tool$.main(Tool.scala:132)
at com.twitter.scalding.Tool.main(Tool.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I think the problem you are having is that you are telling Scalding to run on HDFS, but the file you are providing as input is in your local file system, not in HDFS. Before running the example, upload the file to HDFS:
hadoop fs -mkdir tutorial
hadoop fs -mkdir tutorial/data
hadoop fs -put tutorial/data/hello.txt tutorial/data/hello.txt
Try packing your job into a fat jar using the Maven Shade Plugin, and then run your Scalding job via the hadoop command:
hadoop jar your-uber.jar com.twitter.scalding.Tool bar.foo.MyClassJob --hdfs --input ... --output ...
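A minimal sketch of the Shade Plugin configuration for your pom.xml (the plugin version is only an example; adjust it to your build):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>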