Unable to start Drill in distributed mode - apache-zookeeper

I am trying to get Drill v1.18 running and am facing the error below.
drill-override.conf points to a ZooKeeper instance running on port 12181. When started in distributed mode, Drill fails with the following log output, whereas embedded mode has no issues.
It looks like a permission issue, but ZooKeeper, Drill, and the ZooKeeper data directory all run under the same user.
2020-05-10 16:23:01,160 [main] DEBUG o.apache.drill.exec.server.Drillbit - Construction started.
2020-05-10 16:23:01,448 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Connect localhost:12181, zkRoot drill, clusterId: drillbits1
2020-05-10 16:23:01,531 [main] INFO o.a.d.e.s.s.PersistentStoreRegistry - Using the configured PStoreProvider class: 'org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider'.
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Using Hadoop configuration for SSL
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Hadoop SSL configuration file: ssl-server.xml
2020-05-10 16:23:01,731 [main] DEBUG org.apache.drill.exec.ssl.SSLConfig - Initialized SSL context.
2020-05-10 16:23:01,731 [main] INFO o.a.drill.exec.rpc.user.UserServer - Rpc server configured to use TLS protocol 'TLSv1.2'
2020-05-10 16:23:01,738 [main] INFO o.apache.drill.exec.server.Drillbit - Construction completed (577 ms).
2020-05-10 16:23:01,738 [main] DEBUG o.apache.drill.exec.server.Drillbit - Startup begun.
2020-05-10 16:23:01,738 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Starting ZKClusterCoordination.
2020-05-10 16:23:03,775 [main] ERROR o.apache.drill.exec.server.Drillbit - Failure during initial startup of Drillbit.
org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /drill
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:351)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:230)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:224)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:221)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.createContainers(CuratorFrameworkImpl.java:265)
at org.apache.curator.framework.EnsureContainers.internalEnsure(EnsureContainers.java:69)
at org.apache.curator.framework.EnsureContainers.ensure(EnsureContainers.java:53)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.ensurePath(PathChildrenCache.java:596)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.rebuild(PathChildrenCache.java:327)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:304)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:252)
at org.apache.curator.x.discovery.details.ServiceCacheImpl.start(ServiceCacheImpl.java:99)
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:145)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:220)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
Drill 1.17 has no issues starting in distributed mode.

The issue here is with the ZooKeeper version. You are probably running ZooKeeper 3.4.x, but the current version of Drill requires 3.5.x. As a workaround, you may replace jars/ext/zookeeper-3.5.7.jar and jars/ext/zookeeper-jute-3.5.7.jar with the jars that correspond to your ZooKeeper version.

In addition to the answer of Vova Vysotskyi, you may find more information about this issue in the Drill documentation:
https://drill.apache.org/docs/distributed-mode-prerequisites/
Starting in Drill 1.18 the bundled ZooKeeper libraries are upgraded to version 3.5.7, preventing connections to older (< 3.5) ZooKeeper clusters. In order to connect to a ZooKeeper < 3.5 cluster, replace the ZooKeeper library JARs in ${DRILL_HOME}/jars/ext with zookeeper-3.4.x.jar then restart the cluster.
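As a concrete illustration, here is a rough sketch of that swap on one node, assuming DRILL_HOME is set and the cluster runs ZooKeeper 3.4.14 (use the jar matching your ZooKeeper release; in 3.4.x the jute classes are bundled in the main jar, so no separate jute jar is needed):
cd "$DRILL_HOME/jars/ext"
mv zookeeper-3.5.7.jar zookeeper-jute-3.5.7.jar /tmp/    # keep the bundled 3.5.x jars as a backup
curl -LO https://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.4.14/zookeeper-3.4.14.jar
"$DRILL_HOME/bin/drillbit.sh" restart                    # repeat the swap on every node, then restart the Drillbits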

Related

On-Premise Data-Prepper Connection Refused

Recently I've been experimenting with deploying stateful applications onto Kubernetes. For my dev environment everything is on-premise, either on my local machine or on remote VMs. I deployed OpenSearch through its Helm chart, got it and Dashboards up and running, and everything was going well. I am now trying to set up Data Prepper running as a Docker container on my local machine (the Kubernetes cluster is on remote VMs, not sure if this matters). I have the kube service that defines access to OpenSearch port-forwarded to my machine and am able to access it using "curl -u : https://localhost:9200 -k". Since my only interest is seeing it up and running, I don't care (yet) that it is insecure. When I set up my Data Prepper pipeline to hit OpenSearch in the exact same way, the connection is refused and I'm at a loss as to why.
pipelines.yaml:
simple-sample-pipeline:
  workers: 2
  delay: "5000"
  source:
    random:
  sink:
    - opensearch:
        hosts: [ "https://localhost:9200" ]
        insecure: true
        username: <user>
        password: <admin>
        index: test
data-prepper-config.yaml:
ssl: false
Docker command to run container:
docker run --name data-prepper \
-v C:/users/<profile>/documents/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
-v C:/users/<profile>/documents/data-prepper.yaml:/usr/share/data-prepper/data-prepper-config.yaml \
opensearchproject/data-prepper:latest
Log excerpt:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-06-07T19:39:50,959 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-06-07T19:39:50,960 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
2022-06-07T19:39:54,599 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building pipeline [simple-sample-pipeline] from provided configuration
2022-06-07T19:39:54,600 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [random] as source component for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,624 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building buffer for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,634 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building processors for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building sinks for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [opensearch] as sink component
2022-06-07T19:39:54,643 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2022-06-07T19:39:54,649 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2022-06-07T19:39:54,789 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2022-06-07T19:39:54,881 [main] ERROR com.amazon.dataprepper.plugin.PluginCreator - Encountered exception while instantiating the plugin OpenSearchSink
java.lang.reflect.InvocationTargetException: null
-----
Caused by: java.net.ConnectException: Connection refused

Running Apache Atlas standalone

I am trying to run Apache Atlas in a standalone fashion on Ubuntu - meaning without having to setup Solr and/or HBase.
What I did (according to the documentation: http://atlas.apache.org/0.8.1/InstallationSteps.html) was clone the Git repository and build the Maven project with embedded HBase and Solr:
mvn clean package -Pdist,embedded-hbase-solr
I unpacked the resulting tar.gz file and executed bin/atlas_start.py - without having changed any configuration. To my understanding of the documentation, that should actually start up HBase along with Atlas - right?
This is what I find in logs/application.log:
2017-11-30 17:14:24,093 INFO - [main:] ~ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (Atlas:216)
2017-11-30 17:14:24,093 INFO - [main:] ~ Server starting with TLS ? false on port 21000 (Atlas:217)
2017-11-30 17:14:24,093 INFO - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Atlas:218)
2017-11-30 17:14:27,684 INFO - [main:] ~ No authentication method configured. Defaulting to simple authentication (LoginProcessor:102)
2017-11-30 17:14:28,527 INFO - [main:] ~ Logged in user daniel (auth:SIMPLE) (LoginProcessor:77)
2017-11-30 17:14:31,777 INFO - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:189)
2017-11-30 17:14:39,456 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:39,594 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:40,593 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
...
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:56,185 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:56,186 ERROR - [main:] ~ ZooKeeper exists failed after 4 attempts (RecoverableZooKeeper:277)
2017-11-30 17:14:56,186 WARN - [main:] ~ hconnection-0x1dba4e060x0, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) (ZKUtil:544)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
To me it reads as if neither HBase nor ZooKeeper is started by the script.
Am I missing something?
Thanks for your hints!
OK, meanwhile I figured out the issue. The start script obviously does not execute the script conf/atlas-env.sh, which sets some environment variables. Among these are MANAGE_LOCAL_HBASE and MANAGE_LOCAL_SOLR. So if you set those two env vars to true (and set JAVA_HOME properly, which is needed for the embedded HBase), then Atlas automatically starts HBase and Solr - and we get a locally running instance of Atlas!
Maybe this helps someone who comes across the same issue in the future!
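For reference, here is a minimal sketch of those settings, assuming the embedded-hbase-solr build (the JAVA_HOME path is just an example; point it at your own JDK):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path; the embedded HBase needs a valid JDK
export MANAGE_LOCAL_HBASE=true                       # have atlas_start.py manage the bundled HBase
export MANAGE_LOCAL_SOLR=true                        # have atlas_start.py manage the bundled Solr
bin/atlas_start.py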
Update March 2021
There are two ways of running Apache Atlas:
A) Building it from scratch:
git clone https://github.com/apache/atlas
mvn clean install -DskipTests
mvn clean package -Pdist -DskipTests
Running atlas_start.py:
python <atlas-directory>/bin/atlas_start.py
B) Using docker image:
docker-compose.yml:
version: "3.3"
services:
  atlas:
    image: sburn/apache-atlas
    container_name: atlas
    ports:
      - "21000:21000"
    volumes:
      - "./bash_script:/app"
    command: bash -exc "/opt/apache-atlas-2.1.0/bin/atlas_start.py"
docker-compose up

GraphML import into Titan

I'm new to the Titan world. I would like to import data stored in a GraphML file into a database. Here is what I did:
1. I downloaded titan-1.0.0-hadoop1
2. I ran ./titan.sh
3. I ran ./gremlin.sh
4. In the Gremlin Console I wrote:
:remote connect tinkerpop.server ../conf/remote.yaml
5. Next, I wrote:
graph.io(IoCore.graphml()).readGraph("/tmp/file.graphml")
6. I got the message:
No such property: graph for class: groovysh_evaluate
Could you help me?
IMO the most interesting logs from gremlin-server.log:
84 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/gremlin-server.yaml
158 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
160 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
196 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
197 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
1111 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [conf/gremlin-server/titan-berkeleyje-server.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
...
1113 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
1499 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded nashorn ScriptEngine
2044 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
2488 [main] WARN org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Could not initialize gremlin-groovy ScriptEngine with scripts/empty-sample.groovy as script could not be evaluated - javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
2488 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and configured ScriptEngines.
2581 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
2582 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
2719 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
2720 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
...
You need to create a graph; the graph keyword isn't declared anywhere in your script.
This is briefly covered in the Titan Server documentation, but it is easily overlooked.
In step 5, you need to submit your script command to the remote server. In the Gremlin Console, you do this by starting your command with :submit or, for shorthand, :>.
The :> is the "submit" command, which sends the Gremlin on that line to the currently active remote.
:> graph.io(IoCore.graphml()).readGraph("/tmp/file.graphml")
If you don't submit the script to the remote server, the Gremlin Console will attempt to process the script within the console's JVM. graph is not defined locally, and that is why you saw the error in step 6.
Update: Based on your gremlin-server.log it looks like the issue is that the user that starts Titan with ./bin/titan.sh start doesn't have the appropriate file permissions to create the directory (db/berkeley) used by the default graph configuration (titan-berkeleyje-server.properties). Try updating the file permissions on the $TITAN_HOME directory.
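A rough sketch of that check/fix, assuming Titan was unpacked to $TITAN_HOME and is started by the current user:
ls -ld "$TITAN_HOME"                       # check who owns the Titan install directory
sudo chown -R "$(whoami)" "$TITAN_HOME"    # give ownership to the user that runs ./bin/titan.sh
"$TITAN_HOME/bin/titan.sh" stop
"$TITAN_HOME/bin/titan.sh" start           # the server should now be able to create db/berkeley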

Zookeeper: Connection request from old client will be dropped if server is in r-o mode

storm version: 0.8.2
zookeeper version: 3.4.5.
We have a small storm cluster (1 nimbus and 3 supervisors), so using just 1 zookeeper instance that's co-located with storm nimbus.
Infrequently we start getting the following errors in the zookeeper logs and our storm cluster comes to a standstill.
2014-04-05 13:27:32,885 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /10.0.1.183:56121
2014-04-05 13:27:32,886 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#793] - Connection request from old client /10.0.1.183:56121; will be dropped if server is in r-o mode
2014-04-05 13:27:32,886 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#832] - Client attempting to renew session 0x1452dd02834002e at /10.0.1.183:56121
2014-04-05 13:27:32,886 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#595] - Established session 0x1452dd02834002e with negotiated timeout 40000 for client /10.0.1.183:56121
On the storm end we start seeing the following in supervisor and worker logs:
2014-04-05 11:37:29 ConnectionStateManager [WARN] There are no ConnectionStateListeners registered.
2014-04-05 11:37:29 cluster [WARN] Received event :disconnected::none: with disconnected Zookeeper.
2014-04-05 11:37:31 ClientCnxn [WARN] Session 0x1452dd028340015 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
2014-04-05 11:37:42 CuratorFrameworkImpl [ERROR] Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:380)
at com.netflix.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:49)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:617)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Do we need to downgrade zookeeper to 3.3.3 or is there a known issue/config that we're missing?
We also experienced several issues with Storm 0.9 and ZooKeeper 3.4.x, though not exactly the one you describe.
The Storm mailing list also reports such incompatibility issues:
https://mail.google.com/mail/u/0/#search/label%3Astorm+zookeeper+3.4/144313a45ba069b5
https://mail.google.com/mail/u/0/#search/label%3Astorm+zookeeper+3.4/1447d95d10ce7582
The latter one points to this Storm pull request, which should hopefully let us use ZK 3.4.x with future versions of Storm once they are released:
https://github.com/apache/incubator-storm/pull/29
Until then, I would recommend downgrading ZK to 3.3.6 (you may install a separate ZK instance just for Storm if you absolutely need ZK 3.4.x for another system). You could also clone the Storm code and merge that pull request locally, or compile the latest version of trunk, but that's a bit adventurous and more tiresome than just waiting for those nice folks to deliver a new release for us :)
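If you go the separate-instance route, a rough sketch (download location, paths, and ports are examples; adjust to your environment):
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.3.6/zookeeper-3.3.6.tar.gz
tar xzf zookeeper-3.3.6.tar.gz && cd zookeeper-3.3.6
cp conf/zoo_sample.cfg conf/zoo.cfg        # single-node config; set dataDir (and clientPort if 2181 is taken)
bin/zkServer.sh start
# then point storm.zookeeper.servers / storm.zookeeper.port in storm.yaml at this instance and restart Storm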
A workaround for this situation is to clear Storm's data directory (configured in storm.yaml as storm.local.dir), then restart the supervisor. I did that in my test environment by clearing Storm's data directory and restarting both the nimbus and the supervisor.
I think it's caused by a previous crash of the Storm cluster, from which the supervisor cannot recover.
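A rough sketch of that workaround, assuming storm.local.dir in storm.yaml points at /mnt/storm and the daemons are launched from the Storm distribution directory (adapt the stop/start steps to however you run the processes):
# stop nimbus and the supervisors on the affected nodes first
rm -rf /mnt/storm/*          # clear Storm's local state (the storm.local.dir directory)
bin/storm nimbus &           # relaunch nimbus on the nimbus host
bin/storm supervisor &       # relaunch the supervisor on each worker node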

Unable to start JBoss from within Eclipse

I am unable to start JBoss 5.1.0.GA from Eclipse Indigo.
Eclipse shows me a message box saying 'Server JBoss v5.0 at localhost was unable to start within 500 seconds. If the server requires more time, try increasing the timeout in the server editor.', but in the console window I can see that JBoss has actually started.
Here is part of the log that I can see in the console window of Eclipse:
SecureDeploymentManager/remote - EJB3.x Default Remote Business Interface
SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager - EJB3.x Remote Business Interface
15:14:20,212 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
15:14:20,212 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureManagementView ejbName: SecureManagementView
15:14:20,222 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureManagementView/remote - EJB3.x Default Remote Business Interface
SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView - EJB3.x Remote Business Interface
15:14:20,252 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3
15:14:20,262 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureProfileServiceBean ejbName: SecureProfileService
15:14:20,272 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureProfileService/remote - EJB3.x Default Remote Business Interface
SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService - EJB3.x Remote Business Interface
15:14:20,362 INFO [TomcatDeployment] deploy, ctxPath=/admin-console
15:14:20,412 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/admin-console'
15:14:23,486 INFO [TomcatDeployment] deploy, ctxPath=/BannedListSearch
15:14:27,532 INFO [TomcatDeployment] deploy, ctxPath=/IWorkWebApp
15:14:27,813 INFO [TomcatDeployment] deploy, ctxPath=/
15:14:29,155 INFO [TomcatDeployment] deploy, ctxPath=/TestWebProject
15:14:30,036 INFO [TomcatDeployment] deploy, ctxPath=/displaytag-examples-1.2
15:14:30,136 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console
15:14:30,276 INFO [TomcatDeployment] deploy, ctxPath=/HelloWebService
15:14:30,407 ERROR [EngineConfigurationFactoryServlet] Unable to find config file. Creating new servlet engine config file: /WEB-INF/server-config.wsdd
15:14:30,687 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8081
15:14:30,707 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
15:14:30,707 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)] Started in 48s:110ms
I have increased the server start timeout to 500 seconds, but I still get the same error. I have not changed anything else.
I am able to start JBoss successfully from the command prompt, but the same server does not start from Eclipse.
Please help me start the JBoss server.
Sounds to me like the HTTP port you have configured in JBoss is different from the port in the Eclipse configuration for JBoss.
Eclipse listens on the configured port to determine whether JBoss has actually started. If the ports differ, Eclipse thinks JBoss never started, even though according to the console log it has. Make the ports match and it will probably work.
Updated: According to your log, JBoss is using port 8081 for HTTP:
Starting Coyote HTTP/1.1 on http-127.0.0.1-8081
Now you have to tell Eclipse to listen on that port so that it can figure out whether JBoss has started (the default is 8080, so Eclipse will never be aware of it!). Go to your Servers view, double-click on your JBoss server, and the configuration screen will come up.
You have to edit the HTTP port (in the 'Port' box) and set it to 8081 so that it matches your server's.