Accessing Shark tables (Hive) from Scala (shark-shell)

I have shark-0.8.0, which runs on hive-0.9.0. I can run Hive queries by invoking shark, and I created a few tables and loaded them with data.
Now I am trying to access the data in these tables from Scala. I invoked the Scala shell using shark-shell, but when I try a SELECT, I get an error that the table is not present.
scala> val artists = sc.sql2rdd("select artist from default.lastfm")
Hive history file=/tmp/hduser2/hive_job_log_hduser2_201405091617_1513149542.txt
151.738: [GC 317312K->83626K(1005568K), 0.0975990 secs]
151.836: [Full GC 83626K->76005K(1005568K), 0.4523880 secs]
152.313: [GC 80536K->76140K(1005568K), 0.0030990 secs]
152.316: [Full GC 76140K->62214K(1005568K), 0.1716240 secs]
FAILED: Error in semantic analysis: Line 1:19 Table not found 'lastfm'
shark.api.QueryExecutionException: FAILED: Error in semantic analysis: Line 1:19 Table not found 'lastfm'
at shark.SharkDriver.tableRdd(SharkDriver.scala:149)
at shark.SharkContext.sql2rdd(SharkContext.scala:100)
at <init>(<console>:17)
at <init>(<console>:22)
at <init>(<console>:24)
at <init>(<console>:26)
at <init>(<console>:28)
at <init>(<console>:30)
at <init>(<console>:32)
at .<init>(<console>:36)
at .<clinit>(<console>)
at .<init>(<console>:11)
at .<clinit>(<console>)
at $export(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:629)
at org.apache.spark.repl.SparkIMain$Request$$anonfun$10.apply(SparkIMain.scala:890)
at scala.tools.nsc.interpreter.Line$$anonfun$1.apply$mcV$sp(Line.scala:43)
at scala.tools.nsc.io.package$$anon$2.run(package.scala:25)
at java.lang.Thread.run(Thread.java:744)
From the documentation (https://github.com/amplab/shark/wiki/Shark-User-Guide), these steps should be enough to get Shark up and running and to select data using Scala. Am I missing something? Is there some configuration file that needs to be modified to enable access to Shark tables from shark-shell?

Have you updated the Hive configuration used by Shark to properly reflect the Hive metastore JDBC connection info?
You will need to copy hive-default.xml to hive-site.xml, then ensure the metastore properties are set.
Here is the basic information needed in hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://myhost/metastore</value>
<description>the URL of the MySQL database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mypassword</value>
</property>
You can get more details here: configuring hive metastore
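For example, a minimal sketch of that setup, assuming Hive is installed under /usr/local/hive and Shark under /usr/local/shark (adjust the paths to your layout):
# Create hive-site.xml from the shipped defaults, then edit the
# metastore properties shown above.
cp /usr/local/hive/conf/hive-default.xml /usr/local/hive/conf/hive-site.xml
# Make sure Shark points at this Hive install via shark-env.sh.
echo 'export HIVE_HOME=/usr/local/hive' >> /usr/local/shark/conf/shark-env.sh
# Restart the shell and retry the query.
shark-shell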

Related

Oozie Hive beeline not working with Kerberos

We have recently migrated from our old HDP cluster (without Kerberos) to a new HDP cluster (with Kerberos). We are facing some authentication issues while running our Oozie jobs on the new cluster. Please refer to the workflow.xml below. The first action 'hive-101' works fine; however, the second action 'hive-102' fails.
<credentials>
<credential name="hs2-creds" type="hive2">
<property>
<name>hive2.server.principal</name>
<value>${jdbcPrincipal}</value>
</property>
<property>
<name>hive2.jdbc.url</name>
<value>${jdbcURL}</value>
</property>
</credential>
</credentials>
<start to="hive-101"/>
<action name="hive-101" cred="hs2-creds">
<hive2 xmlns="uri:oozie:hive2-action:0.2">
<jdbc-url>${jdbcURL}</jdbc-url>
<password>${hivepassword}</password>
<query>SELECT count(*) FROM table1;</query>
</hive2>
<ok to="hive-102"/>
<error to="fail"/>
</action>
<action name="hive-102" retry-max="${maxretry}" retry-interval="${retryinterval}">
<shell xmlns="uri:oozie:shell-action:0.3">
<exec>beeline</exec>
<argument>jdbc:hive2://zk01.abc.com:2181,zk02.abc.com:2181,zk03.abc.com:2181/${hivedatabase};principal=hive/_HOST@ABC.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</argument>
<argument>--outputformat=vertical</argument>
<argument>--silent=true</argument>
<argument>-e</argument>
<argument>
SELECT max(id) as mx_id FROM ${hivedatabase}.table1;
</argument>
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
Below are the error details
ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_212]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.8.0_212]
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) ~[?:1.8.0_212]
WARN jdbc.HiveConnection: Failed to connect to nn02.abc.com:10000
WARN jdbc.HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://nn02.abc.com:10000/db_test;principal=hive/_HOST@ABC.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: GSS initiate failed Retrying 0 of 1
ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_212]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.8.0_212]
The shell action will run on an arbitrary data node as the Unix user who started the Oozie workflow, and that user won't be automatically authenticated with Kerberos.
I believe you will have to place a Kerberos keytab for the user on each data node. Your Oozie shell action will then need to run a script that performs a kinit using the keytab before running the beeline command (see the sketch after the quote below).
From Apache Oozie by Mohammad Kamrul Islam and Aravind Srinivasan
On a nonsecure Hadoop cluster, the shell command will execute as the Unix user who runs the TaskTracker (Hadoop 1) or the YARN container (Hadoop 2). This is typically a system-defined user. On secure Hadoop clusters running Kerberos, the shell commands will run as the Unix user who submitted the workflow containing the action.
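Following that approach, a minimal wrapper script might look like this (a sketch; the keytab path and principal are assumptions, substitute your own):
#!/bin/bash
# Sketch: obtain a TGT from the user's keytab first, then run the same
# beeline query the shell action was invoking. Keytab path and principal
# are assumptions; substitute your own.
kinit -kt /etc/security/keytabs/myuser.keytab myuser@ABC.COM
beeline -u "jdbc:hive2://zk01.abc.com:2181,zk02.abc.com:2181,zk03.abc.com:2181/db_test;principal=hive/_HOST@ABC.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
  --outputformat=vertical --silent=true \
  -e "SELECT max(id) AS mx_id FROM db_test.table1;"
The shell action's <exec> would then point at this script (shipped together with the keytab via <file> tags) instead of calling beeline directly.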

SchemaSpy DB2 Connection Failure

I tried to analyze a DB2 database with SchemaSpy, but got a 'Connection Failure' warning. I invoked it this way:
java -jar schemaspy-6.0.0.jar -configFile schemaspy.properties --logging.pattern.console="%d{HH:mm:ss.SSS} %clr(%-5level) - %msg%n" --logging.level.org.schemaspy=TRACE
(found the logging part on https://github.com/schemaspy/schemaspy/issues/250)
The .properties file looks like this:
schemaspy.t=db2
schemaspy.dp=C:\tmp\db2jcc.jar
schemaspy.host=**host**
schemaspy.port=50000
schemaspy.db=**db**
schemaspy.u=**user**
schemaspy.p=**password**
schemaspy.o=D:\**\schemaspy-output\
schemaspy.s=**schema**
The error I got was:
14:24:20.297 DEBUG - Unable to find driverClass 'COM.ibm.db2.jdbc.app.DB2Driver'
14:24:20.308 WARN - Connection Failure
org.schemaspy.model.ConnectionFailure: Failed to connect to database URL [jdbc:db2:zumtest] Failed to create any of 'COM.ibm.db2.jdbc.app.DB2Driver' driver from driverPath 'C:\tmp\db2jcc.jar' with sibling jars no.
Resulting in classpath:
file:/C:/tmp/db2jcc.jar
at org.schemaspy.DbDriverLoader.getConnection(DbDriverLoader.java:101)
at org.schemaspy.DbDriverLoader.getConnection(DbDriverLoader.java:75)
at org.schemaspy.service.SqlService.connect(SqlService.java:68)
at org.schemaspy.SchemaAnalyzer.analyze(SchemaAnalyzer.java:186)
at org.schemaspy.SchemaAnalyzer.analyze(SchemaAnalyzer.java:107)
at org.schemaspy.cli.SchemaSpyRunner.runAnalyzer(SchemaSpyRunner.java:97)
at org.schemaspy.cli.SchemaSpyRunner.run(SchemaSpyRunner.java:86)
at org.schemaspy.Main.main(Main.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: org.schemaspy.model.ConnectionFailure: Failed to create any of 'COM.ibm.db2.jdbc.app.DB2Driver' driver from driverPath 'C:\tmp\db2jcc.jar' with sibling jars no.
Resulting in classpath:
file:/C:/tmp/db2jcc.jar
at org.schemaspy.DbDriverLoader.getDriver(DbDriverLoader.java:147)
at org.schemaspy.DbDriverLoader.getConnection(DbDriverLoader.java:93)
... 15 common frames omitted
I guess the error comes from the wrong driver class name? But how can I fix this? I tried to change the line in db2.properties from
driver=COM.ibm.db2.jdbc.app.DB2Driver
to
driver=COM.ibm.db2.jcc.DB2Driver
because I extracted this class name from the driver's .jar file, but it did not help.
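For reference, the class names actually contained in the jar can be listed with the JDK's jar tool (assuming a JDK on the PATH); note that driver class names are case-sensitive:
rem List the driver classes shipped in the jar (Windows; pipe to grep on Linux)
jar tf C:\tmp\db2jcc.jar | findstr DB2Driver
rem The jcc driver also includes a utility that prints its own version,
rem which doubles as a quick load test of the jar:
java -cp C:\tmp\db2jcc.jar com.ibm.db2.jcc.DB2Jcc -version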
This worked for me using db2jcc.jar:
# type of database. Run with -dbhelp for details
schemaspy.t=db2
# optional path to alternative jdbc drivers.
schemaspy.dp=/lib/db2jcc.jar
# database properties: host, port number, name, user, password
schemaspy.host=xxxxxx
schemaspy.port=xxxxx
schemaspy.db=xxx
schemaspy.u=xx
schemaspy.p=xx
# output dir to save generated files
schemaspy.o=/output
# db schema for which to generate diagrams
schemaspy.s=xxxx
driver=com.ibm.db2.jcc.DB2Driver
connectionSpec=jdbc:db2://xxx.xx.x.xxx:[PORT]/[DBNAME]
schemaspy.cat=%
The command used:
java -jar schemaspy-6.1.0.jar -configFile db2.properties --logging.pattern.console="%d{HH:mm:ss.SSS} %clr(%-5level) - %msg%n" --logging.level.org.schemaspy=TRACE

GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) while connecting Polybase with Kerberos

We want to connect our SQL Server 2016 Enterprise via PolyBase to our Kerberized on-prem Hadoop cluster running Cloudera 5.14.
I followed the Microsoft PolyBase Guide to configure PolyBase. After working a few days on this topic, I'm not able to continue because of an exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Microsoft has a built-in diagnostic tool for troubleshooting connectivity with PolyBase and Kerberos. That troubleshooting guide from Microsoft defines 4 checkpoints, and I'm stuck on checkpoint 4.
Short information about the checkpoints (and where I was successful):
Checkpoint 1: Successful! Authenticated against the KDC and received a TGT.
Checkpoint 2: Successful! Per the troubleshooting guide, PolyBase makes an attempt to access HDFS and fails because the request did not contain the necessary Service Ticket.
Checkpoint 3: Successful! A second hex dump indicates that SQL Server successfully used the TGT and acquired the applicable Service Ticket for the name node's SPN from the KDC.
Checkpoint 4: Not successful. At this checkpoint, SQL Server should be authenticated by Hadoop using the ST (Service Ticket) and granted a session to access the secured resource.
krb5.conf file
[libdefaults]
default_realm = COMPANY.REALM.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
COMPANY.REALM.COM = {
kdc = ipadress.kdc.host
admin_server = ipadress.kdc.host
}
[logging]
default = FILE:/var/log/krb5/kdc.log
kdc = FILE:/var/log/krb5/kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
core-site.xml for Polybase on SQL-Server
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>ipc.client.connect.max.retries</name>
<value>2</value>
</property>
<property>
<name>ipc.client.connect.max.retries.on.timeouts</name>
<value>2</value>
</property>
<!-- kerberos security information, PLEASE FILL THESE IN ACCORDING TO HADOOP CLUSTER CONFIG -->
<property>
<name>polybase.kerberos.realm</name>
<value>COMPANY.REALM.COM</value>
</property>
<property>
<name>polybase.kerberos.kdchost</name>
<value>ipadress.kdc.host</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>KERBEROS</value>
</property>
</configuration>
hdfs-site.xml for Polybase on SQL-Server
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.block.size</name>
<value>268435456</value>
</property>
<!-- Client side file system caching is disabled below for credential refresh and
setting the below cache disabled options to true might result in
stale credentials when an alter credential or alter datasource is performed
-->
<property>
<name>fs.wasb.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.wasbs.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.asv.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.asvs.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.hdfs.impl.disable.cache</name>
<value>true</value>
</property>
<!-- kerberos security information, PLEASE FILL THESE IN ACCORDING TO HADOOP CLUSTER CONFIG -->
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@COMPANY.REALM.COM</value>
</property>
</configuration>
Polybase Exception
[2018-06-22 12:51:50,349] WARN 2872[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:53,568] WARN 6091[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:56,127] WARN 8650[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:58,998] WARN 11521[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:59,139] WARN 11662[main] - org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:676) - Couldn't setup connection for hdfs@COMPANY.REALM.COM to IPADRESS_OF_NAMENODE:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Log Entry on NameNode
Socket Reader #1 for port 8020: readAndProcess from client IP-ADRESS_SQL-SERVER threw exception [javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: AES128 CTS mode with HMAC SHA1-96 encryption type not in permitted_enctypes list)]]
Auth failed for IP-ADRESS_SQL-SERVER:60484:null (GSS initiate failed) with true cause: (GSS initiate failed)
The confusing part for me is the log entry from our NameNode, because AES128 CTS mode with HMAC SHA1-96 is already in the list of permitted enctypes, as shown in krb5.conf and in the Cloudera Manager UI.
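For reference, the encryption types of the cached tickets can be inspected directly on a host with the MIT Kerberos tools:
# Lists cached tickets together with their encryption types
klist -e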
We appreciate your help!
The problem took care of itself after we restarted the cluster.
I think the problem was that the krb5.conf file in our Hadoop cluster could not be distributed to all nodes because of some running services. There was also a warning in Cloudera Manager about a stale configuration regarding Kerberos.
Many thanks to everyone!

How to work with the glusterfs-hadoop plugin?

I installed GlusterFS and it works fine. After that I installed Hadoop 1.x, and it works fine with HDFS, but when I use the glusterfs-hadoop plugin to make GlusterFS the filesystem backend for Hadoop, I get an error. I followed the GitHub site for the glusterfs-hadoop plugin, copied the jar file to the Hadoop library directory, and changed my core-site.xml to this:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.glusterfs.impl</name>
<value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
</property>
<property>
<name>fs.default.name</name>
<value>glusterfs://fedora1:9010</value>
</property>
<property>
<name>fs.AbstractFileSystem.glusterfs.impl</name>
<value>org.apache.hadoop.fs.local.GlusterFs</value>
</property>
<property>
<name>fs.glusterfs.volumes</name>
<value>test1</value>
</property>
<property>
<name>fs.glusterfs.volume.fuse.test1</name>
<value>/mnt/Hadoop</value>
</property>
</configuration>
When I execute start-mapred.sh, the JobTracker and TaskTracker start without any problem, but when I execute the command "hadoop fs -mkdir ossl" I get this output:
15/04/14 12:52:53 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS, CRC disabled.
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=f0fee73, git.commit.user.email=bchilds@redhat.com, git.commit.message.full=Merge pull request #122 from childsb/getfattrparse
Refactor and cleanup the BlockLocation parsing code, git.commit.id=f0fee73c336ac19461d5b5bb91a77e05cff73361, git.commit.message.short=Merge pull request #122 from childsb/getfattrparse, git.commit.user.name=bradley childs, git.build.user.name=Unknown, git.commit.id.describe=GA-12-gf0fee73, git.build.user.email=Unknown, git.branch=master, git.commit.time=31.03.2015 @ 00:36:46 IRDT, git.build.time=12.04.2015 @ 14:45:49 IRDT}
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: GIT_TAG=GA
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
15/04/14 12:52:53 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Gluster volume: test at : /mnt/hadoop
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Working directory is : /
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Write buffer size : 131072
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Default block size : 67108864
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Directory list order : fs ordering
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: File timestamp lease significant digits removed : 0
mkdir: Error undefined volume:fedora1:9010 in path: glusterfs://fedora1:9010/ossl
Please help me. Thanks for your reply.
If I'm not mistaken this should work:
<property>
<name>fs.default.name</name>
<value>glusterfs:///fedora1:9010</value>
</property>
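After changing core-site.xml, a quick smoke test could look like this (a sketch, reusing the commands from the question):
# Restart the MapReduce daemons so the new fs.default.name is picked up,
# then retry the directory creation.
stop-mapred.sh && start-mapred.sh
hadoop fs -mkdir ossl
hadoop fs -ls /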

Unable to start a spring boot web application with spring batch admin

I am trying to run a web application with embedded Tomcat that includes a number of Spring Batch jobs and the Spring Batch Admin. However, when I try to run the generated fat jar I get the following error. Can anybody from the Spring Batch or Boot team help?
Error registering Tomcat:j2eeType=WebModule,name=//localhost/*,J2EEApplication=none,J2EEServer=none
Adding more information:
Spring Boot version: 1.1.9.RELEASE, from the spring.io parent POM version 1.0.3.RELEASE
I tried running it from STS as well as using mvn spring-boot:run with the same effect.
The batch jobs read from a file and write to hornetq.
The complete stack trace is as follows:
2014-11-14 14:22:45.236 ERROR 404 --- [ost-startStop-1] org.apache.tomcat.util.modeler.Registry : Error registering Tomcat:j2eeType=WebModule,name=//localhost/*,J2EEApplication=none,J2EEServer=none
javax.management.RuntimeOperationsException: null
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:411)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.tomcat.util.modeler.Registry.registerComponent(Registry.java:742)
at org.apache.catalina.util.LifecycleMBeanBase.register(LifecycleMBeanBase.java:158)
at org.apache.catalina.util.LifecycleMBeanBase.initInternal(LifecycleMBeanBase.java:61)
at org.apache.catalina.core.ContainerBase.initInternal(ContainerBase.java:1084)
at org.apache.catalina.core.StandardContext.initInternal(StandardContext.java:6506)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:139)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Repository: cannot add mbean for pattern name Tomcat:j2eeType=WebModule,name=//localhost/*,J2EEApplication=none,J2EEServer=none
... 19 common frames omitted
I assume the name is not a valid object name: //localhost/*. The "cannot add mbean for pattern name" error indicates that JMX treats it as a pattern because of the *.
Could you try a different name, or else escape or quote the name? This should fix the problem.