Spark 1.6.2 (YARN master)
Main class: com.example.spark.Main
Basic Spark SQL code:
val conf = new SparkConf()
conf.setAppName("SparkSQL w/ Hive")
val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)
import hiveContext.implicits._
// val rdd = <some RDD making>
val df = rdd.toDF()
df.write.saveAsTable("example")
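For reference, here is a minimal, self-contained variant of the snippet above. The package matches the one named earlier; the Person case class and the sample data are hypothetical, added only so the example compiles and runs end to end on Spark 1.6:
package com.example.spark

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Hypothetical schema, standing in for "<some RDD making>" above
case class Person(name: String, age: Int)

object Main {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setAppName("SparkSQL w/ Hive")
    val sc = new SparkContext(conf)
    val hiveContext = new HiveContext(sc)
    import hiveContext.implicits._

    // Hypothetical sample data in place of the real RDD
    val rdd = sc.parallelize(Seq(Person("alice", 30), Person("bob", 25)))
    val df = rdd.toDF()
    df.write.saveAsTable("example")
  }
}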
And the stack trace:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:204)
at java.awt.Window.<init>(Window.java:536)
at java.awt.Frame.<init>(Frame.java:420)
at java.awt.Frame.<init>(Frame.java:385)
at com.trend.iwss.jscan.runtime.BaseDialog.getActiveFrame(BaseDialog.java:75)
at com.trend.iwss.jscan.runtime.AllowDialog.make(AllowDialog.java:32)
at com.trend.iwss.jscan.runtime.PolicyRuntime.showAllowDialog(PolicyRuntime.java:325)
at com.trend.iwss.jscan.runtime.PolicyRuntime.stopActionInner(PolicyRuntime.java:240)
at com.trend.iwss.jscan.runtime.PolicyRuntime.stopAction(PolicyRuntime.java:172)
at com.trend.iwss.jscan.runtime.PolicyRuntime.stopAction(PolicyRuntime.java:165)
at com.trend.iwss.jscan.runtime.NetworkPolicyRuntime.checkURL(NetworkPolicyRuntime.java:284)
at com.trend.iwss.jscan.runtime.NetworkPolicyRuntime._preFilter(NetworkPolicyRuntime.java:164)
at com.trend.iwss.jscan.runtime.PolicyRuntime.preFilter(PolicyRuntime.java:132)
at com.trend.iwss.jscan.runtime.NetworkPolicyRuntime.preFilter(NetworkPolicyRuntime.java:108)
at org.apache.commons.logging.LogFactory$5.run(LogFactory.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.commons.logging.LogFactory.getProperties(LogFactory.java:1376)
at org.apache.commons.logging.LogFactory.getConfigurationFile(LogFactory.java:1412)
at org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:455)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.<clinit>(HadoopShimsSecure.java:60)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:146)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:141)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:100)
at org.apache.spark.sql.hive.client.ClientWrapper.overrideHadoopShims(ClientWrapper.scala:116)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:69)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:345)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:255)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:459)
at org.apache.spark.sql.hive.HiveContext.defaultOverrides(HiveContext.scala:233)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:236)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at com.example.spark.Main1$.main(Main.scala:52)
at com.example.spark.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/2.5.3.0-37/0/ivysettings.xml will be used
This same code was working a week ago on a fresh HDP cluster, and it still works fine in the sandbox. The only thing I remember doing was trying to change the JAVA_HOME variable, but I am fairly sure I undid those changes.
I'm at a loss and not sure how to start tracking down the issue.
The cluster is headless, so of course it has no X11 display, but what part of new HiveContext even needs to pop up a JFrame?
Based on the stack trace, I'd guess it's a Java configuration issue I introduced: the static initializer of org.apache.hadoop.hive.shims.HadoopShimsSecure (HadoopShimsSecure.java:60) gets triggered and ends up trying to show a Java security dialog, but I don't know for sure.
I can't do X11 forwarding. I tried export SPARK_OPTS="-Djava.awt.headless=true" before running spark-submit, but that didn't help.
I tried the suggestions in these questions, but again, I can't forward X11 and don't have a display:
Getting a HeadlessException: No X11 DISPLAY variable was set
"No X11 DISPLAY variable" - what does it mean?
The error seems to be reproducible on two of the Spark clients.
I only tried changing JAVA_HOME on one machine.
I ran an Ambari Hive service check; it didn't fix anything.
I can connect to the Hive database fine via the Hive/Beeline CLI.
As far as the Spark code is concerned, this seemed to mitigate the error.
val conf = new SparkConf()
conf.set("spark.executor.extraJavaOptions", "-Djava.awt.headless=true")
Original answer
I found this post: Spring 3.0.5 - java.awt.HeadlessException - com.trend.iwss.jscan
Basically, Trend Micro is injecting a com.trend.iwss.jscan package into JAR files that are downloaded via Maven through the company firewall, and I have no control over that.
(The link no longer works.) http://esupport.trendmicro.com/Pages/IWSx-3x-Some-files-and-folders-are-added-to-the-Jar-files-after-passin.aspx
Wayback Machine to the rescue...
If anyone else has input, I would also like to hear it.
Problem:
When downloading some .JAR files via IWSA, a directory filled with .class files that are unrelated to what is being downloaded is added to the JAR file (com\trend\iwss\jscan\runtime\).
Solution:
This happens because, if a JAR file is originally unsigned, IWSA will insert some code into the applet to monitor and restrict potentially harmful actions.
For IWSS/IWSA, every "GET" request looks the same, so it cannot tell whether you are downloading an archive or an applet that will be executed by your browser.
This code is added for security reasons, to monitor the behavior of the "possible" applet and make sure it does not do any harm to the machine and its environment.
To prevent this issue, please follow these steps:
Log on to the IWSS web console.
Go to HTTP > Applets and ActiveX > Policies > Java Applet Security Rules.
Under Java Applet Security, change the value of "No signature" to either "Pass" or "Block", depending on what you want to do with the unsigned .JAR files.
Click Save.
Related
The connection to Databricks works fine, and working with DataFrames goes smoothly (operations like join, filter, etc.).
The problem appears when I call cache on a DataFrame.
py4j.protocol.Py4JJavaError: An error occurred while calling o342.cache.
: java.io.InvalidClassException: failed to read class descriptor
...
Caused by: java.lang.ClassNotFoundException: org.apache.spark.rdd.RDD$client53442a94a3$$anonfun$mapPartitions$1$$anonfun$apply$23
at java.lang.ClassLoader.findClass(ClassLoader.java:523)
at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
at org.apache.spark.util.ChildFirstURLClassLoader.loadClass(ChildFirstURLClassLoader.java:48)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:257)
at org.apache.spark.sql.util.ProtoSerializer.org$apache$spark$sql$util$ProtoSerializer$$readResolveClassDescriptor(ProtoSerializer.scala:4316)
at org.apache.spark.sql.util.ProtoSerializer$$anon$4.readClassDescriptor(ProtoSerializer.scala:4304)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1857)
... 71 more
I am using Java 8 as required, and clearing the pycache doesn't help.
The same code submitted as a job to Databricks works fine.
It looks like a local problem at the Python-JVM bridge level, but the Java (8) and Python (3.7) versions are as required. Switching to Java 13 produces much the same message.
Versions: databricks-connect==6.2.0, openjdk version "1.8.0_242", Python 3.7.6
EDIT:
The behavior depends on how the DataFrame is created: if the source of the DataFrame is external, it works fine; if the DataFrame is created locally, the error appears.
# works fine
df = spark.read.csv("dbfs:/some.csv")
df.cache()
# ERROR in 'cache' line
df = spark.createDataFrame([("a",), ("b",)])
df.cache()
This is a known issue, and I think a recent patch fixed it. It was seen on Azure; I am not sure whether you are using Azure or AWS, but it has been resolved. Please check this issue: https://github.com/MicrosoftDocs/azure-docs/issues/52431
I am trying to connect from a Hive database to a collection in MongoDB using the driver (JARs) provided on the wiki site. Here are the steps I took:
I created a collection in MongoDB called "Diamond" under a database called "Moe"; it has 20 documents.
I wanted to connect from Hive via the Hadoop MongoDB driver and view these documents via Hive.
I have both MongoDB and Hive installed and configured on the same server. However, I don't see any variable called HIVE_CLASSPATH; I wonder where that is.
So I installed these 3 drivers on the server:
mongo-hadoop-core-1.5.2.jar;
mongo-hadoop-hive-1.5.2.jar;
mongo-java-driver-3.0.0.jar;
Now I connect to Hive and add these three JARs to my classpath with the following commands (they are added successfully):
add jar /hadoopgdc/hadoop-2.6.0/share/hadoop/common/lib/mongo-hadoop-hive-1.5.2.jar;
add jar /hadoopgdc/hadoop-2.6.0/share/hadoop/common/lib/mongo-hadoop-core-1.5.2.jar;
add jar /hadoopgdc/hadoop-2.6.0/share/hadoop/common/lib/mongo-java-driver-3.0.0.jar;
Now I create a table in Hive:
CREATE TABLE Diamond
(
carat DOUBLE,
cut STRING,
color STRING,
clarity STRING,
depth DOUBLE,
table DOUBLE,
price DOUBLE,
xcord DOUBLE,
ycord DOUBLE,
zcord DOUBLE
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES('mongo.columns.mapping'='{"carat":"carat","cut":"cut",
"color":"color", "clarity":"clarity", "depth":"depth", "table":"table",
"price":"price", "xcord":"x", "ycord":"y", "zcord":"z"}')
TBLPROPERTIES('mongo.uri'='mongodb://localhost:27017/Moe.Diamond');
However, when I execute the above command in Hive, I get the error below:
java.lang.NoClassDefFoundError: com/mongodb/util/JSON
at com.mongodb.hadoop.hive.BSONSerDe.initialize(BSONSerDe.java:110)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:210)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:268)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:587)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:573)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3784)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:256)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:155)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1355)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1139)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:945)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: com.mongodb.util.JSON
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 23 more
FAILED: Execution Error, return code -101 from
org.apache.hadoop.hive.ql.exec.DDLTask
I have tried the following:
- placing the JARs in every possible directory, with no effect
- verifying that the supposedly missing class is in fact present in the JAR file
- confirming that the MongoStorageHandler class is also very much in the JAR
I am done breaking my head over this! If anyone can shed some light on what I could do to alleviate my anxiety, it would be great.
Thanks again.
Mario
I identified what the issue was. To connect from Hive to MongoDB, the MongoDB driver invokes a Java class in a Hive JAR library:
java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.hooks.PreExecutePrinter
This class is supposed to be in the JAR file hive-exec-0.11.0.1.3.2.0-111.jar; however, it is only available in more recent versions of Hive, not in older ones.
It is not present in 0.11.0.1.3.2.0-111, but it is clearly present in 0.13.0.2.1.7.0-784.
The solution here was to connect to a version of Hive that is supported by the driver. MongoDB does state that its driver supports a certain version of Hadoop, but it doesn't drill down to the individual applications (Hive / Sqoop).
I am unable to get MyEclipse to connect to the Marketplace. I am aware of the proxy setup. These are the steps I followed, both within a proxy environment and with a direct connection.
A. Within the company network (browsers use an automatic configuration script):
1. Chose the Native option. Does not work.
2. Chose the Manual option. Set the domain and username. Opened the proxy script to figure out the available proxy servers and verified independently that these proxy servers work. Does not work.
3. Modified the vmargs to provide the http host, user, password and port properties. Does not work.
4. Repeated steps 1-3 with restarts of Eclipse.
B. Within the home environment (direct connection to the internet):
1. Tried the Direct option. Does not work.
2. Tried the Native option. Does not work.
The error message that I consistently see (in the error logs) is this:
java.lang.reflect.InvocationTargetException
at org.eclipse.epp.internal.mpc.ui.commands.MarketplaceWizardCommand$3.run(MarketplaceWizardCommand.java:203)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
Caused by: org.eclipse.core.runtime.CoreException: HTTP Server Unknown HTTP Response Code (-1):http://marketplace.eclipse.org/catalogs/api/p
at org.eclipse.equinox.internal.p2.transport.ecf.RepositoryTransport.stream(RepositoryTransport.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.eclipse.epp.internal.mpc.core.util.AbstractP2TransportFactory.invokeStream(AbstractP2TransportFactory.java:35)
at org.eclipse.epp.internal.mpc.core.util.TransportFactory$1.stream(TransportFactory.java:69)
at org.eclipse.epp.internal.mpc.core.service.RemoteMarketplaceService.processRequest(RemoteMarketplaceService.java:141)
at org.eclipse.epp.internal.mpc.core.service.RemoteMarketplaceService.processRequest(RemoteMarketplaceService.java:80)
at org.eclipse.epp.internal.mpc.core.service.DefaultCatalogService.listCatalogs(DefaultCatalogService.java:36)
at org.eclipse.epp.internal.mpc.ui.commands.MarketplaceWizardCommand$3.run(MarketplaceWizardCommand.java:200)
... 1 more
Caused by: org.eclipse.ecf.filetransfer.BrowseFileTransferException: Could not connect to http://marketplace.eclipse.org/catalogs/api/p
at com.genuitec.pulse2.common.http.ecf.PulseRetrieveFileTransfer.openStreams(Unknown Source)
at org.eclipse.ecf.provider.filetransfer.retrieve.AbstractRetrieveFileTransfer.sendRetrieveRequest(AbstractRetrieveFileTransfer.java:889)
at org.eclipse.ecf.provider.filetransfer.retrieve.AbstractRetrieveFileTransfer.sendRetrieveRequest(AbstractRetrieveFileTransfer.java:576)
at org.eclipse.ecf.provider.filetransfer.retrieve.MultiProtocolRetrieveAdapter.sendRetrieveRequest(MultiProtocolRetrieveAdapter.java:106)
at org.eclipse.equinox.internal.p2.transport.ecf.FileReader.sendRetrieveRequest(FileReader.java:349)
at org.eclipse.equinox.internal.p2.transport.ecf.FileReader.read(FileReader.java:213)
at org.eclipse.equinox.internal.p2.transport.ecf.RepositoryTransport.stream(RepositoryTransport.java:153)
... 11 more
Is there any alternative or any other step I can take to resolve this problem? I know I can do a manual update by downloading the plugin and so on, but I really want to solve this issue.
MyEclipse Version Information:
MyEclipse Blue Edition
Version: 10.7.1 Blue
Build id: 10.7.1-Blue-20130201
Apparently it could be a bug.
I just read this whole bug report here.
I tried adding the VM arguments so that the HTTPClient workaround could be applied via a configuration change. That did not work.
However, I was able to remove the HTTP client libraries from the plugins folder, courtesy of Comments #27 and #29 on the bug report.
Now I'm able to connect both through the proxy and directly.
I am new to Stack Exchange and Giraph, so please overlook mistakes and ask any clarifying questions.
OS: ubuntu 13.10
Hadoop/Yarn: hadoop-2.2.0/ (2-node cluster)
Giraph: 1.0.0 (EDIT: trunk)
I'm getting a NullPointerException (NPE) when I attempt to run the following example:
$ hadoop jar $GIRAPH_HOME/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.2.0-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/hduser/rrdata/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/hduser/rrdata/output/tiny_graph.out \
    -w 1
Stack Trace:
Exception in thread "main" java.lang.NullPointerException
at org.apache.giraph.yarn.GiraphYarnClient.checkJobLocalZooKeeperSupported(GiraphYarnClient.java:460)
at org.apache.giraph.yarn.GiraphYarnClient.run(GiraphYarnClient.java:116)
at org.apache.giraph.GiraphRunner.run(GiraphRunner.java:96)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.giraph.GiraphRunner.main(GiraphRunner.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
It seems ZooKeeper related. I installed ZooKeeper, but having never used it before, it seems like my configs are wrong. I've tried -Dgiraph.zkList=hostname:port and related options, but I get an 'Unrecognized option' exception.
I'm looking for the correct ZooKeeper settings for this scenario. I'll post a reply if I figure it out.
This is an example of how you can specify -D flags:
hadoop jar giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.2-jar-with-dependencies.jar org.apache.giraph.GiraphRunner -D giraph.zkList="zkNode.net:2081" org.apache.giraph.examples.SimpleShortestPathsComputation -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip /user/rav/giraph/input/tiny_graph.txt -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op /user/rav/giraph/output/shortestpaths -w 1
By the way, a local ZooKeeper is not yet supported by Giraph on YARN (see GiraphYarnClient):
/**
* Check if the job's configuration is for a local run. These can all be
* removed as we expand the functionality of the "pure YARN" Giraph profile.
*/
private void checkJobLocalZooKeeperSupported() {
final boolean isZkExternal = giraphConf.isZookeeperExternal();
final String checkZkList = giraphConf.getZookeeperList();
if (!isZkExternal || checkZkList.isEmpty()) {
throw new IllegalArgumentException("Giraph on YARN does not currently" +
"support Giraph-managed ZK instances: use a standalone ZooKeeper.");
}
}
Unfortunately checkZkList is null, so you will never see this exception :)
The reason for the NPE is probably the lack of a giraphConf to check for ZK settings; I think this is due to earlier problems in the run. It looks like the examples jar was not supplied using a -yj argument; the jar you run with "hadoop jar" is typically giraph-core itself.
Good luck, please post on the Giraph user list if you have more trouble.
I'm running a GWT app in dev mode with a custom Jetty container. The app loads fine the first time; however, if I refresh it, I get the following errors in the Development Mode window (paths changed):
00:16:44.854 [ERROR] Unable to create file 'C:\somePath\src\war\msjavaSnack\C4EA130FD0ED44BE513FEEDDE13614DA.cache.png'
java.io.FileNotFoundException: C:\somePath\src\war\msjavaSnack\C4EA130FD0ED44BE513FEEDDE13614DA.cache.png (The requested operation cannot be performed on a file with a user-mapped section open)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
at com.google.gwt.core.ext.linker.impl.StandardLinkerContext.writeArtifactToFile(StandardLinkerContext.java:658)
at com.google.gwt.core.ext.linker.impl.StandardLinkerContext.produceOutputDirectory(StandardLinkerContext.java:595)
at com.google.gwt.dev.DevMode.produceOutput(DevMode.java:476)
at com.google.gwt.dev.DevModeBase.relink(DevModeBase.java:1131)
at com.google.gwt.dev.DevModeBase.access$000(DevModeBase.java:67)
at com.google.gwt.dev.DevModeBase$2.accept(DevModeBase.java:1076)
at com.google.gwt.dev.shell.ShellModuleSpaceHost$1.accept(ShellModuleSpaceHost.java:122)
at com.google.gwt.dev.shell.StandardRebindOracle$Rebinder.rebind(StandardRebindOracle.java:59)
at com.google.gwt.dev.shell.StandardRebindOracle.rebind(StandardRebindOracle.java:154)
at com.google.gwt.dev.shell.ShellModuleSpaceHost.rebind(ShellModuleSpaceHost.java:119)
at com.google.gwt.dev.shell.ModuleSpace.rebind(ModuleSpace.java:531)
at com.google.gwt.dev.shell.ModuleSpace.rebindAndCreate(ModuleSpace.java:414)
at com.google.gwt.dev.shell.GWTBridgeImpl.create(GWTBridgeImpl.java:39)
at com.google.gwt.core.client.GWT.create(GWT.java:98)
at com.extjs.gxt.ui.client.GXT.<clinit>(GXT.java:38)
at com.extjs.gxt.ui.client.widget.Component.<clinit>(Component.java:202)
at msjava.snack.gui.client.MSHeaderPanelViewport.<init>(MSHeaderPanelViewport.java:62)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.google.gwt.dev.shell.ModuleSpace.rebindAndCreate(ModuleSpace.java:422)
at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:361)
at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
at java.lang.Thread.run(Thread.java:662)
UPDATE
In Process Explorer I can see that the JVM process hosting Jetty has an open handle to the file, which is probably why the other JVM (hosting dev mode) can't write to it. Is there any way to work around this?
It seems this is caused by two open/close routines on the same file happening too quickly, one after the other. Some developers suggest calling the GC. Check that every I/O operation is closed correctly, and avoid running a complete open/write/close cycle too quickly; the problem seems to occur when the second request arrives before the first operation has finished.
I found the solution on this page:
http://docs.codehaus.org/display/JETTY/Files+locked+on+Windows
EDIT:
The above URL is dead now.
Now it's quite easy to find the solution in Jetty's wiki:
Jetty provides a configuration switch in the webdefault.xml file for the DefaultServlet that enables or disables the use of memory-mapped files. If you are running on Windows and are having file-locking problems, you should set this switch to disable memory-mapped file buffers.
The default webdefault.xml file is in lib/jetty.jar at org/eclipse/jetty/webapp/webdefault.xml. Extract it to a convenient disk location and edit it to change useFileMappedBuffer to false.
https://wiki.eclipse.org/Jetty/Howto/Deal_with_Locked_Windows_Files
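For illustration, the switch described above is an init-param of the DefaultServlet inside webdefault.xml; the fragment below is only a sketch of what the edited entry should look like:
<!-- In webdefault.xml, inside the DefaultServlet <servlet> declaration -->
<init-param>
  <param-name>useFileMappedBuffer</param-name>
  <param-value>false</param-value>
</init-param>
After editing, make sure your Jetty configuration actually uses the modified webdefault.xml rather than the copy packaged inside jetty.jar.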