Kafka vertica consumer and rejection table - apache-kafka

I am seeing very strange behavior from the Vertica Kafka consumer:
2016-07-27 04:22:17.307 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2016-07-27 04:22:17.307
2016-07-27 04:22:17.330 com.vertica.solutions.kafka.scheduler.FrameScheduler::Main [INFO] Starting compute batches for new Frame.
2016-07-27 04:22:17.431 com.vertica.solutions.kafka.scheduler.FrameScheduler::Main [INFO] Completed computing batch set for current Frame.
2016-07-27 04:22:17.469 com.vertica.solutions.kafka.scheduler.LaneWorker::Lane Worker 2 ("openx"."requests"-CREATE#2016-07-27 04:22:17.431) [ERROR] Rolling back MB: [Vertica][VJDBC](4213) ROLLBACK: Object "requests_rej" already exists
java.sql.SQLSyntaxErrorException: [Vertica][VJDBC](4213) ROLLBACK: Object "requests_rej" already exists
at com.vertica.util.ServerErrorData.buildException(Unknown Source)
at com.vertica.dataengine.VResultSet.fetchChunk(Unknown Source)
at com.vertica.dataengine.VResultSet.initialize(Unknown Source)
at com.vertica.dataengine.VQueryExecutor.readExecuteResponse(Unknown Source)
at com.vertica.dataengine.VQueryExecutor.handleExecuteResponse(Unknown Source)
at com.vertica.dataengine.VQueryExecutor.execute(Unknown Source)
at com.vertica.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
at com.vertica.jdbc.common.SPreparedStatement.executeUpdate(Unknown Source)
at com.vertica.solutions.kafka.scheduler.MicroBatch.execute(MicroBatch.java:193)
at com.vertica.solutions.kafka.scheduler.LaneWorker.run(LaneWorker.java:69)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.vertica.support.exceptions.SyntaxErrorException: [Vertica][VJDBC](4213) ROLLBACK: Object "requests_rej" already exists
... 11 more
2016-07-27 04:22:17.469 com.vertica.solutions.kafka.scheduler.LaneWorker::Lane Worker 2 [INFO] Lane Worker 2 waiting for batch...
2016-07-27 04:22:17.469 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Sleeping for 9838 milliseconds until 2016-07-27 04:22:27.307. Started frame # 2016-07-27 04:22:17.307.
2016-07-27 04:22:27.308 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2016-07-27 04:22:27.307
2016-07-27 04:22:27.331 com.vertica.solutions.kafka.scheduler.FrameScheduler::Main [INFO] Starting compute batches for new Frame.
2016-07-27 04:22:27.427 com.vertica.solutions.kafka.scheduler.FrameScheduler::Main [INFO] Completed computing batch set for current Frame.
I did everything as the documentation says, but when writing into Vertica I see this error. Why do I see it, and how can I fix it?

The error clearly states the cause:
Object "requests_rej" already exists
Try dropping that object and then re-running your job.
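For example, from vsql (a sketch: the "openx" schema is taken from the log line above; adjust the name to wherever your microbatch writes its rejections):
# drop the leftover rejection table, then re-run the scheduler job
vsql -c 'DROP TABLE IF EXISTS "openx"."requests_rej";'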

Related

vertica kafka scheduler stops processing messages

Could you please help me? I have a Vertica scheduler that reads data from an Apache Kafka topic. Periodically the scheduler stops processing messages from the topic: there are no errors in the scheduler log, and the scheduler process itself keeps running, but it no longer processes messages and cannot be stopped cleanly; the only way out is to kill the process. What could cause this, and where else can I look to investigate the problem?
Below are the last entries from the scheduler log before the problem occurred:
2022-07-12 05:30:20.986 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2022-07-12 05:30:20.986
2022-07-12 05:30:48.837 com.vertica.solutions.kafka.scheduler.LaneWorker::Lane Worker 1 [INFO] Lane Worker 1 waiting for batch...
2022-07-12 05:30:48.837 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Sleeping for 2149 milliseconds until 2022-07-12 05:30:50.986. Started frame # 2022-07-12 05:30:20.986.
2022-07-12 05:30:51.018 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2022-07-12 05:30:51.018
2022-07-12 05:31:21.431 com.vertica.solutions.kafka.scheduler.LaneWorker::Lane Worker 1 [INFO] Lane Worker 1 waiting for batch...
2022-07-12 05:31:21.456 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2022-07-12 05:31:21.456
2022-07-12 05:31:30.111 com.vertica.solutions.kafka.scheduler.LaneWorker::Lane Worker 1 [INFO] Lane Worker 1 waiting for batch...
2022-07-12 05:31:30.111 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Sleeping for 21345 milliseconds until 2022-07-12 05:31:51.456. Started frame # 2022-07-12 05:31:21.456.
2022-07-12 05:31:51.480 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2022-07-12 05:31:51.48
2022-07-12 05:32:13.280 com.vertica.solutions.kafka.scheduler.LaneWorker::Lane Worker 1 [INFO] Lane Worker 1 waiting for batch...
2022-07-12 05:32:13.281 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Sleeping for 8200 milliseconds until 2022-07-12 05:32:21.48. Started frame # 2022-07-12 05:31:51.48.
2022-07-12 05:32:21.505 com.vertica.solutions.kafka.scheduler.StreamCoordinator::Main [INFO] Starting frame # 2022-07-12 05:32:21.505
2022-07-12 05:35:19.932 com.vertica.solutions.kafka.scheduler.config.ConfigurationRefresher::Main_cfg_refresh [INFO] Refreshing Scheduler (refresh interval reached).
2022-07-12 05:35:19.932 com.vertica.solutions.kafka.scheduler.config.ConfigurationRefresher::Main_cfg_refresh [INFO] refresh trial 0
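One generic JVM diagnostic for a hang like this is to take a thread dump of the scheduler process and look for blocked threads (a sketch; <pid> is a placeholder for the scheduler's process id):
# find the scheduler's JVM among the running Java processes, then dump its threads
jps -lm
jstack <pid> > scheduler-threads.txt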

Weird spurious errors during compilation of Play / Activator program?

I have a large and moderately complex web application, using the standard Typesafe Stack of Play, Scala and Akka. (I'm in the process of adding scala.js, but this problem predates that.)
A couple of months ago, I upgraded to the current Activator-based version of the world (having fallen behind the times), and that's mostly worked fine. But ever since then, I am semi-frequently getting a weird error when I try to compile:
[Querki] $ compile
[trace] Stack trace suppressed: run last scalajvm/compile:compile for the full output.
[error] (scalajvm/compile:compile) java.io.FileNotFoundException: C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\target\scala-2.11\classes.bak\sbt2595416303704565240.class (Access is denied)
[error] Total time: 3 s, completed Sep 2, 2014 3:26:43 PM
Once it starts, it usually keeps recurring until I do a clean recompilation, so it's kind of a pain in the tuchus -- full recompilation of the system takes a while, and this is typically happening a couple of times a day. (OTOH, sometimes it simply stops happening, apparently for no reason.) Restarting Activator does not appear to help.
Anyone have any idea what's going on here? As far as I can tell, the classes.bak folder gets created temporarily during the compile process -- I can see it appear and then disappear again once the compile is finished, regardless of whether it succeeds. It definitely does not exist before the failed compiles start. It appears as if sbt is creating a temp folder in a broken state, or something like that.
If it's relevant, this is running on a Windows 7 box; I am using sbt 0.13.5, scala 2.11.1 and Activator 1.2.10.
ETA: Also conceivably relevant, now that I think of it -- I'm also running GitHub for Windows. I mention this because it's a Java-based app that is clearly doing something to check for changes to the tree periodically. (It auto-refreshes now and then; whether it is scanning the tree for changes or listening a la JNotify, I don't know.)
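One generic way to check whether another process (GitHub for Windows, an antivirus scanner, the Windows indexer) is holding a lock on that folder is the Sysinternals handle utility (a sketch; assumes handle.exe is on the PATH):
:: list open handles whose path contains the given substring
handle.exe scala-2.11\classes.bak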
ETA2: Hah -- finally remembered to print the full log before cleaning. Here's the full stack traceback: I don't know what to make of it, but possibly someone on the sbt side of things can use it:
[Querki] $ compile
[info] Compiling 1 Scala source to C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajs\target\scala-2.11\classes...
[info] Fast optimizing C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\target\scala-2.11\classes\public\javascripts\querki-client-fastopt.js
[trace] Stack trace suppressed: run last scalajvm/compile:compile for the full output.
[error] (scalajvm/compile:compile) java.io.FileNotFoundException: C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\target\scala-2.11\classes.bak\sbt1117112335838311075.class (Access is denied)
[error] Total time: 22 s, completed Sep 5, 2014 3:17:25 PM
[Querki] $ last scalajvm/compile:compile
scalajs/compile:compile
[debug]
[debug] Initial source changes:
[debug] removed:Set()
[debug] added: Set()
[debug] modified: Set(C:\Users\jducoeur\Documents\GitHub\Querki\querki\scala\src\main\scala\qtexttest\LineParsers.scala)
[debug] Removed products: Set()
[debug] External API changes: API Changes: Set()
[debug] Modified binary dependencies: Set()
[debug] Initial directly invalidated sources: Set(C:\Users\jducoeur\Documents\GitHub\Querki\querki\scala\src\main\scala\qtexttest\LineParsers.scala)
[debug]
[debug] Sources indirectly invalidated by:
[debug] product: Set()
[debug] binary dep: Set()
[debug] external source: Set()
[debug] All initially invalidated sources: Set(C:\Users\jducoeur\Documents\GitHub\Querki\querki\scala\src\main\scala\qtexttest\LineParsers.scala)
[info] Compiling 1 Scala source to C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajs\target\scala-2.11\classes...
[debug] Getting compiler-interface from component compiler for Scala 2.11.1
[debug] Getting compiler-interface from component compiler for Scala 2.11.1
[debug] Running cached compiler 1f1ebd37, interfacing (CompilerInterface) with Scala compiler version 2.11.1
[debug] Calling Scala compiler with arguments (CompilerInterface):
[debug] -Xplugin:C:\Users\jducoeur\.ivy2\cache\org.scala-lang.modules.scalajs\scalajs-compiler_2.11.1\jars\scalajs-compiler_2.11.1-0.5.4.jar
[debug] -bootclasspath
[debug] C:\Program Files\Java\jdk1.6.0_38\jre\lib\resources.jar;C:\Program Files\Java\jdk1.6.0_38\jre\lib\rt.jar;C:\Program Files\Java\jdk1.6.0_38\jre\lib\sunrsasign.jar;C:\Program Files\Java\jdk1.6.0_38\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.6.0_38\jre\lib\jce.jar;C:\Program Files\Java\jdk1.6.0_38\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.6.0_38\jre\lib\modules\jdk.boot.jar;C:\Program Files\Java\jdk1.6.0_38\jre\classes;C:\Users\jducoeur\.ivy2\cache\org.scala-lang\scala-library\jars\scala-library-2.11.1.jar
[debug] -classpath
[debug] C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajs\target\scala-2.11\classes;C:\Users\jducoeur\.ivy2\cache\org.scala-lang.modules.scalajs\scalajs-library_2.11\jars\scalajs-library_2.11-0.5.4.jar;C:\Users\jducoeur\.ivy2\cache\org.scala-lang.modules.scalajs\scalajs-dom_sjs0.5_2.11\jars\scalajs-dom_sjs0.5_2.11-0.6.jar;C:\Users\jducoeur\.ivy2\cache\org.scala-lang.modules.scalajs\scalajs-jquery_sjs0.5_2.11\jars\scalajs-jquery_sjs0.5_2.11-0.6.jar;C:\Users\jducoeur\.ivy2\cache\org.webjars\jquery\jars\jquery-1.10.2.jar
[debug] Scala compilation took 6.133828284 s
[debug] New invalidations:
[debug] Set()
[debug] Initial set of included nodes: Set()
[debug] Previously invalidated, but (transitively) depend on new invalidations:
[debug] Set()
[debug] All newly invalidated sources after taking into account (previously) recompiled sources:Set()
scalajvm/compile:compile
[debug]
[debug] Initial source changes:
[debug] removed:Set()
[debug] added: Set()
[debug] modified: Set(C:\Users\jducoeur\Documents\GitHub\Querki\querki\scala\src\main\scala\qtexttest\LineParsers.scala, C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\app\qtext\LineParsers.scala)
[debug] Removed products: Set()
[debug] External API changes: API Changes: Set()
[debug] Modified binary dependencies: Set()
[debug] Initial directly invalidated sources: Set(C:\Users\jducoeur\Documents\GitHub\Querki\querki\scala\src\main\scala\qtexttest\LineParsers.scala, C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\app\qtext\LineParsers.scala)
[debug]
[debug] Sources indirectly invalidated by:
[debug] product: Set()
[debug] binary dep: Set()
[debug] external source: Set()
[debug] All initially invalidated sources: Set(C:\Users\jducoeur\Documents\GitHub\Querki\querki\scala\src\main\scala\qtexttest\LineParsers.scala, C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\app\qtext\LineParsers.scala)
java.io.FileNotFoundException: C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\target\scala-2.11\classes.bak\sbt1117112335838311075.class (Access is denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
at sbt.Using$$anonfun$fileOutputChannel$1.apply(Using.scala:82)
at sbt.Using$$anonfun$fileOutputChannel$1.apply(Using.scala:82)
at sbt.Using$$anon$2.openImpl(Using.scala:72)
at sbt.OpenFile$class.open(Using.scala:46)
at sbt.Using$$anon$2.open(Using.scala:70)
at sbt.Using$$anon$2.open(Using.scala:70)
at sbt.Using.apply(Using.scala:24)
at sbt.IO$$anonfun$copyFile$3.apply(IO.scala:583)
at sbt.IO$$anonfun$copyFile$3.apply(IO.scala:582)
at sbt.Using.apply(Using.scala:25)
at sbt.IO$.copyFile(IO.scala:582)
at sbt.IO$.move(IO.scala:764)
at sbt.inc.ClassfileManager$$anonfun$transactional$1$$anon$2.sbt$inc$ClassfileManager$$anonfun$$anon$$move(ClassfileManager.scala:77)
at sbt.inc.ClassfileManager$$anonfun$transactional$1$$anon$2$$anonfun$delete$3.apply(ClassfileManager.scala:53)
at sbt.inc.ClassfileManager$$anonfun$transactional$1$$anon$2$$anonfun$delete$3.apply(ClassfileManager.scala:52)
at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:153)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at sbt.inc.ClassfileManager$$anonfun$transactional$1$$anon$2.delete(ClassfileManager.scala:52)
at sbt.inc.Incremental$.prune(Incremental.scala:58)
at sbt.inc.IncrementalCommon.cycle(Incremental.scala:96)
at sbt.inc.Incremental$$anonfun$1.apply(Incremental.scala:38)
at sbt.inc.Incremental$$anonfun$1.apply(Incremental.scala:37)
at sbt.inc.Incremental$.manageClassfiles(Incremental.scala:65)
at sbt.inc.Incremental$.compile(Incremental.scala:37)
at sbt.inc.IncrementalCompile$.apply(Compile.scala:27)
at sbt.compiler.AggressiveCompile.compile2(AggressiveCompile.scala:157)
at sbt.compiler.AggressiveCompile.compile1(AggressiveCompile.scala:71)
at sbt.compiler.AggressiveCompile.apply(AggressiveCompile.scala:46)
at sbt.Compiler$.apply(Compiler.scala:75)
at sbt.Compiler$.apply(Compiler.scala:66)
at sbt.Defaults$.sbt$Defaults$$compileTaskImpl(Defaults.scala:770)
at sbt.Defaults$$anonfun$compileTask$1.apply(Defaults.scala:762)
at sbt.Defaults$$anonfun$compileTask$1.apply(Defaults.scala:762)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
at sbt.std.Transform$$anon$4.work(System.scala:64)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:244)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[error] (scalajvm/compile:compile) java.io.FileNotFoundException: C:\Users\jducoeur\Documents\GitHub\Querki\querki\scalajvm\target\scala-2.11\classes.bak\sbt1117112335838311075.class (Access is denied)
[Querki] $

How to prevent java.lang.OutOfMemoryError: PermGen space? [duplicate]

This question already has answers here:
How to prevent java.lang.OutOfMemoryError: PermGen space at Scala compilation?
(9 answers)
Closed 9 years ago.
I am frequently getting an OutOfMemoryError from SBT.
> test
[error] java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: PermGen space
[error] Use 'last' for the full log.
> last
[debug] Running task... Cancelable: false, check cycles: false
[debug]
[debug] Initial source changes:
[debug] removed:Set()
[debug] added: Set()
[debug] modified: Set()
[debug] Removed products: Set()
[debug] Modified external sources: Set()
[debug] Modified binary dependencies: Set()
[debug] Initial directly invalidated sources: Set()
[debug]
[debug] Sources indirectly invalidated by:
[debug] product: Set()
[debug] binary dep: Set()
[debug] external source: Set()
[debug] Initially invalidated: Set()
[debug] Copy resource mappings:
[debug]
[debug]
[debug] Initial source changes:
[debug] removed:Set()
[debug] added: Set()
[debug] modified: Set()
[debug] Removed products: Set()
[debug] Modified external sources: Set()
[debug] Modified binary dependencies: Set()
[debug] Initial directly invalidated sources: Set()
[debug]
[debug] Sources indirectly invalidated by:
[debug] product: Set()
[debug] binary dep: Set()
[debug] external source: Set()
[debug] Initially invalidated: Set()
[debug] Copy resource mappings:
[debug]
[debug] Framework implementation 'org.scalacheck.ScalaCheckFramework' not present.
[debug] Framework implementation 'org.specs.runner.SpecsFramework' not present.
[debug] Framework implementation 'org.scalatest.tools.ScalaTestFramework' not present.
[debug] Framework implementation 'com.novocode.junit.JUnitFramework' not present.
[debug] Subclass fingerprints: Stream((org.specs2.specification.SpecificationStructure,false,org.specs2.runner.Fingerprints$$anon$1@34d6488c), ?)
[debug] Annotation fingerprints: Stream()
[debug] Running Test ExpandoObjectTest : subclass(false, org.specs2.specification.SpecificationStructure) with arguments
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: PermGen space
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at sbt.ConcurrentRestrictions$$anon$4.take(ConcurrentRestrictions.scala:196)
at sbt.Execute.next$1(Execute.scala:85)
at sbt.Execute.processAll(Execute.scala:88)
at sbt.Execute.runKeep(Execute.scala:68)
at sbt.EvaluateTask$.run$1(EvaluateTask.scala:162)
at sbt.EvaluateTask$.runTask(EvaluateTask.scala:177)
at sbt.Aggregation$$anonfun$4.apply(Aggregation.scala:46)
at sbt.Aggregation$$anonfun$4.apply(Aggregation.scala:44)
at sbt.EvaluateTask$.withStreams(EvaluateTask.scala:137)
at sbt.Aggregation$.runTasksWithResult(Aggregation.scala:44)
at sbt.Aggregation$.runTasks(Aggregation.scala:59)
at sbt.Aggregation$$anonfun$applyTasks$1.apply(Aggregation.scala:31)
at sbt.Aggregation$$anonfun$applyTasks$1.apply(Aggregation.scala:30)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$.process(Command.scala:90)
at sbt.MainLoop$$anonfun$next$1$$anonfun$apply$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$next$1$$anonfun$apply$1.apply(MainLoop.scala:71)
at sbt.State$$anon$2.process(State.scala:170)
at sbt.MainLoop$$anonfun$next$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$next$1.apply(MainLoop.scala:71)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.MainLoop$.next(MainLoop.scala:71)
at sbt.MainLoop$.run(MainLoop.scala:64)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:53)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:50)
at sbt.Using.apply(Using.scala:25)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:50)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:33)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:17)
at sbt.MainLoop$.runLogged(MainLoop.scala:13)
at sbt.xMain.run(Main.scala:26)
at xsbt.boot.Launch$.run(Launch.scala:55)
at xsbt.boot.Launch$$anonfun$explicit$1.apply(Launch.scala:45)
at xsbt.boot.Launch$.launch(Launch.scala:69)
at xsbt.boot.Launch$.apply(Launch.scala:16)
at xsbt.boot.Boot$.runImpl(Boot.scala:31)
at xsbt.boot.Boot$.main(Boot.scala:20)
at xsbt.boot.Boot.main(Boot.scala)
Caused by: java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at sbt.Project$$anon$5.apply(Project.scala:130)
at sbt.Project$$anon$5.apply(Project.scala:128)
at sbt.LogManager$.commandBase$1(LogManager.scala:59)
at sbt.LogManager$.command$1(LogManager.scala:60)
at sbt.LogManager$$anonfun$suppressedMessage$1.apply(LogManager.scala:61)
at sbt.LogManager$$anonfun$suppressedMessage$1.apply(LogManager.scala:61)
at sbt.ConsoleLogger.trace(ConsoleLogger.scala:163)
at sbt.AbstractLogger.log(Logger.scala:32)
at sbt.MultiLogger$$anonfun$dispatch$1.apply(MultiLogger.scala:40)
at sbt.MultiLogger$$anonfun$dispatch$1.apply(MultiLogger.scala:38)
at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
at scala.collection.immutable.List.foreach(List.scala:76)
at sbt.MultiLogger.dispatch(MultiLogger.scala:38)
at sbt.MultiLogger.trace(MultiLogger.scala:30)
at sbt.TestLogger$$anon$2.trace(TestReportListener.scala:71)
at sbt.TestLogger.endGroup(TestReportListener.scala:88)
at sbt.TestRunner$$anonfun$run$5.apply(TestFramework.scala:87)
at sbt.TestRunner$$anonfun$run$5.apply(TestFramework.scala:87)
at sbt.TestFramework$$anonfun$safeForeach$1.apply(TestFramework.scala:112)
at sbt.TestFramework$$anonfun$safeForeach$1.apply(TestFramework.scala:112)
at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
[error] java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: PermGen space
[error] Use 'last' for the full log.
Sometimes it also exits abruptly with:
sbt appears to be exiting abnormally.
The log file for this session is at /var/folders/vf/3khb58091wd0_1rz1yh6knb00000gp/T/sbt3242766352271599341.log
java.lang.OutOfMemoryError: PermGen space
Error during sbt execution: java.lang.OutOfMemoryError: PermGen space
Any solutions?
This sometimes happens when you compile huge codebases - a lot of classes get loaded into the VM running sbt.
You need to increase the PermGen space for sbt - use the flag -XX:MaxPermSize=256m, where you can replace 256 with the desired size of the permanent generation in megabytes.
Run:
cat `which sbt`
to locate your sbt startup script. Then edit it to include the flag in the java command that runs the sbt launcher, in a similar way as described here for modifying -Xmx and -Xms.
Adding the -XX:+CMSClassUnloadingEnabled flag should also enable sbt to unload class loaders, and the classes they hold, from previous compilation runs that are no longer in use.
EDIT:
Alternatively, you can set these options in the SBT_OPTS environment variable if you are using the extended script for running sbt.
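For example (a sketch; the sizes are illustrative, and SBT_OPTS is read by the common sbt launcher scripts):
# give the sbt JVM more PermGen space and let it unload stale classes
export SBT_OPTS="-XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
sbt test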

Hadoop Eclipse PlugIn Error: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.EOFException

I saw a similar question being raised one year ago. Here is the link:
see here
I have a similar configuration, but I am facing the same EOFException error.
Is it a problem with the Hadoop plugin for Eclipse, or something in my Hadoop configuration? (Note: I have followed the standard configuration, so there should be no mistake there; also, when I run bin/start-all.sh, the single-node Hadoop cluster runs fine.)
Below is the stack trace from Eclipse while connecting to HDFS:
java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at org.apache.hadoop.mapred.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:455)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:442)
at org.apache.hadoop.eclipse.server.HadoopServer.getJobClient(HadoopServer.java:473)
at org.apache.hadoop.eclipse.server.HadoopServer$LocationStatusUpdater.run(HadoopServer.java:102)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
The Hadoop NameNode log is as follows:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = somnath-laptop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-02-11 13:26:56,505 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-11 13:26:56,520 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-11 13:26:56,521 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-11 13:26:56,521 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-11 13:26:56,792 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-11 13:26:56,797 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-11 13:26:56,807 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-11 13:26:56,809 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-11 13:26:56,882 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-02-11 13:26:56,882 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-02-11 13:26:56,883 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-02-11 13:26:56,883 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-11 13:26:56,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-11 13:26:56,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-11 13:26:56,957 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-11 13:26:57,023 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-11 13:26:57,109 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 7
2013-02-11 13:26:57,123 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-11 13:26:57,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 loaded in 0 seconds.
2013-02-11 13:26:57,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-11 13:26:57,126 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
2013-02-11 13:26:57,548 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
2013-02-11 13:26:57,825 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-11 13:26:57,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 923 msecs
2013-02-11 13:26:57,830 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2013-02-11 13:26:57,838 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-11 13:26:57,845 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-11 13:26:57,870 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort54310 registered.
2013-02-11 13:26:57,871 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort54310 registered.
2013-02-11 13:26:57,873 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:54310
2013-02-11 13:26:57,879 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-11 13:27:03,041 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-11 13:27:03,142 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-11 13:27:03,156 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-11 13:27:03,164 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-11 13:27:03,165 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-02-11 13:27:03,166 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-02-11 13:27:03,166 INFO org.mortbay.log: jetty-6.1.26
2013-02-11 13:27:03,585 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-02-11 13:27:03,585 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-02-11 13:27:03,586 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
2013-02-11 13:27:03,589 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
2013-02-11 13:27:07,306 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2013-02-11 13:27:07,308 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51327: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:14,657 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-747527201-127.0.1.1-50010-1360274163059
2013-02-11 13:27:14,661 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2013-02-11 13:27:14,687 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
2013-02-11 13:27:14,687 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 1, processing time: 3 msecs
2013-02-11 13:27:15,988 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:15,988 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:16,007 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:16,008 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:16,023 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5178)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:2338)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getListing(NameNode.java:831)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:16,024 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:17,327 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
2013-02-11 13:27:17,327 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51335: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:27,339 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
2013-02-11 13:27:27,339 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51336: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Any quick help will be appreciated.
I had the same problem; I solved it by:
1. Packaging the job as a .jar through Eclipse.
2. Running it in Hadoop: hadoop jar <jar_name> main_class_name.
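For example (the jar name, main class, and HDFS paths below are placeholders):
# run the exported job jar on the cluster
hadoop jar MyJob.jar com.example.MyJobDriver /user/hduser/input /user/hduser/output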

Maven build failure: Read time out

I have tried all the possible solutions found here, including:
find ~/.m2 -name "*.lastUpdated" -exec grep -q "Could not transfer" {} \; -print -exec rm {} \;
mvn -U clean
None of it works; the specific log is below (by the way, I was trying to run a Struts2 sample from the mkyong site):
[DEBUG] =======================================================================
[DEBUG] com.mkyong.common:Struts2Example:war:com.mkyong.common
[DEBUG] junit:junit:jar:3.8.1:test
[DEBUG] org.apache.struts:struts2-core:jar:2.1.8:compile
[DEBUG] com.opensymphony:xwork-core:jar:2.1.6:compile
[DEBUG] org.springframework:spring-test:jar:2.5.6:compile
[DEBUG] commons-logging:commons-logging:jar:1.1.1:compile
[DEBUG] org.freemarker:freemarker:jar:2.3.15:compile
[DEBUG] ognl:ognl:jar:2.7.3:compile
[DEBUG] commons-fileupload:commons-fileupload:jar:1.2.1:compile
[DEBUG] commons-io:commons-io:jar:1.3.2:compile
[DEBUG] com.sun:tools:jar:1.5.0:system
[DEBUG] org.apache.struts:struts2-convention-plugin:jar:2.1.8:compile
[DEBUG] Using connector WagonRepositoryConnector with priority 0 for http://repo.maven.apache.org/maven2
Downloading: http://repo.maven.apache.org/maven2/org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar
[DEBUG] Writing resolution tracking file D:\ProgramData\apache-maven-repo\.m2\repository\org\apache\struts\struts2-core\2.1.8\struts2-core-2.1.8.jar.lastUpdated
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 52:31.162s
[INFO] Finished at: Fri Jan 18 20:16:49 CST 2013
[INFO] Final Memory: 2M/5M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project Struts2Example: Could not resolve dependencies for project com.mkyong.common:Struts2Example:war:com.mkyong.common: Could not transfer artifact org.apache.struts:struts2-core:jar:2.1.8 from/to central (http://repo.maven.apache.org/maven2): GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed: Read timed out -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal on project Struts2Example: Could not resolve dependencies for project com.mkyong.common:Struts2Example:war:com.mkyong.common: Could not transfer artifact org.apache.struts:struts2-core:jar:2.1.8 from/to central (http://repo.maven.apache.org/maven2): GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed
at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:210)
at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:117)
at org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:258)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:201)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.project.DependencyResolutionException: Could not resolve dependencies for project com.mkyong.common:Struts2Example:war:com.mkyong.common: Could not transfer artifact org.apache.struts:struts2-core:jar:2.1.8 from/to central (http://repo.maven.apache.org/maven2): GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed
at org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:189)
at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:185)
... 22 more
Caused by: org.sonatype.aether.resolution.DependencyResolutionException: Could not transfer artifact org.apache.struts:struts2-core:jar:2.1.8 from/to central (http://repo.maven.apache.org/maven2): GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed
at org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:375)
at org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:183)
... 23 more
Caused by: org.sonatype.aether.resolution.ArtifactResolutionException: Could not transfer artifact org.apache.struts:struts2-core:jar:2.1.8 from/to central (http://repo.maven.apache.org/maven2): GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed
at org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:538)
at org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:216)
at org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:358)
... 24 more
Caused by: org.sonatype.aether.transfer.ArtifactTransferException: Could not transfer artifact org.apache.struts:struts2-core:jar:2.1.8 from/to central (http://repo.maven.apache.org/maven2): GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed
at org.sonatype.aether.connector.wagon.WagonRepositoryConnector$4.wrap(WagonRepositoryConnector.java:951)
at org.sonatype.aether.connector.wagon.WagonRepositoryConnector$4.wrap(WagonRepositoryConnector.java:941)
at org.sonatype.aether.connector.wagon.WagonRepositoryConnector$GetTask.run(WagonRepositoryConnector.java:669)
at org.sonatype.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:60)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.maven.wagon.TransferFailedException: GET request of: org/apache/struts/struts2-core/2.1.8/struts2-core-2.1.8.jar from central failed
at org.apache.maven.wagon.AbstractWagon.getTransfer(AbstractWagon.java:349)
at org.apache.maven.wagon.AbstractWagon.getTransfer(AbstractWagon.java:310)
at org.apache.maven.wagon.AbstractWagon.getTransfer(AbstractWagon.java:287)
at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:97)
at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61)
at org.sonatype.aether.connector.wagon.WagonRepositoryConnector$GetTask.run(WagonRepositoryConnector.java:601)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at org.apache.maven.wagon.providers.http.httpclient.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:187)
at org.apache.maven.wagon.providers.http.httpclient.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
at org.apache.maven.wagon.providers.http.httpclient.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
at org.apache.maven.wagon.AbstractWagon.transfer(AbstractWagon.java:493)
at org.apache.maven.wagon.AbstractWagon.getTransfer(AbstractWagon.java:339)
... 9 more
Read timed out means you have a network problem. Either repo.maven.apache.org is down (very unlikely) or you have connectivity issues on your end.
Read Timeout errors can also happen when resolving large dependencies over slow networks (which you may not be able to fix by yourself). You may also want to consider this other thread, where you will find how to increase the Maven Wagon timeout, which will help in such cases.
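For example, a sketch of raising the Wagon read timeout from the command line; the maven.wagon.rto system property (read timeout in milliseconds) is read by the wagon-http provider, but whether it applies depends on your Maven/Wagon version, so verify it against the thread above:
# retry the build with a 30-minute read timeout for artifact downloads
mvn clean install -Dmaven.wagon.rto=1800000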
Also check the official documentation for more details.
But using wget I am able to download it. The build almost halts here; I have waited more than 15 minutes:
[INFO]
[INFO] --- maven-checkstyle-plugin:2.13:check (validate-checkstyle) @ onos ---
[DEBUG] Using connector WagonRepositoryConnector with priority 0.0 for http://repo.maven.apache.org/maven2
Downloading: http://repo.maven.apache.org/maven2/org/apache/maven/doxia/doxia-module-xhtml/1.2/doxia-module-xhtml-1.2.pom
2/2 KB
Regards,
Bala