I'm trying to rewrite the Mule start script so that it works as a service on RHEL.
Currently I have it mostly done: it starts, and most of the log files are being written where I want them.
But there is a file literally named .log whose purpose I don't know, nor where to configure its name and path.
That file adds the following nasty lines to mule_ee.log on startup:
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: .log (Permission denied)
at java.io.FileOutputStream.openAppend(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547)
at org.mule.module.launcher.log4j.ApplicationAwareRepositorySelector.configureFrom(ApplicationAwareRepositorySelector.java:166)
at org.mule.module.launcher.log4j.ApplicationAwareRepositorySelector.getLoggerRepository(ApplicationAwareRepositorySelector.java:95)
at org.apache.log4j.LogManager.getLoggerRepository(LogManager.java:208)
at org.apache.log4j.LogManager.getLogger(LogManager.java:228)
at org.mule.module.logging.MuleLoggerFactory.getLogger(MuleLoggerFactory.java:77)
at org.mule.module.logging.DispatchingLogger.getLogger(DispatchingLogger.java:419)
at org.mule.module.logging.DispatchingLogger.isInfoEnabled(DispatchingLogger.java:191)
at org.apache.commons.logging.impl.SLF4JLog.isInfoEnabled(SLF4JLog.java:78)
at org.mule.module.launcher.application.DefaultMuleApplication.init(DefaultMuleApplication.java:188)
at org.mule.module.launcher.application.PriviledgedMuleApplication.init(PriviledgedMuleApplication.java:46)
at org.mule.module.launcher.application.ApplicationWrapper.init(ApplicationWrapper.java:64)
at org.mule.module.launcher.DefaultMuleDeployer.deploy(DefaultMuleDeployer.java:46)
at org.mule.module.launcher.DeploymentService.guardedDeploy(DeploymentService.java:398)
at org.mule.module.launcher.DeploymentService.start(DeploymentService.java:181)
at org.mule.module.launcher.MuleContainer.start(MuleContainer.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.mule.module.reboot.MuleContainerWrapper.start(MuleContainerWrapper.java:56)
at org.tanukisoftware.wrapper.WrapperManager$12.run(WrapperManager.java:3925)
What is that .log file for? Where is the configuration file in which I can set it to be written somewhere the mule user has write permissions?
It seems that log4j can't write that file because of OS permissions. You can chmod to add permissions in your MULE_HOME dir, and also review your log4j configuration to see why it is trying to write a .log file.
The .log file turned out to come from the 00_mmc-agent app.
The problem, therefore, was in the log4j.properties file located in ./apps/00_mmc-agent/classes. That file has the following appender configured: log4j.appender.file.File=${app.name}.log
The ${app.name} variable does not appear to be set anywhere (not even by the default, unmodified Mule start script), hence the bare .log file name.
In ./apps/mmc/webapps/mmc/WEB-INF/classes/log4j.properties the appender is configured like this: log4j.appender.R.File=${mule.home}/logs/mmc-console-app.log
So, to fix the startup error, I changed the appender in the log4j.properties file located in ./apps/00_mmc-agent/classes to use this path:
log4j.appender.file.File=${mule.home}/logs/00_mmc-agent.log
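For reference, the edited fragment of ./apps/00_mmc-agent/classes/log4j.properties would look something like this (the appender name `file` comes from the stock MMC agent config shown above; only the File path changes):

```properties
# Before: log4j.appender.file.File=${app.name}.log
# ${app.name} resolves to an empty string here, so log4j tried to open a
# file literally named ".log" in the working directory.
log4j.appender.file.File=${mule.home}/logs/00_mmc-agent.log
```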
Related
I am trying to run a Scala Play app and got this exception:
Caused by: com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'memo'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:152) ~[config-1.3.1.jar:na]
at com.typesafe.config.impl.SimpleConfig.findOrNull(SimpleConfig.java:170) ~[config-1.3.1.jar:na]
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:184) ~[config-1.3.1.jar:na]
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:189) ~[config-1.3.1.jar:na]
at com.typesafe.config.impl.SimpleConfig.getObject(SimpleConfig.java:264) ~[config-1.3.1.jar:na]
at com.typesafe.config.impl.SimpleConfig.getConfig(SimpleConfig.java:270) ~[config-1.3.1.jar:na]
at com.typesafe.config.impl.SimpleConfig.getConfig(SimpleConfig.java:37) ~[config-1.3.1.jar:na]
at lila.common.PlayApp$.loadConfig(PlayApp.scala:24) ~[na:na]
at lila.memo.Env$.lila$memo$Env$$$anonfun$1(Env.scala:28) ~[na:na]
at lila.common.Chronometer$.sync(Chronometer.scala:56) ~[na:na]
I passed an external reference to the configuration file, like this:
run -Dconfig.resource=conf/base.conf
I am new to Scala. I tried to find config-1.3.1.jar but was unable to. Can you please suggest how to overcome this situation?
In base.conf, the configuration for memo is present.
Try without conf/: run -Dconfig.resource=base.conf
From Play 2.6 docs
Using -Dconfig.resource
This will search for an alternative configuration file in the application classpath (you usually provide these alternative configuration files into your application conf/ directory before packaging). Play will look into conf/ so you don’t have to add conf/.
$ /path/to/bin/ -Dconfig.resource=prod.conf
Using -Dconfig.file
You can also specify another local configuration file not packaged into the application artifacts:
$ /path/to/bin/ -Dconfig.file=/opt/conf/prod.conf
If you are configuring an sbt task from IntelliJ you must quote the entire command.
"run -Dconfig.resource=prod.conf"
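The ConfigException$Missing means the top-level `memo` key was not found in whichever file was actually loaded; passing `conf/base.conf` to `-Dconfig.resource` makes Play look for `conf/conf/base.conf`, which silently falls back elsewhere. Once the right file loads, it needs a block like this (the keys under `memo` are illustrative, not taken from the asker's app):

```hocon
# base.conf -- HOCON format read by Typesafe Config.
# The application resolves config.getConfig("memo"), so a top-level
# "memo" object must exist; its contents are application-specific.
memo {
  # ... application-specific settings ...
}
```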
I deployed the war file to jboss-as-web-7.0.2.Final\standalone\deployments\xyz.war, and it deployed successfully. The file WEB-INF\classes\xyz.cer exists inside my war; I use it for digital signatures. But when I call my web service, it throws java.io.FileNotFoundException.
This is the log file:
16:49:58,434 ERROR [stderr] (http--0.0.0.0-8080-1)
java.io.FileNotFoundException: C:\Users\Administrator\Desktop\jboss-as-web-7.0.2.Final\standalone\tmp\vfs\temp9f2b48049e74ca68\xxx.war-3c4430f5fbb60e88\WEB-INF\classes\xyz.cer (The system cannot find the file specified)
On Windows, this may mean that you do not have permission to access the file, that the file is currently open in another process (which prevents writing to it), or that the file is set to read-only. In short, it occurs due to a lack of proper access permissions.
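A common fix for this class of problem, offered here as a hedged sketch rather than the asker's actual code: load the certificate from the classpath instead of via a filesystem path. An entry such as WEB-INF/classes/xyz.cer inside a deployed war (including JBoss's vfs temp copy) may not be reachable as a regular java.io.File, so `new FileInputStream(...)` throws FileNotFoundException, while `getResourceAsStream` works in both exploded and packaged deployments:

```java
import java.io.InputStream;

public class CertLoader {
    // Opens a classpath resource, e.g. "xyz.cer" sitting in WEB-INF/classes.
    // Returns the stream, or throws if the resource is not on the classpath.
    public static InputStream openResource(String name) {
        InputStream in = CertLoader.class.getClassLoader()
                .getResourceAsStream(name);
        if (in == null) {
            throw new IllegalStateException(name + " not found on classpath");
        }
        return in;
    }
}
```

The caller then reads the certificate bytes from the stream instead of constructing a File from a hard-coded path.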
When I try to 'Activate' my newly deployed .WAR file on WebLogic, I get an error in the AdminServer log file:
weblogic.management.DeploymentException: Exception occured while
downloading files
at weblogic.deploy.internal.targetserver.datamanagement.AppDataUpdate.doDownload(AppDataUpdate.java:49)
at weblogic.deploy.internal.targetserver.datamanagement.DataUpdate.download(DataUpdate.java:57)
at weblogic.deploy.internal.targetserver.datamanagement.Data.prepareDataUpdate(Data.java:117)
at weblogic.deploy.internal.targetserver.BasicDeployment.prepareDataUpdate(BasicDeployment.java:750)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepareDataUpdate(AbstractOperation.java:918)
Caused By:
java.io.IOException: There is not enough space on the disk
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:121)
There is plenty of space on the E drive where WebLogic and the application reside. I tried moving the log files out of tmp and restarted one of the instances (there are two, load-balanced), but it didn't help.
Any suggestions? Thanks in advance.
I should have checked both servers my application runs on. The A10P002 server's disk was out of space. I deleted old .log files and the server is back to normal; I am able to deploy again. No more 'java.io.IOException'.
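A quick way to rule out this failure mode before digging into WebLogic itself is to check free space on every partition the server writes to, from the JVM's own point of view. A minimal sketch (the path argument is illustrative):

```java
import java.io.File;

public class DiskSpace {
    // Usable space, in megabytes, on the partition containing the given path.
    // Returns 0 if the path does not exist.
    public static long usableMb(String path) {
        return new File(path).getUsableSpace() / (1024L * 1024L);
    }

    public static void main(String[] args) {
        // e.g. the domain's tmp directory on each managed server
        System.out.println("Usable MB: " + usableMb("."));
    }
}
```

Run it (or the equivalent one-liner) on each load-balanced node; as the answer above shows, only one of the two servers may be out of space.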
I am trying to run KMeans on Hadoop, using these guidelines:
http://www.slideshare.net/titusdamaiyanti/hadoop-installation-k-means-clustering-mapreduce?qid=44b5881c-089d-474b-b01d-c35a2f91cc67&v=qf1&b=&from_search=1#likes-panel
I am running this in Eclipse Luna. When I execute it, both map and reduce show 100% complete, but I am not getting any output. Instead, I get the following error at the end. Please help me solve this.
15/03/20 11:29:44 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-hduser/mapred/staging/hduser378797276/.staging/job_local378797276_0002
15/03/20 11:29:44 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.io.IOException: No input paths specified in job
Exception in thread "main" java.io.IOException: No input paths specified in job
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:193)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:55)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at com.clustering.mapreduce.KMeansClusteringJob.main(KMeansClusteringJob.java:114)
You have to provide the input file location before running the MapReduce program. There are two ways of providing it:
Using Eclipse, go to Run Configurations and provide the file locations as arguments.
Package your program as a jar file and run the command below inside your Hadoop cluster:
hadoop jar NameOfYourJarFile InputFileLocation OutPutFileLocation
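In the driver class, missing arguments are exactly what produces "No input paths specified in job". A small stdlib-only sketch of the fail-fast check a driver's main can do before job submission (in the real driver the validated values would then be passed to FileInputFormat.addInputPath and FileOutputFormat.setOutputPath; class and message names here are illustrative):

```java
public class JobArgs {
    // Validates that input and output paths were supplied as program
    // arguments; throws with a usage hint otherwise.
    public static String[] require(String[] args) {
        if (args == null || args.length < 2) {
            throw new IllegalArgumentException(
                    "Usage: hadoop jar <jar> <input path> <output path>");
        }
        return args;
    }

    public static void main(String[] args) {
        String[] checked = require(new String[] {"hdfs:///in", "hdfs:///out"});
        System.out.println(checked[0] + " -> " + checked[1]);
    }
}
```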
I'm running a GWT app in dev mode with a custom Jetty container. The app loads fine the first time; however, if I refresh it, I get the following errors in the Development Mode window (paths changed):
00:16:44.854 [ERROR] Unable to create file 'C:\somePath\src\war\msjavaSnack\C4EA130FD0ED44BE513FEEDDE13614DA.cache.png'
java.io.FileNotFoundException: C:\somePath\src\war\msjavaSnack\C4EA130FD0ED44BE513FEEDDE13614DA.cache.png (The requested operation cannot be performed on a file with a user-mapped section open)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
at com.google.gwt.core.ext.linker.impl.StandardLinkerContext.writeArtifactToFile(StandardLinkerContext.java:658)
at com.google.gwt.core.ext.linker.impl.StandardLinkerContext.produceOutputDirectory(StandardLinkerContext.java:595)
at com.google.gwt.dev.DevMode.produceOutput(DevMode.java:476)
at com.google.gwt.dev.DevModeBase.relink(DevModeBase.java:1131)
at com.google.gwt.dev.DevModeBase.access$000(DevModeBase.java:67)
at com.google.gwt.dev.DevModeBase$2.accept(DevModeBase.java:1076)
at com.google.gwt.dev.shell.ShellModuleSpaceHost$1.accept(ShellModuleSpaceHost.java:122)
at com.google.gwt.dev.shell.StandardRebindOracle$Rebinder.rebind(StandardRebindOracle.java:59)
at com.google.gwt.dev.shell.StandardRebindOracle.rebind(StandardRebindOracle.java:154)
at com.google.gwt.dev.shell.ShellModuleSpaceHost.rebind(ShellModuleSpaceHost.java:119)
at com.google.gwt.dev.shell.ModuleSpace.rebind(ModuleSpace.java:531)
at com.google.gwt.dev.shell.ModuleSpace.rebindAndCreate(ModuleSpace.java:414)
at com.google.gwt.dev.shell.GWTBridgeImpl.create(GWTBridgeImpl.java:39)
at com.google.gwt.core.client.GWT.create(GWT.java:98)
at com.extjs.gxt.ui.client.GXT.<clinit>(GXT.java:38)
at com.extjs.gxt.ui.client.widget.Component.<clinit>(Component.java:202)
at msjava.snack.gui.client.MSHeaderPanelViewport.<init>(MSHeaderPanelViewport.java:62)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.google.gwt.dev.shell.ModuleSpace.rebindAndCreate(ModuleSpace.java:422)
at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:361)
at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
at java.lang.Thread.run(Thread.java:662)
UPDATE
In Process Explorer I can see that the JVM process hosting Jetty has an open handle to the file, so this is probably why the other JVM (hosting dev mode) can't write to it. Any way to work around it?
It seems that two open/close routines on the same file, happening too soon one after another, cause this. Some developers suggest calling the GC. Check that every I/O operation is closed correctly, and avoid performing a complete open/write/close cycle too quickly: the problem seems to occur when a second request arrives while the first operation is still finishing.
I found the solution on this page:
http://docs.codehaus.org/display/JETTY/Files+locked+on+Windows
EDIT:
The above URL is dead now.
Now it's quite easy to find the solution in Jetty's wiki:
Jetty provides a configuration switch in the webdefault.xml file for
the DefaultServlet that enables or disables the use of memory-mapped
files. If you are running on Windows and are having file-locking
problems, you should set this switch to disable memory-mapped file
buffers.
The default webdefault.xml file is in the lib/jetty.jar at
org/eclipse/jetty/webapp/webdefault.xml. Extract it to a convenient
disk location and edit it to change useFileMappedBuffer to false.
https://wiki.eclipse.org/Jetty/Howto/Deal_with_Locked_Windows_Files
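Per the quoted Jetty instructions, the edit in the extracted webdefault.xml is an init-param on the DefaultServlet. A fragment of what to change (the surrounding servlet definition is elided; only the param-value is edited, from true to false):

```xml
<!-- inside the DefaultServlet <servlet> definition in webdefault.xml -->
<init-param>
  <param-name>useFileMappedBuffer</param-name>
  <param-value>false</param-value>
</init-param>
```

With memory-mapped buffers disabled, Windows no longer holds a user-mapped section open on served files, so the dev-mode JVM can overwrite them on refresh.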