Install4j - UndefinedVariableException for installer variable

I know there has been a bug report on this subject, but it seems I'm still getting this error when the installer runs in a Linux environment (on Windows it works fine).
I'm using v6.1.6 [build: 6459]
What I tried to do:
I defined a "Set a variable" action that reads the $JAVA_HOME path, if it exists, and assigns it to the linuxJavaHome variable.
Later on, I defined a "Run executable or batch file" action that gets the Java version using the previously set installer variable linuxJavaHome.
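The script body of the "Set a variable" action is essentially the following (a sketch; the fallback to the java.home system property is only an illustration of how a null value could be avoided, it was not part of my original setup):

// install4j "Set a variable" script, variable name: linuxJavaHome (sketch)
String javaHome = System.getenv("JAVA_HOME");
if (javaHome == null || javaHome.trim().isEmpty()) {
    // illustrative fallback so the variable is never null, e.g. in a headless CI shell
    javaHome = System.getProperty("java.home");
}
return javaHome;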
I checked my code in multiple Linux environments and the variable is defined just fine with the Java home path. Moreover, I've also tried debugging my code to be sure, and it seems OK.
My only problem is that when I build and run our installer from Jenkins in silent mode (-q), I always get this UndefinedVariableException. I narrowed down the problematic point: it happens at the first "Set a variable" step.
The exception stacktrace is:
com.install4j.api.beans.UndefinedVariableException: installer:linuxJavaHome
13:03:08 Error log: /home/administrator/installer/install4jError2195873440024941405.log
13:03:08 com.install4j.api.beans.UndefinedVariableException: installer:linuxJavaHome
13:03:08 at com.install4j.runtime.installer.InstallerVariables$InstallerReplacementCallback.handleError(InstallerVariables.java:971)
13:03:08 at com.install4j.runtime.installer.InstallerVariables$InstallerReplacementCallback.getReplacement(InstallerVariables.java:950)
13:03:08 at com.install4j.runtime.util.StringUtil.replaceVariable(StringUtil.java:68)
13:03:08 at com.install4j.runtime.installer.InstallerVariables.replaceVariables(InstallerVariables.java:337)
13:03:08 at com.install4j.runtime.installer.InstallerVariables.replaceVariables(InstallerVariables.java:326)
13:03:08 at com.install4j.runtime.installer.InstallerVariables.replaceVariables(InstallerVariables.java:322)
13:03:08 at com.install4j.runtime.installer.InstallerVariables.replaceVariables(InstallerVariables.java:362)
13:03:08 at com.install4j.api.beans.AbstractBean.replaceVariables(AbstractBean.java:89)
13:03:08 at com.install4j.runtime.beans.actions.misc.RunExecutableAction.getExecutable(RunExecutableAction.java:58)
13:03:08 at com.install4j.runtime.beans.actions.misc.RunExecutableAction.execute(RunExecutableAction.java:292)
13:03:08 at com.install4j.runtime.beans.actions.SystemInstallOrUninstallAction.install(SystemInstallOrUninstallAction.java:29)
13:03:08 at com.install4j.runtime.installer.ContextImpl$7.executeAction(ContextImpl.java:1668)
13:03:08 at com.install4j.runtime.installer.ContextImpl$7.fetchValue(ContextImpl.java:1659)
13:03:08 at com.install4j.runtime.installer.ContextImpl$7.fetchValue(ContextImpl.java:1656)
13:03:08 at com.install4j.runtime.installer.helper.comm.actions.FetchObjectAction.execute(FetchObjectAction.java:14)
13:03:08 at com.install4j.runtime.installer.helper.comm.HelperCommunication.executeActionDirect(HelperCommunication.java:274)
13:03:08 at com.install4j.runtime.installer.helper.comm.HelperCommunication.executeActionInt(HelperCommunication.java:259)
13:03:08 at com.install4j.runtime.installer.helper.comm.HelperCommunication.executeActionChecked(HelperCommunication.java:187)
13:03:08 at com.install4j.runtime.installer.helper.comm.HelperCommunication.fetchObjectChecked(HelperCommunication.java:170)
13:03:08 at com.install4j.runtime.installer.ContextImpl.performActionIntStatic(ContextImpl.java:1656)
13:03:08 at com.install4j.runtime.installer.InstallerContextImpl.performActionInt(InstallerContextImpl.java:151)
13:03:08 at com.install4j.runtime.installer.ContextImpl.performAction(ContextImpl.java:1103)
13:03:08 at com.install4j.runtime.installer.controller.Controller.executeAction(Controller.java:368)
13:03:08 at com.install4j.runtime.installer.controller.Controller.executeActions(Controller.java:334)
13:03:08 at com.install4j.runtime.installer.controller.Controller.executeActionGroup(Controller.java:405)
13:03:08 at com.install4j.runtime.installer.controller.Controller.executeActions(Controller.java:339)
13:03:08 at com.install4j.runtime.installer.controller.Controller.executeActionGroup(Controller.java:405)
13:03:08 at com.install4j.runtime.installer.controller.Controller.executeActions(Controller.java:339)
13:03:08 at com.install4j.runtime.installer.controller.Controller.handleCommand(Controller.java:195)
13:03:08 at com.install4j.runtime.installer.controller.Controller.handleStartup(Controller.java:116)
13:03:08 at com.install4j.runtime.installer.controller.Controller.start(Controller.java:73)
13:03:08 at com.install4j.runtime.installer.Installer.runInProcess(Installer.java:59)
13:03:08 at com.install4j.runtime.installer.Installer.main(Installer.java:46)
13:03:08 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
13:03:08 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
13:03:08 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
13:03:08 at java.lang.reflect.Method.invoke(Method.java:498)
13:03:08 at com.exe4j.runtime.LauncherEngine.launch(LauncherEngine.java:65)
13:03:08 at com.install4j.runtime.launcher.UnixLauncher.main(UnixLauncher.java:57)

In the end, what worked for me was dropping the use of the $JAVA_HOME environment variable and using the built-in installer variable ${installer:sys.javaHome} instead, which works great on both Linux and Windows.
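For reference, the working "Run executable or batch file" action configuration was roughly this (field names as they appear in the install4j IDE; a sketch, not my exact project):

Executable: ${installer:sys.javaHome}/bin/java
Arguments:  -version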

The "Set a variable" action probably sets a null variable value because the environment variable JAVA_HOME is not defined. You can select the "Fail if value is null" property of that action to catch this problem earlier.

Related

Spring Cloud Stream Kafka client exception

I used spring-cloud-starter-stream-kafka (version 2.1.2) to connect to Kafka. When I started the services in the k8s cluster, the first few could connect successfully, but the following errors occurred in subsequent projects:
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:893)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:163)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
at com.zynn.service.module.user.ZynnServiceModuleUserApplication.main(ZynnServiceModuleUserApplication.java:15)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
I need some help. Any suggestions are welcome. Thanks.
I have found the reason for the error. I must configure both "spring.kafka.bootstrap-servers" and "spring.cloud.stream.kafka.binder.brokers"; otherwise a connection pointing to localhost:9092 is created, which can never be reached. After I configured both, the project worked.
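In application.properties form, the fix looks like this (the broker address is a placeholder):

spring.kafka.bootstrap-servers=my-kafka-broker:9092
spring.cloud.stream.kafka.binder.brokers=my-kafka-broker:9092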

Getting NoClassDefFoundError when connecting to MongoDB

I get this type of error:
java.lang.NoClassDefFoundError: com/mongodb/Mongo
Controller.ConnectMongoDB.connectDB(ConnectMongoDB.java:8)
Controller.Controller.doPost(Controller.java:44)
javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
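com/mongodb/Mongo lives in the MongoDB Java driver jar, so the usual cause is that the driver is missing from the web app's runtime classpath (for a servlet on Tomcat, that means WEB-INF/lib). A quick check, as a sketch:

// minimal sketch: verify the driver class is visible at runtime
try {
    Class.forName("com.mongodb.Mongo");
    System.out.println("MongoDB driver found on the classpath");
} catch (ClassNotFoundException e) {
    System.out.println("Driver missing -- add the mongo-java-driver jar to WEB-INF/lib");
}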

Spark Job failing with exit status 15

I am trying to run a simple word count job in Spark, but I am getting an exception while running the job.
For more detailed output, check the application tracking page: http://quickstart.cloudera:8088/proxy/application_1446699275562_0006/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1446699275562_0006_02_000001
Exit code: 15
Stack trace: ExitCodeException exitCode=15:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 15
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.cloudera
start time: 1446910483956
final status: FAILED
tracking URL: http://quickstart.cloudera:8088/cluster/app/application_1446699275562_0006
user: cloudera
Exception in thread "main" org.apache.spark.SparkException: Application finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:626)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:651)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I checked the log with the following command:
yarn logs -applicationId application_1446699275562_0006
Here is the log:
15/11/07 07:35:09 ERROR yarn.ApplicationMaster: User class threw exception: Output directory hdfs://quickstart.cloudera:8020/user/cloudera/WordCountOutput already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://quickstart.cloudera:8020/user/cloudera/WordCountOutput already exists
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1053)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:954)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:863)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1290)
at org.com.td.sparkdemo.spark.WordCount$.main(WordCount.scala:23)
at org.com.td.sparkdemo.spark.WordCount.main(WordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:480)
15/11/07 07:35:09 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: Output directory hdfs://quickstart.cloudera:8020/user/cloudera/WordCountOutput already exists)
15/11/07 07:35:14 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
The exception clearly indicates that the WordCountOutput directory already exists, but I made sure that the directory was not there before running the job.
Why am I getting this error even though the directory was not there before running my job?
I came across the same issue and fixed it by adding the .set(...) call shown below (note: the value must be "false"; "true" is already the default and would not change anything):
SparkConf sparkConf = new SparkConf().setAppName("Sentiment Scoring")
        .set("spark.hadoop.validateOutputSpecs", "false"); // skip the existing-output check so the job can overwrite
Thanks,
Sathish.
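An alternative to relaxing the validation is to delete a stale output directory explicitly before the save. A sketch using the Hadoop FileSystem API (the path is the one from the error message above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    public static void main(String[] args) throws Exception {
        Path out = new Path("hdfs://quickstart.cloudera:8020/user/cloudera/WordCountOutput");
        FileSystem fs = FileSystem.get(out.toUri(), new Configuration());
        if (fs.exists(out)) {
            fs.delete(out, true); // recursive delete of the old job output
        }
    }
}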
For us this was caused by "running beyond physical memory limits". After increasing executor memory, the issue was fixed.
This error mostly occurs when you pass a wrong Spark parameter in the spark-submit command. Please check the configuration parameters. In my case, I had mistyped the namenode URL from which I needed to read resources in HDFS (the scheme was "dfs" instead of "hdfs").
Before (YARN threw error 15):
dfs://masternamenode/TESTCGNATDATA/
After (able to run the application):
hdfs://masternamenode/TESTCGNATDATA/

I have created a MapReduce job in Eclipse. I get the following error when running it. There are no errors in the code.

14/06/30 19:08:08 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/06/30 19:08:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/06/30 19:08:08 ERROR security.UserGroupInformation: PriviledgedActionException as:Anish Gupta (auth:SIMPLE) cause:java.io.IOException: Cannot run program "chmod": CreateProcess error=2, The system cannot find the file specified
Exception in thread "main" java.io.IOException: Cannot run program "chmod": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:533)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:195)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:888)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:526)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:556)
at numbersum.NumberDriver.main(NumberDriver.java:34)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 20 more
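The stack trace shows ProcessBuilder failing to launch "chmod", which typically means Hadoop is running on Windows without Unix utilities (Cygwin or winutils) on the PATH. A quick way to confirm, as a sketch:

public class ChmodCheck {
    public static void main(String[] args) {
        try {
            // reproduce the failing call outside of Hadoop
            new ProcessBuilder("chmod", "--help").start();
            System.out.println("chmod is on the PATH");
        } catch (java.io.IOException e) {
            System.out.println("chmod not found -- install Cygwin or winutils and add its bin directory to PATH");
        }
    }
}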

HDFS command line put throwing an exception

I am trying to put a file into HDFS using SSH from my client PC to the NameNode server. There are two machines: one NameNode and one DataNode. Here is the command I am trying:
$ bin/hadoop fs -fs hdfs://MY_IP:MY_PORT -put example.txt example.txt
But it throws an exception. The logs say:
2013-05-23 09:25:31,808 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:_my_user_name_ cause:java.io.IOException: Unknown protocol to DataNode: org.apache.hadoop.hdfs.protocol.ClientProtocol
2013-05-23 09:25:31,808 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on _my_port_, call getProtocolVersion(org.apache.hadoop.hdfs.protocol.ClientProtocol, 61) from _ip_:_port_: error: java.io.IOException: Unknown protocol to DataNode: org.apache.hadoop.hdfs.protocol.ClientProtocol
java.io.IOException: Unknown protocol to DataNode: org.apache.hadoop.hdfs.protocol.ClientProtocol
at org.apache.hadoop.hdfs.server.datanode.DataNode.getProtocolVersion(DataNode.java:1759)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
What could be the problem? Thanks a lot.
Check your core-site.xml file and verify the HDFS URL. Note that the exception says a ClientProtocol call reached a DataNode, which suggests the address given to -fs points at the DataNode's port rather than the NameNode's RPC port.
Check the version of your client's Hadoop against the cluster's Hadoop; my guess is that your client version is lower than the cluster version.