NoClassDefFoundError when running HelloWorld.class

I'm getting this error when I try to run HelloWorld.class.
From the error it looks like it's trying to run HelloWorld/class. The program should simply print out "Hello World!".
package threads;

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
Any ideas?

Check your classpath: Select Start > Control Panel > System > Advanced > Environment Variables > System Variables > CLASSPATH.
You can create a new variable there, OR in the command prompt type: SET CLASSPATH=. (the current directory; the JDK's bin folder, e.g. C:\Program Files\Java\jdk-10.0.2\bin or whatever version you are using, belongs on PATH, not CLASSPATH).
Type: cd C:\Users\David\Desktop\eclipse\JNP\bin\threads
(this is your DIRECTORY, NOT your CLASSPATH)
Type: javac HelloWorld.java
A class file named HelloWorld.class should appear in the threads folder.
Type: cd .. and then: java threads.HelloWorld
Because the class declares package threads;, it must be run by its fully qualified name from the folder above threads (here, bin); running java HelloWorld from inside the threads folder produces exactly the NoClassDefFoundError you are seeing.
Also make sure you have named the file HelloWorld.java.
I hope this helped!
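As an illustrative sketch (not part of the original answer), here is the same class with two extra prints that make the package/classpath relationship visible; compile it as above and run it with java threads.HelloWorld from the bin folder:

package threads;

public class HelloWorld {
    public static void main(String[] args) {
        // The fully qualified name the JVM looks the class up by:
        System.out.println(HelloWorld.class.getName());            // prints threads.HelloWorld
        // The classpath that was searched; "." means the current directory:
        System.out.println(System.getProperty("java.class.path"));
        System.out.println("Hello World!");
    }
}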

The driver executable does not exist: C:\geckodriver.exe issue in Eclipse IDE

Please help me with this issue, which recurs every time I run my code.
I have extracted the Geckodriver files to the C: drive, but when I run my code, the error that comes up is 'Exception in thread "main" java.lang.IllegalStateException: The driver executable does not exist: C:\geckodriver.exe'.
My code is given below:
package Basics;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Browserinvocation {

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        System.setProperty("webdriver.gecko.driver", "C:\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver(); // FirefoxDriver class is used to implement methods present in WebDriver - invocation of browser
        driver.get("https://www.amazon.in/");   // get method to hit the URL in the browser
    }
}
Error in console:
Exception in thread "main" java.lang.IllegalStateException: The driver executable does not exist: C:\geckodriver.exe
    at com.google.common.base.Preconditions.checkState(Preconditions.java:534)
    at org.openqa.selenium.remote.service.DriverService.checkExecutable(DriverService.java:136)
    at org.openqa.selenium.remote.service.DriverService.findExecutable(DriverService.java:131)
    at org.openqa.selenium.firefox.GeckoDriverService.access$100(GeckoDriverService.java:41)
    at org.openqa.selenium.firefox.GeckoDriverService$Builder.findDefaultExecutable(GeckoDriverService.java:141)
    at org.openqa.selenium.remote.service.DriverService$Builder.build(DriverService.java:339)
    at org.openqa.selenium.firefox.FirefoxDriver.toExecutor(FirefoxDriver.java:158)
    at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:120)
    at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:98)
    at Basics.Browserinvocation.main(Browserinvocation.java:13)
The above exception occurs whenever Preconditions cannot find the driver at the path given in System.setProperty(), for reasons such as:
the path uses wrong, single, or unescaped slashes;
the driver file itself is not present at the mentioned location;
the path is given in a properties or config file wrapped in double quotes.
Just check these once before execution.
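A small illustrative sketch (the class name DriverPathCheck and the early check are mine, not from the thread) that fails fast with a clearer message when the executable is missing:

import java.io.File;

public class DriverPathCheck {
    public static void main(String[] args) {
        String path = "C:\\geckodriver.exe"; // the location used in the question
        if (!new File(path).isFile()) {
            // Same exception type Selenium raises, but thrown earlier with an explicit hint
            throw new IllegalStateException("geckodriver executable not found at: " + path);
        }
        System.setProperty("webdriver.gecko.driver", path);
        // new FirefoxDriver() would now be able to locate the executable
    }
}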
You should add the path to geckodriver.exe using / rather than \\. Change your line
System.setProperty("webdriver.gecko.driver","C:\\geckodriver.exe");
to the following
System.setProperty("webdriver.gecko.driver","C:/geckodriver.exe");
Your code runs fine on my side; perhaps you have not extracted the geckodriver archive. Change the path and try it once, it should work.
Please let me know your Selenium JAR version and your Firefox browser version.
System.setProperty("webdriver.gecko.driver", "C:/Users/sankalp.gupta/Desktop/JAVASEL/geckodriver.exe");
WebDriver driver=new FirefoxDriver();
driver.get("https://www.amazon.in");
System.out.println(driver.getCurrentUrl());
driver.close();
System.setProperty("webdriver.gecko.driver","C:\\geckodriver.exe");
Here, make sure the file on disk is actually named geckodriver.exe, with no extra dot between gecko and driver.
Just download geckodriver.exe and move it to drive C:

Postgres PL/JAVA: java.lang.ClassNotFoundException error after loading JAR file in database

I am getting a java.lang.ClassNotFoundException error inside Postgres when running a function that calls a JAR file I have loaded. I have installed and configured PL/JAVA (including the delivered examples) in my database and can run the examples successfully. I am now attempting to load/install my first JAR, but I am doing something wrong.
My host controls the OS version: CentOS 6.8. Postgres is version 8.4.
I am attempting to install my own very simple java class, which is a derivative of the delivered example Parameters.addOne class. All my code is in /tmp. Here are the steps I've followed:
Doug.java:
package com.msmetric;
import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Time;
import java.sql.Timestamp;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.TimeZone;
import java.util.logging.Logger;
public class Doug {
    public static int addOne(int value) {
        return value + 1;
    }
}
Compiling Doug.java using 'javac Doug.java' succeeds.
Creating the JAR file with the Doug.class file in it using 'jar -cvf Doug.jar Doug.class' also works fine.
Now I load the JAR file into Postgres (public schema), change the classpath, create the function that calls the JAR, then attempt to run at psql prompt.
Run sqlj.install_jar from psql:
select sqlj.install_jar('file:/tmp/Doug.jar','Doug',false);
Set the classpath inside Postgres (from psql prompt postgres=#):
select sqlj.set_classpath('public','Doug');
Create the function that calls the JAR. This create function code is taken directly from the examples.ddr file that came with PL/JAVA. I simply changed org.postgres to com.msmetric.
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' language java;
Now with the JAR loaded and function created, I attempt to run it. This function should simply add 1 to the number provided.
select addone(3);
Results:
ERROR: java.lang.ClassNotFoundException: com.msmetric.Doug
Thoughts?
I'm very sorry I didn't see your question sooner. Underneath all the exotic details (PostgreSQL, PL/Java, schemas, classpaths...), there's just a bit of basic Java going on here: if a jar file contains a class Doug.class in package com.msmetric, its path within the jar has to reflect that: it has to be com/msmetric/Doug.class. Otherwise, it won't be found.
You can set up that whole structure step by step:
javac Doug.java
mkdir com
mkdir com/msmetric
mv Doug.class com/msmetric/
jar -cvf Doug.jar com/msmetric/Doug.class
Or, you can let javac do more of the work for you:
mkdir classes
javac -d classes Doug.java
jar -cvf Doug.jar -C classes .
When you give javac a -d directory option, instead of just writing class files next to their .java sources, it will put them all in their proper places under the directory you named, and then you can just tell jar to change into that directory and slurp them all up (don't overlook the . at the end of that jar command).
Once you fix that, if you retry your original steps, you'll see that you now get a different error:
ERROR: Unable to find static method com.msmetric.Doug.addOne with signature (Ljava/lang/Integer;)I
That happens because you declared the function in Doug.java with int addOne(int value) (that is, taking a primitive int argument), but you declared it in SQL with returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' taking an Integer object.
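Equivalently (this sketch is mine, not from the original answer), you could keep the SQL declaration as written and instead change the Java method to take the boxed type it names:

public static int addOne(Integer value) {
    return value + 1;   // auto-unboxing; a SQL NULL argument would throw a NullPointerException here
}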
Once you correct that:
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(int)' language java;
you'll be able to see:
# select addone(3);
addone
--------
4
(1 row)
If you happen to see this belated answer, may I ask what version of PL/Java you are using? That's one detail you didn't mention. If it is older than 1.5.0, it is worth upgrading, because newer versions have features that can help you out. For one, you can just annotate that function:
@Function
public static int addOne(int value) {
    return value + 1;
}
and have javac spit out not only the Doug.class file but also a pljava.ddr file with your SQL function declaration already written correctly (no mixing up argument types!). There is a way to include that .ddr file into the jar you create so that you can just call sqlj.install_jar with the last parameter true so it runs the commands in the .ddr and your functions are ready to use. There's a Hello, world example in the docs that shows more of how it's done.
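For reference, a minimal sketch (mine, not from the original answer) of what the fully annotated class might look like, assuming PL/Java 1.5.0 or newer with its annotation package org.postgresql.pljava.annotation on the compile classpath:

package com.msmetric;

import org.postgresql.pljava.annotation.Function;

public class Doug {
    // With the PL/Java API jar on the compile classpath, javac also emits a
    // pljava.ddr file containing the matching CREATE FUNCTION statement.
    @Function
    public static int addOne(int value) {
        return value + 1;
    }
}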
Cheers,
-Chap

Tess4J in Java fails with the error "Failed loading language 'osd'"

public static void main(String[] args) throws TesseractException {
    ITesseract instance = new Tesseract();
    instance.setLanguage("osd");
    instance.setDatapath("/usr/local/Cellar/tesseract/3.04.01_1/share/");
    String tent = instance.doOCR(new File("/Users/qwf/Desktop/111.jpg"));
    System.out.println(tent);
}
When I run the code, there are two errors:
Failed loading language 'osd'
Tesseract couldn't load any languages!
I don't know how Tess4J determines which languages are available.
When I run "tesseract --list-langs" in iTerm, the result is "eng osd",
so I think Tesseract itself works fine; why doesn't Tess4J work?
I have set the TESSDATA_PREFIX environment variable:
echo $TESSDATA_PREFIX
output: /Users/qwf/tessdata/3.04.01_1/
I installed the tessdata via Homebrew.
Wrong order: set the datapath before the language.
instance.setDatapath("/usr/local/Cellar/tesseract/3.04.01_1/share/");
instance.setLanguage("osd");

Debug MapReduce (of Hadoop 2.2 or higher) in Eclipse

I am able to debug MapReduce (of Hadoop 1.2.1) in Eclipse by following the steps in http://www.thecloudavenue.com/2012/10/debugging-hadoop-mapreduce-program-in.html. But how do I debug MapReduce (of Hadoop 2.2 or higher) in Eclipse?
You can debug it in the same way.
Just run your MapReduce code in standalone mode and use Eclipse to debug the MR code like any other Java code.
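If your driver happens to pick up a cluster configuration, you can force standalone (local) execution from the driver itself; a small sketch of my own, using the standard Hadoop 2.x property names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LocalDebugDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Run the job inside the Eclipse JVM so breakpoints in map/reduce code are hit.
        conf.set("mapreduce.framework.name", "local");
        // Read and write the local file system instead of HDFS.
        conf.set("fs.defaultFS", "file:///");
        Job job = Job.getInstance(conf, "local debug");
        // ...configure mapper/reducer/input/output exactly as in a normal driver, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}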
Here are the steps I set up in Eclipse. Environment: Ubuntu 16.04.2, Eclipse Neon.3 Release (4.6.3RC2), jdk1.8.0_121. I did a fresh hadoop-2.7.3 installation under /j01/srv/hadoop, which is my $HADOOP_HOME. Replace the $HADOOP_HOME value with your actual path wherever it is referenced below. To run Hadoop from Eclipse, you do not need any Hadoop configuration; what is really needed is to pull the right set of Hadoop JARs into Eclipse.
Step 1 Create new Java Project
File > New > Project...
Select Java Project, Next
Enter Project name: hadoopmr
Click Configure default...
Source folder name: src/main/java
Output folder name: target/classes
Click Apply, OK, then Next
Click tab Libraries
Click Add External JARs...
Browse to the Hadoop installation folder and add the following JARs; when done, click Finish
$HADOOP_HOME/share/hadoop/common/hadoop-common-2.7.3.jar
$HADOOP_HOME/share/hadoop/common/hadoop-nfs-2.7.3.jar
$HADOOP_HOME/share/hadoop/common/lib/avro-1.7.4.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-collections-3.2.2.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-configuration-1.6.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-io-2.4.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-lang-2.6.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-logging-1.1.3.jar
$HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.7.3.jar
$HADOOP_HOME/share/hadoop/common/lib/httpclient-4.2.5.jar
$HADOOP_HOME/share/hadoop/common/lib/httpcore-4.2.5.jar
$HADOOP_HOME/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar
$HADOOP_HOME/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar
$HADOOP_HOME/share/hadoop/common/lib/log4j-1.2.17.jar
$HADOOP_HOME/share/hadoop/common/lib/slf4j-api-1.7.10.jar
$HADOOP_HOME/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar
$HADOOP_HOME/share/hadoop/mapreduce/lib-examples/hsqldb-2.0.0.jar
$HADOOP_HOME/share/hadoop/tools/lib/guava-11.0.2.jar
$HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar
$HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar
Step 2 Create a MapReduce example
Create a new package: org.apache.hadoop.examples
Create WordCount.java under package org.apache.hadoop.examples with the following contents:
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.examples;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Create input.txt under /home/hadoop/input/ (or your path) with the following contents:
What do you mean by Object
What is Java Virtual Machine
How to create Java Object
How Java enabled High Performance
Step 3 Setup Debug Configuration
In Eclipse, open WordCount.java, set breakpoints in places you like.
Right click on WordCount.java, Debug As > Debug Configurations...
Select Java Application, click New launch configuration on top-left icon
Enter org.apache.hadoop.examples.WordCount in Main class box
Click Arguments tab
enter
/home/hadoop/input/input.txt /home/hadoop/output
into Program arguments
Click Apply, then Debug
The program starts along with Hadoop and should hit the breakpoints you set.
Check the results with:
ls -l /home/hadoop/output
-rw-r--r-- 1 hadoop hadoop 131 Apr 5 22:59 part-r-00000
-rw-r--r-- 1 hadoop hadoop 0 Apr 5 22:59 _SUCCESS
Notes:
1) If the program does not run, make sure Project > Build Automatically is checked; use Project > Clean… to force a build.
2) You can get more examples from
jar xvf $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.7.3-sources.jar
Copy them into this project to continue exploring.
3) You can download this eclipse project from
git clone https://github.com/drachenrio/hadoopmr
In Eclipse, File > Import... > Existing Projects into Workspace > Next
Browse to cloned project and import it
Open .classpath, replace /j01/srv/hadoop-2.7.3 with your hadoop installation home

Groovy Grape imports aren't resolved by Eclipse

groovy eclipse plugin version: 1.7.5.xx-20101020-1000-e36-release.
import com.jidesoft.swing.JideSplitButton

@Grab(group='com.jidesoft', module='jide-oss', version='[2.2.1,2.3.0)')
public class TestClassAnnotation {
    public static String testMethod() {
        return JideSplitButton.class.name
    }
}

new TestClassAnnotation().testMethod()
The first line shows the error: Groovy:unable to resolve class com.jidesoft.swing.JideSplitButton
It runs fine from the Groovy shell, but the error marker is annoying.
When I compile this in the editor, I get the same error as I do when I compile or run from the command line:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: Error grabbing Grapes -- [unresolved dependency: com.jidesoft#jide-oss;[2.2.1,2.3.0): not found]
(and then a very long stack trace)
Is this what you are seeing?
I'm rather late to this question, but I wonder if
@Grab(group='com.jidesoft', module='jide-oss', version='[2.2.1,2.3.0)')
shouldn't be
@Grab(group='com.jidesoft', module='jide-oss', version='[2.2.1,2.3.0]')
It looks to me like a syntax error in the version range, where Groovy expects to be passed a list.
Try placing @Grab right above the import statement, just like that:
@Grab(group='com.jidesoft', module='jide-oss', version='[2.2.1,2.3.0]')
import com.jidesoft.swing.JideSplitButton
... your code continues here