I cannot use a dynamic value at request time in Gatling Scala - scala

Please help me.
I wrote the code below.
When I generate a dynamic value and try to use it in the request body, the value is not resolved; the literal placeholder is sent instead.
The logged request body looks like this:
body:FormUrlEncodedRequestBody{patchedContentType='null', charset=UTF-8, content=age=30&name=Test+Name&description=Test+Request&token1=%23%7Btoken1%7D&token2=%23%7Btoken2%7D}
I wrote two example feeders, userFeeder1 and userFeeder2.
My code is here:
package tests
import io.gatling.core.Predef.*
import io.gatling.core.feeder.Feeder
import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef.*
class FirstTestCase extends Simulation {
private val httpProtocol = http
.baseUrl("https://test.test.com")
// .inferHtmlResources(BlackList(""".*\.js""", """.*\.css""", """.*\.gif""", """.*\.jpeg""", """.*\.jpg""", """.*\.ico""", """.*\.woff""", """.*\.woff2""", """.*\.(t|o)tf""", """.*\.png""", """.*detectportal\.firefox\.com.*"""), WhiteList())
.acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9")
.acceptEncodingHeader("gzip, deflate")
.acceptLanguageHeader("en-US,en;q=0.9,az;q=0.8,tr;q=0.7")
.contentTypeHeader("application/x-www-form-urlencoded")
.originHeader("https://test.test.com")
.upgradeInsecureRequestsHeader("1")
.userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36")
private val headers_0 = Map(
"sec-ch-ua" -> """Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104""",
"sec-ch-ua-mobile" -> "?0",
"sec-ch-ua-platform" -> "macOS",
"sec-fetch-dest" -> "document",
"sec-fetch-mode" -> "navigate",
"sec-fetch-site" -> "same-site",
"sec-fetch-user" -> "?1"
)
val Age = "30"
val Name = "Test Name"
val Description = "Test Request"
def generateToken: Map[String, String] = {
val tokens = (Age.length + Age + Name.length + Name + Description.length + Description).toString().toLowerCase()
Map(
"token1" -> tokens
)
}
val userFeeder1: Feeder[String] = Iterator.continually(generateToken)
val userFeeder2: Feeder[Any] =
Iterator.continually(
Map(
"token2" -> "dfngvdndksfdslfkdsergerfewrwehewrhwefnsdnf"
)
)
val search =
exec(http("request_0")
.post("/test/test")
.formParam("age", _ => Age)
.formParam("name", _ => Name)
.formParam("description", _ => Description)
.formParam("token1", _ => "#{token1}")
.formParam("token2", _ => "#{token2}")
)
val scn = scenario("Scenario Name")
.feed(userFeeder1)
.feed(userFeeder2)
.exec(search)
setUp(
scn.inject(rampUsers(1).during(1))
).protocols(httpProtocol)
}
My pom.xml is here:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.gatling.tests</groupId>
<artifactId>gatling_scala_project</artifactId>
<!-- <version>2.0-SNAPSHOT</version>-->
<!-- <properties>-->
<!-- <maven.compiler.source>1.8</maven.compiler.source>-->
<!-- <maven.compiler.target>1.8</maven.compiler.target>-->
<!-- <encoding>UTF-8</encoding>-->
<!-- <gatling.version>3.8.4</gatling.version>-->
<!-- <gatling-maven-plugin.version>4.2.7</gatling-maven-plugin.version>-->
<!-- <maven-compiler-plugin.version>3.10.1</maven-compiler-plugin.version>-->
<!-- <maven-jar-plugin.version>3.2.2</maven-jar-plugin.version>-->
<!-- </properties>-->
<version>3.8.4</version>
<properties>
<!-- use the following if you're compiling with JDK 8-->
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<!-- comment the 2 lines above and uncomment the line below if you're compiling with JDK 11 or 17 -->
<!-- <maven.compiler.release>11</maven.compiler.release>-->
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<gatling.version>${project.version}</gatling.version>
<gatling-maven-plugin.version>4.2.7</gatling-maven-plugin.version>
<maven-compiler-plugin.version>3.10.1</maven-compiler-plugin.version>
<maven-jar-plugin.version>3.2.2</maven-jar-plugin.version>
</properties>
<dependencies>
<dependency>
<groupId>io.gatling.highcharts</groupId>
<artifactId>gatling-charts-highcharts</artifactId>
<version>${gatling.version}</version>
</dependency>
<dependency>
<groupId>io.gatling</groupId>
<artifactId>gatling-app</artifactId>
<version>${gatling.version}</version>
</dependency>
<dependency>
<groupId>io.gatling</groupId>
<artifactId>gatling-recorder</artifactId>
<version>${gatling.version}</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.11</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>23.0</version>
</dependency>
</dependencies>
<build>
<testSourceDirectory>src/test/scala</testSourceDirectory>
<plugins>
<plugin>
<groupId>io.gatling</groupId>
<artifactId>gatling-maven-plugin</artifactId>
<version>${gatling-maven-plugin.version}</version>
</plugin>
</plugins>
</build>
</project>
Do I need to download any plugins?
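The token1=%23%7Btoken1%7D in the logged body is the URL-encoded literal string #{token1}, which means the Gatling EL expression was never resolved. Gatling compiles EL only from plain String arguments to the DSL; when you wrap the value in a function (_ => "#{token1}"), the returned string is used verbatim. No extra plugin is needed. A minimal sketch of the fix, assuming Gatling 3.8:
val search =
  exec(http("request_0")
    .post("/test/test")
    .formParam("age", Age)
    .formParam("name", Name)
    .formParam("description", Description)
    // A plain String argument is parsed as Gatling EL and resolved
    // against the session attributes set by the feeders.
    .formParam("token1", "#{token1}")
    // Equivalent explicit form: read the attribute from the session.
    .formParam("token2", session => session("token2").as[String])
  )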

Related

Rest app (Jersey/JAX-RS) runs in glassfish through eclipse but not when deployed in tomcat

I created a web app using Eclipse and Jersey/JAX-RS. When I "run on server" from Eclipse in GlassFish everything is OK, but when I deploy the WAR file I produce (either from maven install or from the Eclipse "export WAR file" option), it deploys successfully, yet when I try to access the same REST resources that work in the first scenario, they are not available and return a 404 error.
This is my web.xml file:
<?xml version = "1.0" encoding = "UTF-8"?>
<web-app xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xmlns = "http://java.sun.com/xml/ns/javaee"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
id = "WebApp_ID" version = "3.0">
<display-name>Rate it</display-name>
<welcome-file-list>
<welcome-file>login.html</welcome-file>
</welcome-file-list>
<listener>
<listener-class>
com.rateit.util.MyAppServletContextListener
</listener-class>
</listener>
<servlet>
<servlet-name>RateIt login services</servlet-name>
<servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
<init-param>
<param-name>jersey.config.server.provider.packages</param-name>
<param-value>com.rateit.login</param-value>
</init-param>
<init-param>
<param-name>jersey.config.server.provider.classnames</param-name>
<param-value>org.glassfish.jersey.media.multipart.MultiPartFeature</param-value>
</init-param>
</servlet>
<servlet-mapping>
<servlet-name>RateIt login services</servlet-name>
<url-pattern>/loginServices/*</url-pattern>
</servlet-mapping>
</web-app>
This is my pom file:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.gamechangerapps</groupId>
<artifactId>rateit</artifactId>
<packaging>war</packaging>
<version>0.0.1-SNAPSHOT</version>
<name>rateit Maven Webapp</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
</dependency>
<dependency>
<groupId>javax.ws.rs</groupId>
<artifactId>javax.ws.rs-api</artifactId>
<version>2.0.1</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
<version>2.22.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.eclipsesource.minimal-json/minimal-json -->
<dependency>
<groupId>com.eclipsesource.minimal-json</groupId>
<artifactId>minimal-json</artifactId>
<version>0.9.4</version>
</dependency>
<!-- https://mvnrepository.com/artifact/log4j/log4j -->
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.2.2</version>
</dependency>
<dependency>
<groupId>javax.mail</groupId>
<artifactId>mail</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-multipart</artifactId>
<version>2.19</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.0</version>
</dependency>
</dependencies>
<build>
<finalName>rateit</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<webXml>WebContent\WEB-INF\web.xml</webXml>
</configuration>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
And this is my project structure:
project structure
This is the LoginServices class:
package com.rateit.login;
import java.io.InputStream;
import java.util.List;
import javax.servlet.ServletContext;
import javax.ws.rs.Consumes;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.ResponseBuilder;
import javax.ws.rs.core.UriBuilder;
import org.apache.log4j.Logger;
import org.glassfish.jersey.media.multipart.FormDataContentDisposition;
import org.glassfish.jersey.media.multipart.FormDataParam;
import com.rateit.causecodes.LoginCodes;
import com.rateit.inbox.InboxServices;
import com.rateit.mailing.MailServices;
import com.rateit.property.Building;
import com.rateit.property.PropertyDao;
import com.rateit.rating.RatingQueue;
import com.rateit.util.BaseUriProvider;
import com.rateit.util.FileOperations;
import com.rateit.util.GlobalVariables;
import com.rateit.util.JobIdGenerator;
#Path("/Welcome")
public class LoginServices {
UserDao userDao = UserDao.getInstance();
MailServices mailBot = MailServices.getInstance();
FileOperations fileManager = new FileOperations();
PropertyDao propDao = PropertyDao.getInstance();
InboxServices inServ = new InboxServices();
String baseUri = BaseUriProvider.getStreamInstance().getBaseUri();
static Logger logger = Logger.getLogger(LoginServices.class);
@Context
private ServletContext context;
/**
* Service for Login purposes
*
* @FormParam String user email
* @FormParam String user password
*
* @return LoginCodes LOGIN_SUCCESFUL/LOGIN_FAILURE_INVALID_EMAIL/LOGIN_FAILURE_INVALID_PASSWORD/LOGIN_FAILURE_GENERIC
*
*/
#Path("/login")
#POST
#Consumes("application/x-www-form-urlencoded")
public Response login(#FormParam("email") String e, #FormParam("password") String p) {
ResponseBuilder builder = null;
// get new JobId for this operation
int jobId = JobIdGenerator.getStreamInstance().getJobId();
// Create the user object
User userInput = new User(e,p);
String loginCheck = LoginCodes.LOGIN_FAILURE_GENERIC.getDescription();
User userFromDB = userDao.getUser(userInput.getEmail());
logger.info("JobID="+jobId+" for user with email "+userInput.getEmail()+" this user "+userFromDB.getEmail()+" was found in DB");
// login successful
if(userInput.getEmail().equals(userFromDB.getEmail()) && userInput.getPassword().equals(userFromDB.getPassword())){
loginCheck = LoginCodes.LOGIN_SUCCESFUL.getDescription();
builder = Response.seeOther(UriBuilder.fromUri(baseUri+"/login.html").queryParam("user", userFromDB.getEmail()).queryParam("loginCheck", loginCheck).build());
}
// invalid mail
else if(!userInput.getEmail().equals(userFromDB.getEmail())){
loginCheck = LoginCodes.LOGIN_FAILURE_INVALID_EMAIL.getDescription();
builder = Response.seeOther(UriBuilder.fromUri(baseUri+"/login.html").queryParam("loginCheck", loginCheck).build());
}
// invalid password
else if(userInput.getEmail().equals(userFromDB.getEmail()) && !userInput.getPassword().equals(userFromDB.getPassword())){
loginCheck = LoginCodes.LOGIN_FAILURE_INVALID_PASSWORD.getDescription();
builder = Response.seeOther(UriBuilder.fromUri(baseUri+"/login.html").queryParam("loginCheck", loginCheck).build());
}
// both password and mail invalid
else{
builder = Response.seeOther(UriBuilder.fromUri(baseUri+"/login.html").queryParam("loginCheck", loginCheck).build());
}
logger.info("JobID="+jobId+" Login attempt for user "+e+" "+loginCheck);
return builder.build();
}
The problem is the following:
The following test cannot hit the REST API.
@Test
public void loginUserSuccessTest()
{
// Create a user and add him to the DB
User testUser = new User("testuser@rateit.com","1234");
mockDaoObject.addnewUser(testUser);
// Create a HTTP request
Client client = ClientBuilder.newClient();
//WebTarget target = client.target(baseUri+"/loginServices/Welcome/login");
WebTarget target = client.target("http://localhost:8080/rateit/loginServices/Welcome/login");
target.property(ClientProperties.FOLLOW_REDIRECTS, false);
Builder basicRequest = target.request();
// Create a form with the front end parameters
Form form = new Form();
form.param("email", testUser.getEmail());
form.param("password", testUser.getPassword());
// Send the request and get the response
Response response = basicRequest.post(Entity.form(form), Response.class);
// Delete the user
mockDaoObject.deleteUser(testUser);
// Check that the response is success
boolean correctCause = response.getLocation().toString().contains("loginCheck=User+logged+in+successfully");
boolean correctUser = response.getLocation().toString().contains("user=");
assertEquals(true, correctCause);
assertEquals(true, correctUser);
assertEquals(303, response.getStatus());
}
Any advice would be helpful.
After some investigation I found the issue. The problem was that some dependencies were missing, plus a configuration issue in Eclipse.
I found the error by checking the logs/localhost.<date>.log file in Tomcat.
There I found the following exception:
15-Oct-2019 23:04:16.492 INFO [http-nio-8080-exec-42] org.apache.catalina.core.ApplicationContext.log Marking servlet [RateIt login services] as unavailable
15-Oct-2019 23:04:16.493 SEVERE [http-nio-8080-exec-42] org.apache.catalina.core.StandardWrapperValve.invoke Allocate exception for servlet [RateIt login services]
java.lang.ClassNotFoundException: org.glassfish.jersey.servlet.ServletContainer
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1365)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1188)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:540)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:521)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:150)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1042)
at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:761)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:135)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:526)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:861)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1579)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Unknown Source)
Based on this error I fixed the deployment issue with the following 3 steps:
1. I added the following dependencies to my pom.xml file:
<dependency>
<groupId>org.glassfish.jersey.containers</groupId>
<artifactId>jersey-container-servlet</artifactId>
<version>2.19</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-server</artifactId>
<version>2.19</version>
</dependency>
2. I added the Maven dependencies to the WEB-INF/lib folder through Eclipse:
MyProject -> Properties -> Deployment Assembly -> Add (Maven Dependencies as source and WEB-INF/lib as deploy path).
3. I created the WAR file through Export in Eclipse and not through a Maven task.

Fuse Error deploying bundle IndexOutOfBoundsException

I am new to Fuse. I am trying to start a simple REST service.
I am using JBoss Fuse 6.3.
The bundle installs, but I cannot start it without getting an error.
After installing the bundle, it appears as "Active", but with the tag "Failure".
The log prints the following error:
12:29:17,046 | ERROR | Thread-53 | BlueprintContainerImpl | 23 - org.apache.aries.blueprint.core - 1.4.5 | Unable to start blueprint container for bundle null/0.0.0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)[:1.8.0_151]
at java.util.ArrayList.get(Unknown Source)[:1.8.0_151]
at org.apache.aries.blueprint.container.BlueprintContainerImpl.readDirectives(BlueprintContainerImpl.java:214)
at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:296)
at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:270)
at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:294)
at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:263)
This is my code:
Picture of project structure
Pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.fusesource.example</groupId>
<artifactId>rest-service</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>rest-service</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.servicemix.specs</groupId>
<artifactId>org.apache.servicemix.specs.jsr311-api-1.1.1</artifactId>
<version>1.9.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.servicemix</groupId>
<artifactId>servicemix-http</artifactId>
<version>2013.01</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.16</version>
</dependency>
</dependencies>
<build>
<defaultGoal>install</defaultGoal>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<version>2.3.6</version>
<extensions>true</extensions>
<configuration>
<instructions>
<Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
<Import-Package>* </Import-Package>
</instructions>
</configuration>
</plugin>
</plugins>
</build>
</project>
Blueprint.xml:
<?xml version = "1.0" encoding = "UTF-8"?>
<blueprint xmlns = "http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xmlns:jaxrs = "http://cxf.apache.org/blueprint/jaxrs"
xmlns:cxf="http://cxf.apache.org/blueprint/core"
xsi:schemaLocation = "http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd
http://cxf.apache.org/blueprint/core http://cxf.apache.org/schemas/blueprint/core.xsd">
<jaxrs:server id="servicios" address="/serviciosPrueba">
<jaxrs:serviceBeans>
<ref component-id="miServicio" />
</jaxrs:serviceBeans>
</jaxrs:server>
<bean id="miServicio" class="com.rest.Servicio" />
</blueprint>
Java:
package com.rest;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
#Path("/servicioPrueba")
public class Servicio {
#GET
#Path("/getData")
#Produces(MediaType.APPLICATION_JSON)
public String getUser() {
String response = "This is standard response from REST";
return response;
}
}
Thank you very much.
In pom.xml, please change
<packaging>jar</packaging>
to
<packaging>bundle</packaging>
otherwise it will be missing the necessary OSGi headers.
I resolved the second error by pinning specific package versions in the Import-Package instruction in pom.xml:
<configuration>
<instructions>
<Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
<Import-Package>
javax.ws.rs;version="[2.0,3)",
javax.ws.rs.core;version="[2.0,3)",
*
</Import-Package>
</instructions>
</configuration>

“value $ is not a member of StringContext” - Missing Scala plugin?

I'm using Maven with the Scala archetype. I'm getting this error:
“value $ is not a member of StringContext”
I have already tried adding several things to pom.xml, but nothing has worked very well...
My code:
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}
// To see less warnings
import org.apache.log4j._
Logger.getLogger("org").setLevel(Level.ERROR)
// Start a simple Spark Session
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().getOrCreate()
// Prepare training and test data.
val data = spark.read.option("header","true").option("inferSchema","true").format("csv").load("USA_Housing.csv")
// Check out the Data
data.printSchema()
// See an example of what the data looks like
// by printing out a Row
val colnames = data.columns
val firstrow = data.head(1)(0)
println("\n")
println("Example Data Row")
for(ind <- Range(1,colnames.length)){
println(colnames(ind))
println(firstrow(ind))
println("\n")
}
////////////////////////////////////////////////////
//// Setting Up DataFrame for Machine Learning ////
//////////////////////////////////////////////////
// A few things we need to do before Spark can accept the data!
// It needs to be in the form of two columns
// ("label","features")
// This will allow us to join multiple feature columns
// into a single column of an array of feature values
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors
// Rename Price to label column for naming convention.
// Grab only numerical columns from the data
val df = data.select(data("Price").as("label"),$"Avg Area Income",$"Avg Area House Age",$"Avg Area Number of Rooms",$"Area Population")
// An assembler converts the input values to a vector
// A vector is what the ML algorithm reads to train a model
// Set the input columns from which we are supposed to read the values
// Set the name of the column where the vector will be stored
val assembler = new VectorAssembler().setInputCols(Array("Avg Area Income","Avg Area House Age","Avg Area Number of Rooms","Area Population")).setOutputCol("features")
// Use the assembler to transform our DataFrame to the two columns
val output = assembler.transform(df).select($"label",$"features")
// Create a Linear Regression Model object
val lr = new LinearRegression()
// Fit the model to the data
// Note: Later we will see why we should split
// the data first, but for now we will fit to all the data.
val lrModel = lr.fit(output)
// Print the coefficients and intercept for linear regression
println(s"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}")
// Summarize the model over the training set and print out some metrics!
// Explore this in the spark-shell for more methods to call
val trainingSummary = lrModel.summary
println(s"numIterations: ${trainingSummary.totalIterations}")
println(s"objectiveHistory: ${trainingSummary.objectiveHistory.toList}")
trainingSummary.residuals.show()
println(s"RMSE: ${trainingSummary.rootMeanSquaredError}")
println(s"MSE: ${trainingSummary.meanSquaredError}")
println(s"r2: ${trainingSummary.r2}")
And my pom.xml is this:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>test</groupId>
<artifactId>outrotest</artifactId>
<version>1.0-SNAPSHOT</version>
<name>${project.artifactId}</name>
<description>My wonderfull scala app</description>
<inceptionYear>2015</inceptionYear>
<licenses>
<license>
<name>My License</name>
<url>http://....</url>
<distribution>repo</distribution>
</license>
</licenses>
<properties>
<maven.compiler.source>1.6</maven.compiler.source>
<maven.compiler.target>1.6</maven.compiler.target>
<encoding>UTF-8</encoding>
<scala.version>2.11.5</scala.version>
<scala.compat.version>2.11</scala.compat.version>
</properties>
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.11</artifactId>
<version>2.0.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.0.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.0.2</version>
</dependency>
<dependency>
<groupId>com.databricks</groupId>
<artifactId>spark-csv_2.11</artifactId>
<version>1.5.0</version>
</dependency>
<!-- Test -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.specs2</groupId>
<artifactId>specs2-junit_${scala.compat.version}</artifactId>
<version>2.4.16</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.specs2</groupId>
<artifactId>specs2-core_${scala.compat.version}</artifactId>
<version>2.4.16</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_${scala.compat.version}</artifactId>
<version>2.2.4</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/scala</sourceDirectory>
<testSourceDirectory>src/test/scala</testSourceDirectory>
<plugins>
<plugin>
<!-- see http://davidb.github.com/scala-maven-plugin -->
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.0</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
<configuration>
<args>
<!--<arg>-make:transitive</arg>-->
<arg>-dependencyfile</arg>
<arg>${project.build.directory}/.scala_dependencies</arg>
</args>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.18.1</version>
<configuration>
<useFile>false</useFile>
<disableXmlReport>true</disableXmlReport>
<!-- If you have classpath issue like NoDefClassError,... -->
<!-- useManifestOnlyJar>false</useManifestOnlyJar -->
<includes>
<include>**/*Test.*</include>
<include>**/*Suite.*</include>
</includes>
</configuration>
</plugin>
</plugins>
</build>
</project>
I have no idea how to fix it. Does anybody have any ideas?
Add this and it will work. The $"..." column syntax is provided by an implicit conversion (StringToColumn) that spark.implicits._ brings into scope:
val spark = SparkSession.builder().getOrCreate()
import spark.implicits._ // << add this
You can use the col function instead; just import it like this:
import org.apache.spark.sql.functions.col
And then change $"column" to col("column").
Hope it helps.
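Applied to the select from the question, the col-based version is a direct substitution (same columns, no new logic):
import org.apache.spark.sql.functions.col

val df = data.select(
  col("Price").as("label"),
  col("Avg Area Income"),
  col("Avg Area House Age"),
  col("Avg Area Number of Rooms"),
  col("Area Population")
)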
@Apurva's answer initially worked for me, in that the error vanished from IntelliJ.
But then it resulted in "Could not find implicit value for spark" during the sbt compile phase.
I found a strange workaround: importing spark.implicits._ from the SparkSession referenced by the DataFrame, instead of the one obtained by getOrCreate:
import df.sparkSession.implicits._
where df is a DataFrame
This could be because my code was placed inside a case class that received an implicit val spark: SparkSession parameter, but I'm not really sure why this fix worked for me.
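For what it's worth, here is a sketch of the shape of code where that workaround applies; the class and member names are illustrative, not from the original:
import org.apache.spark.sql.{DataFrame, SparkSession}

case class LabelSelector(df: DataFrame)(implicit spark: SparkSession) {
  // Import the implicits from the DataFrame's own session rather than
  // from a SparkSession obtained via getOrCreate.
  import df.sparkSession.implicits._
  def labels = df.select($"label")
}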
I'm using Spark 1.6. The above answers are great, but unfortunately they don't work in 1.6.
The way I solved it was by using df.col("column-name"):
val df = df_mid
.withColumn("dt", date_format(df_mid.col("timestamp"), "yyyy-MM-dd"))
.filter("dt != 'null'")

java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror

java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
at org.elasticsearch.spark.serialization.ReflectionUtils$.org$elasticsearch$spark$serialization$ReflectionUtils$$checkCaseClass(ReflectionUtils.scala:42)
at org.elasticsearch.spark.serialization.ReflectionUtils$$anonfun$checkCaseClassCache$1.apply(ReflectionUtils.scala:84)
It seems like a Scala version incompatibility, but the Spark documentation says that Spark 2.1.0 and Scala 2.11.8 work together.
This is my pom.xml; it is just a test of Spark writing to Elasticsearch with es-hadoop, and I have no idea how to solve this exception.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>cn.jhTian</groupId>
<artifactId>sparkLink</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>${project.artifactId}</name>
<description>My wonderfull scala app</description>
<inceptionYear>2015</inceptionYear>
<licenses>
<license>
<name>My License</name>
<url>http://....</url>
<distribution>repo</distribution>
</license>
</licenses>
<properties>
<encoding>UTF-8</encoding>
<scala.version>2.11.8</scala.version>
<scala.compat.version>2.11</scala.compat.version>
</properties>
<repositories>
<repository>
<id>ainemo</id>
<name>xylink</name>
<url>http://10.170.209.180:8081/nexus/content/groups/public/</url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.6.4</version><!-- 2.64 -->
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<!--<dependency>-->
<!--<groupId>org.scala-lang</groupId>-->
<!--<artifactId>scala-compiler</artifactId>-->
<!--<version>${scala.version}</version>-->
<!--</dependency>-->
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-reflect</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.6.4</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.1.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
<version>2.1.0</version>
</dependency>
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-hadoop</artifactId>
<version>5.3.0</version>
</dependency>
<!-- Test -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.10</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.specs2</groupId>
<artifactId>specs2-core_${scala.compat.version}</artifactId>
<version>2.4.16</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_${scala.compat.version}</artifactId>
<version>2.2.4</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
This is my code:
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._
/**
* Created by jhTian on 2017/4/19.
*/
object EsWrite {
def main(args: Array[String]) {
val sparkConf = new SparkConf()
.set("es.nodes", "1.1.1.1")
.set("es.port", "9200")
.set("es.index.auto.create", "true")
.setAppName("es-spark-demo")
val sc = new SparkContext(sparkConf)
val job1 = Job("C开发工程师","http://job.c.com","c公司","10000")
val job2 = Job("C++开发工程师","http://job.c++.com","c++公司","10000")
val job3 = Job("C#开发工程师","http://job.c#.com","c#公司","10000")
val job4 = Job("Java开发工程师","http://job.java.com","java公司","10000")
val job5 = Job("Scala开发工程师","http://job.scala.com","java公司","10000")
// val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
// val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
// val rdd=sc.makeRDD(Seq(numbers,airports))
val rdd=sc.makeRDD(Seq(job1,job2,job3,job4,job5))
rdd.saveToEs("job/info")
sc.stop()
}
}
case class Job(jobName:String, jobUrl:String, companyName:String, salary:String)
Generally, NoSuchMethodError implies the caller was compiled against a different version than was found on the classpath at runtime (or that you have multiple versions on the classpath).
In your case, I'd guess that es-hadoop is built against a different Scala version. I've not used Maven in a little while, but I think the command you need to get some useful info is mvn dependency:tree. Use the output to see which version of Scala es-hadoop is built with, and then configure your project to use the same Scala version.
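For example, filtering the tree to the Scala artifacts (the includes pattern is standard maven-dependency-plugin syntax; the group id is just an illustration):
mvn dependency:tree -Dincludes=org.scala-lang
This prints every dependency path that pulls in scala-library or scala-reflect, so you can spot a second Scala version being dragged in.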
To get stable/reproducible builds I'd recommend using something like the maven-enforcer-plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>1.4.1</version>
<executions>
<execution>
<id>enforce</id>
<configuration>
<rules>
<dependencyConvergence />
</rules>
</configuration>
<goals>
<goal>enforce</goal>
</goals>
</execution>
</executions>
</plugin>
It can be annoying initially, but once you have all your dependencies sorted out you shouldn't get issues like this anymore.
Use a dependency like this instead:
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-spark-20_2.11</artifactId>
<version>5.2.2</version>
</dependency>
for Spark 2.0 and Scala 2.11 (the -20_2.11 suffix in the artifactId encodes the Spark and Scala versions).

Not able run code on spark-submit

I have this Scala code which I want to run from the terminal using the spark-submit command. There seems to be no problem while running it in the IntelliJ IDE.
Code
package com.scryAnalytics.NLPAnnotationController.Work
import java.net.MalformedURLException
import java.util.{ArrayList, Arrays}
import com.scryAnalytics.NLPAnnotationController.Configuration.{VOCPConstants, VocpConfiguration}
import com.scryAnalytics.NLPAnnotationController.DAO.NLPEntitiesDAO
import com.scryAnalytics.NLPGeneric.{NLPEntities, _}
import com.vocp.ner.main.GateNERImpl
import gate.util.GateException
import org.apache.hadoop.hbase.client.{HBaseAdmin, Put}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{MultiTableOutputFormat, TableInputFormat, TableOutputFormat}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.{HTableDescriptor, TableName}
import org.apache.hadoop.mapreduce.Job
import org.apache.log4j.Logger
import org.apache.spark.{SparkConf, SparkContext}
class NLPProcessingLog {
var log: Logger = Logger.getLogger(classOf[NLPProcessingLog])
log.info("Logger Initialized .....")
}
object NlpProcessing {
val logger = new NLPProcessingLog
@throws(classOf[Exception])
def nlpAnnotationExtraction(conf: org.apache.hadoop.conf.Configuration, batchString: String): Int = {
logger.log.info("In Main Object..")
//Initializing Spark Context
val sc = new SparkContext(new SparkConf().setAppName("NLPAnnotationController").setMaster("local"))
val batchId =
if (batchString == "newbatch")
java.lang.Long.toString(System.currentTimeMillis())
else batchString
conf.set("batchId", batchId)
val inputCfs = Arrays.asList(conf.get(VOCPConstants.INPUTCOLUMNFAMILIES).split(","): _*)
try {
conf.set(TableInputFormat.INPUT_TABLE, conf.get(VOCPConstants.INPUTTABLE))
conf.set(TableOutputFormat.OUTPUT_TABLE, conf.get(VOCPConstants.OUTPUTTABLE))
val job: Job = Job.getInstance(conf, "NLPAnnotationJob")
job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, conf.get(VOCPConstants.OUTPUTTABLE))
job.setOutputFormatClass(classOf[MultiTableOutputFormat])
val admin = new HBaseAdmin(conf)
if (!admin.isTableAvailable(conf.get(VOCPConstants.OUTPUTTABLE))) {
val tableDesc = new HTableDescriptor(TableName.valueOf(conf.get(VOCPConstants.OUTPUTTABLE)))
admin.createTable(tableDesc)
}
val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
classOf[org.apache.hadoop.hbase.client.Result])
val processedFilteredRDD = hBaseRDD.map(x => x._2).filter { result =>
val flag = Bytes.toString(result.getValue(Bytes.toBytes("f"),
Bytes.toBytes("is_processed")))
(flag == null) || (flag == 0)
}
println(processedFilteredRDD.count())
val messageRDD = processedFilteredRDD.filter { x => x != null }.map { result =>
val message = Bytes.toString(result.getValue(Bytes.toBytes("p"),
Bytes.toBytes("message")))
(Bytes.toString(result.getRow()), message)
}
println("Number of partitions " + messageRDD.getNumPartitions)
val pluginHome = conf.get(VOCPConstants.GATE_PLUGIN_ARCHIVE)
val requiredNLPEntities = new ArrayList[NLPEntities]()
requiredNLPEntities.add(NLPEntities.POS_TAGGER)
requiredNLPEntities.add(NLPEntities.VP_CHUNKER)
requiredNLPEntities.add(NLPEntities.NP_CHUNKER)
val nlpGenericRDD = messageRDD.mapPartitions { iter =>
val nlpModule = new GateGenericNLP(pluginHome, requiredNLPEntities)
iter.map { x =>
val nlpGenericJson = nlpModule.generateNLPEntities(x._2)
val genericNLPObject = Utility.jsonToGenericNLP(nlpGenericJson)
(x._1, x._2, genericNLPObject)
}
}
val requiredNEREntities = new ArrayList[String]()
requiredNEREntities.add("DRUG")
requiredNEREntities.add("SE")
requiredNEREntities.add("REG")
requiredNEREntities.add("ALT_THERAPY")
requiredNEREntities.add("ALT_DRUG")
val nlpRDD = nlpGenericRDD.mapPartitions { iter =>
val nerModule = new GateNERImpl(pluginHome, requiredNEREntities)
iter.map { x =>
val nerJson = nerModule.generateNER(x._2, Utility.objectToJson(x._3))
val nerJsonObject = Utility.jsonToGateNer(nerJson)
val nlpEntities: NLPEntitiesDAO = new NLPEntitiesDAO
nlpEntities.setToken(x._3.getToken())
nlpEntities.setSpaceToken(x._3.getSpaceToken())
nlpEntities.setSentence(x._3.getSentence())
nlpEntities.setSplit(x._3.getSplit())
nlpEntities.setVG(x._3.getVG)
nlpEntities.setNounChunk(x._3.getNounChunk)
nlpEntities.setDRUG(nerJsonObject.getDRUG())
nlpEntities.setREG(nerJsonObject.getREG())
nlpEntities.setSE(nerJsonObject.getSE())
nlpEntities.setALT_DRUG(nerJsonObject.getALT_DRUG())
nlpEntities.setALT_THERAPY(nerJsonObject.getALT_THERAPY())
(x._1, nlpEntities)
}
}
//outputRDD.foreach(println)
val newRDD = nlpRDD.map { k => convertToPut(k) }
newRDD.saveAsNewAPIHadoopDataset(job.getConfiguration())
return 0
} catch {
case e: MalformedURLException => {
e.printStackTrace()
return 1
}
case e: GateException =>
{
e.printStackTrace()
return 1
}
}
}
def convertToPut(genericNlpWithRowKey: (String, NLPEntitiesDAO)): (ImmutableBytesWritable, Put) = {
val rowkey = genericNlpWithRowKey._1
val genericNLP = genericNlpWithRowKey._2
val put = new Put(Bytes.toBytes(rowkey))
val genCFDataBytes = Bytes.toBytes("gen")
val nerCFDataBytes = Bytes.toBytes("ner")
val flagCFDataBytes = Bytes.toBytes("f")
put.add(genCFDataBytes, Bytes.toBytes("token"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getToken()))));
put.add(genCFDataBytes, Bytes.toBytes("spaceToken"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getSpaceToken()))));
put.add(genCFDataBytes, Bytes.toBytes("sentence"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getSentence()))));
put.add(genCFDataBytes, Bytes.toBytes("verbGroup"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getVG()))));
put.add(genCFDataBytes, Bytes.toBytes("split"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getSplit()))));
put.add(genCFDataBytes, Bytes.toBytes("nounChunk"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getNounChunk()))));
put.add(nerCFDataBytes, Bytes.toBytes("drug"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getDRUG()))))
put.add(nerCFDataBytes, Bytes.toBytes("sideEffect"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getSE()))))
put.add(nerCFDataBytes, Bytes.toBytes("regimen"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getREG()))))
put.add(nerCFDataBytes, Bytes.toBytes("altTherapy"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getALT_THERAPY()))))
put.add(nerCFDataBytes, Bytes.toBytes("altDrug"),
Bytes.toBytes(Utility.objectToJson((genericNLP.getALT_DRUG()))))
put.add(flagCFDataBytes, Bytes.toBytes("is_processed"),
Bytes.toBytes("1"))
put.add(flagCFDataBytes, Bytes.toBytes("dStatus"),
Bytes.toBytes("0"))
put.add(flagCFDataBytes, Bytes.toBytes("rStatus"),
Bytes.toBytes("0"))
put.add(flagCFDataBytes, Bytes.toBytes("adStatus"),
Bytes.toBytes("0"))
put.add(flagCFDataBytes, Bytes.toBytes("atStatus"),
Bytes.toBytes("0"))
(new ImmutableBytesWritable(Bytes.toBytes(rowkey)), put)
}
def pipeLineExecute(args: Array[String]): Int = {
var batchString = ""
val usage = "Usage: NLPAnnotationController" + " -inputTable tableName -outputTable tableName" +
" -batchId batchId / -newbatch \n"
if (args.length == 0) {
System.err.println(usage)
return -1
}
val conf = VocpConfiguration.create
for (i <- 0 until args.length by 2) {
if ("-inputTable" == args(i)) {
conf.set(VOCPConstants.INPUTTABLE, args(i + 1))
} else if ("-outputTable" == args(i)) {
conf.set(VOCPConstants.OUTPUTTABLE, args(i + 1))
} else if ("-batchId" == args(i)) {
batchString = args(i)
} else if ("-newbatch" == args(i)) {
batchString = "newbatch"
} else {
throw new IllegalArgumentException("arg " + args(i) + " not recognized")
}
}
val result = nlpAnnotationExtraction(conf, batchString)
result
}
def main(args: Array[String]) {
val res = pipeLineExecute(args)
System.exit(res)
}
}
I am trying to make a fat jar file to be executed using spark-submit.
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.scryAnalytics</groupId>
<artifactId>NLPAnnotationController</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>NLPAnnotationController2</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.version>2.6.0-cdh5.7.2</hadoop.version>
<jdk.version>1.7</jdk.version>
<sdk.version>2.10.5</sdk.version>
<hbase.version>0.98.16-hadoop2</hbase.version>
</properties>
<repositories>
<repository>
<id>cloudera</id>
<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>scala-tools.org</id>
<name>Scala-tools Maven2 Repository</name>
<url>http://scala-tools.org/repo-releases</url>
</pluginRepository>
</pluginRepositories>
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.10.5</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-spark</artifactId>
<version>1.2.0-cdh5.7.2</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
<version>${hbase.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit-dep</artifactId>
<version>4.8.2</version>
</dependency>
<dependency>
<groupId>uk.ac.gate</groupId>
<artifactId>gate-core</artifactId>
<version>8.1</version>
</dependency>
<dependency>
<groupId>uk.ac.gate</groupId>
<artifactId>gate-compiler-jdt</artifactId>
<version>4.3.2-P20140317-1600</version>
</dependency>
<dependency>
<groupId>com.thoughtworks.xstream</groupId>
<artifactId>xstream</artifactId>
<version>1.4.8</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>com.scryAnalytics</groupId>
<artifactId>NLPGeneric</artifactId>
<version>1.1</version>
</dependency>
<dependency>
<groupId>NER</groupId>
<artifactId>NER</artifactId>
<version>1.2</version>
</dependency>
</dependencies>
<build>
<finalName>NLPAnnotationController</finalName>
<plugins>
<!-- download source code in Eclipse, best practice -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-eclipse-plugin</artifactId>
<version>2.9</version>
<configuration>
<downloadSources>true</downloadSources>
<downloadJavadocs>false</downloadJavadocs>
</configuration>
</plugin>
<!-- Set a compiler level -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.3</version>
<configuration>
<source>${jdk.version}</source>
<target>${jdk.version}</target>
</configuration>
</plugin>
<!-- Maven Assembly Plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.4.1</version>
<configuration>
<!-- get all project dependencies -->
<descriptors>
<descriptor>src/main/assembly/hadoop-job.xml</descriptor>
</descriptors>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<!-- MainClass in mainfest make a executable jar -->
<archive>
<manifest>
<mainClass>com.scryAnalytics.NLPAnnotationController.Work.NlpProcessing</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<!-- bind to the packaging phase -->
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
<resources>
<resource>
<directory>conf</directory>
</resource>
</resources>
</build>
</project>
Error
spark-submit target/NLPAnnotationController-job.jar -inputTable posts -outputTable posts -batchId 1
java.lang.ClassNotFoundException: com.scryAnalytics.NLPAnnotationController.Work.NlpProcessing
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
As I said, the code works perfectly fine in IntelliJ.
Any help would be appreciated.
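Two hedged suggestions for this situation (not from the original thread): first check that the class actually made it into the jar you are submitting, then pass the main class to spark-submit explicitly. The jar names below come from the pom; the contents of the -job.jar depend on src/main/assembly/hadoop-job.xml, which is not shown.
jar tf target/NLPAnnotationController-job.jar | grep NlpProcessing
spark-submit --class com.scryAnalytics.NLPAnnotationController.Work.NlpProcessing target/NLPAnnotationController-jar-with-dependencies.jar -inputTable posts -outputTable posts -batchId 1
If NlpProcessing.class is missing from the -job.jar, the assembly descriptor is excluding the compiled classes, and the jar-with-dependencies artifact is the one to submit.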