I want to use phantom with my Scala IDE. To do this I cloned the GitHub repository and created a .jar file of phantom using sbt -> compile -> package. I added this .jar file to the build path in my Scala IDE, but the import
import com.websudos.phantom.connectors._
still throws the error
object connector is not a member of com.websudos.phantom.
When I use the auto-complete function of the Scala IDE, it shows only the import for
import com.websudos.phantom.example
I don't understand why a jar was created for the example package but not for the others.
I searched the internet, but every suggested option is to add the dependency via the sbt build file, which I don't want to use.
Use sbt-assembly instead to create a fat jar.
https://github.com/sbt/sbt-assembly
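A minimal sketch of the setup (the plugin version is an assumption; check the sbt-assembly README for the version matching your sbt):

// project/plugins.sbt -- enables the sbt-assembly plugin (version here is an assumption)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

Then run sbt assembly in the phantom checkout; the resulting fat jar under target/scala-2.1x/ contains phantom together with its dependencies, so adding that single jar to the Scala IDE build path should make com.websudos.phantom.connectors resolvable.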
I have created a Spark Scala (version 2.11) application and am trying to build it with Maven (version 3) in IntelliJ. The first time, I was able to compile and build the jar with Maven successfully and to test the Spark application on the cluster using that jar. The next time, I modified some of the existing Scala class code and tried to build again; the code compiled and a jar file was generated without any issues, but there are no Scala classes in the latest jar file. I would like to know why the Maven build is not generating the class files. Can you please let me know what the problem could be and how I can fix it?
The easiest way to build Scala applications for Spark is to use SBT and a fat jar plugin. Details are already described here:
How to build an Uber JAR (Fat JAR) using SBT within IntelliJ IDEA?
Just don't forget to exclude the Spark jars from the fat jar by marking them as provided.
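A minimal build.sbt sketch of what that looks like (artifact names and versions are assumptions; match them to your cluster's Spark and Scala versions):

// build.sbt -- Spark itself is marked "provided" so sbt-assembly leaves it out of the fat jar
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.1" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.2.1" % "provided"
)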
I have been using sbt on Windows with a custom build.sbt script, together with an import Chisel._ in the top-level file, to generate Verilog from my Chisel source successfully.
I'm trying to get an IDE working on Windows to expedite Chisel development. I've gone with the Eclipse-based Scala IDE: http://scala-ide.org/download/sdk.html/
I want to compile the Chisel library so that the import Chisel._ can be resolved locally, without having to go off and download the source from the repository each time and recompile it. When I download the Chisel-master repo from Git and include the src\main folder in my Scala project in the Scala IDE, I get lots of syntax errors in the Chisel Scala files that prevent me from building the project.
Has anyone done anything like this before on Windows, or does anyone have experience working with the Scala IDE? It may just be a case of undefined symbols in the project configuration.
I'm not sure exactly what you did with build.sbt with respect to recompiling (I think it downloads the source only the first time and then caches it for the future). But I'm using Scala IDE for Chisel on Linux with the default build.sbt files; maybe you can try to get it working out of the box first to help narrow down the issue.
Here are the steps I took to get Scala IDE working with Chisel:
The latest Scala IDE uses Scala 2.11.8, while the current Chisel repository defaults to 2.11.7, so I had to change every scalaVersion reference in the build.sbt files from 2.11.7 to 2.11.8.
I used sbteclipse
https://github.com/typesafehub/sbteclipse
to generate importable Eclipse project files with the compilation dependencies set up (see the sketch after these steps).
The exception is chiselFrontend: for some reason this package is not added as a dependency, so I have to add chiselFrontend as a Java build path dependency manually (Properties > Java Build Path, under Projects) for my own projects.
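A minimal sketch of the sbt side of those steps (the sbteclipse plugin version is an assumption; check the sbteclipse README):

// build.sbt (in the Chisel checkout) -- match the Scala IDE's Scala version
scalaVersion := "2.11.8"

// project/plugins.sbt -- sbteclipse generates .project/.classpath files for Eclipse (version is an assumption)
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")

After that, running sbt eclipse produces Eclipse project files that the Scala IDE can import via File > Import > Existing Projects into Workspace.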
To resolve undefined symbols, you can also add a JAR onto the project build path using Project Properties > Java Build Path > Libraries > Add External JARs...
If you are getting your JARs through Maven / SBT, they should be in the Ivy cache:
C:\Users\<name>\.ivy2\cache\edu.berkeley.cs\chisel3_2.11\jars
If you are using publishLocal with chisel3, your JARs should be in the local Ivy repository:
C:\Users\<name>\.ivy2\local\edu.berkeley.cs\chisel3_2.11\jars
Note that chisel3 is compiled into one JAR, including coreMacros and chiselFrontend sub-projects
Of course, this is a quick-and-dirty solution compared to something that can parse the SBT files.
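If you would rather let sbt manage it, a hedged sketch of the dependency route for your own project (the version string is an assumption; use whichever version publishLocal actually installed):

// build.sbt of your own project -- pull in the locally published chisel3 artifact
scalaVersion := "2.11.8"
libraryDependencies += "edu.berkeley.cs" %% "chisel3" % "3.0-SNAPSHOT"  // assumed version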
I have to do some assignments in Scala and I'm a newbie to this language.
In the assignment, the professor asks me to implement serialization and deserialization using twitter/chill.
https://github.com/twitter/chill/
However, I don't know how to import the library into my IDE, IntelliJ.
Each time I use val instantiator = new ScalaKryoInstantiator,
the IDE notifies me: Cannot resolve symbol ScalaKryoInstantiator.
Could anyone help me resolve this issue?
Thanks and Best Regards,
Long.
One way to add a library to a Scala project, if you use sbt, is to add an sbt dependency. Most libraries and jars can be found and downloaded from the Maven repositories: https://mvnrepository.com/artifact/com.twitter/chill_2.9.2/0.2.3
If you click the sbt tab on the page above, Maven gives you a code snippet which can be pasted directly into your build.sbt. To compile and run your project using sbt, open a terminal in the directory where your build.sbt is located and use the "sbt run" command.
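For example, the snippet for the artifact linked above would look roughly like this (that page is an old Scala 2.9.2 build; for a current Scala version you would normally use %% and a newer chill release, so the second version below is an assumption):

// build.sbt -- dependency copied from the mvnrepository page above (old chill_2.9.2 artifact)
libraryDependencies += "com.twitter" % "chill_2.9.2" % "0.2.3"

// or, resolved against the Scala version your project actually uses:
// libraryDependencies += "com.twitter" %% "chill" % "0.9.3"

With this in place, import com.twitter.chill.ScalaKryoInstantiator should resolve and new ScalaKryoInstantiator will compile.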
If you don't use sbt, download the jar from maven and follow the instructions in this answer: How to add external library in IntelliJ IDEA?
I am trying to use a Scala LSH implementation (https://github.com/marufaytekin/lsh-spark) in my Spark project. I cloned the repository and made some changes to its sbt file (added an organization).
To use this implementation, I compiled it using sbt compile, moved the jar file to the "lib" folder of my project, and updated my project's sbt configuration file, which looks like this:
Now when I try to compile my project using sbt compile, it fails to load the external jar file, showing the error message "unresolved dependency: com.lendap.spark.lsh.LSH#lsh-scala_2.10;0.0.1-SNAPSHOT: not found".
Am I following the right steps for adding an external jar file?
How do I solve the dependency issue?
As an alternative, you can build the lsh-spark project and add the jar to your Spark application.
To add external jars, the addJar option can be used when executing the Spark application. Refer to Running a Spark application on YARN.
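A hedged sketch of what that can look like from inside the driver (the jar path below is hypothetical; passing the jar via spark-submit's --jars option achieves the same thing):

import org.apache.spark.{SparkConf, SparkContext}

// register the extra jar with the running application so the executors can load its classes
val sc = new SparkContext(new SparkConf().setAppName("lsh-example"))
sc.addJar("/path/to/lsh-scala_2.10-0.0.1-SNAPSHOT.jar")  // hypothetical path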
This issue isn't related to Spark but to the sbt configuration.
Make sure you followed the correct folder structure imposed by sbt and added your jar in the lib folder, as explained here - the lib folder should be at the same level as build.sbt (cf. this post).
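In other words, a jar dropped into lib/ is an unmanaged dependency, so it should not also be listed in libraryDependencies - that entry is what sbt is failing to resolve. A minimal sketch with assumed names and versions:

// build.sbt -- no libraryDependencies entry for the lsh jar; sbt picks up lib/*.jar automatically
name := "my-spark-app"      // hypothetical project name
scalaVersion := "2.10.6"    // assumed, to match the lsh-scala_2.10 artifact
// expected layout:
//   build.sbt
//   lib/lsh-scala_2.10-0.0.1-SNAPSHOT.jar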
You might also want to check out this SO post.
As shown in the image, it gives an error when I import the Spark packages. Please help. When I hover over it, it shows "object apache is not a member of package org".
I searched for this error; it indicates the Spark jars have not been imported. So I imported "spark-assembly-1.4.1-hadoop2.2.0.jar" too, but I still get the same error. Below is what I actually want to run:
import org.apache.spark.{SparkConf, SparkContext}

object ABC {
  def main(args: Array[String]) {
    // Scala main method
    println("Spark Configuration")
    val conf = new SparkConf()
    conf.setAppName("My First Spark Scala Application")
    conf.setMaster("spark://ip-10-237-224-94:7077")
    println("Creating Spark Context")
  }
}
Adding the spark-core jar to your classpath should resolve your issue. Also, if you are using a build tool like Maven or Gradle (and if not, you should, because spark-core has many dependencies and you will keep hitting this problem with different jars), use the Eclipse task provided by these tools to properly set the classpath in your project.
I was also receiving the same error; in my case it was a compatibility issue. Spark 2.2.1 is not compatible with Scala 2.12 (it is compatible with 2.11.8), and my IDE was using Scala 2.12.3.
I resolved my error by:
1) Importing the jar files from the base folder of the Spark installation. When Spark is installed on the C drive, there is a Spark folder that contains a jars folder; this folder holds all the basic jar files.
Go to Eclipse, right-click on the project -> Properties -> Java Build Path. Under the 'Libraries' tab there is an 'Add External JARs...' option. Select it, import all the jar files from the 'jars' folder, and click Apply.
2) Again go to Properties -> Scala Compiler -> Scala Installation -> Latest 2.11 bundle (dynamic)*
*Before selecting this option, one should check the compatibility of the Spark and Scala versions.
The problem is that Scala is NOT backward (binary) compatible across versions, so each Spark module is compiled against a specific Scala library. When we run from Eclipse, there is one Scala version that was used to compile and create the Spark dependency jar we add to the build path, and a second Scala version in the Eclipse runtime environment. The two may conflict.
This is a hard reality, although we wish Scala were backward compatible, or at least that a compiled jar file could be backward compatible.
Hence, the recommendation is to use Maven or a similar tool where dependency versions can be managed.
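A hedged sketch of what that management looks like on the sbt side (versions are assumptions): declaring the dependency with %% makes the build tool pick the Spark artifact compiled for your declared Scala version, so the two cannot silently diverge.

// build.sbt -- one scalaVersion, and %% appends the matching _2.11 suffix to the artifact name
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.1" % "provided"
// resolves to spark-core_2.11, i.e. the Spark build compiled against Scala 2.11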
If you are doing this in the context of Scala within a Jupyter Notebook, you'll get this error. You have to install the Apache Toree kernel:
https://github.com/apache/incubator-toree
and create your notebooks with that kernel.
You also have to start the Jupyter Notebook with:
pyspark