Scala REPL startup command line

When I start the Scala REPL, is there a way to put Scala code or source files on the command line so they run automatically? I would like some imports to be done automatically at startup (on the command line in a bash script) for user convenience.
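One direct approach worth noting: the classic Scala 2 launcher accepts a -i flag that preloads a source file before dropping into the interactive session, so a bash script can point it at a small preamble file. A minimal sketch (the file name and imports below are illustrative, not from the original question):

// repl-init.scala - illustrative preamble, preloaded with: scala -i repl-init.scala
import java.time._
import scala.concurrent.ExecutionContext.Implicits.global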

Instead of the Scala REPL, use Ammonite. It is much better than the Scala REPL.
The best part about Ammonite is that you can create a file called predef.sc under the ~/.ammonite folder. There you can specify library dependencies in sbt-style coordinates, like interp.load.ivy("joda-time" % "joda-time" % jodaVersion), and also imports, like import scala.concurrent.ExecutionContext.Implicits.global.
Now each time you start the Ammonite REPL, all those dependencies and imports are already present. How cool is that?
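A minimal sketch of such a predef.sc (the jodaVersion value here is illustrative, not from the original answer):

// ~/.ammonite/predef.sc - evaluated every time the Ammonite REPL starts
val jodaVersion = "2.10.14" // illustrative version number
// fetch a library dependency at startup
interp.load.ivy("joda-time" % "joda-time" % jodaVersion)
// imports you always want in scope
import scala.concurrent.ExecutionContext.Implicits.global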

If your project has a build.sbt file, you can add something like this to it:
initialCommands in console :=
"""
import java.net._
import java.time._
import java.time.format._
import java.time.temporal._
import javax.mail._
import javax.mail.internet._
"""
In case the syntax is unclear: it is simply a multi-line string literal using triple quotes.
Having that collection of basic classes handy (whatever classes make sense for your work) saves a lot of time!
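If you want the same imports in every project rather than in a single build.sbt, a hedged variant (assuming sbt 1.x and its slash syntax) is a user-level global file:

// ~/.sbt/1.0/global.sbt - applies to every sbt 1.x project on this machine
console / initialCommands :=
"""
import java.time._
import java.time.format._
"""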

Related

How to run a shell file in Apache Spark using scala

I need to execute a shell file at the end of my code in Spark using Scala. I used the count and groupBy functions in my code. I should mention that my code works perfectly without the last lines of code:
import sys.process._
// linux commands
val procresult = "./Users/saeedtkh/Desktop/seprator.sh".!!
Could you please help me fix it?
You must import the sys.process._ package from the Scala standard library and use its DSL with !:
import sys.process._
val result = "ls -al".!
Or do the same with scala.sys.process.Process:
import scala.sys.process._
Process("cat data.txt")!

How to import libraries in Spark Notebook

I'm having trouble importing magellan-1.0.4-s_2.11 in Spark Notebook. I've downloaded the JAR from https://spark-packages.org/package/harsha2010/magellan and have tried placing SPARK_HOME/bin/spark-shell --packages harsha2010:magellan:1.0.4-s_2.11 in the "Start of Customized Settings" section of the spark-notebook file in the bin folder.
Here are my imports:
import magellan.{Point, Polygon, PolyLine}
import magellan.coord.NAD83
import org.apache.spark.sql.magellan.MagellanContext
import org.apache.spark.sql.magellan.dsl.expressions._
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
And my errors...
<console>:71: error: object Point is not a member of package org.apache.spark.sql.magellan
import magellan.{Point, Polygon, PolyLine}
^
<console>:72: error: object coord is not a member of package org.apache.spark.sql.magellan
import magellan.coord.NAD83
^
<console>:73: error: object MagellanContext is not a member of package org.apache.spark.sql.magellan
import org.apache.spark.sql.magellan.MagellanContext
I then tried to import the new library like any other library by placing it into the main script like so:
$lib_dir/magellan-1.0.4-s_2.11.jar
This didn't work and I'm left scratching my head wondering what I've done wrong. How do I import libraries such as magellan into Spark Notebook?
Try evaluating something like
:dp "harsha2010" % "magellan" % "1.0.4-s_2.11"
It will load the library into Spark, allowing it to be imported - assuming it can be obtained through the Maven repo. In my case it failed with a message:
failed to load 'harsha2010:magellan:jar:1.0.4-s_2.11 (runtime)' from ["Maven2 local (file:/home/dev/.m2/repository/, releases+snapshots) without authentication", "maven-central (http://repo1.maven.org/maven2/, releases+snapshots) without authentication", "spark-packages (http://dl.bintray.com/spark-packages/maven/, releases+snapshots) without authentication", "oss-sonatype (https://oss.sonatype.org/content/repositories/releases/, releases+snapshots) without authentication"] into /tmp/spark-notebook/aether/b2c7d8c5-1f56-4460-ad39-24c4e93a9786
I think the file was too big and the connection was interrupted before the whole file could be downloaded.
Workaround
So I downloaded the JAR manually from:
http://dl.bintray.com/spark-packages/maven/harsha2010/magellan/1.0.4-s_2.11/
and copied it into the:
/tmp/spark-notebook/aether/b2c7d8c5-1f56-4460-ad39-24c4e93a9786/harsha2010/magellan/1.0.4-s_2.11
And then the :dp command worked. Try calling it first, and if it fails, copy the JAR into the right path to make things work.
Better solution
I should investigate why the download failed and fix it in the first place... or put that library in my local M2 repo. But this should get you going.
I would suggest to check this:
https://github.com/spark-notebook/spark-notebook/blob/master/docs/metadata.md#import-download-dependencies
and
https://github.com/spark-notebook/spark-notebook/blob/master/docs/metadata.md#add-spark-packages
I think the :dp magic command is deprecated; instead you should add your custom dependencies in the notebook metadata. Go to the menu Edit > Edit notebook metadata and add something like:
"customDeps": [
"harsha2010 % magellan % 1.0.4-s_2.11"
]
Once done, you will need to restart the kernel; you can then check in the browser console whether the package is being downloaded properly.
The easy way: set or add the EXTRA_CLASSPATH environment variable to point to the .jar file you downloaded:
export EXTRA_CLASSPATH=</link/to/your.jar> (or set EXTRA_CLASSPATH=</link/to/your.jar> on Windows).

Automatically execute statements in sbt console [duplicate]

When using sbt console I find myself repeatedly entering some import statements. It would be great if there were a way to tell sbt to always run some commands at startup. Is there a way?
At the moment I have a kinda crufty solution:
( echo "import my.app._
import my.app.is.sooo.cool._" && cat ) | sbt console
Googleability words:
Initial commands, first commands, initial expressions, build file, initial statements, startup expressions, startup commands, startup statements.
You can use initialCommands:
initialCommands in console := """import my.app._
import my.app.is.sooo.cool._"""
Given that "sbt console" lets you run the scala repl, why not just create a custom .scala file (say "default.scala") where to store all the imports, and then just run :load /path/to/default.scala? This would achieve what you need in a persistent way.
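A minimal sketch of that approach (the path and imports echo the question; adjust them to your project):

// /path/to/default.scala - replayed in the REPL via :load
import my.app._
import my.app.is.sooo.cool._

Then, inside the REPL started by sbt console, type :load /path/to/default.scala and the file is evaluated line by line.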

Swift REPL: how to import, load, evaluate, or require a .swift file?

In the Swift REPL, how to import (a.k.a. load, evaluate, require) a typical text *.swift file?
I want to use the code from this file: ~/src/Foo.swift
Syntax like this doesn't work: import ~/src/Foo.swift
For comparison:
An equivalent solution in the Swift REPL for a framework is: import Foundation
An equivalent solution in the Ruby REPL for a *.ruby file is: require "~/src/foo"
These are similar questions that are /not/ what I'm asking:
How to use/make a Swift command-line script, executable, module, etc.
How to use/make an XCode playground, project, library, framework, etc.
How to launch the REPL with a pre-existing list of files.
You need to use -I, so if your modulename.swiftmodule file is in ~/mymodules, then launch swift with
swift -I ~/mymodules
and then you will be able to import your module:
import modulename
It should be that easy.
In Swift, you can't import a typical *.swift file. An import declaration can only have the following syntax:
import-declaration → attributes(opt) import import-kind(opt) import-path
import-kind → typealias | struct | class | enum | protocol | var | func
import-path → import-path-identifier | import-path-identifier . import-path
import-path-identifier → identifier | operator
From: Apple Inc., "The Swift Programming Language (Swift 2)", iBooks.
which can be described as these formats:
import module
import import-kind module.symbol-name
import module.submodule
Something like import head.swift does not match any of these forms.
Usually you compile the files you want to import into a framework; then it can be treated as a module. Use -F framework_directory/ to specify the search path for third-party frameworks.
Create a file. For example:
// test.swift
import headtest
print("Hello World")
Open your terminal, go to the directory where you created the file, and execute:
swift -F headtest test.swift
And done.
Simply insert the shebang line at the top of your script:
#!/usr/bin/env xcrun swift
You can copy/paste the source code into the repl and execute it.
Not ideal, obviously, but sometimes useful.
Now the Swift REPL supports packages. We can do this with the following steps:
In Xcode, select menu File > New > Package. Choose a name for the package, for example MyLibrary.
Copy your code or .swift files to the Sources/MyLibrary/ directory in your package.
Remember to make your interface public.
On the command line, go to the package directory and run the REPL, like this:
cd MyLibrary/
swift run --repl
In the REPL, import your library, like this:
import MyLibrary
Now you can use your code in the REPL.
It looks like it's not possible to import a file and get Xcode to use the REPL for it under the constraints you gave. I think you can still do it by creating a proper framework, but that's not exactly what you were looking for.

Why does the scala-ide not allow multiple package definitions at the top of a file?

In Scala it is common practice to stack package statements to allow shorter imports, but when I load a file using stacked packages into the Scala IDE and attempt an import starting with the same organization, I get a compiler error from what appears to be the presentation compiler. The code compiles fine in sbt outside of the IDE.
An example code snippet is as follows:
package com.coltfred
package util
package time
import com.github.nscala_time.time.Imports._
On the import I get the error object github is not a member of package com.coltfred.util.com.
If I move the import to a single line the error will go away, but we've used this practice frequently in our code base so changing them all to be single line package statements would be a pain.
Why is this happening and is there anything I can do to fix it?
Edit:
I used the eclipse-sbt plugin to generate the eclipse project file for this. The directory structure is what it should be and all of the dependencies are in the classpath.
Edit 2:
It turns out there was a file in the test tree of the util package (which should have been in the same package), but had a duplicate package statement at the top. I didn't check the test tree because it shouldn't affect the compilation of the main tree, but apparently I was wrong.
Not sure why Scala IDE doesn't like this, but you can force the import to start at the top level using _root_:
import _root_.com.github.nscala_time.time.Imports._
See if that avoids irritating the IDE.
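For intuition, a small sketch of why the bare import can break: with stacked package clauses, names resolve outward through each enclosing package, so if anything on the classpath defines a package com.coltfred.util.com (here, the stray duplicate package statement in the test tree), the bare name com resolves to that nested package and shadows the top-level com:

package com.coltfred
package util
package time
// `com` resolves to com.coltfred.util.com if that package exists,
// so this import fails to find `github` inside it:
// import com.github.nscala_time.time.Imports._
// anchoring the lookup at the root package avoids the shadowing:
import _root_.com.github.nscala_time.time.Imports._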
This is a common annoyance, one that annoyed paulp into an attempt to fix it. His idea was that a directory that doesn't contribute class files shouldn't be taken as a package: if you can take util as scala.util, you should do so in preference to foo.util where that util is empty.
The util dir is the usual suspect, because who doesn't have a util dir lying around, and in particular, ./util?
apm@mara:~/tmp/coltfred$ mkdir -p com/coltfred/util/time
apm@mara:~/tmp/coltfred$ mkdir -p com/coltfred/util/com
apm@mara:~/tmp/coltfred$ vi com/coltfred/util/time/test.scala
apm@mara:~/tmp/coltfred$ scalac com/coltfred/util/time/test.scala
./com/coltfred/util/time/test.scala:5: error: object github is not a member of package com.coltfred.util.com
import com.github.nscala_time.time._
^
one error found
apm@mara:~/tmp/coltfred$ cat com/coltfred/util/time/test.scala
package com.coltfred
package util
package time
import com.github.nscala_time.time._
class Test
apm@mara:~/tmp/coltfred$
To debug, find out where the offending package is getting loaded from.