Scala application structure

I am learning Scala now and I want to write some silly little app like a console Twitter client, or whatever. The question is how to structure the application on disk and logically. I know Python, and there I would just create some files with classes and then import them in the main module, like import util.ssh or from tweets import Retweet (hoping you don't mind those names, they are just for reference). But how should I do this in Scala? Also, I don't have much experience with the JVM and Java, so I am a complete newbie here.

I'm going to disagree with Jens, here, though not all that much.
Project Layout
My own suggestion is that you model your efforts on Maven's standard directory layout.
Previous versions of SBT (before SBT 0.9.x) would create it automatically for you:
dcs@ayanami:~$ mkdir myproject
dcs@ayanami:~$ cd myproject
dcs@ayanami:~/myproject$ sbt
Project does not exist, create new project? (y/N/s) y
Name: myproject
Organization: org.dcsobral
Version [1.0]:
Scala version [2.7.7]: 2.8.1
sbt version [0.7.4]:
Getting Scala 2.7.7 ...
:: retrieving :: org.scala-tools.sbt#boot-scala
confs: [default]
2 artifacts copied, 0 already retrieved (9911kB/134ms)
Getting org.scala-tools.sbt sbt_2.7.7 0.7.4 ...
:: retrieving :: org.scala-tools.sbt#boot-app
confs: [default]
15 artifacts copied, 0 already retrieved (4096kB/91ms)
[success] Successfully initialized directory structure.
Getting Scala 2.8.1 ...
:: retrieving :: org.scala-tools.sbt#boot-scala
confs: [default]
2 artifacts copied, 0 already retrieved (15118kB/160ms)
[info] Building project myproject 1.0 against Scala 2.8.1
[info] using sbt.DefaultProject with sbt 0.7.4 and Scala 2.7.7
> quit
[info]
[info] Total session time: 8 s, completed May 6, 2011 12:31:43 PM
[success] Build completed successfully.
dcs@ayanami:~/myproject$ find . -type d -print
.
./project
./project/boot
./project/boot/scala-2.7.7
./project/boot/scala-2.7.7/lib
./project/boot/scala-2.7.7/org.scala-tools.sbt
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-bin_2.7.7.final
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-src
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-bin_2.8.0.RC2
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/xsbti
./project/boot/scala-2.8.1
./project/boot/scala-2.8.1/lib
./target
./lib
./src
./src/main
./src/main/resources
./src/main/scala
./src/test
./src/test/resources
./src/test/scala
So you'll put your source files inside myproject/src/main/scala, for the main program, or myproject/src/test/scala, for the tests.
Since that doesn't work anymore, there are some alternatives:
giter8 and sbt.g8
Install giter8, clone ymasory's sbt.g8 template, adapt it to your needs, and use that. See below for an example using ymasory's sbt.g8 template unmodified. I think this is one of the best alternatives for starting new projects once you have a good notion of what you want in all your projects.
$ g8 ymasory/sbt
project_license_url [http://www.gnu.org/licenses/gpl-3.0.txt]:
name [myproj]:
project_group_id [com.example]:
developer_email [john.doe@example.com]:
developer_full_name [John Doe]:
project_license_name [GPLv3]:
github_username [johndoe]:
Template applied in ./myproj
$ tree myproj
myproj
├── build.sbt
├── LICENSE
├── project
│   ├── build.properties
│   ├── build.scala
│   └── plugins.sbt
├── README.md
├── sbt
└── src
    └── main
        └── scala
            └── Main.scala
4 directories, 8 files
np plugin
Use softprops's np plugin for sbt. In the example below, the plugin is configured in ~/.sbt/plugins/build.sbt and its settings in ~/.sbt/np.sbt, with the standard sbt script. If you use paulp's sbt-extras, you'd need to install these things under the right Scala version subdirectory in ~/.sbt, as it uses separate configurations for each Scala version. In practice, this is the one I use most often.
$ mkdir myproj; cd myproj
$ sbt 'np name:myproj org:com.example'
[info] Loading global plugins from /home/dcsobral/.sbt/plugins
[warn] Multiple resolvers having different access mechanism configured with same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
[info] Set current project to default-c642a2 (in build file:/home/dcsobral/myproj/)
[info] Generated build file
[info] Generated source directories
[success] Total time: 0 s, completed Apr 12, 2013 12:08:31 PM
$ tree
.
├── build.sbt
├── src
│   ├── main
│   │   ├── resources
│   │   └── scala
│   └── test
│       ├── resources
│       └── scala
└── target
    └── streams
        └── compile
            └── np
                └── $global
                    └── out
12 directories, 2 files
mkdir
You could simply create it with mkdir:
$ mkdir -p myproj/src/{main,test}/{resource,scala,java}
$ tree myproj
myproj
└── src
    ├── main
    │   ├── java
    │   ├── resource
    │   └── scala
    └── test
        ├── java
        ├── resource
        └── scala
9 directories, 0 files
Source Layout
Now, about the source layout. Jens recommends following Java style. Well, the Java directory layout is a requirement -- in Java. Scala does not have the same requirement, so you have the option of following it or not.
If you do follow it, assuming the base package is org.dcsobral.myproject, then source code for that package would be put inside myproject/src/main/scala/org/dcsobral/myproject/, and so on for sub-packages.
Two common ways of diverging from that standard are:
Omitting the base package directory, and only creating subdirectories for the sub-packages.
For instance, let's say I have the packages org.dcsobral.myproject.model, org.dcsobral.myproject.view and org.dcsobral.myproject.controller, then the directories would be myproject/src/main/scala/model, myproject/src/main/scala/view and myproject/src/main/scala/controller.
Putting everything together. In this case, all source files go directly inside myproject/src/main/scala. This is good enough for small projects. In fact, if you have no sub-packages, it is the same as the previous option.
And this deals with directory layout.
File Names
Next, let's talk about files. In Java, the practice is to give each class its own file, whose name follows the name of the class. This is good enough in Scala too, but you have to pay attention to some exceptions.
First, Scala has object, which Java does not have. A class and an object of the same name are considered companions, which has some practical implications, but only if they are in the same file. So place companion classes and objects in the same file.
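As a small sketch (the class name and members here are hypothetical, not from the question), a companion pair kept together in one file might look like this:
// src/main/scala/org/dcsobral/myproject/Tweet.scala
package org.dcsobral.myproject

class Tweet(val text: String) {
  def shortened: String = text.take(140)
}

// the companion object must be declared in the same file as the class
object Tweet {
  def apply(text: String): Tweet = new Tweet(text)
}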
Second, Scala has a concept known as sealed classes (or traits), which limits subclasses (or implementing objects) to those declared in the same file. This is mostly done to create algebraic data types with pattern matching and exhaustiveness checking. For example:
sealed abstract class Tree
case class Node(left: Tree, right: Tree) extends Tree
case class Leaf(n: Int) extends Tree
scala> def isLeaf(t: Tree) = t match {
     |   case Leaf(n: Int) => println("Leaf "+n)
     | }
<console>:11: warning: match is not exhaustive!
missing combination Node
       def isLeaf(t: Tree) = t match {
                             ^
isLeaf: (t: Tree)Unit
If Tree were not sealed, then anyone could extend it, making it impossible for the compiler to know whether the match was exhaustive or not. Anyway, sealed classes go together with their subclasses in the same file.
Another naming convention is to name the file containing a package object (for that package) package.scala.
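For illustration, a package object for a hypothetical org.dcsobral.myproject.model package would live in src/main/scala/org/dcsobral/myproject/model/package.scala (assuming the Maven-style layout above) and could look like this:
// src/main/scala/org/dcsobral/myproject/model/package.scala
package org.dcsobral.myproject

// members defined here are visible everywhere inside org.dcsobral.myproject.model
package object model {
  type UserId = Long
  val MaxTweetLength = 140
}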
Importing Stuff
The most basic rule is that things in the same package see each other. So, put everything in the same package and you don't need to concern yourself with what sees what.
But Scala also has relative references and imports. This requires a bit of explanation. Say I have the following declarations at the top of my file:
package org.dcsobral.myproject
package model
Everything that follows will be put in the package org.dcsobral.myproject.model. Also, not only is everything inside that package visible, but everything inside org.dcsobral.myproject is visible as well. If I had just declared package org.dcsobral.myproject.model instead, then the members of org.dcsobral.myproject would not be visible.
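A minimal two-file sketch of the difference (the Config and TweetStore names are made up for illustration):
// src/main/scala/org/dcsobral/myproject/Config.scala
package org.dcsobral.myproject

class Config(val user: String)

// src/main/scala/org/dcsobral/myproject/model/TweetStore.scala
package org.dcsobral.myproject
package model

// Config is visible here without an import because of the chained package
// clauses above; with a single `package org.dcsobral.myproject.model` clause
// it would require `import org.dcsobral.myproject.Config`.
class TweetStore(config: Config)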
The rule is pretty simple, but it can confuse people a bit at first. The reason for this rule is relative imports. Consider now the following statement in that file:
import view._
This import may be relative -- all imports are relative unless you prefix them with _root_. It can refer to any of the following packages: org.dcsobral.myproject.model.view, org.dcsobral.myproject.view, scala.view and java.lang.view. It could also refer to an object named view inside scala.Predef. Or it could be an absolute import referring to a top-level package named view.
If more than one such package exists, Scala picks one according to some precedence rules. If you need to import something else, you can turn the import into an absolute one.
This import makes everything inside the view package (wherever it is) visible in its scope. If it happens inside a class, an object or a def, then the visibility is restricted to that scope. It imports everything because of the ._, which is a wildcard.
An alternative might look like this:
package org.dcsobral.myproject.model
import org.dcsobral.myproject.view
import org.dcsobral.myproject.controller
In that case, the packages view and controller would be visible, but you'd have to name them explicitly when using them:
def post(view: view.User): Node =
Or you could use further relative imports:
import view.User
The import statement also enables you to rename things, or to import everything except something. Refer to the relevant documentation for more details.
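As a quick sketch of those two forms (using java.util only as an illustration; the object and method names are made up):
import java.util.{List => JList}   // rename: refer to java.util.List as JList
import java.util.{Date => _, _}    // import everything from java.util except Date

object ImportDemo {
  // JList comes from the renaming import; UUID from the wildcard import
  def ids(xs: JList[UUID]): Int = xs.size
}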
So, I hope this answers all your questions.

Scala supports and encourages the package structure of Java/the JVM, and pretty much the same recommendations apply:
mirror the package structure in the directory structure. This isn't necessary in Scala, but it helps to find your way around
use your inverse domain as a package prefix. For me that means everything starts with de.schauderhaft. Use something that makes sense for you if you don't have your own domain
only put multiple top-level classes in one file if they are small and closely related. Otherwise stick with one class/object per file. Exceptions: companion objects go in the same file as their class, and implementations of a sealed class go into the same file.
if your app grows you might want to have something like layers and modules and mirror those in the package structure, so you might have a package structure like this: <domain>.<module>.<layer>.<optional subpackage> (see the sketch after this list)
don't have cyclic dependencies on a package, module or layer level
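To make the module/layer bullet concrete, a purely illustrative package declaration for a timeline module might look like this (all names beyond the de.schauderhaft prefix are made up):
// <domain>.<module>.<layer>
package de.schauderhaft.timeline.model

case class TimelineEntry(author: String, text: String)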

Related

How to specify the main class (in the root directory) for Mill to run?

I am new to sbt and Mill, and I am practicing using both tools to build Chisel (a Scala project). Using this GitHub repo as a reference, I would like to know how to write the Mill version of the build.sh in that repo.
Here is my directory structure:
chisel-template (root directory / project directory)
├── build.sc
├── build.sh
├── src
│   └── main
│       └── scala
│           └── lab1
│               └── Mux2.scala
└── _temphelper.scala
What build.sh does is prepare a boilerplate main function in the root directory to make the compile-and-run process much easier; this is its sbt version. I'm curious why sbt can detect the main function (_temphelper.Elaborate) even though it's not under src/main, while Mill can't detect _temphelper.scala at all unless I move the file to root/src/main. Is there a setting that makes Mill do what sbt does?
I'm not sure whether this issue is related to:
altering the sourceDirectories in sbt and chiselModule.sources in Mill, or
modifying the millSourcePath to achieve what I want.
My question is: what setting do I need so that Mill can detect a main class located in the root directory? Any advice is welcome.
This is because sbt includes any Scala files it finds in the project root directory as source files, unless told otherwise.
In contrast, Mill will only use the source files found under the directories specified with sources. As a consequence, you could add the project root directory as a source directory, but I strongly advise against doing so.
Best is to move the _temphelper.scala file either to one of the source directories, or to create a new dedicated directory, move the file there, and add that directory to the sources like this:
object chiselModule extends CrossSbtModule /* ... */ {
  def sources = T.sources {
    super.sources() ++ Seq(PathRef(T.workspace / "your" / "new" / "directory"))
  }
}

Best practices for Scala physical directory structure [duplicate]


What is the simplest way to create an importable file in Scala?

TLDR: What is the simplest way to create an importable file in Scala?
This is a beginner's question.
I've been learning Scala for a few weeks and now have some code that I would like to share between a couple of files/different projects. I have tried a lot of import structures but none of them worked. The requirements are:
File to be imported should reside in a totally different directory.
File to be imported should be importable by independent projects.
File to be imported is a single .scala file.
(Optional) file to be imported should contain defs, objects and case classes.
Example:
File to be imported location: /some/path/to_be_imported.scala.
File using project (1) location: /abc/def/will_import01.scala.
File using project (2) location: /xyz/rst/will_import02.scala.
I'm not trying to create a package or distribute it.
Here is how I would address this in the programming language I already know:
Since I'm versed in Python, I'll give the version of the answer I would expect if this problem referred to Python:
In that case you could:
Put your file in the same directory as your executed file, then just run python3 ./your_file.py. For instance:
➜ another_path|$ python3 ./main_module/main_file.py
1
self printing
➜ another_path|$ tree .
=======================================================================
.
└── main_module
    ├── main_file.py
    ├── __pycache__
    │   └── sample_file_to_be_imported.cpython-36.pyc
    └── sample_file_to_be_imported.py
Notice that they are in the exact same directory (this contradicts point 2 above, but it nevertheless solves the problem).
Add the directory of your file to the PYTHONPATH environment variable then run your module (best answer):
➜ random_path|$ PYTHONPATH=$PYTHONPATH:./sample_module python3 ./main_module/main_file.py
1
self printing
=======================================================================
➜ random_path|$ tree .
.
├── main_module
│   └── main_file.py
└── sample_module
    ├── __pycache__
    │   └── sample_file_to_be_imported.cpython-36.pyc
    └── sample_file_to_be_imported.py
3 directories, 3 files
Content of the files:
➜ random_path|$ cat ./main_module/main_file.py
from sample_file_to_be_imported import func1, Class01
print(func1())
x = Class01()
x.cprint()
=======================================================================
➜ random_path|$ cat ./sample_module/sample_file_to_be_imported.py
def func1():
    return 1

class Class01():
    def cprint(self):
        print('self printing')
Edit 01: @felipe-rubin's answer does not work:
$ scala -cp /tmp/scala_stack_exchange/ myprogram.scala
/tmp/scala_stack_exchange/path01/myprogram.scala:3: error: not found: value Timer
val x = Timer(1)
^
one error found
=======================================================================
➜ path01 tree /tmp/scala_stack_exchange
/tmp/scala_stack_exchange
├── anotherpath
│   ├── Timer.class
│   └── timer.scala
└── path01
    └── myprogram.scala
2 directories, 3 files
=======================================================================
$ cat /tmp/scala_stack_exchange/anotherpath/timer.scala
class Timer(a: Int) {
def time(): Unit = println("time this")
}
=======================================================================
$ cat /tmp/scala_stack_exchange/path01/myprogram.scala
import anotherpath.Timer
val x = Timer(1)
x.time()
The simplest way would be to compile a .scala file with scalac:
Linux/OSX: scalac mypackage/Example.scala
Windows: scalac mypackage\Example.scala
The above should generate a .class file (or more).
Assuming the file contains a class called Example you can import it somewhere else like this:
import mypackage.Example
When compiling another file that does the above import, you will need the directory containing 'mypackage' on the classpath. You can add directories to the classpath when calling scalac by using the -cp flag, like:
Linux/OSX: scalac -cp .:path/to/folder/where/mypackage/is/located AnotherExample.scala
Windows: scalac -cp .;path\to\folder\where\mypackage\is\located AnotherExample.scala
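To make this concrete, here is a minimal sketch of the two files; the mypackage and Example names match the ones used above, while the greet method and the AnotherExample object are made up for illustration:
// mypackage/Example.scala  (compile with: scalac mypackage/Example.scala)
package mypackage

class Example {
  def greet(name: String): String = "Hello, " + name
}

// AnotherExample.scala  (compile with: scalac -cp . AnotherExample.scala)
import mypackage.Example

object AnotherExample {
  def main(args: Array[String]): Unit =
    println(new Example().greet("Scala"))
}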
Doing this for bigger projects gets complicated, in which case you might resort to a build tool (e.g. sbt) or an IDE (e.g. IntelliJ IDEA) to do the complicated work for you.
Other notes:
If you don't have scalac, you can get it from the Scala website (the 'download binaries' option)
the -cp flag stands for "classpath". There is also a -classpath flag which does the same thing
Welcome to Scala :)
I finally got this working. Thanks for the valuable input from the other answers.
I have varied the name of every path, file and object to be as general as possible. This probably does not follow the guidelines of the Scala community, but it is the most explicit, illustrative help I could find. Project layout:
File Layout
$ tree /tmp/scala_stack_exchange
/tmp/scala_stack_exchange
├── anotherpath
│   ├── file_defines_class.scala
│   └── some_package_name
│       ├── MyObj.class
│       └── MyObj$.class
└── path01
    └── myprogram.scala
3 directories, 4 files
Where I want to run myprogram.scala which should import classes defined in file_defines_class.scala.
Preparation
Compile the file you want to be imported by other modules:
cd /tmp/scala_stack_exchange/anotherpath && scalac ./file_defines_class.scala
Execution
cd /tmp/scala_stack_exchange/path01 && scala -cp /tmp/scala_stack_exchange/anotherpath/ ./myprogram.scala
Results
myobj time
Contents of the files
// /tmp/scala_stack_exchange/path01/myprogram.scala
import some_package_name.MyObj
val x = new MyObj(10)
x.time()
// /tmp/scala_stack_exchange/anotherpath/file_defines_class.scala
package some_package_name
object MyObj
case class MyObj(i: Int) {
def time(): Unit = println("myobj time")
}
Feels like magic. However this whole process is rather cumbersome :/

Defining sbt task that invokes method from project code?

I'm using sbt to build a Scala project. I want to define a very simple task, so that when I enter generate in sbt:
sbt> generate
it will invoke my my.App.main(..) method to generate something.
There is an App.scala file in myproject/src/main/scala/my, and the simplified code is like this:
object App {
  def main(args: Array[String]) {
    val source = readContentOfFile("mysource.txt")
    val result = convert(source)
    writeToFile(result, "mytarget.txt")
  }
  // ignore some methods here
}
I tried to add the following code to myproject/build.sbt:
lazy val generate = taskKey[Unit]("Generate my file")
generate := {
  my.App.main(Array())
}
But that doesn't compile, since it can't find my.App.
Then I tried to add it to myproject/project/build.scala:
import sbt._
import my._
object HelloBuild extends Build {
  lazy val generate = taskKey[Unit]("Generate my file")
  generate := {
    App.main(Array())
  }
}
But it still doesn't compile; it can't find the package my.
How do I define such a task in sbt?
In .sbt format, do:
lazy val generate = taskKey[Unit]("Generate my file")
fullRunTask(generate, Compile, "my.App")
This is documented at http://www.scala-sbt.org/0.13.2/docs/faq.html, “How can I create a custom run task, in addition to run?”
Another approach would be:
lazy val generate = taskKey[Unit]("Generate my file")
generate := (runMain in Compile).toTask(" my.App").value
which works fine in simple cases but isn't as customizable.
Update: Jacek's advice to use resourceGenerators or sourceGenerators instead is good, if it fits your use case — can't tell from your description whether it does.
The other answers fit the question very well, but I think the OP might benefit from mine, too :)
The OP asked how to "define a very simple task" so that typing generate in sbt invokes my.App.main(..) to generate something - an approach that might ultimately complicate the build.
sbt already offers a way to generate files at build time - sourceGenerators and resourceGenerators - and, having read the question, I can't see a need to define a separate task for this.
In Generating files (see the future version of the document in the commit) you can read:
sbt provides standard hooks for adding source or resource generation
tasks.
With that knowledge, one could think of the following solution:
sourceGenerators in Compile += Def.task {
  my.App.main(Array()) // it's not going to work without one change, though
  Seq[File]()          // a workaround before the above change is in effect
}.taskValue
To make that work you should return a Seq[File] that contains files generated (and not the empty Seq[File]()).
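A minimal sketch of what that could look like with sbt 0.13 syntax (the generated path and file contents here are made up; IO.write is sbt's bundled file helper):
sourceGenerators in Compile += Def.task {
  // write one managed source file and return it, so sbt tracks it as generated
  val file = (sourceManaged in Compile).value / "my" / "Generated.scala"
  IO.write(file, "package my\n\nobject Generated { val answer = 42 }\n")
  Seq(file)
}.taskValue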
The main change needed for the code to work is to move the my.App class to the project folder. It then becomes part of the build definition. That also reflects what the class does, as it's really part of the build, not of the artifact that is the product of the build. When the same code is part of both the build and the artifact itself, you don't keep the different concerns separate. If the my.App class participates in the build, it should belong to it - hence the move to the project folder.
The project's layout would then be as follows:
$ tree
.
├── build.sbt
└── project
    ├── App.scala
    └── build.properties
Separation of concerns (aka @joescii in da haus)
There's a point in @joescii's answer (which I extend in this answer) - "to make it a separate project that other projects can use. To do this, you will need to put your App object into a separate project and include it as a dependency in project/project", i.e.
Let's assume you've got a separate project, build-utils, with App.scala under src/main/scala. It's a regular sbt project layout with just the Scala code.
jacek:~/sandbox/so/generate-project-code
$ tree build-utils/
build-utils/
└── src
    └── main
        └── scala
            └── App.scala
You can test it out as a regular Scala application without messing with sbt. No additional setup is required (and it frees your mind from sbt, which can be beneficial at times - less setup always helps).
In another project - project-code - that uses App.scala as a base for its build, build.sbt is as follows:
project-code/build.sbt
lazy val generate = taskKey[Unit]("Generate my file")
generate := {
  my.App.main(Array())
}
Now the most important part - the wiring between the projects so that the App code is visible to the build of project-code:
project-code/project/build.sbt
lazy val buildUtils = RootProject(
uri("file:/Users/jacek/sandbox/so/generate-project-code/build-utils")
)
lazy val plugins = project in file(".") dependsOn buildUtils
With the build definitions in place, executing generate gives you the following:
jacek:~/sandbox/so/generate-project-code/project-code
$ sbt
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/sandbox/so/generate-project-code/project-code/project
[info] Updating {file:/Users/jacek/sandbox/so/generate-project-code/build-utils/}build-utils...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Updating {file:/Users/jacek/sandbox/so/generate-project-code/project-code/project/}plugins...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 1 Scala source to /Users/jacek/sandbox/so/generate-project-code/build-utils/target/scala-2.10/classes...
[info] Set current project to project-code (in build file:/Users/jacek/sandbox/so/generate-project-code/project-code/)
> generate
Hello from App.main
[success] Total time: 0 s, completed May 2, 2014 2:54:29 PM
I've changed the code of App to be:
> eval "cat ../build-utils/src/main/scala/App.scala"!
package my
object App {
  def main(args: Array[String]) {
    println("Hello from App.main")
  }
}
The project structure is as follows:
jacek:~/sandbox/so/generate-project-code/project-code
$ tree
.
├── build.sbt
└── project
    ├── build.properties
    └── build.sbt
Other changes aka goodies
I'd also propose some other changes to the code of the source generator:
Move the code out of the main method into a separate method that returns the generated files, and have main call it. That will make reusing the code in sourceGenerators easier (no unnecessary Array() to call it with, and the files are returned explicitly) - see the sketch after this list.
Use filter or map functions for convert (to add a more functional flavour).
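A rough sketch of that refactoring, reusing the (hypothetical, unimplemented) helper names from the question:
package my

import java.io.File

object App {
  // does the actual work and returns the generated files, so the same code
  // can be wired into sourceGenerators/resourceGenerators without ceremony
  def generate(): Seq[File] = {
    val source = readContentOfFile("mysource.txt")
    val result = convert(source)
    writeToFile(result, "mytarget.txt")
    Seq(new File("mytarget.txt"))
  }

  def main(args: Array[String]): Unit = {
    generate()
    ()
  }

  // helpers from the original question, left unimplemented here
  private def readContentOfFile(path: String): String = ???
  private def convert(source: String): String = ???
  private def writeToFile(content: String, path: String): Unit = ???
}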
The solution that @SethTisue proposes will work. Another approach is to make it a separate project that other projects can use. To do this, you will need to put your App object into a separate project and include it as a dependency in project/project, OR package it as an sbt plugin, ideally with this task definition included.
For an example of how to create a lib that is packaged as a plugin, take a look at snmp4s. The gen directory contains the code that does some code generation (analogous to your App code) and the sbt directory contains the sbt plugin wrapper for gen.

IDEA 12 Create Scala Play 2.0: Project Files Changed

PROBLEM:
When I'm working with a Scala Play 2.0.4 application from within IntelliJ IDEA 12, I'm getting a lot of red syntax highlighting errors that don't show up as errors when I run the application from within Play! at the command line.
QUESTION:
Are there others who are successfully running Scala Play 2.0 applications from within IntelliJ IDEA 12? If so, can you give me some suggestions as to how I might do this as well?
BACKGROUND INFO:
When I create a new project within IntelliJ, I set Play 2 home to ~/bin/opt/play-2.0.4. It creates the project, and then a dialog box appears titled "Project Files Changed" which says "Project file .../.idea/misc.xml has been changed externally. It is recommended to reload project for changes to take effect." If I ignore the prompt to reload the project and press Ctrl-Ins on app/, I get the following options:
Java Class
Scala Class
File
Package
I then create a package 'models' and a Scala file 'Models.scala' with the code shown below. 'Hello' is syntax-highlighted in red, and when I hover over the code, IDEA indicates that it can't find 'Hello' within the object MyDB:
package models
case class Hello(id: Int, name: String)
object MyDB {
  val hellos: List[Hello] = List(Hello(1, "Foo"), Hello(2, "Bar"))
}
I can now create app/models/Models.scala with the code above and there are no highlighting errors. However, when I go to Project Settings -> Modules -> Dependencies, it says that 'sbt-and-plugins' has a broken path, and "Module 'untitled': invalid item 'scala-2.9.1' in the dependencies list".
On the other hand, if I click 'OK' to reload the project for the changes to take effect, then when I press Ctrl-Ins on app/, I get the following options:
File
Directory
This second set of options also appears if I generate the IDEA project from within Play at the command line (as well as with with-sources), and also if I compile the project (either before or after I run idea).
As a further hint, the app directory is colored blue if I don't reload the project, but once I reload it, the app directory icon is brownish (like the others).
It is the same whether I use a play-2.0.4 that I downloaded myself or ask IntelliJ to download it when I create the new project. It is also the same whether I have the Playframework plugin together with Play 2.0 Support, or just Play 2.0 Support by itself.
For further information, I'm running Arch Linux, Oracle Java 1.7.0_09, scala-2.9.1.final, Play 2.0.4, IntelliJ 12.0 IU-123.72. Plugins: Scala (0.6.371), Play 2.0 Support (0.1.86), Playframework Support (both with and without this, I get the same error).
UPDATE:
Here's the stacktrace http://pastebin.com/uWEpv5Gd, which shows that IDEA throws an exception when creating the project, as follows:
[ 87553] ERROR - com.intellij.ide.IdeEventQueue - Error during dispatching of java.awt.event.InvocationEvent[INVOCATION_DEFAULT,runnable=com.intellij.openapi.progress.util.ProgressWindow$MyDialog$1#3b5a26d6,notifier=null,catchExceptions=false,when=1355073846201] on sun.awt.X11.XToolkit#1bd172ba
I usually do the following to get Play projects running in IntelliJ 12:
Create the project from the terminal with play new "projectname"
Go into the new folder "projectname"
Run play idea
Open that folder with IntelliJ and enable syntax highlighting
Hope this helps
The problem was not the JVM; the problem is with the Play 2.0 plugin. I tested it with various 1.7 and 1.6 JVMs and was still getting the same problem. I tried a fresh install of IntelliJ IDEA 12 by deleting the configuration directory, and it was still doing the same thing. When I create a new project with IDEA 12, here's what the directory structure of target looks like:
[ambantis@okosmos target]$ tree
.
├── scala-2.9.1
│   └── cache
│       └── update
│           ├── inputs
│           └── output
└── streams
    └── $global
        ├── ivy-configuration
        │   └── $global
        │       └── out
        ├── ivy-sbt
        │   └── $global
        │       └── out
        ├── project-descriptors
        │   └── $global
        │       └── out
        └── update
            └── $global
                └── out
13 directories, 6 files
What's missing are /target/scala-2.9.1/classes and /target/scala-2.9.1/classes_managed. The solution is as follows:
After the build process, if you see the "Project Files Changed" dialog box described above, do not click OK; press Escape instead. Then open the Play console and compile the application. At this point it will work. You will only see errors about unused jar files, but otherwise everything will work fine.