IDEA 12 Create Scala Play 2.0: Project Files Changed

PROBLEM:
When I'm working with a Scala Play 2.0.4 application from within IntelliJ IDEA 12, I'm getting a lot of red syntax highlighting errors that don't show up as errors when I run the application from within Play! at the command line.
QUESTION:
Are there others who are successfully running Scala Play 2.0 applications from within IntelliJ IDEA 12? If so, can you give me some suggestions as to how I might do this as well?
BACKGROUND INFO:
When I create a new project within IntelliJ, I set Play 2 home to ~/bin/opt/play-2.0.4. IntelliJ creates the project, and then a dialog box appears titled "Project Files Changed" which says: "Project file .../.idea/misc.xml has been changed externally. It is recommended to reload project for changes to take effect." If I ignore the prompt to reload the project and press Ctrl+Insert on app/, I get the following options:
Java Class
Scala Class
File
Package
I then create a package 'models' and a Scala file 'Models.scala' with the code shown below. 'Hello' is syntax-highlighted in red, and when I hover over it, IDEA indicates that it can't find 'Hello' within the object MyDB:
package models
case class Hello(id: Int, name: String)
object MyDB {
  val hellos: List[Hello] = List(Hello(1, "Foo"), Hello(2, "Bar"))
}
I can now create app/models/Models.scala with the code above and there are no highlighting errors. However, when I go to Project Settings -> Modules -> Dependencies, it says that 'sbt-and-plugins' has a broken path and "Module 'untitled': invalid item 'scala-2.9.1' in the dependencies list".
On the other hand, if I click 'OK' to reload the project for the changes to take effect, then when I press Ctrl+Insert on app/, I get the following options:
File
Directory
This second set of options also appears if I generate the IDEA project from within play at the command line (likewise with with-sources), and also if I compile the project (either before or after I run idea).
As a further hint, the app directory icon is colored blue if I don't reload the project, but once I reload it, the app directory icon is brownish (like the others).
It is the same whether I use play-2.0.4 that I downloaded myself or whether I ask IntelliJ to download it when I create the new project. It also is the same whether I have the playframework with Play 2.0 Support or just the Play 2.0 Support by itself.
For further information, I'm running Arch Linux, Oracle Java 1.7.0_09, scala-2.9.1.final, Play 2.0.4, IntelliJ 12.0 IU-123.72. Plugins: Scala (0.6.371), Play 2.0 Support (0.1.86), Playframework Support (both with and without this, I get the same error).
UPDATE:
Here's the stacktrace http://pastebin.com/uWEpv5Gd, which shows that IDEA throws an exception when creating the project, as follows:
[ 87553] ERROR - com.intellij.ide.IdeEventQueue - Error during dispatching of java.awt.event.InvocationEvent[INVOCATION_DEFAULT,runnable=com.intellij.openapi.progress.util.ProgressWindow$MyDialog$1@3b5a26d6,notifier=null,catchExceptions=false,when=1355073846201] on sun.awt.X11.XToolkit@1bd172ba

I usually do the following to get Play projects running in IntelliJ 12:
Create the project from the terminal with play new "projectname"
Go into the new folder "projectname"
Run play idea
Open that folder with IntelliJ and enable syntax highlighting
Hope this helps

The problem was not the JVM; the problem is with the Play 2.0 plugin. I tested it with various 1.7 and 1.6 JVMs and was still getting the same problem. I tried a fresh install of IntelliJ IDEA 12 by deleting the configuration directory, and it was doing the same thing. When I create a new project with IDEA 12, here's what the directory structure of target looks like:
[ambantis@okosmos target]$ tree
.
├── scala-2.9.1
│   └── cache
│       └── update
│           ├── inputs
│           └── output
└── streams
    └── $global
        ├── ivy-configuration
        │   └── $global
        │       └── out
        ├── ivy-sbt
        │   └── $global
        │       └── out
        ├── project-descriptors
        │   └── $global
        │       └── out
        └── update
            └── $global
                └── out
13 directories, 6 files
What's missing are /target/scala-2.9.1/classes and /target/scala-2.9.1/classes_managed. The solution is as follows:
After the build process, if you see a dialog box that says "Project Files Changed", do not click OK; press Escape instead. Then open the play console and compile the application. At this point it will work. The only remaining errors will be about unused jar files, but otherwise everything will work fine.


How to integrate Imebra v5 into a Swift command line project

I'm trying to import Imebra into a basic Swift 5 command line project using Xcode 12. I followed the official steps but failed. I can summarise the whole structure:
The structure of the project is just
./
├── main.swift
├── Data
│   └── DX_0.dcm
├── Imebra
│   ├── CMakeLists.txt
│   ├── docs
│   ├── examples
│   ├── library
│   ├── test
│   └── wrappers
└── build_imebra_macos
The main swift file is
// main.swift
import Foundation
print("Hello, Imebra!")
do {
    let pDataSet = try ImebraCodecFactory.load(fromFile: "PathToDicomFileFromExecutable")
    let pImage = try pDataSet.getImageApplyModalityTransform(0)
    print("The image width is", pImage.width)
} catch {
    print(error)
}
Following the documentation, I compile the library by going to the build_imebra_macos folder and running
build_imebra_macos % cmake -GXcode -DCMAKE_BUILD_TYPE=Release ..
build_imebra_macos % cmake --build . --config Release
The build is successful and the new folder Release has the dynamic library. Now, in the Xcode project of the CL Swift application, SwiftyImebra.xcodeproj, I followed the next instruction: "open the target Build Settings and under Swift Compiler/ObjectiveC Bridging Header specify the path to imebra_location/wrappers/objectivec/include/imebraobjc/imebra.h", with imebra_location changed to Imebra.
Then when I build I get the error
Undefined symbol: _OBJC_CLASS_$_ImebraCodecFactory
I'm new to Swift and I guess I need to specify somewhere in Xcode where the source or the dynamic library is. However, I am not sure about this either, as we have generated a C++ dynamic library, so it can only interact with Objective-C (?). I apologise if this is a basic question...
In addition, I'd like to learn how to use Imebra as a static library with swift.
The imebra dynamic library must be added to the project.
Drag the imebra.dylib (generated into your build folder build_imebra_macos) into your project.
Additionally, specify the folder containing the dylib in the "Library search path" in the compiler options.

How do I properly use go modules in vscode?

I have used vscode 1.41.1 on my Mac for a few months and it worked well until I started to use Go modules for dependency management. At the moment I am rewriting a simple tool and introducing packages for separate functionalities.
My code structure looks like this:
├── bmr.go -> package main & main(), uses below packages
├── check
│   ├── check.go -> package check
│   └── check_test.go
├── go.mod
├── go.sum
├── push
│   ├── push.go -> package push
│   └── push_test.go
└── s3objects
    ├── s3objects.go -> package s3objects
    └── s3objects_test.go
My go.mod file:
module github.com/some-org/business-metrics-restore
go 1.13
require (
    github.com/aws/aws-sdk-go v1.28.1
    github.com/go-redis/redis v6.15.6+incompatible
    github.com/sirupsen/logrus v1.4.2
    github.com/spf13/viper v1.6.1
    github.com/stretchr/testify v1.4.0
    golang.org/x/sys v0.0.0-20200113162924-86b910548bc1
)
All is fine when I invoke go test/run/build commands from the shell. But when I use 'Debug' -> 'Run Without Debugging' I get:
go: finding github.com/some-org/business-metrics-restore/push latest
go: finding github.com/some-org/business-metrics-restore latest
go: finding github.com/some-org/business-metrics-restore/check latest
go: finding github.com/some-org/business-metrics-restore/s3objects latest
build command-line-arguments: cannot load github.com/some-org/business-metrics-restore/check: module github.com/some-org/business-metrics-restore@latest found (v0.0.0-20191022092726-d1a52439dad8), but does not contain package github.com/some-org/business-metrics-restore/check
Process exiting with code: 1
My code is currently in a feature branch, and d1a52439dad8 is the first (init) and only commit on master. No code for the tool (including the 3 non-main packages mentioned above) is in the master branch.
The problem here is that, as you can see above, VS Code for some reason fetches state from master, and I cannot override this behaviour.
Can anyone help me?
Thanks!
Best Regards,
Rafal.
I realized that if the go.mod is not at the root of your project VSCode does not work properly. I have an AWS SAM project with the following structure:
├── Makefile
├── README.md
├── nic-update
│   ├── go.mod
│   ├── go.sum
│   ├── main.go
│   ├── main_test.go
│   └── r53service
│       └── r53.go
├── samconfig.toml
└── template.yaml
and the only way it works is by starting VSCode from the nic-update directory.
My go.mod has the following content:
require (
    github.com/aws/aws-lambda-go v1.13.3
    github.com/aws/aws-sdk-go v1.32.12
)
module github.com/jschwindt/ddns-route53/nic-update
go 1.14
I realized that if the go.mod is not at the root of your project VSCode does not work properly
That might now (Oct. 2020) be supported, as a consequence of gopls v0.5.1 and its experimental Multi-module workspace support feature from proposal 32394.
Even if you don't have multiple modules, a go.mod in a sub-folder (instead of the root folder of your project) will be handled better (if you activate the gopls.experimentalWorkspaceModule setting).
As noted by kayochin in the comments:
The setting should be "gopls": {"build.experimentalWorkspaceModule": true}
See the documentation "gopls / Settings / experimentalWorkspaceModule bool".
I have also had trouble with VS Code and modules. The current status of VS Code support for Go Modules is kept up to date here: https://github.com/golang/vscode-go#Set-up-your-environment
In that link they suggest ditching most of the existing extensions VS Code encourages you to install with Go and instead using the language server gopls with these directions:
Add the below in your settings to use it.
"go.useLanguageServer": true
Note: You will be prompted to install the latest stable version of gopls as and when the Go tools team tag a new version as stable.
You should also fix autoimporting:
Add the setting "go.formatTool": "goimports" and then use Go: Install/Update Tools to install/update goimports as it has recently added support for modules.
When you do these things, keep in mind that you'll also lose a couple of features:
Completion of unimported packages doesn't work
Find references and rename only work in a single package

What is the simplest way to create an importable file in Scala?

TLDR: What is the simplest way to create an importable file in Scala?
This is a beginner's question.
I've been learning Scala for a few weeks and now have some code that I would like to share between a couple of files/different projects. I have tried a lot of import structures but none of them worked. The requirements are:
File to be imported should reside in a totally different directory.
File to be imported should be importable by independent projects.
File to be imported is a single .scala file.
(Optional) file to be imported should contain defs, objects and case classes.
Example:
File to be imported location: /some/path/to_be_imported.scala.
File using project (1) location: /abc/def/will_import01.scala.
File using project (2) location: /xyz/rst/will_import02.scala.
I'm not trying to create a package or distribute it.
To show what I'm looking for, here is how I would address this in the programming language I already know.
Since I'm versed in Python, I'll give the expected version of the answer as if this problem referred to Python:
In that case you could:
Put your file in the same directory as your executed file, then just run: python3 ./your_file.py. For instance:
➜ another_path|$ python3 ./main_module/main_file.py
1
self printing
➜ another_path|$ tree .
=======================================================================
.
└── main_module
    ├── main_file.py
    ├── __pycache__
    │   └── sample_file_to_be_imported.cpython-36.pyc
    └── sample_file_to_be_imported.py
Notice that they are in the exact same directory (this contradicts point 2 above; nevertheless, it solves the problem).
Add the directory of your file to the PYTHONPATH environment variable, then run your module (best answer):
➜ random_path|$ PYTHONPATH=$PYTHONPATH:./sample_module python3 ./main_module/main_file.py
1
self printing
=======================================================================
➜ random_path|$ tree .
.
├── main_module
│   └── main_file.py
└── sample_module
    ├── __pycache__
    │   └── sample_file_to_be_imported.cpython-36.pyc
    └── sample_file_to_be_imported.py
3 directories, 3 files
Content of the files:
➜ random_path|$ cat ./main_module/main_file.py
from sample_file_to_be_imported import func1, Class01
print(func1())
x = Class01()
x.cprint()
=======================================================================
➜ random_path|$ cat ./sample_module/sample_file_to_be_imported.py
def func1():
    return 1

class Class01():
    def cprint(self):
        print('self printing')
Edit 01: @felipe-rubin's answer does not work:
$ scala -cp /tmp/scala_stack_exchange/ myprogram.scala
/tmp/scala_stack_exchange/path01/myprogram.scala:3: error: not found: value Timer
val x = Timer(1)
^
one error found
=======================================================================
➜ path01 tree /tmp/scala_stack_exchange
/tmp/scala_stack_exchange
├── anotherpath
│   ├── Timer.class
│   └── timer.scala
└── path01
    └── myprogram.scala
2 directories, 3 files
=======================================================================
$ cat /tmp/scala_stack_exchange/anotherpath/timer.scala
class Timer(a: Int) {
  def time(): Unit = println("time this")
}
=======================================================================
$ cat /tmp/scala_stack_exchange/path01/myprogram.scala
import anotherpath.Timer
val x = Timer(1)
x.time()
The simplest way would be to compile a .scala file with scalac:
Linux/OSX: scalac mypackage/Example.scala
Windows: scalac mypackage\Example.scala
The above should generate a .class file (or more).
Assuming the file contains a class called Example, you can import it somewhere else like this:
import mypackage.Example
When compiling another file which does the above import, you will need to have 'mypackage' in the classpath. You can add directories to the classpath when calling scalac by using the -cp flag like:
Linux/OSX: scalac -cp .:path/to/folder/where/mypackage/is/located AnotherExample.scala
Windows: scalac -cp .;path\to\folder\where\mypackage\is\located AnotherExample.scala
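To make this concrete, here is a minimal sketch of the two files involved (the class name Example and its greet method are assumptions used for illustration, not part of the original answer):

// path/to/folder/where/mypackage/is/located/mypackage/Example.scala
package mypackage

class Example {
  def greet(): Unit = println("Hello from Example")   // any member will do; greet is only an example
}

// AnotherExample.scala, compiled with the -cp flag shown above
import mypackage.Example

object AnotherExample {
  def main(args: Array[String]): Unit = new Example().greet()
}

After compiling both, running the result with the same -cp value (scala -cp .:path/to/folder/where/mypackage/is/located AnotherExample on Linux/OSX) should print the greeting.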
Doing this for bigger projects gets complicated, in which case you might resort to a build tool (e.g. SBT) or an IDE (e.g. IntelliJ IDEA) to do the complicated work for you.
Other notes:
If you don't have scalac, you can get it from the Scala website (the 'download binaries' option)
the -cp flag stands for "classpath". There is also a -classpath flag which does the same thing
Welcome to Scala :)
I finally got this working. Thanks for the valuable input from the other answers.
I have diversified the names of every path, file and object to be as general as possible. This probably does not follow the guidelines of the Scala community, but it is the most explicit, illustrative help I could find. Project layout:
File Layout
$ tree /tmp/scala_stack_exchange
/tmp/scala_stack_exchange
├── anotherpath
│   ├── file_defines_class.scala
│   └── some_package_name
│       ├── MyObj.class
│       └── MyObj$.class
└── path01
    └── myprogram.scala
3 directories, 4 files
Where I want to run myprogram.scala which should import classes defined in file_defines_class.scala.
Preparation
Compile the file you want to be imported by other modules:
cd /tmp/scala_stack_exchange/anotherpath && scalac ./file_defines_class.scala
Execution
cd /tmp/scala_stack_exchange/path01 && scala -cp /tmp/scala_stack_exchange/anotherpath/ ./myprogram.scala
Results
myobj time
Contents of the files
// /tmp/scala_stack_exchange/path01/myprogram.scala
import some_package_name.MyObj
val x = new MyObj(10)
x.time()
// /tmp/scala_stack_exchange/anotherpath/file_defines_class.scala
package some_package_name
object MyObj
case class MyObj(i: Int) {
  def time(): Unit = println("myobj time")
}
Feels like magic. However this whole process is rather cumbersome :/

Scala application structure

I am learning Scala now and I want to write some silly little app, like a console Twitter client or whatever. The question is how to structure the application on disk and logically. I know Python, and there I would just create some files with classes and then import them in the main module, like import util.ssh or from tweets import Retweet (strongly hoping you don't mind those names, they are just for reference). But how should I do this stuff using Scala? Also, I don't have much experience with the JVM and Java, so I am a complete newbie here.
I'm going to disagree with Jens, here, though not all that much.
Project Layout
My own suggestion is that you model your efforts on Maven's standard directory layout.
Previous versions of SBT (before SBT 0.9.x) would create it automatically for you:
dcs@ayanami:~$ mkdir myproject
dcs@ayanami:~$ cd myproject
dcs@ayanami:~/myproject$ sbt
Project does not exist, create new project? (y/N/s) y
Name: myproject
Organization: org.dcsobral
Version [1.0]:
Scala version [2.7.7]: 2.8.1
sbt version [0.7.4]:
Getting Scala 2.7.7 ...
:: retrieving :: org.scala-tools.sbt#boot-scala
confs: [default]
2 artifacts copied, 0 already retrieved (9911kB/134ms)
Getting org.scala-tools.sbt sbt_2.7.7 0.7.4 ...
:: retrieving :: org.scala-tools.sbt#boot-app
confs: [default]
15 artifacts copied, 0 already retrieved (4096kB/91ms)
[success] Successfully initialized directory structure.
Getting Scala 2.8.1 ...
:: retrieving :: org.scala-tools.sbt#boot-scala
confs: [default]
2 artifacts copied, 0 already retrieved (15118kB/160ms)
[info] Building project myproject 1.0 against Scala 2.8.1
[info] using sbt.DefaultProject with sbt 0.7.4 and Scala 2.7.7
> quit
[info]
[info] Total session time: 8 s, completed May 6, 2011 12:31:43 PM
[success] Build completed successfully.
dcs@ayanami:~/myproject$ find . -type d -print
.
./project
./project/boot
./project/boot/scala-2.7.7
./project/boot/scala-2.7.7/lib
./project/boot/scala-2.7.7/org.scala-tools.sbt
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-bin_2.7.7.final
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-src
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-bin_2.8.0.RC2
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/xsbti
./project/boot/scala-2.8.1
./project/boot/scala-2.8.1/lib
./target
./lib
./src
./src/main
./src/main/resources
./src/main/scala
./src/test
./src/test/resources
./src/test/scala
So you'll put your source files inside myproject/src/main/scala, for the main program, or myproject/src/test/scala, for the tests.
Since that doesn't work anymore, there are some alternatives:
giter8 and sbt.g8
Install giter8, clone ymasory's sbt.g8 template, adapt it to your needs, and use that. See below, for example, a use of ymasory's unmodified sbt.g8 template. I think this is one of the best alternatives for starting new projects when you have a good notion of what you want in all your projects.
$ g8 ymasory/sbt
project_license_url [http://www.gnu.org/licenses/gpl-3.0.txt]:
name [myproj]:
project_group_id [com.example]:
developer_email [john.doe@example.com]:
developer_full_name [John Doe]:
project_license_name [GPLv3]:
github_username [johndoe]:
Template applied in ./myproj
$ tree myproj
myproj
├── build.sbt
├── LICENSE
├── project
│   ├── build.properties
│   ├── build.scala
│   └── plugins.sbt
├── README.md
├── sbt
└── src
    └── main
        └── scala
            └── Main.scala
4 directories, 8 files
np plugin
Use softprops's np plugin for sbt. In the example below, the plugin is configured in ~/.sbt/plugins/build.sbt, and its settings in ~/.sbt/np.sbt, with the standard sbt script. If you use paulp's sbt-extras, you'd need to install these things under the right Scala version subdirectory in ~/.sbt, as it uses separate configurations for each Scala version. In practice, this is the one I use most often.
$ mkdir myproj; cd myproj
$ sbt 'np name:myproj org:com.example'
[info] Loading global plugins from /home/dcsobral/.sbt/plugins
[warn] Multiple resolvers having different access mechanism configured with same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
[info] Set current project to default-c642a2 (in build file:/home/dcsobral/myproj/)
[info] Generated build file
[info] Generated source directories
[success] Total time: 0 s, completed Apr 12, 2013 12:08:31 PM
$ tree
.
├── build.sbt
├── src
│   ├── main
│   │   ├── resources
│   │   └── scala
│   └── test
│       ├── resources
│       └── scala
└── target
    └── streams
        └── compile
            └── np
                └── $global
                    └── out
12 directories, 2 files
mkdir
You could simply create it with mkdir:
$ mkdir -p myproj/src/{main,test}/{resource,scala,java}
$ tree myproj
myproj
└── src
    ├── main
    │   ├── java
    │   ├── resource
    │   └── scala
    └── test
        ├── java
        ├── resource
        └── scala
9 directories, 0 files
Source Layout
Now, about the source layout. Jens recommends following Java style. Well, the Java directory layout is a requirement -- in Java. Scala does not have the same requirement, so you have the option of following it or not.
If you do follow it, assuming the base package is org.dcsobral.myproject, then source code for that package would be put inside myproject/src/main/scala/org/dcsobral/myproject/, and so on for sub-packages.
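For example, a minimal sketch (User is just an assumed class name):

// myproject/src/main/scala/org/dcsobral/myproject/model/User.scala
package org.dcsobral.myproject.model

// the file's path mirrors its package declaration
case class User(name: String)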
Two common ways of diverging from that standard are:
Omitting the base package directory, and only creating subdirectories for the sub-packages.
For instance, let's say I have the packages org.dcsobral.myproject.model, org.dcsobral.myproject.view and org.dcsobral.myproject.controller, then the directories would be myproject/src/main/scala/model, myproject/src/main/scala/view and myproject/src/main/scala/controller.
Putting everything together. In this case, all source files would be inside myproject/src/main/scala. This is good enough for small projects. In fact, if you have no sub-projects, it is the same as above.
And this deals with directory layout.
File Names
Next, let's talk about files. In Java, the practice is to separate each class into its own file, whose name follows the name of the class. This is good enough in Scala too, but you have to pay attention to some exceptions.
First, Scala has object, which Java does not have. A class and object of the same name are considered companions, which has some practical implications, but only if they are in the same file. So, place companion classes and objects in the same file.
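For example, a minimal sketch (Account and its apply factory are assumed names, not from the original answer):

// Account.scala -- a class and its companion object in one file
class Account(private val id: Int) {
  def show(): Unit = println(s"Account($id)")
}

object Account {
  // companions can access each other's private members
  def apply(id: Int): Account = new Account(id)
}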
Second, Scala has a concept known as sealed class (or trait), which limits subclasses (or implementing objects) to those declared in the same file. This is mostly done to create algebraic data types with pattern matching with exhaustiveness check. For example:
sealed abstract class Tree
case class Node(left: Tree, right: Tree) extends Tree
case class Leaf(n: Int) extends Tree
scala> def isLeaf(t: Tree) = t match {
     |   case Leaf(n: Int) => println("Leaf "+n)
     | }
<console>:11: warning: match is not exhaustive!
missing combination Node
def isLeaf(t: Tree) = t match {
^
isLeaf: (t: Tree)Unit
If Tree was not sealed, then anyone could extend it, making it impossible for the compiler to know whether the match was exhaustive or not. Anyway, sealed classes go together in the same file.
Another naming convention is to name the files containing a package object (for that package) package.scala.
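A minimal sketch of such a file (the alias and helper inside are assumptions, just to show the shape):

// src/main/scala/org/dcsobral/myproject/model/package.scala
package org.dcsobral.myproject

// members defined here are visible throughout org.dcsobral.myproject.model
package object model {
  type UserId = Int
  def defaultName: String = "unnamed"
}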
Importing Stuff
The most basic rule is that stuff in the same package sees each other. So, put everything in the same package, and you don't need to concern yourself with what sees what.
But Scala also has relative references and imports. This requires a bit of an explanation. Say I have the following declarations at the top of my file:
package org.dcsobral.myproject
package model
Everything that follows will be put in the package org.dcsobral.myproject.model. Also, not only will everything inside that package be visible, but everything inside org.dcsobral.myproject will be visible as well. If I had instead declared just package org.dcsobral.myproject.model, then org.dcsobral.myproject would not be visible.
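A small sketch of what this buys you (Settings and Repository are assumed names):

// org/dcsobral/myproject/Settings.scala
package org.dcsobral.myproject

case class Settings(debug: Boolean)

// org/dcsobral/myproject/model/Repository.scala
package org.dcsobral.myproject
package model

// Settings resolves without an import because the enclosing package
// org.dcsobral.myproject is also in scope here
class Repository(settings: Settings)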
The rule is pretty simple, but it can confuse people a bit at first. The reason for this rule is relative imports. Consider now the following statement in that file:
import view._
This import may be relative -- all imports can be relative unless you prefix them with _root_. It can refer to the following packages: org.dcsobral.myproject.model.view, org.dcsobral.myproject.view, scala.view and java.lang.view. It could also refer to an object named view inside scala.Predef. Or it could be an absolute import referring to a package named view.
If more than one such package exists, it will pick one according to some precedence rules. If you needed to import something else, you can turn the import into an absolute one.
This import makes everything inside the view package (wherever it is) visible in its scope. If it happens inside a class, an object or a def, then the visibility will be restricted to that scope. It imports everything because of the ._, which is a wildcard.
An alternative might look like this:
package org.dcsobral.myproject.model
import org.dcsobral.myproject.view
import org.dcsobral.myproject.controller
In that case, the packages view and controller would be visible, but you'd have to name them explicitly when using them:
def post(view: view.User): Node =
Or you could use further relative imports:
import view.User
The import statement also enables you to rename stuff, or to import everything but a few things. Refer to the relevant documentation for more details.
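For example (a minimal sketch using the standard java.util package):

import java.util.{List => JList}      // rename: java.util.List can now be referred to as JList
import java.util.{HashMap => _, _}    // hide: import everything from java.util except HashMap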
So, I hope this answers all your questions.
Scala supports and encourages the package structure of Java/JVM, and pretty much the same recommendations apply:
mirror the package structure in the directory structure. This isn't necessary in Scala, but it helps to find your way around
use your inverse domain as a package prefix. For me that means everything starts with de.schauderhaft. Use something that makes sense for you if you don't have your own domain
only put multiple top-level classes in one file if they are small and closely related. Otherwise stick with one class/object per file. Exceptions: companion objects go in the same file as the class, and implementations of a sealed class go into the same file.
if your app grows you might want to have something like layers and modules and mirror those in the package structure, so you might have a package structure like this: <domain>.<module>.<layer>.<optional subpackage> (see the sketch after this list)
don't have cyclic dependencies on a package, module or layer level
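As a quick sketch of that layering convention (the module and layer names here are made up, not part of the answer):

// src/main/scala/de/schauderhaft/billing/service/InvoiceService.scala
package de.schauderhaft.billing.service

// <domain> = de.schauderhaft, <module> = billing, <layer> = service
class InvoiceService {
  def issue(amount: BigDecimal): Unit = println(s"Issuing invoice over $amount")
}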