Android HAL .so not built - android-source

I was following this guide to learn the HAL layer and framework layer in AOSP. I've managed to run through the whole process, but there's one small issue: when I build the whole ROM/Android, hello.default.so won't be built and doesn't show up in any of the following directories:
${output}/system/lib/hw
${output}/system/lib64/hw
${output}/vendor/lib/hw
${output}/vendor/lib64/hw
Only by manually executing mmm hardware/libhardware/modules/hello/ can I get hello.default.so.
I have remembered to append the module to the PRODUCT_PACKAGES macro as follows. The packages/modules vim and hello-lkm-client have been successfully integrated, but not the hello.default module.
PRODUCT_PACKAGES += \
vim
PRODUCT_PACKAGES += \
hello-lkm-client
PRODUCT_PACKAGES += \
hello.default
Here's the Android.mk file for the hello HAL
include $(CLEAR_VARS)
LOCAL_MODULE_RELATIVE_PATH := hw
LOCAL_PROPRIETARY_MODULE := true
LOCAL_SHARED_LIBRARIES := liblog
#############
# I'm following $hw/modules/gralloc/Android.mk
#############
LOCAL_MODULE_TAGS := optional
LOCAL_PRELINK_MODULE := false
# LOCAL_C_INCLUDES := hardware/libhardware
LOCAL_SRC_FILES := hello.c
LOCAL_HEADER_LIBRARIES := libhardware_headers
LOCAL_MODULE := hello.default
# LOCAL_LDLIBS := -L$(SYSROOT)/usr/lib -llog
include $(BUILD_SHARED_LIBRARY)
Here's the complete hello HAL module.

OK, found it. I have to add the hello module to the hardware_modules list in modules/Android.mk, as in the following diff.
chang@ryzen:~/bulk2/rockpi4-atv9-chang/hardware/libhardware$ git diff
diff --git a/modules/Android.mk b/modules/Android.mk
index a430a650..7bbeaeb9 100644
--- a/modules/Android.mk
+++ b/modules/Android.mk
@@ -2,5 +2,6 @@ hardware_modules := \
camera \
gralloc \
sensors \
- hw_output
+ hw_output \
+ hello
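With that in place, a normal top-level build should now produce the library. A quick way to verify, assuming a standard envsetup/lunch environment (the product name is a placeholder):
source build/envsetup.sh && lunch <your_product>
make -j$(nproc)                     # full build; mmm hardware/libhardware/modules/hello/ also still works
find $OUT -name hello.default.so    # should appear under vendor/lib*/hw/ for a proprietary module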

Yocto - applying a bbappend file to a recipe (from github)

I'm working with the meta-atmel layer in Yocto to create an image for a SAMA5D4 board.
I've created a custom layer & would like to patch a file (specifically https://github.com/linux4sam/egt/blob/master/src/app.cpp) with a diff I created:
index 869b1e2..c86ad1a 100644
--- a/app.cpp
+++ b/app.cpp.modified
@@ -342,8 +342,9 @@ void Application::setup_inputs()
}
}
+// Modify to force use of tslib
#ifdef HAVE_LIBINPUT
- m_inputs.push_back(std::make_unique<detail::InputLibInput>(*this));
+// m_inputs.push_back(std::make_unique<detail::InputLibInput>(*this));
#endif
}
I recreated the directory structure in my custom layer to match the location of the file I'd like to alter:
yocto/meta-atmel/recipes-graphics/libegt/libegt_1.2.bb
yocto/meta-custom2/recipes-graphics/libegt/libegt_%.bbappend
My bbappend file is:
# Modify https://github.com/linux4sam/egt/src/app.cpp
# Issue with file path
SRC_URI += "file:0001-disable-libinput.patch"
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
PACKAGE_ARCH = "${MACHINE_ARCH}"
How do I correctly apply my patch file to the file I'd like to modify?
Many thanks for looking!
Well, it looks like I may have solved the issue.
I moved my diff file to the directory:
yocto/meta-custom2/recipes-graphics/libegt/files
& corrected some errors:
Diff should be:
diff --git a/src/app.cpp b/src/app.cpp
index 869b1e2..c86ad1a 100644
--- a/src/app.cpp
+++ b/src/app.cpp
@@ -342,8 +342,9 @@ void Application::setup_inputs()
}
}
+// Modify to force use of tslib
#ifdef HAVE_LIBINPUT
- m_inputs.push_back(std::make_unique<detail::InputLibInput>(*this));
+// m_inputs.push_back(std::make_unique<detail::InputLibInput>(*this));
#endif
}
where '/src/app.cpp' refers to the location of the file that needs to be patched, relative to the repository root (i.e. as if one had carried out a git clone). Next, my bbappend should have been:
# Modify https://github.com/linux4sam/egt/src/app.cpp
SRC_URI += "file://0001-disable-libinput.patch"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
PACKAGE_ARCH = "${MACHINE_ARCH}"
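To confirm the bbappend is picked up and the patch applies cleanly, something along these lines should work (assuming the recipe name is libegt, as in the layer paths above):
bitbake-layers show-appends | grep -A1 libegt   # the .bbappend should be listed
bitbake -c cleansstate libegt
bitbake -c patch libegt                         # then inspect app.cpp in the recipe's WORKDIR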
Hope this helps others & thanks to the teams working on Yocto/OE!

Create a recipe for a QT5 application

I have been struggling for too long, so I need help :)
I made a big Qt 5.8 application, and usually when I want to compile it on my PC I just have to run the following command: qmake -qt=5.9 -spec linux-arm-gnueabihf-g++ -config configuration_name.
With this command, I can cross-compile my source code for the armhf architecture using the linux-arm-gnueabihf-g++ toolchain.
But now, since I can easily create a Yocto image for my target (Raspberry Pi), I want to make a recipe in order to compile my Qt software and put it into my image.
For now, I have managed to perform the following tasks in my recipe without errors:
do_fetch -> Yocto fetches the source from the git repo (OK)
do_unpack -> OK
After that I want to run a qmake command in order to generate my Makefile, but here is my problem :/
First, I included the qmake5 class in my recipe using
require recipes-qt/qt5/qt5.inc
Then I tried a lot of things...
Writing "qmake" in the do_configure task doesn't work. The last thing I tried was '${OE_QMAKE_QMAKE} ${S}/my_software.pro -config my_config', but I still get the same error:
Could not find qmake spec 'linux-oe-g++'
I don't know what to do, and I can't find any recipe example doing what I want to do.
If somebody already experienced this issue or have experience compiling qt5 software with a yocto recipe I would like your help :)
my recipe:
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = " "
USERNAME = "****"
PASSWORD = "*****"
S = "${WORKDIR}/git"
require recipes-qt/qt5/qt5.inc
do_fetch(){
git clone http://${USERNAME}:${PASSWORD}@gitlab.....
}
do_configure () {
${OE_QMAKE_QMAKE} ${S}/my_software.pro -config my_config
}
Thanks
Short answer
Add "inherit qmake5" and let Yocto take care of it :)
Long answer
Here's an example of how I added a Qt project. It is not using git; it is using local files. However, to get one step further, I suggest:
Use my way as a test: copy the Qt project to /yocto/local_sources/Myproject/
and make this known to Yocto by using FILESEXTRAPATHS_prepend_ (as shown below).
If this works in your environment, adapt it to your needs (e.g. build from git instead of local_sources, which is of course better; a rough sketch of such a git-based variant follows the example below).
By the way, this way is tested and also works well for later remote debugging with Qt Creator and the Yocto SDK. Stick to Yocto, it's worth it in the end.
Here's my .bb file:
#
# Build QT xyz application
#
SUMMARY = "..."
SECTION = "Mysection"
LICENSE = "CLOSED"
#Add whatever you need here
DEPENDS += "qtbase qtmultimedia qtsvg"
#Add here your .pro and all other relevant files (if you use git later this step will be less tedious ...)
SRC_URI += "file://Myproject.pro"
SRC_URI += "file://*.h"
SRC_URI += "file://*.cpp"
SRC_URI += "file://subdir1/*.h"
SRC_URI += "file://subdir1/*.cpp"
SRC_URI += "file://subdir2/*.h"
SRC_URI += "file://subdir2/*.cpp"
SRC_URI += "file://subdir2/subdir3/*.h"
SRC_URI += "file://subdir2/subdir3/*.cpp"
#If you need autostart:
#SRC_URI += "file://myproject.service"
#Register for root file system aggregation
FILES_${PN} += "${bindir}/Myproject"
#RBE todo: both needed ?
FILESEXTRAPATHS_prepend_${PN} := "/yocto/local_sources/Myproject/Src:"
FILESEXTRAPATHS_prepend := "/yocto/local_sources/Myproject/Src:"
S = "${WORKDIR}"
#If you want to auto-start this add:
#SYSTEMD_SERVICE_${PN} = "Myproject.service"
FILES_${PN} = "${datadir} ${bindir} ${systemd_unitdir}"
FILES_${PN}-dbg = "${datadir}/${PN}/.debug"
FILES_${PN}-dev = "/usr/src"
#Check what's needed in your case ...
RDEPENDS_${PN} += "\
qtmultimedia-qmlplugins \
qtvirtualkeyboard \
qtquickcontrols2-qmlplugins \
gstreamer1.0-libav \
gstreamer1.0-plugins-base-audioconvert \
gstreamer1.0-plugins-base-audioresample \
gstreamer1.0-plugins-base-playback \
gstreamer1.0-plugins-base-typefindfunctions \
gstreamer1.0-plugins-base-videoconvert \
gstreamer1.0-plugins-base-videoscale \
gstreamer1.0-plugins-base-volume \
gstreamer1.0-plugins-base-vorbis \
gstreamer1.0-plugins-good-autodetect \
gstreamer1.0-plugins-good-matroska \
gstreamer1.0-plugins-good-ossaudio \
gstreamer1.0-plugins-good-videofilter \
"
do_install_append () {
# Install application
install -d ${D}${bindir}
install -m 0755 Myproject ${D}${bindir}/
# Uncomment if you want to autostart this application as a service
#install -Dm 0644 ${WORKDIR}/myproject.service ${D}${systemd_system_unitdir}/myproject.service
# Install resource files (example)
#install -d ${D}${datadir}/${PN}/Images
#for f in ${S}/Images/*; do \
# install -Dm 0644 $f ${D}${datadir}/${PN}/Images/
#done
}
#Also inherit "systemd" if you need autostart
inherit qmake5
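For the original git-based question, an untested sketch of the same idea might look roughly like this (repository URL, branch and binary name are placeholders, and the install step may need adjusting to your .pro file):
SUMMARY = "My Qt application built from git"
LICENSE = "CLOSED"
DEPENDS += "qtbase"
SRC_URI = "git://gitlab.example.com/me/my_software.git;protocol=https;branch=master"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"
#qmake5 provides do_configure/do_compile with the correct linux-oe-g++ spec,
#so no hand-written qmake call is needed
inherit qmake5
do_install_append () {
    install -d ${D}${bindir}
    install -m 0755 my_software ${D}${bindir}/
}
FILES_${PN} += "${bindir}/my_software"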

Caffe+GPU+Opencv3.1+Python3.5+Anaconda: fatal error: Python.h: No such file or directory

Briefly, I want to use Caffe these days for my project.
My OS is Ubuntu 14.04, with OpenCV 3.1 + Python 3.5 + Anaconda + GPU.
I have already passed all of:
make all
make pycaffe
make test
make runtest
However, when I try to make pycaffe, it cannot pass:
Python.h: No such file or directory
Here is my 'Makefile.config', and I am sure 'Python.h' is already in the path, which makes me quite confused.
USE_CUDNN := 1
OPENCV_VERSION := 3
ANACONDA_HOME := $(HOME)/anaconda3
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
$(ANACONDA_HOME)/include/python3.5m \
$(ANACONDA_HOME)/lib/python3.5/site-packages/numpy/core/include \
PYTHON_LIB := $(ANACONDA_HOME)/lib
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
USE_PKG_CONFIG := 1
PYTHON_LIBRARIES := boost_python3 python3.5m
PYTHON_INCLUDE := /usr/include/python3.5m \
/usr/lib/python3.5/dist-packages/numpy/core/include
Because I use Python 3.5, I uncommented the following:
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
PYTHON_LIB := /usr/lib
I would really appreciate it if someone could help.
You have two definitions for PYTHON_INCLUDE: you need to decide if you go for the "python3" flavor, or the "anaconda" flavor...
Where is your Python.h file, anyway? Try this in a shell:
find / -name "Python.h" -type f
and see where it actually is. Then pick the correct settings for PYTHON_INCLUDE in your Makefile.config.
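For example, if find only turns up Python.h under Anaconda, one consistent choice (reusing the values from the question and dropping the second, /usr/include based PYTHON_INCLUDE block entirely) would be:
ANACONDA_HOME := $(HOME)/anaconda3
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
$(ANACONDA_HOME)/include/python3.5m \
$(ANACONDA_HOME)/lib/python3.5/site-packages/numpy/core/include
PYTHON_LIBRARIES := boost_python3 python3.5m
PYTHON_LIB := $(ANACONDA_HOME)/lib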
I spent (wasted) almost one week configuring Caffe on Ubuntu 14.04; the reason it was so time consuming is that I am using the newest versions of OpenCV, Python and Anaconda. Here I want to share my experience.
Makefile.config
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
-gencode arch=compute_20,code=sm_21 \
-gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=sm_50 \
-gencode arch=compute_50,code=compute_50
# BLAS choice: atlas for ATLAS (default)
BLAS := atlas
# We need to be able to find Python.h and numpy/arrayobject.h.
ANACONDA_HOME := $(HOME)/anaconda3
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
$(ANACONDA_HOME)/include/python3.5m \
$(ANACONDA_HOME)/lib/python3.5/site-packages/numpy/core/include \
# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.5m
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := $(ANACONDA_HOME)/lib
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
Q ?= #
~/.bashrc
#Caffemake
export PYTHONPATH=~/caffe/python/:$PYTHONPATH
#Opencv
export LD_LIBRARY_PATH=/home/kaku/anaconda3/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH="/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
Notes:
1. Libraries that must be installed:
libboost-all-dev (although some tutorials say you must install libboost1.55-all-dev)
protobuf-cpp-3.0.0-beta-2.zip or later
protobuf-python-3.0.0-beta-2.zip or later
http://blog.csdn.net/lien0906/article/details/51784191
https://github.com/google/protobuf/issues/1276
Other Debug: Details in my own blog.
After facing the same issue, and being on a Gentoo system, I tried something else. I have two Python instances installed at the same time via Gentoo slots:
ares ~ # eselect python list
Available Python interpreters, in order of preference:
[1] python3.4
[2] python2.7
My default was 2.7, so I tried to switch to 3.4. The problem is that it required some changes in two files.
I note that similar changes with 2.7 simply didn't work; the path was correct, but something was broken underneath...
This is the Makefile.config I changed to work with Python 3 (3.4):
PYTHON_LIBRARIES := boost_python3 python3.4m
PYTHON_INCLUDE := /usr/include/python3.4m \
/usr/lib64/python3.4/site-packages/numpy/core/include
Still, when you change just this, it won't work, because CMake still points to 2.7. I checked by doing:
mkdir build; cd build; cmake ..
And the output was:
-- Python:
-- Interpreter : /usr/bin/python2.7 (ver. 2.7.12)
-- Libraries : /usr/lib64/libpython2.7.so (ver 2.7.12)
-- NumPy : /usr/lib64/python2.7/site-packages/numpy/core/include (ver 1.12.1)
So I changed this line in the CMakeLists.txt file:
set(python_version "2" CACHE STRING "Specify which Python version to use")
To this (changing the value 2 to 3):
set(python_version "3" CACHE STRING "Specify which Python version to use")
And ran cmake again (after a cleanup) and finally got:
-- Python:
-- Interpreter : /usr/bin/python3 (ver. 3.4.5)
-- Libraries : /usr/lib64/libpython3.4m.so (ver 3.4.5)
-- NumPy : /usr/lib64/python3.4/site-packages/numpy/core/include (ver 1.12.1)
Now the make -j8 command finishes without issues. I note that I used the multithreaded compile option (-j8); some forums suggest going with only -j1 (single thread), but that was not necessary in my case.
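Since python_version is declared as a CMake cache variable, the same switch can presumably also be made from the command line, without editing CMakeLists.txt:
mkdir build && cd build
cmake -Dpython_version=3 ..   # overrides the cached default of "2"
make -j8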
I have the following in my Makefile.config:
PYTHON_LIB := /usr/lib
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
And the following in my ~/.bashrc:
export PYTHONPATH=$HOME/caffe/python
export CAFFE_ROOT=$HOME/caffe
You have to run the following after cd $CAFFE_ROOT:
make all
make pycaffe
make test
make runtest
My setup is on CentOS and for Python 2.7, but the idea should be similar.
[jalal@ivcgpu1 caffe]$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.4.1708 (Core)
Release: 7.4.1708
Codename: Core
[jalal@ivcgpu1 caffe]$ uname -a
Linux ivcgpu1.bu.edu 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Add custom.xml file to AOSP etc folder

I wanted to add a resource/XML file to the etc folder in AOSP. I would like to have my resource file available just like the platform.xml file.
So I basically added my XML file to the AOSP/frameworks/base/data/etc folder and correspondingly added the following lines to the makefile Android.mk:
########################
include $(CLEAR_VARS)
LOCAL_MODULE := custom.xml
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := ETC
# This will install the file in /system/etc/permissions
#
LOCAL_MODULE_PATH := $(TARGET_OUT_ETC)/permissions
LOCAL_SRC_FILES := $(LOCAL_MODULE)
include $(BUILD_PREBUILT)
EDIT
With the above added, I was not able to see my file in the /system/etc/permissions folder. Am I missing something?
It looks like your change is device specific and not framework related. In that case you probably want to include your file in aosp / device / ... / model /.
You can check how this is done by looking at the device makefile for the Samsung tuna:
https://android.googlesource.com/device/samsung/tuna/+/master/device.mk
The PRODUCT_COPY_FILES variable lists "origin_file:destination_file" pairs:
PRODUCT_COPY_FILES += \
$(LOCAL_KERNEL):kernel \
device/samsung/tuna/init.tuna.rc:root/init.tuna.rc \
device/samsung/tuna/init.tuna.usb.rc:root/init.tuna.usb.rc \
device/samsung/tuna/fstab.tuna:root/fstab.tuna \
device/samsung/tuna/ueventd.tuna.rc:root/ueventd.tuna.rc \
device/samsung/tuna/media_profiles.xml:system/etc/media_profiles.xml \
device/samsung/tuna/media_codecs.xml:system/etc/media_codecs.xml \
device/samsung/tuna/gps.conf:system/etc/gps.conf
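Applied to the file from the question, a hypothetical entry in your device makefile (source path taken from the question; the destination mirrors the permissions directory targeted by the original Android.mk) would be:
PRODUCT_COPY_FILES += \
    frameworks/base/data/etc/custom.xml:system/etc/permissions/custom.xml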

How can I get automatic dependency resolution in my scala scripts?

I'm just learning Scala, coming out of the Groovy/Java world. My first script requires a third-party library, TagSoup, for XML/HTML parsing, and I'm loath to add it the old-school way: that is, downloading TagSoup from its developer website and then adding it to the classpath.
Is there a way to resolve third-party libraries in my Scala scripts? I'm thinking Ivy, I'm thinking Grape.
Ideas?
The answer that worked best for me was to install conscript (from n8han):
curl https://raw.github.com/n8han/conscript/master/setup.sh | sh
cs harrah/xsbt --branch v0.11.0
Then I could import TagSoup fairly easily in example.scala:
/***
libraryDependencies ++= Seq(
"org.ccil.cowan.tagsoup" % "tagsoup" % "1.2.1"
)
*/
def getLocation(address:String) = {
...
}
And run using scalas:
scalas example.scala
Thanks for the help!
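The body of getLocation is elided above; purely to show the resolved TagSoup dependency in use, a hypothetical version (the parsing logic is mine, and scala.xml is assumed to be available, as it was in the Scala 2.9/2.10 standard library) could look like:
import org.ccil.cowan.tagsoup.jaxp.SAXFactoryImpl
import scala.xml.XML
def getLocation(address: String) = {
  // TagSoup supplies a lenient SAX parser that copes with real-world HTML
  val parser = new SAXFactoryImpl().newSAXParser()
  val page = XML.withSAXParser(parser).load(new java.net.URL(address))
  (page \\ "title").text // e.g. pull out the page title
}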
While the answer is SBT, it could have been more helpful where scripts are concerned. See, SBT has a special thing for scripts, as described here. Once you get scalas installed, either by installing conscript and then running cs harrah/xsbt --branch v0.11.0, or simply by writing it yourself more or less like this:
#!/bin/sh
java -Dsbt.main.class=sbt.ScriptMain \
-Dsbt.boot.directory=/home/user/.sbt/boot \
-jar sbt-launch.jar "$@"
Then you can write your script like this:
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.1"
libraryDependencies ++= Seq(
"net.databinder" %% "dispatch-twitter" % "0.8.3",
"net.databinder" %% "dispatch-http" % "0.8.3"
)
*/
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
import json.{ Js, JsObject }
def process(param: JsObject) = {
val Search.text(txt) = param
val Search.from_user(usr) = param
val Search.created_at(time) = param
"(" + time + ")" + usr + ": " + txt
}
Http.x((Search("#scala") lang "en") ~> (_ map process foreach println))
You may also be interested in paulp's xsbtscript, which creates an xsbtscript shell script that does the same thing as scalas (I guess the latter was based on the former), with the advantage that, without either conscript or sbt installed, you can get it ready with this:
curl https://raw.github.com/paulp/xsbtscript/master/setup.sh | sh
Note that it installs sbt and conscript.
And there's also paulp's sbt-extras, which is an alternative "sbt" command line, with more options. Note that it's still sbt, just the shell script that starts it is more intelligent.
SBT (Simple Build Tool) seems to be the build tool of choice in the Scala world. It supports a number of different dependency resolution mechanisms: https://github.com/harrah/xsbt/wiki/Library-Management
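For a full project (rather than a script), the same kind of declaration lives in build.sbt; a minimal sketch, reusing the TagSoup coordinates from the first answer:

scalaVersion := "2.9.1"

libraryDependencies += "org.ccil.cowan.tagsoup" % "tagsoup" % "1.2.1"

// extra repositories can be added the same way, e.g.
// resolvers += "Sonatype OSS" at "https://oss.sonatype.org/content/repositories/releases"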
Posted as an answer because it doesn't fit within the comment length constraint.
In addition to @Chris's answer, I would like to recommend some common companions for sbt (which I personally think is absolutely superb). Although sbt stands for Simple Build Tool, it is sometimes not so easy for first-timers to set up a project with sbt (all these things with layouts, configs, and so on).
Use giter8 (g8) to create a new project from a predefined template (which g8 fetches from github.com). There are templates for Android apps, Unfiltered and more. Sometimes they include some of the dependencies by default.
To create the layout, just type:
g8 gseitz/android-sbt-project
(An example for an Android app.)
Alternatively, use the np plugin for sbt, which provides an interactive, type-through way to create a new project and basic layout.
A corrected and simplified version of the current main answer: use scalas.
You have to compose your script of three parts. One is sbt itself, another is a very simple wrapper around SBT called scalas, and the last is your custom script. Note that the first two can be installed either globally (/usr/bin/, ~/bin/) or locally (in the same directory).
The first part is sbt. If you already have it installed, good. If not, you can either install it or use a very cool script from paulp: https://github.com/paulp/sbt-extras/blob/master/sbt BTW, that thing is a charming way to use sbt on Unix, although it is not available on Windows. Anyway...
The second part is scalas. It's just an entry point to SBT.
#!/bin/sh
exec /path/to/sbt -Dsbt.main.class=sbt.ScriptMain -sbt-create \
-Dsbt.boot.directory=$HOME/.sbt/boot \
"$@"
The last part is your custom script. Example:
#!/usr/bin/env scalas
/***
scalaVersion := "2.11.0"
libraryDependencies ++= Seq(
"org.joda" % "joda-convert" % "1.5",
"joda-time" % "joda-time" % "2.3"
)
*/
import org.joda.time._
println(DateTime.now())
//println(DateTime.now().minusHours(12).dayOfMonth())
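Assuming scalas is on the PATH and the script above is saved as, say, now.scala (the file name is mine), it runs like any other executable; the first run resolves joda-time, and later runs start faster:
chmod +x now.scala
./now.scala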
What Daniel said. Although it's worth mentioning that the sbt docs carefully label this functionality "experimental".
Indeed, if you try to run the embedded script with scalaVersion := "2.10.3", you'll get not found: value !#
Luckily, the !# script header-closer is unnecessary here, so you can leave it out.
Under scalaVersion := "2.10.3", the script will need to have the file extension ".scala"; using the bash shell script file extension, ".sh", won't work.
Also, it isn't clear to me that the latest version of Dispatch (0.11.0) supports dispatch-twitter, which is used in the example.
For more about header-closers in this context, see Alvin Alexander's blog post on Scala scripting, or section 14.10 of his Scala Cookbook.
I have a build.gradle file with the following task:
task classpath(dependsOn: jar) << {
println "CLASSPATH=${tasks.jar.archivePath}:${configurations.runtime.asPath}"
}
Then, in my Scala script:
#!
script_dir=$(cd $(dirname "$0") >/dev/null; pwd -P)
classpath=$(cd ${script_dir} && ./gradlew classpath | grep '^CLASSPATH=' | sed -e 's|^CLASSPATH=||')
PATH=${SCALA_HOME}/bin:${PATH}
JAVA_OPTS="-Xmx4g -XX:MaxPermSize=1g" exec scala -classpath ${classpath} "$0" "$0" "$@"
!#
Note that we don't need a separate scalas executable in our PATH, since we can use the self-executing shell script trick.
Here's an example script, which reads its own content (via the $0 variable), chops off everything before an arbitrary marker (__BEGIN_SCRIPT__) and runs sbt on the result. We use process substitution to pretend this calculated content is a real file. One problem with this approach is that sbt will seek within the given file, i.e. it doesn't read it sequentially. That stops it working with the <(foo) form of process substitution, as found in bash; however zsh has a =(foo) form which is seekable.
#!/usr/bin/env zsh
set -e
# Find the line # in this file ($0) after the line beginning __BEGIN_SCRIPT__
LINENUM=$(awk '/^__BEGIN_SCRIPT__/ {print NR + 1; exit 0; }' "$0")
sbtRun() {
# Run the sbt command, such that it will fetch dependencies and execute a
# script
sbt -Dsbt.main.class=sbt.ScriptMain \
-sbt-create \
-Dsbt.boot.directory="$HOME/.sbt/boot" \
"$@"
}
# Run SBT on the contents of this file, starting at LINENUM
sbtRun =(tail -n+"$LINENUM" "$0")
exit 0
__BEGIN_SCRIPT__
/***
scalaVersion := "2.11.0"
libraryDependencies ++= Seq(
"org.joda" % "joda-convert" % "1.5",
"joda-time" % "joda-time" % "2.3"
)
*/
import org.joda.time._
println(DateTime.now())