I am trying to build an app for Serverless using sbt assembly. It works if I build it with sbt assembly and then run it with serverless invoke local --function func; however, if I run it with serverless offline start, it throws an error saying the config for Akka is missing.
I already have the following in my sbt file:
assembly / assemblyMergeStrategy := {
case PathList("META-INF", _ #_*) => MergeStrategy.discard
case PathList("reference.conf", _ #_*) => MergeStrategy.concat
case PathList("application.conf", _ #_*) => MergeStrategy.concat
case "reference.conf" => MergeStrategy.concat
case "application.conf" => MergeStrategy.concat
case PathList("logback.xml", _ #_*) => MergeStrategy.concat
case PathList("logback-test.xml", _ #_*) => MergeStrategy.concat
case _ => MergeStrategy.first
}
Akka does a sneaky little thing that might be tripping you up. If you look at the reference.conf in their jar, you will see that not all of their configuration lives in reference.conf:
# Akka version, checked against the runtime version of Akka. Loaded from generated conf file.
include "version"
So essentially you may have to add entries to your merge strategy for these additional files (which might have duplicates, assuming they use this pattern in their other libraries).
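For example, something along these lines might be needed; the version.conf file name here is my assumption based on that include directive, so check the actual entries inside the Akka jars you depend on:
assembly / assemblyMergeStrategy := {
  // Akka's reference.conf does `include "version"`, which resolves to a
  // generated version.conf inside the jar, so concatenate that file as well
  case "version.conf"     => MergeStrategy.concat
  case "reference.conf"   => MergeStrategy.concat
  case "application.conf" => MergeStrategy.concat
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}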
I have a multi-project build with a particularly messy module which contains several main classes. I would like to create several distribution packages for this messy module, each package using a distinct file set and a different format. Ideas?
This is the answer from the sbt-native-packager issue tracker where the same question was posted.
I'm adding this from the gitter chat as well:
I'm just arriving in this chat room and my knowledge of sbt-native-packager is virtually zero... but anyway... looks to me that JavaAppPackaging and other archetypes should actually be configurations extended from Universal. In this scenario, I would just create my own configuration extended from JavaAppPackaging and tweak the necessary bits according to my needs. And, finally, if the plugin just picks mappings in ThisScope... it would pick my own scope, and not JavaAppPackaging... and not Universal.
So, let's go through this one by one.
The sbt-native-packager plugin always picks mappings in Universal. This is not ideal. It should conceptually pick mappings in ThisScope.
SBT native packager provides two categories of AutoPlugins: FormatPlugins and ArchetypePlugins. FormatPlugins provide a new package format, e.g. UniversalPlugin (zip, tarball) or DebianPlugin (.deb). These plugins form a hierarchy, as they are built on top of each other:
          SbtNativePackager
                 |
             Universal
            /    |    \
      Docker   Linux   Windows
               /   \
          Debian    RPM
mappings, which define a file -> targetpath relation, are inherited with this pattern:
mappings in ParentFormatPluginScope := (mappings in FormatPluginScope).value
So for Docker it looks like this:
mappings in Docker := (mappings in Universal).value
The Linux format plugins use specialized mappings to preserve file permissions, but the pattern is basically the same.
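For example, the Debian format inherits the Linux mappings in roughly the same way; this is a sketch of the idea rather than the plugin's literal settings:
// rough sketch: Debian builds on the Linux mappings, which carry
// permission/owner metadata in addition to the plain file -> targetpath pairs
linuxPackageMappings in Debian := linuxPackageMappings.value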
Since the sbt-native-packager plugin always picks mappings in Universal, I have to redefine mappings in Universal in each of my configurations
Yes. If you want to define your own scope, inherit the mappings, and change them, you have to do this, just like all the other packaging plugins do. I recommend putting this code into a custom AutoPlugin in your project folder.
For example (not tested, imports may be missing):
import sbt._
import sbt.Keys._
import com.typesafe.sbt.SbtNativePackager.Universal
import com.typesafe.sbt.packager.archetypes.JavaAppPackaging

object BuilderRSPlugin extends AutoPlugin {
  override def requires = JavaAppPackaging

  object autoImport {
    val BuilderRS = config("builderrs") extend Universal
  }
  import autoImport._

  override lazy val projectSettings = Seq(
    mappings in BuilderRS := (mappings in Universal).value
  )
}
looks to me that JavaAppPackaging and other archetypes should actually be configurations extended from Universal
JavaAppPackaging is an archetype, which means this plugin doesn't bring any new packaging formats, thus no new scopes. It configures all the packaging formats it can and enables them.
You package stuff by specifying the scope:
universal:packageBin
debian:packageBin
windows:packageBin
So if you need to customize your output format, you do this in the respective scope.
mappings in Docker := (mappings in Docker).value.filter( /* whatever you want to filter */ )
See: https://github.com/sbt/sbt-native-packager/issues/746
IMPORTANT: This is an "answer in progress". IT DOES NOT WORK YET!
This is an example of how one could achieve this.
The basic idea is that we add configurations for different packages to be generated. Each configuration tells which files will be present in the package. This does not work as expected. See my comments after the code.
lazy val BuilderRS = sbt.config("BuilderRS").extend(Compile, Universal)
lazy val BuilderRV = sbt.config("BuilderRV").extend(Compile, Universal)

addCommandAlias("buildRS", "MessyModule/BuilderRS:packageZipTarball")
addCommandAlias("buildRV", "MessyModule/BuilderRV:packageBin") // ideally should be named packageZip

lazy val Star5FunctionalTestSupport =
  project
    .in(file("MessyModule"))
    .enablePlugins(JavaAppPackaging)
    .settings((buildSettings): _*)
    .configs(Universal, BuilderRS, BuilderRV)
    .settings(inConfig(BuilderRS)(
      Defaults.configSettings ++ JavaAppPackaging.projectSettings ++
        Seq(
          executableScriptName := "rs",
          mappings in Universal :=
            (mappings in Universal).value
              .filter {
                case (file, name) => ! file.getAbsolutePath.endsWith("/bin/rv")
              },
          topLevelDirectory in Universal :=
            Some(
              "ftreports-" +
                new java.text.SimpleDateFormat("yyyyMMdd_HHmmss")
                  .format(new java.util.Date())),
          mainClass in ThisScope := Option(mainClassRS))): _*)
    //TODO: SEE COMMENTS BELOW ===============================================
    // .settings(inConfig(BuilderRV)(
    //   Defaults.configSettings ++ JavaAppPackaging.projectSettings ++
    //     Seq(
    //       packageBin <<= packageBin in Universal,
    //       executableScriptName := "rv",
    //       mappings in ThisScope :=
    //         (mappings in Universal).value
    //           .filter {
    //             case (file, name) => ! file.getAbsolutePath.endsWith("/bin/rs")
    //           },
    //       topLevelDirectory in Universal :=
    //         Some(
    //           "ftviewer-" +
    //             new java.text.SimpleDateFormat("yyyyMMdd_HHmmss")
    //               .format(new java.util.Date())),
    //       mainClass in ThisScope := Option(mainClassRV))): _*)
Now observe configuration BuilderRV, which is commented out.
It is basically the same thing as configuration BuilderRS, except that we are now deploying a different shell script in the bin folder. There are some other small differences, but they are not relevant to this discussion. There are two problems:
1. The sbt-native-packager plugin always picks mappings in Universal. This is not ideal. It should conceptually pick mappings in ThisScope.
2. Since the sbt-native-packager plugin always picks mappings in Universal, I have to redefine mappings in Universal in each of my configurations. And this is a problem because mappings in Universal is defined as a function of itself in all configurations: the result is that we end up chaining logic onto mappings in Universal each time we redefine it in each configuration. This causes trouble in this example in particular because configuration BuilderRV (the second one) will apply not only its own filter, but also the filter defined in BuilderRS (the first one), which is not what I want.
How can I have unfiltered-jetty serve static files without allowing directory browsing?
Jetty has the dirAllowed setting, but it does not seem easily accessible from Unfiltered.
This is not a full answer but I bet you can put it together by looking in 2 places:
1. the val unfiltered.jetty.Server.underlying of type org.eclipse.jetty.server.Server in the unfiltered-jetty code
2. 'Configuring a File Server' in the Jetty 8 (I think) wiki. Maybe that resource_handler.setDirectoriesListed(true) call?
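If you go down that road, the Jetty side would look something like this; just a sketch, since how you attach the handler depends on how unfiltered-jetty wires its handler chain, and the resource base path here is made up:
import org.eclipse.jetty.server.handler.ResourceHandler

// plain Jetty configuration, independent of Unfiltered
val resources = new ResourceHandler
resources.setDirectoriesListed(false) // disable directory browsing
resources.setResourceBase("static")   // hypothetical static-file directory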
This is working with Unfiltered 0.8.4 which uses Jetty 8:
import org.eclipse.jetty.server.handler.{HandlerCollection,ContextHandler}
import org.eclipse.jetty.server.Handler
def disableDirBrowsing(hc: Array[Handler]): Unit = {
  hc.foreach {
    case nested: HandlerCollection => disableDirBrowsing(nested.getHandlers)
    case c: ContextHandler =>
      c.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false")
    case _ => // ignore everything else
  }
}
If srv is your Unfiltered server object after adding contexts to it, you can now disable directory browsing like so:
disableDirBrowsing(srv.underlying.getHandlers)
I'm struggling to figure out how to install the Opus plugin for GStreamer. I have installed opus-tools and libopus0 via apt-get (everything happens on Ubuntu 14.04). I also have gstreamer-plugins-bad installed.
After multiple trials, bugs, etc., GStreamer displays the following error each time I try to call gst-inspect-1.0:
(gst-plugin-scanner:17408): GStreamer-WARNING **: Failed to load plugin '/opt/gstreamer-1.4.0/lib/gstreamer-1.0/libgstopus.so': /opt/gstreamer-1.4.0/lib/gstreamer-1.0/libgstopus.so: undefined symbol: opus_multistream_encode
What could have gone wrong during opus installation process that could cause this error?
If it's of any use, here's the result of ldd /opt/gstreamer-1.4.0/lib/libgstopus.so:
/opt/gstreamer-1.4.0/lib$ ldd /opt/gstreamer-1.4.0/lib/libgstopus.so
linux-vdso.so.1 => (0x00007fff859fe000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4f9004f000)
libgstaudio-1.0.so.0 => /opt/gstreamer-1.4.0/lib/libgstaudio-1.0.so.0 (0x00007f4f8fe08000)
libgsttag-1.0.so.0 => /opt/gstreamer-1.4.0/lib/libgsttag-1.0.so.0 (0x00007f4f8fbcf000)
libgstrtp-1.0.so.0 => /opt/gstreamer-1.4.0/lib/libgstrtp-1.0.so.0 (0x00007f4f8f9b5000)
libgstbase-1.0.so.0 => /opt/gstreamer-1.4.0/lib/libgstbase-1.0.so.0 (0x00007f4f8f75c000)
libgstreamer-1.0.so.0 => /opt/gstreamer-1.4.0/lib/libgstreamer-1.0.so.0 (0x00007f4f8f450000)
libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007f4f8f1ff000)
libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f4f8eef7000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4f8ecd8000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4f8e912000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f4f8e6f9000)
libgmodule-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x00007f4f8e4f4000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f4f8e2f0000)
libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007f4f8e0e7000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f4f8dea9000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4f90587000)
I must have missed something when installing Opus; however, I've run out of ideas, so I'm hoping someone can give me a hint about where to look or what could have gone wrong.
Thanks for any help :)
Your error means that the Opus library has not been correctly linked into your GStreamer build.
Did you install gstreamer-plugins-bad yourself? If so, when you run the configure script from the plugins-bad package, you could add the following options:
./configure --host=xxxx --with-plugin=opus --prefix=xxxxx
Here is how it was configured for sbt 0.12.x:
parallelExecution in test := false
testGrouping in Test <<= definedTests in Test map { tests =>
  tests.map { test =>
    import Tests._
    import scala.collection.JavaConversions._
    new Group(
      name = test.name,
      tests = Seq(test),
      runPolicy = SubProcess(javaOptions = Seq(
        "-server", "-Xms4096m", "-Xmx4096m", "-XX:NewSize=3584m",
        "-Xss256k", "-XX:+UseG1GC", "-XX:+TieredCompilation",
        "-XX:+UseNUMA", "-XX:+UseCondCardMark",
        "-XX:-UseBiasedLocking", "-XX:+AlwaysPreTouch") ++
        System.getProperties.toMap.map {
          case (k, v) => "-D" + k + "=" + v
        }))
  }.sortWith(_.name < _.name)
}
During migration to sbt 0.13.x I get the following error:
[error] Could not accept connection from test agent: class java.net.SocketException: socket closed
java.net.SocketException: socket closed
at java.net.DualStackPlainSocketImpl.accept0(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:131)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:199)
at java.net.ServerSocket.implAccept(ServerSocket.java:530)
at java.net.ServerSocket.accept(ServerSocket.java:498)
at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:48)
at java.lang.Thread.run(Thread.java:745)
The migration changes are just updates to the sbt and plugin versions.
Are there any other approaches to forking and ordering of tests in sbt 0.13.x that would overcome that exception?
It works fine on Linux and Mac OS.
I got the error on Windows because of the classpath length limit, which prevents launching the test agent instance, with the following error in System.err:
Error: Could not find or load main class sbt.ForkMain
I also got this error when moving a Scala repo to sbt version sbt.version = 1.3.8 (previously 1.2.8 was fine). Strangely, it worked fine on my Mac but failed on TeamCity Linux build agents.
The fix for me was to set
fork := false,
in build.sbt.
I'm not sure why the repo previously had fork := true (I guess it was copy/pasted from somewhere else, as there is no strong reason for it in this repo), but this change resolved the issue. Locally on my Mac it also runs a few seconds faster now.
See here for background
https://www.scala-sbt.org/1.0/docs/Forking.html
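If you do still need per-test forking under sbt 0.13.x, the 0.12 grouping above can be written with the newer := syntax; this is just an untested sketch with the JVM options trimmed down, and it assumes sbt 0.13's SubProcess taking a ForkOptions value rather than a plain javaOptions sequence (sbt 1.x would need ForkOptions().withRunJVMOptions(...) instead):
testGrouping in Test := (definedTests in Test).value.map { test =>
  import Tests._
  new Group(
    name = test.name,
    tests = Seq(test),
    // fork each test into its own JVM with the desired options
    runPolicy = SubProcess(ForkOptions(runJVMOptions = Seq("-Xms4096m", "-Xmx4096m")))
  )
}.sortWith(_.name < _.name)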
How can I find the standard site_perl (non-arch specific) location? Is it safe to just loop over @INC and find the path ending with "site_perl", or is there a standard way to do this?
The reason for trying to find this is that I have a very large project built up from hundreds of individual modules, all with their own Makefile.PL files (pretty much every .pm file has been built as its own CPAN-style module). Along with this, each module may have artifacts (templates, .cgi's, etc.) in various locations, all of which need to be deployed to various locations; nothing is standard. This is the first step in trying to get this under control: basically having a single Makefile which can find and deploy everything. The next step will be getting it into a sensible layout in version control.
I've spent time trying to do this with the standard installation tools, but have had no luck.
C:\Temp> perl -MConfig -e "print qq{$_ => $Config{$_}\n} for grep { /site/ } keys %Config"
d_sitearch => define
installsitearch => C:\opt\perl\site\lib
installsitebin => C:\opt\perl\site\bin
installsitehtml1dir =>
installsitehtml3dir =>
installsitelib => C:\opt\perl\site\lib
installsiteman1dir =>
installsiteman3dir =>
installsitescript => C:\opt\perl\site\bin
sitearch => C:\opt\perl\site\lib
sitearchexp => C:\opt\perl\site\lib
sitebin => C:\opt\perl\site\bin
sitebinexp => C:\opt\perl\site\bin
sitehtml1dir =>
sitehtml1direxp =>
sitehtml3dir =>
sitehtml3direxp =>
sitelib => C:\opt\perl\site\lib
sitelib_stem =>
sitelibexp => C:\opt\perl\site\lib
siteman1dir =>
siteman1direxp =>
siteman3dir =>
siteman3direxp =>
siteprefix => C:\opt\perl\site
siteprefixexp => C:\opt\perl\site
sitescript =>
sitescriptexp =>
usesitecustomize => define
Or, as @ysth points out in the comments, you can use:
C:\Temp> perl -V:.*site.*
on Windows and
$ perl '-V:.*site.*'
in *nix shells.
Is there a reason not to use one of the module installers (ExtUtils::MakeMaker, Module::Build, Module::Install)?
But if you must, the directory is available (after loading Config) as $Config::Config{'installsitelib'}. Note that some platforms may configure perl such that this directory doesn't literally appear in @INC, instead having some other directory that's symlinked to the installsitelib directory.
Just run perl -V. It will print the default @INC at the end.
It is not safe to loop over @INC, as it can be modified by code or the environment and may therefore contain multiple directories that end in site_perl.
If you are trying to determine where a given module is installed, use %INC.