OCI error "/opt/docker/bin/my_job": no such file or directory using sbt docker:publishLocal - scala

If you use sbt docker:publishLocal to build a Docker image from your Scala project, you will see output like the following:
[info] Packaging /home/user123/myUser/repos/my_job/target/scala-2.12/app_internal_2.12-0.1.jar ...
[info] Done packaging.
[info] Sending build context to Docker daemon 129.7MB
[info] Step 1/7 : FROM openjdk:11-jre
[info] ---> 8c8b7f0ab84c
[info] Step 2/7 : LABEL MAINTAINER="no_name@my.org"
[info] ---> Using cache
[info] ---> d5caf9a92999
[info] Step 3/7 : WORKDIR /opt/docker
[info] ---> Using cache
[info] ---> d887eeb10e8e
[info] Step 4/7 : ADD --chown=root:root opt /opt
[info] ---> 1b43a84a5e32
[info] Step 5/7 : USER root
[info] ---> Running in 282c7f7de8ad
[info] Removing intermediate container 282c7f7de8ad
[info] ---> 11fed4892683
[info] Step 6/7 : ENTRYPOINT ["/opt/docker/bin/my_job"]
[info] ---> Running in 1d297dd1e960
[info] Removing intermediate container 1d297dd1e960
[info] ---> 1923a8df3fcf
[info] Step 7/7 : CMD []
[info] ---> Running in 3d9f7a4a262b
[info] Removing intermediate container 3d9f7a4a262b
[info] ---> d67ed46fd3fe
[info] Successfully built d67ed46fd3fe
[info] Successfully tagged docker_app_internal:0.1
[info] Built image docker_app_internal with tags [0.1]
[success] Total time: 25 s, completed Mar 27, 2019 10:23:35 AM
You may get confused by the error, because the image builds successfully. To illustrate:
This works:
docker run -it --entrypoint=/bin/bash docker_app_internal:0.1 -i
This does not work:
docker run docker_app_internal:0.1
Thanks to @Muki for creating this helpful project.
Refer: https://github.com/sbt/sbt-native-packager

If your project's root folder name differs from your main class name, the entrypoint generated by sbt docker:publishLocal becomes /your/linuxpath/bin/rootFolder, but the executable that actually gets created inside the Docker image is /your/linuxpath/bin/main-class (the script name is derived from the main class, so MainClass becomes main-class).
To fix this, set the entrypoint explicitly in build.sbt as below:
dockerEntrypoint := Seq("/opt/docker/bin/main-class")
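For context, a minimal build.sbt sketch; the main class name com.example.MainClass and the plugin setup are illustrative assumptions, not taken from the project above:
// minimal sketch, assuming sbt-native-packager's DockerPlugin is available
enablePlugins(JavaAppPackaging, DockerPlugin)
// the packager derives the start script name from the main class,
// so com.example.MainClass produces bin/main-class
Compile / mainClass := Some("com.example.MainClass")
// point the entrypoint at the script that actually gets created
dockerEntrypoint := Seq("/opt/docker/bin/main-class")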

Related

Cluster running on single machine eats too much space of /dev/shm

I am running the example provided by the official Akka samples: https://github.com/akka/akka-samples/tree/2.5/akka-sample-cluster-scala.
My OS is Linux Mint 19 with the latest kernel.
For the Worker Dial-in Example (the Transformation example), I cannot fully run the example because there is not enough space in /dev/shm, even though I have more than 2GB of available space.
The problem: when I launch the first frontend node, it eats a few KB of space. When I launch the second one, it eats a few MB. When I launch the third one, it eats a few hundred MB. And I cannot launch a fourth one at all; it throws an error that brings the whole cluster down:
[info] Warning: space is running low in /dev/shm (tmpfs) threshold=167,772,160 usable=95,424,512
[info] Warning: space is running low in /dev/shm (tmpfs) threshold=167,772,160 usable=45,088,768
[info] [ERROR] [11/05/2018 21:03:56.156] [ClusterSystem-akka.actor.default-dispatcher-12] [akka://ClusterSystem@127.0.0.1:57246/] swallowing exception during message send
[info] io.aeron.exceptions.RegistrationException: IllegalStateException : Insufficient usable storage for new log of length=50335744 in /dev/shm (tmpfs)
[info] at io.aeron.ClientConductor.onError(ClientConductor.java:174)
[info] at io.aeron.DriverEventsAdapter.onMessage(DriverEventsAdapter.java:81)
[info] at org.agrona.concurrent.broadcast.CopyBroadcastReceiver.receive(CopyBroadcastReceiver.java:100)
[info] at io.aeron.DriverEventsAdapter.receive(DriverEventsAdapter.java:56)
[info] at io.aeron.ClientConductor.service(ClientConductor.java:660)
[info] at io.aeron.ClientConductor.awaitResponse(ClientConductor.java:696)
[info] at io.aeron.ClientConductor.addPublication(ClientConductor.java:371)
[info] at io.aeron.Aeron.addPublication(Aeron.java:259)
[info] at akka.remote.artery.aeron.AeronSink$$anon$1.<init>(AeronSink.scala:103)
[info] at akka.remote.artery.aeron.AeronSink.createLogicAndMaterializedValue(AeronSink.scala:100)
[info] at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
[info] at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
[info] at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
[info] at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:406)
[info] at akka.stream.scaladsl.RunnableGraph.run(Flow.scala:588)
[info] at akka.remote.artery.Association.runOutboundOrdinaryMessagesStream(Association.scala:726)
[info] at akka.remote.artery.Association.runOutboundStreams(Association.scala:657)
[info] at akka.remote.artery.Association.associate(Association.scala:649)
[info] at akka.remote.artery.AssociationRegistry.association(Association.scala:989)
[info] at akka.remote.artery.ArteryTransport.association(ArteryTransport.scala:724)
[info] at akka.remote.artery.ArteryTransport.send(ArteryTransport.scala:710)
[info] at akka.remote.RemoteActorRef.$bang(RemoteActorRefProvider.scala:591)
[info] at akka.actor.ActorRef.tell(ActorRef.scala:124)
[info] at akka.actor.ActorSelection$.rec$1(ActorSelection.scala:265)
[info] at akka.actor.ActorSelection$.deliverSelection(ActorSelection.scala:269)
[info] at akka.actor.ActorSelection.tell(ActorSelection.scala:46)
[info] at akka.actor.ScalaActorSelection.$bang(ActorSelection.scala:280)
[info] at akka.actor.ScalaActorSelection.$bang$(ActorSelection.scala:280)
[info] at akka.actor.ActorSelection$$anon$1.$bang(ActorSelection.scala:198)
[info] at akka.cluster.ClusterCoreDaemon.gossipTo(ClusterDaemon.scala:1330)
[info] at akka.cluster.ClusterCoreDaemon.gossip(ClusterDaemon.scala:1047)
[info] at akka.cluster.ClusterCoreDaemon.gossipTick(ClusterDaemon.scala:1010)
[info] at akka.cluster.ClusterCoreDaemon$$anonfun$initialized$1.applyOrElse(ClusterDaemon.scala:496)
[info] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
[info] at akka.actor.Actor.aroundReceive(Actor.scala:517)
[info] at akka.actor.Actor.aroundReceive$(Actor.scala:515)
[info] at akka.cluster.ClusterCoreDaemon.aroundReceive(ClusterDaemon.scala:295)
[info] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
[info] at akka.actor.ActorCell.invoke(ActorCell.scala:557)
[info] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
[info] at akka.dispatch.Mailbox.run(Mailbox.scala:225)
[info] at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
[info] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
[info] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
[info] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
[info] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
It seems to be sending a huge message (48MB+?) to every node.
So what is going on here? What is the root cause, and how can I fix it?
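For reference, a quick way to watch /dev/shm fill up while launching the nodes (plain Linux tooling, nothing Akka-specific; aeron-$USER is Aeron's default directory name on Linux):
# tmpfs usage, checked before/after each node is launched
df -h /dev/shm
# the log buffers allocated there by the embedded Aeron media driver
ls -lh /dev/shm/aeron-$USER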

bx dev build --debug successfully completes, but can't run bx dev run

I can successfully run bx dev build and then run my container locally with bx dev run.
When I execute bx dev build --debug --trace, it completes successfully and my unit tests pass. However, when I execute bx dev run immediately afterwards, I get:
FAILED
A successful build of the project is required before running bx dev run. Verify that bx dev build completes successfully before attempting bx dev run
Something in the debug build seems to be holding me up, even though it finishes successfully. Any thoughts? The tail end of my debug trace is below (the full trace exceeds the character limit):
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ gapstrainingbff ---
[INFO] Building jar: /project/target/gapstrainingbff-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.4.RELEASE:repackage (default) @ gapstrainingbff ---
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ gapstrainingbff ---
[INFO] Installing /project/target/gapstrainingbff-1.0-SNAPSHOT.jar to /project/.m2/repository/projects/gapstrainingbff/1.0-SNAPSHOT/gapstrainingbff-1.0-SNAPSHOT.jar
[INFO] Installing /project/pom.xml to /project/.m2/repository/projects/gapstrainingbff/1.0-SNAPSHOT/gapstrainingbff-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 54.659 s
[INFO] Finished at: 2018-02-22T21:39:29+00:00
[INFO] Final Memory: 19M/31M
[INFO] ------------------------------------------------------------------------
OK
Process time: 57.911463s
Stopping the 'bx-dev-gapstrainingbff-tools' container...
OK
acmartinez@Andreas-MacBook-Air gaps-training-bff $ idt run
FAILED
A successful build of the project is required before running bx dev run. Verify that bx dev build completes successfully before attempting bx dev run
acmartinez@Andreas-MacBook-Air gaps-training-bff $
When you run bx dev build --debug, the IDT CLI builds the application for debugging so that you can execute bx dev debug. To build your application for release and execute bx dev run, you must first execute bx dev build without the --debug flag. The order of the commands matters in this case.
See https://console.bluemix.net/docs/cloudnative/idt/commands.html#run
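So, under that rule, a working order of commands looks like this (a sketch based purely on the description above):
bx dev build            # release build - required before bx dev run
bx dev run
bx dev build --debug    # debug build - enables bx dev debug
bx dev debug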

Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on project hive-exec: There are test failures

When I try to build Hive from source with the following commands:
$git clone https://git-wip-us.apache.org/repos/asf/hive.git
$cd hive
$mvn clean package -Pdist
I receive these errors after running mvn clean package -Pdist:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on project hive-exec: There are test failures.
[ERROR]
[ERROR] Please refer to /home/elbasir/hive/ql/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hive-exec
Can anybody help solve this? I have been searching the internet for a week with no success so far.
Here is the failure I got:
Running org.apache.hadoop.hive.ql.parse.repl.load.message.PrimaryToReplicaResourceFunctionTest
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.573 sec <<< FAILURE! - in
org.apache.hadoop.hive.ql.parse.repl.load.message.PrimaryToReplicaResourceFunctionTest
createDestinationPath(org.apache.hadoop.hive.ql.parse.repl.load.message.PrimaryToReplicaResourceFunctionTest) Time elapsed: 1.919 sec <<< FAILURE!
java.lang.AssertionError:
Expected: is "hdfs://somehost:9000/someBasePath/withADir/replicaDbName/somefunctionname/9223372036854775807/ab.jar"
but: was "hdfs://somehost:9000/someBasePath/withADir/replicadbname/somefunctionname/0/ab.jar" at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at
org.apache.hadoop.hive.ql.parse.repl.load.message.PrimaryToReplicaResourceFunctionTest.createDestinationPath(PrimaryToReplicaResourceFunctionTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:316)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:88)
at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:96)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:300)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:288)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:86)
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:208)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:147)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:121)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:33)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:45)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:123)
at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:121)
at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Results :
Failed tests:
PrimaryToReplicaResourceFunctionTest.createDestinationPath:100 Expected: is "hdfs://somehost:9000/someBasePath/withADir/replicaDbName/somefunctionname/9223372036854775807/ab.jar"
but: was "hdfs://somehost:9000/someBasePath/withADir/replicadbname/somefunctionname/0/ab.jar"
Tests run: 3305, Failures: 1, Errors: 0, Skipped: 14
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Hive ............................................... SUCCESS [ 3.223 s]
[INFO] Hive Shims Common .................................. SUCCESS [ 10.496 s]
[INFO] Hive Shims 0.23 .................................... SUCCESS [ 7.779 s]
[INFO] Hive Shims Scheduler ............................... SUCCESS [ 3.569 s]
[INFO] Hive Shims ......................................... SUCCESS [ 1.963 s]
[INFO] Hive Storage API ................................... SUCCESS [01:32 min]
[INFO] Hive Common ........................................ SUCCESS [01:07 min]
[INFO] Hive Service RPC ................................... SUCCESS [ 5.318 s]
[INFO] Hive Serde ......................................... SUCCESS [03:32 min]
[INFO] Hive Standalone Metastore .......................... SUCCESS [ 37.830 s]
[INFO] Hive Metastore ..................................... SUCCESS [02:29 min]
[INFO] Hive Vector-Code-Gen Utilities ..................... SUCCESS [ 0.298 s]
[INFO] Hive Llap Common ................................... SUCCESS [ 7.068 s]
[INFO] Hive Llap Client ................................... SUCCESS [ 22.578 s]
[INFO] Hive Llap Tez ...................................... SUCCESS [ 16.250 s]
[INFO] Spark Remote Client ................................ SUCCESS [ 26.667 s]
[INFO] Hive Query Language ................................ FAILURE [ 01:09 h]
[INFO] Hive Llap Server ................................... SKIPPED
[INFO] Hive Service ....................................... SKIPPED
[INFO] Hive Accumulo Handler .............................. SKIPPED
[INFO] Hive JDBC .......................................... SKIPPED
[INFO] Hive Beeline ....................................... SKIPPED
[INFO] Hive CLI ........................................... SKIPPED
[INFO] Hive Contrib ....................................... SKIPPED
[INFO] Hive Druid Handler ................................. SKIPPED
[INFO] Hive HBase Handler ................................. SKIPPED
[INFO] Hive JDBC Handler .................................. SKIPPED
[INFO] Hive HCatalog ...................................... SKIPPED
[INFO] Hive HCatalog Core ................................. SKIPPED
[INFO] Hive HCatalog Pig Adapter .......................... SKIPPED
[INFO] Hive HCatalog Server Extensions .................... SKIPPED
[INFO] Hive HCatalog Webhcat Java Client .................. SKIPPED
[INFO] Hive HCatalog Webhcat .............................. SKIPPED
[INFO] Hive HCatalog Streaming ............................ SKIPPED
[INFO] Hive HPL/SQL ....................................... SKIPPED
[INFO] Hive Llap External Client .......................... SKIPPED
[INFO] Hive Shims Aggregator .............................. SKIPPED
[INFO] Hive TestUtils ..................................... SKIPPED
[INFO] Hive Packaging ..................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
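Not a fix for the failing assertion itself (note it differs only in the lowercased database name and the 0 timestamp), but two generic Maven moves that may help here, assuming ql is the module containing hive-exec, as the surefire-reports path suggests:
# skip the tests and just build the dist artifacts
mvn clean package -Pdist -DskipTests
# or rerun only the failing test in the ql module to see if it is environment-dependent
mvn test -Dtest=PrimaryToReplicaResourceFunctionTest -pl ql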

Packaging cygnus-common results in an empty jar

I'm trying to install cygnus-common following this guide (http://fiware-cygnus.readthedocs.io/en/latest/cygnus-common/installation_and_administration_guide/install_from_sources/index.html#section3), but the creation of the cygnus-common jar (including dependencies) always results in an empty jar. Apache Flume installed without any issues. I've tried increasing the memory for the Maven JVM as per the guide, but this hasn't helped.
~/fiware-cygnus/cygnus-common$ mvn clean compile exec:exec assembly:single
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building cygnus-common 1.1.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ cygnus-common ---
[INFO] Deleting /home/cygnus/fiware-cygnus/cygnus-common/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cygnus-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ cygnus-common ---
[INFO] Compiling 50 source files to /home/cygnus/fiware-cygnus/cygnus-common/target/classes
[INFO]
[INFO] --- exec-maven-plugin:1.5.0:exec (default-cli) @ cygnus-common ---
[INFO]
[INFO] --- maven-assembly-plugin:2.6:single (default-cli) @ cygnus-common ---
[INFO] Building jar: /home/cygnus/fiware-cygnus/cygnus-common/target/cygnus-common-1.1.0-jar-with-dependencies.jar
Killed
Here's the resulting file:
~/fiware-cygnus/cygnus-common$ ls -lrth target/
total 12K
drwxrwxr-x 3 cygnus cygnus 4.0K Jun 13 14:57 generated-sources
drwxrwxr-x 3 cygnus cygnus 4.0K Jun 13 14:57 classes
drwxrwxr-x 2 cygnus cygnus 4.0K Jun 13 14:57 archive-tmp
-rw-rw-r-- 1 cygnus cygnus 0 Jun 13 14:58 cygnus-common-1.1.0-jar-with-dependencies.jar
A similar question was asked before with no resolution documented. The only suggestions given for that question were lack of disk space, memory, or permissions. Disk space is definitely not an issue, I have already increased the memory for the Maven JVM, and the user has the correct permissions.
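For reference, increasing the Maven JVM's memory is normally done via the MAVEN_OPTS environment variable (the heap sizes below are illustrative, not the values actually used here):
export MAVEN_OPTS="-Xms512m -Xmx1024m"
mvn clean compile exec:exec assembly:single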
EDIT:
So I ran the mvn compile in debug mode as suggested by @frb, and it was a memory issue:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 257024000 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2827), pid=48868, tid=140567873734400
Using top I was able to see that memory usage was over 70% when the process got killed. The system has ~1.7GB, which is well above the stated requirements, and it is a 64-bit system.
I had the system memory increased to 3GB and the build succeeded, so perhaps the hardware requirements should be updated?
Increasing the system memory to 3GB fixed this issue and the jar was created successfully.

Deploy on Heroku using Scalatra

I am trying to deploy my Scalatra web application on Heroku, but I am having one problem.
My application works locally with sbt and with heroku local web. I am using the Heroku sbt plugin.
When I run sbt stage deployHeroku, the application is uploaded and started properly, producing:
user#user-X550JF:~/Documents/SOFT/cloudrobe$ sbt stage deployHeroku
Detected sbt version 0.13.9
....
....
[info] Packaging /home/user/Documents/SOFT/cloudrobe/target/scala-2.11/cloudrobe_2.11-0.1.0-SNAPSHOT.war ...
[info] Done packaging.
[success] Total time: 2 s, completed May 25, 2016 1:04:51 AM
[info] -----> Packaging application...
[info] - app: cloudrobe
[info] - including: target/universal/stage/
[info] -----> Creating build...
[info] - file: target/heroku/slug.tgz
[info] - size: 45MB
[info] -----> Uploading slug... (100%)
[info] - success
[info] -----> Deploying...
[info] remote:
[info] remote: -----> Fetching set buildpack https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/jvm-common.tgz... done
[info] remote: -----> sbt-heroku app detected
[info] remote: -----> Installing OpenJDK 1.8... done
[info] remote:
[info] remote: -----> Discovering process types
[info] remote: Procfile declares types -> web
[info] remote:
[info] remote: -----> Compressing...
[info] remote: Done: 93.5M
[info] remote: -----> Launching...
[info] remote: Released v11
[info] remote: https://cloudrobe.herokuapp.com/ deployed to Heroku
[info] remote:
[info] -----> Done
Using heroku logs I can see:
2016-05-24T23:14:16.007200+00:00 app[web.1]: 23:14:16.006 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:5}] to localhost:33333
2016-05-24T23:14:16.370324+00:00 app[web.1]: 23:14:16.370 [main] INFO o.f.s.servlet.ServletTemplateEngine - Scalate template engine using working directory: /tmp/scalate-5146893161861816095-workdir
2016-05-24T23:14:16.746719+00:00 app[web.1]: 23:14:16.746 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@7a356a0d{/,file:/app/src/main/webapp,AVAILABLE}
2016-05-24T23:14:16.782745+00:00 app[web.1]: 23:14:16.782 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector@7dc51783{HTTP/1.1}{0.0.0.0:8080}
2016-05-24T23:14:16.782924+00:00 app[web.1]: 23:14:16.782 [main] INFO org.eclipse.jetty.server.Server - Started @6674ms
But 5 or 10 seconds later, the following error appears, showing that the app has timed out during boot:
2016-05-24T23:52:32.962896+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=cloudrobe.herokuapp.com request_id=a7f68d98-54a2-44b7-8f5f-47efce0f1833 fwd="52.90.128.17" dyno= connect= service= status=503 bytes=
2016-05-24T23:52:45.463575+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
This is my Procfile, using port 5000:
web: target/universal/stage/bin/cloudrobe -Dhttp.address=127.0.0.1
Thank you.
Your app is binding to port 8080, but it needs to bind to the port set as the $PORT environment variable on Heroku. To do this, you need to add -Dhttp.port=$PORT to your Procfile. It also needs to bind to 0.0.0.0 and not 127.0.0.1. So it might look like this:
web: target/universal/stage/bin/cloudrobe -Dhttp.address=0.0.0.0 -Dhttp.port=$PORT
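After editing the Procfile, redeploying with the same command used in the question and tailing the logs should confirm that Jetty now binds to Heroku's assigned port:
sbt stage deployHeroku
heroku logs --tail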