I'm trying to install cygnus-common following this guide, but creating the cygnus-common JAR (including dependencies) always results in an empty JAR. Apache Flume installed without any issues. I've tried increasing the memory for the Maven JVM as per the guide, but this hasn't helped.
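For reference, this is roughly how the Maven JVM memory can be increased (a sketch; the values are assumptions, not the guide's exact figures):
export MAVEN_OPTS="-Xms512m -Xmx1024m"
The build still ends like this: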
~/fiware-cygnus/cygnus-common$ mvn clean compile exec:exec assembly:single
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building cygnus-common 1.1.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ cygnus-common ---
[INFO] Deleting /home/cygnus/fiware-cygnus/cygnus-common/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cygnus-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ cygnus-common ---
[INFO] Compiling 50 source files to /home/cygnus/fiware-cygnus/cygnus-common/target/classes
[INFO]
[INFO] --- exec-maven-plugin:1.5.0:exec (default-cli) @ cygnus-common ---
[INFO]
[INFO] --- maven-assembly-plugin:2.6:single (default-cli) @ cygnus-common ---
[INFO] Building jar: /home/cygnus/fiware-cygnus/cygnus-common/target/cygnus-common-1.1.0-jar-with-dependencies.jar
Killed
Here's the resulting file:
~/fiware-cygnus/cygnus-common$ ls -lrth target/
total 12K
drwxrwxr-x 3 cygnus cygnus 4.0K Jun 13 14:57 generated-sources
drwxrwxr-x 3 cygnus cygnus 4.0K Jun 13 14:57 classes
drwxrwxr-x 2 cygnus cygnus 4.0K Jun 13 14:57 archive-tmp
-rw-rw-r-- 1 cygnus cygnus 0 Jun 13 14:58 cygnus-common-1.1.0-jar-with-dependencies.jar
A similar question was asked before with no resolution documented. The only suggestions given for that question were a lack of disk space, memory, or permissions. Disk space is definitely not an issue, I have increased the memory for the Maven JVM, and the user has the correct permissions.
EDIT:
So I ran the mvn compile in debug mode as suggested by @frb, and it was a memory issue:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 257024000 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2827), pid=48868, tid=140567873734400
[1]: http://fiware-cygnus.readthedocs.io/en/latest/cygnus-common/installation_and_administration_guide/install_from_sources/index.html#section3 "guide"
Using top I was able to see that memory usage was over 70% when the process got killed. The system has ~1.7 GB, which is well above the stated requirements, and it is a 64-bit system.
I had the system memory increased to 3 GB and the build succeeded, so perhaps the hardware requirements should be updated?
Increasing the system memory to 3 GB fixed this issue and the JAR was created successfully.
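If adding physical RAM is not an option, the JVM's own suggestion of adding swap space should also help; a rough sketch (the 2G size is an assumption):
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile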
Related
I am using sbt-assembly to package my project into one fat JAR. I wanted to understand whether I can visualize how much disk space each dependency takes up in my fat JAR.
AFAIK there is no "magic" in sbt-assembly, meaning the whole content of each dependency JAR is added to the fat JAR; for instance, there is no "tree shaking" as in other languages to remove unused classes.
Thus, one way to get what you're looking for is to package your project without assembly and look at the lib directory: each JAR used in the production code will be present and you can get its size.
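For example (a sketch, assuming sbt-native-packager's JavaAppPackaging is used for packaging; your build may differ):
sbt stage
ls -lS target/universal/stage/lib    # one JAR per dependency, sorted by size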
Take a look at the sbt-dependency-graph plugin.
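To enable it, add it to project/plugins.sbt, roughly as follows (the version is an assumption; check the plugin's page for the current install line):
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.10.0-RC1")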
sbt dependencyStats shows a table with one row per module, listing the (transitive) JAR sizes and the number of dependencies:
➭ sbt dependencyStats
[info] TotSize JarSize #TDe #Dep Module
[info] 61.232 MB ------- MB 88 22 a-projects_2.13:0.1
[info] 22.995 MB 0.393 MB 25 8 client-a-akka_2.13:0.0.1
[info] 22.788 MB 0.186 MB 25 8 client-b-akka_2.13:0.0.1
[info] 19.593 MB 0.012 MB 16 3 de.heikoseeberger:akka-http-json4s_2.13:1.27.0
[info] 15.710 MB 0.176 MB 6 2 io.circe:circe-generic_2.13:0.14.1
[info] 12.429 MB 0.003 MB 7 2 io.circe:circe-parser_2.13:0.14.1
[info] 12.426 MB 0.029 MB 6 2 io.circe:circe-jawn_2.13:0.14.1
[info] 12.313 MB 1.116 MB 4 2 io.circe:circe-core_2.13:0.14.1
[info] 11.553 MB 4.749 MB 7 4 com.typesafe.akka:akka-stream_2.13:2.6.12
[info] 11.184 MB 5.915 MB 2 2 org.typelevel:cats-core_2.13:2.6.1
[info] 8.705 MB 2.855 MB 12 2 com.amazonaws:aws-java-sdk-ssm:1.12.210
[info] 7.818 MB 0.134 MB 12 4 net.codingwell:scala-guice_2.13:5.0.2
[info] 7.763 MB 1.257 MB 13 3 com.amazonaws:aws-java-sdk-s3:1.12.210
[info] 7.060 MB 1.848 MB 3 1 com.typesafe.akka:akka-http_2.13:10.2.3
[info] 6.841 MB 0.755 MB 7 7 com.typesafe.play:play-json_2.13:2.9.2
[info] 6.507 MB 0.656 MB 12 2 com.amazonaws:aws-java-sdk-kms:1.12.210
[info] 6.001 MB 0.151 MB 12 2 com.amazonaws:aws-java-sdk-sts:1.12.210
[info] 5.823 MB 1.044 MB 10 7 com.amazonaws:aws-java-sdk-core:1.12.210
[info] 5.262 MB 5.262 MB 0 0 org.typelevel:cats-kernel_2.13:2.6.1
[info] 5.211 MB 4.207 MB 2 2 com.typesafe.akka:akka-http-core_2.13:10.2.3
[info] 5.013 MB 0.013 MB 5 2 com.github.pureconfig:pureconfig-enumeratum_2.13:0.17.1
[info] 4.598 MB 3.663 MB 2 2 com.typesafe.akka:akka-actor_2.13:2.6.12
[info] 4.546 MB 0.000 MB 5 2 com.github.pureconfig:pureconfig_2.13:0.17.1
[info] 4.545 MB 0.140 MB 4 3 com.github.pureconfig:pureconfig-generic_2.13:0.17.1
...
[info] Columns are
[info] - Jar-Size including dependencies
[info] - Jar-Size
[info] - Number of transitive dependencies
[info] - Number of direct dependencies
[info] - ModuleID
If you use sbt docker:publishLocal to build a Docker image from your Scala project, you will see lines like the following printed out:
[info] Packaging /home/user123/myUser/repos/my_job/target/scala-2.12/app_internal_2.12-0.1.jar ...
[info] Done packaging.
[info] Sending build context to Docker daemon 129.7MB
[info] Step 1/7 : FROM openjdk:11-jre
[info] ---> 8c8b7f0ab84c
[info] Step 2/7 : LABEL MAINTAINER="no_name@my.org"
[info] ---> Using cache
[info] ---> d5caf9a92999
[info] Step 3/7 : WORKDIR /opt/docker
[info] ---> Using cache
[info] ---> d887eeb10e8e
[info] Step 4/7 : ADD --chown=root:root opt /opt
[info] ---> 1b43a84a5e32
[info] Step 5/7 : USER root
[info] ---> Running in 282c7f7de8ad
[info] Removing intermediate container 282c7f7de8ad
[info] ---> 11fed4892683
[info] Step 6/7 : ENTRYPOINT ["/opt/docker/bin/my_job"]
[info] ---> Running in 1d297dd1e960
[info] Removing intermediate container 1d297dd1e960
[info] ---> 1923a8df3fcf
[info] Step 7/7 : CMD []
[info] ---> Running in 3d9f7a4a262b
[info] Removing intermediate container 3d9f7a4a262b
[info] ---> d67ed46fd3fe
[info] Successfully built d67ed46fd3fe
[info] Successfully tagged docker_app_internal:0.1
[info] Built image docker_app_internal with tags [0.1]
[success] Total time: 25 s, completed Mar 27, 2019 10:23:35 AM
You may get confused by the resulting error, and here is why.
This works:
docker run -it --entrypoint=/bin/bash docker_app_internal:0.1 -i
This does not work:
docker run docker_app_internal:0.1
Thanks to @Muki for creating this helpful project.
Refer: https://github.com/sbt/sbt-native-packager
If your project root folder name differs from your main class name, then with sbt docker:publishLocal your entrypoint becomes /your/linuxpath/bin/rootFolder, but the actual file created in the Docker image is /your/linuxpath/bin/main-class (if your main class name is MainClass).
To fix this, explicitly set the entrypoint in build.sbt as below:
dockerEntrypoint := Seq("/opt/docker/bin/main-class")
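In context, a minimal build.sbt sketch (assuming sbt-native-packager's JavaAppPackaging and DockerPlugin, which provide docker:publishLocal):
enablePlugins(JavaAppPackaging, DockerPlugin)
dockerEntrypoint := Seq("/opt/docker/bin/main-class")  // match the script name the plugin actually generates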
I am running the example provided by the official Akka samples: https://github.com/akka/akka-samples/tree/2.5/akka-sample-cluster-scala.
My OS is Linux Mint 19 with the latest kernel.
For the Worker Dial-in Example (Transformation Example), I cannot fully run it, as there is not enough space in /dev/shm, although I have more than 2 GB of available space.
The problem is that when I launch the first frontend node, it eats some KBs of space. When I launch the second one, it eats some MBs. When I launch the third one, it eats some hundreds of MBs. I cannot even launch the fourth one; it just throws an error which brings the whole cluster down:
[info] Warning: space is running low in /dev/shm (tmpfs) threshold=167,772,160 usable=95,424,512
[info] Warning: space is running low in /dev/shm (tmpfs) threshold=167,772,160 usable=45,088,768
[info] [ERROR] [11/05/2018 21:03:56.156] [ClusterSystem-akka.actor.default-dispatcher-12] [akka://ClusterSystem@127.0.0.1:57246/] swallowing exception during message send
[info] io.aeron.exceptions.RegistrationException: IllegalStateException : Insufficient usable storage for new log of length=50335744 in /dev/shm (tmpfs)
[info] at io.aeron.ClientConductor.onError(ClientConductor.java:174)
[info] at io.aeron.DriverEventsAdapter.onMessage(DriverEventsAdapter.java:81)
[info] at org.agrona.concurrent.broadcast.CopyBroadcastReceiver.receive(CopyBroadcastReceiver.java:100)
[info] at io.aeron.DriverEventsAdapter.receive(DriverEventsAdapter.java:56)
[info] at io.aeron.ClientConductor.service(ClientConductor.java:660)
[info] at io.aeron.ClientConductor.awaitResponse(ClientConductor.java:696)
[info] at io.aeron.ClientConductor.addPublication(ClientConductor.java:371)
[info] at io.aeron.Aeron.addPublication(Aeron.java:259)
[info] at akka.remote.artery.aeron.AeronSink$$anon$1.<init>(AeronSink.scala:103)
[info] at akka.remote.artery.aeron.AeronSink.createLogicAndMaterializedValue(AeronSink.scala:100)
[info] at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
[info] at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
[info] at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
[info] at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:406)
[info] at akka.stream.scaladsl.RunnableGraph.run(Flow.scala:588)
[info] at akka.remote.artery.Association.runOutboundOrdinaryMessagesStream(Association.scala:726)
[info] at akka.remote.artery.Association.runOutboundStreams(Association.scala:657)
[info] at akka.remote.artery.Association.associate(Association.scala:649)
[info] at akka.remote.artery.AssociationRegistry.association(Association.scala:989)
[info] at akka.remote.artery.ArteryTransport.association(ArteryTransport.scala:724)
[info] at akka.remote.artery.ArteryTransport.send(ArteryTransport.scala:710)
[info] at akka.remote.RemoteActorRef.$bang(RemoteActorRefProvider.scala:591)
[info] at akka.actor.ActorRef.tell(ActorRef.scala:124)
[info] at akka.actor.ActorSelection$.rec$1(ActorSelection.scala:265)
[info] at akka.actor.ActorSelection$.deliverSelection(ActorSelection.scala:269)
[info] at akka.actor.ActorSelection.tell(ActorSelection.scala:46)
[info] at akka.actor.ScalaActorSelection.$bang(ActorSelection.scala:280)
[info] at akka.actor.ScalaActorSelection.$bang$(ActorSelection.scala:280)
[info] at akka.actor.ActorSelection$$anon$1.$bang(ActorSelection.scala:198)
[info] at akka.cluster.ClusterCoreDaemon.gossipTo(ClusterDaemon.scala:1330)
[info] at akka.cluster.ClusterCoreDaemon.gossip(ClusterDaemon.scala:1047)
[info] at akka.cluster.ClusterCoreDaemon.gossipTick(ClusterDaemon.scala:1010)
[info] at akka.cluster.ClusterCoreDaemon$$anonfun$initialized$1.applyOrElse(ClusterDaemon.scala:496)
[info] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
[info] at akka.actor.Actor.aroundReceive(Actor.scala:517)
[info] at akka.actor.Actor.aroundReceive$(Actor.scala:515)
[info] at akka.cluster.ClusterCoreDaemon.aroundReceive(ClusterDaemon.scala:295)
[info] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
[info] at akka.actor.ActorCell.invoke(ActorCell.scala:557)
[info] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
[info] at akka.dispatch.Mailbox.run(Mailbox.scala:225)
[info] at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
[info] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
[info] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
[info] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
[info] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
It seems it is sending a huge message (48 MB+?) to every node.
So what's going on here? What is the root cause, and how can I fix it?
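For context, /dev/shm is a tmpfs mount with its own size limit, separate from the free disk space; this is how its size can be checked and temporarily enlarged (a sketch; the 2G figure is an assumption):
df -h /dev/shm
sudo mount -o remount,size=2G /dev/shm    # temporary; add an fstab entry to persist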
I can successfully run bx dev build and successfully run my container locally with bx dev run.
When I execute bx dev build --debug --trace, I get successful completion and my unit tests pass. However, immediately afterwards when I execute bx dev run I get:
FAILED
A successful build of the project is required before running bx dev run. Verify that bx dev build completes successfully before attempting bx dev run
There seems to be something in the debug build that is holding me up, but it finishes successfully. Any thoughts? The tail end of my debug trace is below (the full trace exceeds the character limit):
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ gapstrainingbff ---
[INFO] Building jar: /project/target/gapstrainingbff-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.4.RELEASE:repackage (default) @ gapstrainingbff ---
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ gapstrainingbff ---
[INFO] Installing /project/target/gapstrainingbff-1.0-SNAPSHOT.jar to /project/.m2/repository/projects/gapstrainingbff/1.0-SNAPSHOT/gapstrainingbff-1.0-SNAPSHOT.jar
[INFO] Installing /project/pom.xml to /project/.m2/repository/projects/gapstrainingbff/1.0-SNAPSHOT/gapstrainingbff-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 54.659 s
[INFO] Finished at: 2018-02-22T21:39:29+00:00
[INFO] Final Memory: 19M/31M
[INFO] ------------------------------------------------------------------------
OK
Process time: 57.911463s
Stopping the 'bx-dev-gapstrainingbff-tools' container...
OK
acmartinez@Andreas-MacBook-Air gaps-training-bff $ idt run
FAILED
A successful build of the project is required before running bx dev run. Verify that bx dev build completes successfully before attempting bx dev run
acmartinez@Andreas-MacBook-Air gaps-training-bff $
When you run bx dev build --debug, the IDT CLI builds the application for debugging so that you can execute bx dev debug. In order to build your application for release and execute bx dev run, you must first execute bx dev build without the --debug flag. The order of commands matters in this case.
See https://console.bluemix.net/docs/cloudnative/idt/commands.html#run
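In other words, the expected order is roughly the following (a sketch; idt in the question invokes the same dev plugin as bx dev):
bx dev build            # release build, required before bx dev run
bx dev run
bx dev build --debug    # debug build, pairs with bx dev debug (not bx dev run)
bx dev debug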
Does anyone have any experience working with DeepDive? It involves installing Java, Python 2.x, PostgreSQL, and SBT, then the DeepDive package. I'm not very familiar with PostgreSQL, but I'm intending to learn these simultaneously.
I'm working on Ubuntu 12.04 and PostgreSQL 9.1. I made a superuser for PostgreSQL using the shell command createuser tom. It's worth noting that my Ubuntu username is also tom. I then changed the password for tom with the following:
$su - postgres
$psql
--> ALTER USER tom WITH password 'pa$$w0RD';
DeepDive comes with a test script, which gives me the following error (I'm not including all the other text, which doesn't include errors).
[info] LogisticRegressionApp:
[info] - should work *** FAILED ***
[info] org.postgresql.util.PSQLException: FATAL: password authentication failed for user "tom"
[info] at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:398)
[info] at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:173)
[info] at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
[info] at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:136)
[info] at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
[info] at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)
[info] at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:31)
[info] at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
[info] at org.postgresql.Driver.makeConnection(Driver.java:393)
[info] at org.postgresql.Driver.connect(Driver.java:267)
[info] ...
Then at the end:
[info] Tests: succeeded 68, failed 2, canceled 0, ignored 0, pending 3
[info] *** 2 TESTS FAILED ***
[error] Failed tests:
[error] org.deepdive.test.integration.LogisticRegressionApp
[error] org.deepdive.test.unit.InferenceManagerSpec
[error] Error during tests:
[error] org.deepdive.test.unit.PostgresInferenceDataStoreSpec
[error] org.deepdive.test.unit.PostgresExtractionDataStoreSpec
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 10 s, completed Mar 17, 2014 8:51:47 PM
If anyone can point me in some direction, I'd appreciate it.
OK, I fixed part of the problem, but this led to a different problem. Here's what I did. test.sh contains the following lines:
export PGUSER=${PGUSER:-`whoami`}
export PGPASSWORD=${PGPASSWORD:-}
which I changed to
export PGUSER=tom
export PGPASSWORD=pa$$w0rd
Now the test proceeds farther, and gets to the point where it prints the following:
06:49:40.953 [default-dispatcher-7][$a][LocalActorRef] INFO Message [org.deepdive.calibration.CalibrationDataWriter$WriteCalibrationData] from Actor[akka://deepdive/temp/$a] to Actor[akka://deepdive/user/inferenceManager/$a#-1669803870] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
06:49:40.955 [default-dispatcher-7][$a][LocalActorRef] INFO Message [akka.actor.PoisonPill$] from Actor[akka://deepdive/user/inferenceManager#-354953956] to Actor[akka://deepdive/user/inferenceManager/$a#-1669803870] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
06:49:40.957 [default-dispatcher-5][inferenceManager][InferenceManager$PostgresInferenceManager] INFO Starting
06:49:40.958 [default-dispatcher-6][factorGraphBuilder][FactorGraphBuilder$PostgresFactorGraphBuilder] INFO Starting
06:50:06.679 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$d][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.699 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$e][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.709 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$f][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.738 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$g][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.759 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$h][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.780 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$i][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.799 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$j][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:07.396 [default-dispatcher-5][taskManager][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
And this continues ad infinitum. The key seems to be the first line, about the message not being delivered between the two Actors.
As I noted in a comment below, I checked out the postgresql.conf file, and uncommented the following line
listen_addresses = 'localhost'          # what IP address(es) to listen on;
It resolved one of the original errors, but not the second error.
Regarding item 2 of Patrick's response, here are the parameters from the pg_hba.conf file:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
Doesn't the part local all all support all local connections?
The error you mention can have multiple causes:
Have you modified postgresql.conf to accept incoming TCP/IP connections? Check the listen_addresses parameter.
Have you modified pg_hba.conf? Here you need to set up an authentication method for DeepDive and/or the JDBC driver definition.
Lastly, can DeepDive connect to the database it intends to use, with the credentials you have supplied (or with the JDBC driver definition)?
Both of the configuration files are in your $PGDATA directory, or on Ubuntu typically under /etc/postgresql/<version>/main.
Note that psql connects over the Unix socket by default (unless you specify -h host_ip), whereas JDBC uses a TCP/IP connection. Try psql over TCP/IP to see if that works. If it does not, work on 1, then 2. If it does, work on 2, then 3.
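For example, to test over TCP/IP (a sketch using the user from the question; postgres is simply a database that exists by default):
psql -h 127.0.0.1 -U tom -W postgres    # -h forces a TCP/IP connection like JDBC uses; -W prompts for the password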