An error that only occurs when observing it in lib/pq (PostgreSQL)

I tried to debug a Go server that uses GORM & PostgreSQL, and it exited with a panic when GORM tried to connect to PostgreSQL with this code:
d, err := gorm.Open("postgres", param)
As I followed the stack, I found the problem is in how lib/pq processes the server version:
case "server_version":
var major1 int
var major2 int
var minor int
//r.string is the string of pgsql version.
//mine is: 12.4 (Debian 12.4-1.pgdg100+1)
_, err = fmt.Sscanf(r.string(), "%d.%d.%d", &major1, &major2, &minor)
if err == nil {
cn.parameterStatus.serverVersion = major1*10000 + major2*100 + minor
}
Obviously, the string 12.4 (Debian 12.4-1.pgdg100+1) doesn't match the format %d.%d.%d, so it exited with an error.
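For illustration, here is a minimal, standalone sketch of a more tolerant parse that falls back to the leading major.minor numbers when the full %d.%d.%d form doesn't match (parseServerVersion is a hypothetical helper of mine, not something from lib/pq):

package main

import "fmt"

// parseServerVersion tolerates suffixes such as "12.4 (Debian 12.4-1.pgdg100+1)"
// by falling back to only the leading major.minor numbers.
func parseServerVersion(s string) (int, error) {
    var major, minor, patch int
    if n, _ := fmt.Sscanf(s, "%d.%d.%d", &major, &minor, &patch); n == 3 {
        return major*10000 + minor*100 + patch, nil
    }
    if n, _ := fmt.Sscanf(s, "%d.%d", &major, &minor); n == 2 {
        return major*10000 + minor*100, nil
    }
    return 0, fmt.Errorf("unrecognized server_version %q", s)
}

func main() {
    v, err := parseServerVersion("12.4 (Debian 12.4-1.pgdg100+1)")
    fmt.Println(v, err) // prints: 120400 <nil>
}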
But the strange thing is, it will only exit when debugging in GoLand (I don't know about VS Code). It does not happen when I simply build & run; the output is:
[INFO] 2022-11-04 15:30:52 +0800 [start_postgres.go:103] detecting database connecting... pgdir=%v/yak/tmp_build/database
[INFO] 2022-11-04 15:30:52 +0800 [start_postgres.go:106] detected exsited database.
[INFO] 2022-11-04 15:30:52 +0800 [core.go:107] health info manager is loading
[INFO] 2022-11-04 15:30:52 +0800 [manager.go:70] health info: cache 60 infos
[INFO] 2022-11-04 15:30:52 +0800 [core.go:112] start to connection postgres
[INFO] 2022-11-04 15:30:52 +0800 [core.go:117] build basic database manager instance
which is absolutely normal. Then I made a small modification to the source code in lib/pq/conn.go:
rString := r.string()
fmt.Printf("\n%s\n\n", rString)
_, err = fmt.Sscanf(r.string(), "%d.%d.%d", &major1, &major2, &minor)
if err == nil {
    cn.parameterStatus.serverVersion = major1*10000 + major2*100 + minor
}
And here is the strangest thing: the server crashed with an infinite loop of error logging:
[INFO] 2022-11-04 16:37:00 +0800 [start_postgres.go:103] detecting database connecting... pgdir=%v/yak/tmp_build/database
12.4 (Debian 12.4-1.pgdg100+1)
[WARN] 2022-11-04 16:37:00 +0800 [start_postgres.go:110] open database failed: pq: invalid message format; expected string terminator
[INFO] 2022-11-04 16:37:00 +0800 [start_postgres.go:113] try to start a database...
12.4 (Debian 12.4-1.pgdg100+1)
[WARN] 2022-11-04 16:37:02 +0800 [start_postgres.go:206] try pq: invalid message format; expected string terminator times... waiting for the postgres starting up...
12.4 (Debian 12.4-1.pgdg100+1)
[WARN] 2022-11-04 16:37:03 +0800 [start_postgres.go:206] try pq: invalid message format; expected string terminator times... waiting for the postgres starting up...
12.4 (Debian 12.4-1.pgdg100+1)
[WARN] 2022-11-04 16:37:04 +0800 [start_postgres.go:206] try pq: invalid message format; expected string terminator times... waiting for the postgres starting up...
...
I can't figure out why this would happen. I don't think it's in a goroutine, because the main routine is blocked. Can anyone offer some help?
Versions of dependencies and IDE:
GoLand v2022.2.4
go version go1.19.2 linux/amd64
gorm v1.9.2
github.com/lib/pq v1.1.0
postgresql v12.4 (Debian 12.4-1.pgdg100+1)

I figured out the infinite loop of errors: there's a select block after the initial connection. The code is:
// (this select runs inside a retry for-loop; each tick re-attempts the connection)
select {
case <-ticker:
    count++
    conn, err := gorm.Open("postgres", param)
    //conn, err := net.Dial("tcp", "127.0.0.1:5432")
    if err != nil {
        log.Warningf("try %v times... waiting for the postgres starting up...", err)
        continue
    }
    _ = conn.Close()
    return nil
}
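Side note: the %v in that warning receives err rather than the attempt count, which is why the log above reads "try pq: invalid message format; expected string terminator times...". For illustration, a rough sketch of the same retry with both the count and the error logged, and the number of attempts bounded (waitForPostgres, maxRetries and the Warningf logger are my own assumptions, not the real code):

// Rough sketch only: bounded retry around the connection attempt.
// ticker, param, gorm and log are assumed to be the same values/packages
// used in the snippet above.
func waitForPostgres(param string, ticker <-chan time.Time, maxRetries int) error {
    for count := 1; count <= maxRetries; count++ {
        <-ticker
        conn, err := gorm.Open("postgres", param)
        if err != nil {
            log.Warningf("try %d times... waiting for the postgres starting up... err=%v", count, err)
            continue
        }
        _ = conn.Close()
        return nil
    }
    return fmt.Errorf("postgres did not become ready after %d attempts", maxRetries)
}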
So the problem is that when I print r.string(), it returns an error.
It also exits with a panic in debug (GoLand) but works normally when built and run.
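If it helps narrow things down, my guess (an assumption about lib/pq internals that I have not verified) is that r.string() consumes bytes from the ParameterStatus message buffer, so calling it twice, once for the print and once for Sscanf, reads past the string terminator and produces "invalid message format; expected string terminator". A sketch of the same modification that reads the value only once:

// Sketch: read the value once and reuse it, so the message buffer is only
// consumed a single time (assumes r.string() advances the buffer; variable
// names are the ones from lib/pq's conn.go).
rString := r.string()
fmt.Printf("\n%s\n\n", rString)
_, err = fmt.Sscanf(rString, "%d.%d.%d", &major1, &major2, &minor)
if err == nil {
    cn.parameterStatus.serverVersion = major1*10000 + major2*100 + minor
}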

Related

Converting a Storm 1 Kafka Topology to Heron, have a few questions

Been experimenting with switching a Storm 1.0.6 topology to Heron. Taking a baby step by removing all but the Kafka spout to see how things go. Have a main method as follows (modified from the original Flux version):
import org.apache.heron.eco.Eco;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KafkaTopology {
    public static void main(String[] args) throws Exception {
        List<String> argList = new ArrayList<String>(Arrays.asList(args));
        String file = KafkaTopology.class.getClassLoader().getResource("topology.yaml").getFile();
        argList.add("local");
        argList.add("--eco-config-file");
        argList.add(file);
        file = KafkaTopology.class.getClassLoader().getResource("dev.properties").getFile();
        argList.add("--props");
        argList.add(file);
        argList.add("--sleep");
        argList.add("36000000");
        String[] ecoArgs = argList.toArray(new String[argList.size()]);
        Eco.main(ecoArgs);
    }
}
YAML is this:
name: "kafkaTopology-XXX_topologyVersion_XXX"
type: "storm"
config:
topology.workers: ${workers.config}
topology.max.spout.pending: ${max.spout.pending}
topology.message.timeout.secs: 120
topology.testing.always.try.serialize: true
storm.zookeeper.session.timeout: 30000
storm.zookeeper.connection.timeout: 30000
storm.zookeeper.retry.times: 5
storm.zookeeper.retry.interval: 2000
properties:
kafka.mapper.zkServers: ${kafka.mapper.zkServers}
kafka.mapper.zkPort: ${kafka.mapper.zkPort}
bootstrap.servers: ${bootstrap.servers}
kafka.mapper.brokerZkStr: ${kafka.mapper.brokerZkStr}
kafka.topic.name: ${kafka.topic.name}
components:
- id: "zkHosts"
className: "org.apache.storm.kafka.ZkHosts"
constructorArgs:
- ${kafka.mapper.brokerZkStr}
- id: "rawMessageAndMetadataScheme"
className: "org.acme.storm.spout.RawMessageAndMetadataScheme"
- id: "messageMetadataSchemeAsMultiScheme"
className: "org.apache.storm.kafka.MessageMetadataSchemeAsMultiScheme"
constructorArgs:
- ref: "rawMessageAndMetadataScheme"
- id: "kafkaSpoutConfig"
className: "org.apache.storm.kafka.SpoutConfig"
constructorArgs:
# brokerHosts
- ref: "zkHosts"
# topic
- ${kafka.topic.name}
# zkRoot
- "/zkRootKafka.kafkaSpout.builder"
# id
- ${kafka.topic.name}
properties:
- name: "scheme"
ref: "messageMetadataSchemeAsMultiScheme"
- name: zkServers
value: ${kafka.mapper.zkServers}
- name: zkPort
value: ${kafka.mapper.zkPort}
# Retry Properties
- name: "retryInitialDelayMs"
value: 60000
- name: "retryDelayMultiplier"
value: 1.5
- name: "retryDelayMaxMs"
value: 14400000
- name: "retryLimit"
value: 0
# spout definitions
spouts:
- id: "kafka-spout"
className: "org.apache.storm.kafka.KafkaSpout"
parallelism: ${kafka.spout.parallelism}
constructorArgs:
- ref: "kafkaSpoutConfig"
Relevant POM entries:
<dependency>
    <groupId>org.apache.heron</groupId>
    <artifactId>heron-api</artifactId>
    <version>0.20.3-incubating</version>
</dependency>
<dependency>
    <groupId>org.apache.heron</groupId>
    <artifactId>heron-storm</artifactId>
    <version>0.20.3-incubating</version>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.6</version>
</dependency>
Main method seems to run fine:
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.parser.EcoParser loadTopologyFromYaml
INFO: Parsing eco config file
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.parser.EcoParser loadTopologyFromYaml
INFO: Performing property substitution.
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.parser.EcoParser loadTopologyFromYaml
INFO: Performing environment variable substitution.
topology type is Storm
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildConfig
INFO: Building topology config
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: ---------- TOPOLOGY DETAILS ----------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: Topology Name: kafkaTopology-XXX_topologyVersion_XXX
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: --------------- SPOUTS ---------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: kafka-spout [1] (org.apache.storm.kafka.KafkaSpout)
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: ---------------- BOLTS ---------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: --------------- STREAMS ---------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.Eco printTopologyInfo
INFO: --------------------------------------
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building components
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building spouts
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building bolts
Apr 30, 2021 4:38:49 PM org.apache.heron.eco.builder.storm.EcoBuilder buildTopologyBuilder
INFO: Building streams
Process finished with exit code 0
Question 1: The topology exits immediately; is there an Eco flag equivalent to Flux's '--sleep' to keep it running for a while (to debug, etc.)?
Question 2: Was a little surprised that I needed to pull storm-kafka in (thought there would be a Heron equivalent) - is this correct (or some other artifact?) and if so, is 1.0.6 an OK version to use or does Heron work better with another version?
Question 3: The above was with type: "storm" in the YAML, trying type: "heron" gives the following error:
INFO: Building spouts
Exception in thread "main" java.lang.ClassCastException: class org.apache.storm.kafka.KafkaSpout cannot be cast to class org.apache.heron.api.spout.IRichSpout (org.apache.storm.kafka.KafkaSpout and org.apache.heron.api.spout.IRichSpout are in unnamed module of loader 'app')
at org.apache.heron.eco.builder.heron.SpoutBuilder.buildSpouts(SpoutBuilder.java:42)
at org.apache.heron.eco.builder.heron.EcoBuilder.buildTopologyBuilder(EcoBuilder.java:70)
at org.apache.heron.eco.Eco.submit(Eco.java:125)
at org.apache.heron.eco.Eco.main(Eco.java:161)
at KafkaTopology.main(KafkaTopology.java:26)
Process finished with exit code 1
Is this just the way it is using Kafka, type needs to be storm and not heron, or is there some workaround here?
Question 1: I'm not sure why the topology would shut down on you. Try running your submit with the --verbose flag. At this time the functionality of the --sleep argument does not exist; it could be added as a feature if you need it.
Question 2: There is a Heron equivalent. After Heron was donated to Apache, quite a lot of work had to be done to get binary releases out. Most of that work has been done; with the next release I would hope that all binary artifacts will be distributed appropriately.
Question 3: This issue occurs because, based on the type specified, Eco looks for bolts/spouts in a certain package. When "storm" is specified, it expects the classes to implement or extend "org.apache.storm" types; when "heron" is specified, it expects "org.apache.heron" types. If you use the storm-kafka dependency, the type will need to be "storm". The Heron equivalents can be found here: https://search.maven.org/search?q=heron-kafka
There are several Kafka spouts for Heron. I use a clone of Storm's storm-kafka-client (2.1) and use it in production.
https://search.maven.org/artifact/com.github.thinker0.heron/heron-kafka-client/1.0.4.1/jar

How do I run a Beam class in Dataflow which accesses a Google Cloud SQL instance?

When I run my pipeline from my local machine, I can update the table which resides in the Cloud SQL instance. But when I moved this to run using DataflowRunner, the same fails with the exception below.
To connect from my Eclipse, I created the data source config as
.create("com.mysql.jdbc.Driver", "jdbc:mysql://<ip of sql instance > :3306/mydb") .
The same i changed to
.create("com.mysql.jdbc.GoogleDriver", "jdbc:google:mysql://<project-id>:<instance-name>/my-db") while running through the Dataflow runner.
Should I prefix the zone information of the instance to the instance name?
The exception I get when I run this is given below:
Jun 22, 2017 6:53:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:23:51.583Z: (840be37ab35d3d0d): Starting 2 workers in us-central1-f...
Jun 22, 2017 6:53:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:23:51.634Z: (dabfae1dc9365d10): Executing operation JdbcIO.Read/Create.Values/Read(CreateSource)+JdbcIO.Read/ParDo(Read)+JdbcIO.Read/ParDo(Anonymous)+JdbcIO.Read/GroupByKey/Reify+JdbcIO.Read/GroupByKey/Write
Jun 22, 2017 6:54:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:24:44.762Z: (21395b94f8bf7f61): Workers have started successfully.
SEVERE: 2017-06-22T13:25:30.214Z: (3b988386f963503e): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot load JDBC driver class 'com.mysql.jdbc.GoogleDriver'
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:125)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:105)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:92)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot load JDBC driver class 'com.mysql.jdbc.GoogleDriver'
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$M7MKjX9p.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:65)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:47)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:100)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:365)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Any help to fix this is really appreciated. This is my first attempt to run a beam pipeline as a dataflow job.
PipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
((DataflowPipelineOptions) options).setNumWorkers(2);
((DataflowPipelineOptions) options).setProject("xxxxx");
((DataflowPipelineOptions) options).setStagingLocation("gs://xxxx/staging");
((DataflowPipelineOptions) options).setRunner(DataflowRunner.class);
((DataflowPipelineOptions) options).setStreaming(false);
options.setTempLocation("gs://xxxx/tempbucket");
options.setJobName("sqlpipeline");

PCollection<Account> collection = dataflowPipeline.apply(JdbcIO.<Account>read()
    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration
        .create("com.mysql.jdbc.GoogleDriver", "jdbc:google:mysql://project-id:testdb/db")
        .withUsername("root").withPassword("root"))
    .withQuery(
        "select account_id,account_parent,account_description,account_type,account_rollup,Custom_Members from account")
    .withCoder(AvroCoder.of(Account.class))
    .withStatementPreparator(new JdbcIO.StatementPreparator() {
        public void setParameters(PreparedStatement preparedStatement) throws Exception {
            preparedStatement.setFetchSize(1);
            preparedStatement.setFetchDirection(ResultSet.FETCH_FORWARD);
        }
    })
    .withRowMapper(new JdbcIO.RowMapper<Account>() {
        public Account mapRow(ResultSet resultSet) throws Exception {
            Account account = new Account();
            account.setAccount_id(resultSet.getInt("account_id"));
            account.setAccount_parent(resultSet.getInt("account_parent"));
            account.setAccount_description(resultSet.getString("account_description"));
            account.setAccount_type(resultSet.getString("account_type"));
            account.setAccount_rollup("account_rollup");
            account.setCustom_Members("Custom_Members");
            return account;
        }
    }));
Have you properly pulled in the com.google.cloud.sql/mysql-socket-factory maven dependency? Looks like you are failing to load the class.
https://cloud.google.com/appengine/docs/standard/java/cloud-sql/#Java_Connect_to_your_database
Hi, I think it's better to go with "com.mysql.jdbc.Driver", because the Google driver is meant for App Engine deployments.
So, as it goes, this is what my pipeline configuration looks like, and it works perfectly fine for me:
PCollection<KV<Double, Double>> exchangeRates = p.apply(JdbcIO.<KV<Double, Double>>read()
    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create("com.mysql.jdbc.Driver",
        "jdbc:mysql://ip:3306/dbname?user=root&password=root&useUnicode=true&characterEncoding=UTF-8"))
    .withQuery(
        "SELECT PERIOD_YEAR, PERIOD_YEAR FROM SALE")
    .withCoder(KvCoder.of(DoubleCoder.of(), DoubleCoder.of()))
    .withRowMapper(new JdbcIO.RowMapper<KV<Double, Double>>() {
        @Override
        public KV<Double, Double> mapRow(java.sql.ResultSet resultSet) throws Exception {
            LOG.info(resultSet.getDouble(1) + "Came");
            return KV.of(resultSet.getDouble(1), resultSet.getDouble(2));
        }
    }));
Hope it will help

akka-http no stack trace or details on error

I have a structure which can basically be summarized as:
outside user makes a REST request to the akka-http server
akka-http makes a request (query?) to some data source using asynchttpclient
akka-http transforms the result from asynchttpclient and serves it back to the user
At some point I am getting an error from akka which tells me almost nothing. This error happens right after asynchttpclient returns some results. (In fact, at this point I can print the results in the log; they are there, parsed from JSON etc., but akka has already errored out.)
Even at debug logging level I get no decipherable error message or stack trace from akka.
The only message I get is:
2017-03-24 17:22:55 INFO CompanyRepository:111 - search company with name:"somecompanyname"
2017-03-24 17:22:55 INFO CompanyRepository:73 - [QUERY TIME]: 527ms
[ERROR] [03/24/2017 17:22:55.951] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
This error message is the only thing I get. Relevant parts of my config:
akka {
  loglevel = "DEBUG"
  # edit -- tested with sl4jlogger with no change
  #loggers = ["akka.event.slf4j.Slf4jLogger"]
  #logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  parsing {
    max-content-length = 800m
    max-chunk-size = 100m
  }

  server {
    server-header = akka-http/${akka.http.version}
    idle-timeout = 120 s
    request-timeout = 120 s
    bind-timeout = 10s
    max-connections = 1024
    pipelining-limit = 32
    verbose-error-messages = on
  }

  client {
    user-agent-header = akka-http/${akka.http.version}
  }

  host-connection-pool {
    max-connections = 4
  }
}

akka.http.routing {
  verbose-error-messages = on
}
Does anyone know if I can make akka spit out more details about what/where the error is occurring?
Edit: I realized I do NOT get this same error on resultsets which are smaller in size. <- ignore
Edit 2:
Added akka.loglevel = DEBUG; it spits out a lot more noise but still no detail about the actual error.
Converted asynchttpclient to akka quickly to rule out AHC
I already had a wrapper around my query to time it, added some logging there trying to pinpoint when exactly the error is happening.
def queryTimer[ R <: Future[ Any ] ]( block: => R ): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info( "[QUERY TIME]: " + ( t1 - t0 ) + "ms" )
    maybeResult match {
      case Success(some) =>
        logger.info( "successful feature:")
        logger.info( FormattedString.prettyPrint(some))
      case Failure(someFailure) =>
        logger.info( "failed feature:")
        logger.debug( FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
resulting log:
2017-03-28 13:19:10 INFO CompanyRepository:111 - search company with name:"some company"
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] logger log1-Logging$DefaultLogger started
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] Default Loggers started
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] Initializing AkkaSSLConfig extension...
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] buildHostnameVerifier: created hostname verifier: com.typesafe.sslconfig.ssl.DefaultHostnameVerifier#779e2339
[DEBUG] [03/28/2017 13:19:10.633] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/user/pool-master/PoolInterfaceActor-0] (Re-)starting host connection pool to localhost:27474
[DEBUG] [03/28/2017 13:19:10.727] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Resolving localhost before connecting
[DEBUG] [03/28/2017 13:19:10.740] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-DNS] Resolution request for localhost from Actor[akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0#-815754478]
[DEBUG] [03/28/2017 13:19:10.749] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/28/2017 13:19:10.751] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
2017-03-28 13:19:10 INFO CompanyRepository:73 - [QUERY TIME]: 376ms
2017-03-28 13:19:10 INFO CompanyRepository:77 - successful feature:
[ERROR] [03/28/2017 13:19:10.896] [company-api-system-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
2017-03-28 13:19:10 INFO CompanyRepository:78 - SearchResult(List(
( prettyprint output here!!! lots and lots of legit result, json parsed succcesfully into a bunch of case classes)
As you can see, my logging format and akka's are different; the ERROR is coming from akka with no details, while everything else looks like it is working.
Edit 3: logs with sleep in between calls
new query timer function with sleeps
def queryTimer[ R <: Future[ Any ] ]( block: => R ): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info( "[QUERY TIME]: " + ( t1 - t0 ) + "ms" )
    maybeResult match {
      case Success(some) =>
        Thread.sleep(500)
        logger.info( "successful feature:")
        Thread.sleep(500)
        logger.info( FormattedString.prettyPrint(some))
        Thread.sleep(500)
        logger.info("we are there!")
      case Failure(someFailure) =>
        logger.info( "failed feature:")
        logger.debug( FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
logs with sleeps
[DEBUG] [03/30/2017 11:11:58.629] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/30/2017 11:11:58.631] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
11:11:59.442 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:11:59.496 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.250 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - [QUERY TIME]: 1880ms
[ERROR] [03/30/2017 11:12:00.265] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
11:12:00.543 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.597 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.752 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - successful feature:
11:12:01.645 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.697 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.750 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - SearchResult(List( "lots of legit result here"
11:12:02.281 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - we are there!
Edit 4 and solution!
Apparently the default exception handler does not print a stack trace! Overriding the exception handler with a very basic catch-all:
implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case e: Exception => {
      logger.info("---------------- exception log start")
      logger.error(e.getMessage, e)
      logger.error("cause", e.getCause)
      logger.error("cause", e.getStackTraceString)
      logger.info( FormattedString.prettyPrint(e))
      logger.info("---------------- exception log end")
      Directives.complete("server made a boo boo")
    }
  }
results in a stack trace that befuddles the sh*t out of me!!
11:42:04.634 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log start
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - requirement failed
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:212) ~[scala-library-2.11.8.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:121) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:119) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:73) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:68) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.PimpedAny.toJson(package.scala:39) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at scala.collection.immutable.List.map(List.scala:273) ~[scala-library-2.11.8.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:25) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:30) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:19) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:18) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:58) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:57) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$.akka$http$scaladsl$util$FastFuture$$strictTransform$1(FastFuture.scala:41) ~[akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:51) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:50) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.11.8.jar:na]
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.641 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - java.lang.IllegalArgumentException: requirement failed
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log end
So... the exception is caused here, in spray.json.BasicFormats:
implicit object StringJsonFormat extends JsonFormat[String] {
  def write(x: String) = {
    require(x ne null) // <-----------------------------------
    JsString(x)
  }
  def read(value: JsValue) = value match {
    case JsString(x) => x
    case x => deserializationError("Expected String as JsString, but got " + x)
  }
}
which sort of means one of the strings in these thousands of lines of response is null. Special thanks go to the laziness of using that "require" without a message. Debugging which string is null, and where, will be a nightmare, but I still think akka should fail in a better way.
Well, the default akka-http ExceptionHandler doesn't print a stack trace; it prints only the error message, or its class name if the message is empty. But you can provide a custom exception handler that will print anything you want (e.g. the stack trace in your example).
Some examples of how to make a custom exception handler are provided in ExceptionHandlerExamplesSpec.spec on GitHub.
The simplest way in your case seems to be to define your own custom implicit exception handler:
import akka.http.scaladsl.model._
import akka.http.scaladsl.server._
import StatusCodes._
import Directives._
import scala.util.control.NonFatal

implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case NonFatal(e) =>
      logger.error(s"Exception $e at\n${e.getStackTraceString}")
      complete(HttpResponse(InternalServerError, entity = "Internal Server Error"))
  }
Try setting the loggers as well - from your configuration it seems they're not set. Something like:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
}
Also, consider using akka-slf4j along with its recommended logging backend, Logback.
This should make akka spit out more details.

MongoDB getting killed on read load

A standalone mongodb under heavy read load is getting killed.
I see no instance of mongo being OOM-killed in kern.log. Is there any way I can debug the root cause?
I looked at the mongo logs (db version 2.6.1):
mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117720b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f82e5391182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f82e469630d]
2014-08-27T08:37:29.584+0000 [conn28] 808.autoComplete Assertion failure a <= 512*1024*1024 src/mongo/util/alignedbuilder.cpp 104
2014-08-27T08:37:29.601+0000 [conn28] 808.autoComplete 0x11c0e91 0x1163109 0x1146f4e 0x1144f9d 0xa6370f 0xa639c4 0xa563f7 0xa56a9a 0xa53e8f 0xa53f46 0xc3e557 0xc4a0f6 0xb90169 0xb9a388 0x76b6af 0x117720b 0x7f82e5391182 0x7f82e469630d mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x9f) [0x76b6af]
mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117720b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f82e5391182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f82e469630d]
2014-08-27T08:37:29.652+0000 [conn28] dbexception in groupCommit causing immediate shutdown: 0 assertion src/mongo/util/alignedbuilder.cpp:104
2014-08-27T08:37:29.652+0000 [conn28] SEVERE: gc1
2014-08-27T08:37:29.661+0000 [conn28] SEVERE: Got signal: 6 (Aborted).
Backtrace:0x11c0e91 0x11c026e 0x7f82e45d1ff0 0x7f82e45d1f79 0x7f82e45d5388 0xb8c1bb 0xa5683d 0xa56a9a 0xa53e8f 0xa53f46 0xc3e557 0xc4a0f6 0xb90169 0xb9a388 0x76b6af 0x117720b 0x7f82e5391182 0x7f82e469630d

Why does database initialization fail with "ERROR: improper qualified name (too many dotted names)"?

I'm using Scala 2.10 and Slick 2.10-1.0.1 with plain queries.
I tried to initialize a lazily evaluated database with Tomcat at localhost. For query evaluation I use PostgreSQL on port 5432.
When I tried to compile, I got the following error message:
ERROR org.quartz.core.JobRunShell - Job DEFAULT.MissionLifecycleManager threw an unhandled Exception: org.postgresql.util.PSQLException: ERROR: improper qualified name (too many dotted names)
Position: 16
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:381) ~[postgresql-9.1-901.jdbc4.jar:na]
at scala.slick.jdbc.StatementInvoker.results(StatementInvoker.scala:34) ~[slick_2.10-1.0.1.jar:1.0.1]
at scala.slick.jdbc.StatementInvoker.elementsTo(StatementInvoker.scala:17) ~[slick_2.10-1.0.1.jar:1.0.1]
....
This is the code of my initialisation:
import com.weiglewilczek.slf4s.Logging
import scala.slick.driver.PostgresDriver._
import scala.slick.session.Database
import Database.threadLocalSession
import scala.slick.jdbc.{GetResult, StaticQuery => Q}
import scala.slick.driver.PostgresDriver.simple._

object SQLUtilities extends Logging with ServiceInjector {

  lazy val db = init()

  private def init() = {
    info("Connecting to postgres database at localhost") // writes in a log file
    val qe = Database.forURL("jdbc:postgresql://localhost:5432", "user", "pass", driver = "org.postgresql.Driver")
    info("Connected to database")
    qe
  }
}
Obviously, something went wrong, so I think my database initialisation is not correct. Have I forgotten some parameters? Are my parameters correct at all?
Another, not so fatal, question: if I have a method where I want to log something at the beginning and at the end (let's say always the same log messages, but different bodies) as a sign that I entered and left the method... is there a better way to do this than the example here in init()?
Specify a database in your connection string "jdbc:postgresql://localhost:5432/somedatabase".
See http://jdbc.postgresql.org/documentation/head/connect.html