When I tried to connect to Elasticsearch from Spark using basic authentication to create a new index, I got the error below.
The error from Elasticsearch does not give enough information to debug further:
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [HEAD] on [devl_test_index] failed; server[https://<elasticServerHost>:9200] returned [403|Forbidden:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:447)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:539)
at org.elasticsearch.hadoop.rest.RestClient.indexExists(RestClient.java:534)
at org.elasticsearch.hadoop.rest.RestClient.touch(RestClient.java:545)
at org.elasticsearch.hadoop.rest.RestRepository.touch(RestRepository.java:364)
at org.elasticsearch.hadoop.rest.RestService.initSingleIndex(RestService.java:660)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:636)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:65)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:101)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Code used to connect:
Sbt dependency: "org.elasticsearch" % "elasticsearch-hadoop" % "7.5.0"
import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark.sql._

val spark = SparkSession.builder().appName("SparkJDBC")
  .enableHiveSupport()
  .config("spark.es.port", "9200")
  .config("spark.es.nodes", "<elasticServerHost>")
  .config("spark.es.nodes.wan.only", "true")
  .config("spark.es.net.ssl", "true")
  .config("spark.es.net.http.auth.user", "USERNAME")
  .config("spark.es.net.http.auth.pass", "PASSWRD")
  .master("local[*]")
  .getOrCreate()

val df = spark.sql("select * from employee")
df.saveToEs("devl_test_index")
The Elasticsearch server throws this error when the user tries to access an index outside those granted to their group.
In my case, my user group has access to indices starting with employee*, but I tried to access an index starting with devl*.
If you run Spark in debug mode, you will get the actual error with more information, like below:
20/02/19 10:38:57 DEBUG wire.header: << "HTTP/1.1 403 Forbidden[\r][\n]"
20/02/19 10:38:57 DEBUG wire.header: << "HTTP/1.1 403 Forbidden[\r][\n]"
20/02/19 10:38:57 DEBUG wire.header: << "content-type: application/json; charset=UTF-8[\r][\n]"
20/02/19 10:38:57 DEBUG wire.header: << "content-length: 259[\r][\n]"
20/02/19 10:38:57 DEBUG wire.header: << "[\r][\n]"
20/02/19 10:38:57 DEBUG wire.content: << "{"error":{"root_cause":[{"type":"security_exception","reason":"action [indices:admin/aliases/get] is unauthorized for user [USERNAME]"}],"type":"security_exception","reason":"action [indices:admin/aliases/get] is unauthorized for user [USERNAME]"},"status":403}"
20/02/19 10:38:57 DEBUG sql.EsDataFrameWriter: Provided index name [devl_test_index] is not an alias. Reason: [org.elasticsearch.hadoop.rest.EsHadoopRemoteException: security_exception: action [indices:admin/aliases/get] is unauthorized for user [USERNAME]
null]
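To surface this wire-level output, raise the log level on the underlying HTTP transport. A minimal log4j.properties sketch, assuming the log4j 1.x configuration Spark 2.x ships by default (the logger names follow the standard commons-httpclient convention that elasticsearch-hadoop uses internally):

# Show the raw HTTP exchange from the commons-httpclient transport
log4j.logger.httpclient.wire=DEBUG
# More detail from the elasticsearch-hadoop REST layer
log4j.logger.org.elasticsearch.hadoop.rest=TRACE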
Internally, Spark checks for the existence of the index before it creates a new one. As part of this check, it hits the alias API: /_all/_alias/devl_test_index
Error response:
{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "action [indices:admin/aliases/get] is unauthorized for user [USERNAME]"
      }
    ],
    "type": "security_exception",
    "reason": "action [indices:admin/aliases/get] is unauthorized for user [USERNAME]"
  },
  "status": 403
}
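Since the role only grants access to indices matching employee*, the fix is to write to an index the role permits. A minimal sketch, where employee_test_index is a hypothetical name chosen to match the granted pattern:

// Works because the user's group grants access to indices matching employee*;
// "employee_test_index" is a hypothetical name matching that pattern.
df.saveToEs("employee_test_index")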
Related
I can consume the produced Protobuf message data from the Kafka topic using a different tool like Conduktor. However, when I try to poll data using JdbcSinkConnector, it throws an exception like
org.apache.kafka.common.errors.SerializationException: Error
deserializing Protobuf message
Please take a look at the following error detail from when I call the Kafka Connect API:
URL : http://localhost:8083/connectors?expand=info&expand=status
JSON Response and trace
{
  "sink_postgres_03_proto":{
    "info":{
      "name":"sink_postgres_03_proto",
      "config":{
        "connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector",
        "connection.password":"54321",
        "topics":"order-messages",
        "value.converter.schema.registry.url":"http://localhost:8081",
        "key.converter.schemas.enable":"false",
        "auto.evolve":"true",
        "connection.user":"postgres",
        "value.converter.schemas.enable":"true",
        "name":"sink_postgres_03_proto",
        "auto.create":"true",
        "connection.url":"jdbc:postgresql://localhost:5432/CallHistoryService",
        "value.converter":"io.confluent.connect.protobuf.ProtobufConverter",
        "insert.mode":"insert",
        "key.converter":"org.apache.kafka.connect.storage.StringConverter"
      },
      "tasks":[
        {
          "connector":"sink_postgres_03_proto",
          "task":0
        }
      ],
      "type":"sink"
    },
    "status":{
      "name":"sink_postgres_03_proto",
      "connector":{
        "state":"RUNNING",
        "worker_id":"kafka-connect:8083"
      },
      "tasks":[
        {
          "id":0,
          "state":"FAILED",
          "worker_id":"kafka-connect:8083",
"trace":"org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:196)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:122)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:472)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic order-messages to Protobuf: \n\tat io.confluent.connect.protobuf.ProtobufConverter.toConnectData(ProtobufConverter.java:123)\n\tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)\n\t... 
13 more\nCaused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Protobuf message for id 1\nCaused by: java.net.ConnectException: Connection refused (Connection refused)\n\tat java.base/java.net.PlainSocketImpl.socketConnect(Native Method)\n\tat java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)\n\tat java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)\n\tat java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)\n\tat java.base/java.net.Socket.connect(Socket.java:609)\n\tat java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)\n\tat java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1592)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)\n\tat java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:272)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:660)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:642)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:217)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaBySubjectAndId(CachedSchemaRegistryClient.java:291)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaById(CachedSchemaRegistryClient.java:276)\n\tat io.confluent.kafka.serializers.protobuf.AbstractKafkaProtobufDeserializer.deserialize(AbstractKafkaProtobufDeserializer.java:117)\n\tat io.confluent.kafka.serializers.protobuf.AbstractKafkaProtobufDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaProtobufDeserializer.java:235)\n\tat io.confluent.connect.protobuf.ProtobufConverter$Deserializer.deserialize(ProtobufConverter.java:163)\n\tat io.confluent.connect.protobuf.ProtobufConverter.toConnectData(ProtobufConverter.java:107)\n\tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)\n\tat 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:122)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:472)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n"
        }
      ],
      "type":"sink"
    }
  }
}
Please advise.
Thank you!
If you take the trace node and convert the \n and \t escapes to newlines and tabs, you get a readable stack trace which shows the problem:
Caused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic order-messages to Protobuf:
at io.confluent.connect.protobuf.ProtobufConverter.toConnectData(ProtobufConverter.java:123)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Protobuf message for id 1
Caused by: java.net.ConnectException: Connection refused (Connection refused)
The error is java.net.ConnectException: Connection refused, which means that either (a) you've misconfigured the location of your Schema Registry or (b) the Schema Registry is not running.
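For reference, the unescaping step is trivial. A minimal Scala sketch, assuming the trace field has already been extracted from the status JSON as a plain String:

// Turn the escaped "trace" field from the Connect status JSON into a
// readable stack trace. `trace` is assumed to hold the raw field value.
def unescapeTrace(trace: String): String =
  trace.replace("\\n", "\n").replace("\\t", "\t")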
My Gatling simulation class:
import scala.concurrent.duration._

import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._

class <MyClass> extends Simulation {

  before {
    println("Simulation is about to start!")
  }

  val sampleTest = scenario("test").exec(karateFeature("classpath:demo/get-user.feature"))

  setUp(
    sampleTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minutes)
  //).assertions(global.responseTime.mean.lt(35))

  after {
    println("Simulation is finished!")
  }
}
My get-user.feature file:
Scenario Outline: Hit wskadmin url
Given http://172.17.0.1:5984/whisk_local_subjects/guest
And header Authorization = AdminAuth
And header Content-Type = 'application/json'
When method get
Then status <stat>
* print result
Examples:
| stat |
| 200 |
When I run the simulation class, I get the console logs below:
Simulation com.karate.openwhisk.performance.SmokePerformanceTest started...
13:20:48.877 [GatlingSystem-akka.actor.default-dispatcher-5] INFO i.gatling.core.controller.Controller - InjectionStopped expectedCount=1
13:20:49.473 [GatlingSystem-akka.actor.default-dispatcher-4] INFO com.intuit.karate - karate.env system property was: null
13:20:49.525 [GatlingSystem-akka.actor.default-dispatcher-7] INFO com.intuit.karate - [print] I am here in get-user
13:20:49.706 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG com.intuit.karate - request:
1 > GET http://172.17.0.1:5984/whisk_local_subjects/guest
1 > Accept-Encoding: gzip,deflate
1 > Authorization: Basic d2hpc2tfYWRtaW46c29tZV9wYXNzdzByZA==
1 > Connection: Keep-Alive
1 > Content-Type: application/json
1 > Host: 172.17.0.1:5984
1 > User-Agent: Apache-HttpClient/4.5.5 (Java/1.8.0_144)
13:20:49.741 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG com.intuit.karate - response time in milliseconds: 34
1 < 200
Note: here I am getting the response in 34 milliseconds, but Gatling is unable to generate the report. Below is the error message I am getting.
Error:
Generating reports...
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at io.gatling.mojo.MainWithArgsInFile.runMain(MainWithArgsInFile.java:50)
at io.gatling.mojo.MainWithArgsInFile.main(MainWithArgsInFile.java:33)
Caused by: java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
at io.gatling.charts.report.ReportsGenerator.generateFor(ReportsGenerator.scala:48)
at io.gatling.app.RunResultProcessor.generateReports(RunResultProcessor.scala:76)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:55)
at io.gatling.app.Gatling$.start(Gatling.scala:68)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:45)
at io.gatling.app.Gatling$.main(Gatling.scala:37)
at io.gatling.app.Gatling.main(Gatling.scala)
... 6 more
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.199 s
[INFO] Finished at: 2018-07-24T13:20:50+05:30
[INFO] Final Memory: 30M/332M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.gatling:gatling-maven-plugin:2.2.4:test (default-cli) on project
openwhisk: Gatling failed.: Process exited with an error: 255 (Exit
value: 255) -> [Help 1]
But if I run the same simulation class with one simple change in the feature file, as below:
Scenario Outline: Hit wskadmin url
Given http://172.17.0.1:5984/whisk_local_subjects/guest
And header Authorization = AdminAuth
And header Content-Type = 'application/json'
When method get
Then status <stat>
* print result
Examples:
| stat |
| 200 |
| 200 |
Then Gatling generates the report.
Can someone please help me find the root cause?
Thank you for your interest in karate-gatling and the very detailed report.
This is a bug, which we have fixed and just made a release for.
Can you upgrade your karate-gatling version to 0.8.0.1 and let me know how it goes?
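For reference, a sketch of the corresponding dependency bump in sbt form (Maven users would bump the <version> of the same artifact):

// karate-gatling 0.8.0.1 contains the fix for the missing-report bug
libraryDependencies += "com.intuit.karate" % "karate-gatling" % "0.8.0.1" % "test"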
I was trying to make a SOAP call with the following code:
MessageContentsList response = producerTemplate
.requestBodyAndHeaders(endpoint, request, headers, MessageContentsList.class);
The code was returning a truncated SOAP response. The logs showed:
at org.apache.camel.util.ObjectHelper.wrapCamelExecutionException(ObjectHelper.java:1772) ~[camel-core-2.19.0.jar:2.19.0]
at org.apache.camel.util.ExchangeHelper.extractResultBody(ExchangeHelper.java:695) ~[camel-core-2.19.0.jar:2.19.0]
at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:515) ~[camel-core-2.19.0.jar:2.19.0]
at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:511) ~[camel-core-2.19.0.jar:2.19.0]
at org.apache.camel.impl.DefaultProducerTemplate.requestBodyAndHeaders(DefaultProducerTemplate.java:362) ~[camel-core-2.19.0.jar:2.19.0]
at com.sia.csl.schema.cmservicingpax.services.getpaxrecord.GetPaxRecordService.getVendorResponse(GetPaxRecordService.java:94) [classes/:?]
Caused by: org.apache.cxf.interceptor.Fault: Unmarshalling Error: Unexpected end of input block; expected an identifier
at [row,col {unknown-source}]: [1,878852]
at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:908) ~[cxf-rt-databinding-jaxb-3.1.11.jar:3.1.11]
at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:714) ~[cxf-rt-databinding-jaxb-3.1.11.jar:3.1.11]
at org.apache.cxf.jaxb.io.DataReaderImpl.read(DataReaderImpl.java:179) ~[cxf-rt-databinding-jaxb-3.1.11.jar:3.1.11]
at org.apache.cxf.wsdl.interceptors.DocLiteralInInterceptor.handleMessage(DocLiteralInInterceptor.java:109) ~[cxf-rt-wsdl-3.1.11.jar:3.1.11]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) ~[cxf-core-3.1.11.jar:3.1.11]
at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:780) ~[cxf-core-3.1.11.jar:3.1.11]
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1680) ~[cxf-rt-transports-http-3.1.11.jar:3.1.11]
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream$1.run(HTTPConduit.java:1176) ~[cxf-rt-transports-http-3.1.11.jar:3.1.11]
at org.apache.cxf.workqueue.AutomaticWorkQueueImpl$3.run(AutomaticWorkQueueImpl.java:428) ~[cxf-core-3.1.11.jar:3.1.11]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_91]
at org.apache.cxf.workqueue.AutomaticWorkQueueImpl$AWQThreadFactory$1.run(AutomaticWorkQueueImpl.java:353) ~[cxf-core-3.1.11.jar:3.1.11]
I have a structure which can basically be summarized as:
an outside user makes a REST request to the akka-http server
akka-http makes a request (query?) to a data source using asynchttpclient
akka-http transforms the result from asynchttpclient and serves it back to the user
At some point I am getting an error from akka which tells me almost nothing. This error happens right after asynchttpclient returns some results. (I can in fact print the results to the log at this point; they are there, parsed from JSON etc., but akka has already errored out.)
Even at debug logging level I get no decipherable error message or stack trace from akka.
The only message I get is:
2017-03-24 17:22:55 INFO CompanyRepository:111 - search company with name:"somecompanyname"
2017-03-24 17:22:55 INFO CompanyRepository:73 - [QUERY TIME]: 527ms
[ERROR] [03/24/2017 17:22:55.951] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
This error message is the only thing I get. Relevant parts of my config:
akka {
loglevel = "DEBUG"
# edit -- tested with the slf4j logger with no change
#loggers = ["akka.event.slf4j.Slf4jLogger"]
#logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
parsing {
max-content-length = 800m
max-chunk-size = 100m
}
server {
server-header = akka-http/${akka.http.version}
idle-timeout = 120 s
request-timeout = 120 s
bind-timeout = 10s
max-connections = 1024
pipelining-limit = 32
verbose-error-messages = on
}
client {
user-agent-header = akka-http/${akka.http.version}
}
host-connection-pool {
max-connections = 4
}
}
akka.http.routing {
verbose-error-messages = on
}
Does anyone know if I can make akka spit out more details about what/where the error is occurring?
Edit: I realized I do NOT get this same error on resultsets which are smaller in size. <- ignore
Edit 2:
Added akka.loglevel = DEBUG; it spits out a lot more noise but still no detail about the actual error.
Quickly converted the asynchttpclient calls to akka's client to rule out AHC.
I already had a wrapper around my query to time it; I added some logging there to try to pinpoint exactly when the error happens.
def queryTimer[R <: Future[Any]](block: => R): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info("[QUERY TIME]: " + (t1 - t0) + "ms")
    maybeResult match {
      case Success(some) =>
        logger.info("successful feature:")
        logger.info(FormattedString.prettyPrint(some))
      case Failure(someFailure) =>
        logger.info("failed feature:")
        logger.debug(FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
Resulting log:
2017-03-28 13:19:10 INFO CompanyRepository:111 - search company with name:"some company"
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] logger log1-Logging$DefaultLogger started
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] Default Loggers started
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] Initializing AkkaSSLConfig extension...
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] buildHostnameVerifier: created hostname verifier: com.typesafe.sslconfig.ssl.DefaultHostnameVerifier#779e2339
[DEBUG] [03/28/2017 13:19:10.633] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/user/pool-master/PoolInterfaceActor-0] (Re-)starting host connection pool to localhost:27474
[DEBUG] [03/28/2017 13:19:10.727] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Resolving localhost before connecting
[DEBUG] [03/28/2017 13:19:10.740] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-DNS] Resolution request for localhost from Actor[akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0#-815754478]
[DEBUG] [03/28/2017 13:19:10.749] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/28/2017 13:19:10.751] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
2017-03-28 13:19:10 INFO CompanyRepository:73 - [QUERY TIME]: 376ms
2017-03-28 13:19:10 INFO CompanyRepository:77 - successful feature:
[ERROR] [03/28/2017 13:19:10.896] [company-api-system-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
2017-03-28 13:19:10 INFO CompanyRepository:78 - SearchResult(List(
( prettyprint output here!!! lots and lots of legit result, JSON parsed successfully into a bunch of case classes)
As you can see, my logging format and akka's are different; the ERROR is coming from akka with no details, while everything looks like it is working.
Edit 3: logs with sleeps in between calls
New query timer function with sleeps:
def queryTimer[R <: Future[Any]](block: => R): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info("[QUERY TIME]: " + (t1 - t0) + "ms")
    maybeResult match {
      case Success(some) =>
        Thread.sleep(500)
        logger.info("successful feature:")
        Thread.sleep(500)
        logger.info(FormattedString.prettyPrint(some))
        Thread.sleep(500)
        logger.info("we are there!")
      case Failure(someFailure) =>
        logger.info("failed feature:")
        logger.debug(FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
Logs with sleeps:
[DEBUG] [03/30/2017 11:11:58.629] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/30/2017 11:11:58.631] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
11:11:59.442 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:11:59.496 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.250 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - [QUERY TIME]: 1880ms
[ERROR] [03/30/2017 11:12:00.265] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
11:12:00.543 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.597 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.752 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - successful feature:
11:12:01.645 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.697 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.750 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - SearchResult(List( "lots of legit result here"
11:12:02.281 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - we are there!
Edit 4 and solution!
Apparently the default exception handler does not print a stack trace! Overriding the exception handler with a very basic catch-all:
implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case e: Exception => {
      logger.info("---------------- exception log start")
      logger.error(e.getMessage, e)
      logger.error("cause", e.getCause)
      logger.error("cause", e.getStackTraceString)
      logger.info(FormattedString.prettyPrint(e))
      logger.info("---------------- exception log end")
      Directives.complete("server made a boo boo")
    }
  }
results in a stack trace that befuddles the sh*t out of me!!
11:42:04.634 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log start
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - requirement failed
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:212) ~[scala-library-2.11.8.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:121) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:119) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:73) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:68) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.PimpedAny.toJson(package.scala:39) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at scala.collection.immutable.List.map(List.scala:273) ~[scala-library-2.11.8.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:25) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:30) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:19) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:18) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:58) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:57) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$.akka$http$scaladsl$util$FastFuture$$strictTransform$1(FastFuture.scala:41) ~[akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:51) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:50) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.11.8.jar:na]
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.641 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - java.lang.IllegalArgumentException: requirement failed
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log end
So... the exception is caused here, in spray.json.BasicFormats:
implicit object StringJsonFormat extends JsonFormat[String] {
  def write(x: String) = {
    require(x ne null) // <-----------------------------------
    JsString(x)
  }
  def read(value: JsValue) = value match {
    case JsString(x) => x
    case x => deserializationError("Expected String as JsString, but got " + x)
  }
}
which sort of means one of the strings in these thousands of lines of response is null. Special thanks goes to the laziness of using that "require" without a message. Debugging which string is null, and where, will be a nightmare, but I still think akka should fail in a better way.
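One way to make this less opaque, as a hedged sketch: shadow the default format with a String format of your own that writes null as JsNull instead of dying on a bare require, so serialization at least completes and the offending payload becomes identifiable:

import spray.json._

// Hypothetical workaround: bring this into scope so it shadows the default
// StringJsonFormat; null values are written as JsNull instead of failing
// with a bare "requirement failed".
implicit object NullTolerantStringFormat extends JsonFormat[String] {
  def write(x: String): JsValue =
    if (x eq null) JsNull else JsString(x)
  def read(value: JsValue): String = value match {
    case JsString(x) => x
    case x           => deserializationError("Expected String as JsString, but got " + x)
  }
}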
akka-http no stack trace or details on error
Well, the default akka-http ExceptionHandler doesn't print a stack trace; it prints only the error message, or its class name if the message is empty. But you can provide a custom exception handler that will print anything you want (e.g. the stack trace in your example).
Some examples of how to write a custom exception handler are provided in ExceptionHandlerExamplesSpec.spec on GitHub.
The simplest way in your case seems to be to define your own custom implicit exception handler:
import akka.http.scaladsl.model._
import akka.http.scaladsl.server._
import StatusCodes._
import Directives._
import scala.util.control.NonFatal

implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case NonFatal(e) =>
      logger.error(s"Exception $e at\n${e.getStackTraceString}")
      complete(HttpResponse(InternalServerError, entity = "Internal Server Error"))
  }
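With that implicit in scope at the point where the route is sealed (e.g. where you bind it), akka-http picks it up automatically. A usage sketch under that assumption, with route, actor system and materializer assumed to already exist:

// The implicit ExceptionHandler above is applied when the route is
// sealed at bind time (akka-http 10.0.x API).
Http().bindAndHandle(route, "localhost", 8080)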
Try setting the loggers as well - from your configuration it seems they're not set. Something like:
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "DEBUG"
logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
}
Also, consider using akka-slf4j along with their recommended logging backend logback.
This should make akka spit out more details.
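A sketch of the matching sbt dependencies, with the akka version assumed from the akka-actor_2.11-2.4.16 artifact visible in your stack trace (the logback version is an assumption):

// akka-slf4j bridges akka logging to SLF4J; logback-classic is the backend.
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-slf4j"      % "2.4.16",
  "ch.qos.logback"    %  "logback-classic" % "1.2.3"
)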
Stack trace:
javax.xml.ws.soap.SOAPFaultException: Could not parse message.
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:143)
Caused by: org.codehaus.stax2.typed.TypedXMLStreamException: ParseError at [row,col]:[2,603]
Message: Element content can not contain child START_ELEMENT when using Typed Access methods
at com.ctc.wstx.sr.BasicStreamReader._constructTypeException(BasicStreamReader.java:5482)
at com.ctc.wstx.sr.BasicStreamReader._constructUnexpectedInTyped(BasicStreamReader.java:5475)
at com.ctc.wstx.sr.BasicStreamReader.getElementText(BasicStreamReader.java:658)
at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.unmarshalFault(Soap11FaultInInterceptor.java:61)
at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.handleMessage(Soap11FaultInInterceptor.java:46)
at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.handleMessage(Soap11FaultInInterceptor.java:35)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
at org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver.onMessage(AbstractFaultChainInitiatorObserver.java:96)
at org.apache.cxf.binding.soap.interceptor.CheckFaultInterceptor.handleMessage(CheckFaultInterceptor.java:69)
at org.apache.cxf.binding.soap.interceptor.CheckFaultInterceptor.handleMessage(CheckFaultInterceptor.java:34)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:658)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2139)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:2022)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1947)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:66)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:632)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:472)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:302)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:254)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:123)
... 57 more