Jetty works for HTTP but not HTTPS - scala

I am trying to create a Jetty consumer. I can get it running successfully using the endpoint URI:
jetty:http://0.0.0.0:8080
However, when I modify the endpoint URI for HTTPS:
jetty:https://0.0.0.0:8443
The page times out while loading. This seems odd, because the Camel documentation states it should work right out of the box.
I have since loaded a signed SSL certificate into Java's default keystore; my attempted implementation to load it (following http://camel.apache.org/jetty.html) is below.
I have a basic Jetty consumer using the akka-camel library with Akka and Scala, for example:
class RestActor extends Actor with Consumer {
  val ksp: KeyStoreParameters = new KeyStoreParameters();
  ksp.setPassword("...");
  val kmp: KeyManagersParameters = new KeyManagersParameters();
  kmp.setKeyStore(ksp);
  val scp: SSLContextParameters = new SSLContextParameters();
  scp.setKeyManagers(kmp);
  val jettyComponent: JettyHttpComponent = CamelExtension(context.system).context.getComponent("jetty", classOf[JettyHttpComponent])
  jettyComponent.setSslContextParameters(scp);

  def endpointUri = "jetty:https://0.0.0.0:8443/"

  def receive = {
    case msg: CamelMessage => {
      ...
    }
    ...
  }
  ...
}
This made some progress: the page no longer times out, but instead gives a "The connection was interrupted" error. I am not sure where to go from here, because Camel is not throwing any exceptions but apparently failing silently somewhere.
Does anybody know what would cause this behavior?

When using Java's keytool I had not specified an output file. It didn't throw an error, so the certificate presumably went somewhere. I created a new keystore and explicitly imported my .crt into it. I then explicitly set the file path to that keystore in the code, and everything works now!
If I had to speculate, things likely failed silently because I was adding the certificate to Jetty's general pool of certificates to use if eligible, instead of explicitly binding it as the SSL certificate for the endpoint.
class RestActor extends Actor with Consumer {
  // Point at the explicitly created keystore instead of relying on the default one
  val ksp: KeyStoreParameters = new KeyStoreParameters();
  ksp.setResource("/path/to/keystore");
  ksp.setPassword("...");
  val kmp: KeyManagersParameters = new KeyManagersParameters();
  kmp.setKeyStore(ksp);
  val scp: SSLContextParameters = new SSLContextParameters();
  scp.setKeyManagers(kmp);
  val jettyComponent: JettyHttpComponent = CamelExtension(context.system).context.getComponent("jetty", classOf[JettyHttpComponent])
  jettyComponent.setSslContextParameters(scp);

  def endpointUri = "jetty:https://0.0.0.0:8443/"

  def receive = {
    case msg: CamelMessage => {
      ...
    }
    ...
  }
  ...
}
Hopefully somebody can use this code in the future as a template for implementing Jetty over SSL with akka-camel (surprisingly, no examples seem to exist).

Related

NullPointerException in Flink custom SourceFunction

I wanted to create a SourceFunction which reads an HTTP stream.
I used scalaj-http, which does what I want (it splits the incoming text on newlines).
Obviously the code works outside Flink, but I get a NullPointerException every time I start it as a Flink job (sometimes immediately, sometimes 1-2 seconds after it has transmitted 1-2 elements). It looks like the Http object has some problem.
import org.apache.flink.streaming.api.functions.source.SourceFunction
import scala.io.Source.fromInputStream
import scalaj.http._

class HttpSource(url: String) extends SourceFunction[String] {
  @volatile var isRunning = true

  override def cancel(): Unit = isRunning = false

  override def run(ctx: SourceFunction.SourceContext[String]): Unit =
    httpStream(ctx.collect)

  private def httpStream(f: String => Unit) = {
    val request = Http(url)
    request
      .execute { inputStream =>
        fromInputStream(inputStream)
          .getLines()
          .takeWhile(_ => isRunning)
          .foreach(f)
      }
  }
}
Here's the exception I usually get:
(Sometimes it's a bit different; for example, when I tried making the request value transient, it was already null when the code tried to refer to request.)
Caused by: java.lang.NullPointerException
at java.io.Reader.<init>(Reader.java:78)
at java.io.InputStreamReader.<init>(InputStreamReader.java:129)
at scala.io.BufferedSource.reader(BufferedSource.scala:24)
at scala.io.BufferedSource.bufferedReader(BufferedSource.scala:25)
at scala.io.BufferedSource.scala$io$BufferedSource$$charReader$lzycompute(BufferedSource.scala:35)
at scala.io.BufferedSource.scala$io$BufferedSource$$charReader(BufferedSource.scala:33)
at scala.io.BufferedSource.scala$io$BufferedSource$$decachedReader(BufferedSource.scala:62)
at scala.io.BufferedSource$BufferedLineIterator.<init>(BufferedSource.scala:67)
at scala.io.BufferedSource.getLines(BufferedSource.scala:86)
at flinkextension.HttpSource$$anonfun$httpStream$1.apply(HttpSource.scala:21)
at flinkextension.HttpSource$$anonfun$httpStream$1.apply(HttpSource.scala:19)
at scalaj.http.HttpRequest$$anonfun$execute$1.apply(Http.scala:323)
at scalaj.http.HttpRequest$$anonfun$execute$1.apply(Http.scala:323)
at scalaj.http.HttpRequest$$anonfun$toResponse$3.apply(Http.scala:388)
at scalaj.http.HttpRequest$$anonfun$toResponse$3.apply(Http.scala:380)
at scala.Option.getOrElse(Option.scala:121)
at scalaj.http.HttpRequest.toResponse(Http.scala:380)
at scalaj.http.HttpRequest.scalaj$http$HttpRequest$$doConnection(Http.scala:360)
at scalaj.http.HttpRequest.exec(Http.scala:335)
at scalaj.http.HttpRequest.execute(Http.scala:323)
at flinkextension.HttpSource.httpStream(HttpSource.scala:19)
at flinkextension.HttpSource.run(HttpSource.scala:14)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:95)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:263)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)
Everything else seems to work fine: there is no problem when I don't use an HTTP request but something else, such as reading a file through the same InputStream type, a plain while loop producing strings, or even single (non-streaming) HTTP requests.
I feel like I'm missing some theoretical background; maybe Flink does something in the background that destroys the Http object or the InputStream, but I didn't find anything in the documentation.
UPDATE #1:
If I put a null check into the lambda, the job usually exits immediately; sometimes it processes a few elements, and sometimes it times out after hanging for a minute. Here's that version of the httpStream function:
private def httpStream(f: String => Unit) = {
  val request = Http(url)
  request
    .execute { inputStream =>
      if (inputStream == null) println("null inputstream")
      else {
        println("not null inputstream")
        fromInputStream(inputStream)
          .getLines()
          .takeWhile(_ => isRunning)
          .foreach(f)
      }
    }
}
UPDATE #2:
The code actually works in distributed mode and with StreamExecutionEnvironment.createLocalEnvironment().
I only experience the issue when I use start-local.sh and submit the jar to it.

Passing validation exceptions from Camel to CXF SOAP service

I have a problem that I have not been able to solve for some time, and being new to Apache Camel does not help.
My simple app exposes a SOAP web service using CXF (with Jetty as the HTTP engine); SOAP requests are then passed to Akka actors using Camel.
I want to validate SOAP requests on the way to the actor and check whether they contain certain headers and content values. I do not want to use a CXF interceptor. The problem is that whatever happens in Camel (an exception, a returned fault message) is not propagated to CXF. I always get a 202 SUCCESS response, and the validation exception only shows up in the logs.
This is my simple app:
class RequestActor extends Actor with WithLogger {
  def receive = {
    case CamelMessage(body: Request, headers) =>
      logger.info(s"Received Request $body [$headers]")
    case msg: CamelMessage =>
      logger.error(s"unknown message ${msg.body}")
  }
}

class CustomRouteBuilder(endpointUrl: String, serviceClassPath: String, system: ActorSystem)
  extends RouteBuilder {

  def configure {
    val requestActor = system.actorOf(Props[RequestActor])

    from(s"cxf:${endpointUrl}?serviceClass=${serviceClassPath}")
      .onException(classOf[PredicateValidationException])
        .handled(true)
        .process(new Processor {
          override def process(exchange: Exchange): Unit = {
            val message = MessageFactory.newInstance().createMessage();
            val envelope = message.getSOAPPart().getEnvelope();
            val body = message.getSOAPBody();
            val fault = body.addFault();
            fault.setFaultCode("Server");
            fault.setFaultString("Unexpected server error.");
            val detail = fault.addDetail();
            val entryName = envelope.createName("message");
            val entry = detail.addDetailEntry(entryName);
            entry.addTextNode("The server is not able to complete the request. Internal error.");
            log.info(s"Returning $message")
            exchange.getOut.setFault(true)
            exchange.getOut.setBody(message)
          }
        })
      .end()
      .validate(header("attribute").isEqualTo("for_sure_not_defined"))
      .to(genericActor)
  }
}

object Init extends App {
  implicit val system = ActorSystem("superman")
  val camel = CamelExtension(system)
  val camelContext = camel.context
  val producerTemplate = camel.template

  val endpointClassPath = classOf[Service].getName
  val endpointUrl = "http://localhost:1234/endpoint"

  camel.context.addRoutes(new CustomRouteBuilder(endpointUrl, endpointClassPath, system))
}
When I run the app I see the log line from log.info(s"Returning $message"), so I'm sure the route invokes the processor; the actor is also not invoked, so the lines:
exchange.getOut.setFault(true)
exchange.getOut.setBody(message)
do their job. But my SOAP service still returns 202 SUCCESS instead of the fault information.
I'm not sure if this is what you are looking for, but I handled exceptions for a CXF endpoint differently. I had to return HTTP 500 with custom details in the SOAPFault (validation error messages etc.), so:
1. Keep the exception unhandled by Camel so that it is passed to CXF: .onException(classOf[PredicateValidationException]).handled(false)
2. Create an org.apache.cxf.interceptor.Fault object (not a SOAP fault) with all the needed details taken from the exception. It allows setting a custom detail element, a custom fault code element, and a message.
3. Finally, replace the Exchange.EXCEPTION_CAUGHT property with that CXF fault: exchange.setProperty(Exchange.EXCEPTION_CAUGHT, cxfFault)
The resulting message from the CXF endpoint is an HTTP 500 with a SOAPFault in the body containing the details I set in cxfFault.
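In code, those three steps might look roughly like this (Camel 2.x style; the fault code, detail element names, and processor wiring are illustrative assumptions, not taken from the answer):

import javax.xml.namespace.QName
import javax.xml.parsers.DocumentBuilderFactory
import org.apache.camel.{Exchange, Processor}

// Sketch: turn the caught validation exception into a CXF Fault so CXF
// renders an HTTP 500 with a SOAPFault instead of returning 202.
val cxfFaultProcessor = new Processor {
  override def process(exchange: Exchange): Unit = {
    val cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, classOf[Exception])
    val cxfFault = new org.apache.cxf.interceptor.Fault(cause)
    cxfFault.setFaultCode(new QName("Client"))
    // Optional custom <detail> content (names are illustrative)
    val doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument()
    val detail = doc.createElement("detail")
    val messageEl = doc.createElement("message")
    messageEl.setTextContent("Request validation failed")
    detail.appendChild(messageEl)
    cxfFault.setDetail(detail)
    // Let CXF see the CXF fault instead of the original exception
    exchange.setProperty(Exchange.EXCEPTION_CAUGHT, cxfFault)
  }
}

// In the route: leave the exception unhandled so it propagates to CXF
// onException(classOf[PredicateValidationException])
//   .handled(false)
//   .process(cxfFaultProcessor)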
Camel is only looking at the in-portion of the exchange but you are modifying the out-portion.
Try changing
exchange.getOut.setFault(true)
exchange.getOut.setBody(message)
to
exchange.getIn.setFault(true)
exchange.getIn.setBody(message)
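Applied to the route above, the corrected fault processor would look roughly like this (a sketch only; the SOAP detail entries are omitted and nothing else in the route changes):

import javax.xml.soap.MessageFactory
import org.apache.camel.{Exchange, Processor}

// Sketch: same fault-building processor, but writing to the in-message,
// which is the portion Camel hands back to the CXF consumer endpoint.
val faultProcessor = new Processor {
  override def process(exchange: Exchange): Unit = {
    val message = MessageFactory.newInstance().createMessage()
    val fault = message.getSOAPBody.addFault()
    fault.setFaultCode("Server")
    fault.setFaultString("Unexpected server error.")
    // Changed lines: in-portion instead of out-portion
    exchange.getIn.setFault(true)
    exchange.getIn.setBody(message)
  }
}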

Terminate Akka-Http Web Socket connection asynchronously

WebSocket connections in Akka HTTP are treated as an Akka Streams Flow. This works great for basic request-reply, but it gets more complex when messages should also be pushed out over the WebSocket. The core of my server looks roughly like this:
lazy val authSuccessMessage = Source.fromFuture(someApiCall)

lazy val messageFlow = requestResponseFlow
  .merge(updateBroadcastEventSource)

lazy val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)

handleWebSocketMessages {
  handler
}
Here, codec is a (de)serialization BidiFlow and authGate is a BidiFlow that processes an authorization message and prevents outflow of any messages until authorization succeeds. Upon success, it sends authSuccessMessage as a reply. requestResponseFlow is the standard request-reply pattern, and updateBroadcastEventSource mixes in async push messages.
I want to be able to send an error message and terminate the connection gracefully in certain situations, such as bad authorization, someApiCall failing, or a bad request processed by requestResponseFlow. So basically, it seems like I want to be able to asynchronously complete messageFlow with one final message, even though its other constituent flows are still alive.
Figured out how to do this using a KillSwitch.
Updated version
The old version had the problem that it didn't seem to work when triggered by a BidiFlow stage higher up in the stack (such as my authGate). I'm not sure exactly why, but modeling the shutoff as a BidiFlow itself, placed further up the stack, resolved the issue.
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the connection. It is triggered when `shutoffPromise`
 * completes, and sends a final optional termination message if that
 * promise resolves with one.
 */
val shutoffBidi = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  val terminationMessageBidi = BidiFlow.fromFlows(
    Flow[IncomingWebsocketEventOrAuthorize],
    Flow[OutgoingWebsocketEvent].merge(terminationMessageSource)
  )

  val terminator = BidiFlow
    .fromGraph(KillSwitches.singleBidi[IncomingWebsocketEventOrAuthorize, OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach { _ => println("Shutting down connection"); killSwitch.shutdown() }
    }

  terminationMessageBidi.atop(terminator)
}
Then I apply it just inside the codec:
val handler = codec
  .atop(shutoffBidi)
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)
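To trigger the shutoff, something elsewhere in the server simply completes the promise, optionally with a final message. A hedged usage sketch (ErrorEvent is an assumed OutgoingWebsocketEvent subtype, not part of the original code):

import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical trigger: if the auth API call fails, emit one last error event
// and let the kill switch wired up above close the connection.
someApiCall.failed.foreach { cause =>
  shutoffPromise.trySuccess(Some(ErrorEvent(s"closing connection: ${cause.getMessage}")))
}

// Terminating without a final message:
// shutoffPromise.trySuccess(None)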
Old version
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the flow of outgoing messages. It is triggered when
 * `shutoffPromise` completes, and sends a final optional termination
 * message if that promise resolves with one.
 */
val shutoffFlow = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  Flow
    .fromGraph(KillSwitches.single[OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach(_ => killSwitch.shutdown())
    }
    .merge(terminationMessageSource)
}
Then handler looks like:
val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow via shutoffFlow)

As websocket connections increase app starts to hang

I was just invited to look at an issue in a third-party application where, as more clients connect (using WebSockets), the app hangs after a certain number of connections. I am trying to get more information and better access to the codebase, but below is what I have right now, which looks like a standard code flow. Any gotchas to keep in mind when Play, Akka, and WebSockets are in the mix? I will post more information as it becomes available.
Controller has
def service = WebSocket.async[JsValue] { request =>
  Service.createConnection
}
Service.createConnection looks like:
def createConnection: Future[(Iteratee[JsValue, _], Enumerator[JsValue])] = {
  val serviceActor = Akka.system.actorOf(Props[ServiceActor])
  val socket_id = UUID.randomUUID().toString
  val (enumerator, mChannel) = Concurrent.broadcast[JsValue]
  (serviceActor ? Connect(socket_id, mChannel)).map {
    ...........
  }
}

Use lift as a proxy

I want to forbid full access to the Solr core from outside and allow it to be used only for querying. So I am launching a secondary server with a connector instance inside the Jetty servlet container (besides the main webapp), on a port that is not accessible from the WWW.
When there is an incoming HTTP request to the Liftweb application, I hook it with RestHelper:
object Dispatcher extends RestHelper {
  serve {
    case List("api", a @ _*) JsonGet _ => JString("API is not implemented yet. rest: " + a)
  }
}
Pointing my browser at http://localhost/api/solr/select?q=region I get the response "API is not implemented yet. rest: List(solr, select)", so it seems to work. Now I want to open a connection on the internal port (where Solr resides) in order to pass on the query using the part of the URL after api (i.e. http://localhost:8080/solr/select?q=region). I am catching the trailing REST part of the URL (by means of a @ _*), but how can I access the URL parameters? It would be ideal to pass the raw string (after the api path element) to the Solr instance, to avoid redundant parse/build steps. The same applies to Solr's response: I would like to avoid parsing it and building a JsonResponse.
This seems to be a good example of doing HTTP redirection, but then I would have to open Solr's hidden port, as far as I can understand.
What is the most effective way to cope with this task?
EDIT:
Well, I missed that after JsonGet comes the Req value, which has all the needed info. But is there still a way to avoid needlessly parsing and composing the URL to the hidden port and the JSON response?
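For reference, reading parameters from that Req can look roughly like this (a minimal sketch; the object and parameter names are illustrative):

import net.liftweb.http.rest.RestHelper
import net.liftweb.json.JsonAST.JString

// Sketch: accessing both the parsed parameters and the raw query string
// from the Req matched by JsonGet.
object ParamsSketch extends RestHelper {
  serve {
    case List("api", rest @ _*) JsonGet req =>
      val q = req.param("q").openOr("")            // parsed query parameter
      val raw = req.request.queryString.openOr("") // raw string after '?'
      JString(s"rest=$rest q=$q raw=$raw")
  }
}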
SOLUTION:
This is what I've got, considering Dave's suggestion:
import net.liftweb.common.Full
import net.liftweb.http.{JsonResponse, InMemoryResponse}
import net.liftweb.http.provider.HTTPRequest
import net.liftweb.http.rest.RestHelper
import dispatch.{Http, url}

object ApiDispatcher extends RestHelper {
  private val SOLR_PORT = 8080

  serve { "api" :: Nil prefix {
    case JsonGet(path @ List("solr", "select"), r) =>
      val u = localApiUrl(SOLR_PORT, path, r.request)
      Http(url(u) >> { is =>
        val bytes = Stream.continually(is.read).takeWhile(-1 !=).map(_.toByte).toArray
        val headers = ("Content-Length", bytes.length.toString) ::
          ("Content-Type", "application/json; charset=utf-8") :: JsonResponse.headers
        Full(InMemoryResponse(bytes, headers, JsonResponse.cookies, 200))
      })
  }}

  private def localApiUrl(port: Int, path: List[String], r: HTTPRequest) =
    "%s://localhost:%d/%s%s".format(r.scheme, port, path mkString "/", r.queryString.map("?" + _).openOr(""))
}
I'm not sure that I understand your question, but if you want to return the JSON you receive from Solr without parsing it, you could use a net.liftweb.http.InMemoryResponse that contains an Array[Byte] representation of the JSON.
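A minimal sketch of that idea (the header values are illustrative):

import net.liftweb.common.Full
import net.liftweb.http.InMemoryResponse

// Sketch: wrap already-serialized JSON bytes in a response without parsing them.
def passThroughJson(bytes: Array[Byte]) =
  Full(InMemoryResponse(
    bytes,
    ("Content-Length", bytes.length.toString) ::
      ("Content-Type", "application/json; charset=utf-8") :: Nil,
    Nil, // no cookies
    200
  ))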