Streaming decrypted SOAP response in Axis2 + Rampart

I have this situation:
SOAP client, implemented in Apache Axis2 + Apache Rampart
Received SOAP messages are decrypted using Rampart (the data is encrypted with a public key, if that makes any difference)
Response size is around 4MB
I was curious: since the SOAP response needs to be decrypted, does that mean the data can't be streamed with Apache Axiom? Axiom utilizes StAX, the Streaming API for XML.
That is, to decrypt the message, does Rampart need to have the whole object model tree constructed in memory?

Rampart is based on WSS4J, which uses DOM. This requires converting the message from Axiom to DOM and back, so it's not possible to implement streaming in this case.
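Conceptually, the decryption step forces the full tree into memory before WSS4J ever sees it. A minimal sketch of what that hand-off implies (Scala for illustration; the helper is hypothetical and only approximates what happens inside Rampart, not its actual internals):
import org.apache.axis2.context.MessageContext
import org.apache.axis2.util.XMLUtils
import org.w3c.dom.Element

// Hypothetical helper: what the Axiom-to-DOM conversion means for memory.
def toDomForDecryption(msgCtx: MessageContext): Element = {
  val envelope = msgCtx.getEnvelope // Axiom tree, normally built lazily from StAX
  envelope.build()                  // materializes the entire ~4MB response in memory
  XMLUtils.toDOM(envelope)          // DOM copy that WSS4J operates on for decryption
}
Nothing downstream can proceed until build() has completed, which is why streaming through Rampart isn't possible.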

Related

WireMock serve response from file and keep connection alive

We have a peculiar scenario we want to test.
We'll be consuming an HTTP stream which stays open during a certain time-frame. The stream consists of plain-text lines (CSV) and is streamed using the chunked transfer encoding.
When we connect, we expect to get all the data from (possibly) a file on the server side, and once that bulk has been served the connection stays alive, as it's possible that more data will be transferred over the same connection.
Is it possible for WireMock to serve everything from a file and keep the connection alive (i.e., not send an empty chunk to signal the end of the stream)?
The short answer is no.
While WireMock will keep connections alive by default per the HTTP 1.1 spec, it will always terminate the response once everything has been sent, either via the empty chunk or by setting Content-Length.
What you're trying to do (if I understand correctly) is stream out multiple payloads within the context of a single response, which WireMock doesn't have a means for doing.
A possible solution might be for you to concatenate all your response parts into a single file, although I suspect you've discounted that option for reasons not stated.
Another possibility would be to supply your own FileSource implementation to WireMock and thus provide your own InputStreamSource which would give you more control over how the underlying file(s) are streamed out in the response.
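For the concatenation option, serving one pre-built file is straightforward with the standard stubbing API. A minimal sketch (Scala calling the Java API; the port, URL, and file name are assumptions):
import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.client.WireMock._

val server = new WireMockServer(8080)
server.start()

// Serves __files/combined.csv as the response body; note that WireMock
// will still terminate the response normally once the file has been sent.
server.stubFor(get(urlEqualTo("/stream"))
  .willReturn(aResponse()
    .withHeader("Content-Type", "text/csv")
    .withBodyFile("combined.csv")))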

Akka Streams' ActorPublisher as a Source for web response - how back-pressure works

I use akka-streams' ActorPublisher actor as a streaming per-connection Source of data being sent to an incoming WebSocket or HTTP connection.
ActorPublisher's contract is that downstream regularly requests data by signalling demand (the number of elements it can accept), and I am not supposed to send more elements while the demand is 0. I observe that if I buffer elements when the consumer is slow, the buffer size fluctuates between 1 and 60, but mostly sits near 40-50.
To stream I use akka-http's ability to set WebSocket output and HttpResponse data to a Source of Messages (or ByteStrings).
I wonder how the back-pressure works in this case, when I'm streaming data to a client over the network. How exactly are these numbers calculated? Does it check what's happening at the network level?
The closest I could find for your question "how the back-pressure works in this case" is from the documentation:
Akka HTTP is streaming all the way through, which means that the back-pressure mechanisms enabled by Akka Streams are exposed through all layers – from the TCP layer, through the HTTP server, all the way up to the user-facing HttpRequest and HttpResponse and their HttpEntity APIs.
As to "how these numbers are calculated", I believe that is specified in the configuration settings.
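For reference, the demand contract described in the question looks roughly like this in code. A minimal sketch (the Line element type and the unbounded buffering policy are assumptions for illustration):
import akka.stream.actor.ActorPublisher
import akka.stream.actor.ActorPublisherMessage.{Cancel, Request}
import scala.collection.mutable

// Hypothetical element type.
case class Line(text: String)

class LinePublisher extends ActorPublisher[Line] {
  private val buffer = mutable.Queue.empty[Line]

  def receive: Receive = {
    case line: Line =>
      buffer.enqueue(line) // hold elements while downstream demand is 0
      deliver()
    case Request(_) =>     // downstream signalled additional demand
      deliver()
    case Cancel =>
      context.stop(self)
  }

  // Emit at most totalDemand buffered elements; onNext decrements demand.
  private def deliver(): Unit =
    while (totalDemand > 0 && buffer.nonEmpty)
      onNext(buffer.dequeue())
}
Such an actor can be materialized with Source.actorPublisher[Line](Props[LinePublisher]) and handed to akka-http as the WebSocket or response source; the fluctuating demand you observe is presumably the downstream layers (ultimately the TCP send buffer) topping their request count back up as bytes drain out.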

Sending large files with Spray

I know very similar questions have been asked before. But I don't think the solutions I found on google/stackoverflow are suitable for me.
I started to write some web services with Scala/Spray, and it seems the best way to send large files without consuming large amounts of memory is to use stream marshalling. This way Spray will send HTTP chunks. Two questions:
Is it possible to send the file without using HTTP chunks and without reading the entire file into memory?
AFAIK akka.io only processes one write at a time, meaning it can buffer one write until it has been passed on to the O/S kernel in full. Would it be possible to tell Spray, for each HTTP response, the length of the content? Thereafter Spray would ask for new data (through akka messages) until the entire content length is reached. E.g., I indicate my content length is 100 bytes. Spray sends a message asking for data to my actor, and I provide 50 bytes. Once this data is passed on to the O/S, Spray sends another message asking for new data. I provide the remaining 50 bytes, and the response is then complete.
Is it possible to send the file without using HTTP chunks [on the wire]
Yes, you need to enable chunkless streaming. See http://spray.io/documentation/1.2.4/spray-routing/advanced-topics/response-streaming/
Chunkless streaming works regardless of whether you use the Stream marshaller or provide the response as MessageChunks yourself. See the sketch below.
without reading the entire file into memory
Yes, that should work if you supply data as a Stream[Array[Byte]] or Stream[ByteString].
[...] Thereafter Spray would ask for new data [...]
That's actually almost how it already works: if you manually provide the chunks, you can request a custom Ack message that will be delivered back to you when the spray-can layer is able to process the next part. See this example for how to stream from a spray route.
I indicate my content length is 100 bytes
A note upfront: in HTTP you don't strictly need to specify a content-length for responses, because a response body can be delimited by closing the connection, which is what spray does if chunkless streaming is enabled. However, if you don't want to close the connection (because you would lose this persistent connection), you can now specify an explicit Content-Length header in your ChunkedResponseStart message (see #802), which will prevent the closing of the connection.
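A minimal sketch of the Stream-marshaller approach (the chunk size, file path, and route are assumptions; it relies on spray's built-in marshaller for Stream[T] and on spray.can.server.chunkless-streaming = on in the configuration):
import java.io.FileInputStream
import java.util.Arrays

// Lazily read a file in fixed-size chunks; only one chunk is in memory at a time.
def fileChunks(path: String, chunkSize: Int = 64 * 1024): Stream[Array[Byte]] = {
  val in = new FileInputStream(path)
  def next(): Stream[Array[Byte]] = {
    val buf = new Array[Byte](chunkSize)
    val n = in.read(buf)
    if (n < 0) { in.close(); Stream.empty }
    else Arrays.copyOf(buf, n) #:: next()
  }
  next()
}

// In a spray route, the Stream marshaller then pushes out one element at a
// time as the connection is ready to accept more, so memory stays bounded:
// path("download") {
//   get {
//     complete(fileChunks("/var/data/large.bin"))
//   }
// }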

Spray chunked request throttle incoming data

I am using Spray 1.3, with incoming-auto-chunking-threshold-size set, to allow streaming of incoming requests.
When a very large request comes in from my client, I want to stream it through the app and out to a backing store in chunks, to limit the memory used by the Spray app.
I am finding that Spray will slurp in the request as fast as it can, creating MessageChunks of the configured size and passing them to my app.
If the backend store is slow, then this results in Spray caching most of the request in local memory, defeating the streaming design.
Is there any way I can get Spray to block or throttle the request stream so that the input data rate matches the output data rate, to cap my app's memory usage?
Relevant spray code:
The HttpMessagePartParser.parseBodyWithAutoChunking method is the one which breaks up the request byte stream into MessageChunk objects. It does so greedily, consuming as many chunks as are immediately available, then returning a NeedMoreData object.
The request pipeline accepts NeedMoreData in the handleParsingResult method of the RawPipelineStage, with the following code:
case Result.NeedMoreData(next) ⇒ parser = next // wait for the next packet
... so it looks to me like there is no "pull" control of the chunking stream in Spray, and the framework will always read in the request as fast as it can manage, pushing it out to the app's Actors as MessageChunks. Once a MessageChunk message is in my Actor's queue, its memory can't be offloaded to disk.
So there is no way to limit the memory used by Spray for a request?
There is a workaround discussed here: https://github.com/spray/spray/issues/281#issuecomment-40455433
This may be addressed in a future spray release.
EDIT: Spray is now Akka HTTP, which has "Reactive Streams" which gives back-pressure to the TCP stream while still being async: https://groups.google.com/forum/#!msg/akka-dev/PPleJEfI5sM/FbeptEYlicoJ
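Under Akka HTTP, the request entity arrives as a back-pressured Source, so a slow store naturally throttles the TCP stream. A minimal sketch (the throttle stands in for a hypothetical slow backing store):
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.{ActorMaterializer, ThrottleMode}
import akka.stream.scaladsl.Sink
import scala.concurrent.duration._

implicit val system = ActorSystem("upload")
implicit val materializer = ActorMaterializer()

// The request body is consumed no faster than the sink accepts it;
// back-pressure propagates all the way down to the TCP layer.
val route =
  put {
    extractRequestEntity { entity =>
      val done = entity.dataBytes
        .throttle(1, 100.millis, 1, ThrottleMode.Shaping) // stand-in for a slow store
        .runWith(Sink.ignore)
      onSuccess(done) { _ => complete("stored") }
    }
  }

Http().bindAndHandle(route, "localhost", 8080)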

nServiceBus with large XML messages

I have read about true messaging, where instead of sending the payload on the bus, you send an identifier. In our case, we have a lot of legacy apps/services that were designed to receive the message payload (XML), which is close to 4MB (near the MSMQ limit). Is there a way for NServiceBus to handle a large payload and persist messages automatically, or another work-around, so that the publisher/subscriber services don't have to worry about either the payload size or how to de/re-hydrate the payload?
Thank you in advance.
You could use the Message Sequence pattern. In NServiceBus, you would split the payload in the sender, wrap the chunks in a custom 'Sequence' IMessage, and then implement a saga at the other end to extract the chunks and reassemble them (a generic sketch of the idea follows). You would need to put some effort into error handling and timeouts.
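A language-agnostic sketch of the split/reassemble idea (Scala here for illustration only; the PayloadChunk shape is an assumption, and the actual NServiceBus pieces, the IMessage and the saga, are .NET and not shown):
// Hypothetical chunk envelope; in NServiceBus this would be an IMessage.
case class PayloadChunk(sequenceId: String, index: Int, total: Int, data: Array[Byte])

// Sender side: split the payload into bus-friendly pieces.
def split(sequenceId: String, payload: Array[Byte], chunkSize: Int): Seq[PayloadChunk] = {
  val pieces = payload.grouped(chunkSize).toSeq
  pieces.zipWithIndex.map { case (data, i) =>
    PayloadChunk(sequenceId, i, pieces.size, data)
  }
}

// Receiver side (the saga's job): buffer chunks by sequenceId and
// reassemble once all `total` pieces have arrived; time out otherwise.
def reassemble(chunks: Seq[PayloadChunk]): Array[Byte] =
  chunks.sortBy(_.index).flatMap(_.data).toArray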
You can always use the quick "fix" of compressing the messages.
A POCO serialized with the binary serializer can be compressed down by a large margin. We saw messages of 20MB compress down to 3.1MB.
So if your messages are hovering around 4MB, it might be simplest to just write an IMessageSerializer that automatically compresses the message while it is on the wire.
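The compression step itself is generic; a minimal sketch of what such a serializer would do around the serialized bytes (Scala for illustration; the real IMessageSerializer is a .NET interface):
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

// Wraps serialized message bytes in gzip; XML payloads compress very well.
def compress(raw: Array[Byte]): Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val gz = new GZIPOutputStream(bos)
  gz.write(raw)
  gz.close() // flushes the gzip trailer
  bos.toByteArray
}

def decompress(packed: Array[Byte]): Array[Byte] = {
  val gz = new GZIPInputStream(new ByteArrayInputStream(packed))
  val bos = new ByteArrayOutputStream()
  val buf = new Array[Byte](8192)
  var n = gz.read(buf)
  while (n >= 0) { bos.write(buf, 0, n); n = gz.read(buf) }
  bos.toByteArray
}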
I'm not aware of any internal NServiceBus capability to associate extra data with a message out of band.
I think you're right on the mark: if the entire payload can't fit within the limit, then it's better to persist it elsewhere on your own and then pass an ID.
However, it may be possible for you to design a message structure such that a message could implement an IHasPayload interface (which would perhaps incorporate an ID and a Type?), and then your application logic could have a common method for getting the payload given an IHasPayload message.