We have a peculiar scenario we want to test.
We'll be consuming an HTTP stream that stays open for a certain time frame. The stream consists of plain-text lines (CSV) and is streamed using chunked transfer encoding.
When we connect, we expect to receive all the data (possibly served from a file on the server side), and once that bulk has been served the connection should stay alive, since more data may be transferred over the same connection later.
Is it possible for WireMock to serve everything from a file and keep the connection alive (i.e., not send the empty chunk that signals the end of the stream)?
The short answer is no.
While WireMock will keep connections alive by default per the HTTP 1.1 spec, it will always terminate the response once everything has been sent, either via the empty chunk or by setting Content-Length.
What you're trying to do (if I understand correctly) is stream out multiple payloads within the context of a single response, which WireMock doesn't have a means for doing.
A possible solution might be for you to concatenate all your response parts into a single file, although I suspect you've discounted that option for reasons not stated.
Another possibility would be to supply your own FileSource implementation to WireMock and thus provide your own InputStreamSource which would give you more control over how the underlying file(s) are streamed out in the response.
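For the concatenation workaround, here is a minimal sketch using WireMock's Java DSL from Scala (the stub URL and the stream.csv file name are made up; the file lives under WireMock's __files directory). Note that WireMock will still terminate the response normally once the file has been served:

import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.client.WireMock._

val server = new WireMockServer()
server.start()

// Serve the whole concatenated payload from __files/stream.csv in one response.
server.stubFor(
  get(urlEqualTo("/stream")).willReturn(
    aResponse()
      .withHeader("Content-Type", "text/csv")
      .withBodyFile("stream.csv")))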
Related
I know very similar questions have been asked before, but I don't think the solutions I found on Google/Stack Overflow are suitable for me.
I started to write some web services with Scala/Spray, and it seems the best way to send large files without consuming large amounts of memory is using stream marshalling. This way Spray will send HTTP chunks. Two questions:
Is it possible to send the file without using HTTP chunks and without reading the entire file into memory?
AFAIK akka.io only processes one write at a time, meaning it can buffer one write until it has been passed on to the O/S kernel in full. Would it be possible to tell Spray, for each HTTP response, the length of the content? Spray would then ask for new data (through Akka messages) until the entire content length is reached. E.g., I indicate my content length is 100 bytes. Spray sends a message asking for data to my actor, and I provide 50 bytes. Once this data is passed on to the O/S, Spray sends another message asking for new data. I provide the remaining 50 bytes, and the response is then complete.
Is it possible to send the file without using HTTP chunks [on the wire]
Yes, you need to enable chunkless streaming. See http://spray.io/documentation/1.2.4/spray-routing/advanced-topics/response-streaming/
Chunkless streaming works regardless of whether you use the Stream marshaller or provide the response as MessageChunks yourself. See the sketch below.
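A condensed sketch in the spirit of spray's documentation example, assuming spray 1.2.x (the Ok ack token, the streamFile helper, and the chunk contents are invented for illustration; actorRefFactory comes from the surrounding HttpService):

import akka.actor._
import spray.http._
import spray.routing._

// Invented ack token: spray-can delivers it back once a part has been written out.
case object Ok

// Stream the given parts as one chunked response, one chunk per ack.
def streamFile(ctx: RequestContext, parts: Iterator[Array[Byte]]): Unit =
  actorRefFactory.actorOf(Props(new Actor {
    // Start the chunked response with the first part and request an ack.
    ctx.responder ! ChunkedResponseStart(HttpResponse(
      entity = HttpEntity(ContentTypes.`application/octet-stream`, parts.next()))).withAck(Ok)

    def receive = {
      case Ok if parts.hasNext =>
        // The previous part reached the network layer; send the next one.
        ctx.responder ! MessageChunk(parts.next()).withAck(Ok)
      case Ok =>
        ctx.responder ! ChunkedMessageEnd
        context.stop(self)
    }
  }))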
without reading the entire file into memory
Yes, that should work if you supply data as a Stream[Array[Byte]] or Stream[ByteString].
[...] Thereafter Spray would ask for new data [...]
That's actually almost how it already works: if you manually provide the chunks, you can request a custom Ack message that will be delivered back to you when the spray-can layer is able to process the next part. The sketch above shows how to stream this way from a spray route.
I indicate my content length is 100 bytes
A note upfront: in HTTP you don't strictly need to specify a content length for responses, because a response body can be delimited by closing the connection, which is what spray does if chunkless streaming is enabled. However, if you don't want to close the connection (because you would lose the persistent connection), you can now specify an explicit Content-Length header in your ChunkedResponseStart message (see #802), which will prevent the closing of the connection.
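Continuing the sketch above (ctx and Ok as before), the start of the response might then look like this; the length of 100 is just the example figure from the question:

import spray.http.HttpHeaders.`Content-Length`

// Announcing the total length up front lets spray keep the connection alive
// instead of delimiting the body by closing it (see #802).
ctx.responder ! ChunkedResponseStart(HttpResponse(
  headers = List(`Content-Length`(100)))).withAck(Ok)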
I have a use case whereby I wish to have a ZeroMQ Request/Reply socket 'stream' back results. Is this possible with multipart messages (i.e., the Reply socket streams the frames back before HasMore = false), or am I approaching this incorrectly?
The situation:
1) Client makes a query (Request) for some records
2) Server looks up the database for results and responds (Reply) with a large set of records split into frames
3) Server must wait until a server-side event is generated before the final frame is sent (HasMore = false)
4) Client won't get the previous frames until the final event has been generated and HasMore = false
Thanks for your help.
As far as I understand what you're aiming for, it sounds like what you have will work the way you expect. See here for more discussion on message frames. The salient points:
As you say, all of the frames will be sent to the client at one time; they will be stored on the server until HasMore is set to false.
One important thing to remember: if it's a truly large amount of data, the entire data set must fit into memory, because it'll be stored in server memory until the message with all its frames is complete, and it'll then be received into memory on the client side before it's processed.
I assume primarily what you're looking for is a way to iteratively build up a message before you send it? And perhaps to be able to deal with the data on the client iteratively as well? You also get a guarantee that you won't lose part of the data in the middle: you either get the whole message or lose the whole message (as opposed to sending each frame as a separate message). This is one of the primary use cases for frames, so you've done well.
The only thing I object to is using the word "stream", as that implies that the data is being sent to the client continuously as it's being processed on the server, and that's explicitly not what you're trying to do (nor is it possible with ZMQ message frames).
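A minimal sketch of those frame mechanics using JeroMQ from Scala (org.zeromq:jeromq, 0.5.x API assumed; the endpoint and record payloads are made up). Everything before the final frame stays queued on the server, exactly as described above:

import org.zeromq.{SocketType, ZContext, ZMQ}

val ctx = new ZContext()

// Server side (REP): every frame but the last is sent with SNDMORE.
val rep = ctx.createSocket(SocketType.REP)
rep.bind("tcp://*:5555")

// Client side (REQ), in the same process here just to keep the sketch short.
val req = ctx.createSocket(SocketType.REQ)
req.connect("tcp://localhost:5555")

req.send("query")
rep.recv(0)                                // read the request
rep.send("record-1".getBytes, ZMQ.SNDMORE) // HasMore = true, stays queued
rep.send("record-2".getBytes, ZMQ.SNDMORE) // still queued on the server
rep.send("record-3".getBytes, 0)           // final frame: message goes out

// The client sees nothing until the final frame is sent, then gets all frames.
var more = true
while (more) {
  println(new String(req.recv(0)))
  more = req.hasReceiveMore
}
ctx.close()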
Let's say I have a G-WAN server with a C script. If an HTTP request comes in and then another HTTP request, I would like both of these running scripts to be able to read and write the same section of RAM.
In other words, I wish to have a simple RAM database: an array of data that any HTTP request can read from.
I mean any HTTP request to this server, from any client; I just want each of them to be able to read or write the same data in RAM.
You can use US_SERVER_DATA, a server-global pointer shared by all requests (since every request sees the same pointer, you must synchronize concurrent access yourself):
US_SERVER_DATA, // global pointer (for maintenance script)
I've seen several uses of sockets where programmers send a command or some information over a TCP/IP socket, and expect it to be received in one call on the receiving side.
For example, transmitting:
mySocket.Send(Encoding.ASCII.GetBytes("SomeSpecificCommand"))
They assume the receiving side will receive all the data in one call. For example:
Dim data(255) As Byte
Dim nReceived As Integer = s.Receive(data, 0, data.Length, SocketFlags.None)
Dim str As String = Encoding.ASCII.GetString(data, 0, nReceived)
If str = "SomeSpecificCommand" Then
    DoStuff()
    ...
The example above doesn't use any terminator, so the programmer is relying on the assumption that the sockets implementation is not allowed, for example, to return "SomeSpecif" in a first call to Receive() and "cCommand" in a later call. (Note: in the example, the buffer is sized to be larger than the expected string.)
I've never before given this much thought and had just assumed that this type of coding is unsafe and have always used delimiters. Have I been wasting my time (and processor cycles)?
There is no guarantee that it will all arrive at the same time. The code (the app's protocol) needs to deal with the possibility that data from one send may arrive in multiple pieces or the possibility that data from more than one send could arrive in one receive.
Short snippets of data sent in one short call to send() will usually arrive in one call to recv(), which is why code like that will work most of the time. However, it's not guaranteed and therefore bad practice to rely on it.
TCP buffers the data and may split it up as it sees fit. TCP tries to send as few packets as possible to conserve bandwidth, so it won't split up the data for no good reason. However, if it's been queueing up some data and the data from one call to send() happens to straddle a packet boundary, that data will be split up.
Alternatively, TCP could try to send it in one packet, but then a router anywhere along the path to the destination could come back and say "this packet is too big!". Then TCP will split it into smaller packets.
When sending data across a network, you should expect your data to be fragmented across multiple packets and structure your code and data to deal with this. In the example case where you are sending a handful of bytes, everything will work fine... until you start sending larger payloads.
If you are expecting to receive one message at a time then you can just loop reading bytes for an interval after the first bytes arrive. This is simple but inefficient.
A delimiter could be used as suggested but then you have to guard against accidentally including the delimiter within the regular data. If you are only sending text then you can use null or some non-printable character. If you are sending binary data then this becomes more difficult as any occurrence of the delimiter within the data needs to be escaped by the sender and un-escaped by the receiver.
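A minimal sketch of delimiter-based framing in Scala over a plain TCP socket (the host, port, and command string are made up; newline is the delimiter, so it must never appear in the payload):

import java.io.{BufferedReader, InputStreamReader, PrintWriter}
import java.net.Socket
import java.nio.charset.StandardCharsets

val sock = new Socket("example.com", 9000)
val out = new PrintWriter(sock.getOutputStream, true)
val in = new BufferedReader(
  new InputStreamReader(sock.getInputStream, StandardCharsets.US_ASCII))

out.println("SomeSpecificCommand") // println appends the '\n' delimiter

// readLine() buffers internally and keeps reading from the socket until it
// has seen a complete delimiter, so partial TCP segments are handled for us.
val command = in.readLine()
if (command == "SomeSpecificCommand") {
  // doStuff()
}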
An alternative to delimiters is to add a field to the front of the data containing a message length. This is better than using a delimiter as it removes the need for escaping data and better than simply looping until a timer expires as it will be more responsive.
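And a minimal sketch of length-prefixed framing, again with a made-up endpoint; DataInputStream.readFully does the read-until-complete looping for us:

import java.io.{DataInputStream, DataOutputStream}
import java.net.Socket

val sock = new Socket("example.com", 9000)
val out = new DataOutputStream(sock.getOutputStream)
val in = new DataInputStream(sock.getInputStream)

// Sender: a 4-byte big-endian length field, then the payload. No escaping needed.
def sendMessage(payload: Array[Byte]): Unit = {
  out.writeInt(payload.length)
  out.write(payload)
  out.flush()
}

// Receiver: read the length, then keep reading until exactly that many bytes arrive.
def receiveMessage(): Array[Byte] = {
  val length = in.readInt()
  val buf = new Array[Byte](length)
  in.readFully(buf) // loops internally over partial reads
  buf
}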
No, it's not a good idea to assume that the server (assuming you're the client) is going to send you only one response per request. The server could be running through a list of procedures that return multiple results. I would continue to read from the socket until there is nothing left to pick up, then wait a few milliseconds and test again. If nothing shows up, chances are good that the server has finished sending responses.
There are several types of sockets. TCP uses SOCK_STREAM, which doesn't preserve message boundaries. SOCK_SEQPACKET sockets do preserve message boundaries.
EDIT: SCTP supports both SOCK_STREAM and SOCK_SEQPACKET.
I'm using Perl sockets in AIX 5.3, Perl version 5.8.2
I have a server written in Perl sockets. There is an option called "Blocking", which can be set to 0 or 1. When I use Blocking => 0 and run the server while the client sends data (5000 bytes), I am able to receive only 2902 bytes in one call. When I use Blocking => 1, I am able to receive all the bytes in one call.
Is this how sockets work or is it a bug?
This is a fundamental part of sockets - or rather, TCP, which is stream-oriented. (UDP is packet-oriented.)
You should never assume that you'll get back as much data as you ask for, nor that there isn't more data available; more data can arrive at any time while the connection is open. (The read/recv/whatever call will probably return a specific value to mean "the other end closed the connection".)
This means you have to design your protocol to handle this - if you're effectively trying to pass discrete messages from A to B, two common ways of doing this are:
Prefix each message with a length. The reader first reads the length, then keeps reading the data until it's read as much as it needs.
Have some sort of message terminator/delimiter. This is trickier, as depending on what you're doing you may need to be aware of the possibility of reading the start of the next message while you're reading the first one. It also means "understanding" the data itself in the "reading" code, rather than just reading bytes arbitrarily. However, it does mean that the sender doesn't need to know how long the message is before starting to send.
(The other alternative is to have just one message for the whole connection, i.e. you read until the connection is closed.)
Blocking means that the socket waits until there is data available before returning from a receive call. It's entirely possible there's a tiny wait at the end as well, to try to fill the buffer before returning, or it could just be a timing issue. It's also entirely possible that the non-blocking implementation returns one packet at a time, whether or not more are available. In short, no, it's not a bug, but the specific 'why' of it is the old cop-out: "it's implementation specific".