Huge or infinite response with WireMock

I would like to test our app, using an HTTP client, with a huge amount of data. Is it possible to create an infinite response, or one several gigabytes in length,
with WireMock without allocating a byte array or String of that size?
As far as I can see, ResponseDefinitionBuilder has three withBody* methods:
public ResponseDefinitionBuilder withBodyFile(String fileName)
public ResponseDefinitionBuilder withBody(String body)
public ResponseDefinitionBuilder withBody(byte[] body)
I have tried withBodyFile("/dev/zero") but I got the following exception:
WARN (ServletHandler.java:628) - /test.txt
com.github.tomakehurst.wiremock.security.NotAuthorisedException: Access to file /dev/zero is not permitted
at com.github.tomakehurst.wiremock.common.AbstractFileSource.assertFilePathIsUnderRoot(AbstractFileSource.java:160)
at com.github.tomakehurst.wiremock.common.AbstractFileSource.getBinaryFileNamed(AbstractFileSource.java:45)
at com.github.tomakehurst.wiremock.http.StubResponseRenderer.renderDirectly(StubResponseRenderer.java:115)
at com.github.tomakehurst.wiremock.http.StubResponseRenderer.buildResponse(StubResponseRenderer.java:64)
at com.github.tomakehurst.wiremock.http.StubResponseRenderer.render(StubResponseRenderer.java:56)
at com.github.tomakehurst.wiremock.http.AbstractRequestHandler.handle(AbstractRequestHandler.java:50)
at com.github.tomakehurst.wiremock.servlet.WireMockHandlerDispatchingServlet.service(WireMockHandlerDispatchingServlet.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
...
The other two require a huge in-memory array or String, which I would also like to avoid.
I've also checked the Fault enum but it does not seem extendable.

The reason you saw the file security error is that WireMock will only read files under its configured file root, so setting up a symlink might work.
Failing that, just creating a very large file would do the trick, and won't consume a lot of memory as body files are streamed.
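For illustration, here is a minimal sketch of that approach, assuming a WireMock server is already running with its file root at src/test/resources (so body files live under src/test/resources/__files); the file name and the 5 GB size are arbitrary:
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import java.io.RandomAccessFile;

// Create a very large body file under WireMock's __files directory and serve it
// with withBodyFile(), which streams from disk rather than buffering the body.
public class HugeResponseExample {
    public static void main(String[] args) throws Exception {
        // setLength() creates a sparse file on most filesystems, so this is
        // cheap even for multi-gigabyte sizes (path and size are assumptions)
        try (RandomAccessFile f =
                new RandomAccessFile("src/test/resources/__files/huge-body.bin", "rw")) {
            f.setLength(5L * 1024 * 1024 * 1024); // 5 GB
        }
        // Stub a URL to return that file from the __files directory
        stubFor(get(urlEqualTo("/huge"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/octet-stream")
                        .withBodyFile("huge-body.bin")));
    }
}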

Related

Alamofire Chunked Upload - How does the API know when it's the last chunk?

I have a Golang web server that I have written to handle large file uploads of 30 GB or more. In a proof of concept using Dropzone.js I can upload files of any size with no issue, as long as they are chunked.
The way Dropzone.js implements this is that each chunk has items added to the headers, like:
dzchunkindex: 435
dzchunksize: 10000
dztotalchunkcount: 3498274
So I receive a chunk, I create the file (if needed), write the data, and check to see if I'm on the last chunk. Then repeat as needed. Once I see I've written the last chunk I close the file.
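For what it's worth, here is a rough sketch of that receive-and-assemble loop. It is written as a Java servlet purely for illustration (the server in the question is Go), it reads the raw request body rather than the multipart form data Dropzone actually sends, and the target path is made up:
import java.io.InputStream;
import java.io.RandomAccessFile;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustration of the chunk-assembly logic described above, driven by the
// dzchunkindex / dzchunksize / dztotalchunkcount headers from Dropzone.js.
public class ChunkUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws java.io.IOException {
        long index = Long.parseLong(req.getHeader("dzchunkindex"));
        long chunkSize = Long.parseLong(req.getHeader("dzchunksize"));
        long totalChunks = Long.parseLong(req.getHeader("dztotalchunkcount"));
        // "rw" creates the file on the first chunk and reopens it afterwards;
        // each chunk is written at its own offset, so arrival order does not matter
        try (RandomAccessFile out = new RandomAccessFile("/tmp/upload.bin", "rw");
             InputStream in = req.getInputStream()) {
            out.seek(index * chunkSize);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        // The question's approach: once the last chunk index has been written,
        // the upload is considered complete (assuming no chunk was lost)
        if (index == totalChunks - 1) {
            resp.setStatus(HttpServletResponse.SC_OK);
        }
    }
}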
It seems like Alamofire supports chunked uploads using its AF.upload method.
However, how should my server know when the last chunk has been uploaded? I can certainly check this a different way; I'm just curious what that way should be. I've combed over the Alamofire docs and can't find much.
I can chunk the file manually and upload it, but I'd rather use Alamofire if possible.
Thanks,
Ed

How to include file bytes into the POST request body in JMeter? (What encoding to use)

I have to perform POST requests from JMeter. I use the default HTTP Request sampler, where I specify the JSON structure understood by the application under test. One part of this JSON has to contain binary data from a PDF file.
For reading the file I use a Beanshell Sampler in the setUp Thread Group:
File file = new File(bsh.args[0]);
FileInputStream fis = null;
try {
    fis = new FileInputStream(file);
    byte[] array = new byte[(int) file.length()];
    // read() may return before filling the array, so loop until it is full
    int read = 0;
    while (read < array.length) {
        int n = fis.read(array, read, array.length - read);
        if (n == -1) { break; }
        read += n;
    }
    log.info("File is read.");
    vars.put("fileEntity", new String(array, "cp1252"));
} catch (Exception e) {
    log.error(e.getMessage(), e);
} finally {
    if (fis != null) { fis.close(); }
}
The problem is that when I look at the request in Fiddler, I see a difference in how the binary object is represented there compared with Postman's requests:
(Screenshots comparing the Postman and JMeter requests omitted.)
I think that there is something wrong with the encoding when I create a String object in the BeanShellSampler. What encoding is correct?
I tried to use the RawDataSource plugin but it doesn't help, for two reasons:
It fails to read my file, saying "Error reading next chunk"
It uses the same approach as I do to read the file, but with UTF-8 encoding. I tried that encoding as well, but without success.
My expectation is that your fis.read(array); call relies on the default value of the file.encoding system property, which may or may not be cp1252.
I would recommend introducing an InputStreamReader and explicitly specifying the encoding there, like:
InputStreamReader isr = new InputStreamReader(fis,"cp1252");
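Put together, a minimal version of the script with the explicit encoding might look like this (still Beanshell/Java syntax; bsh.args[0], vars and the fileEntity variable name come from the question):
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;

File file = new File(bsh.args[0]);
FileInputStream fis = new FileInputStream(file);
// Decode the bytes with an explicit charset instead of relying on the
// JVM's file.encoding default
InputStreamReader isr = new InputStreamReader(fis, "cp1252");
char[] chars = new char[(int) file.length()]; // cp1252 is single-byte, so byte count == char count
int read = 0;
while (read < chars.length) {
    int n = isr.read(chars, read, chars.length - read);
    if (n == -1) { break; }
    read += n;
}
isr.close();
vars.put("fileEntity", new String(chars, 0, read));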
Also be aware that starting from JMeter 3.1 it is recommended to use JSR223 Test Elements and the Groovy language for scripting, mainly because Groovy performs much better compared to Beanshell.

PlayWS: calculate the size of an HTTP call without consuming the stream

I'm currently using the PlayWS HTTP client, which returns an Akka stream. From my understanding, I can consume the stream and turn it into a Byte[] to calculate the size. However, this also consumes the stream and I can't use it anymore. Any way around this?
I think there are two different aspects related to the question.
1. You want to know the size of the server response in advance, in order to prepare a buffer. Unfortunately there is no guaranteed way to do this. The HTTP/1.1 spec explicitly allows a transfer mode in which the server does not know the size of the response in advance: chunked transfer encoding. See also this quote from Section 3.3.1 Transfer-Encoding:
A recipient MUST be able to parse the chunked transfer coding
(Section 4.1) because it plays a crucial role in framing messages
when the payload body size is not known in advance.
Section 3.3.3 Message Body Length specifies how the length of a message body is determined; besides the aforementioned chunked transfer encoding, it also contains the rather unhelpful:
Otherwise, this is a response message without a declared message
body length, so the message body length is determined by the
number of octets received prior to the server closing the
connection.
This is kept for backward compatibility and its use is discouraged, but it is still legal.
Still, in many real-world scenarios you can use the Content-Length header field that the server may return. There is a catch here as well, though: if gzip Content-Encoding is used, then Content-Length will contain the size of the compressed body.
To sum up: in the general case you can't get the size of the message body before you have fully received the server response, i.e., in code terms, performed a blocking call on the response. You may try to use Content-Length, and it may or may not help in your specific case.
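As a rough illustration of that caveat (using the plain JDK HttpClient rather than PlayWS, and a made-up URL), you can inspect Content-Length from the headers before consuming the streamed body, but you have to be prepared for it to be absent:
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.OptionalLong;

public class ContentLengthCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/large")).build();
        // ofInputStream() lets us look at the headers before reading the body
        HttpResponse<InputStream> response =
                client.send(request, HttpResponse.BodyHandlers.ofInputStream());
        // Content-Length may be missing (chunked transfer encoding) and, with
        // gzip Content-Encoding, it refers to the compressed size
        OptionalLong contentLength = response.headers().firstValueAsLong("Content-Length");
        if (contentLength.isPresent()) {
            System.out.println("Declared body size: " + contentLength.getAsLong() + " bytes");
        } else {
            System.out.println("No Content-Length; size unknown until the body is read");
        }
    }
}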
2. You already have a fully downloaded response (or you are OK with blocking on your StreamedResponse) and you want to process it by first getting the size and only then processing the actual data. In that case you can first use the getBodyAsBytes method, which returns an IndexedSeq[Byte] and therefore has a size, and then convert it back into a Source using Source.single, which is exactly what the default (i.e. non-streaming) implementation of getBodyAsSource does.
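A rough sketch of that second approach, using Play WS's Java API as I understand it (Play 2.6+; the helper name is made up):
import akka.NotUsed;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import play.libs.ws.WSClient;
import java.util.concurrent.CompletionStage;

public class SizedBody {
    // Buffers the whole response, reads its size, then re-wraps the bytes as
    // a Source so downstream streaming code can stay unchanged.
    static CompletionStage<Source<ByteString, NotUsed>> fetchWithKnownSize(WSClient ws, String url) {
        return ws.url(url).get().thenApply(response -> {
            ByteString body = response.getBodyAsBytes(); // whole body is in memory at this point
            System.out.println("Body size: " + body.length() + " bytes");
            // Source.single is what the non-streaming getBodyAsSource does internally
            return Source.single(body);
        });
    }
}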

Can I fake uploaded image filesize?

I'm building a simple image file upload form. Programmatically, I'm using the Laravel 5 framework. Through the Input facade (via Illuminate), I can resolve the file object, which is itself an UploadedFile (from Symfony).
The UploadedFile API reference page (Symfony docs) says:
public integer | null getClientSize()
Returns the file size. It is extracted from the request from which the file has been uploaded. It should not be considered as a safe value.
Return value: integer|null (the file size)
What would be the cases where the uploaded filesize is wrongly reported?
Are there known exploits using this?
How can the admin ensure this is detected (and hence logged as a trespass attempt)?
That method uses the "Content-Length" header, which can easily be forged. You'll want to use the simple construct $_FILES['myfile']['size'] instead, as an answer to another question has already stated: Can $_FILES[...]['size'] be forged?
This value reflects the actual size of the uploaded file and is not affected by the headers the client provides.
If you'd like to check for people misbehaving, you can simply compare the Content-Length header to your $_FILES['myfile']['size'] value.

How to allow for POST to an MVC5 Controller for large sets of data.

I have seen several posts addressing this issue, or ones similar to it, for requests or GETs. I am not having this problem getting the data from the server; it is solely on the POST.
The errors I get are:
The JSON request was too large to be deserialized.
or
Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property. Parameter name: input
I haven't been able to consistently determine which actions result in which error, but it is predominantly the latter one.
In an effort to get the value of maxJsonLength, in the Index method of the controller I read it and dump it into a ViewBag so it can be written to the console on the client side. Every time it comes back as 100k (102400), the default.
If I reduce the data package size, and still serialize as previously, I get no errors.
In Fiddler I can inspect the payload, and all of the JSON is deserializable there, so I don't see an issue with my JSON. Additionally, if I console.log(data), Chrome sees no problems with it either.
The VM in the controller is the same for both POST and GET, with the exception that there is more data with the POST than the GET. To test this I got a huge data set from the server: GeoJSON data for all 50 states. The following was the result.
GET Content-Length: 3229309 return 200
POST Content-Length: 2975244 return 500
The POST failed in this scenario and returned the second error listed previously.
I only changed the data minimally (one string) and don't know why it is smaller when sent back, but the JSON for both the GET and the POST is virtually identical.
I've tried changing the web.config file:
<system.web.extensions>
<scripting>
<webServices>
<jsonSerialization maxJsonLength="2147483644"/>
</webServices>
</scripting>
</system.web.extensions>
I just added this to the end of my config file, just prior to the closing </configuration> tag.
I've also added a parameter in Settings.config
<add key="aspnet:MaxJsonDeserializerMembers" value="2147483644" />
I have also verified that this param loads as part of the application settings in IIS.
Is there something else I can try to change to allow for these large data sets to be sent in a POST?
As a last resort, I was going to pull all of the GeoJSON data out of the POST. However, when a user navigates back and they haven't changed what they were mapping, we'd have to find all the GeoJSON data again, causing undue work on the server, etc. I thought that if I only had to fetch it once, that would be best from an efficiency perspective.
I struggled with this too; nothing I changed in web.config helped, despite several SO answers looking relevant. They helped with returning large JSON data, but the large JSON POST kept failing. In the end I found this:
increase maxJsonLength for JSON POST, used the solution there, and it worked for me.
Quoting from there :
The MVC JSON serializer does not look at the web.config to get the max length (that's for ASP.NET web services). You need to use your own serializer: you override ExecuteResult and supply your own JSON serializer. To override the input, create a new JsonValueProviderFactory, then override ValueProvider in the controller to return your new JSON factory when it's a JSON request.