axios data differs from request.response

I'm facing a rather improbable situation. My HTTP request returns the right data when I read it in the network tab, but when I try to retrieve it in code, the data I receive isn't what I expect.
Would you have any lead on understanding what's going on?
When printing the whole response object, I can see that response.data doesn't have the same content as response.request.response.
One returns an object whose amount is 1450, the other 3750.
Would you have any idea why?
(Screenshot: the console shows different values for data and request.response.)
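A minimal TypeScript sketch of the comparison being described (the endpoint and the amount field are stand-ins for the asker's real ones; in browser axios, response.request is the underlying XMLHttpRequest, so response.request.response holds the raw body as it arrived on the wire):

import axios from "axios";

// Hypothetical endpoint standing in for the real request.
axios.get("https://api.example.com/order").then((response) => {
  // response.data is the body after axios' transformResponse (and any
  // response interceptors) have run; response.request.response is the
  // raw text the browser actually received.
  console.log("parsed amount:", response.data.amount);
  console.log("raw amount:   ", JSON.parse(response.request.response).amount);
});

If the two values differ, something between the wire and response.data, such as a response interceptor or a custom transformResponse, is the usual place to look.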

How to handle error responses in a REST endpoint that accepts different Accept header values.

I'm trying to add a new content type to a REST endpoint. Currently it only returns JSON, but I now need to be able to return a CSV file as well.
As far as I know, the best way to do this is by using the Accept header with the value text/csv and then adding a converter that reacts to it and converts the returned body to the proper CSV representation.
I've been able to do this, but then I have a problem handling exceptions. Up until now, all the errors returned were in JSON. The frontend expects any 500 status code to contain a specific body with the error. But now, by adding the option to return either application/json or text/csv from my endpoint, in case of an error the converter used to transform the body is going to be either the Jackson converter or my custom one, depending on the Accept header passed. Moreover, my frontend is going to need to read the Content-Type returned and parse the value based on the type of representation returned.
Is this the normal approach to handle this situation?
A faster workaround would be to forget about the Accept header and include a URL parameter indicating the format expected. Doing it this way, I'd be able to change the Content-Type of the response and the parsing of the data directly in the controller, as the GET request won't include any Accept header and will accept anything. Some parts of the code already do this where the only expected response format is CSV, so I'm going to have a difficult time defending the use of the Accept header unless there is a better way of handling this.
my frontend is going to need to read the content-type returned and parse the value based on the type of representation returned.
Is this the normal approach to handle this situation?
Yes.
For example, RFC 7807 describes a common format for describing problems. So the server would send an application/problem+json or an application/problem+xml representation of the issue in the response, along with the usual metadata in the headers.
Consumers that understand application/problem+json can parse the data within it and forward a useful description of the problem to the user, the logs, or whatever. Consumers that don't understand that representation are limited to acting on the information in the headers.
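For instance, a consumer might branch on the returned Content-Type before deciding how to read an error body. A TypeScript sketch (the type/title/status/detail member names come from RFC 7807; everything else here is illustrative):

async function reportProblem(response: Response): Promise<void> {
  const contentType = response.headers.get("content-type") ?? "";
  if (contentType.includes("application/problem+json")) {
    // RFC 7807 problem details: type, title, status, detail, instance
    const problem = await response.json();
    console.error(`${problem.title}: ${problem.detail}`);
  } else {
    // Unknown representation: we can only act on the header metadata
    console.error(`HTTP ${response.status} ${response.statusText}`);
  }
}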
A faster workaround would be to forget about the Accept header and include a URL parameter indicating the format expected.
That's also fine -- more precisely, you can have a different resource responsible for each of the different media types that you support.
It may be useful to review section 3.4 of RFC 7231, which describes the semantics of content negotiation.
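From a client's point of view, the two negotiation styles look like this (a sketch; the /api/report resource and the format parameter are made-up names):

// Negotiating via the Accept header on a single resource...
const viaHeader = await fetch("/api/report", {
  headers: { Accept: "text/csv" },
});

// ...or addressing a separate resource via a query parameter
const viaParam = await fetch("/api/report?format=csv");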

PlayWS: calculate the size of an HTTP call without consuming the stream

I'm currently using the PlayWS HTTP client, which returns an Akka stream. From my understanding, I can consume the stream and turn it into a Byte[] to calculate the size. However, this also consumes the stream and I can't use it anymore. Any way around this?
I think there are two different aspects related to the question.
You want to know the size of the server response in advance, to prepare a buffer. Unfortunately there is no guaranteed way to do this. The HTTP 1.1 spec explicitly allows a transfer mode in which the server does not know the size of the response in advance: chunked transfer encoding. See this quote from section 3.3.1, Transfer-Encoding:
A recipient MUST be able to parse the chunked transfer coding
(Section 4.1) because it plays a crucial role in framing messages
when the payload body size is not known in advance.
Section 3.3.3, Message Body Length, specifies how the length of a message body is determined, and besides the aforementioned chunked transfer encoding it also contains the quite unhelpful:
Otherwise, this is a response message without a declared message
body length, so the message body length is determined by the
number of octets received prior to the server closing the
connection.
This is kept for backward compatibility and its use is discouraged, but it is still legal.
Still, in many real-world scenarios you can use the Content-Length header field that the server may return. However, there is a catch here as well: if gzip Content-Encoding is used, then Content-Length will contain the size of the compressed body.
To sum up: in the general case you can't get the size of the message body in advance before you have fully received the server response, i.e. in terms of code, performed a blocking call on the response. You may try to use Content-Length, and it may or may not help in your specific case.
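In practice that boils down to checking the header and handling its absence, e.g. in a TypeScript sketch (the URL is a placeholder):

const response = await fetch("https://example.com/data");
const declared = response.headers.get("content-length");
if (declared !== null) {
  // Beware: with Content-Encoding: gzip this is the compressed size,
  // and with chunked transfer encoding the header is absent entirely.
  console.log(`declared body size: ${parseInt(declared, 10)} bytes`);
} else {
  console.log("no declared size; the body must be read to find out");
}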
You already have a fully downloaded response (or you are OK with blocking on your StreamedResponse) and you want to process it by first getting the size and only then processing the actual data. In that case you can first use the getBodyAsBytes method, which returns an IndexedSeq[Byte] and thus has a size, and then convert it into a new Source using Source.single, which is exactly what the default (i.e. non-streaming) implementation of getBodyAsSource does.
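For readers not on Play, here is the same buffer-then-restream pattern as a Node/TypeScript sketch: getting the bytes consumes the body once, and the buffered bytes are then wrapped as a fresh single-chunk stream, mirroring what Source.single does.

import { Readable } from "node:stream";

async function sizeThenStream(response: Response): Promise<Readable> {
  // Consumes the body once; resolves only when fully downloaded
  const bytes = Buffer.from(await response.arrayBuffer());
  console.log(`body size: ${bytes.length} bytes`);
  // A new stream over the already-buffered body
  return Readable.from([bytes]);
}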

GoodData Export Reports API Call results in incomplete file

I've developed a method that does the following steps, in this order:
1) Get a report's metadata via /gdc/md/{project-id}/obj/{object-id}
2) From that, get the report definition and use that as payload for a call to /gdc/xtab2/executor3
3) Use the result from that call as payload for a call to /gdc/exporter/executor
4) Perform a GET on the returned URI to download the generated CSV
So this all works fine, but the problem is that I often get back a blank or incomplete CSV. My workaround has been to put a sleep() between getting the URI back and actually calling GET on it. However, as our data grows I have to keep increasing the delay, and even then there is no guarantee that I get complete data. Is there a way to make sure that the report has finished exporting data to the file before calling the URI?
The problem is that the export runs as an asynchronous task. The result at the URL returned in the payload of the POST to /gdc/exporter/executor (in the form /gdc/exporter/result/{project-id}/{result-id}) is only available after the exporter task finishes its job.
If the task has not finished yet, a GET to /gdc/exporter/result/{project-id}/{result-id} should return status code 202, which means "we are still exporting, please wait".
So you should periodically poll the result URL until it returns status 200 with the payload (or a 4xx/5xx if something went wrong), as in the sketch below.
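A polling loop in TypeScript might look like this (the 202-then-200 contract is from the answer above; the interval and error handling are assumptions):

async function waitForExport(resultUrl: string): Promise<Blob> {
  for (;;) {
    const response = await fetch(resultUrl);
    if (response.status === 200) {
      return response.blob(); // export finished; this is the CSV
    }
    if (response.status !== 202) {
      throw new Error(`export failed with HTTP ${response.status}`);
    }
    // 202: still exporting, wait a bit before asking again
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}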

How to allow for POST to an MVC5 Controller for large sets of data.

I have seen several posts addressing this issue, or ones similar to it, for requests or GETs. I am not having this problem getting the data from the server; it's solely on the POST.
The errors I get are:
The JSON request was too large to be deserialized.
or:
Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property. Parameter name: input
I haven't been able to consistently determine which actions result in which error, but it is predominantly the latter.
In an effort to get the configured max JSON length, in the Index method of the controller I read this value and dump it into a ViewBag to write to the console on the client side. Every time it comes back as 102400 (100k).
If I reduce the data package size, and still serialize as previously, I get no errors.
In Fiddler I can inspect the package, and all the JSON is deserializable there, so I don't see an issue with my JSON. Additionally, if I console.log(data), Chrome sees no problems with it either.
The VM in the controller is the same for both POST and GET, with the exception that there is more data with the POST than the GET. To test this I got a huge data set from the server: GeoJSON data for all 50 states. The following was the result:
GET Content-Length: 3229309 return 200
POST Content-Length: 2975244 return 500
The POST failed in this scenario and returned the second error listed previously.
I only changed the data minimally (one string) and don't know why it's smaller when sent back, but the JSON for both the GET and the POST is virtually identical.
I've tried changing the web.config file:
<system.web.extensions>
  <scripting>
    <webServices>
      <jsonSerialization maxJsonLength="2147483644"/>
    </webServices>
  </scripting>
</system.web.extensions>
I just added this to the end of my config file, just prior to the closing </configuration> tag.
I've also added a parameter in Settings.config
<add key="aspnet:MaxJsonDeserializerMembers" value="2147483644" />
I have also verified that this param loads as part of the application settings in IIS.
Is there something else I can change to allow these large data sets to be sent in a POST?
As a last resort, I was going to pull all of the GeoJSON data out of the POST. However, when a user navigates back and they haven't changed what they were mapping, we'd have to fetch all the GeoJSON data again, causing undue work on the server, etc. I thought that if I only had to fetch it once, that would be best from an efficiency perspective.
I struggled with this too; nothing I changed in web.config helped, despite several SO answers looking relevant. They helped with returning large JSON data, but the large JSON POST kept failing. In the end I found this:
increase maxJsonLength for JSON POST, and used the solution there; it worked for me.
Quoting from there:
The MVC JSON serializer does not look at the web.config to get the max length (that's for ASP.NET web services). You need to use your own serializer: override ExecuteResult and supply your own JSON serializer. To override the input, create a new JsonValueProviderFactory, then override ValueProvider in the controller to return your new JSON factory when it's a JSON request.

NSURLRequest returning null data (but only sometimes)

I'm trying to extract some data from a Craigslist HTML page, but I seem to be running into a strange bug: every once in a while, the page I try to load with an NSURLRequest comes back as some strange form of data which, when converted to a parseable string, returns null. However, I can't consistently reproduce it: it'll suddenly stop working, then I'll try again an hour later and it'll work perfectly, and then some time later it'll stop working again. Anyone know what could be causing it? I'm using an NSURLRequest, asynchronous, with the 'didReceiveData' and 'didReceiveResponse' delegate methods. If I cast the NSURLResponse to an NSHTTPURLResponse and check the response code, I get 200, meaning there were no issues. But when I go to initialize a string with the response data, it returns null, and I obviously then can't parse it.
The URL that seems to do it most often is: http://sarasota.craigslist.org/app/
I've tried messing with the User-Agent header for the request, the cache policy, everything I can think of... but nothing seems to fix it.
If there is data, but when you ask for a string it's null, then I'd suspect the string encoding you're using to decode the data. Is there an odd character that is only sometimes present in Craigslist advert titles?
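The original code is Objective-C, but the decoding idea translates to any language: attempt a strict UTF-8 decode of the raw bytes and fall back to a more permissive encoding if it fails. A TypeScript sketch:

function decodeBody(bytes: ArrayBuffer): string {
  try {
    // fatal: true makes the decoder throw on invalid byte sequences,
    // the analogue of initWithData:encoding: returning nil
    return new TextDecoder("utf-8", { fatal: true }).decode(bytes);
  } catch {
    // Fall back to Latin-1, which can decode any byte sequence
    return new TextDecoder("iso-8859-1").decode(bytes);
  }
}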
Just out of interest, why don't you use the RSS feed instead? It's probably more consistently formatted and strictly encoded, as it's XML, not HTML.
http://sarasota.craigslist.org/app/index.rss