The CKFinder version in use is 3.4 for ASP.NET. It operates on an S3 file system (custom driver) with the following structure.
When the content of the "Forms" folder is requested, CKFinder never finishes and keeps displaying "Please wait. Loading..." even though the GetFiles request has completed and returned a JSON result. Here is the request.
And here is the response.
According to the docs (http://docs.cksource.com/ckfinder3-net/commands.html#command_getfiles), the GetFiles response should be a JSON object structured like { files:[...], currentFolder:{...}, resourceType:'...' }.
But for the "Forms" folder CKFinder only returns the "files" data; there are no "currentFolder" or "resourceType" properties. This leads to the following JS error and makes CKFinder break, showing "Please wait. Loading..." forever.
So, for some folders CKFinder returns an incomplete JSON response, which leads to a JS error and a frozen UI.
Does anyone have an idea why CKFinder would generate an incomplete GetFiles response?
The cause was inaccurate folder detection logic in the IFileSystem.FolderExistsAsync() method of the custom IFileSystem implementation.
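The fix itself isn't shown above; as an illustration, a prefix-based existence check on S3 could look roughly like this (a sketch assuming the AWS SDK for .NET; the bucketName and s3Client fields are assumed parts of the custom driver, and CKFinder's actual IFileSystem signature may differ):

using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public partial class S3FileSystem
{
    private readonly IAmazonS3 s3Client;  // assumed: a configured S3 client
    private readonly string bucketName;   // assumed: the target bucket

    public S3FileSystem(IAmazonS3 s3Client, string bucketName)
    {
        this.s3Client = s3Client;
        this.bucketName = bucketName;
    }

    // On S3 a "folder" is just a key prefix; checking only for a zero-byte
    // marker object misses folders that exist implicitly through their children.
    public async Task<bool> FolderExistsAsync(string path)
    {
        var request = new ListObjectsV2Request
        {
            BucketName = bucketName,
            Prefix = path.TrimEnd('/') + "/",
            MaxKeys = 1                   // one hit is enough to prove existence
        };

        ListObjectsV2Response response = await s3Client.ListObjectsV2Async(request);
        return response.KeyCount > 0;     // any object under the prefix => folder exists
    }
}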
I am calling a REST API for a list of resources, and I get the JSON response below if at least one resource exists:
{
"value": [
"res1_id",
"res2_id",
"res3_id"
]
}
and the HTTP response code is 200.
But when no resources exist, the server returns HTTP response code 404.
My question is why it is designed that way (it is an enterprise product).
I was expecting an empty list as below, with HTTP response code 200:
{
"value": []
}
I am not able to understand what design consideration led to returning 404 instead of an empty JSON body.
Please help me to understand the reasoning behind it.
What I am reading here is that you have a 'collection' resource. This collection resource contains a list of sub-resources.
There are two possible interpretations for this case:
The collection resource is 0-length.
The collection resource doesn't exist.
Most APIs will treat a 0-length collection as a resource that still exists and return 200 OK. An empty coffee cup is still a coffee cup; the cup itself did not disappear after the last drop was gone.
There are cases where it might be desirable for a collection resource to 404, for example, if you want to indicate that the collection never existed or has never been created.
One way to think about this is the difference between an empty directory and a directory that does not exist. Each case can 'tell' the developer something different.
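To make the two interpretations concrete, here is a minimal sketch (a hypothetical ASP.NET Web API controller, not the product's actual code) that returns 200 with an empty value array for an existing-but-empty collection, and 404 for a collection that was never created:

using System.Collections.Generic;
using System.Web.Http;

public class ResourcesController : ApiController
{
    // Hypothetical backing store: "known" exists but currently holds nothing.
    private static readonly Dictionary<string, List<string>> Collections =
        new Dictionary<string, List<string>> { { "known", new List<string>() } };

    public IHttpActionResult Get(string collectionId)
    {
        List<string> items;
        if (!Collections.TryGetValue(collectionId, out items))
            return NotFound();            // collection never existed -> 404

        return Ok(new { value = items }); // empty but existing -> 200 with "value": []
    }
}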
You're trying to access a resource which does not exist. That's the reason for 404.
As a simple Google search says:
The HTTP 404, 404 Not Found, and 404 error message is a Hypertext Transfer Protocol (HTTP) standard response code, in computer network communications, to indicate that the client was able to communicate with a given server, but the server could not find what was requested.
For your case:
{
"value": []
}
This can be status code 200, which means the resource was accessed and data was returned (an empty array), or you can customize it to a 404. For the case you've mentioned, though, it is best to return 200.
A status code of 404 is the correct response when there is no resource to return for the request you made. You asked for data and there is no data, so the server returns an empty response. They're relying on you to interpret the results based on the status code.
https://en.wikipedia.org/wiki/HTTP_404
Some companies prefer making their errors known instead of hiding them. Where I work we shy away from try/catch blocks because we want our applications to blow up during testing so we can fix the problem, not hide it. So it's possible that the dev who created the API wants you to use the status code of the response as a way of telling whether the call was successful.
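If you are stuck consuming an API designed this way, you can normalize the convention on the client side. A minimal sketch, assuming Json.NET for deserialization and that a 404 here simply means "no resources":

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class ResourceClient
{
    // Treats the API's 404 as "empty list" instead of an error.
    public static async Task<List<string>> GetResourceIdsAsync(HttpClient client, string url)
    {
        HttpResponseMessage response = await client.GetAsync(url);
        if (response.StatusCode == HttpStatusCode.NotFound)
            return new List<string>();                  // no resources -> empty list

        response.EnsureSuccessStatusCode();             // anything else unexpected -> throw
        string json = await response.Content.ReadAsStringAsync();
        var body = JsonConvert.DeserializeObject<Dictionary<string, List<string>>>(json);
        return body["value"];
    }
}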
To delete an existing module, the documentation (http://cumulocity.com/guides/reference/real-time-statements/) says to issue a DELETE request against /cep/module/<<moduleId>>. But this results in a 500 response status with the reason "Request method 'DELETE' not supported".
What is the correct request to delete a single module?
The documentation gives the wrong path. It needs to be /cep/modules/<<moduleId>> instead of /cep/module/<<moduleId>>. This can be seen by debugging the behaviour of the web-GUI.
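For reference, a minimal sketch of the corrected call, assuming an HttpClient with credentials already configured (the base URL and module ID are placeholders):

using System.Net.Http;
using System.Threading.Tasks;

public static class CepModuleClient
{
    // Note the plural "modules" segment, contrary to the documentation.
    public static async Task DeleteModuleAsync(HttpClient client, string baseUrl, string moduleId)
    {
        HttpResponseMessage response =
            await client.DeleteAsync(baseUrl + "/cep/modules/" + moduleId);
        response.EnsureSuccessStatusCode();
    }
}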
I am trying to synchronize an Outlook folder (say the Inbox) using the beta version of the Outlook REST API; see the doc here.
I only need to retrieve the IsRead property and the PR_INTERNET_MESSAGE_ID.
So, following the documentation, my requests for the first synchronization look like this.
The following HTTP headers are always added:
request.Headers.Add("Prefer", "odata.track-changes");
request.Headers.Add("Prefer", "odata.maxpagesize=5"); //Use a small page size easier for debugging
1. The initial full synchronization request:
https://outlook.office365.com/api/beta/Me/MailFolders('inbox')/messages?$select=IsRead&$expand=SingleValueExtendedProperties($filter=(PropertyId eq 'String 0x1035'))
Good results: the value array contains what I need.
2. The second request, after the first, uses the deltatoken:
https://outlook.office365.com/api/beta/Me/MailFolders('inbox')/messages?$select=IsRead,Subject&$expand=SingleValueExtendedProperties($filter=(PropertyId eq 'String 0x1035'))&$deltatoken=a758b90491954a61ad463ef3a0e690a2
Bad results: no SingleValueExtendedProperties entries.
3. Subsequent requests for pagination, with the skiptoken:
https://outlook.office365.com/api/beta/Me/MailFolders('inbox')/messages?$select=IsRead,Subject&$expand=SingleValueExtendedProperties($filter=(PropertyId eq 'String 0x1035'))&$skiptoken=e99ad10324464488b6b219ca5ed6be1c
Bad results again, the same as in 2.
It looks like a bug to me. Can you provide a workaround? From a list of ItemIds, is it possible to retrieve the corresponding PR_InternetMessage_Id values easily and efficiently (not item by item)?
Note also that the documentation says:
The response will include a Preference-Applied: odata.track-changes header. If you attempt to sync a resource that is not supported, this header will not be returned in the response. Check for this header before processing the response to avoid errors.
It seems that for calls 2 and 3 this "Preference-Applied" response header is not set.
The sync functionality today doesn't support extended properties. However, we are working to enable this and it should start working in a few weeks.
EDIT:
For a workaround for the very special case of PR_INTERNET_MESSAGE_ID, look at the comment below.
I've developed a method that does the following steps, in this order:
1) Get a report's metadata via /gdc/md/{project-id}/obj/{object-id}
2) From that, get the report definition and use that as payload for a call to /gdc/xtab2/executor3
3) Use the result from that call as payload for a call to /gdc/exporter/executor
4) Perform a GET on the returned URI to download the generated CSV
So this all works, but the problem is that I often get back a blank or incomplete CSV. My workaround has been to put a sleep() between getting the URI back and actually calling GET on it. However, as our data grows I have to keep increasing the delay, and even then there is no guarantee that I get complete data. Is there a way to make sure the report has finished exporting data to the file before calling the URI?
The problem is that the export runs as an asynchronous task. The result at the URL returned in the payload of the POST to /gdc/exporter/executor (in the form /gdc/exporter/result/{project-id}/{result-id}) only becomes available once the exporter task finishes its job.
If the task is not done yet, a GET to /gdc/exporter/result/{project-id}/{result-id} should return status code 202, which means "we are still exporting, please wait".
So you should periodically poll the result URL until it returns status 200 with the payload (or 40x/50x if something went wrong).
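A minimal sketch of such a polling loop (the two-second delay is just an example; tune the retry policy to taste):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class ExportPoller
{
    // Polls the exporter result URL until the CSV is ready.
    public static async Task<byte[]> DownloadWhenReadyAsync(HttpClient client, string resultUri)
    {
        while (true)
        {
            HttpResponseMessage response = await client.GetAsync(resultUri);
            if (response.StatusCode == HttpStatusCode.Accepted)   // 202: still exporting
            {
                await Task.Delay(TimeSpan.FromSeconds(2));        // wait, then poll again
                continue;
            }

            response.EnsureSuccessStatusCode();                   // 40x/50x -> throw
            return await response.Content.ReadAsByteArrayAsync(); // 200: the finished CSV
        }
    }
}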
I have seen several posts addressing this issue, or issues similar to it, for requests or GETs. I am not having this problem getting data from the server; it's solely on the POST.
The errors I get are:
The JSON request was too large to be deserialized.
or
Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property. Parameter name: input
I haven't been able to consistently determine which actions result in which error, but it is predominantly the latter.
In an effort to get the configured max JSON length, in the Index method of the controller I read the value and dump it into a ViewBag to write to the console on the client side. Every time it comes back as 102400 (100k).
If I reduce the data payload size and still serialize it as before, I get no errors.
In Fiddler I can inspect the payload, and all the JSON is deserializable, so I don't see an issue with my JSON. Additionally, if I console.log(data), Chrome sees no problems with it either.
The VM in the controller is the same for both POST and GET, except there is more data with the POST than the GET. To test this I got a huge data set from the server: GeoJSON data for all 50 states. The following was the result:
GET Content-Length: 3229309 return 200
POST Content-Length: 2975244 return 500
The POST failed in this scenario and returned the second error listed previously.
I only changed the data minimally (one string) and don't know why it is smaller when sent back, but the JSON for the GET and the POST is virtually identical.
I've tried changing the web.config file:
<system.web.extensions>
<scripting>
<webServices>
<jsonSerialization maxJsonLength="2147483644"/>
</webServices>
</scripting>
</system.web.extensions>
I just added this to the end of my config file, just prior to the closing </configuration> tag.
I've also added a parameter in Settings.config:
<add key="aspnet:MaxJsonDeserializerMembers" value="2147483644" />
I have also verified that this param loads as part of the application settings in IIS.
Is there something else I can change to allow these large data sets to be sent in a POST?
As a last resort, I was going to pull all of the GeoJSON data out of the POST. However, when a user navigates back without having changed what they were mapping, we'd have to fetch all the GeoJSON data again, causing undue work on the server. I thought that fetching it only once would be best from an efficiency perspective.
I struggled with this too. Nothing I changed in web.config helped, despite several SO answers looking relevant; they helped with returning large JSON data, but the large JSON POST kept failing. In the end I found this: increase maxJsonLength for JSON POST, and used the solution there; it worked for me.
Quoting from there:
The MVC JSON serializer does not look at the web.config to get the max length (that's for ASP.NET web services). You need to use your own serializer: you override ExecuteResult and supply your own JSON serializer. To override the input, create a new JsonValueProviderFactory, then override ValueProvider in the controller to return your new JSON factory when it's a JSON request.
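For the output side, here is a sketch of what such an ExecuteResult override can look like, using Json.NET (which does not enforce maxJsonLength) in place of JavaScriptSerializer. This is a reconstruction of the approach, not the linked answer verbatim:

using System;
using System.Web.Mvc;
using Newtonsoft.Json;

// JsonResult replacement that serializes with Json.NET instead of
// JavaScriptSerializer, sidestepping the maxJsonLength limit.
public class LargeJsonResult : JsonResult
{
    public override void ExecuteResult(ControllerContext context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        var response = context.HttpContext.Response;
        response.ContentType = string.IsNullOrEmpty(ContentType)
            ? "application/json"
            : ContentType;

        if (Data != null)
            response.Write(JsonConvert.SerializeObject(Data));
    }
}

A controller action would then return new LargeJsonResult { Data = model }; instead of Json(model).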