"JSONValue failed" Error while fetching data from server into iPhone - iphone

I am fetching data from a server into my iPhone app.
To fetch the data from the server I am using an HTTP POST, and to parse the data obtained I am using the SBJSON parser.
The first time my app launches, the data is not fetched.
It shows the following failure log in the Console. The app does not crash; it's just that the data is not fetched.
<html>Your request timed out.
Please retry the request. </html>
2011-04-21 08:39:06.339 Hive[1594:207] -JSONValue failed. Error trace is: (
"Error Domain=org.brautaset.JSON.ErrorDomain Code=3 \"Unrecognised leading character\" UserInfo=0x4cabe90 {NSLocalizedDescription=Unrecognised leading character}"
)
The app fetches data properly from the second time onwards. It only gives this error when the app runs the first time.
What could be wrong?

Without analysis of the server and its resources it is difficult to determine why the server is taking too long to respond.
One thing to consider is how much time passes between the last time you make the JSON attempt and the next time you make your "first attempt". Then see if you can recreate the problem using a web browser.
Is the server a production-quality server? If not, it may be "spinning up" to answer the first request, which takes too long for the first response.
Personally, I wrote a generic JSON feed class that has a failure-retry option. If it receives nothing or invalid JSON, it retries x times at y-second intervals based on what you pass it. It takes a little more work initially, but it pays off for two reasons (a rough sketch of the idea follows the two points below).
1) It can be reused over and over, and an update, such as using ASIHTTPRequest per Terente's good suggestion, can be made in a single file.
2) While you may not expect a response to fail, server slowness or network issues can occur, causing a flawed response.
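A rough, language-agnostic sketch of that retry idea, written here in Python (the function name and the default retry counts are illustrative, not the original class):

import json
import time
import urllib.request

def fetch_json_with_retry(url, retries=3, delay=5):
    """Fetch a URL and parse the body as JSON, retrying on empty or invalid responses."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=30) as response:
                body = response.read()
            if body:
                return json.loads(body)  # raises ValueError on non-JSON, e.g. an HTML timeout page
        except (OSError, ValueError):
            pass  # treat network failures and parse failures the same way: retry
        if attempt < retries:
            time.sleep(delay)  # wait `delay` seconds between the `retries` attempts
    return None  # all attempts failed; the caller decides what to do next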

You could use ASIHTTPRequest and, if you get a timeout, make a new request to the server.

Related

Facebook API - reduce the amount of data you're asking for, then retry your request for 1 row

I have the following logic for my ad insights request:
If Facebook asks me to reduce the amount of data I'm requesting, I halve the date range. If the date range cannot shrink any further, I halve the limit.
It gets to the point I send this request:
https://graph.facebook.com/v3.2/{account}/insights?level=ad&time_increment=1&limit=1&time_range={"since":"2019-03-29","until":"2019-03-29"}&breakdowns=country&after=MjMwNwZDZD
But I still get that error:
Please reduce the amount of data you're asking for, then retry your request
There is no more reducing I can do.
Note that this only happens sometimes.
One way to avoid the error, once you are down to requesting a single item (limit=1), is to start splitting the fields and request half of the fields in each request, as in the sketch below.
Another way is to run an async report, which should not have such a low time limit.
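A rough sketch of the field-splitting fallback in Python (the helper names are invented, and matching on the error message text rather than on Facebook's error codes is a simplification):

import requests  # third-party HTTP client, assumed to be installed

GRAPH = "https://graph.facebook.com/v3.2"

def is_reduce_error(payload):
    """True when the response is the 'reduce the amount of data' error."""
    message = payload.get("error", {}).get("message", "")
    return "reduce the amount of data" in message

def fetch_insights(account_id, params, fields):
    """Fetch insights; if Facebook still complains at limit=1, request half the fields per call."""
    resp = requests.get(
        f"{GRAPH}/{account_id}/insights",
        params={**params, "fields": ",".join(fields)},
    )
    payload = resp.json()
    if not is_reduce_error(payload):
        return payload.get("data", [])
    if len(fields) > 1:
        mid = len(fields) // 2  # split the field list and fetch each half separately
        return (fetch_insights(account_id, params, fields[:mid])
                + fetch_insights(account_id, params, fields[mid:]))
    raise RuntimeError("Request cannot be reduced any further; switch to an async report")

The partial result sets for the two halves of the fields then have to be merged per row (for example by ad id and date); an asynchronous report avoids the problem entirely because it runs under a much longer time limit.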
Official Facebook API team response:
It looks like you are requesting a lot of fields, this is likely the
cause of this error. This will cause the request to time-out.
Could you try using asynchronous requests as described here:
https://developers.facebook.com/docs/marketing-api/insights/best-practices/#asynchronous?
Async requests have a much longer time limit, this will likely resolve
your issue.

OutSystems e-mail fails after 100s

I'm using the OutSystems platform and recently I'm getting a timeout from a periodic e-mail. The timer responsible for this action has a 20-minute timeout, but the timer fails after 100s.
Sometimes the timer executes in 99s and the process finishes successfully.
The error:
OutSystems.HubEdition.RuntimePlatform.EmailException: Error creating Email. The operation has timed out
How can I change this behavior to extend this 100s timeout?
You can increase the timeout setting on the Aggregate / Advanced query that you are using to retrieve the data.
Improving the query is always first prize, but increasing the timeout could buy you some time.
UPDATE
According to the OutSystems documentation you cannot set the timeout for email rendering. You would have to speed up the rendering.
You could perhaps split your logic into an action that executes the query and stores the result for quick retrieval during the email preparation.
Probably the issue you're having is that the email is taking too long to render. You can check if this is the case by looking at the error log in Service Center. You should see something like:
Error creating Email. The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at OutSystems.HubEdition.RuntimePlatform.Email.EmailHelper.HttpGetContent(String ssUrl, String method, String contentType, String userAgent, Cookie cookie, QueryParameter[] parameters, String& ssContent, String& ssContentEncoding)
If this is the case, you need to optimize the email in order to render it faster. One good place to start looking is the Slow Queries report; maybe you have some long-running query that's slowing down your email rendering...
Best of luck! If you want more details, you can check this community post.

REST API: how to notify a client that the request has failed when the service already returned 200 and some data?

What I am doing?
I am developing a REST Web service that returns data from two sources:
A CSV file from an HTTP server, which changes often and is sometimes huge.
A local file.
When a client invokes the service, it does this:
It sends a request to the HTTP server to obtain the CSV file.
After obtaining the CSV file, it combines the data from both sources.
Sends the result to the client. The result is an XML document.
Problem
Sometimes, after I have already returned some data to the client, the HTTP server fails so I cannot continue sending data to the client.
When this happens, I would like to notify the client that there was an error. How should I do this? The service already returned the HTTP code 200 and some data. So I cannot send the client an error 500.
Should I simply write an error message to the output? The client will fail because the XML document will not be valid.
The service cannot wait to send the response until the entire file from the HTTP server has been read. The reason is that the file obtained from the HTTP server is sometimes very big and does not fit in memory.
Environment: although I do not think this is important, this service is developed in Jersey 1.x.
As you say, there are a couple options:
Start sending the 200 OK response before your upload request is complete, but rely on the client to detect an invalid content response; or
Wait until your file upload request is complete before sending the HTTP response. Then you can send the correct status code (2xx or 500).
I would recommend waiting until the upload is complete.
If the file cannot fit in server-side memory, then find a technique to write the stream to persistent storage rather than memory, such as a cache, a NoSQL database, or the filesystem. This will allow for faster processing of the file upload.
If you require additional time to process the file on the server side after the upload, you can return a 202 Accepted status with the Location: header pointing to the resource for the long-running job. The client can keep checking whether the job is complete. This avoids having to process the whole thing in one HTTP round trip.
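A rough client-side sketch of that 202 Accepted + Location polling pattern in Python (the jobs URL and the status field names are illustrative, not part of any specific API):

import time
import requests  # third-party HTTP client, assumed to be installed

def submit_and_wait(jobs_url, payload, poll_interval=2.0):
    """Submit a long-running job and poll the Location URL until it finishes."""
    resp = requests.post(jobs_url, json=payload)
    resp.raise_for_status()
    job_url = resp.headers["Location"]  # 202 Accepted points at the job resource

    while True:
        job = requests.get(job_url).json()
        if job.get("status") in ("complete", "failed"):  # field name is illustrative
            return job
        time.sleep(poll_interval)  # keep polling; the job can take as long as it needs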
Some good examples of RESTful long-running operations:
Best practice for implementing long-running searches with REST
http://billhiggins.us/blog/2011/04/27/resty-long-ops/
REST with JAX-RS - Handling long running operations
Replying to myself. This may be useful for someone else.
Initially, I developed this option: if there was an error generating the output of the service after the HTTP code 200 had already been sent, the service would write the error message to the output and close the connection. In these cases, the XML of the response was invalid.
Later, I had to change this behavior because users complained that in this scenario the response was invalid XML. As a consequence, all they were seeing was the error returned by the XML parser of their applications saying that the XML was invalid, not the actual error message.
To avoid this issue, I changed the behavior of the service:
When there are no errors, the response looks like this:
<view name="demo_stats">
  <demo_stats>
    <int_type>1</int_type>
    <numeric_type>1.1</numeric_type>
  </demo_stats>
  <demo_stats>
    <int_type>2</int_type>
    <numeric_type>2.2</numeric_type>
  </demo_stats>
</view>
If there is an error generating the output of the service and the service already sent the HTTP code 200, the response looks like this:
<view name="demo_stats">
  <demo_stats>
    <int_type>1</int_type>
    <numeric_type>1.1</numeric_type>
  </demo_stats>
  <demo_stats>
    <int_type>2</int_type>
    <numeric_type>2.2</numeric_type>
  </demo_stats>
  <errors>
    <error>There was an error transforming the value of row #3</error>
  </errors>
</view>
The errors element is optional and only appears when there is an error during the generation of the output. This is a valid XML document, and it allows client applications to handle this situation more gracefully.
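A minimal sketch of how such a response can be produced while streaming, so the errors element can still be appended if row generation fails part-way (Python here for brevity; the row format and error handling are simplified for illustration):

from xml.sax.saxutils import escape, quoteattr

def write_view(out, name, rows):
    """Stream a <view> document row by row; append <errors> if generation fails part-way."""
    out.write(f"<view name={quoteattr(name)}>")
    try:
        for row in rows:  # `rows` is any iterator and may raise mid-stream
            out.write("<demo_stats>")
            for tag, value in row.items():
                out.write(f"<{tag}>{escape(str(value))}</{tag}>")
            out.write("</demo_stats>")
    except Exception as exc:  # e.g. the upstream CSV source failed
        out.write(f"<errors><error>{escape(str(exc))}</error></errors>")
    out.write("</view>")  # the document stays well-formed either way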

NSMutableURLRequest on succession of another NSMutableURLRequest's success

Basically, I want to implement SYNC functionality: if no internet connection is available, data gets stored in a local SQLite database. Whenever an internet connection becomes available, the SYNC kicks in.
Now, say for example, 5 records are stored locally and then an internet connection becomes available. I want the server to be updated, so what I do currently is:
Post the first record to the server.
Wait for the success of the first request.
Post a local NSNotification to a routine saying that the first record has been updated on the server and the second request can go.
The routine fires the second POST request at the server, and so on...
Question: Is this approach right and efficient enough to implement SYNC functionality, or is there anything I should change?
NOTE: There is no limit on the number of records to be synced.
Well it depends on the requirements on the data that you save. If it is just for backup then you should be fine.
If the 5 records are somehow dependent on each other and you need to access this data from another device/application you should take care on the server side that either all 5 records are written or none. Otherwise you will have an inconsistent state if only 3 get written.
If other users are also reading / writing those data concurrently on the server, then you need to implement some kind of lock on all records before writing, and also decide how to handle conflicts when someone attempts to overwrite somebody else's changes.
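For reference, a compact sketch of the sequential-upload approach described in the question, in Python rather than Objective-C (the endpoint URL and record handling are placeholders):

import requests  # third-party HTTP client, assumed to be installed

SYNC_URL = "https://example.com/api/records"  # placeholder endpoint

def sync_pending(records):
    """Upload locally stored records one at a time, stopping at the first failure."""
    synced = []
    for record in records:
        try:
            resp = requests.post(SYNC_URL, json=record, timeout=30)
            resp.raise_for_status()
        except requests.RequestException:
            break  # connection dropped again: keep the remaining records queued locally
        synced.append(record)  # only now is it safe to delete this row from SQLite
    return synced  # the caller removes these records from the local database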

SqlBulkCopy unusual TimeOut Error

I have a SqlBulkCopy operation that is taking data from an MS-Access 2007 database (via OleDbConnection) and using SqlBulkCopy to transfer that data to a SQL Server database. This has previously been working and continues to work for one MS-Access database, but not the other.
I get the error message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
It is hard to believe it is a timeout, as oledbCommand.CommandTimeout = 0, sqlBulkCopy.BulkCopyTimeout = 0, and on either side (MS-Access and SQL Server) the timeouts have now been set to 0.
Are there other issues/exceptions that the above error message could be hiding? Is there a way to determine the root cause of a sqlBulkCopy.WriteToServer exception (there don't appear to be any inner exceptions, etc.)?
So the issue was that there were dates being transferred, and some of those dates were invalid for SQL Server but valid in Access. For whatever reason this was presenting as a timeout rather than "invalid date/time" - though if you reduce the data being transferred to a handful of rows (200) rather than the full transfer (500,000), it reports as invalid date/time ... curious.
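One way to catch this class of problem before the bulk copy is to scan the source rows for values that SQL Server's DATETIME type cannot hold (its minimum is 1753-01-01, while Access accepts much earlier dates). A rough Python sketch, assuming the Microsoft Access ODBC driver and pyodbc are available, with placeholder table and column names:

import datetime
import pyodbc  # assumes the Microsoft Access ODBC driver is installed

SQL_DATETIME_MIN = datetime.datetime(1753, 1, 1)  # lower bound of SQL Server's DATETIME type

def find_bad_dates(mdb_path, table, date_column, key_column="ID"):
    """Return key values of rows whose date falls outside SQL Server's DATETIME range."""
    conn = pyodbc.connect(
        r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=" + mdb_path
    )
    bad_keys = []
    cursor = conn.cursor()
    for key, value in cursor.execute(f"SELECT [{key_column}], [{date_column}] FROM [{table}]"):
        if value is not None and value < SQL_DATETIME_MIN:
            bad_keys.append(key)  # this row would fail (or stall) the bulk copy
    conn.close()
    return bad_keys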