OutSystems e-mail fails after 100s - email

I'm using the OutSystems platform and recently I'm getting a timeout from a periodic e-mail. The timer responsible for this action has a 20-minute timeout, but it fails after 100 seconds.
Sometimes the timer executes in 99 seconds and the process finishes successfully.
The error:
OutSystems.HubEdition.RuntimePlatform.EmailException: Error creating Email. The operation has timed out
How can I change this behavior to extend this 100-second timeout?

You can increase the timeout setting on the Aggregate / Advanced query that you are using to retrieve the data.
Improving the query is always first prize, but increasing the timeout could buy you some time.
UPDATE
According to the OutSystems documentation you cannot set the timeout for email rendering. You would have to speed up the rendering.
You could perhaps split your logic into an action that executes the query and stores the result for quick retrieval during the email preparation.

Probably the issue you're having is that the email is taking too long to render. You can check if this is the case by looking at the error log in Service Center. You should see something like:
Error creating Email. The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at OutSystems.HubEdition.RuntimePlatform.Email.EmailHelper.HttpGetContent(String ssUrl, String method, String contentType, String userAgent, Cookie cookie, QueryParameter[] parameters, String& ssContent, String& ssContentEncoding)
If this is the case, you need to optimize the email in order to render it faster. One good place to start looking is the Slow Queries report; maybe you have some long-running query that's slowing down your email rendering...
Best of luck! If you want more details, you can check this community post.

Related

Facebook API - reduce the amount of data you're asking for, then retry your request for 1 row

I have the following logic for my ad insights request:
If Facebook asks me to reduce the amount of data I'm requesting, I halve the date range. If the halved date range is the same as before, I halve the limit.
It gets to the point I send this request:
https://graph.facebook.com/v3.2/{account}/insights?level=ad&time_increment=1&limit=1&time_range={"since":"2019-03-29","until":"2019-03-29"}&breakdowns=country&after=MjMwNwZDZD
But I still get that error:
Please reduce the amount of data you're asking for, then retry your request
There is no more reducing I can do.
Note that this only happens sometimes.
One way to avoid the error, once you are down to requesting a single item (limit=1), is to start splitting the fields and request half of the fields in each request, as sketched below.
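Here is a rough sketch of that field-splitting idea in Go, purely for illustration; fetchInsights is a hypothetical helper that would perform the insights GET and report whether Facebook answered with the "reduce the amount of data" error:

package main

import "fmt"

// fetchInsights is a stand-in for the actual Graph API call; it would send a
// GET to /{account}/insights with the given fields and report whether Facebook
// refused the request because it asked for too much data.
func fetchInsights(fields []string) (tooMuchData bool, err error) {
    // ... build the request URL with the fields joined by commas, send it ...
    return false, nil
}

func fetchWithFieldSplitting(fields []string) error {
    tooMuch, err := fetchInsights(fields)
    if err != nil {
        return err
    }
    if !tooMuch {
        return nil // request succeeded as-is
    }
    if len(fields) == 1 {
        return fmt.Errorf("cannot split further: %s", fields[0])
    }
    // Split the field list in half and request each half separately;
    // the caller merges the two result sets afterwards.
    mid := len(fields) / 2
    if err := fetchWithFieldSplitting(fields[:mid]); err != nil {
        return err
    }
    return fetchWithFieldSplitting(fields[mid:])
}

func main() {
    fields := []string{"impressions", "clicks", "spend", "reach"}
    _ = fetchWithFieldSplitting(fields)
}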
Another way is to run an async report, which should not have such a low time limit.
Official Facebook API team response:
It looks like you are requesting a lot of fields, this is likely the
cause of this error. This will cause the request to time-out.
Could you try using asynchronous requests as described here:
https://developers.facebook.com/docs/marketing-api/insights/best-practices/#asynchronous?
Async requests have a much longer time limit, this will likely resolve
your issue.
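For what it's worth, a minimal sketch of that asynchronous flow in Go might look like the following. It assumes the report-run endpoints from the best-practices page linked above (POST to /{account}/insights to start a run, poll the returned report_run_id until async_status is "Job Completed", then read /{report_run_id}/insights); the account id and access token are placeholders and error handling is trimmed:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "time"
)

const graph = "https://graph.facebook.com/v3.2"

func main() {
    account, token := "act_123", "ACCESS_TOKEN" // placeholders

    // 1. Start the report run (POSTing the insights request makes it asynchronous).
    form := url.Values{
        "level":          {"ad"},
        "time_increment": {"1"},
        "breakdowns":     {"country"},
        "access_token":   {token},
    }
    resp, err := http.PostForm(graph+"/"+account+"/insights", form)
    if err != nil {
        panic(err)
    }
    var run struct {
        ReportRunID string `json:"report_run_id"`
    }
    json.NewDecoder(resp.Body).Decode(&run)
    resp.Body.Close()

    // 2. Poll the report run until it finishes.
    for {
        r, err := http.Get(graph + "/" + run.ReportRunID + "?access_token=" + token)
        if err != nil {
            panic(err)
        }
        var status struct {
            AsyncStatus string `json:"async_status"`
        }
        json.NewDecoder(r.Body).Decode(&status)
        r.Body.Close()
        if status.AsyncStatus == "Job Completed" {
            break
        }
        time.Sleep(5 * time.Second)
    }

    // 3. Fetch the finished results (paginate as usual).
    res, err := http.Get(graph + "/" + run.ReportRunID + "/insights?access_token=" + token)
    if err != nil {
        panic(err)
    }
    defer res.Body.Close()
    fmt.Println("report ready:", res.Status)
}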

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing id's, and you're set.
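As a concrete illustration of "handle PUT to non-existing id's", here is a minimal Go sketch (the route and in-memory store are made up for the example; a real backend would use a database with a unique key):

package main

import (
    "encoding/json"
    "net/http"
    "strings"
    "sync"
)

var (
    mu    sync.Mutex
    store = map[string]json.RawMessage{}
)

func putObject(w http.ResponseWriter, r *http.Request) {
    id := strings.TrimPrefix(r.URL.Path, "/objects/")
    var body json.RawMessage
    if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
        http.Error(w, "bad json", http.StatusBadRequest)
        return
    }

    mu.Lock()
    _, existed := store[id]
    store[id] = body // PUT is idempotent: repeating it yields the same state
    mu.Unlock()

    if existed {
        w.WriteHeader(http.StatusOK) // retry of an earlier, successful PUT
    } else {
        w.WriteHeader(http.StatusCreated) // first time we see this client-generated ID
    }
}

func main() {
    http.HandleFunc("/objects/", putObject)
    http.ListenAndServe(":8080", nil)
}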
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI (and therefore PUT to it), so POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of returning an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: Server applications should store all responses to unsafe requests, then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTs do not create duplicates. Repeated updates do not overwrite subsequent changes, and so on.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if anyone cares.
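A minimal sketch of this store-and-replay pattern, assuming the client sends its action identifier in a header (the header name and route here are invented for the example, and a production version would also guard against concurrent in-flight duplicates):

package main

import (
    "io"
    "net/http"
    "sync"
)

type savedResponse struct {
    status int
    body   []byte
}

var (
    mu     sync.Mutex
    replay = map[string]savedResponse{}
)

func unsafeHandler(process func(*http.Request) (int, []byte)) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        id := r.Header.Get("X-Request-Id") // client- or server-issued action id
        if id == "" {
            http.Error(w, "missing request id", http.StatusBadRequest)
            return
        }

        mu.Lock()
        if prev, ok := replay[id]; ok { // duplicate: replay the stored response, do nothing else
            mu.Unlock()
            w.WriteHeader(prev.status)
            w.Write(prev.body)
            return
        }
        mu.Unlock()

        status, body := process(r) // perform the unsafe operation once

        mu.Lock()
        replay[id] = savedResponse{status, body}
        mu.Unlock()

        w.WriteHeader(status)
        w.Write(body)
    }
}

func main() {
    http.HandleFunc("/orders", unsafeHandler(func(r *http.Request) (int, []byte) {
        io.Copy(io.Discard, r.Body) // stand-in for actually creating the order
        return http.StatusCreated, []byte(`{"order":"created"}`)
    }))
    http.ListenAndServe(":8080", nil)
}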
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
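A rough sketch of the two-step approach, with the token simply being the new object's ID and the routes invented for illustration:

package main

import (
    "fmt"
    "net/http"
    "sync"
)

type object struct {
    staged bool
}

var (
    mu      sync.Mutex
    nextID  int
    objects = map[int]*object{}
)

// Step 1: create the object in a "staged" state and hand back a token.
func create(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    nextID++
    id := nextID
    objects[id] = &object{staged: true}
    mu.Unlock()
    fmt.Fprintf(w, "%d", id) // the token the client must present later
}

// Step 2: the client presents the token; the object leaves the staged state.
// Repeating this step is harmless, so a lost response can simply be retried.
func confirm(w http.ResponseWriter, r *http.Request) {
    var id int
    fmt.Sscanf(r.URL.Query().Get("token"), "%d", &id)
    mu.Lock()
    obj, ok := objects[id]
    if ok {
        obj.staged = false
    }
    mu.Unlock()
    if !ok {
        http.NotFound(w, r)
        return
    }
    w.WriteHeader(http.StatusOK)
}

func main() {
    http.HandleFunc("/objects", create)
    http.HandleFunc("/objects/confirm", confirm)
    // A background job could periodically delete objects still staged after
    // some timeout, as described above.
    http.ListenAndServe(":8080", nil)
}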
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking a customer to solve this problem for us. Consider an environment where we have publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to have some intelligence for recognising whether a request is a duplicate. One simple approach is to calculate and store a checksum of every request based on the attributes of the user input, define a time threshold (x minutes), and compare every new request from the same client against those received in the past x minutes. If the checksums match, it could be a duplicate request, so add some challenge mechanism for the client to resolve it.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if they come with unique request ids.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost a customer. Consider a payment processing engine where an intermediate layer ends up retrying a failed request, or a customer double-clicks, resulting in the client layer submitting two requests.
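A small sketch of that checksum guard in Go, with the window length, the hash inputs, and the decision of what the "challenge" looks like all standing in for application-specific choices:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "sync"
    "time"
)

const window = 5 * time.Minute // the "x minutes" threshold

var (
    mu   sync.Mutex
    seen = map[string]time.Time{} // checksum -> time first seen
)

// checksum combines the client identity and the user-supplied attributes.
func checksum(clientID string, attrs ...string) string {
    h := sha256.New()
    h.Write([]byte(clientID))
    for _, a := range attrs {
        h.Write([]byte{0})
        h.Write([]byte(a))
    }
    return hex.EncodeToString(h.Sum(nil))
}

// isDuplicate reports whether an identical request was seen within the window.
func isDuplicate(clientID string, attrs ...string) bool {
    key := checksum(clientID, attrs...)
    now := time.Now()

    mu.Lock()
    defer mu.Unlock()

    // Expire old entries lazily.
    for k, t := range seen {
        if now.Sub(t) > window {
            delete(seen, k)
        }
    }
    if _, ok := seen[key]; ok {
        return true // possible duplicate: trigger the challenge mechanism
    }
    seen[key] = now
    return false
}

func main() {
    _ = isDuplicate("client-42", "payment", "99.95", "EUR")
}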
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
REST arrives with UUID
Web server checks if the UUID is in the Memory Cache blacklist table (if yes, answer 409)
Server writes the request to the DB (if it was not filtered by the memory cache / ETS)
DB checks if the UUID is repeated before writing
If yes, answer 409 to the server, and add the UUID to the Memory Cache and disk blacklist
If not repeated, write to the DB and answer 200
Algorithm [solution 2]
REST arrives with UUID
Save the UUID in the Memory Cache table (expires after 30 days)
Web server checks if the UUID is already in the Memory Cache blacklist table [return HTTP 409]
Server writes the request to the DB [return HTTP 200]
In solution 2, the Memory Cache blacklist is maintained ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that comes in within a period of time". We also replicate the Memory Cache table on disk, so we can fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk exactly once before writing, and if it's a duplicate, the following round trips are handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it's also less optimized.
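A minimal Go sketch of solution 2's in-memory check, with the header name illustrative and the DB write plus the disk replication of the cache left as stubs:

package main

import (
    "net/http"
    "sync"
    "time"
)

const ttl = 30 * 24 * time.Hour // "expires after 30 days"

var (
    mu   sync.Mutex
    seen = map[string]time.Time{} // UUID -> time added
)

func handle(w http.ResponseWriter, r *http.Request) {
    uuid := r.Header.Get("X-Request-UUID") // illustrative header name

    mu.Lock()
    added, dup := seen[uuid]
    if dup && time.Since(added) < ttl {
        mu.Unlock()
        w.WriteHeader(http.StatusConflict) // 409: duplicate within the window
        return
    }
    seen[uuid] = time.Now() // record (or refresh) the UUID
    mu.Unlock()

    // writeToDB(r) would go here; on success:
    w.WriteHeader(http.StatusOK)
}

func main() {
    http.HandleFunc("/", handle)
    http.ListenAndServe(":8080", nil)
}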

Query URL without redirect in Go

I am writing a benchmark test for a redirect script.
I wish my program to query a certain URL that redirects to the App Store, but I do not wish to download the App Store page. I just wish to log the redirect URL or the error.
How do I tell Go to query the URL without making the second, redirected request?
UPDATE
Both answers are correct BUT:
I tried both solutions. I am doing benchmarking.
I run one or many Go processes with 10-500 goroutines each. They query the URL in a loop.
My server is also written in Go. It reports the number of requests every second.
First solution: http.DefaultTransport.RoundTrip works slowly and gives errors.
The first 4 seconds work fine, making 300-500 queries, then performance drops to 80 queries per second.
Then it drops to 0-5 queries per second and the querying script starts getting errors like
dial tcp IP:80: A connection attempt failed because the connected
party did not properly respond after a period of time, or established
connection failed because connected host has failed to respond.
I guess it re-uses a connection that has been closed.
Second solution: the CheckRedirect field works with constant performance. I am not sure if it re-uses connections or opens a new connection for every request. I create a client for every request in the loop, because that is how it will behave in real life (every request is a new connection). Is there a way to ensure that connections are closed after each query and not re-used?
That is why I am going to mark the second solution as the one that answers my question. But for my research it is very important that each query is a new connection. How can I ensure this with the second solution?
You need to use an http.Transport instead of an http.Client. Transport is lower-level and does not follow redirects.
req, err := http.NewRequest("GET", "http://example.com/redirectToAppStore", nil)
// ...
resp, err := http.DefaultTransport.RoundTrip(req)
For completeness' sake, you can use an http.Client and not follow redirects. http.Client has a CheckRedirect field which is a function. It is called before following any redirection.
If this function returns an error, then httpClient.Do(...) will not follow the redirect (see the doFollowingRedirects() function in Go's source code) and will instead return an error (its concrete type will be url.Error, and its URL field will be the redirect-to URL, i.e. the Location header value; see this code).
You can see my gocrawl library for a concrete example of this use.
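For later readers, a small sketch combining the CheckRedirect approach with a transport that does not reuse connections (addressing the "new connection per request" concern in the update); note that http.ErrUseLastResponse was only added in Go 1.7, after this answer was written:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    client := &http.Client{
        // Stop at the first redirect and return it to the caller instead of following it.
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse
        },
        // Open a fresh connection for every request instead of keeping idle
        // connections around for reuse.
        Transport: &http.Transport{DisableKeepAlives: true},
    }

    resp, err := client.Get("http://example.com/redirectToAppStore")
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    defer resp.Body.Close()

    // For a 3xx response, the Location header holds the redirect target.
    fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}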

How to guard against repeated request?

We have a button in a web game for users to collect a reward. It should only be clicked once, and upon receiving the request we'll mark it as collected in the DB.
We've already blocked the button in the client from repeated clicking, but that won't help if people resend the request multiple times to our server in a short period of time.
What I want is a method to block this on the server side.
We're using Play Framework 2 (2.0.3-RC2) for the server side and so far it's stateless. I'm tempted to use a Set as a guard, like this:
if processingSet has userId then BadRequest
else put userId in processingSet and handle request
after that remove userId from that Set
but then I'd have to face problems like updating Scala collections thread-safely, and it would still fail to block the user once we have more than one server behind load balancing.
One possibility I'm thinking about is to have a table in the DB in place of the processingSet above, but that would incur one or more extra DB operations per request. Are there any better solutions?
Thanks!
An additional DB operation is a relatively 'cheap' solution in that case. You should use it if you're planning to save the button's state permanently.
If the button is disabled only for some period of time (for example, until the game is over), you can also consider using the Cache API; however, keep in mind that it's not intended for data that should be stored for a long time (it should not be considered a DB alternative).
Given that you're using Mongo and so don't have transactions spanning separate collections, I think you can probably implement this guard using an atomic operation - namely "Update if current", which is effectively CompareAndSwap.
Assuming you've got a collection like "rewards" which has a "collected" attribute, you can update the collected flag to true only if it is currently false; if that operation doesn't fail, you can proceed to apply the reward, knowing that the same operation will fail for any other request.
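A hedged sketch of that "update if current" guard using the official Go MongoDB driver (which postdates this question, but shows the same atomic operation): flip collected from false to true in a single update, and grant the reward only if a document was actually modified. The database, collection, and field names are placeholders.

package main

import (
    "context"
    "fmt"
    "time"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        panic(err)
    }
    defer client.Disconnect(ctx)

    rewards := client.Database("game").Collection("rewards")
    userID := "user-123" // placeholder

    // Atomically set collected=true only if it is currently false.
    res, err := rewards.UpdateOne(ctx,
        bson.M{"userId": userID, "collected": false},
        bson.M{"$set": bson.M{"collected": true}},
    )
    if err != nil {
        panic(err)
    }

    if res.ModifiedCount == 1 {
        fmt.Println("first click: apply the reward")
    } else {
        fmt.Println("already collected: reject the duplicate request")
    }
}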

"JSONValue failed" Error while fetching data from server into iPhone

I am fetching data from server into my iPhone app.
For fetching data from server, I am using HTTP Post method and for parsing data obtained I am using SBJSON Parser.
When the first time my app launches, the data is not fetched.
It shows the following failure log in Console. The app does not crash but just that data is not fetched.
<html>Your request timed out.
Please retry the request. </html>
2011-04-21 08:39:06.339 Hive[1594:207] -JSONValue failed. Error trace is: (
"Error Domain=org.brautaset.JSON.ErrorDomain Code=3 \"Unrecognised leading character\" UserInfo=0x4cabe90 {NSLocalizedDescription=Unrecognised leading character}"
)
The app fetches data properly from the second time onwards. It only gives this error when the app runs the first time.
What could be wrong?
Without analysis of the server and its resources it is difficult to determine why the server is taking too long to respond.
One thing to think about is how much time passes between the last time you made the JSON attempt and the next time you make your "first attempt". Maybe then see if you can recreate it using a web browser.
Is the server a production quality server? If not, it may be "spinning up" to answer the first request which takes too long for the first response.
Personally, I wrote a generic JSON feed class that has a failure-retry option. If it receives nothing or invalid JSON, it will retry x times at y-second intervals based on what you pass it. It takes a little more work initially, but it will pay off for two reasons.
1) It can be reused over and over, and an update, like using ASIHTTPRequest as Terente suggests, can be made in a single file.
2) While you may not expect a response to fail, server slowness or network issues can occur, causing a flawed response.
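The retry idea sketched in Go rather than Objective-C (the URL, retry count, and interval are placeholders): fetch the feed, and if the response is missing or fails to parse as JSON, retry x times at y-second intervals before giving up.

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "time"
)

// fetchJSON retries up to `retries` times, waiting `interval` between tries.
func fetchJSON(url string, retries int, interval time.Duration) (map[string]interface{}, error) {
    var lastErr error
    for attempt := 0; attempt <= retries; attempt++ {
        if attempt > 0 {
            time.Sleep(interval)
        }
        resp, err := http.Get(url)
        if err != nil {
            lastErr = err
            continue
        }
        body, err := io.ReadAll(resp.Body)
        resp.Body.Close()
        if err != nil {
            lastErr = err
            continue
        }
        var parsed map[string]interface{}
        if err := json.Unmarshal(body, &parsed); err != nil {
            // The server returned something other than JSON (e.g. an HTML timeout page); try again.
            lastErr = fmt.Errorf("invalid JSON: %w", err)
            continue
        }
        return parsed, nil
    }
    return nil, lastErr
}

func main() {
    data, err := fetchJSON("https://example.com/feed.json", 3, 2*time.Second)
    fmt.Println(data, err)
}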
You could use ASIHTTPRequest, and if you get a timeout, try to make a new request to the server.