Whenever I move large amounts of content (roughly 300 pages) via CQ within the siteadmin interface I consistently get an error message that pops up after roughly 10-15 minutes. It seems to happen like clockwork but the pages still move successfully and the references get updated. All the error says is "An Error Occurred" and I check the logs and there are no error messages that I can find. Any ideas why this error pops up in CQ and if this is a larger problem with my instance?
In my experience this is just a JavaScript timeout. The move is triggered by an Ajax call that waits for a response, but when a few hundred pages are moved at once the response takes very long; after a few minutes the JavaScript gives up waiting for a response and throws this error.
The more CPU and memory your server has, the faster the move completes, which reduces the chance of this error popping up.
I'm working with Movesense 2.0.0 on an HR+ sensor and the latest MDSlib for Android.
I have an app that calls my custom WB services to download some data stored in the sensor's EEPROM.
Sometimes (once every few hundred calls, not always on the same endpoint, and when it happens it keeps happening for a few subsequent requests) the sensor's request handler is not called, and I get these messages from the debugger:
ERROR: SF-N invalid CRC
ERROR: SF-N frame too short
ERROR: SF-N invalid CRC
Usually if I send the request again after a few seconds it's correctly handled.
I also tried sending hundreds of thousands of requests from wbcmd through the serial port, but the error never appeared.
Is there something I can look at to troubleshoot this issue?
The CRC error indicates that the datapipe dropped or corrupted data, which was caught by the Whiteboard protocol's CRC checks. Corruption usually happens when the radio link is on the verge of dropping completely, and there really is not much that can be done.
You can read the radio link RSSI value (signal strength) on both iOS and Android to see how bad the connection is. Most of the time I don't bother; it's easier to just retry failed operations later.
Full Disclaimer: I work for the Movesense team
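The retry-later approach suggested above can be sketched generically. This is illustrative Python, not the MDSlib API; the callable and the `IOError` stand in for whatever wrapper and error type your request layer actually uses:

```python
import time

def with_retries(operation, attempts=3, base_delay=1.0):
    """Retry a flaky operation with exponential backoff.

    `operation` is any callable that raises on failure (e.g. a wrapper
    around a sensor request that surfaces a CRC/transport error).
    """
    for attempt in range(attempts):
        try:
            return operation()
        except IOError:
            if attempt == attempts - 1:
                raise  # out of retries; let the caller decide what to do
            # back off a little longer each time before retrying
            time.sleep(base_delay * (2 ** attempt))
```

Since the question says a resend after a few seconds usually succeeds, a small bounded retry like this typically absorbs the occasional CRC failure without masking a permanently dead link.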
Does the AFD cache push/replicate out to additional POPs once an initial request to a single POP has completed? I can't find anything about this in the AFD documentation.
Ultimately what I want to know is:
If an initial request occurs from the EU and is served by the EU POP (after pull-through from the origin server), and then 1h later an identical request occurs from the US, is this request a cache hit in the US POP, or a miss with a read-through?
The first request at any POP will not be served from the cache. In your example, the request sent from the US will be a MISS, and any subsequent request will be served from the cache in that POP.
We were seeing mixed behaviour on our systems, and the extract below explains how AFD functions and why we got the mixed behaviour.
Let's say that out of 100 POPs globally, we have 20 super POPs. The remaining 80 will never reach the API backend directly but will only go to super POPs. The super POPs then go to your API to pull the content. From there on, not just the initial POP but also the super POP has the cached content. Across the globe, the super POPs keep getting warmed up with the cached content. Each edge POP has a mapping to a super POP. This overall helps improve the cache hit ratio significantly and also significantly reduces the load on your application backends.
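The edge-POP / super-POP flow in that extract can be modelled with a small sketch. This is illustrative Python, not AFD code, and all names are made up:

```python
class TwoTierCache:
    """Toy model of the edge-POP / super-POP lookup described above."""

    def __init__(self, origin):
        self.origin = origin        # callable: key -> content (your API backend)
        self.super_cache = {}       # shared parent tier (super POP)
        self.edge_caches = {}       # one small cache per edge POP

    def get(self, pop, key):
        edge = self.edge_caches.setdefault(pop, {})
        if key in edge:
            return edge[key], "EDGE_HIT"
        if key in self.super_cache:
            # warm this edge POP from the super POP, no origin traffic
            edge[key] = self.super_cache[key]
            return edge[key], "SUPER_HIT"
        # only the super tier ever reaches the origin
        content = self.origin(key)
        self.super_cache[key] = content
        edge[key] = content
        return content, "MISS"
```

With this model, the first request for a key from any edge POP is a MISS at the origin, but a later identical request from a *different* edge POP can still be served by the shared super POP, which is exactly the kind of mixed hit/miss behaviour described.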
Hello fellow programmers, I wish everyone a good morning.
The Situation
Laravel is great. Laravel Mail queues and the beanstalkd integration are great. It took me almost no time to get everything working. The sun is shining and it's not raining. It's awesome.
Except when an exception is thrown while sending an email. Then that mail is processed again and again and again, and the exception is also thrown again and again and again.
Infinite loop.
I probably wouldn't even have noticed this if I hadn't seeded the database with invalid data. Validation would normally have ensured that emails like 361FlorindaMatthäi#gmail.com don't end up causing the following exception:
[Swift_RfcComplianceException]
Address in mailbox given [361FlorindaMatthäi#gmail.com] does not
comply with RFC 2822, 3.6.2.
But what validation can't take care of is, for example, my Mandrill account reaching its limits, or my server losing its internet connection, whatever. Any exception sends it into an infinite loop.
In the world where the sun is shining and everything is great, the job should be marked as buried or suspended and the next email should be processed. An infinite loop over an invalid email address is not great.
Basically, your application doesn't send out any emails anymore. This guy has roughly the same issue.
How can I fix this? Has anyone else encountered this error?
Any help is much appreciated.
You just need to tell Laravel how many times to try a specific job before deciding it has failed:
php artisan queue:daemon --tries=3
This way, it will stop processing that specific job after 3 tries.
The hard part of any queue-based system is dealing with errors; I've run tens of millions of jobs through beanstalkd and many more through other systems like SQS.
With this Swift_RfcComplianceException it's clear that the job will never be able to succeed, so trying it again would be futile.
Some other problems might be recoverable, but in either event you have to wrap the code in a try/catch block and do what you can.
Since there is no way to 'fix' this particular issue, I would record what happened (the name of the exception, any message, and the data) to a log to check on, and then delete or bury the job. If you store the job-id in the log when it is buried, you can go back and delete or kick that particular job later, after being able to change what happens to the job (rather than having it fail again).
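The approach above (bounded attempts, log the failure, then bury instead of looping forever) can be sketched generically. This is illustrative Python, not Laravel/PHP: `send` and `bury` are stand-ins for your mailer and your beanstalkd client, and `ValueError` stands in for an unrecoverable error like Swift_RfcComplianceException:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker")

# exception types where retrying can never help (e.g. malformed address)
PERMANENT = (ValueError,)

def process(job, send, bury, tries=3):
    """Try a job a bounded number of times; bury it instead of looping forever."""
    for attempt in range(1, tries + 1):
        try:
            send(job)
            return "done"
        except PERMANENT as exc:
            # unrecoverable: log it with the job data and stop immediately
            log.error("job %s is unrecoverable: %s", job["id"], exc)
            bury(job)  # keep it around for inspection, but stop retrying
            return "buried"
        except Exception as exc:
            # possibly transient (rate limit, network): retry up to `tries`
            log.warning("job %s failed (attempt %d/%d): %s",
                        job["id"], attempt, tries, exc)
    bury(job)
    return "buried"
```

The key design point is distinguishing permanent failures (bury immediately) from transient ones (retry a few times, then bury), so one bad address can never block the rest of the queue.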
We are struggling to get good performance from NServiceBus 4.0.4 with MSMQ. When messages come in at a slow rate of about 40-50 messages a second, everything works well and our handlers are able to keep the queue empty.
When the message rate increases to around 400 messages a second on average, the handlers cannot keep up any more. Our handlers are just empty handlers without any logic at this point. They seem to cover only about 300 of the 400 messages per second, and the message queue slowly builds up.
And here is where I really struggle to understand what happens: if I then increase further, to around 1500-2000 messages a second, the handlers step up their game and handle close to 1500 messages a second; the queue still builds, but not by the full extra amount of messages.
We have tried fiddling with NumberOfWorkerThreads (on/off and 0-100), MaxRetries (on/off and 0-100), MaximumConcurrencyLevel (on/off and 0-100), MaximumMessageThroughputPerSecond (on/off and 0-10000) and IsTransactional (on/off). Nothing seems to influence this behavior.
We are able to send thousands of messages, but not handle them, even though today's handler just picks them from the queue and throws them away.
Does anyone know where this may come from, or have any good tips on how we can increase the performance of our bus?
When testing this issue, I tried running the process without the debugger from Visual Studio. It turns out that for some reason my installation of VS2013 attached the debugger anyway.
When I detached the debugger, I still had some connection to the NServiceBus.Host process. Running the application outside VS sped everything up substantially, and thereby solved this issue for me. I then reinstalled VS, and the issues went away there as well.
Sorry for any inconvenience.
I'm not really sure what's going on, but today I've noticed that the Facebook API is working extremely slowly for me.
At first I thought it was a bug in my code, but I tried the Graph API Explorer, and even that's causing timeout errors half the time (just using /me):
Failed to load resource: the server responded with a status of 504 (Server timeout)
I don't think it's my internet connection, since everything else seems to be working quickly, and http://speedtest.net is giving me good results.
Is my problem somehow fixable, or is this just some sort of freak occurrence?
Has this happened for anyone else?
Do I need to consider the case that it will take exceedingly long for my application to receive a response?
I currently have a registration page that waits for a FB.api response (with a spinner gif) before displaying the form. I could use a timeout to wait a few seconds and show it if the api doesn't respond, but I'd really rather not have to use this same sort of logic in every api call that my application depends on...
EDIT: it's spontaneously fixed itself now. Still no clue what happened.
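The timeout-and-fallback idea from the question (show the form anyway if the API doesn't respond in time) can be wrapped once and reused, rather than repeated at every call site. This is a language-agnostic sketch in Python; the real code would wrap FB.api in JavaScript, so all names here are purely illustrative:

```python
import concurrent.futures

# shared pool so slow calls don't each spawn a thread setup
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def call_with_fallback(api_call, timeout=3.0, fallback=None):
    """Run a blocking API call, but return `fallback` if it takes too long.

    The slow call keeps running in a background thread; we simply stop
    waiting for it and let the UI proceed (e.g. render the form anyway).
    """
    future = _pool.submit(api_call)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return fallback
```

Centralising the timeout in one helper means every API-dependent page gets the same "degrade after N seconds" behaviour for free, instead of duplicating spinner/timeout logic per call.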
You can check the Facebook API live status at this URL:
https://developers.facebook.com/live_status
today at 11:13pm: API issues. We're currently experiencing a problem that may result in high API latency and timeouts. We are working on a fix now.