If you work regularly with vBulletin, you've probably seen this type of error before.
Database error in vBulletin:
mysql_connect() [function.mysql-connect]: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
/home/detroit/public_html/blab/includes/class_core.php on line 311
MySQL Error   :
Error Number  :
Request Date  : Tuesday, November 16th 2010 # 10:57:57 AM
Error Date    : Tuesday, November 16th 2010 # 10:57:57 AM
Script        : url_removed_to_avoid_spam_flagging
Referrer      :
IP Address    : xx.xx.xx.xxx
Username      :
Classname     : vB_Database
MySQL Version :
My question isn't about the error itself, but about its age.
Our team is receiving roughly 20-30 of these each hour, with the e-mails arriving in a cluster between the third and seventh minute of the hour. The weird thing is that all of the errors appear to be from the same five-minute block from this morning.
I'm grepping for the errors themselves, but in case someone has a faster answer here (since grep is slow and I don't see any localized PHP error files at a glance): Is there an easy way to see these errors in real time?
My fear is that, far from solving our database problem, we've simply generated so many errors that an e-mail filter somewhere along the messages' route is embargoing them, dribbling them out so slowly as to be useless. A real-time view of the errors would let us know whether we've actually got a handle on things (as we think we do, in which case we can then look for a way to stop the dribble of old error messages) or whether we need to take additional action.
Thanks in advance for any comments on this. You people rock.
You've probably fixed this problem by now, but:
Any time I've seen weird timestamps on vBulletin mails, it has been due to throttling by e-mail providers. If you have a look at the timestamps on the Received: lines in the mail headers, you'll probably see where the throttling is happening.
If you're getting clusters of errors at specific times, then the best place to start looking for culprits is the vBulletin "Scheduled Tasks" section of the Admin Control Panel. Some of the scheduled tasks can be expensive, depending on your site size, traffic profile, etc., and one of them may be running an expensive query that locks a table or two for a very long time.
Have you tried looking at the DB using mtop at the times when the errors occur?
I have a few chained tickerplants (ctps) subscribing to a tickerplant (tp).
The subscription is established with no problems, but data doesn't seem to reach one of my ctps.
I have two ctps subscribing to the same table; one is getting data and the other isn't.
I checked .u.w and I can see the handles open for the table in question, but when I check the upd on my ctp, it receives all other tables except this one.
The upd on my ctp is a simple insert. I cannot see any data at all for the table; the t parameter is never set to the name of the table I am interested in. The pub logic is the default pub logic. I don't know what else to check; any suggestions would be greatly appreciated.
There are no errors in the tp.
UPDATE 1: I can send other messages and I receive data from the tp for other tables. The issue doesn't seem to occur in DR, just in prod, and I cannot debug much in prod.
Without seeing more of your code it's hard to give a good answer.
A couple of things you could try:
Check if you can send a generic message (e.g. h(+;1;2)) from your tp to your ctp via the handle in .u.w; this will confirm the connection is OK.
If you can send a message, then you can check whether the issue is in your ctp. You can see exactly what is being sent by adding some logging to your upd function, or, if you think the message isn't getting that far, to your .z.ps message handler, e.g. .z.ps:{0N!x;value x} will perform some very basic client-side logging.
If you can't send a message down the handle in the tp, then it's possible there are other network issues at play (although I'd expect you would be seeing errors in your tp if that were the case). In this case you could check .z.W in your ctp to see if the corresponding handle for the tp is present there.
You can also send a test update to your tickerplant and add logging at each step of the way if you really want to see the chain of events, but this could be quite invasive.
I realize this is a bad question, but I don't know where else to turn.
Can someone point me to where I can find the list of reports failure codes for IBM? I've tried searching the IBM documentation and doing general Google searches, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a DataStage job that has:
ORACLE CONNECTOR -> TRANSFORMER -> HIERARCHICAL DATA
The intent is to pull data from an Oracle table and output the response of the select statement into a JSON file. I'm using the Hierarchical Data stage to set it up. When tested within the stage, there are no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
can someone point me to where I can find the list of reports failure codes for IBM?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
While this list does not include your specific error code, it does categorize many other codes and explains how the code breakdown works. It isn't specifically for DataStage, but in my experience IBM conventions are generally consistent across products. In this list every code that starts with a 2 is a disk failure, so maybe run a disk check. That's the best I've got as far as error codes.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network and permissions in this case). Personally, I prefer to go after internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a capture taken when the select statement is run from the Hierarchical Data stage with one captured when it is run from the job. There may be clues in there, like reset or refused connections.
I've run into a mystifying XMLA timeout error when running an ADOMD.Net command from a .Net application. The Visual Basic routine iterates over a list of mining models residing on a SQL Server Analysis Services 2014 instance and performs a cross-validation test on each one. Whenever the time elapsed on the cross-validation test reaches the 60-minute mark, the XML for Analysis parser throws an error, saying that the request timed out. For any routine operations taking less than one hour, I can use the same ADOMD.Net connections with the same server and application without any hitches. The culprit in such cases is often the ExternalCommandTimeout setting on the server, which defaults to 3600 seconds, i.e. one hour. In this case, however, all of the following timeout properties on the server are set to zero: CommitTimeout, ExternalCommandTimeout, ExternalConnectionTimeout, ForceCommitTimeout, IdleConnectionTimeout, IdleOrphanSessionTimeout, MaxIdleSessionTimeout and ServerTimeout.
There are only three other timeout properties available, none of which is set to one hour: MinIdleSessionTimeout (currently at 2700), DatabaseConnectionPoolConnectTimeout (now at 60 seconds) and DatabaseConnectionPoolTimeout (at 120000). The MSDN documentation lists another three timeout properties that aren't visible with the Advanced Properties checked in SQL Server Management Studio 2017:
AdminTimeout, DefaultLockTimeoutMS and DatabaseConnectionPoolGeneralTimeout. The first two default to no timeout and the third defaults to one minute. MSDN also mentions a few "forbidden" timeout properties, like SocketOptions\LingerTimeout, InitialConnectTimeout, ServerReceiveTimeout and ServerSendTimeout, which all carry the warning, "An advanced property that you should not change, except under the guidance of Microsoft support." I do not see any means of setting these through the SSMS 2017 GUI, though.
Since I've literally run out of timeout settings to try, I'm stumped as to how to correct this behavior and allow my .Net app to wait on those cross-validations through ADOMD. Long ago I was able to solve a few arcane SSAS timeout issues by appending certain property settings to the connection strings, such as "Connect Timeout=0;CommitTimeout=0;Timeout=0" and so on. Nevertheless, attempting to assign an ExternalCommandTimeout value through the connection string in this manner results in the XMLA error
"The ExternalCommandTimeout property was not recognized." I have not tested each and every one of the SSAS server timeouts in this manner, but this exception signifies that ADOMD.Net connection strings can only accept a subset of the timeout properties.
Am I missing a timeout setting somewhere? Does anyone have any ideas on what else could cause this kind of esoteric error? Thanks in advance. I've put this issue on the back burner about as long as I can and really need to get it fixed now. I wonder if perhaps ADOMD.Net has its own separate timeout settings, perhaps going by different names, but I can't find any documentation to that effect...
I tracked down the cause of this error: buried deep in the VB.Net code on the front end was a line that set the CommandTimeout property of the ADOMD.Net Command object to 3600 seconds. This overrode the connection string settings mentioned above, as well as all of the server-level settings. The problem was masked by the fact that cross-validation retrieval operations were also timing out in the Visual Studio 2017 GUI. That occurred because the VS instance was only recently installed and the Connection and Query Timeouts hadn't yet been set to 0 under Options menu/Business Intelligence Designers/Analysis Services Designers/General.
I have a project, ScriptA, with some functions in files that have triggers that all run under UserA and consume about 2 hours of runtime per day.
I have another project, ScriptB, with some other functions in other files that have triggers that also run under UserA (the same user as ScriptA) and consume about 3 hours of runtime per day.
Is my Trigger Aggregate Execution Time quota (from quota page here) aggregated per user or per script? That is, is it:
Five hours (2 + 3) for UserA or is it
Two hours for ScriptA and 3hrs for ScriptB?
I have seen this answer but it doesn't explicitly address the scoping question I'm asking.
Obviously it's per user, not per script. Otherwise quotas wouldn't make sense.
In the interests of getting some evidence together for this:
At 4m25 in this March 2013 episode of Google Apps Unscripted, Kalyan Reddy says that the quotas are "per account type" and as you can see in the dashboard, the Quota table is gridded and has columns labelled with those account types too.
I have also done some testing and made a script that uses quite a bit of time. It started to max out other scripts running under the same account, and many of that account's triggered scripts started to get "Service using too much computer time for one day" errors. But, interestingly, after a couple of days those errors subsided. I believe that on a consumer account I am now getting way more execution time than 1 hour per day.
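As a rough sketch of that kind of test (the function names and the five-minute schedule are illustrative placeholders, not the actual script), something like this in one project will eat trigger runtime so you can watch whether other projects under the same account start failing:

function installTestTrigger() {
  // One time-driven trigger that repeatedly runs the time-burning function.
  ScriptApp.newTrigger('burnTime').timeBased().everyMinutes(5).create();
}

function burnTime() {
  // Sleep in small chunks for roughly four minutes per run; the elapsed
  // wall-clock time still counts towards the account's trigger runtime.
  var start = Date.now();
  while (Date.now() - start < 4 * 60 * 1000) {
    Utilities.sleep(10000);
  }
}

If the other projects' triggers begin to fail while this one runs, that points to the quota being shared across the account rather than per script.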
While not a direct answer to the question, and still a leap of logic/assumption, these two things make me feel that "per account" is more likely to be correct than "per script". I'll keep the question open for a bit longer for any comments (especially from Googlers).
We are trying to implement a suite of spreadsheets that will handle budget figures for a set of stores. Everything works fine until we try to implement a spreadsheet that collects data from all the store spreadsheets and presents statistics. Due to the limitation of ImportRange to a maximum of 50 uses per spreadsheet doc, we have implemented a Google Apps Script instead to handle the importing of data. But now that we have made a copy of the document to have one for each month, we are getting problems with our time triggers. We have set up a trigger to run the script once every minute, and that results in an error message stating: Service invoked too many times: trigger.
What are the limitations here? And how do we best solve this?
We are also getting some other error messages and would like to know how to solve these:
Document tEHGO48zIBIFYRpb7Xhjwqg is missing (perhaps it was deleted?) (line 191)
Exceeded maximum execution time
Service error: Spreadsheets (line 290)
Where can we find documentation describing the different limitations and error messages?
Quota Limits for many services used with Google Apps Scripts have now been published on the Dashboard at:
https://docs.google.com/macros/dashboard
The same thing just happened to me. It seems there is an unpublished limit:
Premier accounts usually have larger quotas for every limitation.
The argument is that the account is better verified and less likely to exploit resources.
But neither the regular limitations nor Premier's better quotas are published by Google. And it seems that Googlers can't say it here in the forums either. The only well-defined GAS limitation is the email quota, accessible through:
MailApp.getRemainingDailyQuota()
Which is 500 for regular accounts and 1500 for Premier.
Source: Google Support forums
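As a quick illustration of that call (the function name here is just an example), you can log the remaining quota from any script:

function logRemainingMailQuota() {
  // Number of recipients this account can still email today.
  Logger.log(MailApp.getRemainingDailyQuota());
}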
Solutions are:
Join several scripts into one big trigger, in case there is a limit on the number of triggers (see the sketch after this list)
Optimize code (join loops, refresh only the necessary fields, etc.) in case the limit is based on CPU usage
Move minute timer triggers to onEdit or onOpen triggers whenever possible
Get a Premier account
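For the first point, a minimal sketch of what "one big trigger" could look like, assuming the collector script is bound to the statistics spreadsheet; the spreadsheet IDs, sheet names and ranges below are placeholders, not taken from your scripts:

// All source spreadsheets are read by a single time-driven function,
// instead of one trigger (or one ImportRange) per store.
var STORE_SPREADSHEET_IDS = ['store-1-id', 'store-2-id']; // placeholder IDs

function importAllStores() {
  var target = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Collected');
  for (var i = 0; i < STORE_SPREADSHEET_IDS.length; i++) {
    var source = SpreadsheetApp.openById(STORE_SPREADSHEET_IDS[i]);
    var values = source.getSheetByName('Budget').getRange('A2:D50').getValues();
    // Write each store's block into its own set of rows in the collector sheet.
    target.getRange(2 + i * values.length, 1, values.length, values[0].length).setValues(values);
  }
}

function installHourlyTrigger() {
  // One trigger for everything, and much less often than once a minute.
  ScriptApp.newTrigger('importAllStores').timeBased().everyHours(1).create();
}

Whether a single hourly run is frequent enough obviously depends on how fresh the statistics need to be.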
As for your other errors, I haven't encountered anything similar. You should post some details on the script or publish some code so we can debug it.