I have just created a Zend project on my local machine, but when I try to run it in the browser, it loads for at least a minute and then shows this error:
Fatal error: Maximum execution time of 30 seconds exceeded in /opt/lampp/htdocs/launchmind/library/Zend/Db/Adapter/Abstract.php on line 815
It displays the same error, with a different line number in a different file, every time I reload the page.
Please help. Thank you.
Most likely you have a firewall issue that causes the script to time out. Check connections from your local machine to all the database hosts in your config, and to any web services or other servers involved.
Here are a few tips:
Monitor the XAMPP error_log and /var/log/messages...
Since it's a local server, disable the firewall temporarily to see whether the problem is your local firewall (blocking outgoing connections) or the remote server's firewall (blocking incoming connections). On RHEL, use sudo service iptables stop or /etc/init.d/iptables stop.
If you have very time-consuming tasks in your app (very unlikely), you can bump up the max execution time: modify max_execution_time in /opt/lampp/etc/php.ini (somewhere in there), set phpSettings.max_execution_time = 60 in your app config, or call ini_set('max_execution_time', 60) in your bootstrap.
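As a sketch of the connection check above: probe each backend port from the machine running the app before blaming PHP. The host and port below (127.0.0.1:3306, a local MySQL) are placeholders; substitute the hosts from your own application config.

```shell
#!/bin/sh
# Probe a TCP host:port with a 5-second cap, so a silently dropping
# firewall shows up as a quick "closed or filtered" instead of a
# 30-second PHP fatal error.
check_port() {
    host="$1"; port="$2"
    if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed or filtered"
    fi
}

check_port 127.0.0.1 3306   # example only: swap in your real DB/webservice hosts
```

If a host that should be reachable reports "closed or filtered", the timeout is happening at the network layer, not in your Zend code.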
Hope this helps.
I have a Windows Server 2016 machine that used to run Tableau Server v2018.1 (and a few versions before that). During this last update, I performed a backup and then wiped Tableau off the server (using the tableau-obliterate script, which removed all things Tableau).
I then proceeded to install Tableau v2018.2 as a clean install, set up the configuration to use port 80 and started the server successfully.
However, I quickly discovered that Tableau had moved the gateway to port 8000. I reviewed the ports to ensure nothing else was using port 80 (this VM has nothing other than Tableau installed on it). Using TCPView, I monitored the ports while Tableau Server was running and while stopping/starting; the only hint of something touching port 80 was a netstat entry for vizqlserver.exe in the CLOSE_WAIT state.
I have tried manually setting the port through TSM configuration (run set, confirm with get, restart), through a TSM settings import, and by manually adjusting the gateway entry in the configuration file, but Tableau just reverts to port 8000.
I am at a loss as to why this is happening; again, nothing else has ever been on this server, and nothing has changed since removing v2018.1 (which was running on port 80).
I tried to post this on the Tableau community forum, but 20 hrs later, it is still pending moderator approval :(
Would appreciate any help!
A recent Windows update has been causing some port conflicts; try this:
https://kb.tableau.com/articles/Issue/kb4338818-windows-update-causing-tableau-server-to-become-unstable
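One quick way to rule that update in or out (an assumption worth checking; this is a standard Windows command, run in an elevated prompt on the server) is to list the TCP port ranges Windows has reserved for itself:

```
netsh interface ipv4 show excludedportrange protocol=tcp
```

If 80 falls inside one of the listed excluded ranges, nothing, including Tableau's gateway, can bind to it, which could explain the fallback to port 8000.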
When I run a long-running query (maybe 30 minutes) on my Postgres server, I get the error. I've verified with pgAdmin that the query is running with active status on the server. I've also verified the correctness of the query, as it runs successfully on a smaller dataset. The server configuration is all defaults; I haven't changed anything. Please help!
Look into the PostgreSQL server log.
Either you'll find a crash report there, which would explain the broken connection, or there is something in your network that cuts connections with no activity after a while.
Investigate your firewalls!
It may also help to set the configuration parameter tcp_keepalives_idle to a value shorter than the time after which the connection is cut. That causes the server's operating system to send keepalive messages on idle connections, which may be enough to stop the overzealous connection reaper in your environment from disrupting your work.
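For example, in postgresql.conf (a sketch; the values are assumptions to adapt, chosen to stay comfortably below your network's idle cutoff):

```
# postgresql.conf -- keep idle connections alive through aggressive middleboxes
tcp_keepalives_idle = 300      # seconds of inactivity before the first keepalive probe
tcp_keepalives_interval = 30   # seconds between unanswered probes
tcp_keepalives_count = 5       # unanswered probes before the OS declares the link dead
```

The settings take effect for new connections after a configuration reload (pg_ctl reload, or SELECT pg_reload_conf() from a superuser session).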
I'm having an issue with TeamCity that is proving very difficult to solve for a number of reasons. I've looked around and not found any useful answers so far.
We have a TeamCity server running on port 8080, with two agents connecting to it on ports 9090 and 9091 respectively. The agents register successfully and can accept new builds just fine. When a build is complete, the tests have passed, and the logs state "Sending artifacts", things stop and the artifacts never reach the server. Having left this to sit overnight, my requests to stop the build also fail.
We recently switched to a new firewall, but things worked after we set the required port rules for 8080, 9090 and 9091. No configuration has changed since we got things working, yet now artifact uploads fail.
To the logs...
The server is aware of the failure as I can see logs in several places stating:
jetbrains.buildServer.SERVER - Failed to upload artifact, due to error: org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. Read timed out
The agent also has logs stating a similar reason:
jetbrains.buildServer.AGENT - Failed to publish artifacts because of error: java.net.SocketException: Connection reset by peer: socket write error, will try again.
During all this the firewall logs show that all traffic on the expected ports is being allowed through. What is odd though are some logs that look like this:
2016-04-01 10:45:00 Deny [sourceIp] [targetIP] 49426/tcp 8080 49426 0-External Firebox tcp syn checking failed (expecting SYN packet for new TCP connection, but received ACK, FIN, or RST instead). 558 113 (Internal Policy) proc_id="firewall" rc="101" msg_id="3000-0148" tcp_info="offset 5 A 478076245 win 258"
Examining port 49426 on the agent shows that it was being used by java.exe; I'm assuming this has something to do with TeamCity, as it runs in the JVM. The next step was to scour every bit of config I could find to figure out where this port number comes from. After a while the agent retried and the port changed. It looks to me as if Java is just using whatever ephemeral port it wants (as if unassigned in code), so could there be something missing in the agent config instructing it which port to use for artifact uploads?
I did read somewhere that perhaps the server or the firewall doesn't like requests or file uploads that exceed a certain size (the largest file is 81 meg) but we found nothing to suggest there was such a rule in place.
The Teamcity version is old (v7.1.1) but we are currently unable to upgrade (I am waiting on approval to use a newer, bigger server due to hard disk space issues).
UPDATE
We very briefly opened up a bit of the firewall to see if it was the cause of the issues to no avail. At this point I'm not convinced the firewall is the problem.
Any ideas?
Thanks in advance.
UPDATE 2
I've ended up setting up a whole new build server and things work just fine there. The new server has the latest TeamCity version but the agents are the same machines and artifact uploads appear to work just fine. This isn't really a solution to the question but at least I have a working setup now.
This can happen when the agent is too slow to start sending data, for whatever reason. This workaround from JetBrains employee Pavel Sher might help:
Increase the connectionTimeout value in the server.xml file:
<Connector port="8111" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8543"
           enableLookups="false"
           useBodyEncodingForURI="true" />
Change connectionTimeout from 20000 to 60000, or even more.
I have a strange problem with Nagios. After a restart, everything runs perfectly fine.
Then, some hours later, hosts are shown as down and a minute later as up again (see the history log below). After that, all services fail with a timeout.
This doesn't happen to all servers at the same time; which server fails seems rather random.
History log:
[2013-06-26 19:19:07] SERVICE ALERT: HyperV 1;Check CPU HyperV 1;CRITICAL;SOFT;1;CHECK_NRPE: Socket timeout after 120 seconds.
[2013-06-26 19:17:27] HOST ALERT: HyperV 1;UP;SOFT;2;PING OK - Packet loss = 0%, RTA = 3.01 ms
[2013-06-26 19:16:17] HOST ALERT: HyperV 1;DOWN;SOFT;1;PING CRITICAL - Packet loss = 100%
What I have tried so far:
-Increased the timeouts
-Changed the host check so that it is checked more often before failing (5 times instead of 1)
-Executed the check scripts from the command line -> they also fail (maybe an Ubuntu problem?)
-Checked the logs on both sides for errors (nothing found)
After a restart everything is fine again.
System info:
-Nagios is running on Ubuntu 13.04
-Some clients are running different versions of Windows with NSClient++
-ESX with versions from 4.0 to 5.1
Plugins:
-check_nrpe
-check_vmfs from Nagios Exchange
If anything is unclear, don't hesitate to ask.
Thx & Best,
Pille
You seem to have a networking issue, not a Nagios issue: possibly a bad cable, a failing NIC, a routing problem, switch flapping, or an ARP table overflow; it could be any number of things.
Since this affects all hosts/services, and intermittently, and clears itself up, I would suggest you start looking for a problem on your local connections first. If it only affects some items and not others, then find which hosts have common network components and check there.
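To catch an intermittent fault like this in the act, one option (a sketch; the address at the bottom is a placeholder for one of your monitored hosts) is to log packet loss continuously from the Nagios box and look for bursts that line up with the DOWN/UP flaps in the history log:

```shell
#!/bin/sh
# Extract the packet-loss percentage from an iputils ping summary line,
# e.g. "5 packets transmitted, 4 received, 20% packet loss, time 4004ms".
loss_pct() {
    echo "$1" | grep -oE '[0-9]+% packet loss' | cut -d'%' -f1
}

# Log a timestamped loss figure every minute; correlate spikes with
# the host alerts in the Nagios history log.
monitor() {
    host="$1"
    while true; do
        summary=$(ping -c 5 -W 2 "$host" | grep 'packet loss')
        echo "$(date '+%F %T') $host loss=$(loss_pct "$summary")%"
        sleep 60
    done
}

# monitor 192.0.2.10   # placeholder address for "HyperV 1"
```

If the loss spikes appear on several unrelated hosts at once, that points at a shared component (switch, uplink, the Nagios server's own NIC) rather than the individual servers.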
Sometimes, when I click on my app link, it takes about 30 seconds in the verifying stage before the app starts loading. Other times, with the same install, one which I have loaded and used many times, it takes no time at all. Why? What can I change about the deployment to stop this? Please note that I have no admin access to the proxy server, nor any possibility of changing it.
I think this is a performance issue that you need to take up with the people who support the proxy server. The request hits the proxy server and waits for it to give permission to come through and retrieve the files. It probably depends on how much traffic the server is handling at any given time.