Why is my server response time so high (First Byte Time)?

One of my websites has suddenly become extremely slow in server response: currently a 41,859 ms First Byte Time.
I've already talked to my hosting provider, but they didn't notice any changes or problems with the server, so I'm at a loss as to what the problem is. I've already tried caching and image-optimization plugins, but I know those won't help with server response time.
Does anyone know what the problem might be? I've also run a security check in case it's a virus, but nothing was found.
Any ideas?
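One way to narrow this down is to time each phase of the request, to see whether the delay is in DNS, the TCP connection, or the server generating the page. A minimal sketch using PHP's curl extension; the URL is a placeholder for the real site:

<?php
// Time the phases of a single request; starttransfer_time is the TTFB.
$ch = curl_init('https://example.com/');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
));
curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);

printf("DNS lookup:  %.3fs\n", $info['namelookup_time']);
printf("TCP connect: %.3fs\n", $info['connect_time']);
printf("First byte:  %.3fs\n", $info['starttransfer_time']);
printf("Total:       %.3fs\n", $info['total_time']);

If DNS and connect are quick but first byte dominates, the time is going into server-side page generation (PHP and the database) rather than the network.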

Related

WordPress in waiting state

I built a website for someone and used https://gtmetrix.com to get some analytics, mainly because the wait time is huge (~20 sec) without any heavy images. A screenshot is here:
http://img42.com/05yvZ
One of my problems is that it takes quite a long time to perform the 301 redirect. I'm not sure why, but if someone has a key to the solution I would really appreciate it. At least some hints on what to search for would be nice.
The second problem is that after the redirection, the waiting time is still huge. As expected I have a few plugins; their JavaScript files are requested approx. 6 seconds after the redirection. Could someone please point me in the right direction on where to look?
P.S. I have disabled all plugins and started from a plain Twenty Eleven theme, but I still have waiting times during the redirection and a smaller delay after it.
Thanks in advance
A few suggestions:
1 and 2.) If the redirect is adding noticeable delays, test different redirect methods. There are several approaches, including HTML meta refresh and server-side (i.e. PHP) methods; I typically stick to server-side (see the sketch after this list). If a server-side method still shows noticeable delays, that's a good indicator that you're experiencing server issues, and it may well have been your server causing your speed problems all along; contact your hosting provider.
3.) Take a look at the size of your media: images and video, and Flash if you're using any. Often it's giant images that were sliced or saved poorly and never optimized for the web in an image-editing program like Photoshop. Optimize your images for the web and re-save them at a lower weight to save significantly on load time. In many cases nowadays you can avoid clunky images entirely by building the area out in pure CSS3 (e.g. odd repeatable .gifs used to create gradients, borders, etc.).
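For the server-side redirect mentioned in 1 and 2, a minimal PHP sketch (the target URL is a placeholder):

<?php
// Send a permanent (301) redirect and stop execution immediately so that
// nothing after the header delays or overrides the response.
header('Location: https://example.com/new-path/', true, 301);
exit;

If even a bare redirect like this takes seconds to come back in GTmetrix, the delay is in the server stack rather than the redirect method, which again points back at your host.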

An attempt was made to access a socket in a way forbidden by its access permissions in Azure Web Apps

I'm running a Web API on an Azure website that makes calls to external web services. The Web API handles approximately 2K-3K requests per minute.
Periodically, lots of socket errors start occurring that indicate: "An attempt was made to access a socket in a way forbidden by its access permissions". This error seems to occur regardless of the ip address of the external web service.
At first, I thought it might be ephemeral port exhaustion, but I've limited "connectionManagement" to a maximum of 100 connections.
What would be causing this?
Thanks very much. Happy to provide whatever information might be helpful.
Update 6/1 (this didn't work; see the 6/2 update):
I added the following to my web.config system.net section:
<defaultProxy enabled="false" useDefaultCredentials="false">
  <proxy/>
  <bypasslist/>
  <module/>
</defaultProxy>
It appears to have helped, as I haven't seen this issue in the last 6 hours. I have no idea why it would actually help, though, as I'm not using any proxy-related functionality.
Any thoughts?
Update 6/2:
Adding the defaultProxy doesn't actually appear to help. The problem is still occurring. Back to the drawing board.
I've finally figured out the cause of this problem. The issue was occurring due to port exhaustion.
I was using an NLog email target, which was grabbing and holding onto too many SMTP connections over time (despite the 100-connection limit). After removing the email target, the issue no longer occurs. I haven't figured out why NLog was exhibiting this behavior.

Log4perl problems on Ubuntu Server

I have been running a largish site for years with a typical nginx/Apache setup, where all the "pages" are mod_perl. Until recently I was running on FreeBSD. After a hardware replacement, and for other reasons, I was forced to migrate to Ubuntu (12.04.2 LTS), which I use on many other servers, so no big deal. However, I now have a problem with my logs.
For some reason, more and more "actions" are no longer logged through Log4perl. This was never a problem on my previous setup, but now I seem to "lose" between 2 and 15% of my log entries. This is checked and verified by logging the same data to a database at the same time.
Does anyone have a clue why this would happen?
Is there something I should know about large log files and Ubuntu? (It's not that large, to be honest: 390 MB at the moment.)
I get nothing in my error logs anywhere, and since the database logging happens AFTER the $log->info("ENTRY HERE") call, the script obviously doesn't crash. But I am missing a lot of those ENTRY HEREs :)
The log in question is "hit" about once per second on average, which I wouldn't have thought would be a big problem?
Could there be "too many processes" trying to write to the log in parallel, causing locking issues and preventing data from being appended to the file? Are there any typical Ubuntu settings that might need adjusting for something like this?
Any help would be greatly appreciated.
Spinner
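To illustrate the locking concern in the question (sketched in PHP to match the other examples in this thread, though the site itself is mod_perl): the usual cure for parallel writers losing entries is to serialize appends behind an exclusive lock, which is the same idea as Log4perl's Synchronized appender. The log path is a placeholder:

<?php
// Serialize concurrent appends with an exclusive advisory lock so entries
// from parallel workers are neither lost nor interleaved.
$fh = fopen('/tmp/app.log', 'ab');        // append mode
if ($fh !== false) {
    if (flock($fh, LOCK_EX)) {            // block until we own the lock
        fwrite($fh, date('c') . " ENTRY HERE\n");
        fflush($fh);                      // flush before releasing the lock
        flock($fh, LOCK_UN);
    }
    fclose($fh);
}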

How Do I Optimize Zend Framework?

I have an application built on Zend Framework that I am trying to optimize.
I did some Xdebug profiling, and although I can't say I understand every nitty-gritty detail of the results I got, some things were quite obvious.
For instance, the file Bootstrap.php seems to be the one gulping most of the time, taking 4,553 ms, which accounts for 92.49% of the total time.
And if I dig further, I can see that Zend_Application_Bootstrap_Bootstrap->run takes the bulk of the time. Checking this out again, I found that Zend_Controller_Front->dispatch might actually be the call inside Bootstrap.php that takes the time to execute.
The question is: given these indications, how best can I go about optimizing the application? If the answer is caching, how do I go about applying caching to this situation?
Thanks
From the look of the callgrind output, on the login page the app is spending most of its time in curl_exec, which is to be expected if you're doing a remote login. But it is doing 10 separate curl_exec calls, which seems excessive. I'm not familiar with the LinkedIn login auth, but is it possible your app is running the remote login code multiple times?
On the standard page request the app is spending most of its time connecting to MySQL, and it seems to be doing this twice. Are you using a remote DB server, and do you need two separate DB connections?
Assuming you are using a remote DB server on the same network as your web server, there seems to be some networking issue there. I'd check the latency to that server if you can, and try connecting to the IP address instead of the hostname to see if that makes any difference (if that is much faster, it would suggest an issue with the DNS setup on your web server).
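As for the caching question: a minimal sketch using Zend_Cache (the ZF1 component). The lifetime, cache directory, cache ID, and query here are assumptions for illustration, not values taken from your profile:

<?php
require_once 'Zend/Cache.php';

// File-backed result cache: serve repeated requests from disk instead of
// re-running the expensive work the profile flagged.
$frontendOptions = array(
    'lifetime'                => 300,  // entries live for 5 minutes
    'automatic_serialization' => true, // allows caching arrays/objects
);
$backendOptions = array('cache_dir' => '/tmp/zf-cache/'); // must be writable

$cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);

// Standard pattern: try the cache first, fall back to the expensive work.
if (($result = $cache->load('expensive_query')) === false) {
    // $db is assumed to be an already-configured Zend_Db adapter.
    $result = $db->fetchAll('SELECT ...');
    $cache->save($result, 'expensive_query');
}

The same factory accepts faster backends (e.g. 'Apc' or 'Memcached') once file caching proves the approach, and Zend_Cache_Frontend_Page can cache entire responses when a page looks the same for every user.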

Why am I getting this WSDL SOAP error with authorize.net?

I have my script email me when there is a problem creating a recurring transaction with authorize.net. I received the following at 5:23AM Pacific time:
SOAP-ERROR: Parsing WSDL: Couldn't load from 'https://api.authorize.net/soap/v1/service.asmx?wsdl' : failed to load external entity "https://api.authorize.net/soap/v1/service.asmx?wsdl"
And of course, when I did exactly the same thing that the user did, it worked fine for me.
Does this mean authorize.net's API is down? Their knowledge base simply sucks and provides no information whatsoever about this problem. I've contacted the company, but I'm not holding my breath for a response. Google reveals nothing. Looking through their code, nothing stands out. Maybe an authentication error?
Has anyone seen an error like this before? What causes this?
Maybe as a backup you can cache the WSDL file locally and, in case of network issues, use the local copy. I doubt it changes often, so refreshing it weekly should be satisfactory, as the file will probably not be stale by then. A sketch of this is below.
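A minimal sketch of that local-cache idea; the cache path and the weekly refresh window are assumptions:

<?php
// Keep a local copy of the WSDL, refresh it roughly weekly, and fall
// back to the live URL only if no local copy exists yet.
$wsdlUrl   = 'https://api.authorize.net/soap/v1/service.asmx?wsdl';
$localWsdl = '/tmp/authorize-net.wsdl'; // placeholder path
$maxAge    = 7 * 24 * 3600;             // one week, in seconds

if (!is_file($localWsdl) || time() - filemtime($localWsdl) > $maxAge) {
    $wsdl = @file_get_contents($wsdlUrl); // tolerate a failed refresh
    if ($wsdl !== false) {
        file_put_contents($localWsdl, $wsdl);
    }
}

$client = new SoapClient(is_file($localWsdl) ? $localWsdl : $wsdlUrl);

Note that PHP's SoapClient also has built-in WSDL caching (the soap.wsdl_cache_* ini settings), but that cache is only populated after a successful fetch, so an explicit local copy still helps when the remote endpoint is briefly unreachable.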